\section*{Summary} In this paper, we propose a robust concurrent multiscale method for continuum-continuum coupling based on the cut finite element method. The computational domain is defined in a fully non-conforming fashion by approximate signed distance functions over a fixed background grid and decomposed into microscale and macroscale regions by a novel zooming technique. The zoom interface is represented by a signed distance function which is allowed to intersect the computational mesh arbitrarily. We refine the mesh inside the zooming region hierarchically for high-resolution computations. In the examples considered here, the microstructure can possess voids and hard inclusions, and the corresponding geometry is defined by a signed distance function interpolated over the refined mesh. In our zooming technique, the zooming interface is allowed to intersect the microstructure interface in an arbitrary way. The coupling between the subdomains is then enforced using Nitsche's method across the interfaces. This multiresolution framework employs an efficient stabilization algorithm to ensure the stability of elements cut by the zooming and microstructure interfaces. It is tested on several multiscale examples to demonstrate its robustness and efficiency for elasticity and plasticity problems.\\ \text{Keywords: Concurrent Multiscale; Unfitted Multimesh; CutFEM; Nitsche; Plasticity} \\ \section{Introduction} Numerical analysis of heterogeneous materials such as composites is conventionally carried out with properties obtained from homogenization methods (see, e.g., \cite{tadmor_miller_2011, Kanoute.09, FEYEL03}), passing data from small to large length scales: the macroscale properties are obtained by averaging stresses and strains over a periodic representative volume element (RVE). However, these homogenization methods suffer from drawbacks, including the assumptions of macroscopic uniformity and RVE periodicity. The uniformity assumption is not satisfied in critical regions of high gradients, such as interfaces, complex geometries with sharp angles, and plasticity and softening regions. The periodicity assumption is likewise not fulfilled when the material's microstructure is nonuniform. In this context, using full micro-scale models leads to accurate responses, but it is not tractable for large-scale structures. These problems have been tackled effectively by concurrent multiscale methods \cite{ZOHDI19992507, RAGHAVAN2004497} that bridge the microstructural and homogenized macro-scale descriptions. Domain partitioning in these multiscale methods distinguishes regions requiring different resolutions, zooming in for microscale modelling at critical regions and using macroscale analysis elsewhere. The major challenge with concurrent multiscale methods is adequately modelling the coupling between scales. To address this issue, two main categories of solutions are available. The first category is overlapping domain decomposition methods, which are convenient approaches to tackle the difficulties of coupling two scales with incompatible kinematics, for instance, the bridging domain method for coupling molecular dynamics with continuum \cite{Xiao2004, Belytschko03} and the quasicontinuum method for atomistic-continuum coupling \cite{Tadmor1996, Beex.Kerfriden.ea.14}. The Arlequin method \cite{Dhia.98} is another overlapping technique, which partitions the energy in the overlapping domain over the subdomains to enforce the compatibility of the mechanical states.
This method has been applied successfully to continuum-continuum \cite{Dhia.Rateau.05, Dhia.08}, discrete-continuum \cite{Bauman08} and atomistic-continuum \cite{Fish07} coupling. The second category is based on non-overlapping domain decomposition techniques that couple the two models over an interface, for instance, in the context of the mortar method \cite{Lamichhane.Wohlmuth.04}, which connects two scales in an average sense. In non-overlapping techniques, two models, with or without different scales or physics, are linked together. The restrictions imposed over the interface can be enforced either using Lagrange multipliers \cite{BURMAN20102680, Ji04, dsouza21} or Nitsche's method \cite{BURMAN2012328, CAI2021113880}. The geometric description in domain decomposition methods can be based on either conforming or non-conforming techniques. Employing conforming methods such as the classical FEM \cite{hughes2012finite} and arbitrary Lagrangian-Eulerian methods \cite{Donea04} increases the preprocessing costs, particularly when the structure geometry is complex or time-dependent. On the contrary, non-conforming domain decomposition methods aim at not only decoupling the geometric description from the mesh used for approximating solutions but also ensuring the stability of solvers. Implementing non-conforming techniques for domain decomposition can significantly simplify mesh generation. Well-known examples of non-conforming methods are the extended finite element method (XFEM) \cite{Moes99}, the cut finite element method (CutFEM) \cite{Burman.Claus.ea.15}, the assumed enhanced strain (AES) method \cite{BORJA20001529} and the finite cell method (FCM) \cite{Parvizian07}. Using these methods in the context of domain decomposition requires robust numerical integration and stabilization techniques. This paper focuses on extending the CutFEM integration and stabilization algorithm to non-overlapping domain decomposition problems. The CutFEM technique, a variation of XFEM, treats elements with discontinuities by an overlapping fictitious domain technique combined with ghost penalty regularization, which decouples the geometry from the meshing and guarantees the stability of the solver. This method has been developed for problems in two-phase fluid flow \cite{Claus19.2, FRACHON201977}, multi-physics \cite{hansbo2016cut, Claus.Bigot.ea.18} and contact mechanics \cite{Claus.Kerfriden.18, Claus20}. However, there are only a few extensions to multiscale and multiresolution methodologies in the literature. We refer to the multimesh framework suggested by \cite{JOHANSSON2019672, DOKKEN2020113129, JOHANSSON2020b} and the mixed multiscale method proposed by \cite{mikaeili2021mixed} as examples of non-overlapping and overlapping domain decomposition methods, respectively. The former combines Nitsche's method with ghost penalty regularization \cite{Burman.10} to alleviate the meshing burden of multi-component structures, allowing each part to carry its own mesh while the meshes intersect and overlap arbitrarily; the latter uses ghost penalty regularization within a novel mixing framework to address the integration and stability issues. This study proposes a novel multiresolution framework based on CutFEM for multiscale modelling of heterogeneous structures with complex microstructures. The structures can possess linear and nonlinear material properties.
Within this framework, we first define regions of interest or "zooms" implicitly through level set functions interpolated over a fixed coarse background mesh. The corresponding interface can intersect the background mesh arbitrarily. Due to the necessity of high-resolution simulations inside the zooms, we refine the elements inside them hierarchically. Then, another level set function, interpolated over the high-resolution mesh, is introduced to define the microstructure inside the zoom. The main novelty of the paper is in the way we treat the bilinear and linear forms of the elements cut by the zoom and microstructure interfaces. We use Nitsche's method over the interfaces to glue the different regions together. Then, to guarantee the well-conditioning of the multiresolution system matrix and the stability of the solver, the cut elements are regularized with the ghost penalty technique. The paper is outlined as follows. In Section 2, we present a concurrent multiscale formulation for linear elasticity and plasticity problems and discretize it within the CutFEM algorithm. In Section 3, the proposed multiscale framework is tested on different heterogeneous structures. \clearpage \section{Multiresolution finite element: zooming without meshing} \subsection{Heterogeneous elasticity problem: strong form} \subsubsection{Semi-discrete boundary value problem} To introduce the method, we consider a time-dependent elasticity problem for a two-phase composite material occupying domain $\Omega = \Omega^0 \cup \Omega^1$, with $\Omega^0 \cap \Omega^1 = \emptyset$, in dimension $d \in \{ 2,3\} $. In our presentation, $\Omega^0$ corresponds to the matrix phase of the composite, and $\Omega^1$ corresponds to an inclusion phase (ellipsoids). The semi-discrete problem of elasticity that we wish to solve is the following. The time interval $\mathcal{T} = [0, T]$ is divided into $N$ equally spaced time intervals. At the discrete times $\mathcal{T}_{\Delta T} = \{ t_1=\Delta T , \, t_2=2 \Delta T , \, ... \, , t_N= N \Delta T = T \}$, we look for a displacement $ u^n := \{ u^{0,n}, u^{1,n}\} : \Omega^0 \times \Omega^1 \rightarrow \mathbb{R}^d \times \mathbb{R}^d$ satisfying \begin{equation} \forall \, i \in \{ 0,1\}, \qquad \text{div} \, \sigma^{i,n}( \nabla_s u^{i,n}) + f^n = 0 \qquad \text{in} \, \Omega^i \, . \end{equation} The boundary conditions of the elasticity problem are \begin{equation} \label{eq:DBC} \forall \, i \in \{ 0,1\}, \qquad u^{i,n} = u^n_d \qquad \text{over} \ \partial \Omega_u \cap\partial \Omega^i \end{equation} and \begin{equation} \forall \, i \in \{ 0,1\}, \qquad \sigma^i(\nabla_s u^{i,n}) \cdot n_{\partial \Omega} = t^n_d \qquad \text{over} \ \partial \Omega_t \cap \partial \Omega^i \end{equation} where $\partial \Omega = \partial \Omega_t \cup \partial \Omega_u$ and $ \partial \Omega_t \cap \partial \Omega_u = \emptyset$. Fields $f^n$, $u^n_d$ and $t^n_d$ are given time-discrete fields. $n_{\partial \Omega}$ denotes the outer normal to the boundary. \\ In this contribution, domains $\Omega^0$ and $\Omega^1$ are implicitly defined via the values of a time-independent continuous level set function $\phi^f \in \mathcal{C}^0(\Omega)$. More precisely, we suppose that \begin{equation}\begin{array}{rcl} \Omega^0 = \{ x \in \Omega \, | \, \phi^f(x) \geq 0 \} \\ \Omega^1 = \{ x \in \Omega \, | \, \phi^f(x) \leq 0 \} \end{array} \, .
\end{equation} \subsubsection{Linear elasticity} If we assume that the two phases of the composite are linear elastic, time-independent and homogeneous, the stress functions $\sigma^{i,n}$ may be expressed as \begin{equation} \forall \, i \in \{ 0,1\}, \qquad \sigma^{i,n}(\nabla_s u^{i,n}) := C^i :\nabla_s u^{i,n} \qquad \text{in} \, \Omega^i \, , \end{equation} where $\nabla_s \, . \, =\frac{1}{2} (\nabla \, . \, +\nabla^T \, . \, ) $ and $C^i$ is the fourth-order Hooke tensor of the material occupying phase $i$. This tensor may be expressed as a function of the Lam\'e coefficients $\lambda^i$ and $\mu^i$ as follows: \begin{equation} \forall \, i \in \{ 0,1\}, \qquad C^i : \nabla_s \, . = \lambda^i \text{Tr} ( \nabla_s \, . \, ) \mathbb{I} + 2 \mu^i \nabla_s \, . \, . \end{equation} \subsubsection{von Mises plasticity} \paragraph{Time continuous constitutive law.} We consider the following von Mises plasticity model. The stress $s$ in the material is given by \begin{equation} s = C: ( \epsilon - \epsilon_p) \end{equation} as a function of the strain $\epsilon$ (\textit{i.e.} the symmetric part of the displacement gradient) and the plastic strain $\epsilon_p$. The yield surface is defined as \begin{equation} f(s_D,q,p) = \sqrt{ \frac{3}{2} (s_D-q) : (s_D-q) } - (Y_0 + R(p)) \end{equation} where $s_D$ denotes the deviatoric part of $s$, $p$ denotes the cumulative plastic strain and $q$ the back stress. The flow rules are as follows: \begin{equation} \lambda \geq 0 \qquad \lambda f(s_D,q,p) = 0 \qquad f(s_D,q,p) \leq 0 \end{equation} \begin{equation} \begin{array}{l} \displaystyle \dot{\epsilon}_p = \lambda \left( \frac{s_D-q}{ \| s_D-q \|} \right) \\ \displaystyle \dot{q} = \lambda \left( \bar{H} : \frac{s_D-q}{ \| s_D-q \|} \right) \\ \displaystyle \dot{p} = \lambda \end{array} \end{equation} where $\lambda$ is the plastic multiplier, $R(p)$ is the isotropic hardening function, $Y_0$ is a constant yield parameter and $\bar{H}$ is a fourth-order kinematic hardening tensor. In our examples, $\bar{H}$ will be vanishingly small (no kinematic hardening), and $R(p) = \hat{H}p$ (positive linear isotropic hardening). \paragraph{Implicit time integration} The previous ODE may be discretised in time using an implicit Euler scheme. This leads to the following semi-discrete material law: \begin{equation} \begin{array}{l} \displaystyle \frac{1}{\Delta T} \left( C^{-1}:s^{n+1} - \epsilon^{n+1} + \epsilon^n_p \right) + \lambda \, \frac{s_D^{n+1}-q^{n+1}}{ \| s_D^{n+1}-q^{n+1} \|} = 0 \\ \displaystyle \frac{1}{\Delta T} \left( \bar{H}^{-1}: (q^{n+1} - q^n) \right) - \lambda \, \frac{s_D^{n+1}-q^{n+1}}{ \| s_D^{n+1}-q^{n+1} \|} = 0 \\ \displaystyle \lambda \geq 0 \qquad \lambda f(s_D^{n+1},q^{n+1},p^{n}+ \lambda \Delta T) = 0 \qquad f(s_D^{n+1},q^{n+1},p^{n}+ \lambda \Delta T ) \leq 0 \end{array} \end{equation} Given $( \epsilon^{n+1} , \epsilon_p^n, q^{n},p^n ) $, the previous nonlinear system of equations (the last three constraints can be recast as a single nonlinear equality using the Heaviside function) can be solved for $(s^{n+1},q^{n+1},\lambda)$ using the usual combination of operator splitting and a Newton iterative solution scheme.
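For the hardening choices used in our examples ($\bar{H}$ vanishingly small and $R(p)=\hat{H}p$), the plastic corrector admits a closed-form multiplier, so no local Newton iteration is required. The following minimal sketch (standalone Python/NumPy, written in the standard effective-stress convention; an illustration, not the implementation used for the results in Section 3) shows the resulting elastic predictor/plastic corrector update:
\begin{verbatim}
import numpy as np

def return_map(eps, eps_p, p, lam, mu, Y0, H_iso):
    """Backward-Euler stress update for von Mises plasticity with linear
    isotropic hardening R(p) = H_iso * p and no kinematic hardening (q = 0).
    eps: total strain at t_{n+1}; eps_p, p: plastic strain and cumulative
    plastic strain at t_n. Returns (s^{n+1}, eps_p^{n+1}, p^{n+1})."""
    I = np.eye(3)
    # elastic predictor: trial stress from the elastic strain
    s_tr = lam * np.trace(eps - eps_p) * I + 2.0 * mu * (eps - eps_p)
    s_dev = s_tr - np.trace(s_tr) / 3.0 * I           # deviatoric part s_D
    q_tr = np.sqrt(1.5 * np.tensordot(s_dev, s_dev))  # von Mises stress
    f_tr = q_tr - (Y0 + H_iso * p)                    # trial yield function
    if f_tr <= 0.0:
        return s_tr, eps_p, p                         # elastic step
    # plastic corrector: for linear hardening the complementarity system
    # reduces to a closed-form increment dp = lambda * Delta T
    dp = f_tr / (3.0 * mu + H_iso)
    N = 1.5 * s_dev / q_tr                            # radial flow direction
    return s_tr - 2.0 * mu * dp * N, eps_p + dp * N, p + dp
\end{verbatim}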
The update of the internal variables is performed according to \begin{equation} \begin{array}{l} \displaystyle \epsilon^{n+1}_p = \epsilon^{n}_p + \lambda \left( \frac{s^{n+1}_D-q^{n+1}}{ \| s^{n+1}_D-q^{n+1} \|} \right) \Delta T \\ \displaystyle p^{n+1}= p^{n} + \lambda \Delta T \end{array} \end{equation} The procedure can therefore be summarised as an (implicitly defined) relationship \begin{equation} s^{n+1} = s_{\Delta T} \left( \epsilon^{n+1} , (\epsilon_p^n, q^{n},p^n ) ; \mu , \Delta T \right) \end{equation} where $\mu$ is a real-valued vector containing all the parameters of the constitutive law: $Y_0$, all the free parameters of tensor $\bar{H}$ and those of $R$. \paragraph{Semi-discrete implicit stress functions} At time $t_n > 0$, for any phase index $i \in \{0,1\}$ of the composite material, we may replace the elastic constitutive law by the following nonlinear function \begin{equation} \sigma^{i,n} (\nabla_s u^{i,n}) = s_{\Delta T} \left( \nabla_s u^{i,n} , \xi^{i,n-1} ;\mu^i , \Delta T \right) \end{equation} where the field of past internal variables $\xi^{i,n-1} = ( \epsilon_p^{n-1}, q^{n-1}, p^{n-1} )$ defined over $\Omega^i$ is sequentially and locally updated according to the procedure outlined above. We suppose that at the beginning of the simulation, all the internal variables are null. \\ \subsection{Multiresolution problem in weak form} Keeping this in mind, we now split domain $\Omega$ arbitrarily into two non-overlapping domains: the coarse domain $\Omega^c =: \Omega^2$ and the fine domain $\Omega^f$. Let us now redefine $\Omega^{i}$ as $\Omega^f \cap \Omega^i$, for $i \in \{ 0,1\}$ (level set $\phi^f$ is unaffected by this change of notation). The interfaces between the domains are \begin{equation} \begin{aligned} \Gamma^0 = \overline{\Omega^0} \cap \overline{\Omega^1} \\ \Gamma^1 = \overline{\Omega^0} \cap \overline{\Omega^2} \\ \Gamma^2 = \overline{\Omega^1} \cap \overline{\Omega^2} \end{aligned} \end{equation} \begin{figure}[h] \centering \includegraphics[scale=.33]{Schm1.png} \caption{Domain partitioning for the multiscale method} \label{fig:schm1} \end{figure} In the coarse domain, we introduce a surrogate material model with slowly varying parameters in space. If the coarse material is elastic and homogeneous, it is characterized by a constant tensor $C^c$ (which may be obtained, for instance, via some form of homogenisation of the composite material), whose action reads \begin{equation} \qquad C^c : \nabla_s \, . = \lambda^c \text{Tr} ( \nabla_s \, . \, ) \mathbb{I} + 2 \mu^c \nabla_s \, . \, \end{equation} The resulting stress function is \begin{equation} \sigma^c(\nabla_s u^c) := C^c :\nabla_s u^c \qquad \text{in} \ \Omega^c \, . \end{equation} If the coarse material is plastic, we define the associated stress update at the $n^{th}$ time increment, $n \in \llbracket 1 \, N \rrbracket$, by \begin{equation} \sigma^{c,n} (\nabla_s u^{c,n}) = s_{\Delta T} \left( \nabla_s u^{c,n} , \xi^{c,n-1} ; \mu^c , \Delta T \right) \end{equation} Now, the multiresolution problem of elasticity, at time $t_n \in \mathcal{T}_{\Delta T}$, is as follows.
We look for a displacement field $u:= \{ u^0, u^{1}, u^{2} \}:\Omega^0 \times \Omega^1 \times \Omega^2 \rightarrow \mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}^d$ ($u$ is overwritten to simplify the notations) satisfying \begin{equation} \sum_{i=0}^2 \int_{\Omega^i} \sigma^i( \nabla_s u^i) : \nabla_s \delta u^i \, dx = \sum_{i=0}^2 \int_{\Omega^i} f \cdot \delta u^i \, dx + \sum_{i=0}^2 \int_{\partial \Omega^i_t} t_d \cdot \delta u^i \, dx \, , \end{equation} where \begin{equation} \forall \, i \in \{ 0,1,2\}, \qquad \partial \Omega^i_t = \partial \Omega^i \cap \partial \Omega_t \end{equation} In the previous variational statement, the arbitrary triplet $\delta u:= \{ \delta u^0, \delta u^{1}, \delta u^{2} \}:\Omega^0 \times \Omega^{1} \times \Omega^{2} \rightarrow \mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}^d$ is required to satisfy the homogeneous Dirichlet conditions \begin{equation} \forall \, i \in \{0, 1,2\}, \qquad \delta u^i = 0 \qquad \text{over} \ \partial \Omega^i_u := \partial \Omega_u \cap\partial \Omega^i \end{equation} In the proposed multiresolution scheme, coarse domain $\Omega^c$ is defined as \begin{equation} \Omega^c = \{ x \in \Omega \, | \, \phi^c(x) \leq 0 \} \, , \end{equation} where $\phi^c \in \mathcal{C}^0(\Omega)$ is a continuous level set function. \subsection{Discretisation of the multiresolution problem} \subsubsection{Discretisation of the geometry} Let us introduce a coarse triangulation $\mathcal{T}_H$ of domain $\Omega$. The tessellated domain is denoted by $\Omega_H$. Furthermore, let us introduce the finite element space of continuous piecewise linear functions, \textit{i.e.} \begin{equation} \mathcal{Q}_H : = \{ w \in \mathcal{C}^0(\Omega_H): w |_{K} \in \mathcal{P}^1(K) \, , \forall K \in \mathcal{T}_H \} \end{equation} We now define the finite element approximation of coarse domain $\Omega^c$ as \begin{equation} \Omega_H^c = \Omega_H^2 = \{ x \in \Omega_H \, | \, \phi^c_H(x) \leq 0 \} \end{equation} where $\phi^c_H \in \mathcal{Q}_H$ is the coarse nodal interpolant of $\phi^c$. Let us now introduce a hierarchical subtriangulation $\mathcal{T}_h$ of $\mathcal{T}_H$, with $h \ll H$. Due to the hierarchical structure, the union of all triangles of $\mathcal{T}_h$ is the coarse finite element domain $\Omega_H$. We define the space \begin{equation} \mathcal{Q}_{(H,h)} : = \{ w \in \mathcal{C}^0(\Omega_H): w |_{K} \in \mathcal{P}^1(K) \, , \forall K \in \mathcal{T}_h \} \end{equation} and denote by $\phi^f_h \in \mathcal{Q}_{(H,h)}$ the fine nodal interpolant of $\phi^f$. With these definitions, domains $\Omega^0$ and $\Omega^1$ are discretised as follows. \begin{equation}\begin{array}{rcl} \Omega_{(H,h)}^0 = \{ x \in \Omega_H \, | \, \phi_H^c(x) \geq 0 , \, \phi_h^f(x) \geq 0 \} \\ \Omega_{(H,h)}^1 = \{ x \in \Omega_H \, | \, \phi_H^c(x) \geq 0 , \, \phi_h^f(x) \leq 0 \} \end{array} \, . \end{equation} We define the interface between the fine domains as \begin{equation} \Gamma_{(H,h)}^{0} = \{ x \in \Omega_H \, | \, \phi_H^c(x) \geq 0 , \, \phi^f_{h}(x) = 0 \} \end{equation} and the interfaces between the coarse and the fine domains as \begin{equation}\begin{aligned} \Gamma_H^{fc} = \{ x \in \Omega_H \, | \, \phi^c_H(x) = 0 \}\\ \Gamma_{(H,h)}^{1} = \{ x \in \Gamma_H^{fc} \, | \, \phi^f_h(x) \geq 0 \} \\ \Gamma_{(H,h)}^{2} = \{ x \in \Gamma_H^{fc} \, | \, \phi^f_h(x) \leq 0 \}\end{aligned} \end{equation}
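To make these implicit definitions concrete, the following minimal sketch (standalone Python/NumPy; the circular level sets and the grid are illustrative assumptions, not the geometry of our examples) classifies the nodes of a background grid into $\Omega^c_H$, $\Omega^0_{(H,h)}$ and $\Omega^1_{(H,h)}$ from nodal level set values:
\begin{verbatim}
import numpy as np

# Illustrative signed distance functions: the zoom is a disc of radius 0.3
# (phi_c >= 0 inside the zoom, so the coarse domain is its exterior) and
# the inclusion a disc of radius 0.1 (phi_f <= 0 inside the inclusion).
phi_c = lambda x, y: 0.3 - np.hypot(x, y)
phi_f = lambda x, y: np.hypot(x, y) - 0.1

# Nodal values on a background grid (stand-in for the interpolants in
# Q_H and Q_(H,h))
X, Y = np.meshgrid(np.linspace(-1, 1, 41), np.linspace(-1, 1, 41))
pc, pf = phi_c(X, Y), phi_f(X, Y)

omega_c = pc <= 0                # Omega^2: homogenised coarse region
omega_0 = (pc >= 0) & (pf >= 0)  # Omega^0: matrix phase inside the zoom
omega_1 = (pc >= 0) & (pf <= 0)  # Omega^1: inclusion phase inside the zoom
print(omega_c.sum(), omega_0.sum(), omega_1.sum())  # nodes per region
\end{verbatim}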
Notice that finely discretized quantities are parametrised by a pair of mesh characteristics $\mathcal{H} = (H,h)$; this is due to the hierarchical structure of the multiresolution scheme that we have introduced (the coarse domain ``overshadows" the composite material). To simplify the notations, the coarse sets and variables that only depend on $H$ will also be written as dependent on $\mathcal{H}$. \subsubsection{Overlapping domain decomposition} For the three different domains of the multiresolution scheme, we need to define appropriate extended domains. Such an extended domain is composed of all the elements that have a non-void intersection with its non-extended counterpart. Hence, the set of all elements of $\mathcal{T}_\mathcal{H}$ that have a non-zero intersection with $\Omega_\mathcal{H}^c$ is \begin{equation} \hat{\mathcal{T}}_\mathcal{H}^c := \{ K \in \mathcal{T}_\mathcal{H}: K \cap \Omega_\mathcal{H}^c \neq \emptyset\} \end{equation} The fictitious domain corresponding to this set is $\hat{\Omega}_\mathcal{H}^c := \bigcup_{K \in \hat{\mathcal{T}}_\mathcal{H}^c } K $. Similarly, for the fine domains, \begin{equation} \forall i \in \{ 0,1\}, \qquad \hat{\mathcal{T}}_\mathcal{H}^i := \{ K \in \mathcal{T}_\mathcal{H}: K \cap \Omega_\mathcal{H}^i \neq \emptyset\} \end{equation} The domains corresponding to these sets are denoted by $\hat{\Omega}_\mathcal{H}^i := \bigcup_{K \in \hat{\mathcal{T}}_\mathcal{H}^i } K $, for $i=0$ and for $i=1$. \subsubsection{Extended interface FE spaces} We will look for an approximation $u_\mathcal{H} = \left( u_\mathcal{H}^0, u_\mathcal{H}^1,u_\mathcal{H}^2 \right) $ of the multiresolution elasticity problem in the space $\mathcal{U}_\mathcal{H} = \mathcal{U}_\mathcal{H}^0 \times \mathcal{U}_\mathcal{H}^1 \times \mathcal{U}_\mathcal{H}^2$, where \begin{equation} \begin{aligned} \mathcal{U}_\mathcal{H}^c = \mathcal{U}_\mathcal{H}^2 &:= \{ w \in \mathcal{C}^0(\hat{\Omega}_\mathcal{H}^c): w |_{K} \in \mathcal{P}^1(K) \, \forall K \in \hat{\mathcal{T}}_\mathcal{H}^c \} \\ \forall i \in \{ 0, 1\} , \quad \mathcal{U}_\mathcal{H}^i &:= \{ w \in \mathcal{C}^0(\hat{\Omega}_\mathcal{H}^i): w |_{K} \in \mathcal{P}^1(K) \, \forall K \in \hat{\mathcal{T}}_\mathcal{H}^i \} \end{aligned} \end{equation} Notice that $u_\mathcal{H}$ is multi-valued in the elements that are cut by the two embedded interfaces. This feature allows us to represent discontinuities at the two interfaces. The field of internal variables $\xi^{i,n}$, for any $n \in \llbracket 0 \, N\rrbracket$ and for any $i \in \llbracket 0 \, 2\rrbracket$, will be defined over the corresponding approximated domain ${\Omega}_\mathcal{H}^i$. These fields do not need to be extended to the fictitious domain. \subsubsection{Additional sets} We now define some additional sets, which are required to introduce the stabilisation strategy for our implicit boundary multiresolution formulation. For stabilisation purposes, let us denote the set of all elements which are intersected by $\Gamma_\mathcal{H}^{fc}$ by \begin{equation} \hat{\mathcal{G}}^{fc}_\mathcal{H} := \{ K \in \mathcal{T}_\mathcal{H} \, | \, K \cap \Gamma_\mathcal{H}^{fc} \neq \emptyset\} \end{equation} The domain corresponding to this set is denoted by $\hat{\Gamma}^{fc}_\mathcal{H} := \bigcup_{K \in \hat{\mathcal{G}}^{fc}_\mathcal{H} } K $.
Similarly, for the fine domains, for $i \in \{ 0, 1\}$, \begin{equation} \hat{\mathcal{G}}^i_\mathcal{H} := \{ K \in \mathcal{T}_\mathcal{H} \, | \, K \cap \Gamma_\mathcal{H}^i \neq \emptyset\} \end{equation} and the corresponding domains will be denoted by $\hat{\Gamma}^i_\mathcal{H} := \bigcup_{K \in \hat{\mathcal{G}}^i_\mathcal{H} } K $. We now define the set of ghost penalty element edges for fictitious domain $\hat{\Omega}_\mathcal{H}^0$, \begin{equation} \hat{\mathcal{F}}_G^0:= \{ F = K \cap K' : K, K' \in \hat{\mathcal{T}}_\mathcal{H}^0, \, K \neq K', \, F \cap \hat{\Gamma}^0_\mathcal{H} \neq \emptyset \} \end{equation} and for fictitious domains $\hat{\Omega}_\mathcal{H}^i $, $i \in \{ 1, 2\}$, as \begin{equation} \hat{\mathcal{F}}_G^i:= \{ F = K \cap K' : K, K' \in \hat{\mathcal{T}}_\mathcal{H}^i, \, K \neq K', \, F \cap \hat{\Gamma}^i_\mathcal{H} \neq \emptyset \} \end{equation} \subsection{Implicit boundary finite element formulation} The finite element multiresolution formulation is as follows: find $u_\mathcal{H} \in \mathcal{U}_\mathcal{H}$ such that, for any $\delta u_\mathcal{H} \in \mathcal{U}_\mathcal{H}$ satisfying the homogeneous Dirichlet boundary conditions, \begin{equation} a_\mathcal{H}(u_\mathcal{H},\delta u_\mathcal{H}) + a^\sharp_\mathcal{H}(u_\mathcal{H},\delta u_\mathcal{H}) + a^\heartsuit_\mathcal{H}(u_\mathcal{H},\delta u_\mathcal{H}) = l_\mathcal{H}(\delta u_\mathcal{H}) \, . \end{equation} In the previous formulation, the bilinear form $a_\mathcal{H}$ is defined by \begin{equation} a_\mathcal{H}(u_\mathcal{H},\delta u_\mathcal{H}) = \sum_{i=0}^2 \int_{\Omega_\mathcal{H}^i} \sigma^i(\nabla_s u_\mathcal{H}^i) : \nabla_s \delta u^i_\mathcal{H} \, dx \, , \end{equation} and the linear form $l_\mathcal{H}$ is as follows: \begin{equation} \displaystyle l_\mathcal{H}(\delta u_\mathcal{H}) = \sum_{i=0}^2 \int_{\Omega_\mathcal{H}^i} f \cdot \delta u_\mathcal{H}^i \, dx + \sum_{i=0}^2 \int_{\partial \Omega_{t,\mathcal{H}}^i} t_d \cdot \delta u_\mathcal{H}^i \, dx \, . \end{equation} Term $a^\sharp_\mathcal{H}$ is composed of terms that allow gluing the three domains together, using Nitsche's method. It is further expanded as \begin{equation} a^\sharp_\mathcal{H}(u_\mathcal{H},\delta u_\mathcal{H}) = a^{0,\sharp}_\mathcal{H}(u_\mathcal{H},\delta u_\mathcal{H}) + a^{1,\sharp}_\mathcal{H}(u_\mathcal{H},\delta u_\mathcal{H}) + a^{2,\sharp}_\mathcal{H}(u_\mathcal{H},\delta u_\mathcal{H} ) \, , \end{equation} where the first term acts on the interface between the matrix and the inclusions, while the last two act on the interfaces between the coarse domain and the matrix and inclusion phases, respectively. We have that \begin{equation} \begin{array}{rcl} \displaystyle a^{i,\sharp}_\mathcal{H}(u_\mathcal{H},\delta u_\mathcal{H}) & = & \displaystyle \beta^i \int_{\Gamma_\mathcal{H}^i} \llbracket u_\mathcal{H} \rrbracket^i \cdot \llbracket \delta u_\mathcal{H} \rrbracket^i \, dx \\ & - & \displaystyle \int_{\Gamma_\mathcal{H}^i} \left\{ t \right\}^i(u_\mathcal{H}) \cdot \llbracket \delta u_\mathcal{H} \rrbracket^i \, dx \\ & - & \displaystyle \int_{\Gamma_\mathcal{H}^i} \left\{ t \right\}^i( \delta u_\mathcal{H}) \cdot \llbracket u_\mathcal{H} \rrbracket^i \, dx \, .
\end{array} \end{equation} where \begin{equation} \begin{aligned} \llbracket u_\mathcal{H} \rrbracket^0 = u^0_\mathcal{H} - u^1_\mathcal{H} \\ \llbracket u_\mathcal{H} \rrbracket^1 = u^0_\mathcal{H} - u^2_\mathcal{H} \\ \llbracket u_\mathcal{H} \rrbracket^2 = u^1_\mathcal{H} - u^2_\mathcal{H} \end{aligned} \end{equation} denote the jumps in the displacements across $\Gamma_\mathcal{H}^0, \Gamma_\mathcal{H}^1$ and $\Gamma_\mathcal{H}^2$, respectively, and $\left\{ t \right\}^i$ denotes the following weighted traction averages \begin{equation} \begin{aligned} \left\{ t \right\}^0 = w_1^0 \sigma^0(\nabla_s u^0_\mathcal{H}) \cdot n_1 + w_2^0 \sigma^1(\nabla_s u^1_\mathcal{H}) \cdot n_1 \\ \left\{ t \right\}^1 = w_1^1 \sigma^0(\nabla_s u^0_\mathcal{H})\cdot n_0 + w_2^1 \sigma^2(\nabla_s u^2_\mathcal{H}) \cdot n_0 \\ \left\{ t \right\}^2 = w_1^2 \sigma^1(\nabla_s u^1_\mathcal{H}) \cdot n_0 + w_2^2 \sigma^2(\nabla_s u^2_\mathcal{H}) \cdot n_0 \end{aligned} \end{equation} where $n_0 = - \frac{\nabla \phi^c}{| \nabla \phi^c |}$ is the unit normal to the zoom interface and $n_1 = - \frac{\nabla \phi^f}{| \nabla \phi^f |}$ is the unit normal to the microstructure interface. The weights and the Nitsche penalty parameters are weighted by the stiffnesses and mesh sizes of the adjoining domains: \begin{equation} \begin{aligned} w_1^0 = \frac{E^1}{E^0 + E^1 }, \quad w_2^0 = \frac{E^0}{E^0 + E^1} \\ w_1^1 = \frac{\frac{E^c}{H}}{\frac{E^0}{h} + \frac{E^c}{H}}, \quad w_2^1 = \frac{\frac{E^0}{h}}{\frac{E^0}{h} + \frac{E^c}{H}} \\ w_1^2 = \frac{\frac{E^c}{H}}{\frac{E^1}{h} + \frac{E^c}{H}}, \quad w_2^2 = \frac{\frac{E^1}{h}}{\frac{E^1}{h} + \frac{E^c}{H}} \end{aligned} \end{equation} \begin{equation} \begin{aligned} \beta^0 = \frac{E^0 E^1}{h (E^0 + E^1) }, \\ \beta^1 = \frac{\frac{E^c}{H}\frac{E^0}{h}}{\frac{E^0}{h} + \frac{E^c}{H}}, \\ \beta^2 = \frac{\frac{E^c}{H} \frac{E^1}{h}}{\frac{E^1}{h} + \frac{E^c}{H}}. \end{aligned} \end{equation} Finally, $a^\heartsuit_\mathcal{H}$ is an interior penalty (ghost penalty) regularisation term that reads \begin{equation} a^\heartsuit_\mathcal{H}(u_\mathcal{H},\delta u_\mathcal{H}) = \sum_{i=0}^{2} \sum_{F\in {\hat{\mathcal{F}}}^i _G} \bigg(\int_F {\beta^i \mathcal{H}^i} \llbracket {\nabla_s{u_\mathcal{H}}} \rrbracket \llbracket {\nabla_s{(\delta u_\mathcal{H})}} \rrbracket \ dx \bigg) \, . \end{equation} The nonlinear system of equations described above is solved by a standard Newton algorithm. \clearpage \section{Numerical Results} This section first verifies the proposed multiresolution framework for a simplified multiscale elasticity problem. Then, we adopt a von Mises material for the multiscale model and assess it for two types of micro-inclusions, voids and hard inclusions. Finally, we assess the performance of a zoom whose geometrical properties change during the solution of nonlinear problems. All the numerical results are produced with the CutFEM library \cite{Burman.Claus.ea.15}, developed in FEniCS \cite{fenics}. \subsection{Quasi-uniform porous structure} \subsubsection{Verification test} In this section, the proposed multiresolution framework is assessed for a heterogeneous structure with micropores and compared with a full microscale FEM model and the mixed multiscale method proposed in \cite{mikaeili2021mixed}. We consider the same quasi-uniform porous medium given in \cite{mikaeili2021mixed}, which includes circular pores distributed all over the domain (as depicted in Figure \ref{fig:model1schm}). The material behaviour is assumed to be elastic and isotropic.
According to \cite{mikaeili2021mixed}, the material properties for the matrix are given as $E^0=1$ and $\nu^0 = 0.3$, and those for the homogenized model are derived by the Mori-Tanaka (MT) method \cite{Mura87, IMANI201816489} as follows: $E^2=0.78$ and $\nu^2 = 0.3$. The computational meshes for the FEM and multiresolution models are shown in Figure \ref{fig:Model1meshesS}a,b; both contain linear Lagrangian elements with a smallest element size of $h_{min}=0.054$. The element size inside the zoom area is the same as in the reference models for verification purposes. As shown for the discretized domain in Figure \ref{fig:model1meshzoom}, the three types of interfaces, $\Gamma^{0} _{(H,h)}$, $\Gamma^1 _{(H,h)}$ and $\Gamma^2 _{(H,h)}$, intersect the background mesh in an arbitrary fashion. \begin{figure}[h] \centering \includegraphics[scale=.3]{Model2schm.png} \caption{Boundary conditions and geometry of the heterogeneous structure under a compression test} \label{fig:model1schm} \end{figure} \begin{figure}[h] \centering \subfloat[ \label{Mixed01}]{% \includegraphics[scale=0.27]{MeshB_RefB.png} } \hfill \subfloat[ \label{Mixed08}]{% \includegraphics[scale=0.22]{Model1meshCut.png} } \hfill \caption{Discretized domains; a) FE conforming mesh and b) CutFEM non-conforming mesh.} \label{fig:Model1meshesS} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=.25]{Model1meshzoomb.png} \caption{Discretized domain for the multiscale CutFEM including $\Gamma^0 _{(H,h)}$, $\Gamma^1 _{(H,h)}$ and $\Gamma^2 _{(H,h)}$} \label{fig:model1meshzoom} \end{figure} A compression test is conducted for the heterogeneous structure, where the displacements are fixed along the x- and y-directions on the lower end and a force $f=(0,-0.01)$ is prescribed along the top edge. The FEM displacement component $u_y$ contour is obtained and used as a reference solution, see Figure \ref{fig:Model1U}a. The same test is carried out for the multiscale model. The corresponding displacement field component is shown in Figure \ref{fig:Model1U}b. When our multiscale model is compared with the FEM and mixed multiscale models, a close agreement in $u_y$ is observed inside the zooming region. Outside of the zoom, a satisfactory agreement is achieved as well. \begin{figure}[h] \centering \subfloat[ \label{Mixed01}]{% \includegraphics[scale=0.24]{U_CutBHigh.png} } \hfill \subfloat[ \label{Mixed08}]{% \includegraphics[scale=0.25]{U_A_multi.png} } \hfill \caption{Displacement field component $u_y$; a) FEM and b) Multiscale CutFEM.} \label{fig:Model1U} \end{figure} We compute the stress field component $\sigma_{yy}$ in Figure \ref{fig:Model1sigma}. Here again, a good agreement is achieved between the multiscale and reference models. The homogeneous model adopted in the coarse domain of the multiscale model smooths out the fluctuations produced by the pores in that domain, and the overall trend there is captured very well. \begin{figure}[h] \centering \subfloat[ \label{Mixed01}]{% \includegraphics[scale=0.3]{Model1_sigfem.png} } \hfill \subfloat[ \label{Mixed08}]{% \includegraphics[scale=0.3]{Model1_sigmulti.png} } \hfill \caption{Stress field component $\sigma_{yy}$; a) FEM and b) Multiscale CutFEM.} \label{fig:Model1sigma} \end{figure} \clearpage For further investigation, we study the effect of mesh coarsening in the coarse region of the multiresolution framework on the energy norm of the error field.
The corresponding mesh layouts for the fine and coarse multiresolution models are depicted in Figures \ref{fig:Model1meshesS}b and \ref{fig:Model1errormesh}b, respectively. While the mesh outside the zoom differs between the two multiresolution models, the mesh inside has the same size. Moreover, as shown in Figure \ref{fig:Model1errormesh}a, a full fine-resolution mesh is used for the computation of the error field. We compute the energy norm of the error field with respect to the reference FE model as \begin{equation} \| e \| = \sqrt{ \int_{\Omega} \nabla_s e : \nabla_s e \, dx \, }, \end{equation} where $e =u_{\text{ref}}-u$, and $u_{\text{ref}}$ and $u$ denote the displacements of the FE and multiresolution models, respectively. The corresponding energy norm for the two multiresolution models with respect to the reference FE model is shown in Figure \ref{fig:Model1error}. The results show that refining the mesh outside of the zoom reduces the energy norm of the error field inside the zoom. \begin{figure}[h] \centering \subfloat[ \label{errormesh}]{% \includegraphics[scale=0.38]{errorfield_mesh.png} } \hfill \subfloat[ \label{coarsemulti}]{% \includegraphics[scale=0.79]{Coarsemesh.png} } \hfill \caption{Computational meshes; a) error field and b) multiresolution CutFEM with coarse mesh.} \label{fig:Model1errormesh} \end{figure} \begin{figure}[h] \centering \subfloat[ \label{Mixed01}]{% \includegraphics[scale=.9]{FineEnorm.png} } \hfill \subfloat[ \label{Mixed08}]{% \includegraphics[scale=.9]{CoarseEnorm.png} } \hfill \caption{Energy norm of the error field for the multiscale CutFEM; a) fine mesh and b) coarse mesh } \label{fig:Model1error} \end{figure} \clearpage \subsection{S-shaped heterogeneous structure with a fixed zoom} In this section, we assess the ability and efficiency of the multiresolution CutFEM in modelling heterogeneous structures with nonlinear material properties and different types of heterogeneities. We consider an S-shaped heterogeneous structure with a random distribution of heterogeneities. As shown in Figure \ref{fig:model2schmincl}, the heterogeneities can be either voids or hard inclusions. We assume von Mises elastoplastic material behaviour for these structures. The material properties of the heterogeneous structures are: $E^0=1$, $E^1=2$, $\nu^0=\nu^1=\nu^2=0.3$, $\sigma_c ^0=\sigma_c ^1=\sigma_c ^2=0.25$ and $\hat{H}_p ^0= \hat{H}_p ^1= \hat{H}_p ^2= 10^{-2}$. The material properties for the macroscale homogenized model with voids and inclusions are calculated using the MT method as $E^2= 0.5$ and $E^2=1.3$, respectively. To analyze the influence of different microstructural features on the accuracy of the proposed multiscale framework, we consider the geometry and distribution of the voids and hard inclusions to be the same in the two structures. We restrict the displacement along the x- and y-directions on the lower end and apply a force $f=(0,0.18)$ incrementally on the top edge along the y-direction. We employ two circular zooms fixed over a background mesh (see Figure \ref{fig:model2meshzoom}). We refine the mesh inside the zoom regions with a refinement scale of $s=\frac{1}{16}$ (meaning that each coarse element is subdivided hierarchically into 16 fine elements), where the largest element size is $h_{max}=0.06$.
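A minimal sketch of this hierarchical subdivision (standalone Python/NumPy; red, i.e. edge-midpoint, refinement is assumed as one standard realization, under which two levels turn each coarse triangle into $4^2=16$ children and quarter the element size):
\begin{verbatim}
import numpy as np

def red_refine(tri):
    """Split one triangle (list of three 2D vertices) into four children
    by connecting the edge midpoints."""
    a, b, c = tri
    ab, bc, ca = (a + b) / 2, (b + c) / 2, (c + a) / 2
    return [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]]

def refine(tris, levels):
    for _ in range(levels):
        tris = [child for t in tris for child in red_refine(t)]
    return tris

coarse = [[np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])]]
print(len(refine(coarse, 2)))  # 16 fine elements: refinement scale s = 1/16
\end{verbatim}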
The discretized physical domains of the multiresolution models in Figures \ref{fig:model2meshzoom}b,c show that all three interfaces intersect the coarse background mesh (see Figure \ref{fig:model2meshzoom}a) in a fully arbitrary manner for both models. Next, we solve the nonlinear problem and assess the corresponding solution fields. The displacement field component $u_y$ for the two models in Figure \ref{fig:model2disp} is smooth, in particular in the cut elements. Moreover, the stress field component $\sigma_{yy}$, shown in Figure \ref{fig:model2stress}, is smooth for both structures. However, as expected, the structure with hard inclusions is stiffer and carries higher stresses inside and outside the zoom. \begin{figure}[h] \centering \subfloat[ \label{Mixed01}]{% \includegraphics[scale=0.3]{Model2_schmporousc.png} } \hfill \subfloat[ \label{Mixed08}]{% \includegraphics[scale=0.3]{Model_schminclusionc.png} } \hfill \caption{Geometry of the heterogeneous structures; a) heterogeneities are voids, b) heterogeneities are hard inclusions} \label{fig:model2schmincl} \end{figure} \begin{figure}[h] \centering \subfloat[ \label{Coarse}]{% \includegraphics[scale=0.27]{model2coarsemesh.png} } \hfill \subfloat[ \label{Pores}]{% \includegraphics[scale=0.3]{Model2meshCutpores.png} } \hfill \subfloat[ \label{HardInc}]{% \includegraphics[scale=.3]{Model2meshCutb.png} } \hfill \caption{Computational meshes; a) coarse mesh, b) multiresolution mesh for the porous microstructure, c) multiresolution mesh for the microstructure with hard inclusions} \label{fig:model2meshzoom} \end{figure} \begin{figure}[h] \centering \subfloat[ \label{Mixed01}]{% \includegraphics[scale=0.32]{disp_pores_fix.png} } \hfill \subfloat[ \label{Mixed08}]{% \includegraphics[scale=0.32]{disp_inclu_fix.png} } \hfill \caption{Displacement field $u_y$ for the heterogeneous structures at the last time step; a) heterogeneities are voids, b) heterogeneities are hard inclusions} \label{fig:model2disp} \end{figure} \begin{figure}[h] \centering \subfloat[ \label{Mixed01}]{% \includegraphics[scale=0.32]{sig_pore_fix.png} } \hfill \subfloat[ \label{Mixed08}]{% \includegraphics[scale=0.32]{sig_inclu_fix.png} } \hfill \caption{Stress component $\sigma_{yy}$ at the last time step; a) heterogeneities are voids, b) heterogeneities are hard inclusions} \label{fig:model2stress} \end{figure} \clearpage \subsection{S-shaped porous structure with a moving zoom} This section is devoted to the numerical study of a moving zoom for the proposed multiresolution framework with von Mises plasticity. We consider here the S-shaped microporous structure analyzed in Section 3.2. However, contrary to the previous section, we do not fix the zooms over the background mesh but relocate them during the simulation. As shown in Figure \ref{fig:model3mesh}, this relocation is carried out arbitrarily and independently of the background mesh configuration. In this study, we change the location and size of the zooms manually during the simulation to assess the numerical efficiency; an adaptive approach would, however, be more relevant from the physics point of view.
\begin{figure}[h] \centering \subfloat[ \label{meshmov1}]{% \includegraphics[scale=0.3]{mesh_mov_1b.png} } \hfill \subfloat[ \label{meshmov2}]{% \includegraphics[scale=0.3]{mesh_mov_2b.png} } \hfill \subfloat[ \label{meshmov3}]{% \includegraphics[scale=0.3]{mesh_mov_3b.png} } \hfill \caption{Computational meshes for the microporous heterogeneous structure with different sets of zooms at various time steps} \label{fig:model3mesh} \end{figure} We show the displacement field component $u_y$ at three different times in Figure \ref{fig:model3disp}. The results show that the multiscale solution with a nonlinear material remains convergent at each time step, even upon relocation of the zooming regions. Furthermore, the global behaviour remains smooth and free of oscillations during the simulation. Next, we show the results in the form of plastic strain growth during the simulation. We compute the effective plastic strain at three time steps and show the results in Figure \ref{fig:model3sig}. The changes in the zooms' location and size during the simulation are intended to capture the plastic strain growth. \begin{figure}[h] \centering \subfloat[ \label{dispmov1}]{% \includegraphics[scale=0.3]{disp_mov_1b.png} } \hfill \subfloat[ \label{dispmov2}]{% \includegraphics[scale=0.3]{disp_mov_2b.png} } \hfill \subfloat[ \label{dispmov3}]{% \includegraphics[scale=0.3]{disp_mov_3b.png} } \hfill \caption{Displacement component $u_y$ for the microporous heterogeneous structure with different sets of zooms at various time steps} \label{fig:model3disp} \end{figure} \begin{figure}[h] \centering \subfloat[ \label{sigmov1}]{% \includegraphics[scale=0.3]{sig_mov_1b.png} } \hfill \subfloat[ \label{sigmov2}]{% \includegraphics[scale=0.3]{sig_mov_2b.png} } \hfill \subfloat[ \label{sigmov3}]{% \includegraphics[scale=0.3]{sig_mov_3b.png} } \hfill \caption{Effective plastic strain $\bar{\epsilon}_{p}$ at different time steps} \label{fig:model3sig} \end{figure} \clearpage \section{Conclusions} A hierarchical multiresolution framework in the context of CutFEM has been developed for elastic and elastoplastic materials. Signed distance functions have been employed over the background mesh to define the zooming region and the microstructure. The corresponding interfaces not only can intersect the background mesh in an arbitrary fashion but are also allowed to intersect each other freely. Full mesh independence in this framework is achieved by the ghost penalty regularization, which ensures the stability of the cut elements. Furthermore, Nitsche's method has been employed to enforce constraints over the interfaces between the different regions, i.e. the interface between the microscale and macroscale regions and the interface between the matrix and the inclusions. The elements inside the zooming regions have been refined hierarchically for high-resolution analysis and then used for the interpolation of the microstructure signed distance function. The corresponding results are compared with counterpart FE models. For both models, linear Lagrangian elements are employed. Three types of numerical examples have been assessed for the multiresolution framework: a fixed zoom for elasticity problems, a fixed zoom for plasticity problems and a moving zoom for plasticity problems. The first example is compared with the findings of a reference FEM model, and excellent agreement between the results is observed.
The results of the second example, which considers two types of inclusions (voids and hard inclusions) with von Mises material behaviour, are satisfactory in terms of smooth distributions of the global and local responses. Finally, in the third example, the relocation of zooming regions during the solution of nonlinear problems has been assessed. The results showed that the zooming location can be changed successfully during the solution of a nonlinear system, which makes the framework a robust tool for simulating propagation phenomena. \section*{Acknowledgement} The authors acknowledge the support of Cardiff University, funded by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 764644. \\ \bibliographystyle{unsrt}
\subsection{Directions for Future Work} The security notions of variants of collision resistance, including plain collision resistance and multi-collision resistance, can be phrased in the language of entropy. For example, plain collision resistance requires that, once a hash value $y$ is fixed, the (max) entropy of the preimages that any efficient adversary can find is zero. In multi-collision resistance, it may be larger than zero, even for every $y$, but is still bounded by the size of the allowed multi-collisions. In distributional collision resistance, the (Shannon) entropy is close to maximal. Yet, the range of applications of collision resistance (or even multi-collision resistance) is significantly larger than that of distributional collision resistance. Perhaps the most basic such application is \emph{succinct} commitment protocols, which are known from plain/multi-collision resistance but not from distributional collision resistance (by \emph{succinct} we mean that the total communication is shorter than the string being committed to). Thus, with the above entropy perspective in mind, a natural question is to characterize the full range of parameters between distributional and plain collision resistance and to understand which applications each of them implies. A more concrete question is to find the minimal notion of security for collision resistance that implies succinct commitments. A different line of questions concerns understanding better the notion of distributional collision resistance and constructing it from more assumptions. Komargodski and Yogev constructed it from multi-collision resistance and from the average-case hardness of SZK. Can we construct it, for example, from the multivariate quadratic (MQ) assumption~\cite{MatsumotoI88}, or can we show an attack for random degree-2 mappings? Indeed, we know that random degree-2 mappings cannot be used for plain collision resistant hashing~\cite[Theorem 5.3]{ApplebaumHIKV17}. \subsection{From Statistically Hiding Commitments to dCRH\xspace -- Proof of Theorem~\ref{thm:dcrh_from_stat_hiding}}\label{sec:dcrh_from_stat_hiding} Let $\pi=(\mathcal{S}, \mathcal{R}, \mathcal{V})$ be a binding and statistically hiding two-message commitment scheme. We show that there exists a dCRH\xspace family ${\mathcal H}$. To sample a hash function in the family with security parameter $n$, we use the receiver's first message of the protocol. Namely, we set the hash function as $h \gets \mathcal{R}(1^n)$. Then, to evaluate $h$ on input $x$, we first parse $x$ as $x=(b,r)$, where $b$ is a bit, and output a commitment to the bit $b$ using randomness $r$, with respect to the receiver message $h$. That is, we set $$h(x)=\mathcal{S}(h,b;r).$$ Since $\pi$ is efficient, sampling and evaluating $h$ are polynomial-time procedures. This concludes the definition of our family ${\mathcal H}$ of hash functions. (Note that the functions in the family are not necessarily compressing.) We next argue security. Suppose toward contradiction that ${\mathcal H}$ is not a dCRH\xspace according to Definition~\ref{def:dcrh}. Then, for any $\delta(n)= n^{-O(1)}$ there exists an adversary $\mathsf{A}$ such that \begin{align}\label{eq:sdbreaker} \mathbf{\Delta}\left((h,\mathsf{A}(1^n,h)),(h,\operatorname{\sf Col}(h))\right) \le \delta, \end{align} for infinitely many $n$'s. From here on, we fix $\delta$ to be any function such that $n^{-O(1)}< \delta< \frac{1}{2}-n^{-O(1)}$. We show how to use $\mathsf{A}$ to break the binding property of the commitment scheme.
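For concreteness, the construction can be sketched in code (a toy Python sketch; the hash-based commitment below is a hypothetical stand-in for $\pi=(\mathcal{S},\mathcal{R},\mathcal{V})$ that only fixes the interfaces and is, in particular, \emph{not} statistically hiding):
\begin{verbatim}
import hashlib, os

def R(n):              # receiver's first message (here: a random key)
    return os.urandom(n // 8)

def S(h, b, r):        # sender's commitment to bit b with randomness r
    return hashlib.sha256(h + bytes([b]) + r).digest()

# The dCRH family: sample h <- R(1^n), and evaluate h on x = (b, r)
# as the commitment S(h, b; r).
def sample_hash(n):
    return R(n)

def evaluate(h, x):
    b, r = x           # parse the input as a bit and commitment randomness
    return S(h, b, r)

h = sample_hash(128)
print(evaluate(h, (1, os.urandom(16))).hex())
\end{verbatim}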
Our cheating receiver $\mathcal{R}^*$ is defined as follows: On input $h$, $\mathcal{R}^*$ runs $\mathsf{A}(h)$ to get $x$ and $x'$, interprets $x=(b,r)$ and $x'=(b',r')$, and outputs $b$ and $b'$ along with their openings $r$ and $r'$, respectively. Our goal is to show that $x=(b,r)$ and $x'=(b',r')$ are two valid distinct openings of the commitment. By Equation \eqref{eq:sdbreaker}, it suffices to analyze the success probability when the pair $(x,x')$ is sampled according to the distribution $\operatorname{\sf Col}(h)$, and show that it is at least $1/2-{\sf negl}(n)$. From the definition of $\operatorname{\sf Col}(h)$, we have that $h(x)=h(x')$ and thus $\mathcal{S}(h,b;r)=\mathcal{S}(h,b';r') \coloneqq y$. In other words, the second messages of the protocol for $b$ with randomness $r$ and for $b'$ with randomness $r'$ coincide, and thus both pass as valid openings in the reveal stage of the protocol: $\mathcal{V}(h,y,b,r)=1$ and $\mathcal{V}(h,y,b',r')=1$. We are left to show that these are two \emph{distinct} openings of the commitment, namely, that $b \neq b'$. To show this, we use the statistical hiding property of the commitment scheme. The following claim concludes the proof. \begin{claim} Fix any $h$. Then for $((b,r),(b',r')) \gets \operatorname{\sf Col}(h)$ it holds that $\pr{b\neq b'} \ge 1/2 - {\sf negl}(n)~.$ \end{claim} \begin{proof} Let $B$ be the uniform distribution on bits and $R$ the uniform distribution on commitment randomness. For every commitment $c$, let $B_c$ be the distribution on bits given by sampling $(b,r) \gets (B,R)$ conditioned on $\mathcal{S}(h,b;r)=c$. Let $C$ be the distribution of random commitments to a random bit. By the statistical hiding property of the commitment scheme, $$ \Delta((\mathcal{S}(h,B,R),B),(\mathcal{S}(h,B',R),B)) \leq \varepsilon\enspace, $$ where $B'$ is an independent copy of $B$, and $\varepsilon ={\sf negl}(n)$ is a negligible function. Furthermore, $$ \Delta((\mathcal{S}(h,B,R),B),(\mathcal{S}(h,B',R),B)) = \Delta((C,B_C),(C,B))= \EE{\begin{subarray}{c} c \gets C\end{subarray}}{\Delta(B_c,B)} \enspace. $$ By Markov's inequality, it holds that $$ \prob{c\gets C}{ \Delta(B_c,B) \geq \sqrt{\varepsilon} } \leq \sqrt{\varepsilon}\enspace. $$ To conclude the proof, note that \begin{align*} &\pr{b = b' : ((b,r),(b',r'))\gets \operatorname{\sf Col}(h)} = \pr{b = b' \colon\; \begin{array}{l} (b,r) \gets (B,R)\\ c = \mathcal{S}(h,b;r)\\ b' \gets B_c \end{array}} \leq \\ &\pr{b = b' \colon \begin{array}{l} (b,r) \gets (B,R)\\ c = \mathcal{S}(h,b;r)\\ b' \gets B_c\\ \Delta(B_c,B) \leq \sqrt{\varepsilon} \end{array}} + \prob{c\gets C}{ \Delta(B_c,B) \geq \sqrt{\varepsilon}}\leq\\ &\left(\frac{1}{2}+\sqrt{\varepsilon}\right)+\sqrt{\varepsilon}=\frac{1}{2}+{\sf negl}(n) ~. \end{align*} \end{proof} Overall, the success probability of $\mathcal{R}^*$ is at least $1/2 - {\sf negl}(n) -\delta\geq n^{-O(1)}$. \paragraph{Using string commitments.} The above proof constructs dCRH\xspace from statistically hiding \emph{bit} commitment schemes. For schemes that support commitments to {\em strings}, following the above proof gives a stronger notion of dCRH\xspace, where the adversary's output distribution is $(1-{\sf negl}(n))$-far from a random collision distribution. Technically, the change in the proof is to interpret $b$ in $x=(b,r)$ as a string of length $n$, rather than as a single bit. The proof remains the same except that the probability that $b = b'$ is (negligibly close to) $2^{-n}$ instead of $1/2$.
Thus, overall, the success probability of $\mathcal{R}^*$ is at least $1- {\sf negl}(n) - \delta$. To ensure a polynomial success probability, we can allow any $\delta = 1- n^{-O(1)}$. \section{From dCRH\xspace to Statistically Hiding Commitments and Back} We show that distributional collision resistant hash functions imply constant-round statistically hiding commitments. \begin{theorem}\label{thm:stat_hide_dcrh} Assume the existence of a distributional collision resistant hash function family. Then, there exists a constant-round statistically hiding and computationally binding commitment scheme. \end{theorem} Our proof relies on the transformation of Haitner et al.~\cite{HaitnerRVW09,HaitnerReVaWe18}, translating inaccessible-entropy generators to statistically hiding commitments. Concretely, we construct appropriate inaccessible-entropy generators from distributional collision resistant hash functions. In \Cref{sec:prelim:inaccessibe}, we recall the necessary definitions and the result of \cite{HaitnerReVaWe18}, and then, in \Cref{our-generator}, we prove Theorem~\ref{thm:stat_hide_dcrh}. We complement the above result by showing a loose converse to \Cref{thm:stat_hide_dcrh}, namely, that two-message statistically hiding commitments (with possibly large communication) imply the existence of distributional collision resistant hashing. \begin{theorem}\label{thm:dcrh_from_stat_hiding} Assume the existence of a binding and statistically hiding two-message commitment scheme. Then, there exists a dCRH\xspace function family. \end{theorem} The proof of \Cref{thm:dcrh_from_stat_hiding} appears in \Cref{sec:dcrh_from_stat_hiding}. \subsection{Preliminaries on Inaccessible Entropy Generators}\label{sec:prelim:inaccessibe} The following definitions of real and accessible entropy of protocols are taken from \cite{HaitnerReVaWe18}. \begin{definition}[Block generators]\label{def:blockGenrator} Let $n$ be a security parameter, and let $c=c(n)$, $s=s(n)$ and $m=m(n)$. An {\sf $m$-block generator} is a function $G \colon \{0,1\}^{c} \times \{0,1\}^{s} \mapsto ({\zo^\ast})^{m}$. It is {\sf efficient} if its running time on input of length $c(n)+ s(n)$ is polynomial in $n$. We call parameter $n$ the {\sf security parameter}, $c$ the {\sf public parameter length}, $s$ the {\sf seed length}, $m$ the {\sf number of blocks}, and $\ell(n) = \max_{(z,x)\in \{0,1\}^{c(n)} \times \{0,1\}^{s(n)},i\in [m(n)]} \size{G(z,x)_i}$ the {\sf maximal block length} of $G$. \end{definition} \begin{definition}[Real sample-entropy]\label{def:RealSamEnt} Let $G$ be an $m$-block generator over $\{0,1\}^{c} \times \{0,1\}^{s}$, let $n\in{\mathbb{N}}$, let $Z_n$ and $X_n$ be uniformly distributed over $\{0,1\}^{c(n)}$ and $\{0,1\}^{s(n)}$, respectively, and let $\mathbf{Y}_n= (Y_{1},\ldots,Y_m) = G(Z_n, X_n)$. For $n\in{\mathbb{N}}$ and $i\in [m(n)]$, define the {\sf real sample-entropy of $\mathbf{y}\in \operatorname{Supp}(Y_{1},\ldots,Y_i)$ given $z\in\operatorname{Supp}(Z_n)$} as $$\operatorname{RealH}_{G,n}(\mathbf{y}| z) = \sum_{j=1}^i \mathsf{H}_{Y_j|Z_n,Y_{<j}}(\mathbf{y}_j|z,\mathbf{y}_{<j}).$$ \end{definition} \noindent We omit the security parameter from the above notation when clear from the context. \begin{definition}[Real entropy] Let $G$ be an $m$-block generator, and let $Z_n$ and $\mathbf{Y}_n$ be as in Definition~\ref{def:RealSamEnt}. Generator $G$ has {\sf real entropy at least $k = k(n)$} if $$\EE{(z,\mathbf{y}) \gets (Z_n,\mathbf{Y}_n)}{\operatorname{RealH}_{G,n}(\mathbf{y}| z)} \geq k(n)$$ for every $n\in {\mathbb{N}}$.
The generator $G$ has {\sf real min-entropy at least $k(n)$ in its \ith block} for some $i = i(n)\in [m(n)]$, if $$\prob{(z,\mathbf{y}) \gets (Z_n,\mathbf{Y}_n)}{\mathsf{H}_{Y_i|Z_n,Y_{<i}}(\mathbf{y}_i|z,\mathbf{y}_{<i}) < k(n)} = {\sf negl}(n).$$ We say the above bounds are {\sf invariant to the public parameter} if they hold for any fixing of the public parameter $Z_n$.\footnote{In particular, this is the case when there is no public parameter, \ie $c= 0$.} \end{definition} It is known that the real Shannon entropy amounts to measuring the standard conditional Shannon entropy of $G$'s output blocks. \begin{lemma}[{\cite[Lemma 3.4]{HaitnerReVaWe18}}] Let $G$, $Z_n$ and $\mathbf{Y}_n$ be as in \cref{def:RealSamEnt} for some $n\in {\mathbb{N}}$. Then $$\EE{(z,\mathbf{y}) \gets (Z_n,\mathbf{Y}_n)}{\operatorname{RealH}_{G,n}(\mathbf{y}|z)} = \mathsf{H}(\mathbf{Y}_n|Z_n).$$ \end{lemma} Toward the definition of \emph{inaccessible entropy}, we first define \emph{online block generators}, which are a special type of block generators that toss fresh random coins before outputting each new block. \begin{definition}[Online block generator] Let $n$ be a security parameter, and let $c = c(n)$ and $m=m(n)$. An $m$-block {\sf online} generator is a function ${\widetilde{\Gc}} \colon \{0,1\}^c \times (\{0,1\}^{v})^{m} \mapsto ({\zo^\ast})^{m}$ for some $v =v(n)$, such that the \ith output block of ${\widetilde{\Gc}}$ is a function of (only) its first $i$ input blocks. We denote the {\sf transcript} of ${\widetilde{\Gc}}$ over a random input by $T_{{\widetilde{\Gc}}}(1^n) = (Z,R_1,Y_1,\ldots,R_m,Y_m)$, for $Z \gets \{0,1\}^c$, $(R_1,\ldots,R_m) \gets (\{0,1\}^{v})^{m}$ and $(Y_1,\ldots,Y_m) ={\widetilde{\Gc}}(Z,R_1,\ldots,R_m)$. \end{definition} In the following, we let ${\widetilde{\Gc}}(z,r_1,\ldots,r_i)_i$ stand for ${\widetilde{\Gc}}(z,r_1,\ldots,r_i,x^\ast)_i$ for an arbitrary $x^\ast \in (\{0,1\}^{v})^{m -i}$ (note that the choice of $x^\ast$ has no effect on the value of ${\widetilde{\Gc}}(z,r_1,\ldots,r_i,x^\ast)_i$). \begin{definition}[Accessible sample-entropy]\label{def:AccessibleSampleEntropy} Let $n$ be a security parameter, and let ${\widetilde{\Gc}}$ be an $m=m(n)$-block online generator. The {\sf accessible sample-entropy of $\mathbf{t} =(z,r_1,y_1,\ldots,r_m,y_m)\in \operatorname{Supp}(T_{{\widetilde{\Gc}}}(1^n))$}, where $T_{{\widetilde{\Gc}}}(1^n) = (Z,R_1,Y_1,\ldots,R_m,Y_m)$, is defined by $$\operatorname{AccH}_{{\widetilde{\Gc}},n}(\mathbf{t}) = \sum_{i=1}^{m} \mathsf{H}_{Y_i|Z,R_{<i}}(y_i|z,r_{<i}).$$ \end{definition} \noindent Again, we omit the security parameter from the above notation when clear from the context. As in the case of real entropy, the expected accessible entropy of a random transcript can be expressed in terms of the standard conditional Shannon entropy. \begin{lemma}[{\cite[Lemma 3.7]{HaitnerReVaWe18}}]\label{lemma:acc_entr} Let ${\widetilde{\Gc}}$ be an online $m$-block generator and let $(Z,R_1,Y_1,\ldots,\allowbreak R_m,\allowbreak Y_m) = T_{\widetilde{\Gc}}(1^n)$ be its transcript. Then, $$\EE{\mathbf{t}\gets T_{{\widetilde{\Gc}}}(1^n)} {\operatorname{AccH}_{{\widetilde{\Gc}}}(\mathbf{t})} = \sum_{i\in[m]} \mathsf{H}(Y_i|Z,R_{<i}).$$ \end{lemma} We focus on efficient generators that are consistent with respect to $G$. That is, the support of their output is contained in that of $G$.
We focus on efficient generators that are consistent with respect to $G$; that is, the support of their output is contained in that of $G$.
\begin{definition}[Consistent generators]\label{def:nonFailingGen} Let $G$ be a block generator over $\{0,1\}^{c(n)} \times \{0,1\}^{s(n)}$. A block (possibly online) generator $G'$ over $\{0,1\}^{c(n)} \times \{0,1\}^{s'(n)}$ is {\sf $G$ consistent} if, for every $n\in {\mathbb{N}}$, it holds that $\operatorname{Supp}(G'(U_{c(n)},U_{s'(n)})) \subseteq \operatorname{Supp}(G(U_{c(n)},U_{s(n)}))$. \end{definition}
\begin{definition}[Accessible entropy]\label{def:accessible-entropy} A block generator $G$ has {\sf accessible entropy at most $k = k(n)$} if, for every efficient $G$-consistent, online generator ${\widetilde{\Gc}}$ and all large enough $n$, $$\EE{\mathbf{t}\gets T_{\widetilde{\Gc}}(1^n)} {\operatorname{AccH}_{\widetilde{\Gc}}(\mathbf{t})} \leq k.$$ \end{definition}
We call a generator whose real entropy is noticeably higher than its accessible entropy an inaccessible entropy generator. We use the following reduction from inaccessible entropy generators to constant-round statistically hiding commitments.
\begin{theorem}[{\cite[Thm.\ 6.24]{HaitnerReVaWe18}}] Let $G$ be an efficient block generator with a constant number of blocks. Assume $G$'s real Shannon entropy is at least $k(n)$ for some efficiently computable function $k$, and that its accessible entropy is bounded by $k(n) - 1/p(n)$ for some $p\in \mathsf{poly}$. Then there exists a constant-round statistically hiding and computationally binding commitment scheme. Furthermore, if the bound on the real entropy is invariant to the public parameter, then the commitment is receiver public-coin. \end{theorem}
\begin{remark}[Inaccessible max/average entropy] Our result relies on the reduction from inaccessible \emph{Shannon} entropy generators to statistically hiding commitments, given in \cite{HaitnerReVaWe18}. The proof of this reduction follows closely the proof in previous versions~\cite{HaitnerVadhan17,HaitnerRVW09}, where the reduction was from inaccessible \emph{max} entropy generators. The extension to Shannon entropy generators is essential for our result. \end{remark}
\subsection{From dCRH\xspace to Inaccessible Entropy Generators -- Proof of Theorem~\ref{thm:stat_hide_dcrh}}\label{our-generator}
In this section, we construct a two-block generator whose real entropy is noticeably higher than its accessible entropy. Let ${\mathcal H} = \{ {\mathcal H}_n\colon\allowbreak \{0,1\}^{n} \to \{0,1\}^{m} \}_{n\in {\mathbb{N}}}$ be a dCRH\xspace for $m = m(n)$ and assume that each $h\in {\mathcal H}_n$ requires $c=c(n)$ bits to describe. By Definition~\ref{def:dcrh}, there exists a polynomial $p(\cdot)$ such that for any probabilistic polynomial-time algorithm $\mathsf{A}$, it holds that \begin{align*} \mathbf{\Delta}\left((h,\mathsf{A}(1^n,h)),(h,\operatorname{\sf Col}(h))\right) = \EE{h\leftarrow {\mathcal H}_n}{\mathbf{\Delta}\left(\mathsf{A}(1^n,h),\operatorname{\sf Col}(h) \right)} \ge \frac{1}{p(n)} \end{align*} for large enough $n\in {\mathbb{N}}$, where $h \leftarrow {\mathcal H}_n$. The generator $G\colon \{0,1\}^c\times \{0,1\}^n \to \{0,1\}^m \times \{0,1\}^n$ is defined by \begin{align*} G(h, x) = (h(x), \; x). \end{align*} The public parameter length is $c$ (this is the description size of $h$), the generator consists of two blocks, and the maximal block length is $\max\{n,m\}$. Since the random coins of $G$ define $x$ and $x$ is completely revealed, the real Shannon entropy of $G$ is $n$. That is, \begin{align*} \EE{y\leftarrow G(U_c, U_n)}{\operatorname{RealH}_G(y)} = n.
\end{align*}
Our goal in the remainder of this section is to show a non-trivial upper bound on the accessible entropy of $G$. We prove the following lemma.
\begin{lemma}\label{lemma:accessible} There exists a polynomial $q(\cdot)$ such that for every $G$-consistent online generator ${\widetilde{\Gc}}$, it holds that \begin{align*} \EE{t\leftarrow T_{{\widetilde{\Gc}}}(1^n)}{\operatorname{AccH}_{{\widetilde{\Gc}}}(t)} \le n-\frac{1}{q(n)} \end{align*} for all large enough $n\in {\mathbb{N}}$. \end{lemma}
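Before turning to the formal proof, the following minimal sketch (in Python; \texttt{Gtilde} is a stand-in for an arbitrary $G$-consistent online generator, \texttt{v} is its per-block randomness length, and all names are ours) illustrates the rewinding collision finder at the heart of the argument:

\begin{verbatim}
import os

def rewinding_collision_finder(h, Gtilde, v):
    # Gtilde(h, r) returns the first block (a hash value);
    # Gtilde(h, r, ri) returns the second block (a preimage),
    # computed from the same first-block coins r.
    r = os.urandom(v)                 # coins for the first block
    y = Gtilde(h, r)                  # commit to a hash value y
    r1, r2 = os.urandom(v), os.urandom(v)
    x1 = Gtilde(h, r, r1)             # rewind: two independent
    x2 = Gtilde(h, r, r2)             # second blocks for the same y
    assert h(x1) == y == h(x2)        # holds by G-consistency
    return x1, x2
\end{verbatim}

By $G$-consistency the output is always a collision; the content of the lemma is that if the accessible entropy of ${\widetilde{\Gc}}$ were too close to $n$, this collision would be distributed close to $\operatorname{\sf Col}(h)$, violating the security of the dCRH\xspace.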
\begin{proof}
Fix a $G$-consistent online generator ${\widetilde{\Gc}}$. Let us denote by $Y$ a random variable that corresponds to the first part of $G$'s output (i.e., the first $m$ bits) and by $X$ the second part (i.e., the last $n$ bits). Denote by $R$ the randomness used by the adversary to sample $Y$, and by $Z$ the random variable that corresponds to the description of the hash function $h$. Fix $q(n) \triangleq 4\cdot p(n)^2$. Assume towards contradiction that for infinitely many $n$'s it holds that \begin{align*} \EE{t\leftarrow T_{{\widetilde{\Gc}}}(1^n)}{\operatorname{AccH}_{{\widetilde{\Gc}}}(t)} > n-\frac{1}{q(n)}. \end{align*} By Lemma~\ref{lemma:acc_entr}, this means that \begin{align}\label{eq:contradiction} \mathsf{H}(Y\mid Z) + \mathsf{H}(X \mid Y, Z, R) > n-\frac{1}{q(n)}. \end{align} We show how to construct an adversary $\mathsf{A}$ that can break the security of the dCRH. The algorithm $\mathsf{A}$, given a hash function $h\leftarrow {\mathcal H}$, does the following: \begin{enumerate} \item Sample $r$ and let $y = {\widetilde{\Gc}}(h, r)_1$. \item Sample $r_1,r_2$ and output $x_1={\widetilde{\Gc}}(h, r,r_1)_2$ and $x_2 = {\widetilde{\Gc}}(h, r,r_2)_2$. \end{enumerate} In other words, $\mathsf{A}$ tries to create a collision by running ${\widetilde{\Gc}}$ to get the first block, $y$, and then running it twice (by rewinding) to get two inputs $x_1,x_2$ that are mapped to $y$. Indeed, $\mathsf{A}$ runs in polynomial time, and if ${\widetilde{\Gc}}$ is $G$-consistent, then $x_1$ and $x_2$ collide relative to $h$. Denote by $Y^\mathsf{A}$, $X_1^\mathsf{A}$, and $X_2^\mathsf{A}$ the random variables that correspond to the output of the emulated ${\widetilde{\Gc}}$. Furthermore, denote by $(X_1^{\operatorname{\sf Col}},X_2^{\operatorname{\sf Col}})$ a random collision that $\operatorname{\sf Col}(h)$ samples. To finish the proof, it remains to show that \begin{align*} \EE{h\leftarrow {\mathcal H}_n}{\mathbf{\Delta}((X_1^\mathsf{A},X_2^\mathsf{A}), (X_1^{\operatorname{\sf Col}},X_2^{\operatorname{\sf Col}}))} < \frac{1}{p(n)}, \end{align*} which is a contradiction. By Pinsker's inequality (\Cref{prop:pinsker}) and the chain rule from \Cref{prop:kl_chain_rule}, it holds that \begin{align*} \mathbf{\Delta}&\left(\left(X_1^\mathsf{A},X_2^\mathsf{A}\right), \left(X_1^{\operatorname{\sf Col}},X_2^{\operatorname{\sf Col}}\right)\right) \le \sqrt{\frac{\ln(2)}{2} \cdot \mathbf{D}_{\mathsf{KL}}(X_1^\mathsf{A},X_2^\mathsf{A} \| X_1^{\operatorname{\sf Col}},X_2^{\operatorname{\sf Col}})} \\ & \le \sqrt{\mathbf{D}_{\mathsf{KL}}\left(X_1^\mathsf{A} \|X_1^{\operatorname{\sf Col}}\right) + \EE{x_1\leftarrow X_1^\mathsf{A}}{\mathbf{D}_{\mathsf{KL}}(X_2^\mathsf{A}|_{X_1^\mathsf{A}=x_1} \| X_2^{\operatorname{\sf Col}}|_{X_1^{\operatorname{\sf Col}}=x_1})}} \\ & \leq \sqrt{\mathbf{D}_{\mathsf{KL}}\left(X_1^\mathsf{A} \|X_1^{\operatorname{\sf Col}}\right)} + \sqrt{\EE{x_1\leftarrow X_1^\mathsf{A}}{\mathbf{D}_{\mathsf{KL}}(X_2^\mathsf{A}|_{X_1^\mathsf{A}=x_1} \| X_2^{\operatorname{\sf Col}}|_{X_1^{\operatorname{\sf Col}}=x_1})}}.
\end{align*}
Hence, by Jensen's inequality (\Cref{prop:jensen}), it holds that \begin{align*} \EE{h\leftarrow {\mathcal H}_n}{\mathbf{\Delta}((X_1^\mathsf{A},X_2^\mathsf{A}), (X_1^{\operatorname{\sf Col}},X_2^{\operatorname{\sf Col}}))} \leq & \sqrt{\EE{h\leftarrow {\mathcal H}_n}{\mathbf{D}_{\mathsf{KL}}(X_1^\mathsf{A} \|X_1^{\operatorname{\sf Col}})}} + \\ & \sqrt{\EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ x_1\leftarrow X_1^\mathsf{A} \end{subarray}}{\mathbf{D}_{\mathsf{KL}}(X_2^\mathsf{A}|_{X_1^\mathsf{A}=x_1} \| X_2^{\operatorname{\sf Col}}|_{X_1^{\operatorname{\sf Col}}=x_1})}}. \end{align*} We complete the proof using the following claims; together, they bound the right-hand side above strictly below $\frac{1}{2\cdot p(n)} + \frac{1}{2\cdot p(n)} = \frac{1}{p(n)}$, as required. \begin{myclaim} \label{claim:1} It holds that $$\EE{h\leftarrow {\mathcal H}_n}{{\mathbf{D}_{\mathsf{KL}}(X_1^\mathsf{A} \|X_1^{\operatorname{\sf Col}})}} < \frac{1}{4\cdot p(n)^2}.$$ \end{myclaim} \begin{myclaim} \label{claim:2} It holds that $$\EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ x_1\leftarrow X_1^\mathsf{A} \end{subarray}}{\mathbf{D}_{\mathsf{KL}}(X_2^\mathsf{A}|_{X_1^\mathsf{A}=x_1} \| X_2^{\operatorname{\sf Col}}|_{X_1^{\operatorname{\sf Col}}=x_1})} < \frac{1}{4\cdot p(n)^2}.$$ \end{myclaim} \begin{proof}[Proof of Claim~\ref{claim:1}] Recall that $X_1^{\operatorname{\sf Col}}$ is the \emph{uniform} distribution over the inputs of the hash function, and thus \begin{align*} \mathbf{D}_{\mathsf{KL}}(X_1^\mathsf{A} \|X_1^{\operatorname{\sf Col}}) = \sum_x \pr{X_1^\mathsf{A} = x}\cdot \log \frac{\pr{X_1^\mathsf{A} = x}}{2^{-n}} = n - \mathsf{H}(X_1^\mathsf{A}). \end{align*} To sample $X_1^\mathsf{A}$, the algorithm $\mathsf{A}$ first runs ${\widetilde{\Gc}}(h,r)_1$ to get $y$ and then runs ${\widetilde{\Gc}}(h,r,r_1)_2$ to get $x_1$. Thus, by Equation~\eqref{eq:contradiction}, it holds that \begin{align*} \EE{h\leftarrow {\mathcal H}_n}{\mathsf{H}(X_1^\mathsf{A})} = \EE{h\leftarrow {\mathcal H}_n}{\mathsf{H}(X)} = \mathsf{H}(X,Y\mid Z) = \mathsf{H}(Y\mid Z) + \mathsf{H}(X\mid Y,Z) \geq \mathsf{H}(Y\mid Z) + \mathsf{H}(X\mid Y,Z,R) > n-\frac{1}{q(n)}, \end{align*} where the second equality follows since ${\widetilde{\Gc}}$ is $G$-consistent and thus $X$ fully determines $Y$, and the first inequality holds since conditioning does not increase entropy. This implies that \begin{align*} \EE{h\leftarrow {\mathcal H}_n}{{\mathbf{D}_{\mathsf{KL}}(X_1^\mathsf{A} \|X_1^{\operatorname{\sf Col}})}} < \frac{1}{q(n)} = \frac{1}{4\cdot p(n)^2}, \end{align*} as required. \end{proof} \begin{proof}[Proof of Claim~\ref{claim:2}] For $x_1 \in \mathsf{supp}(X_1^\mathsf{A})$, it holds that \begin{align*} \mathbf{D}_{\mathsf{KL}}(X_2^\mathsf{A}|_{X_1^\mathsf{A}=x_1} \| X_2^{\operatorname{\sf Col}}|_{X_1^{\operatorname{\sf Col}}=x_1}) & = \sum_x \pr{X_2^\mathsf{A}=x|_{X_1^\mathsf{A}=x_1}}\cdot \log \frac{\pr{X_2^\mathsf{A}=x|_{X_1^\mathsf{A}=x_1}}}{|h^{-1}(h(x_1))|^{-1}} \\& = \log |h^{-1}(h(x_1))| - \mathsf{H}(X_2^\mathsf{A} |_{X_1^\mathsf{A}=x_1}) . \end{align*} Hence, \begin{align*} \EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ x_1\leftarrow X_1^\mathsf{A} \end{subarray}}{\mathbf{D}_{\mathsf{KL}}(X_2^\mathsf{A}|_{X_1^\mathsf{A}=x_1} \| X_2^{\operatorname{\sf Col}}|_{X_1^{\operatorname{\sf Col}}=x_1})} & = \EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ x_1\leftarrow X_1^\mathsf{A} \end{subarray}}{\log |h^{-1}(h(x_1))| - \mathsf{H}(X_2^\mathsf{A} |_{X_1^\mathsf{A}=x_1})} . \end{align*} Notice that the distribution of $X_2^\mathsf{A}$ only depends on $y=h(x_1)$; that is, $X_2^\mathsf{A} |_{X_1^\mathsf{A}=x_1}$ is distributed exactly as $X_2^\mathsf{A} |_{X_1^\mathsf{A}=x_1'}$ for every $x_1$ and $x_1'$ such that $y=h(x_1)=h(x_1')$.
Thus, $X_2^\mathsf{A}|_{X_1^\mathsf{A}=x_1}$ is distributed exactly as $X|_{Y=y}$, and $h(X_1^\mathsf{A})$ is distributed as $Y$. Namely, \begin{align*} \EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ x_1\leftarrow X_1^\mathsf{A} \end{subarray}}{\mathbf{D}_{\mathsf{KL}}(X_2^\mathsf{A}|_{X_1^\mathsf{A}=x_1} \| X_2^{\operatorname{\sf Col}}|_{X_1^{\operatorname{\sf Col}}=x_1})} & = \EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ x_1\leftarrow X_1^\mathsf{A} \end{subarray}}{\log |h^{-1}(y)|} - \EE{h\leftarrow {\mathcal H}_n}{\mathsf{H}(X \mid Y,R)}\\& = \EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ x_1\leftarrow X_1^\mathsf{A} \end{subarray}}{\log |h^{-1}(y)|} - \mathsf{H}(X \mid Y, Z,R) \\ & < \EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ x_1\leftarrow X_1^\mathsf{A} \end{subarray}}{\log |h^{-1}(y)|}+\mathsf{H}(Y\mid Z) -n +\frac{1}{q(n)} \\ & \le \frac{1}{q(n)}, \end{align*} where the first inequality follows by Equation~\eqref{eq:contradiction} and the second follows since \begin{align*} \EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ y\leftarrow Y \end{subarray}}{\log |h^{-1}(y)|}+\mathsf{H}(Y\mid Z) & = \EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ y\leftarrow Y \end{subarray}}{\log |h^{-1}(y)| + \mathsf{H}_Y(y)} \\ & = \EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ y\leftarrow Y \end{subarray}}{\log \frac{|h^{-1}(y)|}{\pr{Y=y}}} \\ & \le \log \EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ y\leftarrow Y \end{subarray}}{\frac{|h^{-1}(y)|}{\pr{Y=y}}} \le n, \end{align*} where the first inequality is by Jensen's inequality (\Cref{prop:jensen}), and the second holds since, for every fixed $h$, $\EE{y\leftarrow Y}{|h^{-1}(y)|/\pr{Y=y}} = \sum_{y\in\operatorname{Supp}(Y)} |h^{-1}(y)| \leq 2^n$. Thus, overall \begin{align*} \EE{\begin{subarray}{c} h\leftarrow {\mathcal H}_n\\ x_1\leftarrow X_1^\mathsf{A} \end{subarray}}{\mathbf{D}_{\mathsf{KL}}(X_2^\mathsf{A}|_{X_1^\mathsf{A}=x_1} \| X_2^{\operatorname{\sf Col}}|_{X_1^{\operatorname{\sf Col}}=x_1})} < {\frac{1}{q(n)}} = \frac{1}{4\cdot p(n)^2}, \end{align*} as required. \end{proof} \end{proof}
\section{Introduction}\label{sec:Introduction}
Distributional collision resistant hashing (dCRH\xspace), introduced by Dubrov and Ishai~\cite{DubrovI06}, is a relaxation of the notion of collision resistance. In (plain) collision resistance, it is guaranteed that no efficient adversary can find \emph{any} collision given a random hash function in the family. In dCRH\xspace, it is only guaranteed that no efficient adversary can sample \emph{a random} collision given a random hash function in the family. More precisely, given a random hash function $h$ from the family, it is computationally hard to sample a pair $(x,y)$ such that $x$ is uniform and $y$ is uniform in the preimage set $h^{-1}(h(x))=\{z \colon h(x) = h(z)\}$. This hardness is captured by requiring that the adversary cannot get statistically close to this distribution over collisions.\footnote{There are some subtleties in defining this precisely. The definition we use differs from previous ones~\cite{DubrovI06,HarnikN10,KomargodskiY18}. We elaborate on the exact definition and the difference in the technical overview below and in \Cref{sec:dcrh}.} \paragraph{The power of dCRH\xspace.} Intuitively, the notion of dCRH\xspace seems quite weak. The adversary may even be able to sample collisions from the set of {\em all} collisions, but only from a skewed distribution, far from the random one.
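To fix ideas, here is a brute-force sketch of the random-collision sampler described above (in Python; it runs in exponential time, so it is a specification of the target distribution rather than an algorithm, and the names are ours):

\begin{verbatim}
import random

def Col(h, n):
    # Sample x1 uniformly from {0,1}^n, then x2 uniformly from the
    # preimage set of h(x1), by brute force over the whole domain.
    x1 = random.randrange(2 ** n)
    y = h(x1)
    preimages = [x for x in range(2 ** n) if h(x) == y]
    x2 = random.choice(preimages)
    return x1, x2  # possibly x1 == x2
\end{verbatim}

For instance, an adversary that always returns the lexicographically smallest preimage as $x_2$ still outputs valid collisions, yet its output distribution may be far from that of \texttt{Col}; this is exactly the slack that dCRH\xspace permits.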
Komargodski and Yogev \cite{KomargodskiY18} show that dCRH\xspace can be constructed assuming average-case hardness in the complexity class {\em statistical zero-knowledge} ({\class{SZK}}), whereas a similar implication is not known for multi-collision resistance (let alone plain collision resistance).\footnote{Multi-collision resistance is another relaxation of collision resistance, where it is only hard to find multiple elements that all map to the same image. Multi-collision resistance does not imply dCRH\xspace in a black-box way~\cite{KomargodskiNY18}, but Komargodski and Yogev~\cite{KomargodskiY18} give a non-black-box construction.} This can be seen as evidence suggesting that dCRH\xspace may be weaker than collision resistance, or even multi-collision resistance \cite{KomargodskiNY17,BermanDRV18,BitanskyKP17,KomargodskiNY18}. Furthermore, dCRH\xspace has not led to the same cryptographic applications as collision resistance, or even multi-collision resistance. In fact, dCRH\xspace has no known applications beyond those implied by one-way functions. At the same time, dCRH\xspace is not known to follow from one-way functions, and indeed cannot follow from them via black-box reductions~\cite{Simon98}. It can even be separated from indistinguishability obfuscation (and one-way functions) \cite{AsharovS16}. Overall, we are left with a significant gap in our understanding of the power of dCRH\xspace: \begin{center} \textit{Does the power of dCRH\xspace go beyond one-way functions?} \end{center} \subsection{Our Results} We present the first application of dCRH\xspace that is not known from one-way functions and is provably unachievable from one-way functions in a black-box way. \begin{theorem}\label{thm:dcrh} dCRH\xspace implies a \emph{constant-round} statistically hiding commitment scheme. \end{theorem} Such commitment schemes cannot be constructed from one-way functions (or even permutations) in a black-box way due to a result of Haitner, Hoch, Reingold and Segev~\cite{HaitnerHRS15}. They show that the number of rounds in such commitments must grow quasi-linearly in the security parameter. The heart of Theorem \ref{thm:dcrh} is a construction of an inaccessible-entropy generator~\cite{HaitnerRVW09,HaitnerReVaWe18} from dCRH\xspace. An implication of the above result is that constant-round statistically hiding commitments can be constructed from average-case hardness in ${\class{SZK}}$. Indeed, it is known that such hardness implies the existence of a dCRH\xspace~\cite{KomargodskiY18}. \begin{corollary}\label{cor:szk} A hard-on-average problem in ${\class{SZK}}$ implies a \emph{constant-round} statistically hiding commitment scheme. \end{corollary} The statement of Corollary~\ref{cor:szk} has been treated as known in several previous works (cf.\ \cite{HaitnerRVW09,DvirGRV11,BitanskyDV17}), but a proof of this statement has so far not been published or (to the best of our knowledge) been publicly available. We also provide an alternative proof of this statement (and in particular, a different commitment scheme) that does not go through a construction of a dCRH\xspace, and is arguably more direct. \paragraph{A limit on the power of dCRH\xspace.} We also show a converse connection between dCRH\xspace and statistically hiding commitments. Specifically, we show that \emph{any} two-message statistically hiding commitment implies a dCRH\xspace function family. \begin{theorem} Any two-message statistically hiding commitment scheme implies dCRH\xspace.
\end{theorem}
This establishes a loose equivalence between dCRH\xspace and statistically hiding commitments. Indeed, the commitments we construct from dCRH\xspace require more than two messages. Interestingly, we can even show that such commitments imply a stronger notion of dCRH\xspace where the adversary's output distribution is not only noticeably far from the random collision distribution, but is $(1-{\sf negl}(n))$-far. \subsection{Related Work on Statistically Hiding Commitments} Commitment schemes, the digital analog of sealed envelopes, are central to cryptography. More precisely, a commitment scheme is a two-stage interactive protocol between a sender $S$ and a receiver $R$. After the commit stage, $S$ is bound to (at most) one value, which stays hidden from $R$, and in the reveal stage $R$ learns this value. The immediate question arising is what it means to be ``bound to'' and to be ``hidden''. Each of these security properties can come in two main flavors: either \emph{computational security}, where a polynomial-time adversary cannot violate the property except with negligible probability, or the stronger notion of \emph{statistical security}, where even an unbounded adversary cannot violate the property except with negligible probability. However, it is known that there do \emph{not} exist commitment schemes that are simultaneously statistically hiding and statistically binding. There exists a one-message (i.e., non-interactive) statistically binding commitment scheme assuming one-way permutations (Blum~\cite{Blum81}). From one-way functions, such commitments can be achieved by a two-message protocol (Naor~\cite{Naor91} and H{\aa}stad, Impagliazzo, Levin and Luby~\cite{HILL99}). Statistically hiding commitment schemes have proven to be somewhat more difficult to construct. Naor, Ostrovsky, Venkatesan and Yung~\cite{NaorOVY92} gave a statistically hiding commitment protocol based on one-way permutations, whose linear number of rounds matched the lower bound of \cite{HaitnerHRS15} mentioned above. After many years, this result was improved by Haitner, Nguyen, Ong, Reingold and Vadhan~\cite{HaitnerNORV09}, constructing such commitments based on the minimal hardness assumption that one-way functions exist. The reduction of \cite{HaitnerNORV09} was later simplified and made more efficient by Haitner, Reingold, Vadhan and Wee~\cite{HaitnerRVW09,HaitnerReVaWe18} to match, in some settings, the round complexity lower bound of \cite{HaitnerHRS15}. Constant-round statistically hiding commitment protocols are known to exist based on families of collision resistant hash functions~\cite{NaorY89,DamgardPP93,HaleviM96}. Recently, Berman, Degwekar, Rothblum and Vasudevan~\cite{BermanDRV18} and Komargodski, Naor and Yogev~\cite{KomargodskiNY18} constructed constant-round statistically hiding commitment protocols assuming the existence of \emph{multi}-collision resistant hash functions. Constant-round statistically hiding commitments are a basic building block in many fundamental applications. Two prominent examples are constructions of {\em constant-round} zero-knowledge proofs for all {{\class{NP}}} (Goldreich and Kahan~\cite{GoldreichKahan}) and {\em constant-round} public-coin statistical zero-knowledge arguments for {{\class{NP}}} (Barak~\cite{Barak01}, Pass and Rosen~\cite{PassR08}). Statistically hiding commitments are also known to be tightly related to the hardness of the class of problems that possess a statistical zero-knowledge protocol, i.e., the class {\class{SZK}}.
Ong and Vadhan~\cite{OngV08} showed that a language in ${\class{NP}}$ has a zero-knowledge protocol if and only if the language has an ``instance-dependent'' commitment scheme. An instance-dependent commitment scheme for a given language is a commitment scheme that can depend on an instance of the language, and where the hiding and binding properties are required to hold only on the YES and NO instances of the language, respectively. \input{Open} \section{Preliminaries}\label{sec:prelims} Unless stated otherwise, the logarithms in this paper are base 2. For a distribution ${\mathcal D}$ we denote by $x \leftarrow {\mathcal D}$ an element sampled according to ${\mathcal D}$; for a finite set $S$, we denote by $x \leftarrow S$ a uniformly random element of $S$. For an integer $n \in \mathbb{N}$ we denote by $[n]$ the set $\{1,\ldots, n\}$. We denote by $U_n$ the uniform distribution over $n$-bit strings. We denote by $\circ$ the string concatenation operation. A function ${\sf negl}\colon{\mathbb{N}}\to{\mathbb{R}}^+$ is \emph{negligible} if for every constant $c > 0$, there exists an integer $N_c$ such that ${\sf negl}(n) < n^{-c}$ for all $n > N_c$. \subsection{Cryptographic Primitives}\label{sec:prelim:uowhf} A function $f$, with input length $m_1(n)$ and output length $m_2(n)$, specifies for every $n\in {\mathbb{N}}$ a function $f_n\colon\{0,1\}^{m_1(n)}\to\{0,1\}^{m_2(n)}$. We only consider functions with polynomial input lengths (in $n$) and occasionally abuse notation and write $f(x)$ rather than $f_n(x)$ for simplicity. The function $f$ is computable in polynomial time (efficiently computable) if there exists a probabilistic machine that for any $x \in \{0,1\}^{m_1(n)}$ outputs $f_n(x)$ and runs in time polynomial in $n$. A function family ensemble is an infinite set of function families, whose elements (families) are indexed by the set of integers. Let $\mathcal{F} = \{\mathcal{F}_n\colon \mathcal D_n\to\mathcal R_n\}_{n\in {\mathbb{N}}}$ stand for an ensemble of function families, where each $f\in \mathcal{F}_n$ has domain $\mathcal D_n$ and range $\mathcal R_n$. An efficient function family ensemble is one that has efficient sampling and evaluation algorithms. \begin{definition}[Efficient function family ensemble] A function family ensemble $\mathcal{F} = \{\mathcal{F}_n\colon \mathcal D_n\to\mathcal R_n\}_{n\in {\mathbb{N}}}$ is efficient if: \begin{itemize} \item $\mathcal{F}$ is samplable in polynomial time: there exists a probabilistic polynomial-time machine that given $1^n$, outputs (the description of) a uniform element in $\mathcal{F}_n$. \item There exists a deterministic algorithm that given $x \in \mathcal D_n$ and (a description of) $f \in \mathcal F_n$, runs in time $\mathsf{poly}(n, |x|)$ and outputs $f(x)$. \end{itemize} \end{definition} \subsection{Distance and Entropy Measures} \begin{definition}[Statistical distance] The \emph{statistical distance} between two random variables $X,Y$ over a finite domain $\Omega$ is defined by \begin{align*} \mathbf{\Delta}(X,Y) \triangleq \frac{1}{2}\cdot \sum_{x\in \Omega}^{} \left|{\pr{X=x} - \pr{Y=x}}\right|. \end{align*} We say that $X$ and $Y$ are $\delta$-close (resp.\ $\delta$-far) if $\mathbf{\Delta}(X,Y)\leq \delta$ (resp.\ $\mathbf{\Delta}(X,Y)\geq \delta$). \end{definition} \paragraph{Entropy.} Let $X$ be a random variable. For any $x\in \mathsf{supp}(X)$, the sample-entropy of $x$ with respect to $X$ is \begin{align*} \mathsf{H}_X(x) = \log \left(\frac{1}{\pr{X=x}}\right). \end{align*} The Shannon entropy of $X$ is defined as \begin{align*} \mathsf{H}(X) = \EE{x\leftarrow X}{\mathsf{H}_X(x)}.
\end{align*}
\paragraph{Conditional entropy.} Let $(X,Y)$ be jointly distributed random variables. \begin{itemize} \item For any $(x,y)\in \mathsf{supp}(X,Y)$, the conditional sample-entropy is defined to be \begin{align*} \mathsf{H}_{X\mid Y}(x \mid y) = \log \left(\frac{1}{\pr{X=x \mid Y=y}}\right). \end{align*} \item The conditional Shannon entropy is \begin{align*} \mathsf{H}(X\mid Y) = \EE{(x,y)\leftarrow (X,Y)}{\mathsf{H}_{X\mid Y}(x \mid y)} = \EE{y\leftarrow Y}{\mathsf{H}(X|_{Y=y})} = \mathsf{H}(X,Y) - \mathsf{H}(Y). \end{align*} \end{itemize} \paragraph{Relative entropy.} We also use basic facts about relative entropy (\aka\ Kullback-Leibler divergence). \begin{definition}[Relative entropy] Let $X$ and $Y$ be two random variables over a finite domain $\Omega$. The \emph{relative entropy} is \begin{align*} \mathbf{D}_{\mathsf{KL}}(X \| Y) = \sum_{x\in\Omega}\pr{X=x}\cdot \log\left( \frac{\pr{X=x}}{\pr{Y=x}}\right). \end{align*} \end{definition} \begin{proposition}[Chain rule]\label{prop:kl_chain_rule} Let $(X_1,X_2)$ and $(Y_1,Y_2)$ be random variables. It holds that \begin{align*} \mathbf{D}_{\mathsf{KL}}((X_1,X_2)\|(Y_1,Y_2)) = \mathbf{D}_{\mathsf{KL}}(X_1 \| Y_1) + \EE{x\leftarrow X_1}{\mathbf{D}_{\mathsf{KL}}(X_2|_{X_1=x} \| Y_2|_{Y_1=x})}. \end{align*} \end{proposition} A well-known relation between statistical distance and relative entropy is given by Pinsker's inequality. \begin{proposition}[Pinsker's inequality]\label{prop:pinsker} For any two random variables $X$ and $Y$ over a finite domain it holds that \begin{align*} \mathbf{\Delta}(X,Y) \le \sqrt{\frac{\ln{2}}{2}\cdot \mathbf{D}_{\mathsf{KL}}(X \| Y)}. \end{align*} \end{proposition} Another useful inequality is Jensen's inequality. \begin{proposition}[Jensen's inequality]\label{prop:jensen} If $X$ is a random variable and $f$ is concave, then \begin{align*} \E{f(X)} \leq f(\E{X}). \end{align*} \end{proposition} \subsection{Commitment Schemes}\label{sec:prelim:commit} A commitment scheme is a two-stage interactive protocol between a sender $\mathcal{S}$ and a receiver $\mathcal{R}$. The goal of such a scheme is that after the first stage of the protocol, called the commit protocol, the sender is bound to at most one value. In the second stage, called the opening protocol, the sender opens its committed value to the receiver. Here, we are interested in statistically hiding and computationally binding commitments. Also, for simplicity, we restrict our attention to protocols that can be used to commit to bits (i.e., strings of length 1). In more detail, a commitment scheme is defined via a triple of probabilistic polynomial-time algorithms $(\mathcal{S}, \mathcal{R}, \mathcal{V})$ such that: \begin{itemize} \item The commit protocol: $\mathcal{S}$ receives as input the security parameter $1^n$ and a bit $b\in \{0,1\}$. $\mathcal{R}$ receives as input the security parameter $1^n$. At the end of this stage, $\mathcal{S}$ outputs $\mathsf{decom}$ (the decommitment) and $\mathcal{R}$ outputs $\mathsf{com}$ (the commitment). \item The verification: $\mathcal{V}$ receives as input the security parameter $1^n$, a commitment $\mathsf{com}$, a decommitment $\mathsf{decom}$, and outputs either a bit $b$ or $\bot$. \end{itemize} A commitment scheme is {\em public coin} if all messages sent by the receiver are independent random coins.
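For concreteness, the following is a minimal sketch of this syntax (in Python; the non-interactive, hash-based instantiation and all names are ours, and it is at best computationally hiding, so it serves only to fix the interface):

\begin{verbatim}
import hashlib, os

def commit(b, n=128):
    # Commit stage, collapsed to a single message for simplicity:
    # the sender's coins serve as the decommitment.
    r = os.urandom(n // 8)
    com = hashlib.sha256(bytes([b]) + r).hexdigest()
    decom = (b, r)
    return decom, com

def V(decom, com):
    # Verification: recompute and compare; output the bit or reject.
    b, r = decom
    ok = hashlib.sha256(bytes([b]) + r).hexdigest() == com
    return b if ok else None
\end{verbatim}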
Denote by $(\mathsf{decom},\mathsf{com})\leftarrow \langle \mathcal{S} (1^n, b), \mathcal{R} \rangle$ the experiment in which $\mathcal{S}$ and $\mathcal{R}$ interact with the given inputs and uniformly random coins, and eventually $\mathcal{S}$ outputs a decommitment string and $\mathcal{R}$ outputs a commitment. The completeness of the protocol says that for all $n\in {\mathbb{N}}$, every $b\in\{0,1\}$, and every tuple $(\mathsf{decom},\mathsf{com})$ in the support of $\langle \mathcal{S} (1^n, b), \mathcal{R} \rangle$, it holds that $\mathcal{V} (\mathsf{decom},\mathsf{com}) = b$. Unless otherwise stated, $\mathcal{V}$ is the canonical verifier that receives the sender's coins as part of the decommitment and checks their consistency with the transcript. Below we define two security properties one can require from a commitment scheme. The properties we list are \emph{statistical hiding} and \emph{computational binding}. These roughly say that after the commit stage, the sender is \emph{bound} to a specific value but the receiver cannot know this value. \begin{definition}[binding] A commitment scheme $(\mathcal{S}, \mathcal{R}, \mathcal{V})$ is binding if for every probabilistic polynomial-time adversary $\mathcal{S}^*$ there exists a negligible function ${\sf negl}(n)$ such that \begin{align*} \pr{\MyAtop{ \mathcal{V}(\mathsf{decom},\mathsf{com})=0 \text{ and }}{\mathcal{V}(\mathsf{decom}',\mathsf{com})=1 } \;:\; (\mathsf{decom},\mathsf{decom}',\mathsf{com}) \leftarrow \langle \mathcal{S}^* (1^n), \mathcal{R}\rangle}\leq {\sf negl}(n) \end{align*} for all $n\in {\mathbb{N}}$, where the probability is taken over the random coins of both $\mathcal{S}^*$ and $\mathcal{R}$. \end{definition} Given a commitment scheme $(\mathcal{S}, \mathcal{R}, \mathcal{V})$ and an adversary $\mathcal{R}^*$, we denote by $\mathsf{view}_{\langle \mathcal{S}(b), \mathcal{R}^*\rangle}(n)$ the distribution on the view of $\mathcal{R}^*$ when interacting with $\mathcal{S}(1^n, b)$. The view consists of $\mathcal{R}^*$'s random coins and the sequence of messages it received from $\mathcal{S}$. The distribution is taken over the random coins of both $\mathcal{S}$ and $\mathcal{R}^*$. Without loss of generality, whenever $\mathcal{R}^*$ has no computational restrictions, we can assume it is deterministic. \begin{definition}[hiding] A commitment scheme $(\mathcal{S}, \mathcal{R}, \mathcal{V})$ is statistically hiding if there exists a negligible function ${\sf negl}(n)$ such that for every (deterministic) adversary $\mathcal{R}^*$ it holds that \begin{align*} \mathbf{\Delta}\left(\{\mathsf{view}_{\langle \mathcal{S}(0), \mathcal{R}^*\rangle}(n)\}, \{\mathsf{view}_{\langle \mathcal{S}(1), \mathcal{R}^*\rangle}(n)\}\right) \leq {\sf negl}(n) \end{align*} for all $n\in {\mathbb{N}}$. \end{definition} \subsection{Distributional Collision Resistant Hash Functions}\label{sec:dcrh} Roughly speaking, a distributional collision resistant hash function \cite{DubrovI06} guarantees that no efficient adversary can sample a uniformly random collision. We start by defining more precisely what we mean by a random collision throughout the paper, and then move to the actual definition.
\begin{definition}[Ideal collision finder] Let $\operatorname{\sf Col}$ be the random function that given a (description) of a function $h\colon\{0,1\}^{n}\to\{0,1\}^m$ as input, returns a collision $(x_1,x_2)$ \wrt $h$ as follows: it samples a uniformly random element, $x_1 \leftarrow \{0,1\}^{n}$, and then samples a uniformly random element that collides with $x_1$ under $h$, $x_2 \leftarrow \{x \in \{0,1\}^{n} \colon h(x)=h(x_1)\}$. (Note that possibly, $x_1=x_2$.) \end{definition} \begin{definition}[Distributional collision resistant hashing]\label{def:dcrh} Let ${\mathcal H} = \{ {\mathcal H}_n\colon\allowbreak \{0,1\}^{n} \to \{0,1\}^{m(n)} \}_{n\in {\mathbb{N}}}$ be an efficient function family ensemble. We say that ${\mathcal H}$ is a secure \emph{distributional collision resistant hash} (dCRH\xspace) function family if there exists a polynomial $p(\cdot)$ such that for any probabilistic polynomial-time algorithm $\mathsf{A}$, it holds that \begin{align*} \mathbf{\Delta}\left((h,\mathsf{A}(1^n,h)),(h,\operatorname{\sf Col}(h))\right) \ge \frac{1}{p(n)}, \end{align*} for $h\leftarrow {\mathcal H}_n$ and large enough $n\in{\mathbb{N}}$. \end{definition} \paragraph{Comparison with the previous definition.} Our definition deviates from the previous definition of distributional collision resistance considered in~\cite{DubrovI06,HarnikN10,KomargodskiY18}. The definition in the above-mentioned works is equivalent to requiring that for any efficient adversary $\mathsf{A}$, there exists a polynomial $p_{\mathsf{A}}$, such that the collision output by $\mathsf{A}$ is $\frac{1}{p_{\mathsf{A}}(n)}$-far from a random collision on average (over $h$). Our definition switches the order of quantifiers, requiring that there is one such polynomial $p(\cdot)$ for all adversaries $\mathsf{A}$. We note that the previous definition is, in fact, not even known to imply one-way functions. In contrast, the definition presented here strengthens that of {\em distributional one-way functions}, which in turn imply one-way functions~\cite{ImpagliazzoL89}. Additionally, note that both constructions of distributional collision resistance in~\cite{KomargodskiY18} (from multi-collision resistance and from {\class{SZK}}~hardness) satisfy our stronger notion of security (with a similar proof). \paragraph{On compression.} As opposed to classical notions of collision resistance (such as plain collision resistance or multi-collision resistance), it makes sense to require distributional collision resistance even for \emph{non-compressing} functions. So we do not put a restriction on the order between $n$ and $m(n)$. As a matter of fact, by padding the input, arbitrary polynomial compression can be assumed without loss of generality. \section{From SZK-Hardness to Statistically Hiding Commitments}\label{sec:szk} In this section, we give a direct construction of a constant-round statistically hiding commitment from average-case hardness in {\class{SZK}}. This gives an alternative proof of \Cref{cor:szk}. \subsection{Hard on Average Promise Problems} \begin{definition} A promise problem $(\Pi_Y,\Pi_N)$ consists of two disjoint sets of {\em yes instances} $\Pi_Y$ and {\em no instances} $\Pi_N$.
\end{definition}
\begin{definition} A promise problem $(\Pi_Y,\Pi_N)$ is hard on average if there exists a probabilistic polynomial-time sampler $\Pi$ with support $\Pi_Y\cup\Pi_N$, such that for any probabilistic polynomial-time decider $D$, there exists a negligible function ${\sf negl}(n)$, such that $$ \prob{r\gets \{0,1\}^n}{x \in \Pi_{D(x)} \mid x\gets \Pi(r)} \leq \frac{1}{2}+{\sf negl}(n)\enspace. $$ \end{definition} \subsection{Instance-Dependent Commitments} \begin{definition}[\cite{OngV08}] An instance-dependent commitment scheme $\mathcal{IDC}$ for a promise problem $(\Pi_Y,\Pi_N)$ is a commitment scheme where all algorithms get as auxiliary input an instance $x\in \{0,1\}^*$. The induced family of schemes $\set{\mathcal{IDC}_x}_{x\in \{0,1\}^*}$ is \begin{itemize} \item statistically binding when $x\in \Pi_N$, \item statistically hiding when $x\in \Pi_Y$. \end{itemize} \end{definition} \begin{theorem}[\cite{OngV08}] Any promise problem $(\Pi_Y,\Pi_N)\in {\class{SZK}}$ has a constant-round instance-dependent commitment. \end{theorem} \subsection{Witness-Indistinguishable Proofs} \begin{definition} A proof system $\mathcal{WI}$ for an ${\class{NP}}$ relation $R$ is witness indistinguishable if for any $x,w_0,w_1$ such that $(x,w_0),(x,w_1)\in R$, the verifier's view given a proof using $w_0$ is computationally indistinguishable from its view given a proof using $w_1$. \end{definition} Constant-round $\mathcal{WI}$ proof systems are known from any constant-round statistically-binding commitment \cite{GMW87}. Statistically-binding commitments can be constructed from one-way functions \cite{Naor91}, and thus can also be obtained from average-case hardness in ${\class{SZK}}$ \cite{OstrovskyW93}. \begin{theorem}[\cite{GMW87,Naor91,OstrovskyW93}] Assuming hard-on-average problems in ${\class{SZK}}$, there exist constant-round witness-indistinguishable proof systems. \end{theorem} \subsection{The Commitment Protocol} Here, we give the details of our protocol. Our protocol uses the following ingredients and notation: \begin{itemize} \item A $\mathcal{WI}$ proof for ${\class{NP}}$. \item A hard-on-average ${\class{SZK}}$ problem $(\Pi_Y,\Pi_N)$ with sampler $\Pi$. \item An instance-dependent commitment scheme $\mathcal{IDC}$ for $\Pi$. \end{itemize} We describe the commitment scheme in Figure \ref{fig:com_szk}. \protocol {Protocol \ref{fig:com_szk}} {A constant-round statistically hiding commitment from ${\class{SZK}}$ hardness.} {fig:com_szk} { Sender input: a bit $m \in \{0,1\}$.\\ Common input: security parameter $1^n$. \paragraph{Coin tossing into the well} \begin{itemize} \item $\mathcal{R}$ samples $2n$ independent random strings $\rho_{i,b}\gets \{0,1\}^n$, for $i\in[n],b\in \{0,1\}$. \item The parties then execute (in parallel) $2n$ statistically-binding commitment protocols $\mathcal{SBC}$ in which $\mathcal{R}$ commits to each of the strings $\rho_{i,b}$. We denote the transcript of each such commitment by $C_{i,b}$. \item $\mathcal{S}$ samples $2n$ independent random strings $\sigma_{i,b}\gets \{0,1\}^n$, and sends them to $\mathcal{R}$. \item $\mathcal{R}$ sets $r_{i,b} = \rho_{i,b}\oplus\sigma_{i,b}$. \end{itemize} \paragraph{Generating hard instances} \begin{itemize} \item $\mathcal{R}$ generates $2n$ instances $x_{i,b} \gets \Pi(r_{i,b})$, using the strings $r_{i,b}$ as randomness, and sends the instances to $\mathcal{S}$.
\item The parties then execute a $\mathcal{WI}$ protocol in which $\mathcal{R}$ proves to $\mathcal{S}$ that there exists a $b\in \{0,1\}$ such that for all $i\in [n]$, $x_{i,b}$ was generated consistently. That is, there exist strings $\set{\rho_{i,b}}_{i\in[n]}$ that are consistent with the receiver's commitments $\set{C_{i,b}}_{i\in[n]}$, and $x_{i,b}=\Pi(\rho_{i,b}\oplus \sigma_{i,b})$. As the witness, $\mathcal{R}$ uses $b=0$ and the strings $\set{\rho_{i,0}}_{i\in[n]}$ sampled earlier in the protocol. \end{itemize} \paragraph{Instance-binding commitment} \begin{itemize} \item The sender samples $2n$ random bits $m_{i,b}$ subject to $m=\bigoplus_{i,b} m_{i,b}$. \item The parties then execute (in parallel) $2n$ instance-dependent commitment protocols $\mathcal{IDC}_{x_{i,b}}$ in which $\mathcal{S}$ commits to each bit $m_{i,b}$ using the instance $x_{i,b}$. \end{itemize} } \subsection{Analysis} \begin{proposition} Protocol \ref{fig:com_szk} is computationally binding. \end{proposition} \begin{proof} Let $\mathcal{S}^*$ be any probabilistic polynomial-time sender that breaks binding in Protocol~\ref{fig:com_szk} with probability~$\varepsilon$. We use $\mathcal{S}^*$ to construct a probabilistic polynomial-time decider $D$ for the ${\class{SZK}}$ problem $\Pi$ with advantage $\varepsilon/4n -{\sf negl}(n)$. Given an instance $x \gets \Pi$, the decider $D$ proceeds as follows: \begin{itemize} \item It samples at random $i^*\in [n]$ and $b^*\in \{0,1\}$. \item It executes the protocol $(\mathcal{S}^*,\mathcal{R})$ with the following exceptions: \begin{itemize} \item The instance $x_{i^*,b^*}$, generated by $\mathcal{R}$, is replaced with the instance $x$, given to $D$ as input. \item In the $\mathcal{WI}$ protocol, as the witness we use $1 \oplus b^*$ and the strings $\set{\rho_{i,1\oplus b^*}}_{i\in[n]}$ (instead of $0$ and the strings $\set{\rho_{i,0}}_{i\in[n]})$. \end{itemize} \item Then, at the opening phase, if $\mathcal{S}^*$ equivocally opens the $(i^*,b^*)$-th instance-dependent commitment, $D$ declares that $x\in \Pi_Y$. Otherwise, it declares that $x\in \Pi_\beta$ for a random $\beta\in\set{Y,N}$. \end{itemize} \paragraph{Analyzing $D$'s advantage.} Denote by $E$ the event that in the above experiment $\mathcal{S}^*$ equivocally opens the $(i^*,b^*)$-th instance-dependent commitment. We first observe that the advantage of $D$ in deciding $\Pi$ is at least as large as the probability that $E$ occurs. \begin{myclaim}\label{clm:deci_prob} $\pr{ x \in \Pi_{D(x)} } \geq \frac{1+\pr{E} }{2} - {\sf negl}(n)$. \end{myclaim} \begin{proof}By the definition of $D$, \begin{align*} &\pr{ x \in \Pi_{D(x)} \mid E } = \pr{ x \in \Pi_{Y} \mid E } = 1 - \pr{ x \in \Pi_{N} \mid E } \geq 1 - \frac{\pr{E\mid x \in \Pi_{N}}}{\pr{E}} \enspace,\\ &\pr{ x \in \Pi_{D(x)} \mid \overline{E} } = \frac{1}{2}\enspace. \end{align*} Furthermore, if $x\in \Pi_N$ (namely, it is a no instance), then $\mathcal{IDC}_x$ is binding, and thus \begin{align*} &\pr{E\mid x \in \Pi_{N}}= {\sf negl}(n)\enspace. \end{align*} Claim \ref{clm:deci_prob} now follows by the law of total probability. \end{proof} From hereon, we focus on showing that $E$ occurs with high probability. \begin{myclaim}\label{clm:equi_prob} $\pr{E}\geq \frac{\varepsilon}{2n} -{\sf negl}(n)$. \end{myclaim} \begin{proof} To prove the claim, we consider hybrid experiments $\mathcal{H}_0,\dots,\mathcal{H}_4$, and show that the view of the sender $\mathcal{S}^*$ changes in a computationally indistinguishable manner throughout the hybrids.
We then bound the probability that $E$ occurs in the last hybrid experiment. \begin{description} \item[$\mathcal{H}_0$:] In this experiment, we consider an execution of $D(x)$ as specified above. \item[$\mathcal{H}_1$:] Here $x$ is not sampled ahead of time, but rather first the value $\sigma_{i^*,b^*}$ is obtained from $\mathcal{S}^*$, then a random value $\rho'\gets \{0,1\}^n$ is sampled, and $x$ is sampled using randomness $r_{i^*,b^*}=\sigma_{i^*,b^*}\oplus \rho'$. Since $\rho'$ is sampled independently of the rest of the experiment, the sender's view in $\mathcal{H}_1$ is identically distributed to its view in $\mathcal{H}_0$. \item[$\mathcal{H}_2$:] Here the $(i^*,b^*)$-th commitment to $\rho_{i^*,b^*}$ is replaced with a commitment to $\rho'$. By the (computational) hiding of the commitment $\mathcal{SBC}$, the sender's view in $\mathcal{H}_2$ is computationally indistinguishable from its view in $\mathcal{H}_1$. \item[$\mathcal{H}_3$:] Here, in the $\mathcal{WI}$ protocol, instead of using as the witness $1 \oplus b^*$ and the strings $\set{\rho_{i,1\oplus b^*}}_i$, we use $0$ and the strings $\set{\rho_{i,0}}_i$. By the (computational) witness-indistinguishability of the protocol, the sender's view in $\mathcal{H}_3$ is computationally indistinguishable from its view in $\mathcal{H}_2$. \item[$\mathcal{H}_4$:] In this experiment, we consider a standard execution of the protocol between $\mathcal{S}^*$ and $\mathcal{R}$ (without any exceptions). The sender's view in this hybrid is identical to its view in $\mathcal{H}_3$ (by renaming $\rho'=\rho_{i^*,b^*}$ and $x=x_{i^*,b^*}$). \end{description} It is left to bound from below the probability that $E$ occurs in $\mathcal{H}_4$. That is, when we consider a standard execution of $(\mathcal{S}^*,\mathcal{R})$ and sample $(i^*,b^*)$ independently at random. Indeed, note that the plaintext bit $m$ is uniquely determined by the bits $\set{m_{i,b}}_{i,b}$; hence, whenever $\mathcal{S}^*$ equivocally opens the commitment to two distinct bits, there exists (at least one) $(i,b)$ such that $\mathcal{S}^*$ equivocally opens the $(i,b)$-th instance-dependent commitment. Since in a standard execution $\mathcal{S}^*$ equivocally opens the commitment with probability at least $\varepsilon$, and $(i^*,b^*)$ is sampled independently, $E$ occurs in this experiment with probability at least $\frac{\varepsilon}{2n}$. Claim \ref{clm:equi_prob} follows. \end{proof} This completes the proof that the scheme is binding. \end{proof} \begin{proposition} Protocol \ref{fig:com_szk} is statistically hiding. \end{proposition} \begin{proof} Let $\mathcal{R}^*$ be any (computationally unbounded) receiver. We show that the view of $\mathcal{R}^*$ given a commitment to $m=0$ is statistically indistinguishable from its view given a commitment to $m=1$. For this purpose, consider the view of the receiver $\mathcal{R}^*$ after the coin tossing and instance-generation phase (and before the instance-dependent commitment phase). We shall refer to this as the {\em preamble view}. We say that the preamble view is {\em admissible} if either of the following occurs: \begin{itemize} \item Let $\set{x_{i,b}}_{i,b}$ be the instances sent by $\mathcal{R}^*$. Then there exist $i^*,b^*$ such that $x_{i^*,b^*} \in \Pi_Y$. \item The sender $\mathcal{S}$ rejects the $\mathcal{WI}$ proof that $\set{x_{i,b}}_{i,b}$ were properly generated.
\end{itemize}
To complete the proof, we show that the preamble view is admissible with overwhelming probability, and that conditioned on any admissible preamble view, the view of $\mathcal{R}^*$ given a commitment to $m=0$ is statistically indistinguishable from its view given a commitment to $m=1$. Since the preamble view is completely independent of $m$, the above two conditions are sufficient to establish statistical indistinguishability of the total views. \begin{myclaim}\label{clm:adm_prob} The probability that the preamble view is not admissible is negligible. \end{myclaim} \begin{proof} Let $A$ be the event that the $\mathcal{WI}$ proof is accepted and let $Y$ be the event that for some $(i,b)$, $x_{i,b}$ is a yes instance. To show that the preamble view is not admissible with negligible probability, we would like to prove that $$ \pr{A\wedge \overline{Y}} \leq {\sf negl}(n)\enspace. $$ Let $T$ be the event that the statement proven by $\mathcal{R}^*$ in the $\mathcal{WI}$ protocol is true. Namely, there exists $b\in \{0,1\}$ such that all $\set{x_{i,b}}_{i}$ are generated consistently with the coin-tossing phase (and in particular where the coin-tossing phase consists of valid commitments $\set{C_{i,b}}_{i}$). First, note that by the soundness of the $\mathcal{WI}$ system, the probability that the proof is accepted although the statement is false is negligible: $$\pr{A \wedge \overline{T}}\leq {\sf negl}(n)\enspace.$$ We now show: $$\pr{\overline{Y} \wedge {T}}\leq {\sf negl}(n)\enspace.$$ For this purpose, fix any $\mathcal{SBC}$ commitments $\set{C_{i,b}}_{i,b}$. Let $F=F[\set{C_{i,b}}_{i,b}]$ be the event, over the sender randomness $\set{\sigma_{i,b}}_{i,b}$, that there exists $\beta\in \{0,1\}$ such that $\set{C_{i,\beta}}_i$ are valid commitments to strings $\set{\rho_{i,\beta}}_i$ and for all $i$, $\Pi(\rho_{i,\beta}\oplus \sigma_{i,\beta})=x_{i,\beta}\in \Pi_N$. We show $$ \pr{F} \leq 2^{-\Omega(n)}\enspace. $$ This is sufficient since $$ \pr{\overline{Y}\wedge T} \leq \max_{\begin{subarray}{c} C_{1,0} \dots C_{n,0}\\ C_{1,1} \dots C_{n,1} \end{subarray} }\pr{F} \leq 2^{-\Omega(n)}\enspace. $$ To bound the probability that $F$ occurs, fix any $\beta$ and commitments $\set{C_{i,\beta}}_i$ to strings $\set{\rho_{i,\beta}}_i$. Then the strings $\rho_{i,\beta}\oplus\sigma_{i,\beta}$ are distributed uniformly and independently at random. A sample from $\Pi$ lies in $\Pi_Y$ with probability at least $0.49$ (otherwise, a decider that always outputs $N$ would contradict the average-case hardness of $\Pi$), so all $n$ instances $x_{i,\beta}$ land in $\Pi_N$ with probability at most $0.51^n$; taking a union bound over both $\beta\in \{0,1\}$, the bound follows. This concludes the proof of Claim \ref{clm:adm_prob}. \end{proof} \begin{myclaim}\label{clm:stat_ind} Fix any admissible preamble view $V$. Then, conditioned on $V$ the view of $\mathcal{R}^*$ when given a commitment to $m=0$ is statistically indistinguishable from its view when given a commitment to $m=1$. \end{myclaim} \begin{proof} If $V$ is such that the $\mathcal{WI}$ proof is rejected then $\mathcal{S}$ aborts and the view of $\mathcal{R}^*$ remains independent of $m$. Thus, from hereon, we assume that the instances corresponding to $V$ include an instance $x_{i^*,b^*}\in \Pi_Y$. In particular, the corresponding instance-dependent commitment $\mathcal{IDC}_{x_{i^*,b^*}}$ is statistically hiding. It is left to note that in any execution $(\mathcal{S},\mathcal{R}^*)$, with either $m\in\{0,1\}$, the bits $M_{-i}:=\set{m_{i,b}}_{(i,b)\neq (i^*,b^*)}$ are distributed uniformly and independently at random.
Conditioned on $V$ and $M_{-i}$, only the bit $$m_{i^*,b^*} = m \oplus \bigoplus_{m'\in M_{-i}} m' $$ depends on $m$. By the statistical hiding of $\mathcal{IDC}_{x_{i^*,b^*}}$, a commitment to $0 \oplus \bigoplus_{m'\in M_{-i}} m'$ is statistically indistinguishable from a commitment to $1 \oplus \bigoplus_{m'\in M_{-i}} m'$. This concludes the proof of Claim \ref{clm:stat_ind}. \end{proof} \end{proof}
\section{Technical Overview}\label{sec:Technical}
In this section, we give an overview of our techniques. We start with a more precise statement of the definition of dCRH\xspace and a comparison with previous versions of its definition. A dCRH\xspace is a family of functions ${\mathcal H}_n = \{h\colon \{0,1\}^n\to\{0,1\}^m \}$. (The functions are not necessarily compressing.) The security guarantee is that there exists a universal polynomial $p(\cdot)$ such that for every efficient adversary $\mathsf{A}$ it holds that \begin{align*} \mathbf{\Delta}\left((h,\mathsf{A}(1^n,h)),(h,\operatorname{\sf Col}(h))\right) \ge \frac{1}{p(n)}, \end{align*} where $\mathbf{\Delta}$ denotes statistical distance, $h\leftarrow {\mathcal H}_n$ is chosen uniformly at random, and $\operatorname{\sf Col}$ is a random variable that is sampled in the following way: Given $h$, first sample $x_1\leftarrow \{0,1\}^n$ uniformly at random and then sample $x_2$ uniformly at random from the set of all preimages of $x_1$ relative to $h$ (namely, from the set $\{x \colon h(x)=h(x_1)\}$). Note that $\operatorname{\sf Col}$ may not be efficiently samplable, and intuitively, the hardness of dCRH\xspace says that there is no efficient way to sample from $\operatorname{\sf Col}$, even approximately. Our definition is stronger than previous definitions of dCRH\xspace~\cite{DubrovI06,HarnikN10,KomargodskiY18} in that we require the existence of a universal polynomial $p(\cdot)$, whereas previous definitions allow a different polynomial per adversary. Our modification seems necessary to get non-trivial applications of dCRH\xspace, as the previous definitions are not known to imply one-way functions. In contrast, our notion of dCRH\xspace implies distributional one-way functions which, in turn, imply one-way functions~\cite{ImpagliazzoL89} (indeed, the definition of distributional one-way functions requires a universal polynomial rather than one per adversary).\footnote{The previous definition is known to imply a weaker notion of distributional one-way functions (with a different polynomial bound per each adversary)~\cite{HarnikN10}, which is not known to imply one-way functions.} We note that previous constructions of dCRH\xspace (from multi-collision resistance and ${\class{SZK}}$-hardness)~\cite{KomargodskiY18} apply to our stronger notion as well. \subsection{Commitments from dCRH\xspace and Back} We now describe our construction of constant-round statistically hiding commitments from dCRH\xspace. To understand the difficulty, let us recall the standard approach to constructing statistically hiding commitments from (fully) collision resistant hash functions \cite{NaorY89,DamgardPP93,HaleviM96}. Here, to commit to a bit $b$, we hash a random string $x$, and output $(h(x),s, b \oplus Ext_s(x))$, where $s$ is a seed for a strong randomness extractor $Ext$ and $b$ is masked with a (close to) random bit extracted from $x$. When $h$ is collision resistant, $x$ is computationally fixed and thus so is the bit $b$.
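The following is a minimal sketch of this classical construction (in Python; truncated SHA-256 stands in for a hash function drawn from the family, and an inner-product bit plays the role of the strong extractor --- both instantiations are ours, for illustration only):

\begin{verbatim}
import hashlib, os

N_BYTES, M_BYTES = 32, 16   # |x| = 256 bits, hash truncated to 128

def h(x):
    # Stand-in for a hash function drawn from the family.
    return hashlib.sha256(x).digest()[:M_BYTES]

def ext(s, x):
    # One extracted bit: the inner product of s and x over GF(2).
    v = bytes(a & b for a, b in zip(s, x))
    return bin(int.from_bytes(v, 'big')).count('1') % 2

def commit(b):
    x, s = os.urandom(N_BYTES), os.urandom(N_BYTES)
    return (h(x), s, b ^ ext(s, x)), x   # (commitment, decommitment)

def open_commitment(com, x):
    y, s, masked = com
    return masked ^ ext(s, x) if h(x) == y else None

com, x = commit(1)
assert open_commitment(com, x) == 1
\end{verbatim}

Hiding is statistical: given $h(x)$, the string $x$ retains roughly $n-m$ bits of min-entropy, so the extracted bit is close to uniform by the leftover hash lemma. Binding, in contrast, rests entirely on the collision resistance of $h$.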
However, for a dCRH\xspace $h$, this is far from being the case: for any $y$, the sender might potentially be able to sample from the entire set of preimages. The hash $h(x)$, however, does yield a weak binding guarantee. For simplicity of exposition, let us assume that any $y\in \{0,1\}^m$ has exactly $2^k$ preimages under $h$ in $\{0,1\}^n$. Then, for a noticeable fraction of commitments $y$, the adversary cannot open $y$ to a uniform $x$ in the preimage set $h^{-1}(y)$. In particular, the adversary must choose between two types of {\em entropy losses}: it either outputs a commitment $y$ of entropy $m'$ noticeably smaller than $m$, or after the commitment, it can only open to a value $x$ of entropy $k'$ noticeably smaller than $k$. One way or the other, in total $m'+k'$ must be noticeably smaller than $n=m+k$. This naturally leads us to the notion of {\em inaccessible entropy} defined by Haitner, Reingold, Vadhan and Wee~\cite{HaitnerRVW09,HaitnerReVaWe18}. Let us briefly recall what inaccessible entropy is (see \Cref{sec:prelim:inaccessibe} for a precise definition). The entropy of a random variable $X$ is a measure of ``the amount of randomness'' that $X$ contains. The notion of (in)accessible entropy measures the feasibility of sampling high-entropy strings that are {\em consistent} with a given random process. Consider the two-block generator (algorithm) $G$ that samples $x \gets \{0,1\}^n$, and then outputs $y= h(x)$ and $x$. The {\em real entropy} of $G$ is defined as the entropy of the generator's (total) output in a random execution, and is clearly equal to $n$, the length of $x$. The \emph{accessible entropy} of $G$ measures the entropy of these output blocks from the point of view of an efficient \emph{$G$-consistent} generator, which might act arbitrarily, but still outputs a value in the support of $G$. Assume, for instance, that $h$ is (fully) collision resistant. Then from the point of view of any efficient $G$-consistent generator ${\widetilde{\Gc}}$, conditioned on its first block $y$ and its internal randomness, its second output block is fixed (otherwise, ${\widetilde{\Gc}}$ can be used for finding a collision). In other words, while the value of $x$ given $y$ may have entropy $k = n -m$, this entropy is completely {\em inaccessible} for an efficient $G$-consistent generator. (Note that we do not measure here the entropy of the output blocks of ${\widetilde{\Gc}}$, which clearly can be as high as the real entropy of $G$ by taking ${\widetilde{\Gc}}= G$. Rather, we measure the entropy of the blocks from \emph{${\widetilde{\Gc}}$'s point of view}, and in particular, the entropy of its second block given the randomness used for generating the first block.) Haitner et al.\ show that any noticeable gap between the real entropy and the accessible entropy of such an efficient generator can be leveraged for constructing statistically hiding commitments, with a number of rounds that is linear in the number of blocks. Going back to dCRH\xspace, we have already argued that in the simple case that $h$ is regular and onto $\{0,1\}^m$, we get a noticeable gap between the real entropy $n=m+k$ and the accessible entropy $m'+k' \leq m+k - 1/\mathsf{poly}(n)$. We prove that this is, in fact, true for any dCRH\xspace: \begin{lemma}\label{thm:dcrhtpIAE} dCRH\xspace implies a two-block inaccessible entropy generator. \end{lemma} The block generator itself is the simple generator described above: $$ \text{output $h(x)$ and then $x$, for $x\gets \{0,1\}^n$}\enspace.
$$ The proof, however, is more involved than in the case of collision resistance. In particular, it is sensitive to the exact notion of entropy used. Collision resistant hash functions satisfy a very clean and simple guarantee --- the {\em maximum entropy}, capturing the support size, is always at most $m < n$. In contrast, for dCRH\xspace (compressing or not), the maximum entropy could be as large as $n$, which goes back to the fact that the adversary may be able to sample from the set of {\em all} collisions (albeit from a skewed distribution). Still, we show a gap with respect to average (a.k.a.\ Shannon) accessible entropy, which suffices for constructing statistically hiding commitments \cite{HaitnerReVaWe18}. \paragraph{From commitments back to dCRH\xspace.} We show that any two-message statistically hiding commitment implies a dCRH\xspace function family. Let $(\mathcal{S},\mathcal{R})$ be the sender and receiver of a statistically hiding bit commitment. The first message sent by the receiver is the description of the hash function: $h\leftarrow \mathcal{R}(1^n)$. The sender's commitment to a bit $b$, using randomness $r$, is the hash of $x=(b,r)$. That is, $h(x) = \mathcal{S}(h,b;r)$. To argue that this is a dCRH\xspace, we show that any attacker that can sample collisions that are close to the random collision distribution $\operatorname{\sf Col}$ can also break the binding of the commitment scheme. For this, it suffices to show that a collision $(b,r),(b',r')$ sampled from $\operatorname{\sf Col}$ translates to equivocation --- the corresponding commitment can be opened to two distinct bits $b\neq b'$. Roughly speaking, this is because statistical hiding implies that a random collision to a random bit $b$ (corresponding to a random hash value) is statistically independent of the underlying committed bit. In particular, a random preimage of such a commitment will consist of a different bit $b'$ with probability roughly $1/2$. See details in \Cref{sec:dcrh_from_stat_hiding}. \subsection{Commitments from SZK Hardness} We now give an overview of our construction of statistically hiding commitments directly from average-case hardness in ${\class{SZK}}$. Our starting point is a result of Ong and Vadhan~\cite{OngV08} showing that any promise problem in ${\class{SZK}}$ has an {\em instance-dependent commitment}. These are commitments that are also parameterized by an instance $x$, such that if $x$ is a {\em yes-instance}, they are statistically hiding, and if $x$ is a {\em no-instance}, they are statistically binding. We construct statistically hiding commitments from instance-dependent commitments for a hard-on-average problem $\Pi=(\Pi_N,\Pi_Y)$ in ${\class{SZK}}$. \paragraph{A first attempt: using zero-knowledge proofs.} To convey the basic idea behind the construction, let us first assume that $\Pi$ satisfies a strong form of average-case hardness where we can efficiently sample no-instances from $\Pi_N$ and yes-instances from $\Pi_Y$ so that the two distributions are computationally indistinguishable. Then a natural protocol for committing to a message $m$ is the following: The receiver $\mathcal{R}$ would sample a yes-instance $x\gets \Pi_Y$, and send it to the sender $\mathcal{S}$ along with a zero-knowledge proof \cite{GMR} that $x$ is indeed a yes-instance. The sender $\mathcal{S}$ would then commit to $m$ using an $x$-dependent commitment.
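Schematically, the commit phase of this first-attempt protocol proceeds as follows; \texttt{sample\_yes\_instance}, \texttt{zk\_prove}, \texttt{zk\_verify}, and \texttt{id\_commit} are hypothetical stand-ins for the primitives named above (they are not defined in this paper), and only the ordering of the steps is meant to be informative.
\begin{verbatim}
# Schematic commit phase of the first-attempt protocol (hypothetical APIs).
def commit_phase(m, sample_yes_instance, zk_prove, zk_verify, id_commit):
    # Receiver R: sample x <- Pi_Y together with a witness, and prove in
    # zero knowledge that x is a yes-instance.
    x, witness = sample_yes_instance()
    proof = zk_prove(statement=x, witness=witness)
    # Sender S: accept only if the proof verifies, then commit to m
    # using the x-dependent commitment.
    if not zk_verify(statement=x, proof=proof):
        raise ValueError("x was not proven to be a yes-instance")
    commitment, opening = id_commit(instance=x, message=m)
    return x, commitment, opening
\end{verbatim}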
To see that the scheme is statistically hiding, we rely on the soundness of the proof, which guarantees that $x$ is indeed a yes-instance, and then on the hiding of the instance-dependent scheme. To prove (computational) binding, we rely on the zero-knowledge property and the hardness of $\Pi$. Specifically, by zero knowledge, instead of sampling $x$ from $\Pi_Y$, we can sample it from any computationally indistinguishable distribution, without changing the probability that an efficient malicious sender breaks binding. In particular, by the assumed hardness of $\Pi$, we can sample $x$ from $\Pi_N$. Now, however, the instance-dependent commitment guarantees binding, implying that the malicious sender will not be able to equivocate. The main problem with this construction is that constant-round zero-knowledge proofs (with a negligible soundness error) are only known assuming constant-round statistically hiding commitments \cite{GoldreichKahan}, which is exactly what we are trying to construct. \paragraph{A second attempt: using witness-indistinguishable proofs.} Instead of relying on zero-knowledge proofs, we rely on the weaker notion of witness-indistinguishable proofs and use the {\em independent-witnesses paradigm} of Feige and Shamir~\cite{FeigeShamir90}. (Indeed, such proofs are known for all of ${\class{NP}}$ based on average-case hardness in ${\class{SZK}}$~\cite{GMW87,Naor91,OstrovskyW93}; see Section \ref{sec:szk} for details.) We change the previous scheme as follows: the receiver $\mathcal{R}$ will now sample {\em two} instances $x_0$ and $x_1$ and provide a witness-indistinguishable proof that at least one of them is a yes-instance. The sender will secret-share the message $m$ into two random messages $m_0,m_1$ such that $m=m_0\oplus m_1$, and return two instance-dependent commitments to $m_0$ and $m_1$ relative to $x_0$ and $x_1$, respectively. Statistical hiding follows quite similarly to the previous protocol --- by the soundness of the proof, one of the instances $x_b$ is a yes-instance, and by the hiding of the $x_b$-dependent commitment, the corresponding share $m_b$ is statistically hidden, and thus so is $m$. To prove binding, we first note that by witness indistinguishability, to prove its statement, the receiver could use $x_b$ for either $b\in\{0,1\}$. Then, relying on the hardness of $\Pi$, we can sample $x_{1-b}$ to be a no-instance instead of a yes-instance. If $b$ is chosen at random, the sender cannot predict $b$ better than guessing. At the same time, in order to break binding, the sender must equivocate with respect to at least one of the instance-dependent commitments, and since it cannot equivocate with respect to the no-instance $x_{1-b}$, it cannot break binding unless it can gain an advantage in predicting $b$. \paragraph{Our actual scheme.} The only gap remaining between the scheme just described and our actual scheme is our assumption regarding the strong form of average-case hardness of $\Pi$. In contrast, the standard form of average-case hardness only implies a single samplable distribution $D$, such that given a sample $x$ from $D$ it is hard to tell whether $x$ is a yes-instance or a no-instance better than guessing. This requires the following changes to the protocol. First, lacking a samplable distribution on yes-instances, we consider instead the product distribution $D^n$, as a way to sample {\em weak yes-instances} --- $n$-tuples of instances where at least one is a yes-instance in $\Pi_Y$.
Unlike before, where everything in the support of the yes-instance sampler was guaranteed to be a yes-instance, now we are only guaranteed that a random tuple is a weak yes-instance with overwhelming probability. To deal with this weak guarantee, we add a {\em coin-tossing into the well} phase \cite{GMW87}, where the randomness for sampling an instance from $D^n$ is chosen together by the receiver and sender. We refer the reader to Section \ref{sec:szk} for more details. \section{Some Open Questions} \begin{enumerate} \item Build a dCRH from the MQ assumption (or its variants). \item Alternatively, show that a random degree-2 mapping is not a dCRH. \item Give a definition of a dCRH with (min-)entropy instead of statistical distance and show that it implies constant-round {\em short} commitments. \item Construct the above dCRH from SZK or its variants. \item Show that a two-round statistically hiding commitment (not short) implies a (standard) dCRH (this proof should be easy). \item Show that an $r$-round public-coin statistically hiding commitment (not short) implies a (standard) dCRH (this seems harder). \end{enumerate}
\section{INTRODUCTION} \label{Sec:Intro} The equation of state (EOS) of dense, hot matter is an essential ingredient in modeling neutron stars and hydrodynamical simulations of astrophysical phenomena such as core-collapse supernova explosions, proto-neutron stars, and compact object mergers. In broad terms, two major regions for the EOS can be identified at relatively low temperatures or entropies. At sub-nuclear densities (baryon densities $n$ from $10^{-7}$ to $\sim 0.1~{\rm fm}^{-3}$), matter is an inhomogeneous mixture of nucleons (neutrons and protons), light nuclear clusters (alpha particles, deuterons, tritons, etc.), and heavy nuclei. Leptons, mainly electrons, are also present to balance the nuclear charges. Uniform matter and heavy nuclei become progressively more neutron-rich as the density rises. Above about 0.01 fm$^{-3}$, nuclei deform in response to the competition between surface and Coulomb energies, which may also lead to pasta-like geometrical configurations. By a density of about 0.1 fm$^{-3}$, the inhomogeneous phase gives way to a uniform phase of nucleons and electrons. Above the nuclear saturation density, $n_0\simeq0.16$ fm$^{-3}$, the uniform phase may become populated with more exotic matter, including Bose (pion or kaon) condensates, hyperons and deconfined quark matter. The appearance of Bose condensates and deconfined quark matter may be through first-order or continuous phase transitions. At large enough temperatures below $n_0$, the inhomogeneous phase disappears and is again replaced by a uniform phase of nucleons and electrons. At sufficiently high temperatures at every density, thermal populations of hadrons and pions should appear. The composition and thermodynamic properties of matter at a given density $n$, temperature $T$, and overall charge fraction (parametrized by the electron concentration $Y_e=n_e/n$) are determined by minimizing the free energy density. In all realistic situations, matter is charge-neutral, but the net baryonic charge is non-zero and is balanced by the net leptonic charge. It can generally be assumed that baryonic species are in strong interaction equilibrium, but equilibrium does not always exist for leptonic species, which are subject to weak interactions. In circumstances in which dynamical timescales are long compared to weak interaction timescales, the free energy minimization is also made with respect to $Y_e$. Such matter is said to be in beta equilibrium and its properties are a function of only density and temperature, and, if neutrinos are trapped in matter, the total number of leptons per baryon. Below $n_0$, where generally the only baryons are neutrons and protons and the only leptons are electrons and possibly neutrinos, charge neutrality dictates that the number of electrons per baryon $Y_e$ equals the proton fraction $x=n_p/n$, but at higher densities, the charge fractions of muons, hyperons, Bose condensates and quarks, if present, have to be included. Beta equilibrium may not occur during gravitational collapse or dynamical expansion, such as occurs in Type II supernovae and neutron star mergers. The free energy can be calculated using a variety of methods, but it is generally a complicated functional of the main physical variables $n$, $T$ and $Y_e$ and cannot be expressed analytically. In order to describe the EOS efficiently, it is customary to build three-dimensional tables of its properties.
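As an illustration of how such tables are used in practice, the sketch below interpolates a tabulated quantity trilinearly on a fixed $(n,T,Y_e)$ grid; the grid ranges, resolution, and placeholder data are illustrative assumptions and do not describe any published table format.
\begin{verbatim}
# Sketch: trilinear interpolation of a 3D EOS table in (log n, T, Y_e).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_n = np.linspace(-7.0, 0.0, 71)   # log10 of baryon density [fm^-3]
T     = np.linspace(0.0, 50.0, 51)   # temperature [MeV]
Ye    = np.linspace(0.0, 0.5, 26)    # electron fraction

# Placeholder entries; a real table holds precomputed thermodynamics.
pressure_table = np.zeros((log_n.size, T.size, Ye.size))

interp = RegularGridInterpolator((log_n, T, Ye), pressure_table)
P = interp([[np.log10(0.05), 10.0, 0.3]])  # P(n=0.05 fm^-3, T=10 MeV, Ye=0.3)
\end{verbatim}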
An essential criterion is that full EOS tables be thermodynamically consistent so as not to generate spurious and unphysical entropy during hydrodynamical simulations. Beginning with the work of Lattimer and Swesty (hereafter referred to as LS) \cite{LS}, examples of such tables include the works of Shen et al. \cite{Shen1}, Shen et al. \cite{Shen2a,Shen2b}, and others \cite{Steiner,Hempel}. We refer the reader to Refs. \cite{SLM94,Oconnor,sys07,Steiner}, in which outcomes of supernova simulations, for pre-bounce evolution and black hole formation, have been compared using different EOSs. A parallel study \cite{bbj13} of neutron star mergers with different EOSs has also been undertaken. The EOS, in addition to controlling the global hydrodynamical evolution, also determines weak interaction rates, including those of electron capture and beta decay reactions and neutrino-matter interactions. These reaction rates depend sensitively on the properties of matter, including the magnitudes of the neutron and proton chemical potentials and effective nucleon masses, among other aspects. Also of considerable importance are the specific heats and susceptibilities of the constituents, which determine, respectively, the thermal and transport properties of matter. Thermal properties, especially, may be easier to diagnose from neutrino observations of supernovae: the timescale for black hole formation, in cases where that happens, appears to be an important example~\cite{bbj13}. One of the most realistic descriptions of the properties of interacting nucleons is the potential model Hamiltonian density of Akmal, Pandharipande, and Ravenhall (APR hereafter)~\cite{apr}, which reproduces the microscopic potential model calculations of Akmal and Pandharipande (AP) \cite{ap}. An interesting feature of the AP model is the occurrence of a neutral pion condensate at supra-nuclear densities for all proton fractions. The AP model is especially relevant because it satisfies several important global criteria that have been gleaned from nuclear physics experiments and astrophysical studies of neutron stars, especially those concerning the neutron star maximum mass and typical neutron star radii. Both isospin-symmetric and isospin-asymmetric properties of cold baryonic (neutron-proton) matter in the vicinity of $n_0$ are of considerable importance, as they govern the masses of nuclei, nucleon-pairing phenomena, collective motions of nucleons within nuclei, the transition density from inhomogeneous to homogeneous bulk matter, the radii of neutron stars, and many observables in medium-energy heavy-ion collisions~\cite{Steiner05}. One of the most important isospin-symmetric properties at $n_0$ is the density derivative of the pressure $P$, or the incompressibility $K_0$ of matter, which is now rather well-determined: $K_0=9(dP/dn)_{n_0,x=1/2,T=0}\simeq230\pm30$ MeV from Refs. \cite{Garg04,Colo04} and $240\pm20$ MeV from Ref. \cite{shlomo06}. Another isospin-symmetric nuclear constraint stems from the thermal properties of nuclei and bulk matter. Fermi liquid theory holds that the thermal properties of the equation of state are largely controlled by the nucleon effective masses. Briefly, experiments indicate that nucleon effective masses are reduced from their bare values ($m$) at $n_0$ for symmetric matter to approximately $m^*_0/m\simeq0.8\pm0.1$ \cite{bohigas79,krivine80}, and microscopic theory suggests that they decrease further at higher densities.
The extraction of $m^*_0$ from nuclear level densities is complicated by uncertain contributions from the surface energy as well as possible energy dependences in $m^*$. Additionally, of great significance is the influence of isospin asymmetry on the properties of nucleonic matter, not only on the effective masses, but also on its energy $E(n,x,T)$, particularly the symmetry energy parameter $S_v = \frac{1}{8}(\partial^2E/\partial x^2)_{n_0,x=1/2,T=0}$ and its stiffness parameter $L = \frac{3n_0}{8}(\partial^3E/\partial n\,\partial x^2)_{n_0,x=1/2,T=0}$. Starting from the Bethe-Weizs\"{a}cker mass formula~\cite{W35,BB36} and its modernization \cite{moller95,pearson01} for nuclei containing a fraction $x$ of protons, most mass formulas characterize the symmetry energy of nucleonic matter by these two parameters. From a variety of experiments, including measurements of nuclear binding energies, neutron skin thicknesses of heavy nuclei, dipole polarizabilities, and giant dipole resonance energies \cite{L,tsang12}, $S_v$ lies in the range 30-35 MeV and $L$ lies in the range 40-60 MeV. Recent developments in the prediction of the properties of pure neutron matter by Gandolfi, Carlson and Reddy \cite{gcr12} and by Hebeler and Schwenk \cite{kths13} suggest values of $S_v$ and $L$ very similar to those derived from nuclear experiments. It is worth noting that there exists a phenomenological relation~\cite{lp01} between neutron star radii and the zero temperature pressure of neutron star matter near $n_0$; this pressure is nearly that of pure neutron matter and is largely a function of the $L$ parameter \cite{L}. Astrophysical observations of photospheric radius expansion in X-ray bursts \cite{ogp09} and quiescent low-mass X-ray binaries \cite{gswr13} have been used \cite{slb10,slb13} to conclude that the radii of neutron stars with masses in the range 1.2-1.8$~{\rm M}_\odot$ are between 11.5 km and 13 km, and therefore predict that $L\simeq45\pm10$ MeV, although the astrophysical model dependence of this result may significantly enlarge its uncertainty. Nevertheless, this range overlaps that from nuclear experiments and also that from neutron matter theory, suggesting that systematic dependencies are not playing a major role in the astrophysical determinations. A potentially more important astrophysical constraint originates from mass measurements of neutron stars. A consequence of general relativity is the existence of a maximum neutron star mass for every equation of state. Causality arguments, together with current radius estimates, indicate this is in the range of 2-2.8~${\rm M}_\odot$ \cite{L}. The largest precisely measured neutron star masses are $1.97\pm0.04~{\rm M}_\odot$ \cite{Demorest} and $2.01\pm0.04~{\rm M}_\odot$ \cite{Antoniadis}. It is likely that the true maximum mass is at least a few tenths of a solar mass larger than these measurements. An important issue concerns the quality and relevance of experimental information that could constrain the thermal properties of dense matter. Calibrating the thermal properties of bulk matter from experimental results involves disentangling the effects of several overlapping energy scales (associated with shell and pairing effects, collective motion, etc.) that determine the properties of finite-sized nuclei.
The level densities of nuclei (inferred through data on, for example, neutron evaporation spectra and the disposition of single particle levels in the valence shells of nuclei \cite{egidy88,egidy05}) depend on the Landau effective masses, $m_{n,p}^*$, of neutrons and protons. These masses are sensitive to both the momentum and energy dependence of the nucleon self-energy, leading to the so-called $k$-mass and $\omega$-mass, emphasized, for example, in Refs. \cite{NY81,FFP81,Prakash83,Mahaux85}. For bulk matter, in which the predominant effect is from the $k$-mass, $m_{n,p}^*/m=0.7\pm 0.1$ has been generally preferred. The specific heat and entropy of nuclei receive substantial contributions from low-lying collective excitations, as shown in Refs. \cite{VV83,VV85}, a subject that needs further exploration to pin down the role of thermal effects in bulk matter. Following the suggestions in Refs.~\cite{S83,BS83}, the liquid-gas phase transition has received much attention, with the finding that the transition temperature for nearly isospin symmetric matter lies in the range 15-20 MeV~\cite{DasGupta01}. Although the critical temperature depends on the incompressibility parameter $K_0$, it is also sensitive to the specific heat of bulk matter in the vicinity of $n_0$, which depends on the effective masses. Further information about the effective masses can be ascertained from fits of the optical model potential to data \cite{myers66}, albeit at low momenta. The density dependence and the saturating high-momentum behavior of the real part of the optical model potential have been crucial in explaining the flow of momentum and energy observed in intermediate energy ($< 1$ GeV) collisions of heavy ions, preserving at the same time the now well-established value of the incompressibility $K_0=230\pm 30$ MeV, as demonstrated in Refs. \cite{Prakash88,Gale90,Danielewicz00}. Notwithstanding these activities, further efforts are needed to calibrate the finite temperature properties of nucleonic matter to reach at least the level of accuracy to which the zero temperature properties have been assessed. Relatively few EOSs have been constructed from underlying interactions satisfying all these important constraints \cite{Steiner}. However, AP and APR satisfy nearly all of them. For APR, $K_0=266$ MeV, $S_2\simeq 32.6$ MeV, $L\simeq58.5$ MeV, and $m^*_0/m=0.7$, within two standard deviations of the experimental ranges. The maximum neutron star mass supported by the APR model is in excess of $2~{\rm M}_\odot$, and the radius of a $1.4~{\rm M}_\odot$ star is about 12 km. Despite its obvious positive characteristics, no three-dimensional tabular EOS has been constructed with the APR equation of state. Furthermore, its finite temperature properties for arbitrary degeneracy and proton fractions, including the effects of its pion condensate, have not been studied to date. The chief motivation for the present study is to perform a detailed analysis of the AP EOS through the properties predicted by its APR parametrization. Particular attention is paid to the density dependence of the nucleon effective masses, which govern both the qualitative and quantitative behaviors of its thermal properties. Another objective of the present work is to document the analytic relations describing the thermodynamic properties of potential models. These are essential ingredients in the generation of EOS tables based upon modern Skyrme-like energy density functionals.
Importantly, the analytic expressions developed here can be utilized to update LS-type liquid droplet EOS models that take the presence of nuclei at subnuclear densities and subcritical temperatures into account. This would represent a significant improvement to existing EOS tables in that they could be replaced with ones including realistic effective masses. Some aspects of the thermal properties of hot, dense matter have been explored in Ref.~\cite{Prak87} for isospin symmetric matter, but the comparative thermal properties of different Skyrme-like interactions remain largely unexplored. In view of the lack of systematic studies contrasting the predicted thermodynamic properties of the APR model with those of other Skyrme energy density functionals, we are additionally motivated to perform such studies for one particular case, that of the Ska force due to K\"{o}hler \cite{ska}. This is one of the EOSs tabulated in the suite provided in Ref.~\cite{LattimerEOSs} and is reproduced here in detail. The methods developed here are general and can be advantageously used for other Skyrme-like energy density functionals in current use. For both the APR and Ska models, we compute the EOS for uniform matter for temperatures ranging up to 50 MeV, baryon number densities in the range $10^{-7}$ fm$^{-3}$ to 1 fm$^{-3}$, and proton fractions between 0 (pure neutron matter) and 0.5 (isospin-symmetric nuclear matter). Ideal gas photonic and leptonic contributions (both electrons and muons) are included for all models. The results presented here for densities below 0.1 fm$^{-3}$ in the homogeneous phase serve only to gauge differences from the more realistic situation in which supernova matter contains an inhomogeneous phase. Work toward extending the calculations to realistically describe the low density/temperature inhomogeneous phase containing finite nuclei is in progress at various levels of sophistication (droplet model, Hartree, Hartree-Fock, Hartree-Fock-Bogoliubov, etc.), beginning with an LS-type liquid droplet model approach, and will be reported separately. In addition, hyperons and a possible phase transition to deconfined quark matter are not considered in this work. The organization of this paper is as follows. In Sec. \ref{Sec:pmodels}, we briefly discuss some of the features of the APR and Ska Hamiltonians, and the ingredients involved in their construction. We then present their single-particle energy spectra and potentials using a variational procedure in Sec. III. In Sec. IV, properties of cold, isospin-symmetric matter and consequences for small deviations from zero isospin asymmetry are examined. Analyses of results for the two models include those of energies, pressures, neutron and proton chemical potentials, and inverse susceptibilities. Section V contains our study of the behavior of all the relevant state variables for the APR and Ska models at finite temperature. The numerical results, valid for all regimes of degeneracy, are juxtaposed with approximate ones in the degenerate and non-degenerate limits, for which analytical expressions have been derived. Contributions from leptons and photons are also summarized in this section. In Sec. VI, we address the transition from a low-density to a high-density phase in which a neutral pion condensate is present using a Maxwell construction. The numerical results of this section constitute the equation of state of supernova matter for the APR model in the bulk homogeneous phase. Our summary and conclusions are given in Sec. VII.
The appendices contain ancillary material employed in this work. In Appendix A, we provide a detailed derivation of the single-particle energy spectra for the potential models used. General expressions for all the state variables of the APR model, valid for all neutron-proton asymmetries, are collected in Appendix B. The formalism to include contributions from leptons (electrons and positrons) and photons is presented in Appendix C, wherein both the exact and analytical representations are summarized. Numerical methods used in our calculations of the Fermi-Dirac integrals for arbitrary degeneracy are summarized in Appendix D. Appendix E contains thermodynamically consistent prescriptions to render EOSs causal when they become acausal at some high density, for both zero and finite temperature cases. \section{POTENTIAL MODELS} \label{Sec:pmodels} In this work, we study the thermal properties of uniform matter predicted by potential models. We focus on an interaction derived from the work of Akmal and Pandharipande (hereafter AP) \cite{ap}, using an approximation developed by Akmal, Pandharipande and Ravenhall (hereafter APR)~\cite{apr}, and a Skyrme \cite{Skyrme} force developed by K\"{o}hler (Ska henceforth) \cite{ska}. We pay special attention to the finite temperature properties of these two models for the physical conditions expected in supernovae and neutron star mergers, which have heretofore not received much attention. The Hamiltonian density of Ska \cite{ska} is a typical example of the approach based on the effective zero-range forces pioneered by Skyrme \cite{Skyrme}; such interactions are commonly called Skyrme forces. These were further developed to describe properties of bulk matter and nuclei in Ref. \cite{vb}. Skyrme forces are easier to use in this context than finite-range forces (see, e.g., Ref. \cite{Gogny}). To date, a vast number of variants of this approach exist in the literature \cite{Dutra}, which have varying success in accounting for properties of nuclei and neutron stars. The strength parameters of the Skyrme-like energy density functionals are calibrated at nuclear and sub-nuclear densities to reproduce the properties of many nuclei, their behavior at high densities being constrained largely by neutron-star data. The Hamiltonian density of APR is a parametric fit to the AP microscopic model calculations in which the nucleon-nucleon interaction is modeled by the Argonne $v_{18}$ 2-body potential~\cite{v18}, the Urbana UIX 3-body potential~\cite{uix}, and a relativistic boost potential $\delta v$~\cite{dv}, which is a kinematic correction when the interaction is observed in a frame other than the rest frame of the nucleons. These microscopic potentials accurately fit scattering data in vacuum and thus incorporate the long scattering lengths of nucleons at low energy. Additionally, they have been successful in accounting for the binding energies and spectra of light nuclei. An interesting feature of AP, incorporated in the Skyrme-like parametrization of the APR model, is that at supra-nuclear densities a neutral pion condensate appears. Despite the softness induced by the pion condensate in the high density equation of state, the APR model is capable of supporting a neutron star of $2.19~{\rm M}_\odot$, in excess of the recent accurate measurements of the masses of PSR J1614-2230 ($1.97\pm0.04~{\rm M}_\odot$) \cite{Demorest} and PSR J0348+0432 ($2.01\pm0.04~{\rm M}_\odot$) \cite{Antoniadis}.
Being non-relativistic potential models, both the APR and Ska models can become acausal (that is, the speed of sound exceeds the speed of light) at high density. A practical, thermodynamically consistent fix that keeps their behaviors causal is possible and is adopted in this work (see Appendix E). Our choice of these two models was motivated by several considerations, including the facts that (i) both models yield similar results for the equilibrium density, binding energy, symmetry energy, and compression modulus of symmetric matter, as well as for the maximum mass of neutron stars, and (ii) the two models differ significantly in other properties such as their Landau effective masses (important for thermal properties), derivatives of their symmetry energy at nuclear density (important for the high density behavior of isospin asymmetry energies), skewness (i.e., the density derivative of the compression modulus) at nuclear density, and their predicted radii corresponding to the maximum mass configuration. The impact of the different features of these two models on their thermal properties is one of the main foci of our work here. The methods used to explore their thermal effects are applicable and easily adapted to other Skyrme-like energy density functionals. \subsection{Hamiltonian density of APR} Explicitly, the APR Hamiltonian density is given by \cite{apr} \begin{eqnarray} \mathcal{H}_{APR} & = & \left[\frac{\hbar^2}{2m}+(p_3+(1-x)p_5)ne^{-p_4 n}\right]\tau_n \nonumber \\ & & +\left[\frac{\hbar^2}{2m}+(p_3+x p_5)ne^{-p_4 n}\right]\tau_p \nonumber \\ & & +g_1(n)\left[1-(1-2x)^2\right]+g_2(n)(1-2x)^2, \label{HAPR} \end{eqnarray} where $n=n_n+n_p$ is the baryon density, $x=n_p/n$ is the proton fraction, and \begin{eqnarray} n_i & = & \frac{1}{\pi^2}\int dk_i~\frac{k_i^2}{1+e^{(\epsilon_{k_i}-\mu_i)/T}} \\ \tau_i & = & \frac{1}{\pi^2}\int dk_i~\frac{k_i^4}{1+e^{(\epsilon_{k_i}-\mu_i)/T}} \end{eqnarray} are the number densities and kinetic energy densities of nucleon species $i=n,p$, respectively. The quantities $\epsilon_{k_i}$, $\mu_i$ and $T$ are the single-particle spectra, chemical potentials and temperature (with Boltzmann's constant $k_B$ set to unity), respectively. The first two terms on the right-hand side of this expression are due to kinetic energy and momentum-dependent interactions, while the last two terms are due to density-dependent interactions. Compared to a classical Skyrme interaction, such as Ska (described below), this model has a more complex density dependence in the single-particle potentials and effective masses. Due to the occurrence of neutral pion condensation at supra-nuclear densities, the potential energy density functions $g_1$ and $g_2$ take different forms on either side of the transition density. In the low density phase (LDP) \begin{eqnarray} g_{1L} & = & -n^2\left[ p_1+p_2n+p_6n^2+(p_{10}+p_{11}n)e^{-p_9^2n^2}\right] \\ g_{2L} & = & -n^2\left( \frac{p_{12}}{n}+p_7+p_8n+p_{13}e^{-p_9^2n^2} \right), \end{eqnarray} whereas, in the high density phase (HDP) \begin{eqnarray} g_{1H} & = & g_{1L}-n^2\left[p_{17}(n-p_{19})+p_{21}(n-p_{19})^2 \right] e^{p_{18}(n-p_{19})} \nonumber \\ \\ g_{2H} & = & g_{2L}-n^2\left[p_{15}(n-p_{20})+p_{14}(n-p_{20})^2 \right] e^{p_{16}(n-p_{20})}\,. \nonumber \\ \end{eqnarray} The values of the parameters $p_1$ through $p_{21}$, as well as their dimensions, which ensure that $\mathcal{H}_{APR}$ has units of $\mbox{MeV~fm}^{-3}$, are presented in Table I.
Alternate choices for the underlying microscopic physics lead to different fits to the above generic form, so even though $p_{13}, p_{14}$ and $p_{21}$ are all 0 in our case, we carry the terms containing these coefficients in the algebra of Appendix B. \begin{table}[!h] \begin{center} \begin{tabular}{|l|c||l|c|} \hline $p_1 $ & $337.2~\mbox{MeV~fm}^3$ & $p_{14}$ & 0 \\ $p_2 $ & $ -382.0~\mbox{MeV~fm}^6$ & $p_{15}$ & $287.0~\mbox{MeV~fm}^6$ \\ $p_3 $ & $ 89.8~\mbox{MeV~fm}^5$ & $p_{16}$ & $-1.54~\mbox{fm}^3$ \\ $p_4 $ & $ 0.457~\mbox{fm}^3$ & $p_{17}$ & $175.0~\mbox{MeV~fm}^6 $ \\ $p_5 $ &$-59.0~\mbox{MeV~fm}^5$ & $p_{18}$ & $-1.45~\mbox{fm}^3$ \\ $p_6 $ & $-19.1~\mbox{MeV~fm}^9$ & $p_{19}$ & $0.32~\mbox{fm}^{-3}$ \\ $p_7 $ & $214.6~\mbox{MeV~fm}^3$ & $p_{20}$ & $0.195~\mbox{fm}^{-3}$ \\ $p_8 $ & $ -384.0~\mbox{MeV~fm}^6$ & $p_{21}$ & 0 \\ $p_9 $ & $ 6.4~\mbox{fm}^3$ & & \\ $p_{10} $ & $ 69.0~\mbox{MeV~fm}^3$ & & \\ $p_{11}$ & $ -33.0~\mbox{MeV~fm}^6$ & & \\ $p_{12} $ & $ 0.35~\mbox{MeV}$ & & \\ $p_{13} $ & 0 & & \\ \hline \end{tabular} \caption[Parameter values for $\mathcal{H}_{APR}$]{Parameter values for the Hamiltonian density of Akmal, Pandharipande, and Ravenhall \cite{apr}. Values in the last column are specific to the high density phase (HDP). The dimensions are such that the Hamiltonian density is in MeV fm$^{-3}$.} \end{center} \end{table} The trajectory in the $n-x$ plane, for any temperature, along which the transition from the LDP to the HDP occurs is obtained by solving \begin{eqnarray} && g_{1L}[1-(1-2x)^2]+g_{2L}(1-2x)^2 \nonumber \\ && \hspace{40pt} = g_{1H}[1-(1-2x)^2]+g_{2H}(1-2x)^2\,. \label{transition} \end{eqnarray} The solution gives a transition density $n_t=0.32 ~\mbox{fm}^{-3}$ for symmetric nuclear matter $(x=1/2)$ and $n_t=0.195 ~\mbox{fm}^{-3}$ for pure neutron matter $(x=0)$. For intermediate values of $x$, the transition density is approximated to high accuracy by the polynomial fit \begin{equation} n_t(x) =0.1956+ 0.3389~x + 0.2918~x^2 - 1.2614~x^3 + 0.6307~x^4\, . \label{polfit} \end{equation} In calculations of subsequent sections, the transition from the LDP to the HDP at zero and finite temperatures will be made through the use of the above polynomial fit. The mixed phase region is determined via a Maxwell construction, for the numerical purposes of which $n_t$ is used as an input. We show in Sec. VI that while the transition is independent of $T$ for any $x$, the two densities which define the boundary of the phase-coexistence region do exhibit a weak dependence on temperature.
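For reference, Eq. (\ref{polfit}) is straightforward to evaluate numerically; the short check below (a sketch) reproduces the quoted endpoint values of $n_t$.
\begin{verbatim}
# Transition density fit of Eq. (polfit); endpoint check.
def n_t(x):
    """LDP -> HDP transition density [fm^-3] at proton fraction x."""
    return 0.1956 + 0.3389*x + 0.2918*x**2 - 1.2614*x**3 + 0.6307*x**4

print(n_t(0.0))   # 0.1956, pure neutron matter (quoted: 0.195)
print(n_t(0.5))   # 0.3197, symmetric matter (quoted: 0.32)
\end{verbatim}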
\subsection{Hamiltonian density of Ska} The Hamiltonian density of Ska \cite{ska}, based on the Skyrme energy density functional approach, is expressed as \begin{eqnarray} \mathcal{H}_{Ska} & = & \frac{\hbar^2}{2m_n}\tau_n+\frac{\hbar^2}{2m_p}\tau_p \nonumber \\ & & +n(\tau_n+\tau_p)\left[\frac{t_1}{4}\left(1+\frac{x_1}{2}\right) +\frac{t_2}{4}\left(1+\frac{x_2}{2}\right)\right] \nonumber \\ & & +(\tau_n n_n+\tau_p n_p)\left[\frac{t_2}{4}\left(\frac{1}{2}+x_2\right) -\frac{t_1}{4}\left(\frac{1}{2}+x_1\right)\right] \nonumber \\ & & +\frac{t_0}{2}\left(1+\frac{x_0}{2}\right)n^2 -\frac{t_0}{2}\left(\frac{1}{2}+x_0\right)(n_n^2+n_p^2) \nonumber \\ & & +\left[\frac{t_3}{12}\left(1+\frac{x_3}{2}\right)n^2 -\frac{t_3}{12}\left(\frac{1}{2}+x_3\right)(n_n^2+n_p^2)\right]n^{\epsilon}\,. \nonumber \\ \label{HSKA} \end{eqnarray} Terms involving $\tau_i$ with $i=n,p$ are purely kinetic in origin, whereas terms involving $n\tau_i$ and $n_i\tau_i$ arise from the exchange part of the nucleon-nucleon interaction. The latter determine the density dependence of the effective masses (see below). The remaining terms, dependent on powers of the individual and total densities, give the potential part of the energy density. The various strength parameters are calibrated to desired properties of bulk matter and of nuclei, chiefly close to the empirical nuclear equilibrium density. Many other parametrizations of the Skyrme-like energy density functional also exist \cite{Dutra} and are characterized by different values of observable physical quantities (see below). The parameters $t_0$ through $t_3$, $x_0$ through $x_3$, and $\epsilon$ for the Ska model~\cite{ska} are listed in Table II. \begin{table}[!h] \begin{center} \begin{tabular}{|c|c|c|c|} \hline i & $t_i$ & $x_i$ & $\epsilon$ \\ \hline 0 & $-1602.78 ~\mbox{MeV~fm}^3$ & -0.02 & 1/3 \\ 1 & $570.88 ~\mbox{MeV~fm}^5$ & 0 & \\ 2 & $-67.7 ~\mbox{MeV~fm}^5$ & 0 & \\ 3 & $8000.0 ~\mbox{MeV~fm}^4$ & -0.286 & \\ \hline \end{tabular} \caption[Parameter values for $\mathcal{H}_{Ska}$.]{Parameter values for the Ska Hamiltonian density~\cite{ska}. The dimensions are such that the Hamiltonian density is in MeV fm$^{-3}$.} \end{center} \end{table} \section{SINGLE-PARTICLE ENERGY SPECTRA} The single-particle energy spectra $\epsilon_{k_i},~(i=n,p)$ that appear in the Fermi-Dirac (FD) distribution functions \\ $n_{k_i} = \left[{1+e^{(\epsilon_{k_i}-\mu_i)/T}}\right]^{-1}$ are obtained from functional derivatives of the Hamiltonian density (see Appendix A for the derivation): \begin{eqnarray} \epsilon_{k_i} & = & k_i^2\frac{\partial \mathcal{H}}{\partial \tau_i} + \frac{\partial \mathcal{H}}{\partial n_i}. \label{spectra} \end{eqnarray} The ensuing results can be expressed as \begin{eqnarray} \epsilon_{k_n} &=& \frac{\hbar^2k^2}{2m} + U_n(n,k) \nonumber \\ \epsilon_{k_p} &=& \frac{\hbar^2k^2}{2m} + U_p(n,k)\,, \label{spectranp} \end{eqnarray} where $m$ is the nucleon mass in vacuum, and $U_n$ and $U_p$ are the neutron and proton single-particle momentum-dependent potentials, respectively. Utilizing these spectra, the Landau effective masses $m_i^*$ are \begin{eqnarray} m_i^* \equiv k_{F_i} \left[ \left| \frac {\partial \epsilon_{k_i}} {\partial k} \right|_{k_{F_i}} \right]^{-1} \,, \label{effmi} \end{eqnarray} where $k_{F_i}$ are the Fermi momenta of species $i$. Physical quantities such as the thermal energy, thermal pressure, susceptibilities, specific heats at constant volume and pressure, and entropy all depend sensitively on these effective masses, as highlighted in later sections.
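The definition in Eq. (\ref{effmi}) can be evaluated numerically for any spectrum; the snippet below (a sketch) applies a central-difference derivative to a toy quadratic spectrum, for which the definition returns the input mass exactly. Eq. (\ref{effmi}) is written in units with $\hbar=1$; factors of $(\hbar c)^2$ are restored below, and the spectrum, potential shift, and Fermi momentum used are illustrative choices.
\begin{verbatim}
# Landau effective mass from Eq. (effmi) via numerical differentiation.
HBARC = 197.327     # hbar*c [MeV fm]
M     = 939.0       # vacuum nucleon mass [MeV]

def effective_mass(spectrum, kF, dk=1e-5):
    """m* [MeV] = (hbar c)^2 kF / |d eps/dk| at k = kF (central difference)."""
    deps_dk = (spectrum(kF + dk) - spectrum(kF - dk)) / (2.0*dk)
    return HBARC**2 * kF / abs(deps_dk)

mstar_in = 0.7*M                                     # toy input
eps = lambda k: (HBARC*k)**2/(2.0*mstar_in) - 60.0   # toy spectrum, V = -60 MeV
print(effective_mass(eps, kF=1.33)/M)                # 0.7, as expected
\end{verbatim}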
\subsection*{APR single-particle potentials} From Eq. (\ref{HAPR}) and Eq. (\ref{spectranp}), the explicit forms of the single-particle potentials for the LDP Hamiltonian density of APR are \begin{eqnarray} U_{nL}(n,k) &=& (p_3+Y_np_5)ne^{-p_4n}k^2 \nonumber \\ &+& \left\{\left[p_3+p_5-p_4n(p_3+Y_np_5)\right]\tau_n \right. \nonumber \\ &+& \left.\left[p_3-p_4n(p_3+Y_pp_5)\right]\tau_p\right\} e^{-p_4n} \nonumber \\ &+& 4Y_p\frac{g_{1L}}{n} + 2(Y_n-Y_p)\frac{g_{2L}}{n} \nonumber \\ &+& 4Y_nY_pf_{1L} + (Y_n-Y_p)^2f_{2L} \nonumber \\ U_{pL}(n,k) &=& (p_3+Y_pp_5)ne^{-p_4n}k^2 \nonumber \\ &+& \left\{\left[p_3+p_5-p_4n(p_3+Y_pp_5)\right]\tau_p \right. \nonumber \\ &+& \left.\left[p_3-p_4n(p_3+Y_np_5)\right]\tau_n\right\} e^{-p_4n} \nonumber \\ &+& 4Y_n\frac{g_{1L}}{n} + 2(Y_p-Y_n)\frac{g_{2L}}{n} \nonumber \\ &+& 4Y_nY_pf_{1L} + (Y_n-Y_p)^2f_{2L} \,, \label{Unps} \end{eqnarray} with $Y_p=x$ and $Y_n=1-x$, and where \begin{eqnarray} f_{1L} = \frac{dg_{1L}}{dn} - \frac{2g_{1L}}{n}~~ {\rm and}~~ f_{2L} = \frac{dg_{2L}}{dn} - \frac{2g_{2L}}{n}. \end{eqnarray} In the HDP, \begin{eqnarray} U_{nH}(n,k) & = & U_{nL}(n,k)-\frac{4Y_p(Y_n-Y_p)}{n}(\delta g_1-\delta g_2) \nonumber \\ &+&4Y_nY_p\delta f_1 + (Y_n-Y_p)^2\delta f_2 \\ U_{pH}(n,k) & = & U_{pL}(n,k)+\frac{4Y_n(Y_n-Y_p)}{n}(\delta g_1-\delta g_2) \nonumber \\ &+&4Y_nY_p\delta f_1 + (Y_n-Y_p)^2\delta f_2. \end{eqnarray} The functions $\delta g_1,~\delta g_2,~\delta f_1$, and $\delta f_2$ are defined in Appendix B. The corresponding effective masses from Eq. (\ref{effmi}) are \begin{eqnarray} \frac {m_i^*}{m} = \left[ 1 + \frac {2m}{\hbar^2} (p_3+Y_ip_5)ne^{-p_4 n} \right]^{-1} \,, \label{effmAPR} \end{eqnarray} where $Y_i = (1-x)$ for neutrons ($i=n$) and $Y_i=x$ for protons ($i=p$). Subsuming the $k^2$-dependent parts of $U_i(n,k)$ in Eq. (\ref{Unps}) into the kinetic energy terms in Eqs. (\ref{spectranp}), the single-particle energies may be expressed as \begin{eqnarray} \epsilon_{k_i} &=& \frac{\hbar^2k^2}{2m_i^*} + V_i(n) \,, \label{newspectranp} \end{eqnarray} where the functional forms of $V_i(n)$ are readily ascertained from the relations in Eq. (\ref{Unps}). The quadratic momentum-dependence of the single-particle spectra, albeit density and concentration dependent through the effective masses, is akin to that of free Fermi gases. Consequently, the thermal state variables can be calculated as for free Fermi gases, but with attendant modifications arising from the density-dependent effective masses, as will be discussed later. \subsection*{Skyrme single-particle potentials} Explicit forms of the single-particle potentials for the Ska Hamiltonian are given by \begin{eqnarray} U_n(n,k) &=& (X_1+Y_nX_2)nk^2 + (X_1+X_2)\tau_n+X_1\tau_p \nonumber \\ &+& 2n(X_3+Y_nX_4) + n^{1+\epsilon}\left[(2+\epsilon)X_5 \right. \nonumber \\ &+& \left. \left\{2Y_n+\epsilon({Y_n}^2 +{Y_p}^2)\right\}X_6\right] \nonumber \\ U_p(n,k) &=& (X_1+Y_pX_2)nk^2 + (X_1+X_2)\tau_p+X_1\tau_n \nonumber \\ &+& 2n(X_3+Y_pX_4) + n^{1+\epsilon}\left[(2+\epsilon)X_5 \right. \nonumber \\ &+& \left. \left\{2Y_p+\epsilon({Y_n}^2 +{Y_p}^2)\right\}X_6\right] \,, \label{UnpsSka} \end{eqnarray} where \begin{eqnarray} X_1 &=& \frac{1}{4} \left[ t_1 \left(1+\frac{x_1}{2}\right) + t_2 \left(1+\frac{x_2}{2} \right) \right] \nonumber\\ X_2 &=& \frac{1}{4} \left[t_2 \left(\frac{1}{2}+x_2 \right)-t_1\left( \frac{1}{2}+x_1 \right) \right] \nonumber\\ X_3 &=& \frac{t_0}{2} \left(1+\frac{x_0}{2}\right)\,; \quad X_4 = -\frac{t_0}{2} \left(\frac{1}{2}+x_0\right) \nonumber\\ X_5 &=& \frac{t_3}{12} \left(1+\frac{x_3}{2} \right)\,; \quad X_6 = -\frac{t_3}{12} \left(\frac{1}{2}+x_3 \right)\,.
\end{eqnarray} From Eq. (\ref{effmi}), the density-dependent Landau effective masses are \begin{eqnarray} \frac {m_i^*}{m} = \left[ 1 + \frac {2m}{\hbar^2} (X_1+Y_iX_2)n \right]^{-1} \,. \label{effmSka} \end{eqnarray} The single-particle spectra therefore have the same structure as in Eq. (\ref{newspectranp}), but with the potential terms $V_i(n)$ inferred from Eq. (\ref{UnpsSka}). \section{ZERO TEMPERATURE PROPERTIES} At temperature $T=0$, nucleons are restricted to their lowest available quantum states. Therefore, the Fermi-Dirac distribution functions that appear in the integrals of the number density and the kinetic energy density become step functions: \begin{eqnarray} n_{k_i} & = & \theta(\epsilon_{F_i}-\epsilon_{k_i}), \end{eqnarray} where $\epsilon_{F_i}$ is the energy at the Fermi surface for species $i$. Consequently, \begin{eqnarray} n_i & = & \frac{1}{\pi^2}\int_0^{k_{Fi}}k_i^2dk_i = \frac{k_{Fi}^3}{3\pi^2} \\ \tau_i & = & \frac{1}{\pi^2}\int_0^{k_{Fi}}k_i^4dk_i = \frac{k_{Fi}^5}{5\pi^2} = \frac 35 n_ik_{Fi}^2 \,. \end{eqnarray} Thus, the kinetic energy densities can be written as simple functions of the number density $n$ and the proton fraction $x$: \begin{eqnarray} \tau_p & = & \frac{1}{5\pi^2}(3\pi^2n_p)^{5/3} = \frac{1}{5\pi^2}(3\pi^2nx)^{5/3} \\ \tau_n & = & \frac{1}{5\pi^2}(3\pi^2n_n)^{5/3} = \frac{1}{5\pi^2}(3\pi^2n(1-x))^{5/3}. \end{eqnarray} We can therefore write \[\mathcal{H}(n_p,n_n,\tau_p,\tau_n;T=0)=\mathcal{H}(n,x)\,,\] and use standard thermodynamic relations to get the various quantities of interest, some examples of which are listed below, beginning with $x=1/2$ for isospin symmetric nuclear matter. General expressions for arbitrary $x$ are provided in Appendix B. \subsection{Isospin symmetric nuclear matter} \subsection*{The APR Hamiltonian} It is convenient to write $\mathcal{H}_{APR}$ as the sum of a kinetic part $\mathcal{H}_k$, a part consisting of momentum-dependent interactions $\mathcal{H}_m$, and a density-dependent interactions part $\mathcal{H}_d$. The energy per particle of symmetric nuclear matter $E$ can then be similarly decomposed as \begin{equation} E \equiv \frac{\mathcal{H_{APR}}}{n} = {E_k} + {E_m} + {E_d} \,, \end{equation} where \begin{eqnarray} {E_k} &=& \frac 35 \frac {\hbar^2k_F^2}{2m}\,; \quad k_F = (3\pi^2n/2)^{1/3} \nonumber \\ {E_m} &=& \frac 35 nk_F^2e^{-p_4 n} (p_3 + p_5/2 ) \nonumber \\ {E_{dL}} &=& \frac {g_{1L}}{n} \,, \quad E_{dH} = \frac{g_{1H}}{n} = E_{dL}+\frac{\delta g_1}{n} \,. \end{eqnarray} The corresponding pressure is \begin{eqnarray} P &=& n^2\frac{\partial E}{\partial n} = P_k + P_m + P_d \nonumber \\ P_k &=& \frac 23 n {E_k}\,, \quad P_m = \left( \frac 53 - p_4n \right) n {E_m} \nonumber \\ P_{dL} &=& n \left( {E_{dL}} + f_{1L} \right) \nonumber \\ P_{dH} &=& P_{dL}-\delta g_1+n\delta f_1 \,. \end{eqnarray} The nucleon chemical potential takes the form \begin{eqnarray} \mu &=& \frac {\partial \mathcal {H}}{\partial n} = \mu_k + \mu_m + \mu_d \nonumber \\ \mu_k &=& \frac 53 {E_k} = \frac {\hbar^2k_F^2}{2m} \nonumber\\ \mu_m &=& \left( \frac 83 - p_4 n \right) E_m = \frac 35 \left( p_3 + \frac {p_5}{2} \right) \left( \frac 83 - p_4 n \right) nk_F^2e^{-p_4 n} \nonumber \\ \mu_{dL} &=& \frac {dg_{1L}}{dn}\,,\quad \mu_{dH} = \mu_{dL}+\delta f_1 \,.
\end{eqnarray} The inverse susceptibility is given by \begin{eqnarray} \chi^{-1 } &=& \frac {\partial\mu}{\partial n} = \chi_k^{-1 } + \chi_m^{-1 } + \chi_d^{-1 } \nonumber \\ \chi_k^{-1 } &=& \frac 23 \frac {\mu_k}{n} \nonumber \\ \chi_m^{-1 } &=& -p_4\mu_m + \frac 85 \left( p_3 + \frac {p_5}{2} \right) k_F^2 e^{-p_4n} \left( \frac 53 - p_4 n \right) \nonumber \\ \chi_{dL}^{-1 } &=& \frac {2g_{1L}}{n^2} + \frac {4f_{1L}}{n} + h_{1L} \nonumber \\ \chi_{dH}^{-1} &=& \chi_{dL}^{-1}+\delta h_1 \,, \end{eqnarray} where \begin{eqnarray} h_{1L} = \frac{df_{1L}}{dn} - \frac{2f_{1L}}{n} \end{eqnarray} and $\delta h_1$ can be found in Appendix B. The nuclear matter incompressibility is given by \begin{eqnarray} K &=& 9\frac{dP}{dn} = K_k + K_m + K_d \nonumber \\ K_k &=& 10 {E_k} = 6 \frac {\hbar^2k_F^2}{2m} \nonumber \\ K_m &=& \left( 40-48p_4n + 9 p_4^2n^2 \right) {E_m} \nonumber \\ K_{dL} &=& 18 {E_{dL}} + 9 \left[ 4f_{1L} + n h_{1L} \right] \nonumber \\ K_{dH} &=& K_{dL} + 9 n \delta h_1 \,. \end{eqnarray} The speed of sound can be written in terms of $\mu$ and $K$ or $\chi^{-1}$ as \begin{equation} \left(\frac{c_s}{c}\right)^2 = \frac{K}{9(\mu+m)} = \frac{n\chi^{-1}}{\mu+m} \label{cs} \end{equation} From this relation, it can be shown that the APR model becomes acausal ($c_s/c = 1$) at $n = 0.841$ fm$^{-3}$ in the case of symmetric matter. The speed of sound $c_s$ and the response functions $K$ and $\chi$ are all associated with density fluctuations. Evidently, they are not independent of each other (relationships between them in the case of general asymmetry are given in Appendix B). Each quantity, however, is useful in its own right for a number of applications. For example, $c_s$ is necessary in implementing causality (see Appendix E), $K$ is essential to the calculation of the liquid-gas phase transition (Sec. V), and $\chi$ is required in the numerical scheme by which the mixed-phase region, at the onset of pion condensation, is constructed (Sec. VI). At finite temperature, this group also includes the specific heats at constant volume and pressure, $C_V$ and $C_P$. The latter can be used to identify phase transitions, address causality at finite $T$ and, furthermore, are related to hydrodynamic time-scales, as in the collapse to black holes. \subsection*{The Skyrme Hamiltonian} Similarly to the APR Hamiltonian, we write $\mathcal{H}_{Ska}$ as the sum of a kinetic part $\mathcal{H}_k$, momentum-dependent interactions $\mathcal{H}_m$, and density-dependent interactions $\mathcal{H}_d$. The energy per particle is then given by \begin{equation} E \equiv \frac{\mathcal{H}_{Ska}}{n} = {E_k} + {E_m} + {E_d} \,, \end{equation} where \begin{eqnarray} {E_k} &=& \frac 35 \frac {\hbar^2k_F^2}{2m}\,, \quad {E_m} = \frac 35 nk_F^2 \left(X_1+\frac{1}{2}X_2 \right)\, \nonumber \\ {E_d} &=& n \left[X_3+\frac{1}{2}X_4+n^\epsilon \left( X_5 + \frac{1}{2}X_6 \right) \right] \,. \end{eqnarray} Contributions to the pressure arise from \begin{eqnarray} P_k &=& \frac 23 n {E_k}\,, \quad P_m = \frac 53 n {E_m} \nonumber \\ P_d &=& n \left[ {E_d} + \epsilon n^{\epsilon+1}\left(X_5+\frac 12 X_6 \right) \right] \,. \end{eqnarray} The nucleon chemical potential receives contributions from \begin{eqnarray} \mu_k &=& \frac 53 {E_k} \,, \quad \mu_m = \frac 83 {E_m} \nonumber \\ \mu_d &=& 2 {E_d}+\epsilon\left(X_5+\frac 12 X_6\right)n^{\epsilon+1} \,.
\end{eqnarray} The inverse susceptibility is composed of terms involving \begin{eqnarray} \chi_k^{-1 } &=& \frac 23 \frac {\mu_k}{n}\,, \quad \chi_m^{-1 } = \frac 53 \frac {\mu_m}{n} \nonumber \\ \chi_d^{-1 } &=& \frac{\mu_d}{n}+\epsilon\left(2+\epsilon\right)\left(X_5+\frac 12 X_6\right)n^\epsilon \,. \end{eqnarray} The nuclear matter incompressibility is determined by the terms \begin{eqnarray} K_k &=& 10 {E_k}\,, \quad K_m = 40 {E_m} \nonumber \\ K_d &=& 18 {E_d} + 9 \epsilon\left(\epsilon+3\right)n^{1+\epsilon}\left[X_5+\frac 12 X_6\right] \,. \end{eqnarray} Combining the above results with Eq. (\ref{cs}), we find that Ska violates causality for baryon densities above $n = 1.028$ fm$^{-3}$. \subsection{Isospin asymmetric matter} Here, we focus on the energetics of matter with neutron excess, beginning with some general considerations that are model independent. The neutron-proton asymmetry is commonly characterized by the parameter $\alpha = (n_n-n_p)/n$, which is connected to the proton fraction $x$ through the simple relation $\alpha=1-2x$. The expansion of the energy per particle $E(n,\alpha) = \mathcal{H}/n$ of isospin asymmetric matter in powers of $\alpha$ is given by \begin{eqnarray} E(n,\alpha) &=& E(n,0)+\sum_{l=2,4,\ldots}S_l(n)\alpha^l \label{Ealpha} \end{eqnarray} where \begin{equation} S_l = \left.\frac{1}{l!}\frac{\partial^l E(n,\alpha)}{\partial \alpha^l}\right|_{\alpha=0} ~~;~~l=2,4,\ldots \end{equation} Similarly, the pressure of isospin-asymmetric matter can be written as \begin{eqnarray} P(n,\alpha) &=&n^2\frac{\partial E(n,\alpha)}{\partial n} \\ &=& P(n,0)+\frac{n}{3}\sum_{l=2,4,\ldots}L_l(n)\alpha^l \label{Palpha} \end{eqnarray} where \begin{equation} L_l = 3n\frac{dS_l(n)}{dn}\,. \end{equation} Evaluating Eqs. (\ref{Ealpha})-(\ref{Palpha}) for pure neutron matter at the saturation density $n_0$ of symmetric matter to $O(\alpha^2)$ gives \begin{eqnarray} E(n_0,1) &\simeq& E_0 +S_v \\ P(n_0,1) &\simeq& \frac{Ln_0}{3} \end{eqnarray} where $E_0 = E(n_0,0)$ is the saturation energy of nuclear matter, $S_v=S_2(n_0)$ is its symmetry energy parameter, which characterizes the energy cost involved in restoring isospin symmetry from small deviations, and $L=L_2(n_0)$ is its stiffness parameter. By the definition of $n_0$, $P(n_0,0)=0$. Only even powers of $\alpha$ survive in the two series in Eqs. (\ref{Ealpha}) and (\ref{Palpha}) above because the two nucleon species are treated symmetrically in the Hamiltonian. Furthermore, due to the near-complete isospin invariance of the nucleon-nucleon interaction, the density dependent potential terms are generally carried only up to $O(\alpha^2)$; that is, $S_l(n)$ and $L_l(n)$ for $l>2$ receive contributions just from the kinetic energy and the momentum-dependent interactions. Finally, as demonstrated in Refs. \cite{lagaris81, wff88, bombaci91, gerry98}, $S_2(n)\gg S_4(n),S_6(n),\ldots$ and hence coefficients with $l=2$ suffice in describing bulk matter even when $\alpha \sim 1$.
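The Ska expressions above admit a direct numerical check against the saturation properties tabulated later (Table \ref{satprops}): with the Table II parameters, the sum $E_k+E_m+E_d$ for symmetric matter should return $E_0\simeq-16$ MeV at $n_0\simeq0.155~{\rm fm}^{-3}$. The sketch below performs this consistency test only; it is not part of the tabulation.
\begin{verbatim}
# Consistency check: Ska energy per particle of symmetric matter at n_0.
import numpy as np

HBARC, M = 197.327, 939.0
H2_2M = HBARC**2/(2.0*M)                 # hbar^2/2m [MeV fm^2]

t0, t1, t2, t3 = -1602.78, 570.88, -67.7, 8000.0
x0, x1, x2, x3, eps = -0.02, 0.0, 0.0, -0.286, 1.0/3.0

X1 = 0.25*(t1*(1 + x1/2) + t2*(1 + x2/2))
X2 = 0.25*(t2*(0.5 + x2) - t1*(0.5 + x1))
X3, X4 = 0.5*t0*(1 + x0/2), -0.5*t0*(0.5 + x0)
X5, X6 = (t3/12)*(1 + x3/2), -(t3/12)*(0.5 + x3)

def E_sym(n):
    kF2 = (1.5*np.pi**2*n)**(2.0/3.0)    # kF^2 for symmetric matter
    Ek = 0.6*H2_2M*kF2
    Em = 0.6*n*kF2*(X1 + 0.5*X2)
    Ed = n*(X3 + 0.5*X4 + n**eps*(X5 + 0.5*X6))
    return Ek + Em + Ed

print(E_sym(0.155))                      # approximately -16 MeV
\end{verbatim}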
While the full calculations are rather involved, the dominance of $S_2(n)$ can be illustrated in a simple manner by turning to the isospin-asymmetric free gas, whose kinetic energy, measured relative to that of symmetric matter, can be expressed as \begin{eqnarray} E^{kin} = \frac{3}{5} E_F~\left[ \frac{1}{2} \left\{ (1+\alpha)^{5/3} + (1-\alpha)^{5/3} \right\} - 1 \right]\,, \end{eqnarray} where \begin{eqnarray} E_F = \frac{\hbar^2k_F^2}{2m} = \frac{\hbar^2}{2m}\left(\frac{3\pi^2n}{2}\right)^{2/3} \end{eqnarray} is the Fermi energy of non-interacting nucleons in symmetric nuclear matter. Through a Taylor expansion of terms involving $\alpha$ (terms in odd powers of $\alpha$ canceling), the various contributions from the kinetic energy are \begin{eqnarray} S_2^{kin}(n) = \frac{1}{3} E_F,~ S_4^{kin}(n) = \frac{1}{81} E_F,~ S_6^{kin}(n) = \frac{7}{2187}E_F\ldots \nonumber \\ \label{Skins} \end{eqnarray} the series converging rapidly to the exact result of $(3E_F/5)~(2^{2/3} -1)$. At the empirical nuclear equilibrium density of $n_0=0.16~{\rm fm}^{-3}$, $S_2^{kin}(n_0) \simeq 12.28$ MeV, whereas its associated stiffness parameter is $L^{kin} = (2/3)E_{F_0} \simeq 24.56$ MeV. As mentioned earlier, in the presence of interactions, $S_4(n),S_6(n),\ldots$ are modified solely by the momentum-dependent terms which, predominantly, give rise to the effective mass while preserving the relative sizes of the $S_l$'s and their derivatives. (For APR, at $n_0$, $S_2/S_4 \simeq 35$ and $L_2/L_4 \simeq 18$, whereas for Ska, $S_2/S_4 \simeq 29$ and $L_2/L_4 \simeq 17$.) Thus, we can write \begin{equation} P(n,\alpha) \simeq n^2\left[E'(n,0)+\alpha^2S_2'(n)\right] \label{pnalpha} \end{equation} where the primes denote derivatives with respect to the density $n$. By expanding $E'(n,0)$ and $S_2'(n)$ about the saturation density $n_0$ of symmetric matter (noting that $E'(n_0,0)=0$), we obtain \begin{eqnarray} E'(n,0) &\simeq& \frac{K_0}{9n_0}\delta + \frac{Q_0}{54n_0}\delta^2 \label{en0}\\ S_2'(n) &\simeq& \frac{L}{3n_0} + \frac{K_{S_2}}{9n_0}\delta + \frac{Q_{S_2}}{54n_0}\delta^2 \label{s2pr} \end{eqnarray} where $\delta = (n/n_0)-1$, and \begin{eqnarray} K_0 &=& 9n_0^2\left.\frac{d^2E(n,0)}{dn^2}\right|_{n_0}\,, \quad Q_0 = 27n_0^3\left.\frac{d^3E(n,0)}{dn^3}\right|_{n_0} \\ L &=& 3n_0\left.\frac{dS_2(n)}{dn}\right|_{n_0}\,, \qquad K_{S_2} = 9n_0^2\left.\frac{d^2S_2(n)}{dn^2}\right|_{n_0} \\ Q_{S_2} &=& 27n_0^3\left.\frac{d^3S_2(n)}{dn^3}\right|_{n_0} \end{eqnarray} The skewness $\mathcal{S}$ is related to $K_0$ and $Q_0$ via \begin{eqnarray} \mathcal {S} &=& k_F^3\left.\frac{d^3 E}{dk_F^3}\right|_{\alpha=0,n_0} = 6 K_0 + Q_0 \end{eqnarray} and the symmetry term $K_{\tau}$ of the liquid drop formula for the isospin asymmetric incompressibility~\cite{kt} is related to $L$, $K_0$, $Q_0$, and $K_{S_2}$ via \begin{eqnarray} K_{\tau} &=& K_{S_2} - 6L -\frac{LQ_0}{K_0}. \end{eqnarray} At the equilibrium density $n_{0\alpha}$ of isospin asymmetric matter, \begin{equation} P(n_{0\alpha},\alpha) = 0 = E'(n_{0\alpha},0)+S_2'(n_{0\alpha})\alpha^2 . \label{Pn0alpha} \end{equation} The insertion of Eqs. (\ref{en0})-(\ref{s2pr}) into Eq. (\ref{Pn0alpha}), while retaining terms up to $O(\delta)$, leads to \cite{blaizot81,prak85} \begin{equation} \delta_{\alpha} \equiv \frac{n_{0\alpha}}{n_0}-1 = - \frac {3L}{K_0}\alpha^2 \equiv -C\alpha^2 \label{delal} \end{equation} to lowest order in $\alpha^2$. This relation allows us to trace the loci of the minima of the energy per particle for changing asymmetries.
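A short numerical check of Eq. (\ref{Skins}) above (a sketch; only the quoted coefficients enter) shows that the truncated series at $\alpha=1$ is already within about $1\%$ of the exact free-gas limit.
\begin{verbatim}
# Convergence check of the kinetic-energy expansion, in units of E_F.
def e_kin(alpha):
    """Asymmetry part of the free-gas kinetic energy per particle / E_F."""
    return 0.6*(0.5*((1 + alpha)**(5/3) + (1 - alpha)**(5/3)) - 1.0)

series = (1/3)*1.0**2 + (1/81)*1.0**4 + (7/2187)*1.0**6   # S_2, S_4, S_6
print(series)        # 0.3489
print(e_kin(1.0))    # 0.3524 = (3/5)(2^(2/3) - 1), the exact limit
\end{verbatim}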
Further improvement to cover higher values of $\alpha$ requires keeping terms to $O(\delta^2)$ in Eqs. (\ref{en0})-(\ref{s2pr}): \begin{eqnarray} \delta_{\alpha} &=& \frac{3K_0}{Q_0}\frac{\left(1+\frac{K_{S_2}}{K_0}\alpha^2\right)} {\left(1+\frac{Q_{S_2}}{Q_0}\alpha^2\right)} \nonumber \\ &\times& \left\{-1+\left[1-\frac{2LQ_0\alpha^2\left(1+\frac{Q_{S_2}}{Q_0}\alpha^2\right)} {K_0^2\left(1+\frac{K_{S_2}}{K_0}\alpha^2\right)^2}\right]^{1/2} \right\}. \label{delal2} \end{eqnarray} In this expression, we have discarded terms involving $L_4$ because, as we mentioned earlier, these are very small and make no significant contributions. Additionally, for APR, $K_{S_2}/K_0 \sim 0.4$ and $Q_{S_2}/Q_0 \sim -1.2$. The large $(>1)$ magnitude of $|Q_{S_2}/Q_0|$ means that for $\alpha \ge 0.7$ (which was the reason for going beyond $\alpha^2$ in the first place), we incur significant error upon expanding Eq. (\ref{delal2}) in a Taylor series in $\alpha$. This problem does not arise for Ska, where $K_{S_2}/K_0 \sim 0.3$ and $Q_{S_2}/Q_0 \sim -0.6$. In the latter case, Eq. (\ref{delal2}) can be reduced to the simple form \begin{equation} \delta_{\alpha} = -\frac{3L}{K_0}\alpha^2\left[1+\left(\frac{Q_0L}{2K_0^2} -\frac{K_{S_2}}{K_0}\right)\alpha^2\right]. \label{delez} \end{equation} We stress that Eq. (\ref{delez}) is applicable only in situations where $|K_{S_2}/K_0|$ and $|Q_{S_2}/Q_0|$ are much smaller than 1. If this condition does not hold (such as in APR), the more general expression (\ref{delal2}) must be used. Finally, we calculate the incompressibility at the saturation density $n_{0\alpha}$ of asymmetric matter in terms of symmetric matter equilibrium properties, to $O(\alpha^2)$ (see also Refs. \cite{blaizot81,prak85}). Using Eq. (\ref{pnalpha}) we get, for general $n$, \begin{eqnarray} K(n,\alpha) &=& 9 \frac{\partial P(n,\alpha)}{\partial n} \\ &=& K(n,0)\left(1+A(n)\alpha^2\right) \end{eqnarray} where \begin{eqnarray} K(n,0) &=& 9\left[2nE'(n,0)+n^2E''(n,0)\right] \\ A(n) &=& \frac{9}{K(n,0)}\left[2nS_2'(n)+n^2S_2''(n)\right] \end{eqnarray} At $n=n_{0\alpha}$, \begin{eqnarray} K(n_{0\alpha},0) &\simeq& K(n_0,0)+\left.\frac{dK(n,0)}{dn}\right|_{n_0}(n_{0\alpha}-n_0) \\ &=& K_0 + \left(4K_0+\frac{Q_0}{3}\right)\delta_{\alpha} \\ &\simeq& K_0\left[1-\frac{12L}{K_0}\left(1+\frac{Q_0}{12K_0}\right)\alpha^2\right] \\ &\equiv& K_0(1+B\alpha^2) \end{eqnarray} and \begin{eqnarray} A(n_{0\alpha}) &\simeq& \frac{9}{K_0}\left(2n_0\left.\frac{dS_2(n)}{dn}\right|_{n_0} +n_0^2\left.\frac{d^2S_2(n)}{dn^2}\right|_{n_0}\right) \\ &=& \frac{9}{K_0}\left(2n_0\frac{L}{3n_0}+n_0^2\frac{K_{S_2}}{9n_0^2}\right) \\ &=& \frac{6L}{K_0}\left(1+\frac{K_{S_2}}{6L}\right) \equiv A. \end{eqnarray} Hence, to ${\cal{O}}(\alpha^2)$, \begin{eqnarray} K(n_{0\alpha},\alpha) &\simeq& K_0[1+(A+B)\alpha^2] \\ &\equiv& K_0(1+\tilde A \alpha^2)\,, \end{eqnarray} where the coefficient $A$ represents modifications to the incompressibility evaluated at $n_0$ due to changing asymmetry, whereas the coefficient $B$ encodes alterations due to the shift of the saturation point of matter as the asymmetry varies. \subsection{Results and analysis} In this section, the zero temperature results obtained from the APR and Ska Hamiltonians are presented. Columns 2 and 3 in Table \ref{satprops} contain the key symmetric nuclear matter properties for both models at their respective equilibrium densities (nearly the same).
Note that while the energy per particle $E(n_0) \equiv E_0$ and the compression modulus $K_0$ for both models are similar, the effective masses $m_0^*/m$ are somewhat different near nuclear densities. Significant differences are seen in the skewness parameters $\mathcal {S}$, that of the Ska model being substantially larger than that of the APR model at its equilibrium density. \\ \begin{table}[!h] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & APR & Ska & Experiment & Reference\\ \hline $n_0$ (fm$^{-3}$) & 0.160 & 0.155 & $0.17\pm0.02$ & \cite{day78,jackson74,myers66,myers96} \\ $E_0$ (MeV) & -16.00 & -15.99 & $-16\pm1$ & \cite{myers66,myers96} \\ $K_0$ (MeV) & 266.0 & 263.2 & $230\pm30$ & \cite{Garg04,Colo04} \\ &&&$240\pm20$ &\cite{shlomo06}\\ $Q_0$ (MeV) & -1054.2 & -300.2 & $-700\pm500$ & \cite{Farine97} \\ $S_v$ (MeV) & 32.59 & 32.91 & 30-35 & \cite{L,tsang12} \\ $L$ (MeV) & 58.46 & 74.62 & 40-70 & \cite{L,tsang12} \\ $K_{S_2}$ (MeV) & -102.6 & -78.46 & $-100\pm200$ & This work \\ $Q_{S_2}$ (MeV) & 1217.0 & 174.5 & ? & \\ $\mathcal{S}$ (MeV)& 541.8 & 1278.9 & $680\pm530$& This work \\ $m_0^*/m$ & 0.70 & 0.61 & $0.8\pm0.1$ & \cite{bohigas79,krivine80} \\ \hline \end{tabular} \caption[Saturation properties of symmetric nuclear matter.]{ Entries in this table are at the equilibrium density $n_0$ of symmetric nuclear matter for the APR and Ska models. $E_0$ is the energy per particle, $K_0$ is the compression modulus, $Q_0$ is related to the third derivative of $E$, $\mathcal{S}$ is the skewness, $m_0^*/m$ is the ratio of the Landau effective mass to the mass in vacuum, $S_v$ is the nuclear symmetry energy parameter, and $L$, $K_{S_2}$, and $Q_{S_2}$ are related to the first, second, and third derivatives of the symmetry energy, respectively.} \label{satprops} \end{center} \end{table} \begin{figure*}[htb] \centering \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=10cm]{PLOTS/APR_Ms.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=10cm]{PLOTS/Ska_Ms.pdf} \end{minipage} \vskip -0.5cm \caption{Left panel: Ratios of the neutron (solid) and proton (dotted) Landau effective masses to the vacuum mass versus baryon density $n$ for the APR model from Eq. (\ref{effmAPR}). Right panel: Same as the left panel but for the Ska model from Eq. (\ref{effmSka}). Values of the proton fraction $x$ are as indicated in the figure.} \label{APRSKA_Ms} \end{figure*} Among the most important quantities to be discussed are the nucleon Landau effective masses, as they are critical to the thermal properties of the equation of state. We show ratios of the neutron and proton Landau effective masses to the vacuum mass versus baryon density $n$ for values of $x=0.5,~0.3$, and $0.1$ in Fig. \ref{APRSKA_Ms}. The left panel is for the APR model from Eq. (\ref{effmAPR}) and the right panel contains similar results for the Ska model from Eq. (\ref{effmSka}). At the equilibrium density $n_0$ of symmetric nuclear matter, $m^*_0$ for Ska is smaller than for APR, and since $|X_2|<2X_1$ and $|p_5|<2p_3$, $m^*$ is also smaller for Ska at every $x$ at $n_0$. Therefore, defining $a_{Ska}=X_1+Y_iX_2$ and $a_{APR}=p_3+Y_ip_5$, we must have $a_{Ska}>a_{APR}e^{-p_4n_0}$ for any $Y_i\in[0,1]$ from Eqs. (\ref{effmAPR}) and (\ref{effmSka}). It then follows from $p_4>0$ that $m^*_i$ is smaller for Ska at all densities for every value of $x\in[0,1]$ and for both neutrons and protons.
Furthermore, since $p_5<0$ and $X_2<0$, we have that $m^*_n(n,x)>m^*(n,x=1/2)>m^*_p(n,x)$ for $n>0$ and $x<1/2$. \begin{table}[!h] \begin{center} \begin{tabular}{|c|c|c||c|c|} \hline $n$ (fm$^{-3}$) &AP(SNM)& APR(SNM) &AP(PNM)& APR(PNM) \\ \hline 0.04 & -6.48 & -5.63 & 6.45 & 6.42 \\ 0.08 & -12.13 & -11.56 & 9.65 & 9.58 \\ 0.12 & -15.04 & -14.98 & 13.29 & 13.28 \\ 0.16 & -16.00 & -16.00 & 17.94 & 17.99 \\ 0.20 & -15.09 & -15.16 & 22.92 & 23.57 \\ 0.24 & -12.88 & -12.96 & 27.49 & 28.04 \\ 0.32 & -5.03 & -5.14 & 38.82 & 39.41 \\ 0.40 & 2.13 & 2.62 & 54.95 & 54.72 \\ 0.48 & 15.46 & 15.14 & 75.13 & 74.59 \\ 0.56 & 34.39 & 32.92 & 99.74 & 99.45 \\ 0.64 & 58.35 & 56.22 & 127.58 & 129.57 \\ 0.80 & 121.25 & 119.97 & 205.34 & 206.22 \\ 0.96 & 204.02 & 207.14 & 305.87 & 305.06 \\ \hline \end{tabular} \caption{AP vs APR energies in MeV for symmetric nuclear matter (SNM) and pure neutron matter (PNM) extracted from Ref. \cite{apr}.} \label{APvsAPR} \end{center} \end{table} \begin{figure}[!h] \begin{center} \includegraphics[width=10cm]{PLOTS/APRSKA_MxT_EoA_0T.pdf} \end{center} \vskip -1cm \caption{Zero temperature energy per particle $E$ versus baryon number density for the APR (solid curves) using Eqs. (\ref{EA1})-(\ref{EA2}) and Ska (dashed curves) models at the indicated values of the proton fraction $x$. The crosses on the APR curve for $x=1/2$ show values from column 6 of Table VI in Ref.~\cite{apr}. Although not shown here, we have verified that similar agreement is obtained with the APR results in column 5 of Table VII in Ref.~\cite{apr} for pure neutron matter ($x=0$). The cusps in the APR curves are due to the onset of neutral pion condensation.} \label{APRSKA_0T_EoA} \end{figure} Figure \ref{APRSKA_0T_EoA} shows the energy per particle $E$ as a function of baryon density $n$ for values of $x=0.5,~0.3$ and $0.1$ for the two models. Our calculated APR results (solid curves) agree well with those tabulated in Tables VI and VII of Ref.~\cite{apr} (shown by crosses for $x=0.5$ in this figure). We also contrast the microscopic AP results for pure neutron matter and symmetric nuclear matter with those obtained from the APR fit in Table \ref{APvsAPR}. (As noted in the introduction, results below $n\simeq 0.1~{\rm {fm}^{-3}}$ can be used to establish differences from the inhomogeneous phase of supernova matter containing nuclei, light nuclear clusters, etc.) The asterisks in Fig. \ref{APRSKA_0T_EoA} show the densities at which the transition from the low density phase (LDP) to the high density phase (HDP) occurs due to pion condensation. While there is good agreement between the results of the two models up to and slightly beyond the equilibrium density, the Ska model is seen to have both higher energies and pressures (slopes of the energy) than the APR model at high densities for all values of $x$. This feature essentially stems from the emergence of the pion condensate in the HDP of APR, which softens the corresponding EOS. Both equations of state become acausal at high densities; a scheme to retain causality will be outlined later. Rows 5 and 6 in Table \ref{satprops} list the symmetry energy $S_v$ and its slope parameter $L$ for the two models. Although the values of $S_v$ for the two models are similar, those of $L$ differ significantly. The higher value of $L$ for the Ska model leads to a greater energy and pressure of isospin asymmetric matter than for the APR model near nuclear saturation densities, a feature that persists to higher densities.
\\ The density dependent symmetry energy $S_2(n)$ can in general be written as $S_2 = S_{2k} + S_{2m} + S_{2d}$, with $S_{2k}$ as in Eq. (\ref{Skins}). Contributions from the momentum-dependent and density-dependent parts, $S_{2m}$ and $S_{2d}$, depend on the model used. For the APR model, \begin{eqnarray} S_{2m} &=& \frac 13 k_F^2 n e^{-p_4n} \left( p_3 + 2p_5 \right)\,, \nonumber \\ S_{2d} &=& \frac 1n (-g_1 + g_2) \,, \label{S2mAPR} \end{eqnarray} whereas for the Ska model \begin{eqnarray} S_{2m} = \frac 13 k_F^2 n (X_1+2X_2)~{\rm and}~ S_{2d} = \frac n2 (X_4 + X_6 n^\epsilon) \,. \label{S2mSKA} \end{eqnarray} Note that the terms $S_4(n)$ and $S_6(n)$ receive contributions from the momentum-dependent interaction part as well because of terms involving $n_i \tau_i$ in the $\mathcal {H}$'s of Eqs. (\ref{HAPR}) and (\ref{HSKA}). Explicitly, \begin{eqnarray} S_{4m} &= & \frac{1}{3^4} k_F^2 n e^{-p_4n} \left(p_3-p_5\right)\,, \nonumber \\ S_{6m} & =& \frac{7}{3^7} k_F^2 n e^{-p_4n} \left(p_3-\frac 15 p_5\right)\, \label{S46APR} \end{eqnarray} for the APR model, and for the Ska model \begin{eqnarray} S_{4m} &= & \frac{1}{3^4} k_F^2 n \left(X_1-X_2\right)\,, \nonumber \\ S_{6m} & =& \frac{7}{3^7} k_F^2 n \left(X_1-\frac{2}{5} X_2\right)\,. \label{S46SKA} \end{eqnarray} Fig. \ref{APR_SymE} examines the extent to which the functions $S_2(n)$ (which we call the symmetry energy), $S_4(n)$ and $S_6(n)$ from Eqs. (\ref{Skins}), (\ref{S46APR}), and (\ref{S46SKA}) contribute to the difference between the pure neutron matter and nuclear matter energies, $\Delta E(n)=E(n,\alpha=1) - E(n,\alpha=0)$ (for which we reserve the term ``asymmetry energy''). The left (right) panel shows results for the APR (Ska) model. The symmetry energy $S_2(n)$ adequately accounts for the total $\Delta E(n)$ up to twice $n_0$. However, for densities well in excess of $n_0$, contributions from $S_4(n),~S_6(n),~\cdots$ become important, although $S_2(n)$ remains dominant. The jumps in the symmetry energies for APR at $n=p_{19}=0.32$ fm$^{-3}$ (at which the transition from the LDP to the HDP occurs for $x=0.5$) are due to the definitions of $S_2(n),~S_4(n),~S_6(n),~\cdots$, which involve derivatives taken at $x=0.5$. Because the transition to the HDP occurs at lower values of $n$ as $x$ decreases toward $x=0$, the conventional definitions of $S_2(n),~S_4(n),~S_6(n),~\cdots$ fail to capture the true behavior of $\Delta E(n)$ in the presence of a phase transition. That is to say, \begin{equation} S(n) \equiv \sum_{l=2,4,\ldots}S_l(n) \ne \Delta E(n) \end{equation} in the vicinity of a phase transition driven by density and composition, regardless of the order to which the sum is carried out. \begin{figure*}[htb] \centering \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=10cm]{PLOTS/APR_SymE_wMaxwell.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=10cm]{PLOTS/Ska_SymE.pdf} \end{minipage} \vskip -0.5cm \caption{Left panel: Symmetry energies for APR (from Eqs. (\ref{s_kin}), (\ref{S2mAPR}), and (\ref{S46APR})) vs baryon density $n$. Right panel: Same as in the left panel but for Ska (Eqs. (\ref{s_kin}), (\ref{S2mSKA}), and (\ref{S46SKA})).} \label{APR_SymE} \end{figure*} Results for the coefficients $A,~B,~C,$ and $\tilde A$ that describe the isospin asymmetry dependence to $\mathcal {O}(\delta_{\alpha})$ of the equilibrium density and compression moduli for the APR and Ska models are displayed in Table \ref{Asymcoeffs}.
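The hierarchy $S_2 \gg S_4 \gg S_6$ discussed above is simple to illustrate numerically from Eqs. (\ref{Skins}), (\ref{S2mSKA}), and (\ref{S46SKA}). In the sketch below, the momentum-dependence strengths $X_1$ and $X_2$ are hypothetical placeholders (not the fitted Ska constants, which are not reproduced here), and the density-dependent piece $S_{2d}$ is omitted:
\begin{verbatim}
import numpy as np

HB2_2M = 20.72                  # hbar^2/2m in MeV fm^2
X1, X2 = 60.0, -20.0            # placeholder strengths, MeV fm^5

def kF2(n):                     # squared Fermi momentum of SNM, fm^-2
    return (1.5 * np.pi**2 * n)**(2.0 / 3.0)

def EF(n):                      # Fermi energy, MeV
    return HB2_2M * kF2(n)

for n in (0.16, 0.32, 0.48):
    k2n = kF2(n) * n
    S2 = EF(n) / 3.0          + k2n * (X1 + 2.0 * X2) / 3.0
    S4 = EF(n) / 81.0         + k2n * (X1 - X2) / 81.0
    S6 = 7.0 * EF(n) / 2187.0 + 7.0 * k2n * (X1 - 0.4 * X2) / 2187.0
    print(f"n = {n:.2f} fm^-3:  S2 = {S2:6.2f}  S4 = {S4:5.2f}  "
          f"S6 = {S6:6.3f}  MeV")
\end{verbatim}
Even at $3n_0$, the quartic and sextic terms remain at the level of a few percent of $S_2$ in this illustration, consistent with the behavior seen in Fig. \ref{APR_SymE}.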
Since asymmetry lowers the equilibrium density, transitions occurring at supra-nuclear densities do not affect the entries in Table \ref{Asymcoeffs}. One observes that even though $\mathcal{H}_{APR}$ and $\mathcal{H}_{Ska}$ are calibrated to very similar values of the symmetry energy and the compression modulus, these asymmetry coefficients vary significantly. \begin{table}[h] \begin{center} \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|c|c|c|} \hline Model & $A$ & $B$ & $C$ & $\tilde A=A+B$ \\ \hline APR & 0.933 & -1.766 & 0.659 & -0.833 \\ Ska & 1.403 & -3.079 & 0.851 & -1.676 \\ \hline \end{tabular} \end{center} \caption[Asymmetry Coefficients.]{Results for the coefficients that describe the isospin asymmetry dependence to $\mathcal {O}(\delta_{\alpha})$ of the equilibrium density and compression moduli.} \label{Asymcoeffs} \end{table} The extent to which Eq. (\ref{delal}), inserted into Eq. (\ref{Ealpha}) expanded to $\mathcal {O}(\alpha^2)$, adequately describes the loci of minima in the energy per particle of subnuclear matter for arbitrary $\alpha$ is demonstrated in Fig. \ref{APRSKA_loci} for the two models. The dark circles show locations of the minima resulting from the exact calculations using Eqs. (\ref{HAPR}) and (\ref{HSKA}) as the proton fraction $x$ is varied toward that of pure neutron matter. The leading order results shown by the dotted curves accurately trace the loci of minima down to $x = 0.2$. Considering the $\mathcal {O}(\delta_{\alpha}^2)$ contribution in Eq. (\ref{delal2}) improves agreement with the exact results even down to $x=0.1$. \begin{figure*}[htb] \centering \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=10cm]{PLOTS/Locus_of_min_EoA_APR.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=10cm]{PLOTS/Locus_of_min_EoA_Ska.pdf} \end{minipage} \vskip -0.5cm \caption{Loci of minima in the energy per particle versus baryon density for the APR (left panel) and Ska (right panel) models for different proton fractions. The dark circles are exact results from Eqs. (\ref{HAPR}) and (\ref{HSKA}). The dotted curves show $\mathcal {O}(\delta_{\alpha})$ results from Eq. (\ref{delal}), whereas the $\mathcal {O}(\delta_{\alpha}^2)$ (Eq. (\ref{delal2})) contributions are shown as dashed lines. } \label{APRSKA_loci} \end{figure*} In Fig. \ref{APRSKA_0T_P}, we show the pressure as a function of $n$ for representative values of $x$. For all $x$, including neutron matter (not shown), the Ska model has a higher pressure than the APR model. As with the energy per particle shown in Fig. \ref{APRSKA_0T_EoA}, the larger stiffness of the Ska model relative to the APR model is caused by the appearance of a pion condensate in the HDP of the latter. The distinctive jumps in pressure for the APR model are due to the phase transition to a pion condensate, i.e., from the LDP to the HDP, which occurs at lower densities for increasingly asymmetric matter. \begin{figure}[htb] \includegraphics[width=9cm]{PLOTS/APRSKA_HL_P_0T.pdf} \caption{Pressure versus baryon density for the APR (Eqs. (\ref{P1})-(\ref{P2})) and Ska models at different proton fractions. The jumps in the APR results are due to phase transition to a pion condensate at the values of $x$ indicated.} \label{APRSKA_0T_P} \end{figure} The neutron and proton chemical potentials, $\mu_n$ and $\mu_p$, versus baryon density for the two models are shown in the first two panels of Fig. \ref{APRSKA_0T_muNPs}.
Due to its relative stiffness, results for the Ska model are systematically larger than those for the APR model for all values of the proton fraction $x$. It is worthwhile to mention here that $\hat \mu = \mu_n-\mu_p$ (with modifications from effects of temperature to be discussed in subsequent sections), shown in the rightmost panel of Fig. \ref{APRSKA_0T_muNPs}, controls the reaction rates associated with electron captures and neutrino interactions in supernova matter. \begin{figure*}[htb] \centering \begin{minipage}[b]{0.32\linewidth} \centering \includegraphics[width=7.5cm]{PLOTS/APRSKA_MxT_MuN_0T.pdf} \end{minipage} \begin{minipage}[b]{0.32\linewidth} \centering \includegraphics[width=7.5cm]{PLOTS/APRSKA_MxT_MuP_0T.pdf} \end{minipage} \begin{minipage}[b]{0.32\linewidth} \centering \includegraphics[width=8cm]{PLOTS/APRSKA_MxT_MuHat_0T.pdf} \end{minipage} \vskip -0.5cm \caption{The first (second) panel shows the neutron (proton) chemical potential versus baryon density $n$ for the APR (Eqs. (\ref{MU1})-(\ref{MU2})) and Ska models for different values of $x$. The rightmost panel shows $\hat \mu =\mu_n - \mu_p$. The jumps in the APR results are due to phase transitions to a pion condensate.} \label{APRSKA_0T_muNPs} \end{figure*} The inverse susceptibilities are shown in Fig. \ref{APRSKA_0T_dMudn} for the APR and Ska models at representative proton fractions. The largest qualitative and quantitative differences between the two models occur at supra-nuclear densities for $d\mu_n/dn_n$ and $d\mu_p/dn_p$. The cross derivatives $d\mu_n/dn_p = d\mu_p/dn_n$ are qualitatively similar for the two EOSs, but relatively small quantitative differences between the two models exist. In the case of the APR model, in which a pion condensate appears, these derivatives are required ingredients in the Maxwell construction which determines the phase boundary densities at which the pressure and an average chemical potential are equal (this ensures mechanical and chemical equilibria). These derivatives are also utilized in constructing the full dense matter tabular EOS as will be discussed later. \begin{figure*}[htb] \centering \begin{minipage}[b]{0.32\linewidth} \centering \includegraphics[width=7.5cm]{PLOTS/APRSKA_MxT_dMuNdnN_0T_lgx.pdf} \end{minipage} \begin{minipage}[b]{0.32\linewidth} \centering \includegraphics[width=7.5cm]{PLOTS/APRSKA_MxT_dMuPdnP_0T_lgx.pdf} \end{minipage} \begin{minipage}[b]{0.32\linewidth} \centering \includegraphics[width=7.5cm]{PLOTS/APRSKA_MxT_dMuNdnP_0T_lgx.pdf} \end{minipage} \vskip -0.5cm \caption{Neutron and proton inverse susceptibilities versus baryon density for the APR (Eqs. (\ref{CHI1})-(\ref{CHI2})) and Ska models at the indicated proton fractions $x$. Recall that $d\mu_n/dn_p = d\mu_p/dn_n$. Note that the cross derivatives have a very weak $x$-dependence. The jumps in the APR results are due to phase transitions to a pion condensate. } \label{APRSKA_0T_dMudn} \end{figure*} \section{FINITE TEMPERATURE PROPERTIES} In this section, properties of the APR and Ska models at finite temperature $T$ are calculated. At finite $T$, the Hamiltonian density is a function of four independent variables; namely, the number densities $n_i$ and the kinetic energy densities $\tau_i$ of the two nucleon species. 
These are, in turn, proportional to the $F_{1/2}$ and $F_{3/2}$ Fermi-Dirac (FD) integrals~\cite{pathria}, respectively: \begin{eqnarray} n_i & = & \frac{1}{2\pi^2}\left(\frac{2m_i^*T}{\hbar^2}\right)^{3/2}F_{1/2i} \label{nT} \\ \tau_i & = & \frac{1}{2\pi^2}\left(\frac{2m_i^*T}{\hbar^2}\right)^{5/2}F_{3/2i} \label{tauT} \\ \mbox{where} ~~~~~F_{\alpha i} & = & \int_0^{\infty}\frac{x_i^{\alpha}}{e^{-\psi_i}e^{x_i}+1}dx_i \\ x_i & = & \frac{1}{T}\left(k_i^2\frac{\partial \mathcal{H}}{\partial \tau_i}\right) = \frac{1}{T}\frac{\hbar^2k_i^2}{2m_i^*} \equiv \frac{\varepsilon_{k_i}}{T} \\ \psi_i & = & \frac{1}{T}\left(\mu_i-\frac{\partial \mathcal{H}}{\partial n_i}\right) = \frac{\mu_i-V_i}{T} \equiv \frac{\nu_i}{T}. \end{eqnarray} The quantity $\psi_i$, generally termed the degeneracy parameter, is related to the fugacity defined by $z_i = e^{\psi_i}$. In the above equations, one must keep in mind that $m_i^*$ is a function of the number densities of both nucleon species $i=n,p$. Consequently, derivatives of the FD integrals with respect to the densities take the forms \begin{eqnarray} \frac{\partial F_{1/2i}}{\partial n_i} & = & \frac{F_{1/2i}}{n_i}\left(1- \frac{3}{2}\frac{n_i}{m_i^*} \frac{\partial m_i^*}{\partial n_i} \right) \\ \mbox{and} ~~~~~~\frac{\partial F_{1/2i}}{\partial n_j} & = & -\frac{3}{2}\frac{F_{1/2i}}{m_i^*} \frac{\partial m_i^*}{\partial n_j}. \end{eqnarray} FD integrals of different order are connected through their derivatives with respect to $\psi_i$: \begin{eqnarray} \frac{\partial F_{\alpha i}}{\partial \psi_i} & = & \alpha F_{(\alpha-1)i}. \label{dfadpsi} \end{eqnarray} Therefore, \begin{eqnarray} \frac{\partial F_{\alpha i}}{\partial n_i} & = & \frac{\partial F_{\alpha i}}{\partial F_{1/2i}} \frac{\partial F_{1/2 i}}{\partial n_i} \nonumber \\ &=& \frac{\partial F_{\alpha i}}{\partial \psi_i} \left(\frac{\partial F_{1/2 i}}{\partial \psi_i}\right)^{-1} \frac{\partial F_{1/2 i}}{\partial n_i} \nonumber \\ & = & 2\alpha \frac{F_{(\alpha-1) i}}{F_{-1/2i}}\frac{\partial F_{1/2 i}}{\partial n_i}. \label{dfadni} \end{eqnarray} Similarly, cross derivatives of the FD integrals with respect to density are given by \begin{eqnarray} \frac{\partial F_{\alpha i}}{\partial n_j} & = & 2\alpha \frac{F_{(\alpha-1) i}}{F_{-1/2i}}\frac{\partial F_{1/2 i}}{\partial n_j}. \label{dfadnj} \end{eqnarray} Utilizing the relations \begin{eqnarray} \frac{\partial}{\partial n} & = & \frac{\partial}{\partial n_n}\left.\frac{\partial n_n}{\partial n}\right|_x + \frac{\partial}{\partial n_p}\left.\frac{\partial n_p}{\partial n}\right|_x =(1-x)\frac{\partial}{\partial n_n}+x\frac{\partial}{\partial n_p} \nonumber \\ \frac{\partial}{\partial x} & = & \frac{\partial}{\partial n_n}\left.\frac{\partial n_n}{\partial x}\right|_n + \frac{\partial}{\partial n_p}\left.\frac{\partial n_p}{\partial x}\right|_n =-n\frac{\partial}{\partial n_n}+n\frac{\partial}{\partial n_p} , \nonumber \end{eqnarray} the derivatives of $F_{\alpha i}$ with respect to $n$ and $x$ are obtained as \begin{eqnarray} \frac{\partial F_{\alpha i}}{\partial n} & = & 2\alpha \frac{F_{(\alpha-1) i}}{F_{-1/2i}} \left[(1-x)\frac{\partial F_{1/2 i}}{\partial n_n} +x\frac{\partial F_{1/2 i}}{\partial n_p}\right] \label{dfadn} \\ \frac{\partial F_{\alpha i}}{\partial x} & = & 2\alpha \frac{F_{(\alpha-1) i}}{F_{-1/2i}}n \left[\frac{\partial F_{1/2 i}}{\partial n_p} -\frac{\partial F_{1/2 i}}{\partial n_n}\right].
\label{dfadx} \end{eqnarray} Using Eqs.~(\ref{dfadni})-(\ref{dfadx}), we arrive at the following expressions for the density derivatives of the degeneracy parameter and the kinetic energy density: \begin{eqnarray} \frac{\partial \psi_i}{\partial n_i} & = & \frac{2}{F_{-1/2i}}\frac{\partial F_{1/2 i}}{\partial n_i}\,, \quad \frac{\partial \psi_i}{\partial n_j} = \frac{2}{F_{-1/2i}}\frac{\partial F_{1/2 i}}{\partial n_j} \\ \frac{\partial \psi_i}{\partial n} & = & \frac{2}{F_{-1/2i}}\frac{\partial F_{1/2 i}}{\partial n}\,, \quad \frac{\partial \psi_i}{\partial x} = \frac{2}{F_{-1/2i}}\frac{\partial F_{1/2 i}}{\partial x} \end{eqnarray} \begin{eqnarray} \frac{\partial \tau_i}{\partial n_i} & = & \frac{\tau_i}{n_i}\left[ \frac{3F_{1/2i}^2}{F_{3/2i}F_{-1/2i}} \right. \nonumber \\ && \hspace{10pt} \left. +\frac{5}{2}\frac{n_i}{m_i^*} \frac{\partial m_i^*}{\partial n_i}\left( 1-\frac{9}{5}\frac{F_{1/2i}^2}{F_{3/2i}F_{-1/2i}}\right)\right] \label{dtaudni} \\ \frac{\partial \tau_i}{\partial n_j} & = & \frac{5}{2}\frac{\tau_i}{m_i^*} \frac{\partial m_i^*}{\partial n_j}\left( 1-\frac{9}{5}\frac{F_{1/2i}^2}{F_{3/2i}F_{-1/2i}}\right) \label{dtaudnj} \\ \frac{\partial \tau_i}{\partial n} & = & \tau_i \left[\frac{5}{2}\frac{1}{m_i^*} \frac{\partial m_i^*}{\partial n}\right. \nonumber \\ && \hspace{10pt} \left. +\frac{3F_{1/2i}}{F_{3/2i}F_{-1/2i}}\left((1-x)\frac{\partial F_{1/2 i}}{\partial n_n} +x\frac{\partial F_{1/2 i}}{\partial n_p}\right)\right] \nonumber \\ \label{dtaudn} \\ \frac{\partial \tau_i}{\partial x} & = & \tau_i \left[\frac{5}{2}\frac{1}{m_i^*} \frac{\partial m_i^*}{\partial x} \right. \nonumber \\ && \hspace{10pt} \left. +\frac{3F_{1/2i}}{F_{3/2i}F_{-1/2i}}n\left(\frac{\partial F_{1/2 i}}{\partial n_p} -\frac{\partial F_{1/2 i}}{\partial n_n}\right)\right] . \label{dtaudx} \end{eqnarray} These relations will be used in subsequent discussions of the finite-temperature properties. For a rapid evaluation of the FD integrals, two numerical techniques that give accurate results for varying degrees of degeneracy are described in Appendix D. \subsection{Thermal effects} To infer the effects of finite temperature, we focus on the thermal part of the various state variables; that is, the difference between the finite-$T$ and the $T=0$ expressions for a given thermodynamic function $X$: \begin{equation} X_{th} = X(n,x,T) - X(n,x,0)\,. \end{equation} This subtraction scheme discards terms that do not depend on the kinetic energy density. The thermal energy is given by \begin{eqnarray} E_{th} & = & E(T) - E(0) \nonumber \\ & = & \frac{1}{n}\sum_i\left[\frac{\hbar^2}{2m_i^*}\tau_i -\frac{3}{5}\mathcal{T}_{Fi}n_i\right] \label{eath} \end{eqnarray} where \begin{equation} \mathcal{T}_{Fi} = \frac{\hbar^2k_{Fi}^2}{2m_i^*}. \end{equation} The thermal pressure takes the form \begin{eqnarray} P_{th} & = & P(T)-P(0) \nonumber \\ & = & \frac{2}{3}\sum_i Q_i\left[\frac{\hbar^2}{2m_i^*}\tau_i -\frac{3}{5}\mathcal{T}_{Fi}n_i\right] , \label{pth} \\ \mbox{where} ~~~ Q_i & = & 1-\frac{3}{2}\frac{n}{m_i^*} \frac{\partial m_i^*}{\partial n} . \label{qi} \end{eqnarray} The quantities $Q_i$ are the consequence of the momentum-dependent interactions in the Hamiltonian, which lead to the Landau effective mass. For a free gas, $Q_i = 1$ and $P_{th}=2nE_{th}/3$ as usual. The entropy per particle can be written as \begin{eqnarray} S & = & \frac{1}{nT}\sum_i\left[\frac{5}{3}\frac{\hbar^2}{2m_i^*}\tau_i +n_i(V_i-\mu_i)\right] \nonumber \\ & = & \frac{1}{n}\sum_i n_i\left[\frac{5}{3}\frac{F_{3/2i}}{F_{1/2i}}-\mbox{ln}z_i\right].
\label{entr} \end{eqnarray} The thermal free energy density can be expressed as \begin{eqnarray} \mathcal{F}_{th} & = & \mathcal{F}(T)-\mathcal{H}(0) = \mathcal{H}(T)-nTS -\mathcal{H}(0) \nonumber \\ & = & \sum_i\left[\frac{\hbar^2}{2m_i^*}\tau_i-\frac{3}{5}\mathcal{T}_{Fi}n_i -Tn_i\left(\frac{5}{3}\frac{F_{3/2i}}{F_{1/2i}}-\mbox{ln}z_i\right)\right] \nonumber \\ \label{fden} \end{eqnarray} in terms of which the thermal contributions to the chemical potentials are \begin{eqnarray} \mu_{ith} & = & \mu_i(T)-\mu_i(0) = \left.\frac{\partial\mathcal{F}_{th}}{\partial n_i}\right|_{n_j}\,, \label{muth} \\ \mbox{where} ~~~ \mu_i(T) &=& T\psi_i+V_i\,. \end{eqnarray} The total free energy \begin{equation} F = \sum_i\left[\frac{\hbar^2}{2m_i^*}\frac{\tau_i}{n} -TY_i\left(\frac{5F_{3/2i}}{3F_{1/2i}}-\psi_i\right)\right]+F_d \end{equation} can be expressed, with the aid of \begin{equation} \tau_i = \frac{2m_i^*T}{\hbar^2}\frac{F_{3/2i}}{F_{1/2i}}n_i, \end{equation} as \begin{equation} F = \sum_i\left[TY_i\left(-\frac{2F_{3/2i}}{3F_{1/2i}}+\psi_i\right)\right]+F_d\,. \label{totEfree} \end{equation} The second derivative of the above with respect to the proton fraction $x$ evaluated at $x=1/2$ yields the symmetry energy at finite temperature: \begin{eqnarray} S_2(T) &=& \frac{1}{8}\left.\frac{d^2F}{dx^2}\right|_{x=1/2} \\ &=& -\frac{T}{3}\frac{F_{3/2}}{F_{1/2}^2}\left[\frac{dF_{1/2}}{dx} \right.\nonumber \\ &+&\left.\left(\frac{1}{2F_{1/2}}-\frac{3F_{1/2}}{4F_{3/2}F_{-1/2}}\right)\left(\frac{dF_{1/2}}{dx}\right)^2 -\frac{1}{4}\frac{d^2F_{1/2}}{dx^2}\right] \nonumber \\ &+& S_{2d} \label{s2t} \end{eqnarray} where \begin{eqnarray} F_{\alpha} &\equiv& F_{\alpha i}(x=0.5) \nonumber \\ \frac{dF_{1/2}}{dx} &\equiv& \left.\frac{dF_{1/2n}}{dx}\right|_{x=1/2} =\left. -\frac{dF_{1/2p}}{dx}\right|_{x=1/2} \nonumber \\ &=& -2F_{1/2}\left(1+\frac{3}{4m^*}\frac{dm^*}{dx}\right) \\ \frac{d^2F_{1/2}}{dx^2} &\equiv& \left.\frac{d^2F_{1/2n}}{dx^2}\right|_{x=1/2} =\left. \frac{d^2F_{1/2p}}{dx^2}\right|_{x=1/2} \nonumber \\ &=& \frac{6F_{1/2}}{m^*}\frac{dm^*}{dx} \left(1+\frac{1}{8m^*}\frac{dm^*}{dx}\right) \\ m^* &\equiv& m_n^*(x=1/2) = m_p^*(x=1/2) \label{msym}\\ \frac{dm^*}{dx} &\equiv& \left.\frac{dm_n^*}{dx}\right|_{x=1/2} =\left. -\frac{dm_p^*}{dx}\right|_{x=1/2} \label{dmsym} \end{eqnarray} Note that \begin{equation} \frac{d^2m^*}{dx^2} = \frac{2}{m^*}\left(\frac{dm^*}{dx}\right)^2.
\end{equation} Thus the thermal contributions to the symmetry energy are \begin{equation} S_{2,th} = S_2(T) - S_2(0) \label{s2th} \end{equation} For the calculation of the specific heat at constant volume, we begin by writing the energy per particle as \[ {E} = \frac{1}{n}\sum_i \frac{\hbar^2}{2m_i^*}\tau_i + n\mbox{-dependent terms} \] Then \begin{eqnarray} C_V = \left.\frac{\partial E}{\partial T}\right|_n = \frac{1}{n}\sum_i \frac{\hbar^2}{2m_i^*}\left.\frac{\partial\tau_i}{\partial T}\right|_{n_i} \nonumber \end{eqnarray} The condition that $n_i$ are constant implies \begin{eqnarray} \frac{dn_i}{dT} = 0 & = & \left.\frac{\partial n_i}{\partial T}\right|_{F_{1/2i}} +\left.\frac{\partial n_i}{\partial F_{1/2i}}\right|_T \left.\frac{\partial F_{1/2i}}{\partial T}\right|_{n_i} \nonumber \\ \Rightarrow \left.\frac{\partial n_i}{\partial T}\right|_{F_{1/2i}} & = & -\left.\frac{\partial n_i}{\partial F_{1/2i}}\right|_T \left.\frac{\partial F_{1/2i}}{\partial T}\right|_{n_i} \end{eqnarray} But \begin{eqnarray} \left.\frac{\partial F_{1/2i}}{\partial T}\right|_{n_i} = \left.\frac{\partial \psi_i}{\partial T}\right|_{n_i}\frac{\partial F_{1/2i}}{\partial \psi_i} = \frac{1}{2}F_{-1/2i}\left.\frac{\partial \psi_i}{\partial T}\right|_{n_i} \end{eqnarray} where Eq. (\ref{dfadpsi}) was used in obtaining the second equality. Solving for $\left.\frac{\partial \psi_i}{\partial T}\right|_{n_i}$ gives \[\left.\frac{\partial \psi_i}{\partial T}\right|_{n_i} = -\left.\frac{\partial n_i}{\partial T}\right|_{F_{1/2i}}\left( \left.\frac{\partial n_i}{\partial F_{1/2i}}\right|_T\frac{1}{2}F_{-1/2i}\right)^{-1} \] Using Eq. (\ref{nT}) for the derivatives of $n_i$ with respect to $T$ and $F_{1/2i}$ we get \begin{equation} \left.\frac{\partial \psi_i}{\partial T}\right|_{n_i} = -\frac{3}{T}\frac{F_{1/2i}}{F_{-1/2i}} \label{dpsidT} \end{equation} The $T$-derivative of Eq. (\ref{tauT}) is \begin{eqnarray} \left.\frac{\partial \tau_i}{\partial T}\right|_{n_i} & = & \tau_i\left(\frac{5}{2T} +\frac{1}{F_{3/2i}} \left.\frac{\partial F_{3/2i}}{\partial T}\right|_{n_i}\right) \nonumber \\ & = & \tau_i\left(\frac{5}{2T} +\frac{1}{F_{3/2i}} \left.\frac{\partial \psi_i}{\partial T}\right|_{n_i} \frac{\partial F_{3/2i}}{\partial \psi_i}\right) \nonumber \\ & = & \tau_i\left(\frac{5}{2T} -\frac{9}{2T} \frac{F_{1/2i}^2}{F_{3/2i}F_{-1/2i}}\right) \label{dtaudT} \end{eqnarray} where equations (\ref{dfadpsi}) and (\ref{dpsidT}) have been exploited for the last line. Thus \begin{equation} C_V = \frac{5}{2nT}\sum_i \frac{\hbar^2 \tau_i}{2m_i^*} \left(1-\frac{9}{5}\frac{F_{1/2i}^2}{F_{3/2i}F_{-1/2i}}\right) \label{cv} \end{equation} The starting point of the calculation of the specific heat at constant pressure is \begin{equation} C_P = C_V +\frac{T}{n^2}\frac{\left(\left.\frac{\partial P}{\partial T}\right|_n\right)^2} {\left.\frac{\partial P}{\partial n}\right|_T} \label{cp} \end{equation} The temperature derivative of the pressure at fixed density is given by \begin{eqnarray} \left.\frac{\partial P}{\partial T}\right|_n &=& \frac{2}{3}\sum_i \frac{\hbar^2}{2m_i^*}Q_i\left.\frac{\partial \tau_i}{\partial T}\right|_n \nonumber \\ &=& \frac{5}{3T}\sum_i \frac{\hbar^2}{2m_i^*}Q_i\tau_i\left(1-\frac{9}{5}\frac{F_{1/2i}^2}{F_{3/2i}F_{-1/2i}} \right) \end{eqnarray} where Eq.(\ref{dtaudT}) was used in going from the first line to the second. 
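All of these finite-$T$ expressions ultimately reduce to one-dimensional FD integrals and a root find for the degeneracy parameter. Before completing the $C_P$ evaluation (the required $\partial P/\partial n|_T$ follows next), we illustrate the basic pipeline in a minimal sketch: quadrature for $F_\alpha$, inversion of Eq. (\ref{nT}) for $\psi_i$, and then Eq. (\ref{cv}). Freezing the effective mass at its vacuum value makes this a single-species free-gas illustration rather than the full APR/Ska evaluation:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HB2_2M = 20.72                       # hbar^2/2m, MeV fm^2

def F(alpha, psi):
    """Fermi-Dirac integral F_alpha(psi) by direct quadrature."""
    val, _ = quad(lambda x: x**alpha / (np.exp(x - psi) + 1.0), 0.0, np.inf)
    return val

def density(psi, T):                 # Eq. (nT) with m* = m
    return (T / HB2_2M)**1.5 / (2.0 * np.pi**2) * F(0.5, psi)

def cv(n_i, T):                      # Eq. (cv), one species, m* = m
    psi = brentq(lambda p: density(p, T) - n_i, -60.0, 150.0)
    f12, f32, fm12 = F(0.5, psi), F(1.5, psi), F(-0.5, psi)
    tau = (T / HB2_2M)**2.5 / (2.0 * np.pi**2) * f32        # Eq. (tauT)
    return 2.5 / (n_i * T) * HB2_2M * tau * (1.0 - 1.8 * f12**2 / (f32 * fm12))

for T in (5.0, 20.0, 50.0):
    print(f"T = {T:4.1f} MeV:  C_V = {cv(0.08, T):.3f} k_B")
\end{verbatim}
As expected, $C_V \to 3/2$ in the non-degenerate (high-$T$) limit and becomes linear in $T$, and hence small, as degeneracy sets in.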
The density derivative of the pressure at fixed temperature is \begin{eqnarray} \left.\frac{\partial P}{\partial n}\right|_T &=& \frac{\hbar^2}{3}\frac{d}{dn}\left( \sum_i\frac{Q_i\tau_i}{m_i^*}\right) + \frac{dP_d}{dn} \nonumber \\ &=& \frac{\hbar^2}{3}\sum_i\left[\frac{Q_i}{m_i^*}\frac{d\tau_i}{dn}+\frac{\tau_i}{m_i^*}\frac{dQ_i}{dn} -\frac{\tau_iQ_i}{m_i^{*2}}\frac{dm_i^*}{dn}\right] \nonumber \\ &+& \frac{dP_d}{dn}\,. \end{eqnarray} The density derivatives of the kinetic energy density are given in Eqs. (\ref{dtaudni})-(\ref{dtaudx}) and those of $m^*$, $Q$, and $P_d$ in Appendix B. Finally, the thermal parts of the inverse susceptibilities are given by \begin{equation} \chi_{ij,th}^{-1} = \chi_{ij}^{-1}(T)-\chi_{ij}^{-1}(0)=\frac{\partial \mu_{ith}}{\partial n_j} \label{xi} \end{equation} where \begin{eqnarray} \chi_{ii}(T) & = & \left(\frac{\partial \mu_i}{\partial n_i}\right)^{-1} = \left(T\frac{\partial \psi_i}{\partial n_i} +\frac{\partial V_i}{\partial n_i}\right)^{-1} \nonumber \\ & = & \left[T\left(\frac{\partial F_{1/2i}}{\partial \psi_i}\right)^{-1} \frac{\partial F_{1/2i}}{\partial n_i} +\frac{\partial V_i}{\partial n_i}\right]^{-1} \nonumber \\ & = & \left[\frac{2T}{n_i}\frac{F_{1/2i}}{F_{-1/2i}}\left(1-\frac{3}{2}\frac{n_i}{m_i^*} \frac{\partial m_i^*}{\partial n_i}\right) +\frac{\partial V_i}{\partial n_i}\right]^{-1} , \nonumber \\ \label{xii} \end{eqnarray} \begin{eqnarray} \chi_{ij}(T) = \left[-3T\frac{F_{1/2i}}{F_{-1/2i}}\frac{1}{m_i^*} \frac{\partial m_i^*}{\partial n_j} +\frac{\partial V_i}{\partial n_j}\right]^{-1} ;~ i\ne j. \nonumber \\ \label{xij} \end{eqnarray} \subsection*{Results} We now present numerical results. Comparisons of these results with analytical results in degenerate and non-degenerate situations will be presented in the next sub-section. We begin by examining results of the total pressure (from Eq. (\ref{pth})) as it varies with temperature and density in the sub-nuclear regime for isospin symmetric matter ($x=0.5$). Our results for the APR and Ska models are shown in Fig. \ref{APRSKA_IsoTherm}. The prominent feature in this figure is the onset of a liquid-gas phase transition, the critical temperature and density for which are obtained from the conditions \begin{equation} \left.\frac {dP}{dn}\right|_{n_c,T_c} = \left.\frac {d^2P}{dn^2}\right|_{n_c,T_c} = 0 \,. \end{equation} The critical temperatures (densities) for the APR and Ska models were found to be 17.91 MeV (0.057 fm$^{-3}$) and 15.12 MeV (0.056 fm$^{-3}$), respectively, so that \begin{equation} \frac {P_c}{n_cT_c} = \left\{ \begin{array}{ll} 0.347 \,, & \qquad \mbox{for APR} \\ 0.303 \,, & \qquad \mbox{for Ska}\,. \end{array} \right. \end{equation} These results provide an interesting contrast with the value 0.375 for a van der Waals-like equation of state and the experimental values that lie in the range 0.27-0.31 for noble gases (see, e.g., Ref.~\cite{stanley}, p. 69). \begin{figure*}[!ht] \centering \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=10cm]{PLOTS/APR_isotherm_Pres_labels.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=10cm]{PLOTS/Ska_isotherm_Pres_labels.pdf} \end{minipage} \vskip -0.5cm \caption{Pressure of isospin symmetric matter vs baryon density (from Eq. (\ref{pth})) for the APR (left) and Ska (right) models at the indicated temperatures. The point $(P,n)$ on the critical temperature curve of each model at which $dP/dn=d^2P/dn^2=0$ is indicated by the downward arrow.} \label{APRSKA_IsoTherm} \end{figure*}
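The critical-point conditions above amount to a two-dimensional root find once $P(n,T)$ is available. As a transparent stand-in for the APR and Ska pressures (which require the full Hamiltonians), the sketch below applies the same two conditions to a toy van der Waals-like form, $P = nT/(1-bn) - an^2$, with placeholder constants $a$ and $b$:
\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

a, b = 172.0, 3.0                     # placeholder strengths (MeV fm^3, fm^3)

def P(n, T):
    return n * T / (1.0 - b * n) - a * n**2

def conditions(v, h=1e-4):            # dP/dn = d2P/dn2 = 0
    n, T = v
    dP  = (P(n + h, T) - P(n - h, T)) / (2.0 * h)
    d2P = (P(n + h, T) - 2.0 * P(n, T) + P(n - h, T)) / h**2
    return [dP, d2P]

nc, Tc = fsolve(conditions, x0=[0.1, 15.0])
print(f"n_c = {nc:.4f} fm^-3, T_c = {Tc:.2f} MeV, "
      f"P_c/(n_c T_c) = {P(nc, Tc) / (nc * Tc):.3f}")   # -> 3/8 = 0.375
\end{verbatim}
For this toy form the printed ratio is the van der Waals value $3/8$ quoted above, to be contrasted with the smaller values 0.347 and 0.303 obtained for APR and Ska.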
In Fig. \ref{APRSKA_critical}, the left panel shows how the critical temperatures and densities vary with the proton fraction $x$. Both quantities are scaled to their respective values for symmetric nuclear matter ($x=0.5$). The fall-off of the critical temperature with $x$ is similar for the APR and Ska models, whereas the fall-off of the critical density with $x$ for the Ska model is steeper than for the APR model. The critical proton fractions beyond which the phase transition disappears are similar for both models, that for the APR model being slightly larger than for the Ska model. As is evident from the right panel in this figure, $P_c/n_cT_c$ exhibits very little variation with $x$. \begin{figure*}[!ht] \centering \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=10.75cm]{PLOTS/APRSKA_NcTcVYp.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=10cm]{PLOTS/APRSKA_CritRAT.pdf} \end{minipage} \vskip -0.5cm \caption{Left panel: Critical temperatures (scaled with their respective values for proton fraction $x=0.5$) vs $x$. The inset shows critical densities (scaled with their respective values for $x=0.5$) vs $x$. Right panel: Critical parameter $P_c/n_cT_c$ vs $x$.} \label{APRSKA_critical} \end{figure*} The thermal properties are dominated by the behavior of the effective masses. For all densities, at a given value of $x$, the APR effective masses are larger than for Ska. As a result, thermal contributions to entropy, energy, pressure, free energy, etc. are larger in the case of APR at the same density. This explains the relative behaviors in Figs. \ref{APRSKA_Eth}, \ref{APRSka_Efree_Th}, \ref{APRSKA_Pth}, and \ref{APRSKA_SoA}. The reverse behavior is seen in the thermal part of the chemical potentials in Fig. \ref{APRSKA_MuTH}. This behavior can be understood through the limiting cases (\ref{mudeg}) and (\ref{mund}) where the effective masses enter with an overall negative sign. The thermal energy (from Eq. (\ref{eath})) is shown in Fig. \ref{APRSKA_Eth} for the two models at proton fractions $x$ of 0.5 and 0.1, and at temperatures $T$ of 20 and 50 MeV. Common to both models are the features that, with increasing density, the thermal energy (i) decreases and (ii) becomes nearly independent of the proton fraction. Maximal differences (with respect to $x$) are seen to be in the vicinity of $n_0=0.16~{\rm fm}^{-3}$ for both models. Differences between the two models increase with increasing density, particularly for densities in excess of $n_0$. These common and different features arise due to a combination of effects involving the dependence of the thermal energy on the effective masses as the degree of degeneracy changes with density, as will be discussed in the next sub-section with analytical results in hand. \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APRSKA_EthoA.pdf} \caption{Thermal energy per particle (Eq. (\ref{eath})) at the indicated proton fractions and temperatures for the APR and Ska models.} \label{APRSKA_Eth} \end{figure} In Fig. \ref{APRSka_Efree_Th}, the difference between the pure neutron matter and nuclear matter free energies $\Delta F_{th} = F(n,T,x=0) - F(n,T,x=0.5)$ is shown for the two models at $T=20$ and 50 MeV. For both temperatures shown, the APR model has a larger $\Delta F_{th}$ than that of the Ska model.
This feature can be understood in terms of the larger thermal energies of the APR model relative to those of the Ska model at the same density and temperature, which dominate over the opposing effects of entropy. \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APRSka_Efree_Th.pdf} \caption{Difference between the pure neutron matter and nuclear matter free energies (Eq. (\ref{fden})) at the indicated temperatures for the APR and Ska models. } \label{APRSka_Efree_Th} \end{figure} The thermal pressures (from Eq. (\ref{pth})) for the two models are shown in Fig. \ref{APRSKA_Pth} for $x=0.5$ and 0.1, and $T=20$ and 50 MeV as functions of density. Both models display the same trend of rising almost linearly with density until around 1.5 $n_0$ before beginning to saturate at higher densities. This trend is independent of proton fraction and temperature; however, the stiffening of the pressure is more pronounced at the higher temperature and the lower proton fraction. The agreement between the results of the two models becomes progressively worse as the density increases. As with the thermal energy in Fig. \ref{APRSKA_Eth}, these results are a consequence of the increasing degeneracy with increasing density and the behavior of the effective masses in the two models, as our discussion in the next sub-section will reveal. \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APRSKA_Pth.pdf} \caption{Thermal pressure vs baryon density (Eq. (\ref{pth})) at the indicated proton fractions and temperatures.} \label{APRSKA_Pth} \end{figure} The neutron and proton thermal chemical potentials (from Eq. (\ref{muth})) plotted as functions of baryon density are presented in the left and right panels of Fig. \ref{APRSKA_MuTH}, respectively. Chemical potentials of fermions, inclusive of their zero temperature parts, decrease with temperature at a fixed density; hence the negative values of their thermal counterparts. We observe larger neutron and proton thermal chemical potentials from the Ska model when compared with the APR model for all but the lowest baryon densities and at both temperatures. The difference between the two models is greatest at intermediate densities (between $n_0$ and $2n_0$) and at high temperatures. In the case of the neutron thermal chemical potential, there is little difference between isospin symmetric ($x=0.5$) and neutron-rich matter ($x=0.1$). This is not the case for the proton chemical potential, which displays a much greater difference as isospin asymmetry increases. \begin{figure*}[!ht] \centering \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9.5cm]{PLOTS/APRSKA_MuNth.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9.5cm]{PLOTS/APRSKA_MuPth.pdf} \end{minipage} \vskip -0.5cm \caption{Thermal neutron (left) and proton (right) chemical potentials vs baryon density (Eq. (\ref{muth})). } \label{APRSKA_MuTH} \end{figure*} In Fig. \ref{APRSKA_SoA}, we present our results for the entropy per baryon for the APR and Ska models. Our results show that the APR model provides a larger entropy per baryon than the Ska model for all baryon densities, proton fractions and temperatures. The magnitude of the observed difference is independent of proton fraction $x$ and increases with baryon density $n$ and temperature $T$. For extremely low densities ($n \ll n_0$), the difference in entropy per baryon between the models is negligible, as interactions play a minor role in a nearly ideal gas for this quantity.
\begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APRSKA_SoA.pdf} \caption{Entropy per baryon in units of $k_B$ vs baryon density (Eq. (\ref{entr})).} \label{APRSKA_SoA} \end{figure} In Figs. \ref{APRSKA_Xnn} through \ref{APRSKA_Xnp} we present results for the thermal inverse susceptibilities from Eqs. (\ref{xi})-(\ref{xij}) for the APR and Ska models. The neutron-neutron and proton-proton thermal inverse susceptibilities (Figs. \ref{APRSKA_Xnn} and \ref{APRSKA_Xpp}, respectively) show no significant difference between the two models at all baryon densities, proton fractions and temperatures. The neutron-proton thermal inverse susceptibility (Fig. \ref{APRSKA_Xnp}) shows a significant difference between the two models at densities less than $n_0$. The magnitude of this discrepancy is independent of proton fraction and only mildly dependent on temperature. This difference can be attributed to the effective masses, as is explicitly shown in Eqs. (\ref{xiind}) and (\ref{xijnd}) (the non-degenerate limit is appropriate for small densities). The leading terms in $\chi_{ii}^{-1}$ go as $T/n_i$; thus APR and Ska are similar because the effective mass enters only as a correction. On the other hand, the $\chi_{ij}^{-1}$ differ significantly since their leading terms are proportional to $(T/m_i^*)~(dm_i^*/dn_j)$ and therefore their behavior is primarily influenced by the effective mass. \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APRSKA_XnnTH.pdf} \caption{Neutron-neutron inverse susceptibility vs baryon density (Eqs. (\ref{xi})-(\ref{xii})) for the APR and Ska models at the indicated proton fractions $x$. The two models are visually indistinguishable at both temperatures and proton fractions.} \label{APRSKA_Xnn} \end{figure} \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APRSKA_XppTH.pdf} \caption{Proton-proton inverse susceptibility vs baryon density (Eqs. (\ref{xi})-(\ref{xii})). Just as in the case of $\chi_{nn}^{-1}$, the two models are indistinguishable at both temperatures and proton fractions.} \label{APRSKA_Xpp} \end{figure} \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APRSKA_XnpTH.pdf} \caption{Neutron-proton inverse susceptibility vs baryon density (Eqs. (\ref{xi}) and (\ref{xij})). Because $d\mu_n/dn_p = d\mu_p/dn_n$, only one of the cross derivatives is shown. Unlike $\chi_{nn}^{-1}$ and $\chi_{pp}^{-1}$, $\chi_{np}^{-1}$ exhibits strong model dependence at low densities.} \label{APRSKA_Xnp} \end{figure} In Fig. \ref{APRSKA_CvCp}, results for the specific heats at constant volume and at constant pressure, $C_V$ and $C_P$ (from Eqs. (\ref{cv}) and (\ref{cp})), are shown as functions of baryon density for the APR and Ska models at temperatures of $20$ and $50$ MeV. Beginning with the value of 1.5 characteristic of a dilute ideal gas, $C_V$ steadily decreases with increasing density as degeneracy begins to set in. As the EOS of the Ska model is stiffer than that of the APR model at high densities, the fall-off of $C_V$ with density is correspondingly more rapid. For both models, $C_V$ exhibits little dependence on proton fraction at both temperatures shown. Results for $C_P$, shown in the right panel of this figure, exhibit characteristic maxima that indicate the occurrence of a liquid-gas phase transition at low densities. At $n=n_{c}$ and $T=T_c$, ${dP}/{dn} \rightarrow 0$ (see Fig. \ref{APRSKA_IsoTherm} in which $P$ vs $n$ for the two models are shown at various temperatures), which causes $C_P$ (which is inversely proportional to $dP/dn$) to diverge.
For isospin symmetric matter at $T=20$ MeV, the maximum in $C_P$ is greater for the APR model than that for the Ska model. This feature can be understood in terms of $T=20$ MeV being closer to the $T_c=17.91$ MeV of the APR model than to the $T_c=15.12$ MeV of the Ska model. As for $C_V$, there is little dependence on proton fraction for $C_P$. Note that an abrupt jump in $C_P$ also occurs for the APR model at the densities for which a transition from the LDP to the HDP takes place due to the onset of pion condensation (see the inset in the right panel for its presence also at $T=20$ MeV). \begin{figure*}[!ht] \centering \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9.5cm]{PLOTS/APRSKA_Cv.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9.5cm]{PLOTS/APRSKA_Cp.pdf} \end{minipage} \vskip -0.5cm \caption{Left panel: Specific heat at constant volume, $C_V$ (from Eq. (\ref{cv})) vs baryon density. Right panel: Specific heat at constant pressure, $C_P$ (from Eq. (\ref{cp})) vs baryon density. } \label{APRSKA_CvCp} \end{figure*} \subsection{Limiting cases} In this section, we study the limiting cases when degenerate (low $T$, high $n$ such that $T/E_{F_i} \ll 1$) and non-degenerate (high $T$, low $n$ such that $T/E_{F_i} \gg 1$) conditions prevail. In these limits, compact analytical expressions for all thermodynamic variables can be obtained. From a comparison of the exact, albeit numerical, results with their analytical counterparts, the density and temperature ranges in which supernova matter is degenerate, partially degenerate or non-degenerate can be established. In addition, such a comparison also provides a consistency check on our numerical calculations of the thermal variables. Because of the varying concentrations of neutrons and protons (and leptons, considered in a later section) encountered, one or the other species may well lie in different regimes of degeneracy. \subsection*{Degenerate limit} In this case, we make use of Landau's Fermi Liquid Theory (FLT)~\cite{ll9, flt}, which allows for a model-independent discussion of the various thermodynamical functions. The temperature dependence of these functions is governed by the nature of the single particle spectrum. For the APR and Skyrme Hamiltonians, this dependence is characterized by a density dependent effective mass. In FLT, the entropy density $s$ and the number density $n$ maintain the same functional forms as those of a free Fermi gas. For a single-component gas, \begin{eqnarray} s & = & - \sum_{k,\sigma}\left[n_{k\sigma}~\mbox{ln}~n_{k\sigma}+(1-n_{k\sigma})~\mbox{ln}~(1-n_{k\sigma})\right] \label{flts} \\ n & = & \sum_{k,\sigma}n_{k\sigma} \quad {\rm and} \quad n_{k\sigma} = \frac{1}{e^{(\epsilon_{k\sigma}-\mu)/T}+1} \,, \label{fltn} \end{eqnarray} where $k$ is the wave number and $\sigma$ labels the spin degrees of freedom. Note that the quasiparticle energy $\epsilon_k$ is itself a function of the distribution function $n_k$. The distribution of particles close to the zero temperature Fermi energy $E_F$ determines the general behavior (degenerate versus non-degenerate) of the system. The low temperature expansion of $s$ is standard and to order $T$ yields \begin{eqnarray} s = \frac {\pi^2}{3} N(0)T = \frac {\pi^2}{k_Fv_F} nT \,, \label{sFLT} \end{eqnarray} where $N(0)$ is the density of states at the Fermi surface: \begin{eqnarray} N(0) = \sum_{\vec k}\delta (\epsilon_{k\sigma}-\mu) = \frac {3n}{k_Fv_F} \,.
\end{eqnarray} The quantity $v_F$ is the Fermi velocity: \begin{equation} v_F = \left.\frac{\partial \epsilon_{k\sigma}^o}{\partial k}\right|_{k=k_F} = \frac{k_F}{m^*}\,. \end{equation} The above equation serves as a definition of the quasiparticle effective mass $m^*$. Including the two spin degrees of freedom, $n=k_F^3/(3\pi^2)$ so that $N(0)=m^*k_F/\pi^2$. The entropy density in Eq.~(\ref{sFLT}) is often written as \begin{equation} s = 2anT = \frac {\pi^2}{2} n \left[\frac {T}{T_F} \right]\,. \end{equation} Above, the level density parameter $a$ and the Fermi temperature $T_F$ are \begin{eqnarray} a &=& \frac {\pi^2N(0)}{6n} = \frac {\pi^2}{2k_Fv_F} = \frac {\pi^2}{4T_F} \nonumber \\ T_F &=& \frac {1}{2}k_Fv_F = \frac {k_F^2}{2m^*} \,. \end{eqnarray} In normal circumstances, the leading correction to $s$ above is of order $(T/T_F)^2$ unless there exist soft collective modes which give rise to a $(T/T_F)^3 {\rm ln}~(T/T_F)$ behavior~\cite{flt}. The generalization to a multi-component gas is straightforward. The sums in Eqs. (\ref{flts}) and (\ref{fltn}) then run over particle species as well, so that the end result for the entropy density reads as \begin{eqnarray} s & = & \frac{\pi^2}{3}T\sum_i N_i(0) = 2T\sum_ia_in_i \label{sdeg} \\ \mbox{where}~~~ a_i & = & \frac{\pi^2}{2k_{Fi}v_{Fi}} = \frac{\pi^2}{2}\frac{m_i^*}{k_{Fi}^2} \label{levelden} \end{eqnarray} The rest of the thermal variables follow from thermodynamics, particularly the Maxwell relations. The thermal energy is obtained from \begin{eqnarray} \int dE &=& \int TdS = \frac{2}{n}\sum_i a_in_i\int TdT \nonumber \\ \Rightarrow E_{th} &=& \frac{T^2}{n}\sum_i a_in_i \label{edeg} \end{eqnarray} The thermal pressure arises from \begin{eqnarray} \int dp & = & \int_0^T\left(s-n\frac{ds}{dn}\right)dT \nonumber \\ & = & \sum_i\left[a_in_i-n\frac{d(a_in_i)}{dn}\right]T^2. \nonumber \end{eqnarray} Using $a_i=\frac{\pi^2}{2}\frac{m_i^*}{(3\pi^2n_i)^{2/3}}$, we get \begin{eqnarray} n\frac{d(a_in_i)}{dn} = a_in_i-\frac{2a_in_i}{3} \left(1-\frac{3}{2}\frac{n}{m_i^*}\frac{dm_i^*}{dn}\right) \nonumber \\ \end{eqnarray} This allows us to write the thermal pressure as \begin{eqnarray} P_{th} & = & \frac{2T^2}{3}\sum_ia_i n_i Q_i, \label{pdeg} \end{eqnarray} where $Q_i$ is given by Eq. (\ref{qi}). The thermal chemical potentials are obtained from \begin{eqnarray} \int d\mu_i &=& -\int \frac{ds}{dn_i}dT = -\frac{d}{dn_i}\left(\sum_ja_jn_j\right)T^2 \nonumber \\ \Rightarrow \mu_{ith} & = & -T^2\left[\frac{a_i}{3}+\sum_j\frac{n_ja_j}{m_j^*}\frac{dm_j^*}{dn_i}\right]. \label{mudeg} \end{eqnarray} Thus, the thermal parts of the inverse susceptibilities are \begin{eqnarray} \frac{d\mu_{i,th}}{dn_i} &=& -\frac{T^2}{3}\left(-\frac{2}{3}\frac{a_i}{n_i} +2\frac{a_i}{m_i^*}\frac{dm_i^*}{dn_i}\right. \nonumber \\ &&\left.+ 3\frac{n_ia_i}{m_i^*}\frac{d^2m_i^*}{dn_i^2} + 3\frac{n_ja_j}{m_j^*}\frac{d^2m_j^*}{dn_i^2}\right) \label{xiideg} \\ \frac{d\mu_{i,th}}{dn_j} &=& -\frac{T^2}{3}\left(\frac{a_i}{m_i^*}\frac{dm_i^*}{dn_j} +\frac{a_j}{m_j^*}\frac{dm_j^*}{dn_i}\right.
\nonumber \\ && \left.+ 3\frac{n_ia_i}{m_i^*}\frac{d^2m_i^*}{dn_idn_j} + 3\frac{n_ja_j}{m_j^*}\frac{d^2m_j^*}{dn_idn_j}\right) \label{xijdeg} \end{eqnarray} The free energy is given by \begin{equation} F_{th} = E_{th} - TS = -E_{th} = -T^2\sum_i a_i Y_i \label{fdeg} \end{equation} from which we get the symmetry energy \begin{eqnarray} S_{2,th} &=& \frac{T^2a}{9}\left[1+\frac{3}{2m^*}\frac{dm^*}{dx} -\frac{9}{4m^{*2}}\left(\frac{dm^*}{dx}\right)^2\right] \label{s2deg} \\ a &=& \frac{\pi^2}{2}\frac{m^*}{\hbar^2}\frac{1}{\left(\frac{3\pi^2n}{2}\right)^{2/3}} \end{eqnarray} where $m^*$ and $dm^*/dx$ are given by Eqs.(\ref{msym}) and (\ref{dmsym}) respectively. From the relation for the thermal energy, the specific heat at constant volume is \begin{equation} C_V = \frac{2T}{n}\sum_i a_i n_i = S = \frac{2E_{th}}{T}. \label{cvdeg} \end{equation} In the degenerate limit, to lowest order in temperature \begin{equation} C_P = C_V . \label{cpdeg} \end{equation} \subsection*{Non-degenerate limit} In the ND limit, the degeneracy (and hence the fugacity) is small, so that the FD functions can be expanded in a Taylor series about $z=0$: \begin{equation} F_{\alpha i} \simeq \Gamma(\alpha+1)\left(z_i-\frac{z_i^2}{2^{\alpha+1}}+\ldots\right) \,. \end{equation} Then the $F_{1/2}$ series is perturbatively inverted to get the fugacity in terms of the number density and the temperature: \begin{eqnarray} z_i & = &\frac{n_i\lambda_i^3}{\gamma} + \frac{1}{2^{3/2}}\left(\frac{n_i\lambda_i^3}{\gamma}\right)^2, \\ \mbox{where} ~~~\lambda_i & = &\left(\frac{2\pi\hbar^2}{m_i^*T}\right)^{1/2} \\ \mbox{and} ~~~~~\gamma & = & 2 ~~~~\mbox{(the spin orientations)}. \nonumber \end{eqnarray} Subsequently, these are used in the other FD integrals so that they, too, are expressed as explicit functions of the number density and the temperature: \begin{eqnarray} F_{3/2i} & = & \frac{3\pi^{1/2}}{4}\frac{n_i\lambda_i^3}{\gamma} \left[1+\frac{1}{2^{5/2}}\frac{n_i\lambda_i^3}{\gamma}\right] \\ F_{1/2i} & = & \frac{\pi^{1/2}}{2}\frac{n_i\lambda_i^3}{\gamma} \\ F_{-1/2i} & = & \pi^{1/2}\frac{n_i\lambda_i^3}{\gamma} \left[1-\frac{1}{2^{3/2}}\frac{n_i\lambda_i^3}{\gamma}\right] \end{eqnarray} Finally, we insert these into equations (\ref{eath})-(\ref{muth}) from which we get: \begin{eqnarray} E_{th} &=& \frac{1}{n}\sum_i\left\{\frac{3}{2}Tn_i \left[1+\frac{n_i}{4}\left(\frac{\pi\hbar^2}{m_i^*T}\right)^{3/2}\right] -\frac{3}{5}\mathcal{T}_{Fi}n_i\right\} \nonumber \\ \label{end} \\ P_{th} &=& \sum_i\left\{TQ_in_i \left[1+\frac{n_i}{4}\left(\frac{\pi\hbar^2}{m_i^*T}\right)^{3/2}\right] -\frac{2}{5}\mathcal{T}_{Fi}n_i\right\} \nonumber \\ \label{pnd} \\ S &=& \frac{1}{n}\sum_in_i\left\{\frac{5}{2}-\mbox{ln}\left[\left(\frac{2\pi\hbar^2}{m_i^*T}\right)^{3/2} \frac{n_i}{2}\right] \right. \nonumber \\ &+&\left.\ \frac{n_i}{8}\left(\frac{\pi\hbar^2}{m_i^*T}\right)^{3/2}\right\} \label{snd} \\ \mu_{ith} &=& -T \left\{-\mbox{ln}\left[\left(\frac{2\pi\hbar^2}{m_i^*T}\right)^{3/2}\frac{n_i}{2}\right] -\frac{n_i}{2}\left(\frac{\pi\hbar^2}{m_i^*T}\right)^{3/2}\right. \nonumber \\ && +\frac{3}{2}\frac{n_i}{m_i^*}\frac{dm_i^*}{dn_i} \left[1+\frac{n_i}{4}\left(\frac{\pi\hbar^2}{m_i^*T}\right)^{3/2}\right] \nonumber \\ && +\frac{3}{2}\frac{n_j}{m_j^*}\frac{dm_j^*}{dn_i} \left.\left[1+\frac{n_j}{4}\left(\frac{\pi\hbar^2}{m_j^*T}\right)^{3/2}\right]\right\} \nonumber \\ && -\mathcal{T}_{Fi}\left[1-\frac{3}{5}\frac{n_i}{m_i^*}\frac{dm_i^*}{dn_i}\right] +\frac{3}{5}\frac{n_j}{m_j^*}\frac{dm_j^*}{dn_i}\mathcal{T}_{Fj} \label{mund} \,. 
\end{eqnarray} Thus \begin{eqnarray} F_{th} &=& \sum_i\left\{TY_i\left[-1+\ln\left[\left(\frac{2\pi\hbar^2}{m_i^*T}\right)^{3/2} \frac{n_i}{2}\right]\right.\right. \nonumber \\ && \left.\left. +\frac{n_i}{4}\left(\frac{\pi\hbar^2}{m_i^*T}\right)^{3/2}\right] -\frac{3}{5}\mathcal{T}_{Fi}Y_i\right\} \label{fnd}\\ S_{2,th} &=& \frac{T}{8}\left\{8\left(1+\frac{3}{4m^*}\frac{dm^*}{dx}\right) \left[1+\frac{n}{8}\left(\frac{\pi\hbar^2}{m^*T}\right)^{3/2}\right]\right. \nonumber \\ && ~~~~~ -4\left[1+\frac{3}{8m^{*2}}\left(\frac{dm^*}{dx}\right)^2\right] \nonumber \\ && ~~~~~\left. +\frac{3n}{4m^*}\left(\frac{\pi\hbar^2}{m^*T}\right)^{3/2}\frac{dm^*}{dx} \left(1+\frac{1}{8m^*}\frac{dm^*}{dx}\right)\right\} \nonumber \\ && -\frac{\mathcal{T}_F}{3}\left(1+\frac{3}{2m^*}\frac{dm^*}{dx}\right) \label{s2nd} \\ \mathcal{T}_F &=& \left(\frac{3\pi^2n}{2}\right)^{2/3}\frac{\hbar^2}{2m^*} \end{eqnarray} \begin{eqnarray} \frac{d\mu_i}{dn_i} &=& \frac{T}{n_i}\left(1-\frac{3n_i}{m_i^*}\frac{dm_i^*}{dn_i}\right) \left[1+\frac{n_i}{2}\left(\frac{\pi \hbar^2}{m_i^*T}\right)^{3/2} +\frac{2}{3}\frac{\mathcal{T}_{Fi}}{T}\right] \nonumber \\ &+& O\left(\left(\frac{dm^*}{dn}\right)^2, \frac{d^2m^*}{dn^2}\right) \label{xiind} \\ \frac{d\mu_i}{dn_j} &=& -T\left\{\frac{3}{2m_i^*}\frac{dm_i^*}{dn_j} \left[1+\frac{n_i}{2}\left(\frac{\pi \hbar^2}{m_i^*T}\right)^{3/2} +\frac{2}{3}\frac{\mathcal{T}_{Fi}}{T}\right]\right. \nonumber \\ &+& \left.\frac{3}{2m_j^*}\frac{dm_j^*}{dn_i} \left[1+\frac{n_j}{2}\left(\frac{\pi \hbar^2}{m_j^*T}\right)^{3/2} +\frac{2}{3}\frac{\mathcal{T}_{Fj}}{T}\right]\right\} \nonumber \\ &+& O\left(\left(\frac{dm^*}{dn}\right)^2, \frac{d^2m^*}{dn^2}\right) \label{xijnd} \\ C_V & = & \frac{1}{n}\sum_i\left\{\frac{3}{2}n_i \left[1-\frac{n_i}{8}\left(\frac{\pi\hbar^2}{m_i^*T}\right)^{3/2}\right] \right\} . \label{cvnd} \end{eqnarray} The second derivatives and the squares of the first derivatives of the effective mass are neglected because they represent higher order corrections. For $C_P$, we need the temperature and density derivatives of the pressure in the non-degenerate limit, for which we use Eq. (\ref{cp}) in conjunction with \begin{equation} P = \sum_i\left[TQ_in_i\left\{1+\frac{n_i}{4}\left(\frac{\pi\hbar^2}{m_i^*T}\right)^{3/2}\right\}\right]+P_d \end{equation} to get \begin{eqnarray} \left.\frac{\partial P}{\partial T}\right|_n &=& \sum_i\left[Q_in_i\left\{1-\frac{n_i}{8}\left(\frac{\pi\hbar^2}{m_i^*T}\right)^{3/2}\right\}\right] \label{dpdtnd} \\ \left.\frac{\partial P}{\partial n}\right|_T &=& \sum_i\left[T\left\{1+\frac{n_i}{4}\left(\frac{\pi\hbar^2}{m_i^*T}\right)^{3/2}\right\} \left(\frac{\partial Q_i}{\partial n}n_i + Q_iY_i\right)\right] \nonumber \\ &+& \sum_i\left[TQ_i^2n_i\frac{Y_i}{4}\left(\frac{\pi\hbar^2}{m_i^*T}\right)^{3/2}\right] + \frac{dP_d}{dn} \,. \label{dpdnnd} \end{eqnarray} \subsection*{Results} This section is devoted to comparisons of results from the analytical formulas obtained in the previous section for the limiting cases with those from the exact calculations presented earlier. In addition to providing us with physical insights about the general trends observed, these comparisons will allow us to delineate the range of densities for which matter with varying isospin asymmetry and temperature can be regarded as either degenerate or non-degenerate. We will restrict our comparisons to results from the APR model only, as those for the Ska model yield similar conclusions.
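A rough map of where each regime applies follows directly from the ratios $T/T_{F_i}$. The sketch below uses free-gas Fermi temperatures ($m_i^* = m$; with interactions, $T_{F_i}=\hbar^2k_{F_i}^2/2m_i^*$ shifts the boundaries somewhat) and illustrates how, at a fixed temperature, the dilute proton component of neutron-rich matter can be non-degenerate while the neutrons are already degenerate:
\begin{verbatim}
import numpy as np

HB2_2M = 20.72                                   # hbar^2/2m, MeV fm^2

def fermi_temps(n, x):
    """(T_Fn, T_Fp) in MeV at baryon density n (fm^-3), proton fraction x."""
    kn2 = (3.0 * np.pi**2 * n * (1.0 - x))**(2.0 / 3.0)
    kp2 = (3.0 * np.pi**2 * n * x)**(2.0 / 3.0)
    return HB2_2M * kn2, HB2_2M * kp2

T = 20.0
for n in (0.016, 0.16, 0.48):
    tfn, tfp = fermi_temps(n, x=0.1)
    print(f"n = {n:.3f} fm^-3:  T/T_Fn = {T/tfn:5.2f},  T/T_Fp = {T/tfp:5.2f}")
# T/T_F >> 1: non-degenerate; T/T_F << 1: degenerate; O(1): neither limit.
\end{verbatim}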
\subsection*{Results} This section is devoted to comparisons of results from the analytical formulas obtained in the previous section for the limiting cases with those from the exact calculations presented earlier. In addition to providing us with physical insights about the general trends observed, these comparisons will allow us to delineate the range of densities for which matter with varying isospin asymmetry and temperature can be regarded as either degenerate or non-degenerate. We will restrict our comparisons to results from the APR model only as those for the Ska model yield similar conclusions. \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APR_EthoA.pdf} \caption{Thermal energy per particle (Eq. (\ref{eath})) and limiting cases (Eqs. (\ref{edeg}) and (\ref{end})) vs baryon density at the indicated temperatures and proton fractions.} \label{APR_Eth} \end{figure} In Fig. \ref{APR_Eth}, we show the thermal energies $E_{th}$ as a function of baryon density $n$ for $T=20$ MeV (left panel) and 50 MeV (right panel) for proton fractions of $x=0.5$ and 0.1, respectively. The $T^2$ dependence implied by the degenerate approximation in Eq. (\ref{edeg}) is borne out by the exact results at high densities. Also, the larger the temperature, the larger is the density at which the degenerate approximation approaches the exact result. The effective masses introduce an additional density dependence to the $\sim n^{-2/3}$ behavior characteristic of a free gas of degenerate fermions for which $E_{th}$ would be larger than that with momentum dependent interactions. Note that in the degenerate limit, both the approximate and exact results are nearly $x$-independent. With increasing temperature, the non-degenerate approximation in Eq. (\ref{end}) reproduces the exact results, with the agreement extending up to nuclear density and even slightly beyond. As for a free Boltzmann gas, the thermal energy is predominantly linear in $T$ in the non-degenerate limit and is only slightly modified by the density dependence of the effective masses. The $\sim n^{-2/3}$ fall off with density arises from the last term in Eq. (\ref{eath}) (the degeneracy energy of fermions at $T=0$) with sub-dominant corrections from the density dependence of the effective masses. Effects of isospin asymmetry are somewhat more pronounced in the non-degenerate case when compared to the degenerate limit. Results for highly asymmetric matter from Eq. (\ref{end}) begin to deviate from the exact results at lower densities than for symmetric matter because the two components are in different regimes of degeneracy. \begin{figure}[!h] \includegraphics[width=8.5cm]{PLOTS/APR_SymE_MxT_Lim.pdf} \caption{Thermal contributions to the symmetry energy, $S_{2,th}$, from Eq. (\ref{s2t}) compared with its limiting cases (Eqs. (\ref{s2deg}) and (\ref{s2nd})) at the indicated temperatures. } \label{APR_S2_Lim} \end{figure} In Fig. \ref{APR_S2_Lim}, thermal contributions to the symmetry energy, $S_{2,th}$ from Eq. (\ref{s2t}) and its limiting cases from Eqs. (\ref{s2deg}) and (\ref{s2nd}) for the APR model are shown as functions of baryon density at temperatures of $20$ and $50$ MeV, respectively. Agreement between the degenerate limit and the exact result is obtained around $3n_0$ for $T=20$ MeV and at much larger densities ($n>1~{\rm fm}^{-3}$) for $T=50$ MeV. The non-degenerate limit coincides with the exact result for densities less than $\approx0.5n_0$ at $T=20$ MeV. At $T=50$ MeV the non-degenerate limit has a greater range of baryon densities for which it agrees with the exact result, reaching up to 1-1.5~$n_0$. A noteworthy feature in this figure is that both the exact and the degenerate result for $S_{2,th}$ become negative after a certain baryon density. Note that for free fermions, $S_{2,th}$ in Eq. (\ref{s2deg}) is strictly positive, pointing to the fact that derivatives of $m^*$ with respect to proton fraction $x$ are at the root of driving $S_{2,th}$ negative.
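To make this mechanism explicit, Eq. (\ref{s2deg}) can be evaluated directly. The minimal sketch below is our own illustration (the interface and units with $\hbar c = 197.327$ MeV fm are assumptions; $m^*$ and $dm^*/dx$ at $x=1/2$ must be supplied by the EOS model):
\begin{verbatim}
import numpy as np

HBARC = 197.327  # MeV fm (assumed value of hbar*c)

def s2_thermal_deg(n, T, mstar, dmstar_dx):
    """Degenerate-limit thermal symmetry energy, Eq. (s2deg) [MeV].

    n          : baryon density [fm^-3]
    T          : temperature [MeV]
    mstar      : effective mass at x = 1/2 [MeV]
    dmstar_dx  : d(m*)/dx at x = 1/2 [MeV]
    """
    kf2 = (1.5 * np.pi**2 * n)**(2.0 / 3.0)          # (3 pi^2 n/2)^(2/3) [fm^-2]
    a = 0.5 * np.pi**2 * mstar / (HBARC**2 * kf2)    # level-density parameter [1/MeV]
    r = dmstar_dx / mstar                            # dimensionless derivative
    return (T**2 * a / 9.0) * (1.0 + 1.5 * r - 2.25 * r**2)
\end{verbatim}
The bracket $1 + (3/2)r - (9/4)r^2$, with $r = (dm^*/dx)/m^*$, changes sign once $|r|$ is sufficiently large, which is precisely the mechanism driving $S_{2,th}$ negative in Fig. \ref{APR_S2_Lim}.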
In what follows, we examine the rate at which the identity $\Delta F_{th} = \sum_i S_{i,th}$ with $i$ even (odd terms cancelling) is fulfilled. The left panel of Fig. \ref{APR_Asym_S12_Double} shows the difference of the exact free energies $\Delta F_{th} = F_{th}(n,x=0,T) - F_{th}(n,x=0.5,T)$ at $T=20$ MeV. Also shown are contributions from various $S_{i,th}$ at the same temperature. To be specific, we consider only the degenerate limit results for $S_{i,th}$ in this comparison. It turns out that only $S_2$ turns negative at a finite baryon density, whereas $S_4,~S_6,\cdots$ which contain higher derivatives of $m^*$ with respect to the proton fraction $x$ are all positive, with magnitudes that decrease very slowly. We have calculated up to thirty terms in $S_{i,th}$ and show how their sums compare with $\Delta F_{th}$. It is clear that the convergence to the exact result is relatively poor, in contrast to the rapid convergence of symmetry energies at zero temperature (see Fig. \ref{APR_SymE}). The situation is better, although by no means impressive, for $\Delta F_{th} = F_{th}(n,x=0.02,T) - F_{th}(n,x=0.5,T)$ at $T=20$ MeV. These results indicate that the happenstance of rapid convergence of symmetry energies at zero temperature cannot automatically be taken to hold for their thermal parts as well. It should be noted, however, that the latter represent relatively small corrections to the total symmetry energy where the main contribution is due to the zero temperature component. The asymptotic nature of the Taylor series expansion of $\Delta F_{th}$ in even powers of $(1-2x)$ at finite temperature is at the origin of such poor convergence for large isospin asymmetries. Exact, albeit numerical, calculations of the Fermi integrals are necessary for high isospin asymmetry. \begin{figure}[!h] \includegraphics[width=8.5cm]{PLOTS/APR_Asym_S12_Double.pdf} \caption{Thermal symmetry free energies $S_{i,th}$ and their contributions to $\Delta F_{th} = \sum_i S_{i,th}$ as defined in the insets.} \label{APR_Asym_S12_Double} \end{figure} \begin{figure}[hbt] \includegraphics[width=9cm]{PLOTS/APR_Efree_Th_wLim.pdf} \vskip -0.5cm \caption{Thermal free energy (Eq. (\ref{fden})) and its limiting cases (Eqs. (\ref{fdeg}) and (\ref{fnd})) vs baryon density at the indicated proton fractions and temperatures. } \label{APR_MxT_Fth} \end{figure} In Fig. \ref{APR_MxT_Fth}, we show results for the thermal free energy from Eq. (\ref{fden}) and its limiting cases from Eqs. (\ref{fdeg}) and (\ref{fnd}) as functions of baryon density. The degenerate limit and the exact result of $F_{th}$ are in agreement for densities greater than $1.5n_0$ for $T=20$ MeV and only for much larger densities ($n\geq 5n_0$) for $T=50$ MeV. The convergence between the degenerate limit and the exact result of $F_{th}$ is independent of proton fraction for both temperatures. The non-degenerate limit begins to differ from the exact result at around $n_0$ for $T=20$ MeV and about $2n_0$ for $T=50$ MeV. For both temperatures shown, the convergence between the non-degenerate limit and the exact solution is nearly independent of proton fraction. \begin{figure}[h] \includegraphics[width=9cm]{PLOTS/APR_Pth.pdf} \caption{Thermal pressure (Eq. (\ref{pth})) and limiting cases (Eqs. (\ref{pdeg}) and (\ref{pnd})) vs baryon density.} \label{APR_Pth} \end{figure} Results for the exact thermal pressures $P_{th}$ (from Eq. (\ref{pth})) and those of its limiting cases (Eqs. (\ref{pdeg}) and (\ref{pnd})) are presented in Fig. \ref{APR_Pth} for the APR model.
For both temperatures considered, the initial rise of $P_{th}$ (in the non-degenerate regime) is linear with slope $\sim T$, modulated by the factors $Q_i$, highlighting the role of density dependent effective masses relative to a free Fermi gas for which the slope would be $T$. The linear rise is halted as matter begins to become increasingly degenerate and effective mass corrections begin to gain importance. Quantitative agreement of the exact results with those from the limiting form of the degenerate expression is, however, reached at densities much larger than shown in this figure. Note that isospin asymmetry effects are more pronounced for $P_{th}$ than for $E_{th}$ except at very low and very high densities. Thermal contributions to the neutron and proton chemical potentials $\mu_{n,th}$ and $\mu_{p,th}$ versus baryon density $n$ are shown in Fig. \ref{APR_MuTH} in which comparisons between results from the exact (Eq. (\ref{muth})) and limiting cases (Eqs. (\ref{mudeg}) and (\ref{mund})) are made. For $\mu_{n,th}$ (left panels), good agreement is found between the non-degenerate limit and the exact result for densities up to $n_0$ for $T=20$ MeV and up to $2n_0$ for $T=50$ MeV. Results in the degenerate limit rapidly approach the exact results, unlike in the cases of $E_{th}$ and $P_{th}$. Note that this level of quantitative agreement, in both non-degenerate and degenerate cases, required derivatives of the density dependent effective masses (Eqs. (\ref{mudeg}) and (\ref{mund})). Isospin asymmetry effects are not very pronounced for $\mu_{n,th}$. The thermal contribution to the proton chemical potential $\mu_{p,th}$ (right panels) exhibits a greater difference between isospin symmetric and asymmetric matter when compared to $\mu_{n,th}$. The agreement between the exact results for $\mu_{p,th}$ and those of the limiting cases is much the same as it was for $\mu_{n,th}$. Both the degenerate and non-degenerate limits agree to a greater degree for the higher temperature and for isospin symmetric matter. \\ \begin{figure*}[!ht] \centering \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9cm]{PLOTS/APR_MuNth.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9cm]{PLOTS/APR_MuPth.pdf} \end{minipage} \vskip -0.5cm \caption{Neutron (left) and proton (right) thermal chemical potentials (Eq. (\ref{muth})) with limits (Eqs. (\ref{mudeg}) and (\ref{mund})) vs baryon density.} \label{APR_MuTH} \end{figure*} \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APR_SoA.pdf} \caption{Entropy per baryon (Eq. (\ref{entr})) and its limiting cases (Eqs. (\ref{sdeg}) and (\ref{snd})) vs baryon density at the indicated proton fractions and temperatures.} \label{APR_SoA} \end{figure} In Fig. \ref{APR_SoA}, we present the exact results for the entropy per baryon (Eq. (\ref{entr})) and its limiting cases (Eqs. (\ref{sdeg}) and (\ref{snd})) as functions of baryon density $n$. The exact results show little difference between isospin symmetric and asymmetric matter. A comparison of the results in the two panels reveals the range of densities over which the non-degenerate and degenerate approximations reproduce the exact results. The agreement between the exact results and those of the limiting cases is almost independent of proton fraction, although what little difference there is points to isospin symmetric matter having a slightly better agreement.
\begin{figure}[!ht] \includegraphics[width=10cm]{PLOTS/APR_Xnn.pdf} \caption{Neutron-neutron inverse susceptibility vs baryon density (Eqs. (\ref{xi}) and (\ref{xii})) for the APR model and its limiting cases at the indicated proton fractions $x$. The degenerate limit (Eq. (\ref{xiideg})) and the non-degenerate limit (Eq. (\ref{xiind})), see inset, are both shown. } \label{APR_Xnn} \end{figure} \begin{figure}[!hb] \includegraphics[width=10cm]{PLOTS/APR_Xpp.pdf} \caption{Proton-proton inverse susceptibility vs baryon density (Eqs. (\ref{xi}) and (\ref{xii})) and its limiting cases. Both the exact result and its degenerate limit (Eq. (\ref{xiideg})) are shown. The inset compares the non-degenerate limit (Eq. (\ref{xiind})) with the exact result.} \label{APR_Xpp} \end{figure} \begin{figure}[!h] \includegraphics[width=10cm]{PLOTS/APR_Xpn.pdf} \caption{Mixed inverse susceptibilities (Eqs. (\ref{xi}) and (\ref{xij})) and the limiting cases (Eqs. (\ref{xijdeg}) and (\ref{xijnd})) vs baryon density. As $d\mu_n/dn_p = d\mu_p/dn_n$, only one of the mixed derivatives is shown.} \label{APR_Xnp} \end{figure} In Figs. \ref{APR_Xnn}, \ref{APR_Xpp}, and \ref{APR_Xnp}, thermal contributions to the inverse susceptibilities $\chi^{-1}_{nn}$, $\chi^{-1}_{pp}$, and $\chi^{-1}_{np}$ (Eqs. (\ref{xi}), (\ref{xii}) and (\ref{xij})) are shown together with their limiting cases in Eqs. (\ref{xiideg}), (\ref{xijdeg}), (\ref{xiind}), and (\ref{xijnd}) (the non-degenerate limits are in the insets of all three figures). Note that where expected, the degenerate and non-degenerate approximations provide an accurate description of the exact results. It is intriguing that for densities slightly above the nuclear density, neither of the approximations works very well. \begin{figure*}[!ht] \centering \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9.5cm]{PLOTS/APR_Cv.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9.5cm]{PLOTS/APR_Cp.pdf} \end{minipage} \vskip -0.5cm \caption{Left: Specific heat at constant volume from Eq. (\ref{cv}) and its limiting cases from Eqs. (\ref{cvdeg}) and (\ref{cvnd}). Right: Specific heat at constant pressure from Eq. (\ref{cp}) and its limiting cases (Eqs. (\ref{cpdeg}) and (\ref{cp}) using Eqs. (\ref{cvnd}), (\ref{dpdtnd}), and (\ref{dpdnnd})). } \label{APR_CvCp_deg} \end{figure*} In the left panels of Fig. \ref{APR_CvCp_deg}, we present our results for the specific heat at constant volume from Eq. (\ref{cv}) and its limiting cases from Eqs. (\ref{cvdeg}) and (\ref{cvnd}) for the APR model. Results shown are for isospin symmetric ($x=0.5$) and neutron rich matter ($x=0.1$) at temperatures of 20 and 50 MeV, respectively. The degenerate limit (Eq. (\ref{cvdeg})) converges with the exact result for densities larger than $0.4~{\rm fm}^{-3}$ at $T=20$ MeV and for densities larger than $1~{\rm fm}^{-3}$ for $T=50$ MeV with little to no dependence on proton fraction. As expected, the non-degenerate limit holds at low densities, the agreement with the exact result extending to slightly above $n_0$ at the higher temperature. The extent of disagreement is somewhat dependent on the proton fraction with neutron rich matter differing from the exact result at slightly lower baryon densities than for symmetric matter. The right panels of Fig. \ref{APR_CvCp_deg} show the specific heat at constant pressure from Eq. (\ref{cp}) and its limiting cases from Eqs. (\ref{cpdeg}) and (\ref{cp}) using Eqs.
(\ref{cvnd}), (\ref{dpdtnd}), and (\ref{dpdnnd}) as functions of baryon density. The degenerate limit of $C_P$ (Eq. (\ref{cpdeg})) provides good agreement with the exact solution at densities greater than about $n_0$ at $T=20$ MeV. At $T=50$ MeV, the degenerate limit of $C_P$ provides a good approximation to the exact result at densities greater than $2n_0$, but does not converge until large densities ($n>1~{\rm fm}^{-3}$). The non-degenerate limit of $C_P$ in Eq. (\ref{cp}) using Eqs. (\ref{cvnd}), (\ref{dpdtnd}), and (\ref{dpdnnd}) is in agreement with the exact solution up to $n_0$ for $T=20$ MeV and up to almost $2n_0$ for $T=50$ MeV. However, the liquid-gas phase transition pushes the exact solution to larger $C_P$ when compared to the effect of this transition on the degenerate limit. Even including the effects of the liquid-gas phase transition, the agreement between the non-degenerate limit and the exact solution is very good. The rate of convergence between the two limits and the exact solution is independent of proton fraction. \subsection*{Results for leptons} Here we present results of our calculations for the contribution from leptons to the energy $E_e$ per baryon and the electron chemical potential $\mu_e$ as functions of baryon density $n$. Other state variables follow in a straightforward manner and are summarized in Appendix C. We present the exact results obtained using the scheme in Ref. \cite{jel} (Eqs. (\ref{eejel}) and (\ref{muejel}), labelled JEL in figures) and those of the relativistic approach with mass corrections (Eqs. (\ref{eerel}) and (\ref{muerel}), labelled Rel in figures). Comparisons are made both at $T=0$ and 50 MeV, and in isospin symmetric and neutron rich matter. In Fig. \ref{LEP_E}, we display the energy per baryon $E_e$ of electrons and positrons as a function of baryon density $n$. The two approaches (JEL and Rel) are in complete agreement at all $n$ for both temperatures and for isospin symmetric and asymmetric matter. Isospin symmetric matter provides a larger contribution to the energy of leptons than neutron rich matter. This is expected: as the system is charge neutral, the number of leptons is set by the number of protons. For both temperatures considered, the contribution from positrons is negligible. \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/Electron_EoA.pdf} \caption{Contribution to energy per particle from leptons vs baryon density at the indicated temperatures and proton fractions. The solid lines are obtained using the approximate analytical expression Eq. (\ref{eerel}) and the crosses correspond to a full numerical calculation using Eq. (\ref{eejel}). } \label{LEP_E} \end{figure} The electron chemical potential $\mu_e$ is shown as a function of baryon density $n$ in Fig. \ref{LEP_Mu}. As was the case with the contribution to energy from leptons, the two approaches (JEL and Rel) are in complete agreement for all baryon densities at both temperatures, and for both isospin symmetric and asymmetric matter. \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/Electron_Mu.pdf} \caption{Electron chemical potential vs baryon density at the indicated temperatures and proton fractions. The solid lines are obtained using the approximate analytical expression Eq. (\ref{muerel}) and the crosses correspond to a full numerical calculation using Eq. (\ref{muejel}). The positron chemical potential has the same magnitude but opposite sign.
} \label{LEP_Mu} \end{figure} \section{EQUATION OF STATE WITH A PION CONDENSATE} We have seen in earlier sections that the APR Hamiltonian density incorporates a phase transition involving a neutral pion condensate and that at the transition density several of the state variables exhibited a jump. In this section, we discuss how an equation of state that satisfies the physical requirements of stability is constructed in the presence of this first-order phase transition. Mechanical stability requires that the inequality \begin{equation} \frac{dP}{dn} \geq 0 \end{equation} is always satisfied. However, in the case of the APR model, the transition from the LDP to the HDP is accompanied by a decrease in pressure, pointing to a negative incompressibility. We deal with this unphysical behavior by means of a Maxwell construction, which takes advantage of the thermodynamic equilibrium conditions \begin{eqnarray} P_L(n_L) &=& P_H(n_H) \label{co1}\\ \mu_L(n_L) &=& \mu_H(n_H) \label{co2} \end{eqnarray} to establish the mixed-phase region such that \begin{equation} \frac{dP}{dn} = 0 \,. \end{equation} The entropy density is discontinuous across the region (even though it contains none of the terms in the Hamiltonian that drive the phase change), thus generating a latent heat \begin{equation} l = T\left[s_H(n_H) - s_L(n_L)\right] \end{equation} which signifies a first-order transition. The numerical implementation of the coexistence conditions in Eqs. (\ref{co1})-(\ref{co2}) is accomplished as in Ref. \cite{LLPR} where the average chemical potential (as electrons contribute similarly in both phases) \begin{equation} \mu = Y_n\mu_n +Y_p(\mu_p+\mu_e) \label{muave} \end{equation} and the function \begin{equation} Q = n_t\mu - P \end{equation} are expanded in a Taylor series about $n_t$ (the density at which the transition from the LDP to the HDP occurs) to first and second order, respectively, yielding \begin{eqnarray} \mu(n) &=& \mu(n_t) + (n-n_t)\left.\frac{d\mu}{dn}\right|_{n_t} \\ Q(n) &=& Q(n_t) + \frac{(n-n_t)^2}{2}\left.\frac{d\mu}{dn}\right|_{n_t} \,. \end{eqnarray} Then the LDP and the HDP counterparts are set equal, as stipulated by equilibrium, forming a system of two equations whose solution gives the densities that define the boundary of the coexistence region \begin{eqnarray} n_L &=& n_t + \frac{\mu_H(n_t)-\mu_L(n_t)} {\mu_L'(n_t)^{1/2}\left[\mu_L'(n_t)^{1/2}+\mu_H'(n_t)^{1/2}\right]} \label{nl}\\ n_H &=& n_t + \frac{\mu_L(n_t)-\mu_H(n_t)} {\mu_H'(n_t)^{1/2}\left[\mu_L'(n_t)^{1/2}+\mu_H'(n_t)^{1/2}\right]} \label{nh} \end{eqnarray} The primes ($\prime$) denote derivatives with respect to the number density $n$. These results serve as initial guesses which are further improved by adopting an iterative procedure. We define the functions \begin{eqnarray} f(n_L,n_H) &=& P_L(n_L) - P_H(n_H) \\ g(n_L,n_H) &=& \mu_L(n_L) - \mu_H(n_H) \end{eqnarray} and expand to first order in a Taylor series about the $m^{th}$ iterative solution \begin{eqnarray} f(n_L^{m+1},n_H^{m+1}) &=& f(n_L^m,n_H^m) + (n_L^{m+1} - n_L^m)\left.\frac{\partial f}{\partial n_L}\right|_{n_L^m} \nonumber \\ &+& (n_H^{m+1} - n_H^m)\left.\frac{\partial f}{\partial n_H}\right|_{n_H^m} \label{iter1} \\ g(n_L^{m+1},n_H^{m+1}) &=& g(n_L^m,n_H^m) + (n_L^{m+1} - n_L^m)\left.\frac{\partial g}{\partial n_L}\right|_{n_L^m} \nonumber \\ &+& (n_H^{m+1} - n_H^m)\left.\frac{\partial g}{\partial n_H}\right|_{n_H^m}.
\label{iter2} \end{eqnarray} Equations (\ref{iter1}) and (\ref{iter2}) are linearly independent and can thus be solved simultaneously for $n_L$ and $n_H$. If we assume that $n_L^{m+1}$ and $n_H^{m+1}$ are the ``true'' solutions of the system (i.e. $f(n_L^{m+1},n_H^{m+1}) = g(n_L^{m+1},n_H^{m+1}) = 0$), then \begin{eqnarray} n_L^{m+1} &=& n_L^m + \frac{f(n_L^m,n_H^m)\left.\frac{\partial g}{\partial n_H}\right|_{n_H^m} -g(n_L^m,n_H^m)\left.\frac{\partial f}{\partial n_H}\right|_{n_H^m}} {\left.\frac{\partial f}{\partial n_H}\right|_{n_H^m} \left.\frac{\partial g}{\partial n_L}\right|_{n_L^m} -\left.\frac{\partial f}{\partial n_L}\right|_{n_L^m} \left.\frac{\partial g}{\partial n_H}\right|_{n_H^m}} \nonumber \\ \\ n_H^{m+1} &=& n_H^m + \frac{f(n_L^m,n_H^m)\left.\frac{\partial g}{\partial n_L}\right|_{n_L^m} -g(n_L^m,n_H^m)\left.\frac{\partial f}{\partial n_L}\right|_{n_L^m}} {\left.\frac{\partial f}{\partial n_L}\right|_{n_L^m} \left.\frac{\partial g}{\partial n_H}\right|_{n_H^m} -\left.\frac{\partial f}{\partial n_H}\right|_{n_H^m} \left.\frac{\partial g}{\partial n_L}\right|_{n_L^m}} \nonumber \\ \end{eqnarray} This process is repeated until the differences $n^{m+1} - n^m$ fall below a prescribed tolerance.
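The update formulas above translate directly into a two-dimensional Newton iteration. The following sketch is illustrative only: the callables $P_{L,H}$ and $\mu_{L,H}$, returning the pressure and the average chemical potential of Eq. (\ref{muave}) in each phase, are a hypothetical interface to be supplied by the EOS, and finite-difference derivatives stand in for analytic ones:
\begin{verbatim}
def coexistence_densities(P_L, P_H, mu_L, mu_H, nL0, nH0,
                          tol=1e-10, itmax=100, h=1e-6):
    """Newton iteration for (n_L, n_H) built on Eqs. (iter1)-(iter2)."""
    def d(f, x):  # central finite-difference derivative
        return (f(x + h) - f(x - h)) / (2.0 * h)

    nL, nH = nL0, nH0
    for _ in range(itmax):
        f = P_L(nL) - P_H(nH)                  # pressure-equality residual
        g = mu_L(nL) - mu_H(nH)                # chemical-potential residual
        dfL, dfH = d(P_L, nL), -d(P_H, nH)     # df/dn_L, df/dn_H
        dgL, dgH = d(mu_L, nL), -d(mu_H, nH)   # dg/dn_L, dg/dn_H
        det = dfL * dgH - dfH * dgL
        dnL = (f * dgH - g * dfH) / (-det)     # update for n_L
        dnH = (f * dgL - g * dfL) / det        # update for n_H
        nL, nH = nL + dnL, nH + dnH
        if max(abs(dnL), abs(dnH)) < tol:      # prescribed tolerance
            break
    return nL, nH
\end{verbatim}
Starting from the analytic guesses of Eqs. (\ref{nl})-(\ref{nh}), the iteration converges quadratically in the neighborhood of the solution, as is characteristic of Newton's method.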
\subsection*{Results} \begin{figure}[hbt] \begin{center} \includegraphics[width=10cm]{PLOTS/APR_Trans.pdf} \end{center} \vskip -1cm \caption{The curve labeled $n_t$ shows the trajectory in the $n-Y_p$ plane along which the transition from the LDP to the HDP occurs according to Eq.~(\ref{transition}). Results for $n_t$ are a reproduction of those in Fig. 7 of Ref.~\cite{apr}. The crosses show results from our polynomial fit in Eq. (\ref{polfit}). Curves labeled $n_L$ and $n_H$ indicate the mixed-phase boundary at zero and 50 MeV temperatures, respectively, determined by a Maxwell construction as described in the text.} \label{HDPline_wfit} \end{figure} The transition densities between the LDP and HDP phases (solid curve, from Eq.~(\ref{transition}); crosses, from the polynomial fit of Eq. (\ref{polfit})) are shown in Fig. \ref{HDPline_wfit} as functions of proton fraction. In addition, results from the determination of the mixed phase region (curves labeled $n_L$ and $n_H$) using a Maxwell construction at zero and 50 MeV are presented as functions of proton fraction. The range of baryon densities in the mixed phase region depends only weakly on the proton fraction and temperature. As the neutral pion condensate is mainly driven by density effects in the APR model, effects of temperature in the range considered are small. \begin{figure*}[!ht] \centering \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9cm]{PLOTS/APRskaJEL_MxT_P.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9cm]{PLOTS/APRskaJEL_MxT_Mu.pdf} \end{minipage} \vskip -0.5cm \caption{Pressure (left) (Eq. (\ref{pth})) and average chemical potential (right) (Eq. (\ref{muave})) for the APR (solid) and Ska (dashed) models at the indicated proton fractions and temperatures. The flat portions of the APR curves are due to the Maxwell construction for the mixed-phase region, the boundaries of which are given by Eqs. (\ref{nl})-(\ref{nh}).} \label{APR_MxT_PMu} \end{figure*} In Fig. \ref{APR_MxT_PMu}, we show the total pressure (left panels) and the average chemical potential (right panels) as functions of baryon density using a Maxwell construction. Results of our calculations are shown for $Y_p=0.1, 0.3,~{\rm and}~ 0.5$, and at $T=20$ and 50 MeV, respectively. The mixed phase region exists in the horizontal portions of the pressure and chemical potential curves. For both $P$ and $\mu$, the abrupt transitions into and out of the mixed phase regions after Maxwell construction are evident. \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APRjel_EfreeMeV.pdf} \caption{Free energy (Eq. (\ref{totEfree})) vs baryon density for the APR (solid) and Ska (dashed) models. Results for $Y_p=0.1~{\rm and}~ 0.4$ at $T=20$ MeV (left) and 50 MeV (right) are presented. The onset of pion condensation appears as a cusp at the appropriate densities.} \label{APRska_EfreeMeV} \end{figure} A comparison between the free energies of APR and Ska is presented in Fig. \ref{APRska_EfreeMeV}. The two models are in close agreement up to $n \sim 0.2$ fm$^{-3}$, but at higher densities APR is softer due to pion condensation. \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APRskaJEL_sMu.pdf} \caption{Total entropy of baryons (Eq. (\ref{entr})) and leptons (Eq. (\ref{serel})) vs average chemical potential (Eq. (\ref{muave})) at the temperatures and proton fractions shown after Maxwell construction.} \label{APR_sMu} \end{figure} In Fig. \ref{APR_sMu}, the total entropy as a function of the average chemical potential is shown for representative proton fractions at temperatures of 20 and 50 MeV, respectively. The vertical portions in these curves show the entropy jumps across the mixed phase region after Maxwell construction. \begin{figure*}[!ht] \centering \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9cm]{PLOTS/APRjel_Cv_x4.pdf} \end{minipage} \begin{minipage}[b]{0.49\linewidth} \centering \includegraphics[width=9cm]{PLOTS/APRjel_Cp_x4.pdf} \end{minipage} \vskip -0.5cm \caption{Contributions from nucleonic and leptonic constituents for the specific heat densities at constant volume (left panels) and constant pressure (right panels). The nucleonic contributions are from Eqs. (\ref{cv})-(\ref{cp})) for the APR (solid) and Ska (dashed) models and leptonic contributions (dotted) are from (Eqs. (\ref{cve})-(\ref{cpe})). } \label{APRskaJEL_CvCp_cont} \end{figure*} \begin{figure}[!ht] \includegraphics[width=9cm]{PLOTS/APRjel_Ent_x4.pdf} \caption{Nucleonic (Eq. (\ref{entr})) and leptonic (dotted) (Eq. (\ref{se})) contributions for the total entropy density for the APR (solid) and Ska (dashed) models at the indicated proton fractions and temperatures. } \label{APR_sCont} \end{figure} In Fig. \ref{APRskaJEL_CvCp_cont}, we present the individual contributions of nucleons and leptons to the total specific heat densities at constant volume and pressure. The contribution from leptons was obtained using the JEL scheme (see Appendix D) while the nucleonic contribution was calculated by adapting the general results of section V (Eqs. (\ref{cv})-(\ref{cp})) to APR and Ska. The two models are in agreement for densities up to $n_0$, whereas for larger densities, the specific heat densities of APR are higher (both $c_V$ and $c_P$). Except for the highest densities shown in these figures, the dominant contributions arise from nucleons. The individual contributions of nucleons and leptons to the total entropy density for the APR and Ska models are displayed in Fig. \ref{APR_sCont}. Note that in the degenerate limit $s\simeq c_V \simeq c_P$. As with the specific heat densities, the largest contributions are from nucleons for densities of relevance in core-collapse supernovae.
\begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APR_Tvn_Scont_deg_x14.pdf} \caption{Curves of constant entropy in the $(T,n)$-plane for the APR model. Solid curves show results from exact numerical calculations and the crosses show results from the degenerate limit expression in Eq. (\ref{T1}) at the indicated proton fractions.} \label{APR_Tvn_Isent} \end{figure} \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APR_PTHvn_Scont_deg_x14.pdf} \caption{Isentropes in the $(P_{th},n)$-plane for the APR model at the indicated proton fractions. Solid curves are from the exact numerical calculations. Results (crosses) from the degenerate limit expression are from Eq. (\ref{PS}). } \label{APR_PTHvn_Isent} \end{figure} \begin{figure}[!h] \includegraphics[width=9cm]{PLOTS/APR_MUTHvn_Scont_deg_x14.pdf} \caption{Isentropes in the $(\mu_{th},n)$-plane for the APR model. Solid curves are results from the exact numerical calculations and the crosses are from expressions in the degenerate limit in Eqs. (\ref{muav2})-(\ref{T2}) at the indicated proton fractions.} \label{APR_MUTHvn_Isent} \end{figure} Thermal variables at constant entropy, that is, isentropes, often provide valuable guidance to the hydrodynamic evolution of a system, as in ideal hydrodynamics (i.e., without viscous terms) the entropy density current is conserved. Ever since Bethe et al. \cite{BBAL79} pointed out that the entropy in supernova evolution is low, a great deal of qualitative understanding has been gained by studying isentropes for the various thermodynamical variables. In view of this, we present some isentropes in what follows. Isentropes of the APR model in the $T$-$n$ plane are shown in Fig. \ref{APR_Tvn_Isent}. The crosses in this figure show results from the degenerate limit expression \begin{equation} T = \frac{S}{2[a_nY_n+(a_p+a_e)Y_p]} \label{T1} \end{equation} with excellent agreement for $S\le1$. The level density parameters $a_n$ and $a_p$ above are as in Eq. (\ref{levelden}), whereas that for the electrons is $a_e = (\pi^2/2)(E_{Fe}/k_{Fe}^2)$ as electrons are relativistic for near nuclear and supra-nuclear densities. We have verified that a similarly excellent agreement is obtained for the Ska model (results not shown). Isentropes of the APR model in the $P_{th}$-$n$ plane are shown in Fig. \ref{APR_PTHvn_Isent} in which the exact numerical results are compared with those in the degenerate limit \cite{pr280}: \begin{equation} P_{th} = \frac{2n}{3\pi^2}~S^2~\frac{\sum_i \frac{Y_i}{T_{Fi}}Q_i} {\left(\sum_i\frac{Y_i}{T_{Fi}}\right)^2} ~;~~~ i=n,p,e. \label{PS} \end{equation} We observe nearly identical results for $S\le2$. For nucleons, $Q_i$ are those from Eq. (\ref{qi}). For electrons, $Q_e = 1/2$ and $T_{Fe} = k_{Fe}^2/(2E_{Fe})=\pi^2/(4a_e)$. Isentropes of the APR model in the $\mu_{th}$-$n$ plane are shown in Fig. \ref{APR_MUTHvn_Isent}.
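For orientation, the degenerate-limit expressions in Eqs. (\ref{T1}) and (\ref{PS}) can be evaluated with a few lines of code. The sketch below is our own illustration (the interface, the value $\hbar c = 197.327$ MeV fm, and the massless-electron simplification with $n_e=n_p$ are assumptions; the $Q_{n,p}$ factors of Eq. (\ref{qi}) must be supplied by the EOS model, and $0<x<1$ is assumed):
\begin{verbatim}
import numpy as np

HBARC = 197.327  # MeV fm (assumed value of hbar*c)

def isentrope_deg(S, n, x, mstar_n, mstar_p, Qn, Qp):
    """Degenerate limit: T from Eq. (T1) and P_th from Eq. (PS).

    S : entropy per baryon; n [fm^-3]; x : proton fraction
    mstar_n, mstar_p : nucleon effective masses [MeV]
    Qn, Qp : Q_i factors of Eq. (qi); Q_e = 1/2 for electrons
    """
    Yn, Yp = 1.0 - x, x
    pF = lambda ni: HBARC * (3.0 * np.pi**2 * ni)**(1.0 / 3.0)  # [MeV]
    pFn, pFp = pF(n * Yn), pF(n * Yp)
    EFe = pF(n * Yp)                          # massless electrons, n_e = n_p
    a_n = 0.5 * np.pi**2 * mstar_n / pFn**2   # level-density parameters [1/MeV]
    a_p = 0.5 * np.pi**2 * mstar_p / pFp**2
    a_e = 0.5 * np.pi**2 / EFe
    T = S / (2.0 * (a_n * Yn + (a_p + a_e) * Yp))          # Eq. (T1)
    TFn = pFn**2 / (2.0 * mstar_n)            # Fermi temperatures [MeV]
    TFp = pFp**2 / (2.0 * mstar_p)
    TFe = 0.5 * EFe                           # k_Fe^2/(2 E_Fe) for m_e -> 0
    num = Yn * Qn / TFn + Yp * Qp / TFp + Yp * 0.5 / TFe
    den = Yn / TFn + Yp / TFp + Yp / TFe
    Pth = 2.0 * n * S**2 * num / (3.0 * np.pi**2 * den**2)  # Eq. (PS)
    return T, Pth
\end{verbatim}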
To compare the exact results with those from the degenerate limit results, it was necessary to expand the expressions for the entropy and the nucleon thermal chemical potentials to ${\cal O}(T^3)$ and ${\cal O}(T^4)$ respectively: \begin{eqnarray} S &=& 2T\sum_{i=n,p,e}a_iY_i -\frac{16T^3}{5\pi^2}\sum_{i=n,p,e}a_i^3Y_i \label{S2nd}\\ \mu_{i=n,p} &=& -T^2\left[\frac{a_i}{3}+\frac{a_in_i}{m_i^*}\frac{dm_i^*}{dn_i} +\frac{a_jn_j}{m_j^*}\frac{dm_j^*}{dn_i}\right] \nonumber \\ &+& \frac{4T^4}{5\pi^2}\left[-a_i^3 +\frac{3a_i^3n_i}{m_i^*}\frac{dm_i^*}{dn_i} +\frac{3a_j^3n_j}{m_j^*}\frac{dm_j^*}{dn_i}\right] ~~;~~ i \ne j \nonumber \\ \label{mi2nd} \end{eqnarray} Then, the average thermal chemical potential is given by \begin{eqnarray} \mu_{av,th} &=& -T^2\left[\frac{Y_na_n}{3}\left(1+\frac{3n}{m_n^*}\frac{dm_n^*}{dn}\right)\right. \nonumber \\ &+& \left. \frac{Y_pa_p}{3}\left(1+\frac{3n}{m_p^*}\frac{dm_p^*}{dn}\right) +\frac{2a_eY_p}{3}\right] \nonumber \\ &+& \frac{4T^4}{5\pi^2} \left[-Y_na_n^3\left(1-\frac{3n}{m_n^*}\frac{dm_n^*}{dn}\right)\right. \nonumber \\ &-& \left. Y_pa_p^3\left(1-\frac{3n}{m_p^*}\frac{dm_p^*}{dn}\right)\right]. \label{muav2} \end{eqnarray} The temperature used in the above expression is obtained from Eq. (\ref{S2nd}) by perturbative inversion: \begin{equation} T = \frac{S}{2\sum a_iY_i}\left[1+\frac{2S^2}{5\pi^2}\frac{\sum a_i^3Y_i}{(\sum a_iY_i)^3}\right] ~~;~~ i=n,p,e \label{T2} \end{equation} At this level of approximation (made necessary by the weak density dependence of the chemical potential in the degenerate limit), we get fairly good consistency between the exact and the approximate results for $S\le1$. \section{Conclusions} Our primary objective in this work has been to build an equation of state of supernova matter in the bulk homogeneous phase based on the zero-temperature APR Hamiltonian density which has been devised to reproduce the results of the microscopic potential model calculations of Akmal and Pandharipande for nucleonic matter with varying isospin asymmetry. One of the main features of the APR model is that it incorporates a neutral pion condensate at supra-nuclear densities found in the calculations of AP for all values of proton fraction. Consequently, its high density behavior is somewhat soft in its pressure variation, yet it is able to support a neutron star in excess of 2 M$_{\odot}$ required by recent observations. Our principal contribution in this work is the extension of the APR model to finite temperature for use in numerical simulations of core-collapse supernovae. In order to provide a contrast, we have also calculated the finite temperature properties of a model (termed Ska) using an energy density functional stemming from Skyrme effective forces. The methods developed in this work are applicable and easily adapted to investigate thermal properties of other Skyrme-like energy density functionals. We have studied the behavior of the state variables energy $E$, pressure $P$, the neutron and proton chemical potentials $\mu_n$ and $\mu_p$, entropy per baryon $S$, the free energy $F$, and the response functions such as the compressibility $K$, the inverse susceptibilities $\chi_{ij}$, and specific heats $C_V$ and $C_P$ of the APR and the Ska models as functions of the temperature $T$, the baryon density $n$, and the proton-to-baryon fraction $x$. The two EOS's are quantitatively similar for densities up to $\sim$1.5 $n_0$, but differ significantly at higher densities. 
The cross susceptibilities $\chi_{np}$, $\chi_{pn}$ and the ratio $P_c/(n_cT_c)$ evaluated at the critical density $n_c$ of the liquid-gas phase transition are the only exceptions to the above general observation. We have also calculated several properties of isospin-symmetric matter at the saturation density and compared with experimental results, although the latter, in some cases, are associated with large uncertainties. Considerable attention has been paid to the symmetry energy $S_2$ as a function of the density and the temperature. Our results reveal a weak dependence on the temperature, which leads to the conclusion that $S_2$ is determined mainly by the density dependent effective mass. It is also evident herein that, in the case of matter with a phase transition, the quantities $S_2$ and $F_{\rm sym}=F(x=0)-F(x=1/2)$ are fundamentally different. We also find that the density jump across the coexistence region of the LDP to the HDP transition of the APR model depends weakly on the temperature, the proton fraction, and the leptonic contributions. That thermal effects are, in general, less pronounced in degenerate matter is expected as this is the regime where $T/T_F \ll 1$; i.e., temperature effects are overwhelmed by density effects. However, when looking at the thermal part of any given thermodynamic quantity, the aforementioned density effects are entirely determined by the effective masses. As we have seen, the density dependence of the effective mass for nucleons interacting via Skyrme or Skyrme-like forces is responsible for several degenerate limit effects not encountered in a free gas. In particular, as a function of density the thermal pressure $P_{th}$ flattens (whereas in a free gas it increases monotonically), $\mu_{i,th}$ become positive (strictly negative in the free case), and $S_{2,th}$ becomes negative (always positive for a free gas). In the results of Eqs. (\ref{pdeg}), (\ref{mudeg}), and (\ref{s2deg}), terms involving the derivatives of the effective mass with respect to the density encode effects of momentum-dependent interactions and modify the expressions from their free-gas forms. The role of the effective mass in the non-degenerate limit, although present, is minimal for most of the state variables. Intriguingly, our results indicate that, for the temperatures (up to 30 MeV) and proton fractions (0.38-0.42) of most relevance to supernova evolution, densities in the vicinity of the nuclear saturation density can be considered neither degenerate nor non-degenerate. The quantitative results presented in this work (particularly, the neutron and proton chemical potentials) can be used to advantage to determine the rates of electroweak reactions such as electron capture and neutrino-matter interactions in hot dense matter. Based on the APR model, work on the inhomogeneous phase at subnuclear densities, where nuclei and light nuclear clusters as well as pasta-like configurations coexist with leptons and nucleons, is in progress and will be reported in a separate work. \section*{ACKNOWLEDGEMENTS} We have benefitted greatly from the unpublished Ph.D. thesis of Matthew Carmell from Stony Brook University. Computational help from Kenneth Moore at Ohio University during the initial stages of this work is gratefully acknowledged. This work was supported in part by the US DOE under Grants No. DE-FG02-87ER-40317 (for C.C. and J.M.L.) and No. DE-FG02-93ER-40756 (for B.M. and M.P.).
\section{SINGLE-PARTICLE SPECTRA} Here we provide a derivation of the expression in Eq. (\ref{spectra}) which is a direct consequence of the fact that the expectation value of the Hamiltonian is stationary with respect to variations of its eigenstates~\cite{atomic, vb}: \begin{eqnarray} \frac{\delta}{\delta\phi_k}\left(E-\sum_k \epsilon_k \int |\phi_k(\vec r)|^2d^3r\right) & = & 0, \label{variation} \end{eqnarray} where $\epsilon_k$ is the eigenvalue corresponding to the eigenstate $\phi_k$, $E=\langle H\rangle$ and $k$ is the set of all relevant quantum numbers. For a many-body Hamiltonian, $\phi_k$ are the single particle states making up the Slater determinant, and therefore the set of all $\epsilon_k$ is the single-particle energy spectrum of the Hamiltonian. Consider now a nucleonic Hamiltonian density \\ $\mathcal{H} = \mathcal{H}(\tau_i,n_i)$, where \begin{eqnarray} \tau_i(\vec r) & = & \sum_{k,s} |\nabla\phi_k(\vec r,s,i)|^2 \\ n_i(\vec r) & = & \sum_{k,s} |\phi_k(\vec r,s,i)|^2 \end{eqnarray} are the kinetic energy density and the number density, respectively, of the nucleon species with isospin $i$. The variation of the number density with respect to $\phi$ is \begin{eqnarray} \delta n_i & = & \sum_{k,s}[\delta\phi^*(\vec r,s,i)\phi(\vec r,s,i)+\phi^*(\vec r,s,i)\delta\phi(\vec r,s,i)]. \nonumber \\ \label{delni} \end{eqnarray} Imposing time-reversal invariance leads to \begin{eqnarray} \phi(\vec r,s,i) & = & i^{2s}\phi^*(\vec r,-s,i) \\ \mbox{and}~~~ \delta\phi(\vec r,s,i) & = & i^{2s}\delta\phi^*(\vec r,-s,i). \label{delphi} \end{eqnarray} Therefore, \begin{eqnarray} \delta n_i & = & \sum_{k,s}[\delta\phi^*\phi+(-1)\phi(-s)\times(-1)\delta\phi^*(-s)] \nonumber \\ & = & \sum_{k,s}[\delta\phi^*\phi+\delta\phi^*(-s)\phi(-s)] \nonumber \\ & = & 2\sum_{k,s}\delta\phi^*\phi, \end{eqnarray} as the sum is over all spins. Similarly, \begin{eqnarray} \delta\tau_i & = & 2\sum_{k,s}\nabla\delta\phi_k^*\cdot\nabla\phi_k. \end{eqnarray} Furthermore, \begin{eqnarray} E & = & \sum_i\int d^3r~\mathcal{H}(\tau_i,n_i). \end{eqnarray} Combining this with (\ref{delni}) and (\ref{delphi}) implies \begin{eqnarray} \delta E & = & \sum_i\int d^3r \left[\frac{\partial \mathcal{H}}{\partial \tau_i}\delta\tau_i + \frac{\partial \mathcal{H}}{\partial n_i}\delta n_i\right] \nonumber \\ & = & \int d^3r \sum_i\left[\frac{\partial \mathcal{H}}{\partial \tau_i} (2\sum_{k,s}\nabla\delta\phi_k^*\cdot\nabla\phi_k) + \frac{\partial \mathcal{H}}{\partial n_i}(2\sum_{k,s}\delta\phi^*_k\phi_k)\right] \nonumber \\ & = & \int d^3r \sum_{k,s}\left[2\delta\phi_k^* \sum_i\left(-\nabla \frac{\partial \mathcal{H}}{\partial \tau_i}\nabla +\frac{\partial \mathcal{H}}{\partial n_i}\right)\phi_k\right]. \label{dele} \end{eqnarray} The minus sign is a consequence of the anti-hermiticity of the $\nabla$ operator: $\langle\nabla\phi| = \langle\phi|\nabla^{\dag} = \langle\phi|(-\nabla)$.
\noindent Finally, by inserting (\ref{dele}) into (\ref{variation}) we get \begin{eqnarray} 0 & = & \int d^3r \sum_{k,s}2\delta\phi_k^*\left[\sum_i\left( -\nabla\frac{\partial\mathcal{H}}{\partial\tau_i}\nabla +\frac{\partial\mathcal{H}}{\partial n_i}\right)\right]\phi_k \nonumber \\ && \hspace{40pt} -\int d^3r \sum_{k,s}2\delta\phi_k^*\epsilon_k\phi_k \nonumber \\ & = & \int d^3r \sum_{k,s}2\delta\phi_k^*\left[\sum_i\left( -\nabla\frac{\partial\mathcal{H}}{\partial\tau_i}\nabla+\frac{\partial\mathcal{H}}{\partial n_i}\right) -\epsilon_k\right]\phi_k \nonumber \\ & \Rightarrow & \sum_i\left( -\nabla\frac{\partial\mathcal{H}}{\partial\tau_i}\nabla+\frac{\partial\mathcal{H}}{\partial n_i}\right) -\epsilon_k\ = 0 \nonumber \\ & \Rightarrow & -\nabla\frac{\partial\mathcal{H}}{\partial\tau_i}\nabla+\frac{\partial\mathcal{H}}{\partial n_i} -\epsilon_{ki}\ = 0 . \end{eqnarray} Thus in momentum space, \begin{equation} k_i^2\frac{\partial\mathcal{H}}{\partial\tau_i}+\frac{\partial\mathcal{H}}{\partial n_i}= \epsilon_{ki}. \end{equation} \section{APR STATE VARIABLES} In this appendix we summarize results pertaining to the zero temperature state variables of APR. Combining the density-dependent parts (see below) of these with the appropriate thermal expressions from sections VI and VII yields the corresponding expressions at finite temperature. It is convenient to write $\mathcal{H}_{APR}$ as the sum of a kinetic part $\mathcal{H}_k$, a part consisting of the momentum-dependent interactions $\mathcal{H}_m$, and a density-dependent interactions part $\mathcal{H}_d$: \begin{equation} \mathcal{H}_{APR} = \mathcal{H}_k + \mathcal{H}_m + \mathcal{H}_d \end{equation} where \begin{eqnarray} \mathcal{H}_k &=& \frac{\hbar^2}{2m}(\tau_n+\tau_p) \\ \mathcal{H}_m &=& (p_3+(1-x)p_5)ne^{-p_4 n}\tau_n \nonumber \\ &+& (p_3+xp_5)ne^{-p_4 n}\tau_p \\ \mathcal{H}_d &=& g_1(n)[1-(1-2x)^2]+g_2(n)(1-2x)^2 \end{eqnarray} Furthermore, the following quantities are necessary: \begin{eqnarray} \delta g_1 &=& g_{1H}-g_{1L} \nonumber \\ &=& -n^2\left[p_{17}(n-p_{19})+p_{21}(n-p_{19})^2 \right] e^{p_{18}(n-p_{19})} \nonumber \\ \\ \delta g_2 &=& g_{2H}-g_{2L} \nonumber \\ &=& -n^2\left[p_{15}(n-p_{20})+p_{14}(n-p_{20})^2 \right] e^{p_{16}(n-p_{20})} \nonumber \\ \\ f_{1L} &=& \frac{dg_{1L}}{dn} -\frac{2g_{1L}}{n} \nonumber \\ &=& -n^2\left[p_2+2p_6n \right. \nonumber \\ && \left. ~~~~~+(p_{11}-2p_9^2p_{10}n-2p_9^2p_{11}n^2)e^{-p_9^2n^2}\right] \\ f_{1H} &=& f_{1L}+\delta f_1 \\ \delta f_1 &=& \left[2p_{19}(p_{17}-p_{19}p_{21})n \right. \nonumber \\ && +\left\{3(2p_{19}p_{21}-p_{17})+p_{18}p_{19}(p_{17}-p_{19}p_{21})\right\}n^2 \nonumber \\ && +\left\{p_{18}(2p_{19}p_{21}-p_{17})-4p_{21}\right\}n^3 \nonumber \\ && \left. -p_{18}p_{21}n^4\right]e^{p_{18}(n-p_{19})} \end{eqnarray} \begin{eqnarray} h_{1L} &=& \frac{df_{1L}}{dn}-\frac{2f_{1L}}{n} \nonumber \\ &=& -n^2\left[2p_6-2p_9^2(p_{10}+3p_{11}n \right. \nonumber \\ && \left. ~~~~~~~~~~~~~~~~~~~~-2p_9^2p_{10}n^2-2p_9^2p_{11}n^3)e^{-p_9^2n^2}\right] \nonumber \\ \\ h_{1H} &=& h_{1L}+\delta h_1 \\ \delta h_1 &=& \left[2p_{19}(p_{17}-p_{19}p_{21}) \right. \nonumber \\ && +\left\{6(2p_{19}p_{21}-p_{17})+4p_{18}p_{19}(p_{17}-p_{19}p_{21})\right\}n \nonumber \\ && +\left\{6p_{18}(2p_{19}p_{21}-p_{17}) \right. \nonumber \\ && \left.~~+p_{18}^2p_{19}(p_{17}-p_{19}p_{21})-12p_{21}\right\}n^2 \nonumber \\ && +\left\{p_{18}^2(2p_{19}p_{21}-p_{17})-8p_{18}p_{21}\right\}n^3 \nonumber \\ && \left.
-p_{18}^2p_{21}n^4\right]e^{p_{18}(n-p_{19})} \end{eqnarray} \begin{eqnarray} w_{1L} &=& \frac{dh_{1L}}{dn}-\frac{2h_{1L}}{n} \nonumber \\ &=& -n^2\left(-3p_{11}+6p_9^2p_{10}n+12p_9^2p_{11}n^2 \right. \nonumber \\ && \left. -4p_9^4p_{10}n^3-4p_9^4p_{11}n^4\right)2p_9^2e^{-p_9^2n^2} \\ w_{1H} &=& w_{1L}-\delta w_1 \\ \delta w_1 &=& \left[6\left\{(2p_{19}p_{21}-p_{17})+p_{18}p_{19}(p_{17}-p_{19}p_{21})\right\} \right. \nonumber \\ && +\left\{18p_{18}(2p_{19}p_{21}-p_{17}) \right. \nonumber \\ && ~~\left.+6p_{18}^2p_{19}(p_{17}-p_{19}p_{21})-24p_{21}\right\}n \nonumber \\ && +\left\{9p_{18}^2(2p_{19}p_{21}-p_{17}) \right. \nonumber \\ && ~~\left. +p_{18}^3p_{19}(p_{17}-p_{19}p_{21})-36p_{18}p_{21}\right\}n^2 \nonumber \\ && +\left\{p_{18}^3(2p_{19}p_{21}-p_{17})-12p_{18}^2p_{21}\right\}n^3 \nonumber \\ && \left. -p_{18}^3p_{21}n^4\right]e^{p_{18}(n-p_{19})} \end{eqnarray} \begin{eqnarray} f_{2L} &=& \frac{dg_{2L}}{dn} -\frac{2g_{2L}}{n} \nonumber \\ &=& -n^2\left(-\frac{p_{12}}{n^2}+p_8-2p_9^2p_{13}ne^{-p_9^2n^2}\right) \\ f_{2H} &=& f_{2L}+\delta f_2 \\ \delta f_2 &=& \left[2p_{20}(p_{15}-p_{20}p_{14})n \right. \nonumber \\ && +\left\{3(2p_{20}p_{14}-p_{15})+p_{16}p_{20}(p_{15}-p_{20}p_{14})\right\}n^2 \nonumber \\ && +\left\{p_{16}(2p_{20}p_{14}-p_{15})-4p_{14}\right\}n^3 \nonumber \\ && \left. -p_{16}p_{14}n^4\right]e^{p_{16}(n-p_{20})} \end{eqnarray} \begin{eqnarray} h_{2L} &=& \frac{df_{2L}}{dn} -\frac{2f_{2L}}{n} \nonumber \\ &=& -n^2\left[\frac{2p_{12}}{n^3}-2p_9^2p_{13}(1-2p_9^2n^2)e^{-p_9^2n^2}\right] \\ h_{2H} &=& h_{2L}+\delta h_2 \\ \delta h_2 &=& \left[2p_{20}(p_{15}-p_{20}p_{14}) \right. \nonumber \\ && +\left\{6(2p_{20}p_{14}-p_{15})+4p_{16}p_{20}(p_{15}-p_{20}p_{14})\right\}n \nonumber \\ && +\left\{6p_{16}(2p_{20}p_{14}-p_{15}) \right. \nonumber \\ && \left.~~+p_{16}^2p_{20}(p_{15}-p_{20}p_{14})-12p_{14}\right\}n^2 \nonumber \\ && +\left\{p_{16}^2(2p_{20}p_{14}-p_{15})-8p_{16}p_{14}\right\}n^3 \nonumber \\ && \left. -p_{16}^2p_{14}n^4\right]e^{p_{16}(n-p_{20})} \end{eqnarray} \begin{eqnarray} w_{2L} &=& \frac{dh_{2L}}{dn} -\frac{2h_{2L}}{n} \nonumber \\ &=& -n^2\left[-\frac{6p_{12}}{n^4}+4p_9^4p_{13}(1+n-2p_9^2n^2)e^{-p_9^2n^2}\right] \nonumber \\ \\ w_{2H} &=& w_{2L}+\delta w_2 \\ \delta w_2 &=& \left[6\left\{(2p_{20}p_{14}-p_{15})+p_{16}p_{20}(p_{15}-p_{20}p_{14})\right\} \right. \nonumber \\ && +\left\{18p_{16}(2p_{20}p_{14}-p_{15}) \right. \nonumber \\ && ~~\left.+6p_{16}^2p_{20}(p_{15}-p_{20}p_{14})-24p_{14}\right\}n \nonumber \\ && +\left\{9p_{16}^2(2p_{20}p_{14}-p_{15}) \right. \nonumber \\ && ~~\left. +p_{16}^3p_{20}(p_{15}-p_{20}p_{14})-36p_{16}p_{14}\right\}n^2 \nonumber \\ && +\left\{p_{16}^3(2p_{20}p_{14}-p_{15})-12p_{16}^2p_{14}\right\}n^3 \nonumber \\ && \left. -p_{16}^3p_{14}n^4\right]e^{p_{16}(n-p_{20})} \end{eqnarray} The subscripts $L$ and $H$ denote the low-density and the high-density phase, respectively. \\ Expressions for the state variables are collected below. \subsection*{Energy per particle} \begin{eqnarray} \frac{E}{A} &=& \frac{E_k}{A} + \frac{E_m}{A} + \frac{E_d}{A} = \frac{\mathcal{H}}{n} \label{EA1} \\ \frac{E_k}{A} &=& \frac{(3\pi^2)^{5/3}}{5\pi^2}\frac{\hbar^2}{2m}n^{2/3}[(1-x)^{5/3}+x^{5/3}] \\ \nonumber \\ \frac{E_m}{A} &=& \frac{(3\pi^2)^{5/3}}{5\pi^2}\left\{p_3[(1-x)^{5/3}+x^{5/3}] \right.
\nonumber \\ && ~~~~~~~~~~~~\left.+p_5[(1-x)^{8/3}+x^{8/3}]\right\}n^{5/3}e^{-p_4n} \nonumber \\ \\ \frac{E_d}{A} &=& \frac{1}{n}\left\{g_1[1-(1-2x)^2]+g_2(1-2x)^2\right\} \label{EA2} \end{eqnarray} \subsection*{Pressure} \begin{eqnarray} P &=& P_k+P_m+P_d = n^2\frac{\partial \mathcal{H}/n}{\partial n} \label{P1}\\ P_k &=& \frac{2}{3}n\frac{E_k}{A} \\ P_m &=& \left(\frac{5}{3}-p_4n\right)n\frac{E_m}{A} \\ P_{dL} &=& n\left\{\frac{E_{dL}}{A}+f_{1L}[1-(1-2x)^2] \right. \nonumber \\ && ~~~~\left.+f_{2L}(1-2x)^2\right\} \\ P_{dH} &=& P_{dL} + (-\delta g_1 + n \delta f_1)[1-(1-2x)^2] \nonumber \\ && + (-\delta g_2 + n \delta f_2)(1-2x)^2 \label{P2} \end{eqnarray} \subsection*{Incompressibility} \begin{eqnarray} K &=& K_k + K_m + K_d = 9\frac{\partial P}{\partial n} \\ K_k &=& 10\frac{E_k}{A} \\ K_m &=& (40-48p_4n+9p_4^2n^2)\frac{E_m}{A} \\ K_{dL} &=& 18\frac{E_d}{A} + 9\left\{(4f_1+nh_1)[1-(1-2x)^2]\right. \nonumber \\ && ~~~~~~~~~~~~~~ \left. +(4f_2+nh_2)(1-2x)^2\right\} \\ K_{dH} &=& K_{dL} + 9n\left(\delta h_1 [1-(1-2x)^2] \right. \nonumber \\ && ~~~~~~~~~~~~~\left.+ \delta h_2 (1-2x)^2\right) \end{eqnarray} \subsection*{Second derivative of pressure with respect to density} \begin{eqnarray} \frac{d^2P}{dn^2} &=& \frac{d^2P_k}{dn^2}+\frac{d^2P_m}{dn^2}+\frac{d^2P_d}{dn^2} \\ \frac{d^2P_k}{dn^2} &=& \frac{20}{27}\frac{1}{n}\frac{E_k}{A} \\ \frac{d^2P_m}{dn^2} &=& \left(\frac{200}{27}-\frac{56}{3}p_4n+9p_4^2n^2-p_4^3n^3\right)\frac{1}{n} \frac{E_m}{A} \\ \frac{d^2P_{dL}}{dn^2} &=& \frac{2}{n}\frac{E_{dL}}{A}+\left(\frac{10f_{1L}}{n}+7h_{1L}+nw_{1L}\right) [1-(1-2x)^2] \nonumber \\ && +\left(\frac{10f_{2L}}{n}+7h_{2L}+nw_{2L}\right)(1-2x)^2 \\ \frac{d^2P_{dH}}{dn^2} &=&\frac{d^2P_{dL}}{dn^2} +(\delta h_1 + n\delta w_1)[1-(1-2x)^2] \nonumber \\ && +(\delta h_2 + n\delta w_2)(1-2x)^2 \end{eqnarray} \subsection*{Symmetry energy} \begin{eqnarray} S_2 &=&S_{2k}+S_{2m}+S_{2d}=\frac{1}{8}\left.\frac{\partial^2\mathcal{H}/n}{\partial x^2}\right|_{x=1/2}\\ S_{2k} &=& \frac{10}{9}\frac{1}{2^{5/3}}\frac{(3\pi^2)^{5/3}}{5\pi^2}\frac{\hbar^2}{2m}n^{2/3} \\ S_{2m} &=& \frac{10}{9}\frac{1}{2^{5/3}}\frac{(3\pi^2)^{5/3}}{5\pi^2}n^{5/3} e^{-p_4n}(p_3+2p_5) \nonumber \\ \\ S_{2d} &=& \frac{1}{n}(-g_1+g_2) \end{eqnarray} \subsection*{First derivative of symmetry energy with respect to density} \begin{eqnarray} \frac{dS_2}{dn} &=& \frac{dS_{2k}}{dn}+\frac{dS_{2m}}{dn}+\frac{dS_{2d}}{dn} \\ \frac{dS_{2k}}{dn} &=& \frac{2}{3}\frac{S_{2k}}{n} \\ \frac{dS_{2m}}{dn} &=& \frac{S_{2m}}{n}\left(\frac{5}{3}-p_4n\right) \\ \frac{dS_{2dL}}{dn} &=& \frac{S_{2dL}}{n}+\frac{1}{n}(-f_{1L}+f_{2L}) \\ \frac{dS_{2dH}}{dn} &=& \frac{dS_{2dL}}{dn}+\frac{1}{n^2}(\delta g_1 - \delta g_2) \nonumber \\ && -\frac{1}{n}(\delta f_1 - \delta f_2) \end{eqnarray} \subsection*{Second derivative of symmetry energy with respect to density} \begin{eqnarray} \frac{d^2S_2}{dn^2} &=& \frac{d^2S_{2k}}{dn^2}+\frac{d^2S_{2m}}{dn^2}+\frac{d^2S_{2d}}{dn^2} \\ \frac{d^2S_{2k}}{dn^2} &=& -\frac{2}{9}\frac{S_{2k}}{n^2} \\ \frac{d^2S_{2m}}{dn^2} &=& \frac{S_{2m}}{n^2}\left(\frac{10}{9}-\frac{10}{3}p_4n+p_4^2n^2\right) \\ \frac{d^2S_{2dL}}{dn^2} &=& \frac{1}{n^2}(-2f_{1L}+2f_{2L}-nh_{1L}+nh_{2L}) \\ \frac{d^2S_{2dH}}{dn^2} &=& \frac{d^2S_{2dL}}{dn^2} -\frac{2}{n^3}(\delta g_1 - \delta g_2) \nonumber \\ && +\frac{2}{n^2}(\delta f_1 - \delta f_2) - \frac{1}{n}(\delta h_1 - \delta h_2) \end{eqnarray} \subsection*{Chemical potentials} \begin{eqnarray} \mu_i &=& \mu_{ik}+\mu_{im}+\mu_{id} = \frac{\partial \mathcal{H}}{\partial n_i} \label{MU1} \\
\mu_{ik} &=& \frac{5}{3}\frac{(3\pi^2)^{5/3}}{5\pi^2}\frac{\hbar^2}{2m}n_i^{2/3} \\ \mu_{im} &=& \frac{(3\pi^2)^{5/3}}{5\pi^2}e^{-p_4n} \nonumber \\ &*& \left\{p_5\left[\frac{8}{3}n_i^{5/3}-p_4\left(n_i^{8/3}+n_j^{8/3}\right)\right] \right. \nonumber \\ && \hspace{5pt} +p_3\left[\frac{8}{3}n_i^{5/3}+\frac{5}{3}n_i^{2/3}n_j+n_j^{5/3}\right. \nonumber \\ && \left.\left. \hspace{23pt}-p_4\left(n_i^{8/3}+n_i^{5/3}n_j+n_in_j^{5/3}+n_j^{8/3}\right)\right]\right\} \nonumber \\ \\ \mu_{idL} &=& \frac{1}{n^2}\left[4n_jg_{1L}+4n_in_jf_{1L} \right. \nonumber \\ &&~~~~~\left. +2(n_i-n_j)g_{2L}+(n_i-n_j)^2f_{2L}\right] \\ \mu_{idH} &=& \mu_{idL}-\frac{4}{n^3}n_j(n_i-n_j)(\delta g_1 - \delta g_2) \nonumber \\ && +\frac{1}{n^2}[4n_in_j\delta f_1 + (n_i-n_j)^2 \delta f_2] \label{MU2} \end{eqnarray} \subsection*{Inverse susceptibilities} \begin{eqnarray} \chi_{ii} &=& \chi_{iik}+\chi_{iim}+\chi_{iid}=\frac{\partial \mu_i}{\partial n_i} \label{CHI1} \\ \chi_{iik} &=& \frac{2}{3}\frac{\mu_{ik}}{n_i} \\ \chi_{iim} &=& -p_4\mu_{im} + \frac{(3\pi^2)^{5/3}}{5\pi^2}e^{-p_4n} \nonumber \\ &*& \left\{ p_5\left[\frac{40}{9}n_i^{2/3}-\frac{8}{3}p_4n_i^{5/3}\right] \right.\nonumber \\ && +p_3\left[\frac{40}{9}n_i^{2/3}+\frac{10}{9}n_i^{-1/3}n_j \right. \nonumber \\ && \left.\left. \hspace{13pt}-p_4\left(\frac{8}{3}n_i^{5/3}+\frac{5}{3}n_i^{2/3}n_j+n_j^{5/3}\right)\right]\right\} \\ \chi_{iidL} &=& \frac{1}{n^2}\left[8n_jf_{1L}+4n_in_jh_{1L} \right. \nonumber \\ && ~~~~~ \left.+4(n_i-n_j)f_{2L}+(n_i-n_j)^2h_{2L}\right] \\ \chi_{iidH} &=& \chi_{iidL} + \frac{8}{n^4}n_j(n_i-2n_j)(\delta g_1-\delta g_2) \nonumber \\ && - \frac{8}{n^3}n_j(n_i-n_j)(\delta f_1 - \delta f_2) \nonumber \\ && +\frac{4n_in_j}{n^2}\delta h_1 + \frac{(n_i-n_j)^2}{n^2}\delta h_2 \\ \chi_{ij} &=& \chi_{ijk}+\chi_{ijm}+\chi_{ijd}=\frac{\partial \mu_i}{\partial n_j} \\ \chi_{ijk} &=& 0 \\ \chi_{ijm} &=& -p_4\mu_{im} + \frac{(3\pi^2)^{5/3}}{5\pi^2}e^{-p_4n} \nonumber \\ &*& \left\{ -\frac{8}{3}p_4p_5n_j^{5/3} \right. \nonumber \\ && +p_3\left[\frac{5}{3}n_i^{2/3}+\frac{5}{3}n_j^{2/3} \right. \nonumber \\ &&\left.\left.\hspace{13pt}-p_4\left(n_i^{5/3}+\frac{5}{3}n_i^{2/3}n_j +\frac{8}{3}n_j^{5/3}\right)\right]\right\} \\ \chi_{ijdL} &=& \frac{1}{n^2}\left[4g_{1L}+4nf_{1L}+4n_in_jh_{1L} \right. \nonumber \\ && ~~~~~ \left. -2g_{2L}+(n_i-n_j)^2h_{2L}\right] \\ \chi_{ijdH} &=& \chi_{ijdL}-\frac{4}{n^4}[(n_i-n_j)^2-2n_in_j](\delta g_1 - \delta g_2) \nonumber \\ && +\frac{4}{n^3}(n_i-n_j)^2(\delta f_1 - \delta f_2) \nonumber \\ && +\frac{4n_in_j}{n^2}\delta h_1 + \frac{(n_i-n_j)^2}{n^2}\delta h_2 \label{CHI2} \end{eqnarray} \subsection*{Speed of Sound} \begin{eqnarray} \left(\frac{c_s}{c}\right)^2 &=& \frac{dP}{d\varepsilon} \\ &=& \frac{1}{(1-x)\mu_n+x\mu_p+m}\frac{K}{9} \\ &=& \frac{n}{(1-x)\mu_n+x\mu_p+m} \\ &*&\left[\chi_{nn}(1-x)^2+x(1-x)(\chi_{np}+\chi_{pn}) + \chi_{pp}x^2\right] \nonumber \end{eqnarray} Here, $\varepsilon$ includes the nucleon rest mass. 
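The susceptibility form of the squared sound speed above lends itself to a quick consistency check of any tabulated EOS. The following minimal sketch is our own illustration (the interface is assumed; densities in fm$^{-3}$, energies in MeV, and $\chi_{ij}$ in MeV fm$^3$):
\begin{verbatim}
def sound_speed_sq(n, x, mu_n, mu_p,
                   chi_nn, chi_np, chi_pn, chi_pp, m=939.0):
    """(c_s/c)^2 from the susceptibility form of the speed of sound.

    mu_n, mu_p : nucleon chemical potentials [MeV], rest mass excluded
    chi_ij     : inverse susceptibilities d(mu_i)/d(n_j) [MeV fm^3]
    m          : nucleon rest mass [MeV] (assumed equal for n and p)
    """
    enthalpy = (1.0 - x) * mu_n + x * mu_p + m   # per baryon [MeV]
    bracket = (chi_nn * (1.0 - x)**2
               + x * (1.0 - x) * (chi_np + chi_pn)
               + chi_pp * x**2)
    return n * bracket / enthalpy
\end{verbatim}
Causality requires the returned value to lie between 0 and 1; a violation flags either a numerical problem or a breakdown of the model at high density.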
\subsection*{Landau effective mass} \begin{equation} m_i^* = \left[\frac{1}{m}+\frac{2}{\hbar^2}\left(np_3+n_ip_5\right)e^{-p_4n}\right]^{-1} \end{equation} \subsection*{Derivatives of $m_i^*$ with respect to $n$, $x$, $n_i$, and $n_j$} \begin{eqnarray} \frac{dm_i^*}{dn} &=& -\frac{m_i^*}{n}\left(1-\frac{m_i^*}{m}\right)(1-np_4) \\ \frac{dm_i^*}{dx} &=& \pm_{(p)}^{(n)}\frac{2}{\hbar^2}p_5m_i^{*2}ne^{-p_4n} \\ \frac{dm_i^*}{dn_i} &=&-\frac{2}{\hbar^2}m_i^{*2}\left[p_3(1-np_4)+p_5(1-n_ip_4)\right]e^{-p_4n} \nonumber \\ \\ \frac{dm_i^*}{dn_j} &=&-\frac{2}{\hbar^2}m_i^{*2}\left[p_3(1-np_4)-n_ip_4p_5\right]e^{-p_4n} \\ \frac{d^2m_i^*}{dn^2} &=&\frac{m_i^*}{n^2}\left(1-\frac{m_i^*}{m}\right) -\frac{1}{n}\frac{dm_i^*}{dn}(1-np_4) \\ \frac{d^2m_i^*}{dndn_i} &=&\frac{m_i^*}{n^2}\left(1-\frac{m_i^*}{m}\right) -\frac{1}{n}\frac{dm_i^*}{dn_i}(1-np_4) \\ \frac{d^2m_i^*}{dndn_j} &=&\frac{m_i^*}{n^2}\left(1-\frac{m_i^*}{m}\right) -\frac{1}{n}\frac{dm_i^*}{dn_j}(1-np_4) \end{eqnarray} \subsection*{Single-particle energy spectrum} \begin{eqnarray} \epsilon_{ki} &=& k_i^2T_i + V_i \\ T_i &=& \frac{\partial\mathcal{H}}{\partial\tau_i}=\frac{\hbar^2}{2m_i^*} \\ V_i &=& \frac{\partial\mathcal{H}}{\partial n_i} = \frac{\partial\mathcal{H}_m}{\partial n_i}+\frac{\partial\mathcal{H}_d}{\partial n_i} \\ \frac{\partial\mathcal{H}_m}{\partial n_i} &=& \left\{\left[p_3+p_5-p_4(np_3+n_ip_5)\right]\tau_i \right. \nonumber \\ && \left.+\left[p_3-p_4(np_3+n_jp_5)\right]\tau_j\right\}e^{-p_4n} \\ \frac{\partial\mathcal{H}_d}{\partial n_i} &=& \mu_{id} \end{eqnarray} \subsection*{Derivatives of $V_i$ with respect to $n_i$ and $n_j$} (for use in the finite-T susceptibilities) \begin{eqnarray} \frac{\partial V_{im}}{\partial n_i} &=& \left\{\left[p_3+p_5-p_4(np_3+n_ip_5)\right]\left(\frac{\partial\tau_i}{\partial n_i}-p_4\tau_i\right) \right. \nonumber \\ && -p_4(p_3+p_5)\tau_i -p_4p_3\tau_j \nonumber \\ && + \left. \left[p_3-p_4(np_3+n_jp_5)\right]\left(\frac{\partial\tau_j}{\partial n_i}-p_4\tau_j\right) \right\}e^{-p_4n} \nonumber \\ \\ \frac{\partial V_{id}}{\partial n_i} &=& \chi_{iid} \\ \frac{\partial V_{im}}{\partial n_j} &=& \left\{\left[p_3+p_5-p_4(np_3+n_ip_5)\right]\left(\frac{\partial\tau_i}{\partial n_j}-p_4\tau_i\right) \right. \nonumber \\ && -p_4p_3\tau_i -p_4(p_3+p_5)\tau_j \nonumber \\ &&\left. +\left[p_3-p_4(np_3+n_jp_5)\right]\left(\frac{\partial\tau_j}{\partial n_j}-p_4\tau_j\right) \right\}e^{-p_4n} \nonumber \\ \\ \frac{\partial V_{id}}{\partial n_j} &=& \chi_{ijd} \end{eqnarray} \subsection*{Derivatives of $Q_i$ with respect to $n$, $n_i$, and $n_j$} \begin{eqnarray} \frac{dQ_i}{dn} &=&-\frac{3}{2m_i^*}\left[\frac{dm_i^*}{dn} \right. \nonumber \\ && ~~~~~~~~~~\left.-\frac{n}{m_i^*}\left(\frac{dm_i^*}{dn}\right)^2+n\frac{d^2m_i^*}{dn^2}\right] \\ \frac{dQ_i}{dn_i} &=&-\frac{3}{2m_i^*}\left[\frac{dm_i^*}{dn} \right. \nonumber \\ && ~~~~~~~~~~\left.-\frac{n}{m_i^*}\frac{dm_i^*}{dn}\frac{dm_i^*}{dn_i}+n\frac{d^2m_i^*}{dndn_i}\right] \\ \frac{dQ_i}{dn_j} &=&-\frac{3}{2m_i^*}\left[\frac{dm_i^*}{dn} \right. \nonumber \\ && ~~~~~~~~~~\left.-\frac{n}{m_i^*}\frac{dm_i^*}{dn}\frac{dm_i^*}{dn_j}+n\frac{d^2m_i^*}{dndn_j}\right] \end{eqnarray} \section{CONTRIBUTIONS FROM LEPTONS AND PHOTONS} Charge neutrality requires that the total charge of the protons be exactly cancelled by that of the electrons.
\section{CONTRIBUTIONS FROM LEPTONS AND PHOTONS} Charge neutrality requires that the total charge of the protons be exactly cancelled by that of the electrons. At $T=0$, this can be stated in terms of the number densities as $n_p = n_{e^-}$, where the electron (with its 2 spin degrees of freedom) number density $n_{e^-}$ is given by \begin{equation} n_{e^-} = 2\int_0^{k_{Fe^-}}\frac{d^3k}{(2\pi)^3} = \frac {k_{Fe^-}^3}{3\pi^2} \end{equation} so that the electron Fermi momentum is $k_{Fe^-} = (3\pi^2n_{e^-})^{1/3}$. The chemical potential of the electrons is equal to their energy on the Fermi surface: \begin{equation} \mu_{e^-} = \epsilon_{Fe^-} = (k_{Fe^-}^2+m_e^2)^{1/2}. \end{equation} Because electromagnetic interactions yield negligible corrections~\cite{kapusta}, electrons can be treated as a free Fermi gas and hence their contributions to the energy density and the pressure of the system are \begin{eqnarray} \varepsilon_{e^-} &=& 2\int_0^{k_{Fe^-}}\frac{d^3k}{(2\pi)^3}(k^2+m_e^2)^{1/2} \nonumber \\ &=& \frac{1}{8\pi^2}\left[k_{Fe^-}\epsilon_{Fe^-}(2k_{Fe^-}^2+m_e^2)\right. \nonumber \\ && \left. +m_e^4\ln\left(\frac{m_e}{k_{Fe^-}+\epsilon_{Fe^-}}\right)\right] \\ p_{e^-} &=& \frac{2}{3}\int_0^{k_{Fe^-}}\frac{d^3k}{(2\pi)^3}\frac{k^2}{(k^2+m_e^2)^{1/2}} \nonumber \\ &=& \frac{1}{24\pi^2}\left[k_{Fe^-}\epsilon_{Fe^-}(2k_{Fe^-}^2-3m_e^2)\right. \nonumber \\ && \left. +3m_e^4\ln\left(\frac{k_{Fe^-}+\epsilon_{Fe^-}}{m_e}\right)\right] \end{eqnarray} At finite $T$, one must consider the net electric charge of electrons and positrons because in supernovae the temperature rises well above the 1 MeV threshold for $e^-e^+$ pair production. Accordingly, the charge neutrality condition becomes $n_p = n_{e^-} - n_{e^+} \equiv n_e$, where the net lepton density is given by \begin{equation} n_{e} = 2\int_0^{\infty}\frac{d^3k}{(2\pi)^3}\left[\frac{1}{1+e^{\frac{k-\mu_e}{T}}} -\frac{1}{1+e^{\frac{k+\mu_e}{T}}}\right] \label{ne} \end{equation} with the chemical potentials of electrons and positrons being equal in magnitude, but opposite in sign. In the range of densities and temperatures pertaining to supernovae $\mu_e,~T \gg m_e$ and thus the relativistic limit applies: \begin{eqnarray} \epsilon_k &=& (k^2+m_e^2)^{1/2} \simeq k\left(1+\frac{m_e^2}{2k^2}\right) \\ \frac{1}{1+e^{\frac{\epsilon_k\pm\mu_e}{T}}} &\simeq& \frac{1}{1+e^{\frac{k\pm\mu_e}{T}}} \nonumber \\ && \pm \frac{\partial}{\partial \mu_e}\left(\frac{m_e^2}{2k}\frac{1}{1+e^{\frac{k\pm\mu_e}{T}}}\right) \end{eqnarray} Then, Eq. (\ref{ne}) can be integrated analytically with the result \begin{equation} n_e = \frac{\mu_e^3}{3\pi^2}\left[1+\mu_e^{-2}(\pi^2T^2-\frac{3}{2}m_e^2)\right] \end{equation} which can be solved for the chemical potential \begin{eqnarray} \mu_e &=& \left(\frac{3\pi^2n_e}{2}\right)^{1/3} \nonumber \\ &*& \left\{\left(1-\left[1+\left(\frac{\pi^2T^2}{3}-\frac{m_e^2}{2}\right)^3 \left(\frac{2}{3\pi^2n_e}\right)^2\right]^{1/2}\right)^{1/3} \right. \nonumber \\ &+& \left. \left(1+\left[1+\left(\frac{\pi^2T^2}{3}-\frac{m_e^2}{2}\right)^3 \left(\frac{2}{3\pi^2n_e}\right)^2\right]^{1/2}\right)^{1/3} \right\} \nonumber \\ \label{muerel} \end{eqnarray} The total energy density, total pressure, and total entropy density of the leptons in the relativistic regime are \begin{eqnarray} \varepsilon_e &=& \varepsilon_{e^-} + \varepsilon_{e^+} \nonumber \\ &=& \frac{\mu_e^4}{4\pi^2}\left[1+\mu_e^{-2}(2\pi^2T^2-m_e^2) \right. \nonumber \\ && \left. +\pi^2T^2\mu_e^{-4}\left(\frac{7\pi^2T^2}{15}-\frac{m_e^2}{3}\right)\right] \label{eerel} \\ p_e &=& p_{e^-} + p_{e^+} \nonumber \\ &=& \frac{\mu_e^4}{12\pi^2}\left[1+\mu_e^{-2}(2\pi^2T^2-3m_e^2) \right.\nonumber \\ && \left.
+\pi^2T^2\mu_e^{-4}\left(\frac{7\pi^2T^2}{15}-m_e^2\right)\right] \label{perel}\\ s_e &=& \frac{\varepsilon_e+p_e-\mu_en_e}{T} \nonumber \\ &=& \frac{\mu_e^2T}{3}\left[1+\mu_e^{-2}\left(\frac{7\pi^2T^2}{15}-\frac{m_e^2}{2}\right)\right] \label{serel} \end{eqnarray} In the limit $m_e\rightarrow 0$, $p_e = \frac{1}{3} \varepsilon_e$. The specific heats at constant volume and constant pressure can be obtained by \begin{eqnarray} C_{Ve} &=& \frac{1}{n_e}\left.\frac{\partial \varepsilon_e}{\partial T}\right|_{n_e} \nonumber \\ &=& \frac{1}{n_e}\left(\left.\frac{\partial \varepsilon_e}{\partial \mu_e}\right|_{T} \left.\frac{\partial \mu_e}{\partial T}\right|_{n_e} +\left.\frac{\partial \varepsilon_e}{\partial T}\right|_{\mu_e}\right) \\ C_{Pe} &=& \left.\frac{\partial}{\partial T}\left(\frac{\varepsilon_e+p_e}{n_e}\right)\right|_{p_e} \nonumber \\ &=& \frac{1}{n_e}\left(\left.\frac{\partial \varepsilon_e}{\partial \mu_e}\right|_{T} \left.\frac{\partial \mu_e}{\partial T}\right|_{p_e} +\left.\frac{\partial \varepsilon_e}{\partial T}\right|_{\mu_e}\right) \nonumber \\ &-& \frac{(\varepsilon_e+p_e)}{n_e^2} \left(\left.\frac{\partial n_e}{\partial \mu_e}\right|_{T} \left.\frac{\partial \mu_e}{\partial T}\right|_{p_e} +\left.\frac{\partial n_e}{\partial T}\right|_{\mu_e}\right) \end{eqnarray} where \begin{eqnarray} \left.\frac{\partial \varepsilon_e}{\partial \mu_e}\right|_{T} &=& \frac{\mu_e^3}{\pi^2}\left[1+\mu_e^{-2}\left(\pi^2T^2-\frac{m_e^2}{2}\right)\right] \\ \left.\frac{\partial \mu_e}{\partial T}\right|_{n_e} &=& -\frac{2\pi^2T}{3\mu_e\left[1+\pi^2\mu_e^{-2}\left(\frac{T^2}{3}-\frac{m_e^2}{2\pi^2}\right)\right]} \\ \left.\frac{\partial \varepsilon_e}{\partial T}\right|_{\mu_e} &=& T\mu_e^2\left[1+\mu_e^{-2}\left(\frac{7\pi^2T^2}{15}-\frac{m_e^2}{6}\right)\right] \\ \left.\frac{\partial \mu_e}{\partial T}\right|_{p_e} &=& -\frac{\mu_e^2T}{3\pi^2n_e}\left[ 1+\frac{3\pi^2}{\mu_e^2}\left(\frac{7\pi^2T^2}{15}-\frac{m_e^2}{2}\right)\right] \\ \left.\frac{\partial n_e}{\partial \mu_e}\right|_{T} &=& \frac{\mu_e^2}{\pi^2}\left[1+\pi^2\mu_e^{-2}\left(\frac{T^2}{3}-\frac{m_e^2}{2\pi^2}\right)\right] \\ \left.\frac{\partial n_e}{\partial T}\right|_{\mu_e} &=& \frac{2\mu_eT}{3} . \end{eqnarray} Finally, we present the derivatives of the electron chemical potential with respect to the proton and neutron number densities. These are essential for our subsequent discussion of the low-to-high-density phase transition of $\mathcal{H}_{APR}$ and of our treatment of it by means of a Maxwell construction. At $T=0$, we have \begin{eqnarray} \frac{\partial \mu_e}{\partial n_p} = \frac{k_{Fe^-}^2}{3n_{e^-}\mu_e} \quad {\rm and} \quad \frac{\partial \mu_e}{\partial n_n} = 0 \,, \end{eqnarray} whereas at finite temperature $(T>1~\mbox{MeV})$ \begin{eqnarray} \frac{\partial \mu_e}{\partial n_p} = \frac{3\pi^2}{\pi^2T^2-\frac{3m_e^2}{2}+3\mu_e^2} \quad {\rm and} \quad \frac{\partial \mu_e}{\partial n_n} = 0 \,. \end{eqnarray} When $T<1$ MeV, numerical evaluation of the relevant FD integrals is required. The numerical methods adopted in this work are outlined in Appendix D.
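The chain of relations above is straightforward to code. The sketch below (an illustration under our own unit assumptions, with densities in fm$^{-3}$ and energies in MeV) evaluates Eq. (\ref{muerel}) through signed cube roots of the Cardano solution and then Eqs. (\ref{eerel})-(\ref{serel}); it is valid only in the relativistic regime $\mu_e, T \gg m_e$:
\begin{verbatim}
import math

HBARC = 197.327   # MeV fm
M_E   = 0.511     # electron mass in MeV (assumed value)

def mu_e_relativistic(n_e, T, m_e=M_E):
    # Eq. (muerel): Cardano solution of the cubic for mu_e;
    # n_e in fm^-3, T in MeV
    ne = n_e * HBARC**3                      # fm^-3 -> MeV^3
    a = math.pi**2 * T**2 / 3.0 - m_e**2 / 2.0
    b = 1.5 * math.pi**2 * ne                # (3 pi^2 n_e)/2
    r = math.sqrt(1.0 + a**3 / b**2)
    cbrt = lambda y: math.copysign(abs(y)**(1.0 / 3.0), y)
    return b**(1.0 / 3.0) * (cbrt(1.0 - r) + cbrt(1.0 + r))

def lepton_thermo(mu, T, m_e=M_E):
    # Eqs. (eerel), (perel), (serel):
    # returns (eps_e, p_e, s_e) in MeV^4, MeV^4, MeV^3
    p2t2 = math.pi**2 * T**2
    eps = mu**4 / (4 * math.pi**2) * (1 + (2 * p2t2 - m_e**2) / mu**2
          + p2t2 / mu**4 * (7 * p2t2 / 15 - m_e**2 / 3))
    p = mu**4 / (12 * math.pi**2) * (1 + (2 * p2t2 - 3 * m_e**2) / mu**2
          + p2t2 / mu**4 * (7 * p2t2 / 15 - m_e**2))
    s = mu**2 * T / 3 * (1 + (7 * p2t2 / 15 - m_e**2 / 2) / mu**2)
    return eps, p, s
\end{verbatim}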
The contributions from photons are adequately given by the standard blackbody relations for the energy density, the pressure, and the entropy density: \begin{equation} \varepsilon_{\gamma} = \frac{\pi^2}{15}\frac{T^4}{(\hbar c)^3}\,, \quad p_{\gamma} = \frac{\varepsilon_{\gamma}}{3}\,, \quad {\rm and} \quad s_{\gamma} = \frac{4}{3}\frac{\varepsilon_{\gamma}}{T}, \end{equation} respectively. These remain very small compared to the baryonic and leptonic contributions for all temperatures relevant to the supernova problem and, for most practical purposes, can be ignored with no repercussions. \section{NUMERICAL NOTES} The electronic state variables involve relativistic Fermi-Dirac integrals, the general form of which is \begin{equation} F_{\lambda}(\psi,x) = \int_0^{\infty}\frac{\alpha^{\lambda}\left(\frac{\alpha}{2x}+1\right)^{1/2}} {1+e^{\alpha-\psi}} d\alpha \end{equation} where \begin{eqnarray} x &=& \frac{m_e}{T} \\ \alpha &=& \frac{(k^2+m_e^2)^{1/2}}{T} - x \\ \psi &=& \frac{\mu_e-m_e}{T} \end{eqnarray} In particular, the number density, the energy density, and the pressure are given by \begin{eqnarray} n_e &=& \frac{\sqrt{2}}{\pi^2}T^{5/2}m_e^{1/2}(F_{3/2}+xF_{1/2}) \\ \varepsilon_e &=& \frac{\sqrt{2}}{\pi^2}T^{7/2}m_e^{1/2}(F_{5/2}+2xF_{3/2}+x^2F_{1/2}) \\ p_e &=& \frac{\sqrt{2}}{3\pi^2}T^{7/2}m_e^{1/2}(F_{5/2}+2xF_{3/2}) \end{eqnarray} respectively. \\ We evaluate these quantities numerically, using the JEL method~\cite{jel} whereby they are expressed algebraically in terms of the mass, the temperature, and the chemical potential: \begin{eqnarray} n_e & = & \frac{m_e^3}{\pi^2}\frac{fg^{3/2}(1+g)^{3/2}}{(1+f)^{M+1/2}(1+g)^N (1+f/a)^{1/2}} \nonumber \\ &*& \sum_{m=0}^M\sum_{n=0}^Np_{mn}f^mg^n \left[1+m+\left(\frac{1}{4}+\frac{n}{2}-M\right)\frac{f}{1+f}\right. \nonumber \\ &&~~~~~~+\left.\left(\frac{3}{4}-\frac{N}{2}\right)\frac{fg}{(1+f)(1+g)}\right] \\ U_e &=& \varepsilon_e-m_en_e \nonumber \\ &=& \frac{m_e^4}{\pi^2} \frac{fg^{5/2}(1+g)^{3/2}}{(1+f)^{M+1}(1+g)^N} \sum_{m=0}^M\sum_{n=0}^Np_{mn}f^mg^n \nonumber \\ &&~~~~~\times\left[\frac{3}{2}+n+\left(\frac{3}{2}-N\right)\frac{g}{1+g}\right] \label{eejel} \\ p_e & = & \frac{m_e^4}{\pi^2} \frac{fg^{5/2}(1+g)^{3/2}}{(1+f)^{M+1}(1+g)^N} \sum_{m=0}^M\sum_{n=0}^Np_{mn}f^mg^n \label{pejel} \end{eqnarray} where \begin{eqnarray} \psi &=& \frac{\mu_e-m_e}{T}=2(1+f/a)^{1/2} + \ln\left[\frac{(1+f/a)^{1/2}-1}{(1+f/a)^{1/2}+1}\right] \nonumber \\ \label{muejel} \\ g &=& \frac{T}{m_e}(1+f)^{1/2}\equiv t(1+f)^{1/2}. \end{eqnarray} The coefficients $p_{mn}$ for $M=N=3$ and $a=0.433$ are displayed in table \ref{jelpmn}.
\\ \begin{table}[h] \begin{center} \begin{tabular}{|l|l|l|l|l|} \hline $p_{mn}$ & $n=0$ & $n=1$ & $n=2$ & $n=3$ \\ \hline $m=0$ & 5.34689 & 18.0517 & 21.3422 & 8.53240 \\ $m=1$ & 16.8441 & 55.7051 & 63.6901 & 24.6213 \\ $m=2$ & 17.4708 & 56.3902 & 62.1319 & 23.2602 \\ $m=3$ & 6.07364 & 18.9992 & 20.02285 & 7.11153 \\ \hline \end{tabular} \caption[JEL Coefficients]{JEL coefficients $p_{mn}$ for $M=N=3$ and $a=0.433$} \label{jelpmn} \end{center} \end{table} The entropy density and the free energy density follow from standard thermodynamic relations: \begin{eqnarray} s_e &=& \frac{1}{T}(\varepsilon_e+p_e-\mu_en_e) \label{se}\\ \mathcal{F}_e &=& \varepsilon_e -Ts_e \end{eqnarray} Furthermore, by taking derivatives of $n_e$, $U_e$, and $p_e$ with respect to $\psi$ and $t$ we can get the susceptibilities and the specific heats: \begin{eqnarray} \left.\frac{\partial \mu_e}{\partial n_p}\right|_{n_n} &=& T\left(\left.\frac{\partial n_e}{\partial \psi}\right|_t -t^2\left.\frac{\partial n_e}{\partial t}\right|_{\psi}\right)^{-1} \\ \left.\frac{\partial \mu_e}{\partial n_n}\right|_{n_p} &=& 0 \\ C_{Ve} &=& \frac{1}{n_em_e}\left(\left.\frac{\partial U_e}{\partial t}\right|_{\psi} -\left.\frac{\partial U_e}{\partial \psi}\right|_t \frac{\left.\frac{\partial n_e}{\partial t}\right|_{\psi}} {\left.\frac{\partial n_e}{\partial \psi}\right|_t}\right) \label{cve} \\ C_{Pe} &=& \frac{1}{n_em_e}\left(\left.\frac{\partial U_e}{\partial t}\right|_{\psi} -\left.\frac{\partial U_e}{\partial \psi}\right|_t \frac{\left.\frac{\partial p_e}{\partial t}\right|_{\psi}} {\left.\frac{\partial p_e}{\partial \psi}\right|_t}\right) \nonumber \\ &-&\frac{U_e+p_e}{n_e^2m_e} \left(\left.\frac{\partial n_e}{\partial t}\right|_{\psi} -\left.\frac{\partial n_e}{\partial \psi}\right|_t \frac{\left.\frac{\partial p_e}{\partial t}\right|_{\psi}} {\left.\frac{\partial p_e}{\partial \psi}\right|_t}\right) \label{cpe} \end{eqnarray} where \begin{eqnarray} \left.\frac{\partial}{\partial \psi}\right|_t &=& \frac{f}{1+f/a} \left(\left.\frac{\partial}{\partial f}\right|_g +\frac{t^2}{2g}\left.\frac{\partial}{\partial g}\right|_f\right) \\ \left.\frac{\partial}{\partial t}\right|_{\psi} &=& \frac{g}{t}\left.\frac{\partial}{\partial g}\right|_f. \end{eqnarray}
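For orientation, a minimal Python transcription of the relativistic JEL evaluation above is given below. It assumes the $M=N=3$, $a=0.433$ coefficients of Table \ref{jelpmn} and takes $f$ and $t=T/m_e$ as inputs; in practice $f$ is obtained by inverting Eq. (\ref{muejel}) for a given $\psi$, e.g. by bisection:
\begin{verbatim}
import math

A_JEL = 0.433
PMN = [  # Table (jelpmn): p_{mn}, rows m = 0..3, columns n = 0..3
    [5.34689, 18.0517, 21.3422,  8.53240],
    [16.8441, 55.7051, 63.6901, 24.6213],
    [17.4708, 56.3902, 62.1319, 23.2602],
    [6.07364, 18.9992, 20.02285, 7.11153],
]
MM = NN = 3

def psi_of_f(f):
    # Eq. (muejel); invert numerically to obtain f from psi
    r = math.sqrt(1.0 + f / A_JEL)
    return 2.0 * r + math.log((r - 1.0) / (r + 1.0))

def jel_electron_gas(f, t, m_e=0.511):
    # returns (n_e [MeV^3], p_e [MeV^4], U_e [MeV^4])
    g = t * math.sqrt(1.0 + f)
    sn = sp = su = 0.0
    for m in range(MM + 1):
        for n in range(NN + 1):
            c = PMN[m][n] * f**m * g**n
            sn += c * (1 + m + (0.25 + 0.5 * n - MM) * f / (1 + f)
                       + (0.75 - 0.5 * NN) * f * g / ((1 + f) * (1 + g)))
            sp += c
            su += c * (1.5 + n + (1.5 - NN) * g / (1 + g))
    n_e = (m_e**3 / math.pi**2) * f * g**1.5 * (1 + g)**1.5 * sn / (
        (1 + f)**(MM + 0.5) * (1 + g)**NN * math.sqrt(1 + f / A_JEL))
    pref = (m_e**4 / math.pi**2) * f * g**2.5 * (1 + g)**1.5 / (
        (1 + f)**(MM + 1) * (1 + g)**NN)
    return n_e, pref * sp, pref * su
\end{verbatim}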
The non-relativistic Fermi-Dirac integrals \begin{eqnarray} F_{\lambda}(\psi) &=& \int_0^{\infty}\frac{x^{\lambda}}{1+e^{x-\psi}}dx \\ x &=& \frac{1}{T}\frac{\hbar^2k^2}{2m^*}, ~~~~\psi = \frac{\mu-V(n)}{T} \end{eqnarray} that are relevant to the thermodynamics of the nucleons are treated by the method developed in \cite{cody}. There, three different approximations and corresponding intervals are given for each of $F_{3/2}$, $F_{1/2}$, and $F_{-1/2}$: \begin{eqnarray} F_{\lambda}(\psi) &=& e^{\psi}\left[\Gamma(\lambda+1)+e^{\psi} \frac{\displaystyle{\sum_{s=0}^np_se^{s\psi}}}{\displaystyle{\sum_{s=0}^nq_se^{s\psi}}}\right], ~~~-\infty<\psi\le1 \nonumber \\ \\ F_{\lambda}(\psi) &=& \frac{\displaystyle{\sum_{s=0}^np_s\psi^s}}{\displaystyle{\sum_{s=0}^nq_s\psi^s}}, ~~~1\le\psi\le4 \\ F_{\lambda}(\psi) &=& \psi^{\lambda+1}\left[\frac{1}{\lambda+1}+\frac{1}{\psi^2} \frac{\displaystyle{\sum_{s=0}^np_s\psi^{-s}}}{\displaystyle{\sum_{s=0}^nq_s\psi^{-2s}}}\right], ~~~4\le\psi<\infty \nonumber \\ \end{eqnarray} In our code, we have used the coefficients of the $n=4$ case as they appear in \cite{cody}.\\ These integrals have also been computed using the non-relativistic version of the JEL approach: \begin{eqnarray} F_{3/2} & = & \frac{3f(1+f)^{1/4-M}}{2\sqrt{2}}\sum_{m=0}^Mp_mf^m \\ F_{1/2} & = & \frac{f(1+f)^{1/4-M}}{\sqrt{2(1+f/a)}} \nonumber \\ &*& \sum_{m=0}^Mp_mf^m \left[1+m-\left(M-\frac{1}{4}\right)\frac{f}{1+f}\right] \\ F_{-1/2} & = & -\frac{f}{a(1+f/a)^{3/2}}F_{1/2} \nonumber \\ &+& \frac{\sqrt{2}f(1+f)^{1/4-M}}{1+f/a} \sum_{m=0}^M p_mf^m \left[(1+m)^2 \right. \nonumber \\ &-& \left.\left(M-\frac{1}{4}\right)\frac{f}{1+f} \left(3+2m-\left[M+\frac{3}{4}\right]\frac{f}{1+f}\right)\right] \nonumber \\ \end{eqnarray} with \begin{equation} \psi = \frac{\mu-V(n)}{T} = 2(1+f/a)^{1/2}+\mbox{ln}\left[\frac{(1+f/a)^{1/2}-1}{(1+f/a)^{1/2}+1}\right] \end{equation} The coefficients $M$, $a$, and $p_m$ in the above equations are contained in Table VI under the $n=0$ column. The agreement between the two methods is excellent. \section{CAUSAL EQUATIONS OF STATE} It is not unusual for equations of state from non-relativistic potential models to become acausal at some high density. Causality is preserved as long as the speed of sound $c_s$ is less than or equal to the speed of light $c$. In this appendix, we present a thermodynamically consistent method by which an EOS based on a non-relativistic potential model can be modified so that it remains causal at arbitrarily high densities, both at zero temperature and at finite temperature. \section*{Zero temperature case} In terms of the pressure $P$ and energy density $\epsilon$, the condition for an EOS to remain causal is \begin{equation} \left(\frac{c_s}{c}\right)^2 \equiv \beta = \frac{dP}{d\epsilon} = \frac{dP}{dn}\left(\frac{d\epsilon}{dn}\right)^{-1}\le 1 \,. \label{beta} \end{equation} Including the rest-mass energy density $mn$, the total energy density is \begin{equation} \epsilon = \varepsilon + mn\,, \end{equation} where $\varepsilon$ is the internal (or specific) energy density of matter. The pressure and its density derivative are then \begin{equation} P = n\frac{d\varepsilon}{dn}-\varepsilon = n\mu - \varepsilon \qquad {\rm and} \qquad \frac{dP}{dn} = n\frac{d\mu}{dn}\,. \label{ident}\\ \end{equation} \noindent We can thus write (\ref{beta}) as a first order differential equation (DE): \begin{equation} \frac{d\mu}{dn} - \frac{\beta}{n}\mu = \frac{\beta m}{n}\,. \label{1DE} \end{equation} The integrating factor of Eq. (\ref{1DE}) is given by \begin{equation} f(n) = \exp\left\{-\beta \int\frac{dn}{n}\right\} = n^{-\beta}\,, \end{equation} and has the property \begin{equation} \frac{d}{dn}[n^{-\beta}\mu] = n^{-\beta}~ \frac{\beta m}{n}. \label{ifac} \end{equation} Integration of Eq.
(\ref{ifac}) leads to \begin{equation} \mu = \frac{d\varepsilon}{dn} = -m+c_1n^{\beta}\,, \end{equation} where $c_1$ is a constant of integration. A second integration results in \begin{equation} \varepsilon = -mn + \frac{c_1n^{\beta+1}}{\beta+1} + c_2 \end{equation} with another constant of integration $c_2$, and therefore \begin{equation} P = c_1\frac{\beta}{\beta+1}n^{\beta+1} - c_2\,. \end{equation} The integration constants $c_1$ and $c_2$ are determined by the boundary conditions \begin{equation} \varepsilon(n_f) = \varepsilon_f \qquad {\rm and} \qquad P(n_f) = P_f \,, \label{pf} \end{equation} where $n_f$ is the causality-fixing density, chosen to be about 0.9-0.95 $n_a$, with $n_a$ the density at which the EOS becomes acausal, that is, \begin{equation} \left.\frac{dP}{d\epsilon}\right|_{n_a} = 1\,, \end{equation} and the functional forms of $\varepsilon(n)$ and $P(n)$ are those obtained from the original Hamiltonian density. From Eqs. (\ref{pf}), we get \begin{equation} c_1 = \frac{\epsilon_f+P_f}{n_f^{\beta+1}} \quad {\rm and} \quad c_2 = \frac{1}{\beta+1}(\beta\epsilon_f-P_f). \label{c12} \end{equation} Thus the energy density and the pressure are given by \begin{eqnarray} \varepsilon &=& -mn + \frac{(\epsilon_f+P_f)}{\beta+1} \left(\frac{n}{n_f}\right)^{\beta+1} + \frac{\beta\epsilon_f-P_f}{\beta+1} \label{veps} \\ P &=& \frac{\beta}{\beta+1}(\epsilon_f+P_f) \left(\frac{n}{n_f}\right)^{\beta+1} -\frac{\beta\epsilon_f-P_f}{\beta+1}. \label{pp} \end{eqnarray} Equations (\ref{veps})-(\ref{pp}) can be used for $n \ge n_f$ with $\beta \le 1$ so that causality is never violated. Thermodynamic consistency is built-in, because Eqs. (\ref{veps})-(\ref{pp}) obey the general identity (\ref{ident}). \section*{Finite temperature case} At finite temperature, the causality condition becomes \begin{equation} \beta = \left.\frac{dP}{d\epsilon}\right|_s = \left.\frac{dP}{dn}\right|_s\left(\left.\frac{d\epsilon}{dn}\right|_s\right)^{-1}\le 1 \,. \label{betat} \end{equation} We transform the first term to the variables $n$ and $T$ by the use of Jacobians to get \begin{equation} \left.\frac{dP}{dn}\right|_s = \gamma \left.\frac{dP}{dn}\right|_T \qquad {\rm with} \qquad \gamma = \frac{C_P}{C_V}. \end{equation} The second term of (\ref{betat}) can be written as \begin{equation} \left.\frac{d\epsilon}{dn}\right|_s = \left.\frac{d(\varepsilon+mn)}{dn}\right|_s = \mu + m \end{equation} by employing the identity \begin{equation} \mu = \left.\frac{d\varepsilon}{dn}\right|_s = \left.\frac{d\mathcal{F}}{dn}\right|_T \end{equation} where $\mathcal{F}$ is the free energy density. The pressure and its density derivative at finite temperature change to \begin{equation} P = n\left.\frac{d\mathcal{F}}{dn}\right|_T-\mathcal{F} = n\mu - \mathcal{F} \qquad {\rm and} \qquad \left.\frac{dP}{dn}\right|_T = n\left.\frac{d\mu}{dn}\right|_T \,.
\end{equation} Thus the finite-T equivalent of (\ref{1DE}) is: \begin{equation} \left.\frac{d\mu}{dn}\right|_T - \frac{\beta/\gamma}{n}\mu = \frac{(\beta/\gamma)\,m}{n}\, \end{equation} which leads to (by full analogy with the zero-T case) \begin{eqnarray} c_1 &=& \frac{\mathcal{F}_f+mn_f+P_f}{n_f^{\beta/\gamma+1}} \\ c_2 &=& \frac{1}{\beta/\gamma+1}\left[\frac{\beta}{\gamma}(\mathcal{F}_f+mn_f)-P_f\right] \\ \mathcal{F} &=& -mn + \frac{(\mathcal{F}_f+mn_f+P_f)}{\beta/\gamma+1} \left(\frac{n}{n_f}\right)^{\beta/\gamma+1} \nonumber \\ &+& \frac{(\beta/\gamma)(\mathcal{F}_f+mn_f)-P_f}{\beta/\gamma+1} \\ P &=& \frac{\beta/\gamma}{\beta/\gamma+1}(\mathcal{F}_f+mn_f+P_f) \left(\frac{n}{n_f}\right)^{\beta/\gamma+1} \nonumber \\ &-&\frac{(\beta/\gamma)(\mathcal{F}_f+mn_f)-P_f}{\beta/\gamma+1}. \end{eqnarray} Note that $\beta$ and $\gamma$ should be evaluated at $n_f$.
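As a small numerical illustration of the zero-temperature construction, the sketch below (our own helper, with all quantities in consistent units) returns the internal energy density and pressure of the causal extension, Eqs. (\ref{veps})-(\ref{pp}):
\begin{verbatim}
def causal_eos_T0(n, n_f, eps_f, P_f, m, beta=1.0):
    # Eqs. (veps)-(pp): internal energy density and pressure of the
    # causal extension for n >= n_f; eps_f is the total energy density
    # (including the rest-mass term) of the original EOS at n_f.
    ratio = (n / n_f)**(beta + 1.0)
    eps = (-m * n + (eps_f + P_f) / (beta + 1.0) * ratio
           + (beta * eps_f - P_f) / (beta + 1.0))
    P = (beta / (beta + 1.0) * (eps_f + P_f) * ratio
         - (beta * eps_f - P_f) / (beta + 1.0))
    return eps, P
\end{verbatim}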
1,941,325,220,591
arxiv
\section*{Acknowledgment} The author is supported by the German Research Foundation (SA \mbox{2864/1-1} and SA \mbox{2864/3-1}).
1,941,325,220,592
arxiv
\section{Acknowledgements} This project was supported in part by NSF CAREER AWARD 1942230, an IBM faculty award, a grant from Capital One, and a Simons Fellowship on Deep Learning Foundations. This work was supported through the IBM Global University Program Awards initiative. The authors thank Ritesh Soni, Steven Loscalzo, Bayan Bruss, Samuel Sharpe and Jason Wittenbach for helpful discussions. \section{Introduction} \label{sec:introduction} When deploying machine learning models in the real world, we need to ensure safety and reliability along with performance. Models that perform well on the training data can be easily fooled when deployed in the wild \cite{nguyen2014deep,szegedy2013intriguing}. Recognizing novel or anomalous samples in the landscape of constantly changing data is considered an important problem in AI safety~\cite{amodei2016concrete}. Flagging anomalies is of utmost importance in many real-life applications of machine learning such as self-driving and medical diagnosis. The task of identifying such novel or anomalous samples has been formalized as Anomaly Detection (AD). This problem has been studied for years under various names, like novelty detection, out-of-distribution detection, open set recognition, uncertainty estimation, and so on~\cite{hodge2004survey, chandola2009anomaly, chalapathy2019deep}. If the training data has class labels available for the normal samples, several approaches have been proposed for OOD detection on top of or within a neural network classifier \cite{hendrycks2016baseline, liang2017enhancing, vyas2018out, hsu2020generalized, hendrycks2018deep, lee2018simple}. While these methods perform exceptionally well, they cannot be used in unsupervised or one-class classification scenarios where labels are missing or unavailable for most of the classes; for instance, in a credit card fraud recognition scenario, we are presented with a lot of normal transactions but no additional labels for the transaction type. A rather obvious choice in such cases is to learn the underlying distribution of the data using generative models. Within deep generative models, two styles of approaches are popular: (1) likelihood-based models, such as flow or autoregressive models, which use the estimated likelihood to recognize anomalies, and (2) AutoEncoder (AE) style approaches, where the reconstruction error of a given input is used to recognize anomalies. While likelihood-based approaches allow computation of the exact likelihood for a given sample, they are found to assign high likelihood scores to out-of-distribution samples, as noted in the recent literature~\cite{choi2018waic, nalisnick2018deep, ren2019likelihood}. The goal of AE-based approaches is to learn a good latent representation of data by either performing reconstruction or adversarial training with a discriminator~\cite{schlegl2017unsupervised,zenati2018adversarially, akcay2018ganomaly, akccay2019skip, ngo2019fence}. In this work, we focus on the latter, i.e., the AE style methods, and resolve two specific problems associated with them. First, the $\ell_{p}$ loss used for reconstruction by AE methods compares only pixel-level errors but does not capture the high-level structure in the image. \cite{munjal2020implicit,rosca2017variational} proposed to alleviate this problem by introducing an adversarial loss~\cite{goodfellow2014generative}.
While the adversarial loss fixes the problem of blurry reconstructions in low-diversity settings such as CelebA~\cite{liu2018large} faces, the quality of reconstruction remains poor for more diverse datasets such as CIFAR~\cite{krizhevsky2009learning} with many unrelated sub-classes like cats and airplanes~\cite{munjal2020implicit}. We posit that this issue arises because the loss function in~\cite{munjal2020implicit} compares distributions for a batch of samples but not the individual samples themselves. Hence a cat image reconstructed as an airplane is still a feasible solution since both airplane and cat belong to the same unlabeled input distribution. To address this problem, we propose the Mirrored Wasserstein loss, where for a given sample $\mathbf{x}$ and its reconstruction $\hat{\mathbf{x}}$, a discriminator measures the Wasserstein distance between the joint distributions $(\mathbf{x},\mathbf{x})$ and $(\mathbf{x},\hat{\mathbf{x}})$. Stacking the image with its reconstruction allows the discriminator not only to minimize the distance between the distributions of images and reconstructions as before, but also to ensure that each reconstruction is pushed closer to its ground truth. In \S~\ref{sec:Mirrored loss}, we give an intuition on how the Mirrored Wasserstein loss improves the reconstruction quality compared to the regular Wasserstein loss. The second problem associated with AE methods is the regularization of the latent space. In the absence of explicit regularization, the model ends up overfitting the training distribution. Several regularization approaches have been proposed in the past~\cite{kingma2013auto,makhzani2015adversarial}, typically with the goal of sampling from the latent distribution. In our work, we consider regularizing the latent space of the model from the perspective of anomaly detection. Ideally, we want the latent space to be smooth and compact for the samples within the distribution, while simultaneously pushing away out-of-distribution samples. To this end, we perform a simplex interpolation between latent representations of multiple samples in the training data, to ensure that decoder reconstructions of these latents are also realistic~\cite{berthelot2018understanding}. For training purposes, we generate synthetic negative samples by sampling from the atypical set in latent space~\cite{cover1999elements}. Our latent space regularizer ensures high quality reconstructions for in-distribution latent codes, thus improving the anomaly detection performance, as demonstrated quantitatively in Section~\ref{sec:experiments}. In summary, our main contributions are: \begin{itemize} \item We propose \textbf{Adversarial~Mirrored~AutoEncoder (AMA)}, an AutoEncoder-Discriminator style network that uses the Mirrored Wasserstein loss in the discriminator to enforce better reconstructions on diverse datasets. \item We propose latent space regularization during training by performing \textbf{Simplex Interpolation} of normal samples in the latent space and by sampling \textit{synthetic negatives} via \textbf{Atypical Selection}, optimizing the latent space to stay away from them. \item We propose an anomaly score metric that generates a likelihood-like estimate for a given sample with respect to the distribution of reconstruction scores of the training data.
\end{itemize} We performed extensive evaluations on various benchmark image datasets, CIFAR-10~\cite{krizhevsky2009learning}, CIFAR-100~\cite{krizhevsky2009learning}, ImageNet~(resized)~\cite{deng2009imagenet}, SVHN~\cite{sermanet2012convolutional}, Fashion MNIST~\cite{xiao2017fashion}, MNIST~\cite{lecun1998gradient}, and Omniglot~\cite{lake2019omniglot}, and our model outperforms the current state-of-the-art generative methods for anomaly detection. \section{Background} \label{sec:Background} This work considers OOD detection in datasets with no class labels, a setting closely related to one-class classification and one-class open-set detection problems. Since anomalies are very rare in general, following the same setup as \cite{ruff2018deep, zenati2018adversarially, ren2019likelihood}, we assume that all data in the training set are normal samples. In the appendix, we address the scenario in which the training data is corrupted with anomalies. Suppose we have an in-distribution dataset $\mathcal{D}_{in}$ consisting of $\{\mathbf{x}_1, \mathbf{x}_2,...\mathbf{x}_m \}$ sampled from the distribution $\mathbb{P}_{in}$. Our objective is to train our model $\mathcal{M}$ such that the anomaly score it assigns separates samples drawn from $\mathbb{P}_{in}$ from samples drawn from any other distribution. \section{Related work} \label{sec:relatedwork} \begin{figure*}[ht!] \begin{center} \includegraphics[width=0.9\linewidth]{imgs/AMA_flow.pdf} \end{center} \caption{\textbf{AMA pipeline:} Our model consists of an Encoder $\mathbf{E}$, a Generator $\mathbf{G}$ and a Discriminator $\mathbf{D}$. (a) First, we train the model on all the training samples by optimizing the min-max objective from Eq.~\ref{eq:final_objective} with the latent space regularization discussed in \S~\ref{sec:latent}. (b) Next, we take the trained AMA module, pass the complete training data through it to generate R-scores using Eq.~\ref{eq:R_x}, and fit them to a Gaussian distribution. (c) During inference, given an image $x_{test}$, we first calculate $R(x_{test})$ by passing it through the frozen AMA module, and then $A(x_{test})$ using Eq.~\ref{eq:A_x}, which is essentially the likelihood of $R(x_{test})$ under the Gaussian curve we generated in (b). The lower the $A(x_{test})$, the more likely the given test sample is anomalous.} \label{fig:AMA_flow} \end{figure*} The problem we are trying to solve is OOD detection in datasets with no class labels. Depending on the field, it is studied under various names like one-class classification, novelty detection, and so on. \noindent\textbf{Likelihood based approaches:} Since generative modeling techniques such as Glow~\cite{kingma2018glow}, PixelRNN~\cite{oord2016pixel}, or PixelCNN++~\cite{salimans2017pixelcnn++} allow us to compute the exact likelihood of data samples, several anomaly detection methods are built on top of the likelihood estimates provided by these models. LLR~\cite{ren2019likelihood} proposes to train two models: one on the background statistics of the training data, obtained by random sampling of pixels, and a second model on the training data itself. Given an image, the anomaly score is given by the ratio of the likelihoods predicted by these two models. WAIC~\cite{choi2018waic} suggests using the Watanabe-Akaike Information Criterion calculated over ensembles of generative models as the anomaly scoring metric.
Serra et al.~\cite{serra2019input} propose an $\mathcal{S}$-criterion, calculated by subtracting a complexity estimate of the image from the negative log-likelihood predicted by a PixelCNN++ or a Glow model. The typicality test~\cite{nalisnick2019detecting} assesses the typicality of samples by employing a Monte Carlo estimate of the empirical entropy. A limitation of this method is that it needs multiple images at the same time for evaluation. Some recent studies~\cite{choi2018waic, nalisnick2018deep, ren2019likelihood} suggest that deep generative models trained on a dataset (say CIFAR-10) assign higher likelihoods to some out-of-distribution (OOD) images (e.g. SVHN). This behaviour is persistent in a wide range of deep generative models such as Glow, PixelRNN, and PixelCNN++ and raises the question of whether the likelihood provided by these approaches can be reliably used for detecting anomalies. \smallskip \noindent\textbf{AutoEncoder- or GAN-based methods:} A number of methods proposed recently use a different kind of metric for scoring anomalies. In DeepSVDD~\cite{ruff2018deep}, an Encoder-Decoder network is used to learn the latent representations of the data while minimizing the volume of a lower-dimensional hypersphere that encloses them. They hypothesize that anomalous data is likely to fall outside the sphere, while normal data is likely to fall inside it. This technique is inspired by traditional SVDD (Support Vector Data Description)~\cite{tax2004support}, where a hypersphere is used to separate normal samples from anomalies. Ano-GAN~\cite{schlegl2017unsupervised} is one of the first works that uses Generative Adversarial Nets (GANs)~\cite{goodfellow2014generative} for anomaly detection. In this work, a GAN is trained only on normal samples. Since a GAN model is not invertible, an additional optimization is performed to find the closest latent representation for a given test sample. The anomaly score is computed as a combination of reconstruction loss and discriminator loss. FGAN~\cite{ngo2019fence} trains a GAN on the normal samples and uses a combination of an adversarial loss and a dispersion loss (a distance-based loss in latent space) to discover anomalies. \cite{akcay2018ganomaly,akccay2019skip} use a series of Encoder, Decoder and Discriminator networks to optimize the reconstructions as well as the distance between the representations. ALAD~\cite{zenati2018adversarially} uses BiGAN~\cite{donahue2016adversarial} to improve the latent representations of the data. Each of these methods uses a discriminator-based score for detecting anomalies. A recent survey by \cite{chalapathy2019deep} provides a comprehensive study of anomaly detection approaches. \smallskip \noindent\textbf{Negative Selection Algorithms (NSA):} NSA is one of the early biologically inspired algorithms to solve the one-class classification problem, first proposed by \cite{forrest1994self} to detect data manipulation caused by computer viruses. The core idea is to generate synthetic negative samples which do not match normal samples using a search algorithm and use them to train a downstream, supervised anomaly classifier~\cite{dasgupta2002anomaly,coello2002approach,gonzalez2002combining}. Since the search space for negative samples in high-dimensional data grows exponentially, sampling synthetic negatives can be computationally very expensive~\cite{jinyin2011study}.
Recent work by \cite{sipple2020interpretable} proposes a simpler approach to negative selection by using uniform sampling and building a binary classifier with positives and \textit{synthetic negatives} to perform the anomaly detection task. \section{Adversarial Mirrored AutoEncoder (AMA)} \label{sec:Approach} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{imgs/wloss_vs_mwloss.pdf} % \caption{Better reconstructions with the Mirrored Wasserstein loss. (a) Ground truth (b) Reconstructions using AMA with the regular Wasserstein loss (c) Reconstructions using AMA with the Mirrored Wasserstein loss. The quantitative comparisons are shown in Table~\ref{table:ood_aucs}} \label{fig:semantic_recons} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{imgs/atyp_selection_illustr_resized.png} \caption{An illustration of negative sampling from the atypical set in the latent space. In each case, the typical set resides between the two blue $d$-dimensional spheres. Synthetic negative latents are drawn from the yellow region. (a) In \cite{sipple2020interpretable}, a cube centered at the origin is used as the negative sampling region. Instead, we propose to sample the synthetic negatives closer to the typical set, between the spheres of radii (b) $\sqrt{d}-\delta$ and $\sqrt{d}$ or (c) $\sqrt{d}$ and $\sqrt{d}+\delta$.} \label{fig:atypical_selection} \end{figure} As discussed earlier, AMA consists of 2 major improvements over conventional Auto-Encoder architectures: (i) the Mirrored Wasserstein loss, and (ii) latent space regularization. These improvements help us outperform several state-of-the-art likelihood-based as well as reconstruction-based anomaly detection methods. Fig.~\ref{fig:AMA_flow} shows an overview of our overall anomaly detection pipeline using AMA. In the following sub-sections, we discuss each of the components of our anomaly detection framework in detail. \subsection{Mirrored Wasserstein Loss} \label{sec:Mirrored loss} For training auto-encoders, an $\ell_{1}$ or $\ell_{2}$ reconstruction loss between the original image and its reconstruction, defined as $\|\mathbf{x} - \mathbf{x}_{rec}\|_{p}$, is typically used. Reconstruction losses based on $\ell_{p}$ distances result in blurred decodings, thus producing poor generative models. Also, the use of $\ell_p$ reconstruction losses as anomaly scores, which is the standard technique used in Auto-Encoder based anomaly detection, has several limitations: (1) $\ell_{p}$ distances do not measure the perceptual similarity between images, which makes it hard to detect outliers that are semantically different, (2) a large $\ell_p$ reconstruction loss between an input and its decoding can be an outcome of poor generative modeling and not of the image being an outlier. Motivated by the success of Generative Adversarial Networks (GANs) in obtaining improved generations, a number of approaches replace the $\ell_p$ reconstruction losses in Auto-Encoders with an adversarial loss that captures high-level details in the image. While this loss suffices to obtain good reconstructions on low-diversity datasets like MNIST and CelebA, it is not enough to reconstruct diverse datasets like CIFAR-10 or ImageNet~\cite{munjal2020implicit}. The regular Wasserstein loss only ensures that the input and its generated sample belong to the same distribution; it does not necessarily make the input and its reconstruction look alike.
To resolve this problem, for a given sample $\mathbf{x} \sim \mathbb{P}_{X}$ and its reconstruction $\hat{\mathbf{x}} \sim \mathbb{P}_{\hat{X}}$, we perform a Wasserstein minimization between the joint distributions $\mathbb{P}_{X, X}$ and $\mathbb{P}_{X, \hat{X}}$. The discriminator now takes in stacked pairs of input images $(\mathbf{x}, \mathbf{x})$ and $(\mathbf{x}, \hat{\mathbf{x}})$. This clearly avoids the problems discussed in the previous part, as the distribution $(\mathbf{x}, \mathbf{x})$ always consists of pairs of identical samples. If a car image is reconstructed as an airplane, the generated distribution will contain a (car, airplane) sample, which is never found in the input distribution $(\mathbf{x}, \mathbf{x})$. Hence, the model will aim to generate samples sharing the same semantics. Figure~\ref{fig:semantic_recons} shows the difference in image reconstructions using AMA with the regular Wasserstein loss \vs AMA with the Mirrored Wasserstein loss. While both models perform well in terms of image quality, we can see that for the first image, where the ground truth is the number 30, the regular Wasserstein model reconstructs a 9, which is quite unlike the ground truth though still from the same distribution, while AMA with the Mirrored Wasserstein loss stays faithful to the ground truth and reconstructs a very similar-looking 30. Formally speaking, our model formulates a distribution of a set of samples $\mathbf{x} \sim \mathbb{P}_{X}$, using the Mirrored Wasserstein loss, as follows: \begin{align}\label{eq:wasserstein_joint} &W(\mathbb{P}_{X,X},\mathbb{P}_{X,\hat{X}}) = \max_{\mathbf{D} \in Lip-1}~\mathop{\mathbb{E}}_{x\sim \mathbb{P}_X}\left[{\mathbf{D}(\mathbf{x},\mathbf{x}) - \mathbf{D}(\mathbf{x},\hat{\mathbf{x}})}\right] \end{align} where $\hat{\mathbf{x}}=\mathbf{G}(\mathbf{E}(\mathbf{x}))$ and \emph{Lip-1} denotes the 1-Lipschitz constraint. Note that Eq.~\eqref{eq:wasserstein_joint} is similar to the loss function of the Wasserstein GAN~\cite{martin2017wasserstein}, with the only difference that the discriminator $\mathbf{D}$ acts on the stacked images $(\mathbf{x}, \mathbf{x})$ and $(\mathbf{x}, \hat{\mathbf{x}})$. This is equivalent to minimizing the Wasserstein distance between conditional distributions $W(\mathbb{P}_{X|X}, \mathbb{P}_{\hat{X}|X})$. This model also shares similarities with discriminator architectures used in conditional image-to-image translation models such as Pix2Pix~\cite{isola2017image}. \begin{lemma} If E and G are optimal encoder and generator networks, i.e., $\mathbb{P}_{X,\mathbf{G}(\mathbf{E}(X))} = \mathbb{P}_{X,X}$, then $\mathbf{x}$ = $\mathbf{G}(\mathbf{E}(\mathbf{x}))$. \end{lemma}
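To make the construction concrete, a minimal PyTorch-style sketch of Eq.~\eqref{eq:wasserstein_joint} is given below. This is an illustration rather than a verbatim excerpt of the actual implementation; the Lipschitz constraint on $\mathbf{D}$ (e.g., spectral normalization) and the optimizers are omitted:
\begin{verbatim}
import torch

def mirrored_w_gap(D, x, x_rec):
    # E[D(x, x)] - E[D(x, x_hat)] of Eq. (1); the pair is stacked
    # along the channel axis, so D takes 2C input channels.
    # D is assumed (approximately) 1-Lipschitz.
    real_pair = torch.cat([x, x], dim=1)        # (B, 2C, H, W)
    fake_pair = torch.cat([x, x_rec], dim=1)
    return D(real_pair).mean() - D(fake_pair).mean()

# One adversarial step (optimizer calls omitted):
#   x_rec  = G(E(x))
#   loss_D = -mirrored_w_gap(D, x, x_rec.detach())  # critic maximizes gap
#   loss_G =  mirrored_w_gap(D, x, x_rec)           # E and G minimize it
\end{verbatim}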
\subsection{Latent Space Regularization} \label{sec:latent} Neural networks are universal approximators, and an autoencoder trained without any constraints on the latent space will tend to overfit the training dataset. While several regularization schemes have been proposed, in this section we develop our regularization framework adapted for the task of anomaly detection. \medskip \noindent\textbf{Simplex Interpolation in Latent space} \smallskip \noindent \cite{berthelot2018understanding} showed that by forcing linear combinations of the latent codes of a pair of data points to look realistic after decoding, the Encoder learns a better representation of the data. This is demonstrated by improved performance on downstream tasks such as supervised learning and clustering. However, \cite{sainburg2018generative} argues that the pairwise interpolation between samples of $\mathbf{x}$ proposed by \cite{berthelot2018understanding} does not reach all points within the latent distribution, and may not necessarily make the latent distribution compact. Hence, we propose to use simplex interpolation between $k$ randomly selected points to make the manifold smoother and more compact. Given $k$ normal samples $\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k$, we uniformly sample $k$ scalars $\alpha_i$ from $[0,0.5]$ and define an interpolated sample as: \begin{align*} \hat{\mathbf{x}}_{inter} &= \mathbf{G}\left(\frac{1}{\sum{\alpha_i}} (\alpha_1 \mathbf{e}_1 + \alpha_2 \mathbf{e}_2 + \dots \alpha_k \mathbf{e}_k ) \right) \\ \mathbf{e}_i &= \mathbf{E}(\mathbf{x}_i) ~~~\forall i \end{align*} Here, $\hat{\mathbf{x}}_{inter}$ denotes the decoding of the interpolated latent point. A discriminator is then trained to distinguish between the $(\mathbf{x}, \mathbf{x})$ pair and the $(\mathbf{x}, \hat{\mathbf{x}}_{inter})$ pair, while the generator learns by trying to fool the discriminator. That is, \begin{align*} \min_{\mathbf{G}} \max_{\mathbf{D} \in Lip-1}~\mathop{\mathbb{E}}_{x\sim \mathbb{P}_X}\left[{\mathbf{D}(\mathbf{x},\mathbf{x}) - \mathbf{D}(\mathbf{x},\hat{\mathbf{x}}_{inter})}\right] \end{align*} This ensures that the interpolated points follow the same distribution as the original data, thereby improving the smoothness of the latent space. We use $k=3$ in all our experiments. We empirically observe that larger values of $k$ give marginal improvements. \medskip \noindent\textbf{Negative Sampling by Atypical Selection} \noindent In our experiments, we observed that regularization on the convex combination of latent codes of training samples works better if we also provide some negative examples, \ie, examples which should not look realistic. Since we are working in an unsupervised setting, we propose to generate synthetic negative samples by sampling from the ``atypical set'' of the latent space distribution. A typical set of a probability distribution is the set whose elements have information content close to that of the expected information. It is essentially the volume that not only covers most of the mass of the distribution, but also reflects the properties of samples from the distribution. Due to the concentration of measure, a generative model will draw samples only from the typical set~\cite{cover1999elements}. Even though the typical set has the highest mass, it might not have the highest probability density. Recent works \cite{choi2018waic, nalisnick2019detecting} propose that normal samples reside in the typical set while anomalies reside outside of it, sometimes even in high probability density regions. Hence we propose to sample outside the typical set in the latent space to generate synthetic negatives. The Gaussian Annulus Theorem \cite{blum2016foundations, vershynin2018high} states that in a $d$-dimensional space, the typical set resides with high probability at a distance of $\sqrt{d}$ from the origin. In the absence of true negatives, we can obtain synthetic negatives by sampling latents just outside the typical set (close to the shell rather than to the origin) and then using the generator to decode them. Although our latent space is not inherently Gaussian, we observe that due to the $\ell_{2}$ regularization placed on the latent encodings, most of the training samples' encodings are close to $\sqrt{d}$ in magnitude. We sample atypical points uniformly between spheres with radii $\sqrt{d}$ and $\sqrt{d} \pm \delta$ as illustrated in Fig.~\ref{fig:atypical_selection}~(b)~(c). We call this procedure \textbf{Atypical Selection}.
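A minimal sketch of this sampling procedure follows. It encodes one simple reading of the construction, with directions uniform on the unit sphere and radii uniform over the annulus; a strictly volume-uniform sampler would instead weight radii by $r^{d-1}$:
\begin{verbatim}
import torch

def atypical_negatives(batch, d, delta, inward=True):
    # Directions uniform on the unit sphere; radii uniform in
    # [sqrt(d)-delta, sqrt(d)] (inward) or [sqrt(d), sqrt(d)+delta]
    # (outward), i.e. just off the typical shell of radius sqrt(d).
    z = torch.randn(batch, d)
    z = z / z.norm(dim=1, keepdim=True)
    rd = d ** 0.5
    lo, hi = (rd - delta, rd) if inward else (rd, rd + delta)
    r = lo + (hi - lo) * torch.rand(batch, 1)
    return z * r

# synthetic negatives for the L_neg term below:
#   z_neg = atypical_negatives(B, d, delta);  x_neg = G(z_neg)
\end{verbatim}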
The $\delta$ and the direction of the selection, \textit{inward} or \textit{outward}, are hyperparameters chosen based on the true anomaly samples available at validation time. \cite{sipple2020interpretable} proposed a similar technique where \textit{synthetic negatives} are sampled around the origin, as shown in Fig.~\ref{fig:atypical_selection}(a). We show in Table~\ref{table:atypicvssipple} that Atypical Selection outperforms this style of negative selection across multiple benchmarks. \subsection{Overall objective} Let $\mathbb{\widetilde{Q}}_{X}$ be the distribution of all atypical samples and let $\mathbb{P}_X$ be the distribution of normal samples. We consider two different scenarios: first, when we do not have access to any anomalies during training, and second, when we have access to a few anomalies. \noindent\textbf{Unsupervised case:} We train AMA using the following min-max objective: \begin{align} \min_{\mathbf{G}}~~\max_{\mathbf{D} \in Lip-1} & \mathcal{L}_{normal} - \lambda_{neg} \mathcal{L}_{neg} \end{align} The $\mathcal{L}_{normal}$ part of the loss improves the reconstructions of normal in-distribution samples. It consists of three terms: the first is the Mirrored Wasserstein loss, making sure that reconstructions look like their ground truths; the second ensures that the interpolated points look similar to normal points; and the third is a regularization term on the encodings. The $- \lambda_{neg} \mathcal{L}_{neg}$ part penalizes the anomalies: it ensures that anomalies are not reconstructed well. In this paper, since we assume that real anomalies are not available to us during training, we instead use generated \textit{synthetic anomalies} in this term. \begin{align}\label{eq:final_objective} \mathcal{L}_{normal} = &\mathbb{E}_{\mathbf{x} \sim \mathbb{P}_X} \Big[\mathbf{D}(\mathbf{x},\mathbf{x}) - \mathbf{D}(\mathbf{x},\hat{\mathbf{x}}) + \\ & \lambda_{inter} \left( \mathbf{D}(\mathbf{x},\mathbf{x}) - \mathbf{D}(\mathbf{x},\hat{\mathbf{x}}_{inter}) \right) + \nonumber \\ & \lambda_{reg} \|\mathbf{E}(\mathbf{x})\| \Big] \nonumber \\ \mathcal{L}_{neg} = &\mathbb{E}_{\mathbf{x} \sim \mathbb{\widetilde{Q}}_{X}} \left[{\mathbf{D}(\mathbf{x},\mathbf{x}) - \mathbf{D}(\mathbf{x},\hat{\mathbf{x}}_{neg})}\right] \end{align} Here $\hat{\mathbf{x}}_{neg} = G(\mathbf{z}_{neg})$, where $\mathbf{z}_{neg}$ is the latent sampled by Atypical Selection, $\lambda_{neg}$ is the Atypical Selection hyper-parameter, $\lambda_{inter}$ is the weight for the interpolation component, $\lambda_{reg}$ is the latent space regularization weight, and $\|E(x)\|$ acts as a regularizer for the latent representations. \smallskip \noindent\textbf{Semi-Supervised case:} If we have a few true anomalies available during training, we can use the same objective with real anomalies instead of synthetic negatives in the $\mathcal{L}_{neg}$ term. Please refer to the appendix for related experiments. \begin{figure}[!ht] \centering \includegraphics[width=0.95\columnwidth]{imgs/rscores_diff_datasets.pdf} % \caption{In this figure we show the density plots of R-scores predicted on various datasets by an AMA module trained on CIFAR-10 train data. We can note that the CIFAR-10 train and test data distributions highly overlap.
Depending on which dataset we consider as the OOD distribution, the R-scores of anomalous samples can trend lower or higher than those of the normal samples. If we use the R-score directly to tag anomalies, it will classify all the images with higher R-scores as anomalies, including many of the CIFAR-10 test samples. Meanwhile, all those to the left will be misidentified as normal samples. Instead of taking the R-score at face value, we propose the A-score, which weighs the R-score of a given sample against the training data R-scores. The R-score can be computed using Eq.~\ref{eq:R_x} and the A-score using Eq.~\ref{eq:A_x}} \label{fig:cifar_rx} \end{figure} \input{4.2_Anomalyscoring} \subsection{Anomaly score} \label{sec:anomaly_metric} Prior work in GAN-based anomaly detection used the discriminator output as the anomaly score~\cite{schlegl2017unsupervised,ngo2019fence}. \cite{zenati2018adversarially} proposed an improvement by computing the distance between a sample and its reconstruction in the feature space of the discriminator, the \textbf{R-score} (or R(x) score, used interchangeably), which can be written as: \begin{align} \label{eq:R_x} R(x) = \|f(x,x) - f(x,\mathbf{G}(\mathbf{E}(x)))\|_1 \end{align} where $f(\cdot,\cdot)$ is the penultimate layer of the discriminator. In \cite{zenati2018adversarially}, the authors claim that anomalous samples will have higher $R(x)$ values than normal samples. While this is true for the datasets considered in \cite{zenati2018adversarially}, we observed a counter-intuitive behaviour in some OOD detection scenarios. In the CIFAR-10 vs SVHN OOD detection experiment, our model and many other AE-based anomaly detectors~(including \cite{zenati2018adversarially}) assign lower R-scores to OOD samples, as shown in Fig.~\ref{fig:cifar_rx}. This behavior is similar to the observations in \cite{nalisnick2018deep,choi2018waic} where sample likelihoods are used as anomaly scores. Even though the R-score distribution of test CIFAR-10 samples overlaps with the training distribution quite well, using the R-scores to compute AUROC results in a very low AUC value (0.442 from Table~\ref{table:ood_aucs}), meaning most of the anomalies are classified as normal. This suggests that this reconstruction-based score is not a robust anomaly scoring function in all OOD detection scenarios. Hence we propose the following technique: (i) fit the R-scores of the training data to a Gaussian distribution, and (ii) compute the anomaly score for a given test sample $x_i$ as the likelihood of $R(x_i)$ under this Gaussian. The proposed anomaly metric, the \textbf{A-score} (or A(x) score, used interchangeably), can be written as: \begin{align} \label{eq:A_x} A(x_i) = \frac{1}{\sigma \sqrt{2\pi}}\, e^{-\left(R(x_i) - \mu\right)^2/\left(2\sigma^2\right)} \end{align} where $\mu$ is the mean and $\sigma^2$ is the variance of the distribution of R-scores over the training data. The A-score measures how similar the behaviour of a test-time sample is to that of the training data, while the R-score only compares samples relative to one another at test time.
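A minimal NumPy sketch of this two-step procedure (with illustrative helper names, not a verbatim excerpt of the implementation) is:
\begin{verbatim}
import numpy as np

def fit_r_scores(r_train):
    # Step (i): fit a Gaussian to the training R-scores
    return float(np.mean(r_train)), float(np.std(r_train))

def a_score(r, mu, sigma):
    # Step (ii), Eq. (A_x): Gaussian likelihood of a test R-score;
    # lower values indicate more anomalous samples
    return (np.exp(-(r - mu)**2 / (2.0 * sigma**2))
            / (sigma * np.sqrt(2.0 * np.pi)))
\end{verbatim}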
\section{Experiments and Results} \label{sec:experiments} \begin{table*} \centering \caption{In this table we present AUROC scores for OOD detection tasks across various datasets. All methods in the table have no access to OOD data during training, but use a small number of anomalies during validation to choose the best model. All the results are average AUROC values across the test dataset, with one sample evaluated at a time, except the result for the Typicality test~\cite{nalisnick2019detecting}, which corresponds to using a batch size of 2 of the same type. In the bottom half of the table we show the ablation results of our model AMA with one component missing at a time from our pipeline.} \resizebox{0.99\textwidth}{!}{ \begin{tabular}{lccccccccccc} \hline Trained on: & & \multicolumn{2}{c}{FashionMNIST} & & \multicolumn{3}{c}{CIFAR-10} & & \multicolumn{3}{c}{SVHN} \\ \cline{3-4}\cline{6-8}\cline{10-12} OOD data: & & MNIST & Omniglot & & SVHN & Imagenet & CIFAR-100 && CIFAR-10 & Imagenet & CIFAR-100 \\ \hline WAIC on WGAN ensemble \cite{choi2018waic} & & 0.871 & 0.832 & & 0.623 & 0.626 & - & & - & - & - \\ Likelihood-ratio on PixelCNN++ \cite{ren2019likelihood} & & \textbf{0.994} & - & & 0.931 & - & - & & - & - & - \\ Typicality test on Glow model \cite{nalisnick2019detecting} & & 0.140 & - & & 0.420 & 0.640 & - & & 0.980 & \textbf{1.000} & - \\ DeepSVDD \cite{ruff2018deep} & & 0.864 & 0.999 & & 0.533 & 0.387 & 0.478 & & 0.795 & 0.823 & 0.819 \\ $S$ using PixelCNN++ and FLIF \cite{serra2019input} & & 0.967 & \textbf{1.000} & & 0.929 & 0.589 & 0.535 & & - & - & - \\ \midrule AMA w/o Mirrored Wass. Loss (Ours) & & 0.653 & 0.899 & & 0.800 & 0.526 & 0.510 & & 0.503 & 0.693 & 0.592 \\ AMA w/o Simplex Interpolation (Ours) & & 0.960 & 0.998 & & 0.820 & 0.847 & 0.537 & & 0.991 & 0.993 & 0.987 \\ AMA w/o Atypical selection (Ours) & & 0.894 & 0.997 & & 0.861 & 0.812 & 0.535 & & 0.990 & 0.991 & 0.987 \\ AMA w/o new anomaly scoring (Ours) & & 0.991 & 0.997 & & 0.442 & 0.890 & 0.501 & & \textbf{0.993} & \textbf{1.000} & \textbf{0.988 } \\ AMA (Ours) & & 0.987 & 0.998 & & \textbf{0.958} & \textbf{0.911} & \textbf{0.551} & & \textbf{0.993 } & \textbf{1.000 } & \textbf{0.988} \\ \hline \end{tabular} } \label{table:ood_aucs} \end{table*} \subsection{Experimental Setting} \noindent\textbf{Datasets:} Following the setting in \cite{ren2019likelihood,choi2018waic,serra2019input,nalisnick2019detecting}, we take CIFAR-10, SVHN and FashionMNIST as the normal datasets. We evaluate the performance of the models when the anomalies come from each of the OOD datasets: ImageNet (resized), CIFAR-100, LSUN (resized), iSUN, CelebA, MNIST, Omniglot, TrafficSign, uniform random images, and Gaussian random images. We also consider the case when anomalies arise within the same data manifold (i.e., the same dataset). We evaluated this scenario on the CIFAR-10 and MNIST datasets. For these experiments, we consider one class as normal and the rest of the 9 classes as anomalies, following the setup from \cite{ruff2018deep, zenati2018adversarially}. \smallskip \noindent\textbf{Baselines:} We compare our model against various generative-model-based anomaly detection approaches. Ren et al.~\cite{ren2019likelihood} use a likelihood-based estimate from an autoregressive model to discover anomalies. WAIC~\cite{choi2018waic} proposes to use the WAIC criterion on top of likelihood estimation methods to find anomalies. Serra et al.~\cite{serra2019input} leverage a complexity estimate of images to detect OOD inputs. The typicality test~\cite{nalisnick2019detecting} calculates an empirical estimate of the entropy of a set of samples and uses it to recognize anomalies. DeepSVDD~\cite{ruff2018deep} optimizes the latent representations of images and uses the distance in latent space as the anomaly measure.
In addition to these, another set of methods \cite{akccay2019skip,zenati2018adversarially, schlegl2017unsupervised, ngo2019fence, ruff2018deep} addresses the scenario of anomalies from the same data manifold (i.e., the same dataset) in their respective papers. We have compared our model to these methods in this scenario as well, and we believe these methods are just as applicable to OOD samples coming from a different data manifold. For these experiments we follow the setup from \cite{zenati2018adversarially, ruff2018deep, schlegl2017unsupervised}, where one class is considered normal and the rest of the classes from the same dataset are considered anomalies. All the results shown in Table~\ref{table:indistribution_aucs} are for this setting. In DeepSVDD, Global Contrast Normalization is used on the data prior to training. We removed this additional normalization step to make the method comparable to the other baselines. Note that discriminative models such as \cite{hendrycks2016baseline,hendrycks2018deep,hsu2020generalized} achieve high performance in several OOD detection benchmarks, but assume access to the class labels during training. For brevity, we consider only unsupervised baselines in this work. \smallskip \noindent\textbf{BatchNorm Issue:} While we were working on the baselines, we noticed that one of the earlier works~\cite{akccay2019skip}\footnote{https://github.com/samet-akcay/skip-ganomaly} evaluated their model in training mode instead of setting it to evaluation mode. Due to this issue, the BatchNorm statistics are calculated from the test batch, rather than using the train-time statistics. Hence, while reporting results for~\cite{akccay2019skip}, we re-evaluate their models by freezing the BatchNorm statistics during test time. We follow the same protocol for the rest of the models as well. \smallskip \noindent\textbf{Network Architectures and Training:} The generator and the discriminator have residual architectures borrowed from the Spectral Normalization GAN~\cite{miyato2018spectral}. Our Encoder is a 4-layer convolutional network with BatchNorm and LeakyReLU nonlinearities. Refer to the appendix for the complete architecture details. Following the setting in \cite{zenati2018adversarially, ren2019likelihood}, we assume that we have access to a small number of anomalies at validation time ($\approx 50$ in number). To generate the test set, we randomly sample anomalies from an OOD dataset, 20\% the size of the normal samples, in contrast to the scenario of sampling equal numbers of normal samples and anomalies presented in \cite{ren2019likelihood,choi2018waic}. We believe our scenario is far more realistic and more stringent. We keep the test data and normalizations the same for our model as well as the baselines to make them comparable. The whole pipeline of our model, AMA, is trained end-to-end with the Adam optimizer with $\beta_1 = 0$ and $\beta_2=0.9$ for the Generator and Discriminator and $\beta_1 = 0.5$ and $\beta_2=0.9$ for the Encoder, with an initial learning rate of 3e-4 decayed by a factor of 0.1 at 30, 60 and 90 training epochs. We trained each model for 100 epochs with a batch size of 256 for all the datasets. If Atypical Selection is enabled, we train the model for the first 10 epochs only on normal samples, and from the $11^{th}$ epoch onwards we generate synthetic anomalies and use them along with normal samples in training. We use $\lambda_{inter} = 0.5$, $\lambda_{neg}=5$ and $\lambda_{reg}=1$ in all of our CIFAR-10 and SVHN experiments.
Refer to the appendix for the hyperparameter values of the MNIST experiments. In the OOD experiments, we sampled synthetic anomalies \textit{inward} for CIFAR-10 and \textit{outward} for SVHN and FashionMNIST. Experiments are performed using two NVIDIA GTX-2080TI GPUs. \subsection{Anomaly Detection performance} \label{sec:results} \begin{table*} \centering \caption{Performance on the anomaly detection task when anomalies come from an unseen class of the same dataset. Each column denotes the normal class; the remaining nine classes from that dataset are considered anomalies. Performance is measured using AUROC scores (higher is better).} \resizebox{0.85\textwidth}{!}{ \begin{tabular}{l | c c c c c c c c c c | c } \toprule MNIST & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & Average \\ \midrule FGAN\cite{ngo2019fence} & 0.754 & 0.307 & 0.628 & 0.566 & 0.390 & 0.490 & 0.538 & 0.313 & 0.645 & 0.408 & 0.504 \\ ALAD\cite{zenati2018adversarially} & 0.962 & 0.915 & 0.794 & 0.821 & 0.702 & 0.790 & 0.843 & 0.865 & 0.771 & 0.821 & 0.828 \\ Ano-GAN\cite{schlegl2017unsupervised} & 0.902 & 0.869 & 0.623 & 0.785 & 0.827 & 0.362 & 0.758 & 0.789 & 0.672 & 0.720 & 0.731 \\ Skip-Ganomaly\cite{akcay2018ganomaly} & 0.297 & 0.877 & 0.393 & 0.486 & 0.618 & 0.540 & 0.455 & 0.633 & 0.426 & 0.584 & 0.531 \\ DeepSVDD\cite{ruff2018deep} & 0.971 & 0.995 & 0.809 & 0.884 & \textbf{0.920} & 0.869 & 0.978 & 0.940 & \textbf{0.900} & 0.946 & 0.921 \\ AMA (Ours) & \textbf{0.986} & \textbf{0.998} & \textbf{0.882} & \textbf{0.891} & 0.894 & \textbf{0.938} & \textbf{0.981} & \textbf{0.983} & 0.876 & \textbf{0.948} & \textbf{0.938} \\ [1ex] \midrule CIFAR-10 & airplane & automobile & bird & cat & deer & dog & frog & horse & ship & truck & Average \\ [0.5ex] \midrule FGAN\cite{ngo2019fence} & 0.572 & 0.582 & 0.505 & 0.544 & 0.534 & 0.535 & 0.528 & 0.537 & 0.664 & 0.338 & 0.567 \\ ALAD\cite{zenati2018adversarially} & 0.679 & 0.397 & 0.685 & \textbf{0.652} & 0.696 & 0.550 & 0.704 & 0.463 & \textbf{0.787} & 0.391 & 0.601 \\ Ano-GAN\cite{schlegl2017unsupervised} & 0.602 & 0.439 & 0.637 & 0.594 & \textbf{0.755} & 0.604 & \textbf{0.730} & 0.498 & 0.675 & 0.445 & 0.598 \\ Skip-Ganomaly\cite{akccay2019skip} & 0.655 & 0.406 & 0.663 & 0.598 & 0.739 & 0.617 & 0.638 & 0.519 & 0.746 & 0.387 & 0.597 \\ DeepSVDD\cite{ruff2018deep} & 0.682 & 0.477 & 0.679 & 0.573 & 0.752 & 0.628 & 0.710 & 0.511 & 0.733 & 0.567 & 0.631 \\ AMA (Ours) & \textbf{0.752} & \textbf{0.634} & \textbf{0.696} & 0.603 & 0.733 & \textbf{0.650} & 0.658 & \textbf{0.582} & 0.754 & \textbf{0.632} & \textbf{0.669} \\ \bottomrule \end{tabular} } \label{table:indistribution_aucs} \end{table*} We consider two common scenarios used in the literature to benchmark the performance of anomaly detection techniques. In the first scenario, we consider images from a given dataset as the normal samples and images from a different dataset (typically with a different underlying distribution) as anomalies. In the second scenario, we consider images from one of the categories in the dataset as normal and images from all other categories as anomalies. Note that, in some papers, these two scenarios are referred to as out-of-distribution (OOD) and in-distribution anomalies. We do not make this distinction and use the term ``anomalies'' to refer to either scenario. \medskip \noindent\textbf{Images from a different dataset as anomalies} In Table~\ref{table:ood_aucs}, we show the performance of our model and the baselines in three different cases. 
Our first set of experiments uses grayscale images from FashionMNIST as normal images and images from MNIST and Omniglot as OOD images. This is a relatively simple scenario, and nearly all the baselines, as well as our model, achieve almost perfect AUROC. Even though our model does not have the best AUROC, it is well within the margin of error of the best-performing model. The next two cases are more challenging, as the images are colored and more diverse. In the first case, we use normal samples from CIFAR-10 and anomalies from SVHN, Imagenet, and CIFAR-100. In the second case, we use normal samples from SVHN, with the anomalies coming from CIFAR-10, Imagenet, and CIFAR-100. Our model outperforms all the baselines in both of these experiments. This shows that our model, AMA, optimizes the latent space of normal samples well, which leads to impressive generalization behavior. Even though the AUROC scores are greater than 0.9 in most of the cases, our model falls short in the case of CIFAR-10 vs CIFAR-100 (similar behavior is observed for the other baselines as well). This is a very hard scenario: even humans would have a tough time deciding whether a given image is from CIFAR-10 or CIFAR-100. \medskip \noindent\textbf{Images from different categories as anomalies} In Table~\ref{table:indistribution_aucs}, we show anomaly detection experiments where anomalies arise from the same data manifold (i.e., the same dataset). Each column shows the results for one normal class, with the remaining nine classes as anomalies. Our method~(AMA) outperforms the other methods in terms of average scores, with a 1.7\% AUROC gain over the next best method on MNIST and a 3.7\% gain on the CIFAR-10 dataset. In terms of individual cases, we achieve the best score in 8 out of 10 cases on MNIST and in 6 out of 10 cases on CIFAR-10. \subsection{Ablation studies} We have introduced three main ideas in this paper: the Mirrored Wasserstein loss, latent space regularization using Simplex Interpolation and Atypical Selection, and an alternative anomaly scoring technique. In the second half of Table~\ref{table:ood_aucs}, we show the ablation results, removing one component at a time. As expected, removing the Mirrored Wasserstein loss reduces the AUROC scores the most. AUROC scores are reduced by roughly $0.1$ points whenever a part of the latent space regularization is removed. We see that, in most of the cases, removing Atypical Selection reduces the scores somewhat more than removing Simplex Interpolation. The new anomaly scoring metric contributes the most when the normal sample distribution is more diverse than the OOD distribution, e.g., the case of CIFAR-10 as normal and SVHN as OOD. When we used the R-score to identify anomalies in this scenario, most of the SVHN samples were tagged as normal while most of the CIFAR-10 images were tagged as anomalies, resulting in a lower AUROC. \smallskip \noindent \textbf{Atypical selection vs Sipple 2020:} A performance comparison of Atypical Selection against the Negative Sampling proposed in \cite{sipple2020interpretable} is presented in Table~\ref{table:atypicvssipple}. Atypical Selection outperforms the technique of \cite{sipple2020interpretable} in all studied cases. We hypothesize that, since Atypical Selection samples near the boundary of the latent space, it forces the encoder to create a more compact latent space for normal samples. \smallskip \begin{table}[h!] 
\centering \caption{Performance of models trained using the Negative Sampling proposed by Sipple~\cite{sipple2020interpretable} \textit{vs} the Atypical Selection proposed by us, in the presence and absence of Simplex Interpolation. Reported values are AUROC scores in the format Sipple / Atypical Selection.} \resizebox{1\columnwidth}{!}{ \begin{tabular}{c c c} \toprule Experiment & No interpolation & With interpolation \\ \midrule FashionMNIST \textit{vs} MNIST & 0.778 / \textbf{0.960} & 0.824 / \textbf{0.987} \\ CIFAR-10 \textit{vs} SVHN & 0.752 / \textbf{0.820} & 0.819 / \textbf{0.958} \\ SVHN \textit{vs} CIFAR-10 & 0.723 / \textbf{0.991} & 0.896 / \textbf{0.993} \\ \bottomrule \end{tabular} } \label{table:atypicvssipple} \end{table} \section{Conclusion} \label{sec:conclusion} In this paper, we have introduced a new method for the unsupervised anomaly detection problem, the Adversarial Mirrored Autoencoder~(AMA), equipped with a Mirrored Wasserstein loss and a latent space regularizer. Our method outperforms existing generative-model-based anomaly detectors on several benchmark tasks. We also show how each of the components contributes to the model's performance in diverse data settings. While our model is quite powerful in OOD detection, it still underperforms in some data settings, such as CIFAR-10 vs CIFAR-100, which is rather similar to the setting of anomalies arising from the same data manifold. While we showed some early results in Table~\ref{table:indistribution_aucs}, this work can be further extended to improve performance in such scenarios.
1,941,325,220,593
arxiv
\section{Introduction} \label{s:intro} \subsection{Overview} We continue the investigation of a model for soap films based on capillarity theory which was recently introduced by A. Scardicchio and the authors in \cite{maggiscardicchiostuvard,kms}. Soap films are usually modeled as minimal surfaces with a prescribed boundary: this idealization of soap films gives a model without length scales, which cannot capture those behaviors of soap films determined by their three-dimensional features, e.g. by their thickness. Regarding enclosed volume, rather than thickness, as a more basic geometric property of soap films, in \cite{maggiscardicchiostuvard,kms} we have started the study of soap films through capillarity theory, by proposing a {\bf soap film capillarity model} (see \eqref{def:SFCP} below). In this model, one looks for surface tension energy minimizers enclosing a fixed small volume, and satisfying a spanning condition with respect to a given wire frame. In \cite{kms} we have proved the existence of minimizers, and have shown their convergence to minimal surfaces satisfying the same spanning condition as the volume constraint converges to zero ({\bf minimal surfaces limit}). Although minimizers in the soap film capillarity model are described by regions of positive volume, these regions may fail to have uniformly positive thickness: indeed, in order to satisfy the spanning condition, minimizers may locally collapse onto surfaces. Understanding these collapsed surfaces, as well as their behavior in the minimal surfaces limit, is an important step in the study of the soap film capillarity model. In this paper we obtain decisive progress on this problem, by showing the smoothness of collapsed surfaces, up to possible singular sets of codimension at least $7$. In particular, we show that, in physical dimensions, collapsed regions are smooth, thus providing strong evidence that, in the minimal surfaces limit, any singularities of solutions of Plateau's problem should be ``wetted'' by the bulky parts of capillarity minimizers. \subsection{The soap film capillarity model}\label{section recap} We start by recalling the formulation of our model for soap films hanging from a wire frame, together with the main results obtained in \cite{kms,kms2}. The wire frame is a compact set $W\subset\mathbb{R}^{n+1}$, $n\ge1$, and the region of space accessible to soap films is the open set \[ \Omega=\mathbb{R}^{n+1}\setminus W\,. \] A {\bf spanning class} in $\Omega$ is a non-empty family $\mathcal{C}$ of smooth embeddings $\gamma \colon \SS^1 \to \Omega$ which is homotopically closed \footnote{If $\gamma_0,\gamma_1$ are smooth embeddings $\SS^1\to \Omega$ with $\gamma_0 \in \mathcal{C}$ and $f \colon \left[0,1\right] \times \SS^1 \to \Omega$ is a continuous mapping such that $f(0,\cdot)=\gamma_0$ and $f(1,\cdot)=\gamma_1$, then also $\gamma_1 \in \mathcal{C}$.} in $\Omega$; correspondingly, a relatively closed subset $S$ of $\Omega$ is {\bf $\mathcal{C}$-spanning $W$} if $S\cap\gamma\ne\emptyset$ for every $\gamma\in\mathcal{C}$. 
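To fix ideas with the standard example (recorded here only as an illustration, and not needed in the sequel): when $n=2$ and $W$ is a smooth closed curve in $\mathbb{R}^3$, one may take
\[
\mathcal{C}=\Big\{\gamma\colon\SS^1\to\Omega\,\mbox{ smooth embedding}\,:\,\mbox{$\gamma$ has linking number $1$ with $W$}\Big\}\,,
\]
which is homotopically closed because the linking number is invariant under homotopies in $\Omega$; since every smooth surface with boundary $W$ must intersect each such loop $\gamma$, such surfaces are $\mathcal{C}$-spanning $W$.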
Given choices of $W$ and $\mathcal{C}$, we obtain a formulation of {\bf Plateau's problem} (area minimization with a spanning condition) following Harrison and Pugh \cite{harrisonpughACV,harrisonpughGENMETH} (see also \cite{DLGM}), by setting \begin{equation} \label{def:Plateau} \ell = \ell(W,\mathcal{C})= \inf\left\lbrace \H^n (S) \, \colon \, S \in \mathcal{S} \right\rbrace\,, \end{equation} where $\H^n$ denotes the $n$-dimensional Hausdorff measure on $\mathbb{R}^{n+1}$, and where \begin{equation} \label{def:Plateau competitors} \mathcal{S} = \left\lbrace S \subset \Omega \, \colon \, \mbox{$S$ is relatively closed and $\mathcal{C}$-spanning $W$} \right\rbrace\,. \end{equation} Minimizers $S$ of $\ell$ exist as soon as $\ell<\infty$. They are, in the jargon of Geometric Measure Theory, {\bf Almgren minimal sets in $\Omega$}, in the sense that they minimize area with respect to local Lipschitz deformations \begin{equation} \label{almgren minimizer} \H^n(S\cap B_r(x))\le\H^n(f(S)\cap B_r(x))\,, \end{equation} whenever $f$ is a Lipschitz map with $\{f\ne{\rm id}\,\}\subset\joinrel\subset B_r(x)\subset\joinrel\subset \Omega$ and $f(B_r(x))\subset B_r(x)$ (here $B_r(x)$ is the open ball of center $x$ and radius $r$ in $\mathbb{R}^{n+1}$). This minimality property is crucial in establishing that, in the physical dimensions $n=1,2$, minimizers of $\ell$ satisfy the celebrated {\bf Plateau's laws}, and are thus realistic models for actual soap films; see \cite{Almgren76,taylor76}, section \ref{section regularity of Almgren min}, and \begin{figure} \input{yt.pstex_t} \caption{\small{When $n=1$, Almgren minimal sets are locally {\it isometric} either to lines or to $\mathbf{Y}^1\subset\mathbb{R}^2$, the cone with vertex at the origin spanned by $(1,0)$, $e^{i\,2\pi/3}$ and $e^{i\,4\pi/3}$. When $n=2$, Almgren minimal sets are locally diffeomorphic either to planes (and locally at these points they are smooth minimal surfaces), or to $\mathbf{Y}^1\times\mathbb{R}$, or to $\mathbf{T}^2$, the cone with vertex at the origin spanned by the edges of a reference regular tetrahedron (for the purposes of this paper, there is no need to specify this reference choice).}} \label{fig yt} \end{figure} Figure \ref{fig yt}. \medskip In capillarity theory (neglecting gravity and working for simplicity with a null adhesion coefficient) regions $E$ occupied by a liquid at equilibrium inside a container $\Omega$ can be described by minimizing the area $\H^n(\Omega\cap\partial E)$ of the boundary of $E$ lying inside the container while keeping the volume $|E|$ of the region fixed. When the fixed amount of volume $\varepsilon=|E|$ is small, minimizers in the capillarity problem take the form of small almost-spherical droplets sitting near the points of highest mean curvature of $\partial\Omega$, see \cite{baylerosales,fall,maggimihaila}. To observe minimizers with a ``soap film geometry'', we impose the $\mathcal{C}$-spanning condition on $\Omega\cap\partial E$, and come to formulate the {\bf soap film capillarity problem} $\psi(\varepsilon) = \psi(\varepsilon,W,\mathcal{C})$, by setting \begin{equation} \label{def:SFCP} \psi(\varepsilon)= \inf\left\lbrace \H^n(\Omega \cap \partial E)\, \colon \, \mbox{$E \in \mathcal{E}$, $|E| = \varepsilon$, and $\Omega \cap \partial E$ is $\mathcal{C}$-spanning $W$}\right\rbrace\,, \end{equation} where \begin{equation} \label{def:SFCP competitors} \mathcal{E} = \left\lbrace E \subset \Omega \,\colon\, \mbox{$E$ is an open set and $\partial E$ is $\H^n$-rectifiable} \right\rbrace\,. 
\end{equation} Of course, a minimizing sequence $\{E_j\}_j$ for $\psi(\varepsilon)$ may find it energetically convenient to locally ``collapse'' onto lower dimensional regions, see \begin{figure} \input{collapsing.pstex_t} \caption{{\small The soap film capillarity problem in the case when $W$ consists of three small disks centered at the vertices of an equilateral triangle, and $\mathcal{C}$ is generated by three loops, one around each disk in $W$: (a) the unique minimizer $S$ of $\ell$ consists of three segments meeting at 120 degrees at a $Y$-point; (b) a minimizing sequence $\{E_j\}_j$ for $\psi(\varepsilon)$ will partly collapse along the segments forming $S$; (c) the resulting generalized minimizer $(K,E)$, where $K\setminus\partial E$ consists of three segments (whose area is weighted by $\mathcal F$ with multiplicity $2$, and which are depicted by bold lines), and where $E$ is a negatively curved curvilinear triangle enclosing a volume $\varepsilon$, and ``wetting'' the $Y$-point of $S$.}} \label{fig collapsing} \end{figure} Figure \ref{fig collapsing}. Hence, we do not expect to find minimizers of $\psi(\varepsilon)$ in $\mathcal{E}$, but rather to describe limits of minimizing sequences in the class \begin{equation} \label{def:gen min class} \begin{split} \mathcal{K} = \Big\{ (K,E) \, \colon \, & \mbox{$E \subset \Omega$ is open with $\Omega \cap \mathrm{cl}\,(\partial^*E) = \Omega \cap \partial E \subset K$,} \\ & \mbox{$K \in \mathcal{S}$ and $K$ is $\H^n$-rectifiable} \Big\}\,, \end{split} \end{equation} (where $\partial^*E$ is the reduced boundary of $E$, and $\mathrm{cl}\,$ stands for topological closure in $\mathbb{R}^{n+1}$), and to compute the limit of their energies with the relaxed energy functional $\mathcal F$ defined on $\mathcal{K}$ as \begin{equation} \label{def:relaxed energy} \mathcal F(K,E) = \H^n(\Omega \cap \partial^*E) + 2\,\H^n(K \setminus \partial^*E) \qquad \mbox{for $(K,E) \in \mathcal{K}$}\,. \end{equation} Notice the factor $2$ appearing as a weight for the area of $K\setminus\partial^*E$, due to the fact that $K\setminus\partial^*E$ originates as the limit of collapsing boundaries of $\Omega\cap\partial E_j$. We can now recall the two main results proved in \cite{kms,kms2}, which state the existence of (generalized) minimizers of $\psi(\varepsilon)$ and prove the convergence of $\psi(\varepsilon)$ to Plateau's problem $\ell$ as $\varepsilon\to 0^+$. \begin{theorem}[Existence of generalized minimizers {\cite[Theorem 1.4]{kms}} and {\cite[Theorem 5.1]{kms2}}]\label{kms:existence} Assume that $\ell=\ell(W,\mathcal{C})<\infty$, $\Omega$ has smooth boundary, and that \begin{equation} \label{unica hp} \mbox{$\exists\,\tau_0>0$ such that $\mathbb{R}^{n+1} \setminus I_\tau(W)$ is connected for all $\tau < \tau_0$,} \end{equation} where $I_\tau(W)$ is the closed $\tau$-neighborhood of $W$. 
\medskip If $\varepsilon>0$ and $\{E_j\}_j$ is a minimizing sequence for $\psi(\varepsilon)$, then there exists $(K,E)\in\mathcal{K}$ with $|E|=\varepsilon$ such that, up to possibly extracting subsequences, and up to possibly modifying each $E_j$ outside a large ball containing $W$ (with both operations resulting in defining a new minimizing sequence for $\psi(\varepsilon)$, still denoted by $\{E_j\}_j$), we have that, \begin{equation}\label{minimizing seq conv to gen minimiz} \begin{split} &\mbox{$E_j\to E$ in $L^1(\Omega)$}\,, \\ &\H^n\llcorner(\Omega\cap\partial E_j)\stackrel{*}{\rightharpoonup} \theta\,\H^n\llcorner K\qquad\mbox{as Radon measures in $\Omega$} \end{split} \end{equation} as $j\to\infty$, where $\theta:K\to\mathbb{R}$ is upper semicontinuous and satisfies \begin{equation} \label{theta density} \mbox{$\theta= 2$ $\H^n$-a.e. on $K\setminus\partial^*E$},\qquad\mbox{$\theta=1$ on $\Omega\cap\partial^*E$}\,. \end{equation} Moreover, $\psi(\varepsilon)=\mathcal F(K,E)$ and, for a suitable constant $C$, $\psi(\varepsilon)\le 2\,\ell+C\,\varepsilon^{n/(n+1)}$. \end{theorem} \begin{remark} {\rm Based on Theorem \ref{kms:existence}, we say that $(K,E)\in\mathcal{K}$ is a {\bf generalized minimizer of $\psi(\varepsilon)$} if $|E|=\varepsilon$, $\mathcal F(K,E)=\psi(\varepsilon)$ and there exists a minimizing sequence $\{E_j\}_j$ of $\psi(\varepsilon)$ such that \eqref{minimizing seq conv to gen minimiz} and \eqref{theta density} hold. } \end{remark} \begin{theorem}[Minimal surfaces limit, {\cite[Theorem 1.9]{kms}} and {\cite[Theorem 5.1]{kms2}}]\label{thm msl} Assume that $\ell=\ell(W,\mathcal{C})<\infty$, $\Omega$ has smooth boundary, and that \eqref{unica hp} holds. Then $\psi$ is lower semicontinuous on $(0,\infty)$ and $\psi(\varepsilon)\to 2\,\ell$ as $\varepsilon\to 0^+$. Moreover, if $\{(K_h,E_h)\}_h$ are generalized minimizers of $\psi(\varepsilon_h)$ corresponding to $\varepsilon_h\to 0^+$ as $h\to\infty$, then there exists a minimizer $S$ of $\ell$ such that, up to extracting a subsequence in $h$, and as $h\to\infty$, \[ 2\,\H^n\llcorner(K_h\setminus \partial^*E_h)+\H^n\llcorner(\Omega\cap\partial^*E_h)\stackrel{*}{\rightharpoonup} 2\,\H^n\llcorner S\,,\qquad\mbox{as Radon measures in $\Omega$}\,. \] \end{theorem} Theorem \ref{kms:existence} and Theorem \ref{thm msl} open of course several questions on the properties of generalized minimizers at fixed $\varepsilon$, and on their behavior in the minimal surfaces limit $\varepsilon\to 0^+$. The two themes are very much intertwined, and in this paper we focus on the former, having in mind future developments on the latter. Before presenting our new results, we recall from \cite{kms} one of the most basic properties of generalized minimizers of $\psi(\varepsilon)$, namely, that they actually minimize the relaxed energy $\mathcal F$ among their (volume-preserving) diffeomorphic deformations. In particular, they satisfy a certain Euler-Lagrange equation which, by Allard's regularity theorem \cite{Allard}, implies a basic degree of regularity of $K$. \begin{theorem}[{\cite[Theorem 1.6]{kms}} and {\cite[Theorem 5.1]{kms2}}]\label{thm basic regularity} Assume that $\ell=\ell(W,\mathcal{C})<\infty$, $\Omega$ has smooth boundary, and that \eqref{unica hp} holds. If $(K,E)$ is a generalized minimizer of $\psi(\varepsilon)$ and $f:\Omega\to\Omega$ is a diffeomorphism with $|f(E)|=|E|$, then \begin{equation} \label{minimality KE against diffeos} \mathcal F(K,E)\le\mathcal F(f(K),f(E))\,. 
\end{equation} In particular: \medskip \noindent (i) there exists $\l\in\mathbb{R}$ such that, for every $X\in C^1_c(\mathbb{R}^{n+1};\mathbb{R}^{n+1})$ with $X\cdot\nu_\Omega=0$ on $\partial\Omega$, \begin{equation} \label{stationary main} \l\,\int_{\partial^*E}X\cdot\nu_E\,d\H^n=\int_{\partial^*E}{\rm div}\,^K\,X\,d\H^n+2\,\int_{K\setminus\partial^*E}{\rm div}\,^K\,X\,d\H^n \end{equation} where ${\rm div}\,^K$ denotes the tangential divergence operator along $K$; \medskip \noindent (ii) there exists $\Sigma\subset K$, closed and with empty interior in $K$, such that $K\setminus\Sigma$ is a smooth hypersurface, $K\setminus(\Sigma\cup\partial E)$ is a smooth embedded minimal hypersurface, $\H^n(\Sigma\setminus\partial E)=0$, $\Omega\cap(\partial E\setminus\partial^*E)\subset \Sigma$ has empty interior in $K$, and $\Omega\cap\partial^*E$ is a smooth embedded hypersurface with constant scalar mean curvature $\l$ (defined with respect to the outer unit normal $\nu_E$ of $E$). \end{theorem} \subsection{The exterior collapsed region of a generalized minimizer}\label{section collapsed} In \cite{kms2} we have started the study of the {\bf exterior collapsed region} \[ K\setminus\mathrm{cl}\,(E) \] of a generalized minimizer $(K,E)$ of $\psi(\varepsilon)$. Indeed, the main result of \cite{kms2} is that if $K\setminus\mathrm{cl}\,(E)\ne\emptyset$, then the Lagrange multiplier $\l$ appearing in \eqref{stationary main} is non-positive, a fact that, in turn, implies the validity of the convex hull inclusion $K\subset{\rm conv}(W)$; see \cite[Theorem 2.8, Theorem 2.9]{kms2}. In this paper, we continue the study of $K\setminus\mathrm{cl}\,(E)$ by looking at its regularity. The basic fact that the multiplicity-one $n$-varifold defined by $K$ is stationary in $\Omega\setminus\mathrm{cl}\,(E)$, see \eqref{stationary main}, implies the existence of a relatively closed subset $\Sigma$ of $K\setminus\mathrm{cl}\,(E)$ such that \begin{equation} \label{sing and reg} \mbox{$K\setminus(\mathrm{cl}\,(E)\cup\Sigma)$ is a smooth minimal hypersurface} \end{equation} and $\H^n(\Sigma)=0$. The main result of this paper greatly improves this picture, by showing that $\Sigma$ is much smaller than $\H^n$-negligible. \begin{theorem}[Sharp regularity for the exterior collapsed region] \label{t:main} Assume that $\ell=\ell(W,\mathcal{C})<\infty$, $\Omega$ has smooth boundary, and that \eqref{unica hp} holds. If $(K,E)$ is a generalized minimizer of $\psi(\varepsilon)$, then there exists a closed subset $\Sigma$ of $K\setminus\mathrm{cl}\,(E)$ such that $K\setminus(\Sigma\cup\mathrm{cl}\,(E))$ is a smooth minimal hypersurface, \[ \Sigma=\emptyset\qquad\mbox{if $1\le n\le 6$}\,, \] $\Sigma$ is locally finite in $\Omega\setminus\mathrm{cl}\,(E)$ if $n=7$, and $\Sigma$ is countably $(n-7)$-rectifiable (and thus has Hausdorff dimension $\le n-7$) if $n\ge 8$. In particular, in the physically relevant cases $n=1$ and $n=2$, the exterior collapsed region $K\setminus\mathrm{cl}\,(E)$ is a smooth stable minimal hypersurface in $\Omega\setminus\mathrm{cl}\,(E)$. 
\end{theorem} \begin{remark}[Uniform local finiteness of the singular set]\label{remark locally finite} {\rm In fact, when $n\ge 7$ and $\Sigma$ is possibly non-empty, we will show that $\Sigma$ has locally finite $(n-7)$-dimensional Minkowski content (and thus locally finite $\H^{n-7}$-measure) in $\Omega\setminus\mathrm{cl}\,(E)$; see section \ref{appendix NV local}.} \end{remark} \begin{remark}[Consequences for the minimal surfaces limit] \label{remark importante} {\rm A striking consequence of Theorem \ref{t:main} is that the exterior collapsed region $K\setminus\mathrm{cl}\,(E)$ is dramatically {\it more regular} than the generic minimizer of Plateau's problem $\ell$. For instance, in the physical dimension $n=2$, one can apply the work of Taylor \cite{taylor76}, as detailed for example in section \ref{section regularity of Almgren min}, to conclude that a minimizer $S$ of $\ell$ (which is known to be an Almgren minimal set in $\Omega$ as defined in \eqref{almgren minimizer}) is locally diffeomorphic either to a plane (in which case, $S$ is locally a smooth minimal surface), or to the cone $\mathbf{Y}^1\times\mathbb{R}$ ($Y$-points), or to the cone $\mathbf{T}^2$ ($T$-points); and, indeed, these singularities are easily observable in soap films. At the same time, by Theorem \ref{t:main}, when $n=2$ the singular set of the exterior collapsed region is empty. Similarly, in arbitrary dimensions, the singular set of an $n$-dimensional minimizer $S$ of $\ell$ could have codimension one in $S$, while, by Theorem \ref{t:main}, the singular set of $K\setminus\mathrm{cl}\,(E)$ has {\it at least} codimension {\it seven} in $K\setminus\mathrm{cl}\,(E)$. This huge regularity mismatch between the exterior collapsed region and the typical minimizer in Plateau's problem has a second point of interest, as it provides strong evidence towards the conjecture that, in the minimal surfaces limit ``$(K_h,E_h)\to S$'' described in Theorem \ref{thm msl}, low codimension singularities of minimizers $S$ of $\ell$ may be contained in (or even coincide with, as seems to be the case when $n=1$) the set of accumulation points of the bulky regions $E_h$. This implication is of course not immediate, and will require further investigation.} \end{remark} \subsection{Outline of the proof of Theorem \ref{t:main}} The proof of Theorem \ref{t:main} is based on a mix of regularity theorems from Geometric Measure Theory, combined with two steps which critically hinge upon the specific structure of the variational problem $\psi(\varepsilon)$. A breakdown of the argument is as follows: \medskip \noindent {\bf Step one ($K\setminus\mathrm{cl}\,(E)$ is Almgren minimal in $\Omega\setminus\mathrm{cl}\,(E)$):} In \eqref{minimality KE against diffeos} we have proved that $(K,E)$ minimizes $\mathcal F$ against diffeomorphic images which preserve the volume of $E$, information which implies $\H^n(\Sigma)=0$ for the set $\Sigma$ in \eqref{sing and reg}. In this first step we greatly improve this information in the region away from $E$, by allowing for arbitrary Lipschitz deformations. Precisely, we show that $K\setminus\mathrm{cl}\,(E)$ is an Almgren minimal set in $\Omega\setminus\mathrm{cl}\,(E)$, i.e. \begin{equation} \label{e:almgren} \H^n(K\cap B_r(x)) \leq \H^n(f(K)\cap B_r(x)) \end{equation} for every Lipschitz map $f \colon \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ such that $\{f \ne {\rm id}\} \subset B_r(x)$ and $f(B_r(x)) \subset B_r(x)$, with $B_r(x) \subset\joinrel\subset \Omega \setminus \mathrm{cl}\,(E)$. 
Proving \eqref{e:almgren} is delicate, as discussed below. \medskip \noindent {\bf Step two ($K\setminus\mathrm{cl}\,(E)$ has no $Y$-points in $\Omega\setminus\mathrm{cl}\,(E)$):} We construct ``wetting'' competitors, which cannot be realized as Lipschitz images of $K$, to rule out the existence of $Y$-points in $\Sigma$, that is, points where $K$ is locally diffeomorphic to the cone $\mathbf{Y}^1\times\mathbb{R}^{n-1}$; see \begin{figure} \input{wet.pstex_t} \caption{\small{Wetting competitors: (a) a local picture of a generalized minimizer $(K,E)$ when $n=1$, with a point $p$ of type $Y$; (b) the wetting competitor is obtained by first modifying $K$ at a scale $\delta$ near $p$, so as to save ${\rm O}(\delta)$ in length at the expense of an increase of ${\rm O}(\delta^2)$ in area; the added area can be restored by pushing inwards $E$ at some point in $\partial^*E$, with a linear tradeoff between subtracted area and added length: in other words, to subtract an area of ${\rm O}(\delta^2)$, we are increasing length by an ${\rm O}(\delta^2)$ (whose size is proportional to the absolute value of the Lagrange multiplier $\l$ of $(K,E)$). If $\delta$ is small enough in terms of $\lambda$, the ${\rm O}(\delta)$ savings in length will eventually beat the ${\rm O}(\delta^2)$ length increase used to restore the total area. In higher dimensions (where length and area become $\H^n$-measure and volume/Lebesgue measure respectively), wetting competitors are obtained by repeating this construction in the cylindrical geometry defined by the spine $\{0\}\times\mathbb{R}^{n-1}$ of $\mathbf{Y}^1\times\mathbb{R}^{n-1}$ near the $Y$-point $p$.}} \label{fig wet} \end{figure} Figure \ref{fig wet}. \medskip \noindent {\bf Step three:} We combine the Almgren minimality of $K\setminus\mathrm{cl}\,(E)$ in $\Omega\setminus\mathrm{cl}\,(E)$ and the absence of $Y$-points in $K\setminus\mathrm{cl}\,(E)$ with some regularity theorems by Taylor \cite{taylor76} and Simon \cite{Simon_cylindrical}, to conclude that the singular set $\Sigma$ of $K\setminus\mathrm{cl}\,(E)$ is $\H^{n-1}$-negligible. At the same time, \eqref{minimality KE against diffeos} implies that the multiplicity-one varifold associated to $K\setminus\mathrm{cl}\,(E)$ is not only stationary, but also stable in $\Omega\setminus\mathrm{cl}\,(E)$. Therefore, we can exploit Wickramasekera's far reaching extension \cite{Wic} of a classical theorem of Schoen and Simon \cite{SchoenSimon81} to conclude that $\Sigma$ is empty if $1\le n\le 6$, is locally finite if $n=7$, and is $\H^{n-7+\eta}$-negligible for every $\eta>0$ if $n\ge 8$. This last information, combined with the Naber-Valtorta theorem \cite[Theorem 1.5]{NV_varifolds}, implies that when $n\ge8$, $\Sigma$ is countably $(n-7)$-rectifiable, thus completing the proof of the theorem. We notice here that when $n=1,2$, one can implement this strategy by relying solely on Taylor's theorem \cite{taylor76}, thus avoiding the use of the Schoen-Simon-Wickramasekera theory, see Remark \ref{rmk reg if n12}. Also, one can somehow rely on \cite{SchoenSimon81} only (rather than on the full strength of \cite{Wic}), see Remark \ref{rmk no wic}. Finally, we notice that when $n\ge 8$, by further refining the above arguments, we can also show that $\Sigma$ is locally $\H^{n-7}$-finite: this is discussed in section \ref{appendix NV local}. \medskip We close this introduction by further discussing the construction of the competitors needed in carrying out step one of the above scheme. 
Indeed, this is a delicate point of the argument where we have made some non-obvious technical choices. \medskip \noindent {\bf Discussion of step one:} We illustrate the various aspects of the proof of \eqref{e:almgren} by means of \begin{figure} \input{stepone.pstex_t} \caption{\small{Proving that the exterior collapsed region is an Almgren minimal set.}} \label{fig stepone} \end{figure} Figure \ref{fig stepone}. In panel (a) we have a schematic representation of a generalized minimizer $(K,E)$ whose exterior collapsed region $K\setminus\mathrm{cl}\,(E)$ consists of various segments, intersecting along a singular set $\Sigma$ which is depicted by two black disks. We center $B_r(x)$ at one of the points in $\Sigma$, pick $r$ so that $B_r(x)$ is disjoint from $\mathrm{cl}\,(E)\cup W$, and in panel (b) we depict the effect on $K\cap B_r(x)$ of a typical area-decreasing Lipschitz deformation supported in $B_r(x)$ (notice that such a map is not injective, so \eqref{minimality KE against diffeos} is of no help here). As it turns out, one has $(f(K),E)\in\mathcal{K}$: the only non-trivial point is showing that $f(K)$ is $\mathcal{C}$-spanning $W$, but this follows quite directly by arguing as in \cite[Proof of Theorem 4, Step 3]{DLGM}. Now, in order to deduce \eqref{e:almgren} from $\psi(\varepsilon)=\mathcal F(K,E)$ we need to find a sequence $\{F_j\}_j$ in the competition class of $\psi(\varepsilon)$ such that $\H^n(\Omega\cap\partial F_j)\to \mathcal F(f(K),E)$ as $j\to\infty$. The obvious choice, at least in the situation depicted in Figure \ref{fig stepone}, would be taking \footnote{In the general situation, with $K\cap E$ possibly not empty, one should modify the formula for $F_j$ by removing $I_{\eta_j}(K\cap E)$. This fact is taken into account in the actual proof when we consider the set $A_1$ in Lemma \ref{l:one_sided_fattening} below.} \[ F_j=U_{\eta_j}(f(K)\cup E)\,, \] for some $\eta_j\to 0^+$, where $U_\eta(S)$ denotes the open $\eta$-tubular neighborhood of the set $S$, see panel (c); for such a set $F_j$, we want to show that (i) $\H^n(B_r(x)\cap\partial F_j)\to 2\,\H^n(B_r(x)\cap f(K))$ as $j\to\infty$; and (ii) that $\Omega\cap\partial F_j$ is $\mathcal{C}$-spanning $W$. Concerning problem (i), taking into account that \[ \H^n(\Omega\cap\partial F_\eta)\approx\frac{|U_\eta(f(K)\cup E)|}{\eta}\qquad\mbox{as $\eta\to 0^+$}\,, \] one wants first to show that $f(K)$ is Minkowski regular, in the sense that \[ \lim_{\eta\to 0^+}\frac{|U_\eta(f(K))|}{2\eta}=\H^n(f(K))\,, \] and then to discuss the relation between $|U_\eta(f(K))|$ and $|U_\eta(f(K)\cup E)|$, which needs to keep track of those ``volume cancellations'' due to the parts of $U_\eta(f(K))$ which are contained in $E$. Discussing such cancellations is indeed possible through a careful adaptation of some recent works by Ambrosio, Colesanti and Villa \cite{ambrosiocolevilla,villa}. Addressing the Minkowski regularity of $f(K)$ requires instead the merging of two basic criteria for Minkowski regularity: ``Lipschitz images of compact subsets in $\mathbb{R}^n$ are Minkowski regular'' (Kneser's Theorem \cite{kneser}, see also \cite[3.2.28-29]{FedererBOOK} and \cite[Theorem 2.106]{AFP}) and ``compact $\H^n$-rectifiable sets with uniform density estimates are Minkowski regular'' (due to Ambrosio, Fusco and Pallara \cite[Theorem 2.104]{AFP}). 
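As a simple sanity check of this notion (a standard explicit computation, not taken from \cite{kneser,AFP}): if $Z$ is the unit segment $[0,1]\times\{0\}\subset\mathbb{R}^2$, then $U_\eta(Z)$ is the union of a $1\times2\eta$ rectangle and two half-disks of radius $\eta$, so that
\[
|U_\eta(Z)|=2\,\eta+\pi\,\eta^2\,,\qquad\lim_{\eta\to0^+}\frac{|U_\eta(Z)|}{2\,\eta}=1=\H^1(Z)\,,
\]
and $Z$ is Minkowski regular; the point, of course, is to prove such identities when no explicit description of $U_\eta$ is available.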
In section \ref{s:Minkowski}, see in particular Theorem \ref{thm fine} below, we indeed merge these criteria by showing that ``Lipschitz images of compact $\H^n$-rectifiable sets with uniform density estimates are Minkowski regular''. (To apply this theorem to $f(K)$ we need of course to obtain uniform density estimates for $K$, which are discussed in section \ref{s:lde}, Theorem \ref{thm boundary density estimates}). We can thus come to a satisfactory solution of problem (i). However, we have not been able to solve problem (ii): in other words, it remains highly non-obvious whether a set like $\Omega\cap\partial F_j$ is always $\mathcal{C}$-spanning $W$, given the possibly subtle interactions between the geometries of $E$ and $f(K)$ and the operation of taking open neighborhoods. To overcome the spanning problem, we explore the possibility of defining $F_j$ as a {\it one-sided neighborhood} of $f(K)$ (which automatically contains the $\mathcal{C}$-spanning set $f(K)$ in its boundary), rather than as an open neighborhood of $f(K)$ (which contains the $\mathcal{C}$-spanning set $f(K)$ in its interior). As shown in \cite[Lemma 3.2]{kms2}, we can define $\mathcal{C}$-spanning one-sided neighborhoods of a pair $(K,E)\in\mathcal{K}$ whenever $K$ is smoothly orientable outside of a meager closed subset of $K$. Thanks to Theorem \ref{thm basic regularity}-(ii) a generalized minimizer $(K,E)$ has enough regularity to define the required one-sided neighborhoods of $K$, but this regularity may be lost after applying the Lipschitz map $f$ to $K$: it thus seems that neither approach is going to work. The solution comes {\it by mixing the two methods}, as depicted in panel (d): inside $B_r(x)$, we define $F_j$ by taking an $\eta_j$-neighborhood of $f(K)$ -- which is fine, in terms of proving the $\mathcal{C}$-spanning condition, given the simple geometry of the ball and the care we will put in making sure that $\Omega\cap\partial F_j$ contains the spherical subsets $\partial B_r(x)\cap U_{\eta_j}(f(K))$; inside $\Omega\setminus\mathrm{cl}\,(B_r(x))$ we will define $F_j$ by the one-sided neighborhood construction -- notice that we have enough regularity in this region because $f(K)$ and $K$ coincide on $\Omega\setminus\mathrm{cl}\,(B_r(x))$. We will actually need a variant of the one-sided neighborhood lemma \cite[Lemma 3.2]{kms2}, to guarantee that $\partial B_r(x)\cap U_{\eta_j}(f(K))$ is contained in $\Omega\cap\partial F_j$, see Lemma \ref{l:one_sided_fattening}. \medskip \noindent {\bf Organization of the paper}: Section \ref{section notation} contains a summary of the notation used in the paper. In section \ref{s:Minkowski} we obtain the criterion for Minkowski regularity merging Kneser's theorem with \cite[Theorem 2.104]{AFP}, see Theorem \ref{thm fine}. In section \ref{s:lde} we discuss the lower density bounds up to the boundary wire frame needed to apply Theorem \ref{thm fine} to $K$, while in section \ref{s:almgren_minimal} we put together all these results to show the Almgren minimality of $K\setminus\mathrm{cl}\,(E)$ in $\Omega\setminus\mathrm{cl}\,(E)$. In section \ref{s:noY} we construct the wetting competitors needed to exclude the presence of $Y$-points of $K\setminus\mathrm{cl}\,(E)$ in $\Omega\setminus\mathrm{cl}\,(E)$, and finally, in section \ref{s:graph}, we illustrate the application of various regularity theorems \cite{taylor76,Simon_cylindrical,SchoenSimon81,Wic,NV_varifolds} needed to deduce Theorem \ref{t:main} from our variational analysis. 
Finally, in section \ref{appendix NV local}, we exploit more specifically the Naber-Valtorta results on the quantitative stratification of stationary varifolds and prove the local $\H^{n-7}$-estimates for $\Sigma$ mentioned in Remark \ref{remark locally finite}. \medskip \noindent {\bf Acknowledgment:} This work was supported by the NSF grants DMS 2000034, FRG-DMS 1854344, and RTG-DMS 1840314. \section{Notation and terminology}\label{section notation} We summarize some basic definitions, mostly following \cite{SimonLN,maggiBOOK}. \medskip \noindent {\bf Radon measures and rectifiability:} We work in the Euclidean space $\mathbb{R}^{n+1}$ with $n \geq 1$. For $A \subset \mathbb{R}^{n+1}$, $\mathrm{cl}\,(A)$ denotes the topological closure of $A$ in $\mathbb{R}^{n+1}$, while $U_\eta(A)$ and $I_\eta(A)$ are the open and closed $\eta$-tubular neighborhoods of $A$, respectively. The open ball centered at $x \in \mathbb{R}^{n+1}$ with radius $r>0$ is denoted $B_r(x)$; given $1\leq k\leq n$ and a $k$-dimensional linear subspace $L \subset \mathbb{R}^{n+1}$, $B^L_r(x)$ denotes instead the open disc $B_r(x) \cap (x+L)$, and $B^k_r(x)$ is the corresponding shorthand notation when the subspace $L$ is clear from the context. We use the shorthand notation $B_r=B_r(0)$ and $B_r^k=B_r^k(0)$. If $A \subset \mathbb{R}^{n+1}$ is (Borel) measurable, then $|A|=\mathcal{L}^{n+1}(A)$ and $\H^s(A)$ denote its Lebesgue and $s$-dimensional Hausdorff measures, respectively, and we set $\omega_k= \H^k(B^k_1)$. If $\mu$ is a Radon measure in $\mathbb{R}^{n+1}$, $A \subset \mathbb{R}^{n+1}$ is Borel, and $f \colon \mathbb{R}^{n+1} \to \mathbb{R}^d$ is continuous and proper, then $\mu \llcorner A$ and $f_\sharp \mu$ denote the restriction of $\mu$ to $A$ and the push-forward of $\mu$ through $f$, respectively defined by $(\mu \llcorner A)(E) = \mu(A \cap E)$ for every Borel $E \subset \mathbb{R}^{n+1}$ and $(f_\sharp \mu)(F) = \mu(f^{-1}(F))$ for every Borel $F \subset \mathbb{R}^d$. The {\bf Hausdorff dimension} of $A$ is denoted $\dim_{\H}(A)$: it is the infimum of all real numbers $t \geq 0$ such that $\H^s(A) = 0$ for all $s>t$. Given an integer $1\leq k \leq n+1$, a Borel measurable set $M \subset \mathbb{R}^{n+1}$ is {\bf countably $k$-rectifiable} if it can be covered by countably many Lipschitz images of $\mathbb{R}^k$ up to a set of zero $\H^k$ measure; $M$ is {\bf (locally) $\H^k$-rectifiable} if it is countably $k$-rectifiable and, in addition, its $\H^k$ measure is (locally) finite. If $M$ is locally $\H^k$-rectifiable, then for $\H^k$-a.e. $x \in M$ there exists a unique $k$-dimensional linear subspace of $\mathbb{R}^{n+1}$, denoted $T_xM$, with the property that $\H^k \llcorner ((M-x)/r) \stackrel{*}{\rightharpoonup} \H^k \llcorner T_xM$ in the sense of Radon measures in $\mathbb{R}^{n+1}$ as $r \to 0^+$: $T_xM$ is called the approximate tangent space to $M$ at $x$. If $f \colon \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ is locally Lipschitz and $M$ is locally $\H^k$-rectifiable then the tangential gradient $\nabla^M f$ and the tangential jacobian $J^Mf$ are well defined at $\H^k$-a.e. point in $M$. 
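For later reference, we also record, without proof and in the form in which it will be used in section \ref{s:Minkowski} (see, e.g., \cite{AFP,FedererBOOK}), the generalized area formula: if $M$ is a locally $\H^k$-rectifiable set in $\mathbb{R}^{n+1}$, $f \colon \mathbb{R}^{n+1}\to\mathbb{R}^m$ is a Lipschitz map, and $g \colon M \to [0,\infty]$ is Borel, then
\[
\int_M g\,J^Mf\,d\H^k=\int_{\mathbb{R}^m}\Big(\sum_{x\in M\cap f^{-1}(y)}g(x)\Big)\,d\H^k(y)\,.
\]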
\medskip \noindent {\bf Sets of finite perimeter:} A Borel set $E \subset \mathbb{R}^{n+1}$ is: of {\bf locally finite perimeter} if there exists an $\mathbb{R}^{n+1}$-valued Radon measure $\mu_E$ such that $\langle \mu_E, X \rangle = \int_E {\rm div}\,(X)\, dx$ for all vector fields $X \in C^1_c(\mathbb{R}^{n+1};\mathbb{R}^{n+1})$; of {\bf finite perimeter} if, in addition, $P(E) = |\mu_E|(\mathbb{R}^{n+1})$ is finite. For any Borel set $F \subset \mathbb{R}^{n+1}$, the relative perimeter of $E$ in $F$ is then defined by $P(E;F) = |\mu_E|(F)$. The {\bf reduced boundary} of a set $E$ of locally finite perimeter is the set $\partial^*E$ of all points $x \in \mathbb{R}^{n+1}$ such that the vectors $|\mu_E|(B_r(x))^{-1}\,\mu_E(B_r(x))$ converge, as $r\to 0^+$, to a vector $\nu_E(x) \in \mathbb{S}^n$: $\nu_E(x)$ is called the {\bf outer unit normal} to $\partial^*E$ at $x$. By De Giorgi's structure theorem, $\partial^*E$ is locally $\H^n$-rectifiable, with $\mu_E = \nu_E\,\H^n \llcorner \partial^*E$ and $|\mu_E| = \H^n \llcorner \partial^*E$. \medskip \noindent {\bf Integral varifolds}: An {\bf integral $n$-varifold} $V$ on an open set $U\subset\mathbb{R}^{n+1}$ is a continuous linear functional on $C^0_c(U\times G_n^{n+1})$ (where $G_n^{n+1}$ is the set of unoriented $n$-dimensional planes in $\mathbb{R}^{n+1}$) corresponding to a locally $\H^n$-rectifiable set $M$ in $U$, and a non-negative, integer valued function $\theta \in L^1_{{\rm loc}}(\H^n\llcorner M)$, so that \[ V(\varphi)=\mathbf{var}\,(M,\theta)(\varphi) = \int_M \varphi(x,T_xM) \,\theta(x)\,d\H^n(x) \qquad \mbox{for all $\varphi \in C^0_c(U\times G_n^{n+1})$}\,. \] The function $\theta$, which is uniquely defined only $\H^n$-a.e. on $M$, is called the {\bf multiplicity} of $V$, while the Radon measure $\|V\|=\theta\,\H^n\llcorner M$ is the {\bf weight} of $V$ and ${\rm spt}\,V={\rm spt}\,\|V\|$ is the {\bf support} of $V$. If $\Phi \colon U \to U'$ is a diffeomorphism, the {\bf push-forward of $V=\mathbf{var}\,(M,\theta)$ through $\Phi$} is the integral $n$-varifold $\Phi_\sharp V = \mathbf{var}\,(\Phi(M), \theta \circ \Phi^{-1})$ on $U'$. If $X\in C^1_c(U;\mathbb{R}^{n+1})$, then ${\rm div}\,^TX=\varphi(x,T)$ defines a function $\varphi\in C^0_c(U\times G_n^{n+1})$: correspondingly, one says that $\vec{H} \in L^1_{{\rm loc}}(U;\mathbb{R}^{n+1})$ is the {\bf generalized mean curvature vector} of $V$ if \begin{equation} \label{gen min curv vector} \int_M\,\theta\,{\rm div}\,^MX\,d\H^n= \int_M X \cdot \vec{H} \, \theta\,d\H^n \qquad\forall X\in C^1_c(U;\mathbb{R}^{n+1})\,. \end{equation} When $\vec{H}=0$ we say that $V$ is {\bf stationary} in $U$: for example, if $M$ is a minimal hypersurface in $U$, then $V=\mathbf{var}\,(M,1)$ is stationary in $U$. Area monotonicity carries over from minimal surfaces to stationary varifolds, in the sense that the density ratios \[ \frac{\|V\|(B_r(x))}{\omega_n\,r^n}\quad\mbox{are increasing in $r\in\big(0,{\rm dist}(x,\partial U)\big)$}\,, \] with limit value as $r\to 0^+$ denoted by $\Theta_V(x)$ and called the {\bf density} of $V$ at $x$. \section{Minkowski content of rectifiable sets} \label{s:Minkowski} The goal of this section is merging two well-known criteria for Minkowski regularity, Kneser's Theorem \cite{kneser} and \cite[Theorem 2.104]{AFP}, into Theorem \ref{thm fine} below. As explained in the introduction, this result will then play a crucial role in proving the Almgren minimality of exterior collapsed regions. 
It is convenient to introduce the following notation: given a compact set $Z\subset\mathbb{R}^d$ and an integer $k \in \{0,\ldots,d\}$, we define the {\bf upper and lower $k$-dimensional Minkowski contents} of $Z$ as \begin{eqnarray*} \mathcal{UM}\,^k(Z)&=&\limsup_{\eta\to 0^+}\frac{|U_\eta(Z)|}{\omega_{d-k}\,\eta^{d-k}}\,, \\ \mathcal{LM}\,^k(Z)&=&\liminf_{\eta\to 0^+}\frac{|U_\eta(Z)|}{\omega_{d-k}\,\eta^{d-k}}\,. \end{eqnarray*} When $\mathcal{UM}\,^k(Z)=\mathcal{LM}\,^k(Z)$ we denote by $\mathcal{M}^k(Z)$ their common value, and call it the {\bf $k$-dimensional Minkowski content} of $Z$. If the $k$-dimensional Minkowski content of $Z$ exists, we say further that $Z$ is {\bf Minkowski $k$-regular} provided \begin{equation} \label{d:Minkowski regularity} \mathcal{M}^k(Z) \;=\; \H^k(Z)\,. \end{equation} It is easily seen that any $k$-dimensional $C^2$-surface with boundary in $\mathbb{R}^d$ is Minkowski $k$-regular, but, as said, more general criteria are available. \begin{theorem}[Kneser's Theorem]\label{thm Federer 3229} If $Z\subset\mathbb{R}^k$ is compact and $f:\mathbb{R}^k\to\mathbb{R}^d$ is a Lipschitz map, then $f(Z)$ is Minkowski $k$-regular. \end{theorem} \begin{theorem}[{\cite[Theorem 2.104]{AFP}}]\label{thm AFP 2104} If $Z$ is a compact, countably $k$-rectifiable set in $\mathbb{R}^d$, and if there exists a Radon measure $\nu$ on $\mathbb{R}^d$ with $\nu<<\H^k$ and \[ \nu(B_r(x))\ge c\,r^k\,,\qquad\forall x\in Z\,,\forall r<r_0\,, \] for positive constants $c$ and $r_0$, then $Z$ is Minkowski $k$-regular. \end{theorem} \begin{remark} {\rm For the reader's convenience we observe that Theorem \ref{thm AFP 2104} has the same statement as \cite[Theorem 2.104]{AFP}, although it should be noted that the existence of $\nu$ implies that $\H^k(Z)<\infty$, and thus that $Z$ is $\H^k$-rectifiable. For this reason we shall directly work with $\H^k$-rectifiable sets.} \end{remark} We now prove a result that mixes elements of both Theorem \ref{thm Federer 3229} and Theorem \ref{thm AFP 2104}, but that apparently does not follow immediately from them. \begin{theorem} \label{thm fine} If $Z$ is a compact and $\H^k$-rectifiable set in $\mathbb{R}^d$ such that \begin{equation} \label{ldb} \H^k(Z\cap B_r(x))\ge c\,r^k\qquad\forall x\in Z\,,\forall r<r_0\,, \end{equation} and if $f:\mathbb{R}^d\to\mathbb{R}^d$ is a Lipschitz map, then $f(Z)$ is Minkowski $k$-regular. \end{theorem} We present a proof of Theorem \ref{thm fine} which follows the argument used in \cite{AFP} to prove Theorem \ref{thm Federer 3229}. We premise two propositions to the main argument. \begin{proposition} \label{prop good case} If $Z$ is a compact and $\H^k$-rectifiable set in $\mathbb{R}^d$ such that \[ \H^k(Z\cap B_r(x))\ge c\,r^k\qquad\forall x\in Z\,,\forall r<r_0\,, \] and if $f:\mathbb{R}^d\to\mathbb{R}^d$ is a Lipschitz map with $J^Z f>0$ a.e. on $Z$, then $f(Z')$ is Minkowski $k$-regular for any compact subset $Z' \subset Z$. \end{proposition} \begin{proof} Let \[ \nu=f_\sharp\,(\H^k\llcorner Z)\,. \] If $y=f(x)\in f(Z)$ and $r\le {\rm Lip}(f)\,r_0$, then \[ \nu(B_r(y))=\H^k(Z\cap f^{-1}(B_r(y)))\ge\H^k(Z\cap B_{r/{\rm Lip}(f)}(x))\ge \frac{c}{({\rm Lip} (f))^k}\,r^k\,. \] Moreover $\nu<< \H^k$, since if $E\subset\mathbb{R}^d$ with $\H^k(E)=0$, then by $J^Z f>0$ on $Z$ we get \[ \nu(E)=\H^k(Z\cap f^{-1}(E))=\int_{Z\cap f^{-1}(E)}\frac{J^Z f}{J^Z f}\,d\H^k=\int_{E\cap f(Z)}d\H^k(y)\int_{f^{-1}(y)\cap Z}\frac{d\H^0}{J^Zf}=0\,. \] We can thus apply Theorem \ref{thm AFP 2104} to $f(Z')$ for every $Z'\subset Z$ compact. 
\end{proof} \begin{proposition} \label{prop higher codim zeor jac} If $Z$ is a compact and $\H^k$-rectifiable set in $\mathbb{R}^d$ such that \[ \H^k(Z\cap B_r(x))\ge c\,r^k\qquad\forall x\in Z\,,\forall r<r_0\,, \] and if $f:\mathbb{R}^d\to\mathbb{R}^d$ is a Lipschitz map, then \[ \mathcal{M}^k(f(Z'))=0 \] whenever $Z'\subset Z$ is compact with $J^Z f=0$ $\H^k$-a.e. on $Z'$. \end{proposition} \begin{proof} Let us define $f_\varepsilon:\mathbb{R}^d\to\mathbb{R}^d\times\mathbb{R}^d$ by setting \[ f_\varepsilon(x)=\big(f(x),\varepsilon\,x\big)\,. \] If $x\in Z$ is such that $f$ is tangentially differentiable at $x$ along $Z$, then $f_\varepsilon$ is tangentially differentiable at $x$ along $Z$, and thus \[ J^Z f_\varepsilon(x)>0\,, \] with $J^Z f_\varepsilon(x)\to J^Z f(x)$ as $\varepsilon\to 0^+$ and $J^Z f_\varepsilon\le({\rm Lip} (f))^k+1$ for $\varepsilon<\varepsilon_0$. By Proposition \ref{prop good case} and the area formula, since $f_\varepsilon$ is injective we get \[ \mathcal{M}^k(f_\varepsilon(Z'))=\H^k(f_\varepsilon(Z'))=\int_{Z'} J^Z f_\varepsilon\to 0\qquad\mbox{as $\varepsilon\to 0^+$}\,, \] where in computing the limit we have used $J^Z f=0$ $\H^k$-a.e. on $Z'$. Thus, \[ \lim_{\varepsilon\to 0^+} \mathcal{M}^k(f_\varepsilon(Z'))=0\,. \] At the same time if $\eta>0$, then \[ \Big\{(y,z)\in\mathbb{R}^d\times\mathbb{R}^d:y\in B_\eta(f(x))\,, z\in B_\eta(\varepsilon\,x)\,, x\in Z'\Big\}\subset U_{2\,\eta}(f_\varepsilon(Z'))\,, \] so that Fubini's theorem gives \[ |U_\eta(f(Z'))|\,\omega_d\,\eta^d\le |U_{2\eta}(f_\varepsilon(Z'))|\,. \] Dividing by $\eta^{2d-k}$ we get \[ \mathcal{UM}\,^k(f(Z'))\,\le C(d,k)\,\limsup_{\eta\to 0^+}\frac{|U_{2\eta}(f_\varepsilon(Z'))|}{\eta^{2d-k}}\le C(d,k)\,\mathcal{M}^k(f_\varepsilon(Z'))\,, \] and letting $\varepsilon\to 0^+$ we conclude the proof. \end{proof} \begin{proof}[Proof of Theorem \ref{thm fine}] For brevity, set $S = f(Z)$. The rectifiability of $S$ gives $\mathcal{LM}\,^k(S)\ge\H^k(S)$, see e.g. \cite[Proposition 2.101]{AFP}, so that we only need to prove $\mathcal{UM}\,^k(S)\le\H^k(S)$. We set $F=\{J^Z f>0\}$ (that is, $f$ is tangentially differentiable along $Z$ with positive tangential Jacobian on $F$) and pick $Z_0\subset \{J^Z f=0\}\subset Z\setminus F$ compact with the property that \begin{equation} \label{error1} \H^k\big(Z\setminus (F\cup Z_0)\big)<\sigma\,, \end{equation} for some $\sigma>0$. In this way, by Proposition \ref{prop higher codim zeor jac}, we have \begin{equation} \label{zero piece} \mathcal{M}^k(S_0)=0\qquad\mbox{where $S_0=f(Z_0)$}\,. \end{equation} Since $S$ is compact and $\H^k$-rectifiable we can find a countable disjoint family $\{S_i\}_i$ of compact subsets of $S$, which covers $S$ modulo $\H^k$, and such that $S_i=f_i(Z_i)$ for compact sets $Z_i\subset\mathbb{R}^k$ and injective Lipschitz maps $f_i$ with uniformly positive Jacobian on $\mathbb{R}^k$. By Theorem \ref{thm Federer 3229}, \begin{equation} \label{good pieces} \mathcal{M}^k(S_i)=\H^k(S_i) \qquad \mbox{for every $i$}\,. \end{equation} We pick $N$ so that \begin{equation} \label{Hk delta} \H^k\Big(S\setminus \bigcup_{i=1}^N S_i\Big)<\delta\,, \end{equation} for $\delta$ to be chosen depending on $\sigma$, and set \begin{equation} \label{error2} S^*=S\setminus \Big(S_0\cup\bigcup_{i=1}^N S_i\Big)\,. \end{equation} Next, we further distinguish points in $S^*$ depending on their distance from $\bigcup_{i=0}^N S_i$. 
More precisely, for any arbitrary $\lambda \in \left( 0,1 \right)$ we define the compact set \begin{equation} \label{to cover} S^{**}=S\setminus U_{\l\eta}\Big(\bigcup_{i=0}^N S_i\Big)\,, \end{equation} and then we apply the Besicovitch covering theorem to cover \begin{equation} \label{covering} S^{**} \subset \bigcup_{j \in J} B_{\l\,\eta}(y_j)\,, \end{equation} where $J$ is a finite set of indexes, each $y_j \in S^{**}$, and each point of $S^{**}$ belongs to at most $\xi(d)$ distinct balls in the covering. Notice that \begin{eqnarray*} \bigcup_{j\in J}B_{\l\,\eta}(y_j)&\subset&U_{\l\,\eta}(S)\setminus\bigcup_{i=0}^N S_i\,, \\ S\cap\bigcup_{j\in J}B_{\l\,\eta}(y_j)&\subset& S^*\,. \end{eqnarray*} Furthermore, $B_{\l\,\eta/{\rm Lip}\,f}(x_j)\subset f^{-1}(B_{\l\,\eta}(y_j))$ for some $x_j \in Z$ such that $y_j=f(x_j)$. The lower density bound \eqref{ldb} then yields \begin{eqnarray*} \#(J)\,\frac{c\,(\l\,\eta)^k}{({\rm Lip} f)^k}&\le&\sum_{j\in J}\H^k\Big(Z\cap B_{\l\,\eta/{\rm Lip} f}(x_j)\Big) \\ &\le&\sum_{j\in J}\H^k\Big(Z\cap f^{-1}(B_{\l\eta}(y_j))\Big) \\ &\le&\xi(d)\,\H^k\Big(Z\cap f^{-1}\Big(\bigcup_{j\in J}B_{\l\eta}(y_j)\Big)\Big) \\ &\le&\xi(d)\,\H^k(Z\cap f^{-1}(S^*))\,. \end{eqnarray*} However, \begin{eqnarray*} \H^k(Z\cap f^{-1}(S^*))&=&\H^k(Z\cap f^{-1}(S^*)\cap F)+\H^k(Z\cap f^{-1}(S^*)\setminus F) \\ &\le& \nu(S^*)+\H^k(Z\setminus(F\cup Z_0))\le \nu(S^*)+\sigma\,, \end{eqnarray*} provided we set \[ \nu=f_\sharp[\H^k\llcorner (Z\cap F)]\,. \] Since $J^Z f > 0$ on $F$, we have, for any Borel set $A \subset \mathbb{R}^d$ \[ \nu(A)=\int_{Z\cap F\cap f^{-1}(A)}\frac{J^Z f}{J^Z f}=\int_{A}d\H^k(y)\int_{f^{-1}(y)\cap Z\cap F}\frac{d\H^0}{J^Zf} \] so that $\nu<<\H^k$. Therefore a suitable choice of $\delta=\delta(\sigma)$ in \eqref{Hk delta} gives \[ \nu(S^*)<\sigma\,, \] and we have thus proved that \begin{equation} \label{number of balls} \#(J) \leq C(d,c,{\rm Lip}(f))\, \sigma\, \lambda^{-k} \, \eta^{-k}\,. \end{equation} We can now conclude the argument. From the definition of $S^*$ it follows that \begin{equation} S = \bigcup_{i=0}^N S_i \cup S^*\,, \end{equation} and thus that \begin{equation} \label{tb_nbd} U_\eta(S) \subset \bigcup_{i=0}^N U_\eta(S_i) \cup U_\eta(S^*) \qquad \mbox{for all $\eta > 0$}\,. \end{equation} On the other hand, by the definition of $S^{**}$ \begin{equation} \label{key} U_\eta(S^*) \subset U_\eta(S^{**}) \cup \bigcup_{i=0}^N U_{(1+\lambda)\,\eta} (S_i)\,, \end{equation} which, together with \eqref{covering}, gives \begin{equation} \label{final_tb_nbd} U_\eta(S) \subset \bigcup_{i=0}^N U_{(1+\l)\,\eta}(S_i) \cup \bigcup_{j \in J} B_{(1+\l)\,\eta}(y_j)\,. \end{equation} By means of \eqref{number of balls} we achieve \begin{equation} |U_\eta(S)| \leq \sum_{i=0}^N |U_{(1+\l)\,\eta}(S_i)| + C(d,c,{\rm Lip}(f)) \, \sigma \, \lambda^{-k}\, (1+\lambda)^d \, \eta^{d-k}\,, \end{equation} so that, dividing by $\omega_{d-k}\,\eta^{d-k}$, taking the limit as $\eta \to 0^+$, and using \eqref{zero piece} and \eqref{good pieces} we obtain \begin{equation} \label{final} \begin{split} \mathcal{UM}\,^k (S) &\leq (1+\l)^{d-k} \sum_{i=1}^N \H^k(S_i) + C(d,k,c,{\rm Lip}(f)) \, \sigma \, \lambda^{-k} \, (1+\lambda)^{d}\\ &\leq (1+\lambda)^{d-k} \, \H^k(S) + C(d,k,c,{\rm Lip}(f)) \, \sigma \, \lambda^{-k} \, (1+\lambda)^{d}\,. \end{split} \end{equation} The conclusion follows by letting first $\sigma \to 0^+$ and then $\l \to 0^+$. \end{proof} We close this section by proving a useful localization statement. 
\begin{proposition}[Localization of Minkowski content] \label{prop_localization} If $Z$ is a compact and $\H^k$-rectifiable set in $\mathbb{R}^d$ such that \[ \mathcal{M}^k(Z)=\H^k(Z)\,, \] then \[ \lim_{\eta\to 0^+}\frac{|U_\eta(Z)\cap E|}{\omega_{d-k}\eta^{d-k}}=\H^k(Z\cap E) \] whenever $E$ is a Borel set with $\H^k(Z\cap\partial E)=0$. \end{proposition} \begin{proof} If we set \[ \mu_\eta=\frac{\L^d\llcorner U_\eta(Z)}{\omega_{d-k}\eta^{d-k}}\,,\qquad \mu=\H^k\llcorner Z\,, \] then we just need to prove that, as $\eta\to 0^+$, $\mu_\eta\stackrel{*}{\rightharpoonup} \mu$ in $\mathbb{R}^d$. To this end, we first consider an open set $A$, set $A_\eta=\{x\in A:{\rm dist}(x,\partial A)\ge\eta\}$, and notice that, for $\eta<\eta_0$, \begin{eqnarray*} \mu_\eta(A)&=&\frac{|U_\eta(Z)\cap A|}{\omega_{d-k}\eta^{d-k}}\ge \frac{|U_\eta(Z\cap A_\eta)|}{\omega_{d-k}\eta^{d-k}} \\ &\ge& \frac{|U_\eta(Z\cap A_{\eta_0})|}{\omega_{d-k}\eta^{d-k}} \end{eqnarray*} so that, by \cite[Proposition 2.101]{AFP} and since $Z\cap A_{\eta_0}$ is compact and $\H^k$-rectifiable \[ \liminf_{\eta\to 0^+}\mu_\eta(A)\ge\H^k(Z\cap A_{\eta_0})\,. \] Letting $\eta_0\to 0^+$ we get \begin{equation} \label{prop loc lsc} \liminf_{\eta\to 0^+}\mu_\eta(A)\ge\mu(A)\qquad\mbox{$\forall A\subset\mathbb{R}^d$ open}\,. \end{equation} Since $\mathcal{M}^k(Z)=\H^k(Z)$ means that $\mu_\eta(\mathbb{R}^d)\to\mu(\mathbb{R}^d)$ as $\eta\to 0^+$, we find that, for every compact set $H\subset\mathbb{R}^d$, \begin{eqnarray} \label{prop loc usc} \mu(H)=\mu(\mathbb{R}^d)-\mu(A)\ge\lim_{\eta\to 0^+}\mu_\eta(\mathbb{R}^d)-\liminf_{\eta\to 0^+}\mu_\eta(A) \ge\limsup_{\eta\to 0^+}\mu_\eta(H)\,, \end{eqnarray} where we have used \eqref{prop loc lsc} with $A=\mathbb{R}^d\setminus H$. By a standard criterion for weak-star convergence of Radon measures, \eqref{prop loc lsc} and \eqref{prop loc usc} imply that $\mu_\eta\stackrel{*}{\rightharpoonup} \mu$ in $\mathbb{R}^d$ as $\eta\to 0^+$. \end{proof} \section{Uniform lower density estimates} \label{s:lde} The application of Theorem \ref{thm fine} to $K$ (where $(K,E)$ is a generalized minimizer of $\psi(\varepsilon)$) requires proving uniform lower density estimates for $\mathrm{cl}\,(K)$ (recall that $K$ is compact relative to $\Omega$, not to $\mathbb{R}^{n+1}$). Now, it is a consequence of the analysis carried out in \cite{kms} that there exists a radius $r_* > 0$ such that \begin{equation} \label{interior_dlb} \H^n(K\cap B_r(x))\ge \omega_n\,r^n\,,\qquad\forall x\in K\,, r < r_*\,,B_r(x)\subset\joinrel\subset\Omega\,. \end{equation} However, the lower density estimate in \eqref{interior_dlb} is not sufficient to apply Theorem \ref{thm fine}, because its radius of validity degenerates as $x$ approaches $\mathrm{cl}\, (K) \setminus K = \mathrm{cl}\, (K) \cap \partial \Omega$. We thus need an improvement, which is provided in the following theorem. \begin{theorem}[Uniform lower density estimates] \label{thm boundary density estimates} If $(K,E)$ is a generalized minimizer of $\psi(\varepsilon)=\psi(\varepsilon,W,\mathcal{C})$, then there exist $c=c(n)>0$ and $r_0=r_0(n,W,|\l|)>0$ such that \begin{equation} \label{e:uniform_lde} \H^n(K\cap B_r(x))\ge c\,r^n\qquad\forall x\in\mathrm{cl}\,(K)\,, \forall r<r_{0}\,. \end{equation} Here $\l$ is the Lagrange multiplier of $(K,E)$, as introduced in Theorem \ref{thm basic regularity}-(i). 
\end{theorem} The proof of Theorem \ref{thm boundary density estimates} starts with the remark that the integral $n$-varifold $V$ naturally associated to $(K,E)$ has generalized mean curvature in $L^\infty$ and satisfies a distributional formulation of Young's law. More precisely, letting $V$ be the $n$-varifold $V = \mathbf{var}\,(K,\theta)$ defined by $K$ with multiplicity function \[ \theta(x) = \begin{cases} 1 &\mbox{if $x \in \partial^*E$}\\ 2 & \mbox{if $x \in K \setminus \partial^*E$}\,, \end{cases} \] then \[ \|V\| = \H^n \llcorner (\Omega \cap \partial^*E) + 2\,\H^n \llcorner (K \setminus \partial^*E) \] and, by considering \eqref{stationary main} in Theorem \ref{thm basic regularity} on vector fields compactly supported in $\Omega$, we see that $V$ has generalized mean curvature vector $\vec{H} = \lambda\, 1_{\partial^*E} \, \nu_E$ in $\Omega$. Actually, \eqref{stationary main} says more, since it allows for vector fields not necessarily supported in $\Omega$, provided they are tangential to $\partial\Omega$; i.e., \eqref{stationary main} gives \begin{equation} \label{first variation} \int {\rm div}\,^KX \, d\|V\| = \int X \cdot \vec{H} \, d\|V\| \quad \mbox{$\forall\,X \in C^1(\Omega; \mathbb{R}^{n+1})$ with $X \cdot \nu_\Omega = 0$ on $\partial \Omega$}\,. \end{equation} The extra information conveyed in \eqref{first variation} is that, in a distributional sense, $V$ has contact angle $\pi/2$ with $\partial \Omega$. The consequences of the validity of \eqref{first variation} have been extensively studied in the classical work of Gr\"uter and Jost \cite{gruterjost}, and their work has been recently extended to arbitrary contact angles by Kagaya and Tonegawa \cite{kagayatone}. In particular, if $s_0 \in \left(0,\infty \right)$ is such that the tubular neighborhood $U_{s_0}(\partial W)$ admits a well-defined nearest point projection map $\Pi \colon U_{s_0}(\partial W) \to \partial W$ of class $C^1$, then \cite[Theorem 3.2]{kagayatone} ensures the existence of a constant $C = C(n,s_0)$ such that for any $x \in U_{s_0/6}(\partial W) \cap {\rm cl}(\Omega)$ the map \begin{equation} \label{monotonicity_formula} r \in (0,s_0/6)\mapsto \frac{\|V\|(B_r(x)) + \|V\|(\tilde B_r(x))}{\omega_n \, r^n} \,e^{(|\l|+C)r} \end{equation} is increasing, where \begin{equation} \label{reflection} \tilde B_r(x) = \left\lbrace y \in \mathbb{R}^{n+1} \, \colon \, \tilde y \in B_r(x) \right\rbrace \,, \qquad \tilde y = \Pi(y) + (\Pi(y) - y) \end{equation} denotes a sort of nonlinear reflection of $B_r(x)$ across $\partial W$. In particular, the limit \begin{equation} \label{boundary_density} \sigma(x) = \lim_{r\to 0^+} \frac{\|V\|(B_r(x)) + \|V\|(\tilde B_r(x))}{\omega_n \, r^n} \end{equation} exists for every $x \in U_{s_0/6}(\partial W) \cap \mathrm{cl}\,(\Omega)$, and the map $x \mapsto \sigma(x)$ is upper semicontinuous there; see \cite[Corollary 5.1]{kagayatone}. The uniform density estimate \eqref{e:uniform_lde} will be deduced as a consequence of the above monotonicity result, together with the following simple geometric lemma: \begin{lemma}\label{lemma_ovvio} Suppose that $x \in U_{s_0}(\partial W)$, and $\rho > 0$ is such that ${\rm dist}(x,\partial W) \leq \rho$ and $B_{\rho}(x) \subset U_{s_0}(\partial W)$. Then: \begin{equation} \label{ovvio2} \tilde B_{\rho}(x) \subset B_{5\rho}(x)\,. \end{equation} \end{lemma} \begin{proof} [Proof of Lemma \ref{lemma_ovvio}] See \cite[Lemma 4.2]{kagayatone}.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm boundary density estimates}] First observe that \eqref{e:uniform_lde} holds with $c=\omega_n$ for all $x \in K \setminus U_{s_0/6}(\partial W)$ as soon as $r < \min\{r_*,s_0/6\}$. Therefore, we can assume that \begin{equation} \label{close_to_boundary} x \in \mathrm{cl}\,(K) \cap U_{s_0/6}(\partial W)\,. \end{equation} Also note that for points as in \eqref{close_to_boundary} it holds $\sigma(x) \geq 1$: by upper semicontinuity of $\sigma$ on $U_{s_0/6}(\partial W) \cap \mathrm{cl}\,(\Omega)$, we just need to show this when, in addition to \eqref{close_to_boundary}, we have $x \in K$, and indeed in this case \[ \sigma(x) \geq \lim_{r\to 0^+} \frac{\H^n(K\cap B_r(x))}{\omega_n\, r^n} \geq 1 \] thanks to \eqref{interior_dlb}. Now we fix $r < r_0 = \min\{r_*,5s_0/6\}$, and distinguish two cases depending on the validity of \begin{equation} \label{no cross} {\rm dist}(x,\partial W) > \frac{r}{5}\,. \end{equation} If \eqref{no cross} holds, then by \eqref{interior_dlb} \[ \H^n(K \cap B_r(x)) \geq \H^n(K \cap B_{r/5}(x)) \geq \omega_n \, \left( \frac{r}{5} \right)^n\,, \] so that \eqref{e:uniform_lde} holds. If ${\rm dist}(x,\partial W) \leq r/5$, then, thanks to the inclusion $B_{r/5}(x) \subset U_{s_0}(\partial W)$ (which holds since ${\rm dist}(x,\partial W)\le r/5$ and $r<5\,s_0/6$), we can apply Lemma \ref{lemma_ovvio} with $\rho = r/5$, and \eqref{ovvio2} yields $\tilde B_{r/5}(x) \subset B_r(x)$. Hence, by exploiting $\sigma(x) \geq 1$ and \eqref{monotonicity_formula}, we get, with $c_n=\omega_n\,5^{-n}$, \[ \begin{split} c_n\,r^n &\leq \sigma(x)\,\omega_n\,\left(\frac{r}{5}\right)^n \\ &\leq \left( \|V\|(B_{r/5}(x)) + \|V\|(\tilde B_{r/5}(x)) \right) \, e^{(|\lambda|+C)\,r/5} \\ &\leq 2\, \|V\|(B_r(x)) \, e^{(|\lambda|+C)\,r_0} \leq 8\,\H^n(K\cap B_r(x))\,, \end{split} \] up to further decreasing $r_0$. \end{proof} \begin{corollary} \label{cor:Minkowski final for soap films} Let $(K,E)$ be a generalized minimizer of $\psi(\varepsilon)$, and let $f \colon \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ be Lipschitz. Then \begin{equation} \label{e:Minkowski final} \lim_{\eta \to 0^+} \frac{|U_\eta(f(\mathrm{cl}\,(K))) \cap A |}{2\eta} = \H^n(f(\mathrm{cl}\,(K)) \cap A) \end{equation} whenever $A$ is a Borel set with $\H^n(f(\mathrm{cl}\,(K)) \cap \partial A )=0$. \end{corollary} \begin{proof} Immediate from Theorem \ref{thm fine}, Proposition \ref{prop_localization} and Theorem \ref{thm boundary density estimates}. \end{proof} \section{Minimality with respect to Lipschitz deformations} \label{s:almgren_minimal} In this section we complete the first step of our strategy, by proving the Almgren minimality of the exterior collapsed set. \begin{theorem}\label{proposition Lipschitz minimizing} If $(K,E)$ is a generalized minimizer of $\psi(\varepsilon)$, $B_r(x)\subset\joinrel\subset\Omega\setminus\mathrm{cl}\,(E)$, and $f:\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$ is a Lipschitz map with $\{f\ne{\rm id}\,\}\subset B_r(x)$ and $f(B_r(x)) \subset B_r(x)$, then $\H^n(K\cap B_r(x))\le\H^n(f(K)\cap B_r(x))$. \end{theorem} As explained in the introduction, an important tool in the proof is the construction of one-sided neighborhoods of $K$. This point is discussed in the following lemma, which is, in fact, an extension of \cite[Lemma 3.2]{kms2} (which corresponds to the case $U=\emptyset$).
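To fix ideas, consider the model case in which $E=U=\emptyset$ and $K$ is a smooth, two-sided, compact hypersurface contained in $\Omega$ (so that one can take $\Sigma=\emptyset$ in the lemma below): the construction then produces the one-sided collar
\[
F_{\delta,\eta}=\big\{x+t\,u(x)\,\nu(x)\,:\,x\in K\,,\,0<t<1\big\}\,,
\]
with $u$ and $\nu$ as in the statement below, whose boundary in $\Omega$ consists of $K$ itself together with the nearby parallel sheet $\{x+u(x)\,\nu(x):x\in K\}$; this is the geometric origin of the factor $2$ multiplying $\H^n(K\setminus(\mathrm{cl}\,(U)\cup\partial^*E))$ in \eqref{osf3}, in accordance with the multiplicity-two count of collapsed surfaces.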
\begin{figure} \input{a0a1u.pstex_t}\caption{{\small The construction in Lemma \ref{l:one_sided_fattening} gives a one-sided neighborhood of $K$ away from $\mathrm{cl}\,(E)\cup\mathrm{cl}\,(U)\cup W$, thus defining an open set $F_{\delta,\eta}$ which contains $K\setminus \mathrm{cl}\,(U)$ in its boundary, and which collapses onto $K\setminus \mathrm{cl}\,(U)$ as $\eta\to 0^+$.}}\label{fig a0a1U} \end{figure} \begin{lemma} \label{l:one_sided_fattening} Let $K \subset \Omega$ be a relatively compact and $\H^n$-rectifiable set, let $E \subset \Omega$ be an open set with $\Omega \cap \mathrm{cl}\,(\partial^*E) = \Omega \cap \partial E \subset K$, and let $U \subset\joinrel\subset \Omega \setminus \mathrm{cl}\, (E)$ be an open set. Suppose that $\Sigma \subset K$ is a closed subset with empty interior relatively to $K$ such that $K \setminus \Sigma$ is a smooth hypersurface in $\Omega$ admitting a unit normal vector field $\nu \in C^\infty(K \setminus \Sigma ; \mathbb{S}^n)$, i.e., with $\nu(x)^\perp = T_x(K \setminus \Sigma)$ at every $x \in K \setminus \Sigma$. Set \[ M=K \setminus (\Sigma \cup \partial E \cup \mathrm{cl}\,(U))\,, \] and decompose $M=M_0\cup M_1$ by letting \begin{equation*} M_0 = (K \setminus \Sigma) \setminus (\mathrm{cl}\, (E) \cup \mathrm{cl}\, (U))\,, \quad M_1 = (K \setminus \Sigma) \cap E\,. \end{equation*} Let $\|A_M\|(x)$ be the maximal principal curvature (in absolute value) of $M$ at $x$. For given $\eta,\delta \in \left( 0, 1 \right)$, define a positive function $u \colon M \to \left(0,\eta\right]$ by setting \begin{equation} \label{osf:the function u} u(x) = \min\left\lbrace \eta\,,\, \frac{{\rm dist}(x,\Sigma \cup \partial E \cup \mathrm{cl}\,(U) \cup W)}{2}\,,\, \frac{\delta}{\|A_M\|(x)} \right\rbrace \end{equation} (with the convention that $\delta/\|A_M\|(x)=+\infty$ when $\|A_M\|(x)=0$), and let \begin{eqnarray*} A_0 &=& \left\lbrace x + t\,u(x)\,\nu(x)\,\colon\, x \in M_0\,,\, 0 < t < 1 \right\rbrace\,,\\ A_1 &=& \left\lbrace x + t\,u(x)\,\nu(x)\,\colon\, x \in M_1\,,\, 0 < t < 1 \right\rbrace\,,\\ F_{\delta,\eta} &=& A_0 \cup (E \setminus \mathrm{cl}\, (A_1))\,. \end{eqnarray*} Then, $F_{\delta,\eta} \subset \Omega \setminus \mathrm{cl}\,(U)$ is open, $\partial F_{\delta,\eta}$ is $\H^n$-rectifiable, and \begin{eqnarray} \label{osf1} K \setminus \mathrm{cl}\, (U) & \subset & \Omega \cap \partial F_{\delta,\eta} \setminus \mathrm{cl}\,(U)\,, \\ \label{osf4} \partial F_{\delta,\eta} \cap \partial U &\subset & K \cap \partial U\,. \end{eqnarray} Moreover, \begin{equation} \label{osf2} \lim_{\eta \to 0^+} |F_{\delta,\eta} \, \Delta\, E| = 0 \qquad \mbox{for every $\delta$}\,, \end{equation} and \begin{equation} \label{osf3} \begin{split} \limsup_{\eta \to 0^+} \;& \H^n(\Omega \cap \partial F_{\delta,\eta} \setminus \mathrm{cl}\,(U)) \\& \leq (1+\delta)^n \, \Big( \H^n(\Omega \cap \partial^*E) + 2\,\H^n(K \setminus (\mathrm{cl}\,(U) \cup \partial^*E)) \Big)\,. \end{split} \end{equation} \end{lemma} Without losing sight of the general picture, we first prove Theorem \ref{proposition Lipschitz minimizing}, and then take care of proving Lemma \ref{l:one_sided_fattening}. \begin{proof}[Proof of Theorem \ref{proposition Lipschitz minimizing}] Let us recall from \cite[Lemma 3.1]{kms2} that if $N$ is a smooth hypersurface in $\mathbb{R}^{n+1}$, then there exists a closed set $J \subset N$ with empty interior in $N$ such that a smooth unit normal vector field to $N$ can be defined on $N \setminus J$.
Combining this fact with Theorem \ref{thm basic regularity}-(ii), we find that if $(K,E)$ is a generalized minimizer of $\psi(\varepsilon)$, then there exists a subset\footnote{This set $\Sigma$ could be much larger than the singular set of $K$, but is denoted with same letter used for the singular set of $K$ since the notation should be clear from the context.} $\Sigma \subset K$, closed and with empty interior relatively to $K$, such that $K \setminus \Sigma$ is a smooth \emph{orientable} hypersurface in $\mathbb{R}^{n+1}$. We shall denote $\nu$ a smooth unit normal vector field on $K \setminus \Sigma$. \medskip Let us fix $\rho>r$ such that $B_\rho(x)\subset\joinrel\subset \Omega\setminus \mathrm{cl}\,(E)$ and \begin{equation} \label{choice of rho} \H^n(K \cap \partial B_\rho(x))=0\,. \end{equation} Since $f(K)\cap \partial B_\rho(x)=K\cap \partial B_\rho(x)$, we can apply Corollary \ref{cor:Minkowski final for soap films} to find \[ \lim_{t\to 0^+}\frac{|U_t(f(K))\cap B_\rho(x)|}{2\,t}=\H^n\big(f(K)\cap B_\rho(x)\big)\,. \] By applying the coarea formula to the distance function from $f(K)$, see e.g. \cite[Theorem 18.1, Remark 18.2]{maggiBOOK}, we find that $v(t)=|U_t(f(K)) \cap B_\rho(x)|$ satisfies \[ v(t)=\int_{0}^t \H^n\big(\partial (U_\eta(f(K))) \cap B_\rho(x)\big) \, d\eta\,, \] and is thus absolutely continuous, with \[ v'(\eta)= \H^n\big(\partial (U_\eta(f(K))) \cap B_\rho(x)\big)\,, \qquad \mbox{for a.e. $\eta > 0$}\,; \] moreover, again for a.e. $\eta>0$, \begin{equation} \label{uetaK good} \begin{split} &\mbox{$U_\eta(f(K))$ is a set of finite perimeter} \\ &\mbox{whose reduced boundary is $\H^n$-equivalent to $\partial[U_\eta(f(K))]$}\,. \end{split} \end{equation} Therefore, for every $t>0$ there are points of differentiability $\eta_1(t), \eta_2(t) \in \left(0,t\right)$ of $v$ such that \eqref{uetaK good} holds at $\eta=\eta_1(t),\eta_2(t)$, and \[ v'(\eta_1(t)) \leq \frac{v(t)}{t} \leq v'(\eta_2(t))\,. \] Picking any sequence $t_j\to 0^+$, and correspondingly setting $\eta_j=\eta_1(t_j)$, we thus find \begin{eqnarray} \nonumber \liminf_{j \to \infty} \H^n\Big(\partial ( U_{\eta_j}(f(K)) )\cap B_\rho(x)\Big)& \leq& \liminf_{j \to \infty} \frac{|U_{t_j}(f(K)) \cap B_\rho(x)|}{t_j} \\ \label{final_2} &=&2\,\H^n(f(K) \cap B_\rho(x))\,. \end{eqnarray} We also notice that since $\{\mathrm{cl}\,(U_{\eta_j}(f(K)))\}_j$ is a decreasing sequence of sets with monotone limit $\mathrm{cl}\,(f(K))$, we have \begin{equation} \label{mancava} \lim_{j\to\infty}\H^n\Big(\partial B_\rho(x)\cap \mathrm{cl}\,(U_{\eta_j}(f(K)))\Big)=\H^n\Big(\partial B_\rho(x)\cap \mathrm{cl}\,(f(K))\Big)=\H^n(\partial B_\rho(x)\cap K)=0\,, \end{equation} again thanks to \eqref{choice of rho}. We now pick $\delta\in(0,1)$, and define $\{G_j\}_j$ by letting \begin{equation} \label{almgren_competitors} G_j =\left( U_{\eta_j}(f(K)) \cap B_\rho (x) \right) \cup F_j \subset \Omega\,, \end{equation} with $F_j=F_{\delta,\eta_j}$ as in Lemma \ref{l:one_sided_fattening} with $U = B_\rho(x)$. Since \begin{equation} \label{boundary containment} \partial G_j \subset \Big(\partial (U_{\eta_j}(f(K))) \cap B_\rho(x) \Big) \cup \Big( \mathrm{cl}\,(U_{\eta_j}(f(K))) \cap \partial B_\rho(x) \Big) \cup \partial F_j\,, \end{equation} by \eqref{uetaK good} we see that $\partial G_j$ is $\H^n$-rectifiable for every $j$. 
Next, we make the following claim \begin{equation} \label{claim}\,\mbox{$\Omega \cap \partial G_j$ is $\mathcal{C}$-spanning $W$ for every $j$}\,, \end{equation} which implies that $G_j$ is a competitor for the problem $\psi(|G_j|)$. In this way, by \[ E \, \Delta \, G_j \subset (E\,\Delta\, F_j) \cup U_{\eta_j}(f(K))\,, \] and by \eqref{osf2} we find that $|G_j| \to \varepsilon$ as $j \to \infty$, and since $\psi(\varepsilon)$ is lower semicontinuous on $(0,\infty)$, see \cite[Theorem 1.9]{kms}, we conclude that \begin{eqnarray*} \H^n(\Omega\cap\partial^*E) + 2\,\H^n(K\setminus\partial^*E) &=&\psi(\varepsilon) \le\liminf_{j\to\infty}\psi(|G_j|) \\ &\le&\liminf_{j\to\infty}\H^n(\Omega\cap\partial G_j)\,. \end{eqnarray*} In turn, \eqref{boundary containment} implies that \begin{equation} \label{estimate sum} \begin{split} \H^n(\Omega \cap \partial G_j) \leq \H^n(& \partial (U_{\eta_j}(f(K))) \cap B_\rho(x)) + \H^n\Big(\mathrm{cl}\,(U_{\eta_j}(f(K))) \cap \partial B_\rho(x)\Big) \\ & + \H^n(\partial F_j \cap \Omega \setminus \mathrm{cl}\,(B_\rho(x))) + \H^n(\partial F_j \cap \partial B_\rho(x))\,, \end{split} \end{equation} and thus, thanks to \eqref{choice of rho}, \eqref{final_2}, \eqref{mancava}, \eqref{osf4} and \eqref{osf3}, we have that \begin{eqnarray*} &&\H^n(\Omega \cap \partial^* E) + 2\, \H^n (K \setminus \partial ^*E) \\ &&\leq2\,\H^n(f(K) \cap B_\rho(x)) + (1+\delta)^n \, \Big\{ \H^n(\Omega \cap \partial^*E) + 2\,\H^n(K \setminus (\mathrm{cl}\,(B_\rho(x)) \cup \partial^*E)) \Big\}\,. \end{eqnarray*} By using again \eqref{choice of rho} and letting $\delta \to 0^+$ we deduce \[ \H^n(K\cap B_\rho(x))\le\H^n(f(K)\cap B_\rho(x))\,. \] To complete the proof we are thus left to prove our claim \eqref{claim}. \medskip Given $\gamma \in \mathcal{C}$ we want to show that \begin{equation} \label{comoda} \gamma\cap\Omega\cap\partial G_j\ne\emptyset\,. \end{equation} If $\gamma \cap (K \setminus \mathrm{cl}\,(B_\rho(x))) \ne \emptyset$, then by \eqref{osf1} we also have $\gamma \cap (\Omega\cap \partial F_j \setminus \mathrm{cl}\, (B_\rho(x))) \ne \emptyset$, and \eqref{comoda} holds. We can then suppose that $\gamma \cap (K \setminus \mathrm{cl}\,(B_\rho(x))) = \emptyset$, so that, since $K$ is $\mathcal{C}$-spanning $W$, $\gamma \cap K \cap \mathrm{cl}\,(B_\rho(x)) \neq \emptyset$. \medskip If there is $x_0 \in \gamma \cap K \cap \partial B_\rho(x)$, then necessarily $x_0 \in \partial G_j$. Indeed, $G_j \cap \partial B_\rho(x) = \emptyset$ by construction, so that $x_0 \notin G_j$; on the other hand, since $\{f\ne{\rm id}\,\}\subset\joinrel\subset B_\rho(x)$, we have that $x_0 \in f(K)$, and thus $x_0 \in \mathrm{cl}\,(U_{\eta_j}(f(K)) \cap B_\rho(x)) \subset \mathrm{cl}\,(G_j)$. \medskip Hence we can assume $\gamma\cap (K \setminus B_\rho(x)) = \emptyset$, and thus the existence of $x_0 \in \gamma \cap K \cap B_\rho(x)$. By \cite[Lemma 2.2]{kms}, there exists a connected component $\gamma_0$ of $\gamma \cap \mathrm{cl}\,(B_\rho(x))$ which is diffeomorphic to an interval, whose end-points $p,q$ belong to different connected components of $\partial B_\rho(x) \setminus K$, and such that $\gamma_0 \setminus \{p,q\} \subset B_\rho(x)$. Arguing as in \cite[Proof of Theorem 4, Step 3]{DLGM}, we conclude that in fact $p=f(p)$ and $q=f(q)$ belong to the closures of distinct connected components of $B_\rho(x) \setminus f(K)$, and thus there exists $y_0 \in (\gamma_0\setminus\{p,q\}) \cap f(K)$. 
Let $h$ be the function $h(y) = {\rm dist}(y, f(K)\cap B_\rho(x))$, and consider its restriction to the interval $\gamma_0$. If $\min\{h(p),h(q)\} \leq \eta_j$, then either $p$ or $q$ belongs to $\partial B_\rho(x)\cap\mathrm{cl}\,(U_{\eta_j}(f(K))\cap B_\rho(x)) \subset \partial G_j$. Otherwise, both $h(p) > \eta_j$ and $h(q) > \eta_j$, whereas $h(y_0)=0$, and thus, by the intermediate value theorem, $\gamma_0 \cap B_\rho(x) \cap \partial(U_{\eta_j}(f(K))) \neq \emptyset$. Thus $\gamma_0\cap\partial G_j\ne\emptyset$, and the proof is complete. \end{proof} \begin{proof}[Proof of Lemma \ref{l:one_sided_fattening}] Let us recall that we have set \begin{eqnarray*} M&=&K \setminus (\Sigma \cup \partial E \cup \mathrm{cl}\,(U))=M_0\cup M_1\,, \\ M_0&=&M\setminus\mathrm{cl}\,(E)=(K \setminus \Sigma) \setminus (\mathrm{cl}\, (E) \cup \mathrm{cl}\, (U))\,, \\ M_1&=&M\cap E= (K \setminus \Sigma) \cap E\,, \end{eqnarray*} and \[ A_0=g(M_0\times(0,1))\,,\qquad A_1=g(M_1\times(0,1))\,,\qquad F=F_{\delta,\eta}=A_0\cup \big(E\setminus\mathrm{cl}\,(A_1)\big)\,, \] where $g:M\times\mathbb{R}\to\mathbb{R}^{n+1}$ and $u:M\to(0,\eta]$ are defined by setting \begin{eqnarray*} u(x)&=&\min\Big\{ \eta\,,\, \frac{{\rm dist}(x,\Sigma \cup \partial E \cup \mathrm{cl}\,(U) \cup W)}{2}\,,\, \frac{\delta}{\|A_M\|(x)}\Big\}\,, \\ g(x,t)&=&x+t\,u(x)\,\nu(x)\,. \end{eqnarray*} We divide the argument into two steps. \medskip \noindent {\it Step one:} In this step we prove \eqref{osf4} as well as \begin{eqnarray} \label{opening -2} &&\mbox{$F$ is open with $F\subset\Omega \setminus \mathrm{cl}\,(U)$}\,, \\ \label{opening -1} &&(K\setminus \mathrm{cl}\, (U) )\cup\Big\{x+u(x)\,\nu(x):x\in M\Big\}=\Omega\cap\partial F \setminus \mathrm{cl}\, (U)\,. \end{eqnarray} Notice that \eqref{opening -1} immediately implies \eqref{osf1}, while \eqref{osf2} follows from $F \, \Delta \, E \subset A_0 \cup\, \mathrm{cl}\,(A_1)\subset I_\eta(K)$ and the fact that, as $\eta\to 0^+$, $|I_\eta(K)|\to|K|=0$. Therefore in step two we will only have to prove the validity of \eqref{osf3}. \medskip Since $M_0$ and $M_1$ are relatively open in $M$ and $u$ is positive on $M$, it is easily seen that $A_0$ and $A_1$ are open, and thus that $F$ is open. Since $M\subset\Omega\setminus \mathrm{cl}\,(U)$ and $u(x)<{\rm dist}(x,W \cup \mathrm{cl}\,(U))$ for every $x\in M$, we deduce that $A_0=g(M_0\times(0,1))\subset\Omega\setminus\mathrm{cl}\,(U)$; since trivially $E\subset\Omega\setminus\mathrm{cl}\,(U)$, we have proved \eqref{opening -2}. \medskip As a preliminary step towards proving \eqref{opening -1}, we show that, for $k=0,1$, we have \begin{equation} \label{opening 0} M_k\cup\big\{x+u(x)\,\nu(x):x\in M_k\big\}\,\,\subset\,\, \Omega \cap \partial A_k \,\,\subset\,\, K\cup\big\{x+u(x)\,\nu(x):x\in M_k\big\}\,. \end{equation} The first inclusion in \eqref{opening 0} is due to the fact that if $y=x+s\,\nu(x)$ for some $x\in M$ and $|s|< 1/\|A_M\|(x)$, then $x$ and $s$ are uniquely determined in $M$ and $[-1/\|A_M\|(x),1/\|A_M\|(x)]$. The second inclusion in \eqref{opening 0} follows because if $y\in \Omega \cap \partial A_k$, then $y$ is the limit of a sequence $x_j+t_j u(x_j)\,\nu(x_j)$ with $t_j\in(0,1)$, $x_j\in M_k$, $t_j\to t_0\in \left[0,1\right]$ and $x_j\to x_0\in \mathrm{cl}\,(M_k)$. If $x_0\in\mathrm{cl}\,(M_k)\setminus M_k\subset\Sigma\cup\partial E\cup\mathrm{cl}\,(U) \cup W$, then $u(x_j) \to 0$ and therefore $y=x_0\in K$.
If $x_0\in M_k$ then clearly $t_0\in\{0,1\}$: when $t_0=0$, then $y=x_0\in K$; when, instead, $t_0=1$, then $y=x_0+u(x_0)\,\nu(x_0)$ for $x_0\in M_k$ as claimed. \medskip We prove the inclusion $\supset$ in \eqref{opening -1} by showing that actually \begin{equation} \label{fix 4} \Omega\cap\partial F\,\subset\,\,K\cup\Big\{x+u(x)\,\nu(x):x\in M\Big\}\,. \end{equation} Since the boundary of the union and of the intersection of two sets is contained in the union of the boundaries, and since the boundary of a set coincides with the boundary of its complement, the inclusion $\partial(\mathrm{cl}\,(A_1))\subset\partial A_1$ gives \begin{eqnarray}\nonumber \Omega\cap\partial F&\subset&\Omega\cap\Big(\partial A_0\cup\partial[E\setminus\mathrm{cl}\,(A_1)]\Big)\,\subset\, \Omega\cap\Big(\partial A_0\cup\partial E\cup\partial [\mathbb{R}^{n+1}\setminus\mathrm{cl}\,(A_1)]\Big) \\\label{recalling} &=& \Omega\cap\Big(\partial A_0\cup\partial E\cup\partial (\mathrm{cl}\,(A_1))\Big)\,\subset\,\Omega\cap\big(\partial E\cup\partial A_0\cup\partial A_1\big)\,. \end{eqnarray} Hence, \eqref{fix 4} follows from $\Omega\cap\partial E\subset K$ and \eqref{opening 0}. \medskip We prove \eqref{osf4}, i.e. $\partial F\cap\partial U\subset K\cap\partial U$. Indeed, $M \cap \mathrm{cl}\,(U) = \emptyset$ and $u(x) < {\rm dist}(x,\mathrm{cl}\,(U))$ for every $x\in M$ give \begin{eqnarray*} \partial U\cap \big\{x+u(x)\,\nu(x):x\in M\big\}=\emptyset\,, \end{eqnarray*} so that \eqref{osf4} follows immediately from \eqref{fix 4}. \medskip Finally, we complete the proof of \eqref{opening -1} by showing the inclusion $\subset$. Since $\mathrm{cl}\,(U)\cap\partial E=\emptyset$ and $M=K \setminus (\Sigma \cup \partial E \cup \mathrm{cl}\,(U))$, this amounts to show that \begin{eqnarray} \label{opening 3} M\cup \Big\{x+ u(x)\,\nu(x):x\in M\Big\}&\subset&\Omega\cap\partial F \setminus \mathrm{cl}\,(U)\,, \\ \label{fix 2} \Sigma\setminus(\partial E \cup \mathrm{cl}\,(U))&\subset&\Omega\cap\partial F\setminus \mathrm{cl}\,(U)\,, \\ \label{fix 3} \Omega\cap\partial E&\subset&\Omega\cap\partial F\setminus \mathrm{cl}\,(U)\,. \end{eqnarray} {\it Proof of \eqref{opening 3}}: Since $M_0\cap(\mathrm{cl}\,(E) \cup \mathrm{cl}\,(U) )=\emptyset$, $M_1\subset E$, and $u(x)<{\rm dist}(x,\partial E \cup \mathrm{cl}\,(U))$ for every $x\in M$, we find \begin{equation} \label{opening 1} g\big(M_0\times[0,1]\big)\cap(\mathrm{cl}\,(E) \cup \mathrm{cl}\,(U) )=\emptyset\,,\qquad g\big(M_1\times[0,1]\big)\subset E\,. \end{equation} Let us notice that, if $X,Y\subset\mathbb{R}^{n+1}$, $V\subset\mathbb{R}^{n+1}$ is open, and $X\cap V=Y\cap V$, then $V\cap\partial X=V\cap\partial Y$. We can apply this remark in the open sets $V=E$ and $V=\mathbb{R}^{n+1}\setminus\mathrm{cl}\,(E)$, together with $A_0\cap\mathrm{cl}\,(E)=\emptyset$ and $A_1\subset E$ (both consequences of \eqref{opening 1}), the definition of $F=A_0\cup(E\setminus\mathrm{cl}\,(A_1))$, and $\mathrm{cl}\,(U)\cap E=\emptyset$, to first deduce that \[ (\partial F)\setminus\mathrm{cl}\,(E)=(\partial A_0)\setminus\mathrm{cl}\,(E)\,,\qquad E\cap\partial F=E\cap\partial A_1\,, \] and then that \begin{equation} \label{opening 1.1} \Big(\big(\partial A_0\big)\setminus(\mathrm{cl}\,(E) \cup \mathrm{cl}\,(U) ) \Big)\,\cup\,\Big(E\cap\partial A_1\Big)\subset\partial F \setminus \mathrm{cl}\,(U)\,. 
\end{equation} Since \eqref{opening 0} and \eqref{opening 1} give \begin{equation} \label{opening 2} g(M_0\times\{0,1\})\subset \Omega \cap \partial A_0\setminus(\mathrm{cl}\,(E) \cup \mathrm{cl}\, (U) ) \,,\qquad g(M_1\times\{0,1\})\subset E\cap \partial A_1\,, \end{equation} we deduce \eqref{opening 3} from \eqref{opening 1.1} and \eqref{opening 2}. \medskip \noindent {\it Proof of \eqref{fix 2}}: since $M_1=(K\setminus\Sigma)\cap E$ and $\Sigma$ has empty interior in $K$, we find that $\mathrm{cl}\,(M_1)\cap E=K\cap E$. At the same time, $M\subset\Omega\cap\partial F \setminus \mathrm{cl}\,(U)$ gives $M_1\cap E\subset E\cap\partial F$ and thus $\mathrm{cl}\,(M_1)\cap E\subset E\cap\partial F$: hence, \[ \Sigma\cap E\,\,\subset \,\, K\cap E\,\,=\,\,\mathrm{cl}\,(M_1)\cap E\,\,\subset\,\,\Omega\cap\partial F \setminus \mathrm{cl}\,(U)\,. \] Setting for the sake of brevity $T=\Omega\setminus(\mathrm{cl}\,(E) \cup \mathrm{cl}\, (U))$, so that $T$ is open, we notice that $M_0=(K\setminus\Sigma)\cap T$ implies $\Sigma\cap T\subset \Omega \cap \mathrm{cl}\,(M_0)\cap T=K\cap T$, while $M\subset\Omega\cap \partial F \setminus \mathrm{cl}\,(U)$ and $M_0=M\cap T$ give $\Omega \cap \mathrm{cl}\,(M_0) \cap T\,\subset \,T\cap \partial F $; hence \[ \Sigma\setminus(\mathrm{cl}\,(E) \cup \mathrm{cl}\,(U) )\,\,\subset \,\,K\setminus(\mathrm{cl}\,(E) \cup \mathrm{cl}\,(U) )\,\,\subset\,\,\Omega \cap \partial F\setminus(\mathrm{cl}\,(E)\cup\mathrm{cl}\,(U) )\,. \] By combining the last two displayed inclusions, we obtain \eqref{fix 2}. \medskip \noindent {\it Proof of \eqref{fix 3}}: since $F$ and $E$ coincide in the complement of $\mathrm{cl}\,(A_0)\cup\mathrm{cl}\,(A_1)$, and since $\partial E \cap \mathrm{cl}\,(U)=\emptyset$, we have \[ \Omega\cap\partial E\setminus\big(\mathrm{cl}\,(A_0)\cup\mathrm{cl}\,(A_1)\big)\,\,=\,\,\Omega\cap\partial F\setminus\big(\mathrm{cl}\,(A_0)\cup\mathrm{cl}\,(A_1)\cup\mathrm{cl}\,(U)\big)\,\,\subset\,\,\Omega\cap\partial F\setminus\mathrm{cl}\,(U)\,. \] Now let $y\in\Omega\cap\partial E\cap\mathrm{cl}\,(A_1)$: since $A_1=g(M_1\times(0,1))$ and $y\not\in g(M_1\times[0,1])$ by \eqref{opening 1}, we find that $y$ is in the closure of $M_1$, and thus of $M$, relatively to $K$: thus $y\in\Omega\cap\mathrm{cl}\,(M)\setminus \mathrm{cl}\,(U)$; at the same time, by \eqref{opening 3}, we have $M\subset\Omega\cap\partial F \setminus \mathrm{cl}\,(U)$ and thus $\Omega\cap\mathrm{cl}\,(M)\setminus \mathrm{cl}\,(U)\subset\Omega\cap\partial F \setminus \mathrm{cl}\,(U)$; combining the two facts, \[ \Omega\cap\partial E\cap\mathrm{cl}\,(A_1)\subset \Omega\cap\partial F \setminus \mathrm{cl}\,(U)\,. \] We argue similarly to show that $\Omega\cap\partial E\cap\mathrm{cl}\,(A_0)\subset\Omega\cap\partial F \setminus \mathrm{cl}\,(U)$ and thus prove \eqref{fix 3}. \medskip \noindent {\it Step two}. We prove \eqref{osf3}. First, we notice that thanks to \eqref{opening -1} \begin{equation} \label{energy estimate opening1} \H^n(\Omega\cap\partial F \setminus \mathrm{cl}\,(U))\le\H^n(K \setminus \mathrm{cl}\,(U))+\H^n\Big(\big\{x+u(x)\,\nu(x):x\in M\big\}\Big)\,. \end{equation} Since ${\rm dist}(x,\Sigma\cup\partial E \cup \mathrm{cl}\,(U) \cup W)>0$ and $\|A_M\|(x)<\infty$ for every $x\in M$, we find that \[ M_\eta=\big\{x\in M:u(x)=\eta\big\}=\Big\{x\in M\,\colon\,\|A_M\|(x)\le\frac\delta\eta\,,\,{\rm dist}(x,\Sigma\cup\partial E\cup \mathrm{cl}\,(U)\cup W)\ge2\eta\Big\} \] is monotonically increasing towards $M$ as $\eta\to 0^+$. 
Moreover, $x\mapsto x+u(x)\,\nu(x)=x+\eta\,\nu(x)$ is smooth on $M_\eta$, and if $\k_i$ are the principal curvatures of $M$ with respect to $\nu$, \begin{equation} \label{energy estimate opening2} \H^n\Big(\big\{x+u(x)\,\nu(x):x\in M_\eta\big\}\Big)=\int_{M_\eta}\prod_{i=1}^n(1+\eta\,\k_i) \le(1+\delta)^n\,\H^n(M_\eta)\le(1+\delta)^n\,\H^n(M)\,. \end{equation} Letting $\eta\to 0^+$, $g(M_\eta\times\{1\})=\{x+u(x)\,\nu(x):x\in M_\eta\}$ is increasingly converging to $g(M\times\{1\}) = \{x+u(x)\,\nu(x):x\in M\}$, so that \eqref{energy estimate opening2} yields \begin{equation} \label{estimate M up} \H^n(\{ x+u(x)\,\nu(x)\,\colon\, x\in M \}) \leq (1+\delta)^n\,\H^n(M)\,, \end{equation} and therefore from \eqref{energy estimate opening1} we deduce \begin{equation} \label{energy estimate opening final} \limsup_{\eta\to 0^+}\H^n(\Omega\cap\partial F \setminus \mathrm{cl}\,(U))\le\H^n(K \setminus \mathrm{cl}\,(U))+(1+\delta)^n\,\H^n(M)\,. \end{equation} Finally, \eqref{osf3} follows from \eqref{energy estimate opening final} once we observe that $M=K\setminus(\Sigma\cup\partial E \cup \mathrm{cl}\,(U))\subset K\setminus(\partial^*E \cup \mathrm{cl}\,(U) )$, so that \begin{eqnarray*} \H^n(K\setminus \mathrm{cl}\,(U))+\H^n(M)&=&\H^n(\Omega\cap\partial^*E)+\H^n(K\setminus(\partial^*E \cup \mathrm{cl}\,(U)))+\H^n(M) \\ &\le&\H^n(\Omega\cap\partial^*E)+2\,\H^n(K\setminus(\partial^*E \cup \mathrm{cl}\,(U) ))\,, \end{eqnarray*} as required. \end{proof} \section{Wetting competitors and exclusion of points of type $Y$} \label{s:noY} By Theorem \ref{proposition Lipschitz minimizing}, $K$ is an Almgren minimal set in $\Omega\setminus\mathrm{cl}\,(E)$. As we shall see in the next section, this property is compatible with $K$ containing $(n-1)$-dimensional submanifolds of $Y$-points, that is, points such that $K$ is locally diffeomorphic to a cone of type $Y$ in $\mathbb{R}^{n+1}$. The goal of this section is to show that, for reasons related to the specific properties of the variational problem $\psi(\varepsilon)$, such points cannot exist. As explained in detail in the next section, this bit of information will prove crucial in closing the proof of Theorem \ref{t:main}. \begin{theorem}\label{prop:noYx} If $(K,E)$ is a generalized minimizer of $\psi(\varepsilon)$, then there cannot be $x_0\in K\setminus\mathrm{cl}\,(E)$ such that there exist $\a\in(0,1)$, a $C^{1,\a}$-diffeomorphism $\Phi:\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$ with $\Phi(0)=x_0$, and $r_0>0$ such that, setting $A=\Phi(B_{r_0})$, \begin{equation} \label{K is a perturbation of Y} \Phi\Big((\mathbf{Y}^1\times\mathbb{R}^{n-1})\cap B_{r_0}\Big)=A\cap K\,, \end{equation} and, in addition, \begin{equation} \label{e:properties of diffeo} \nabla\Phi(0)={\rm Id}\,\,,\qquad \| \nabla \Phi - {\rm Id}\, \|_{C^0(B_r)} \leq C\, r^\a\,,\qquad\forall r<r_0\,. \end{equation} \end{theorem} \begin{proof} The proof is achieved by showing that, up to decrease the value of $r_0$, there exist a constant $c_{\mathbf{Y}}=c_{\mathbf{Y}}(n)>0$ and a set $G \subset \Omega$ with \begin{equation} \label{contra:the set G} G \in \mathcal{E}\,, \quad |G|=\varepsilon\,, \quad \mbox{$\Omega \cap \partial G$ is $\mathcal{C}$-spanning $W$}\,, \end{equation} such that \begin{equation} \label{contra:the estimate} \H^n(\Omega \cap \partial G) \leq \mathcal F(K,E) - c_{\mathbf{Y}}\,r_0^n\,. \end{equation} The set $G$, defined in \eqref{the competitor!} below, is constructed in three stages, that we introduce as follows. 
We pick $x^*\in\Omega\cap\partial^*E$, so that, by Theorem \ref{thm basic regularity}, we can find an open cylinder $Q^*$ of height and radius $r^*>0$, centered at $x^*$, and with axis along $\nu_E(x^*)$, such that $E\cap Q^*$ is the subgraph of a smooth function $v$ defined over the cross section $D^*$ of $Q^*$, and such that the graph of $v$ has mean curvature $\l$ in the orientation induced by $\nu_E(x^*)$ (here $\l$ is the Lagrange multiplier of $(K,E)$). \begin{figure} \input{ystar.pstex_t} \caption{\small{The construction in step one of the proof of Theorem \ref{prop:noYx}.}} \label{fig ystar} \end{figure} Up to decreasing $r_0$ and $r^*$, we can make sure that $A=\Phi(B_{r_0})$ and $Q^*$ lie at positive distance, so that modifications of $(K,E)$ compactly supported in these two regions will not interact. We then argue in three stages: in the first stage (first three steps of the proof), we modify $(K,E)$ by replacing the collapsed surface $K\cap A$ with an open set of the form $\Phi(\Delta_{r_0})$, where $\Delta_{r_0}\subset B_{r_0}$ is constructed so as to achieve an ${\rm O}(r_0^n)$-gain in area, at the cost of an ${\rm O}(r_0^{n+1})$-increase in volume -- this is possible, of course, only because we are assuming that $x_0$ is a $Y$-point; in the second stage (step four of the proof), we construct a one-parameter family of modifications $\{E_t\}_{t\in(0,t_0)}$ of $E$, all supported in $Q^*$, with $|E_t|=|E|-t$, and such that the area increase from $\partial E$ to $\partial E_t$ is of order ${\rm O}(t)$; in the final stage, we apply Lemma \ref{l:one_sided_fattening} to the element of $\mathcal{K}$ constructed in stage one, and then use the volume-fixing variation of stage two to create the competitor $G$ which will eventually give the desired contradiction. \medskip \noindent {\it Step one:} There exist positive constants $v_0$ and $c_0$ such that, for every $\delta >0$, there is an open subset $Y^*_\delta \subset B_\delta \subset \mathbb{R}^2$ such that \begin{eqnarray} \label{2D:geometry} \mathbf{Y}^1 \cap B_\delta \subset Y^*_\delta\,, &\qquad & \mathrm{cl}\,(Y^*_\delta) \cap \partial B_\delta = \mathbf{Y}^1 \cap \partial B_\delta\,, \\ \label{e:2D gain} \quad |Y^*_\delta| = v_0 \,\delta^2\,, & \qquad & \H^1(\partial Y^*_\delta ) = 2\, \H^1(\mathbf{Y}^1 \cap B_\delta) - c_0\,\delta\,. \end{eqnarray} This follows from an explicit construction; see Figure \ref{fig ystar}. By scale invariance of the statement, we can assume that $\delta=1$. Let $\{A_i\}_{i=1}^3 = \mathbf{Y}^1 \cap \partial B_1$, and let $\{P_{1,2}, P_{2,3}, P_{1,3}\}$ be defined as follows: for $i,j \in \{1,2,3\}$, $i < j$, $P_{i,j}$ is the intersection of the straight lines $\ell_i$ and $\ell_j$ tangent to $\partial B_1$ and passing through $A_i$ and $A_j$, respectively. We also let $S_{i,j}$ be the closed disc sector centered at $P_{i,j}$ and corresponding to the arc $A_iA_j$. Finally, we define \begin{equation} \label{Y-competitor} Y^*=Y^*_1= B_1 \setminus \bigcup_{i<j} S_{i,j}\,. \end{equation} It is easily shown that \eqref{2D:geometry} and \eqref{e:2D gain} hold with \begin{equation} \label{exact values} v_0=\frac{3}{2}(2\,\sqrt{3}-\pi)\,, \qquad c_0 = 6-\pi\,\sqrt{3}\,; \end{equation} the elementary computation behind \eqref{exact values} is recorded below. \medskip \noindent {\it Step two:} We adapt to higher dimensions the construction of step one, see the set $\Delta_{r_0}$ defined in \eqref{deltar0} below.
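Before doing so, we record for the reader's convenience the computation behind \eqref{exact values}; here $O$ denotes the center of $B_1$. Since $\angle A_iOP_{i,j}=\pi/3$ and $OA_i\perp\ell_i$, the right triangle $OA_iP_{i,j}$ gives $|P_{i,j}A_i|=\tan(\pi/3)=\sqrt3$ and $|OP_{i,j}|=2$; in particular $|OA_i|^2+|P_{i,j}A_i|^2=|OP_{i,j}|^2$, so that $\partial B_1$ and $\partial B_{\sqrt3}(P_{i,j})$ intersect orthogonally at $A_i$ and $A_j$, each sector $S_{i,j}$ has opening $\pi/3$, and $B_1\cap S_{i,j}=B_1\cap B_{\sqrt3}(P_{i,j})$ (as $B_1$ is contained in the wedge bounded by $\ell_i$ and $\ell_j$). Splitting this lens along the chord $A_iA_j$ into two circular segments, we find
\[
|B_1\cap S_{i,j}|=\frac12\,\Big(\frac{2\pi}{3}-\sin\frac{2\pi}{3}\Big)+\frac{3}{2}\,\Big(\frac{\pi}{3}-\sin\frac{\pi}{3}\Big)=\frac{5\pi}{6}-\sqrt{3}\,,
\]
so that $|Y^*|=\pi-3\,\big(\frac{5\pi}6-\sqrt3\big)=\frac32\,(2\sqrt3-\pi)=v_0$. Moreover, $\partial Y^*$ consists of the three arcs $\partial B_{\sqrt3}(P_{i,j})\cap \mathrm{cl}\,(B_1)$, each of radius $\sqrt3$ and opening $\pi/3$, so that
\[
\H^1(\partial Y^*)=3\cdot\frac{\pi\,\sqrt{3}}{3}=\pi\,\sqrt{3}=6-c_0=2\,\H^1(\mathbf{Y}^1\cap B_1)-c_0\,.
\]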
We assign coordinates $x = (z,y) \in \mathbb{R}^2\times\mathbb{R}^{n-1}$ to points $x \in \mathbb{R}^{n+1}$, so that $(0,y)$ is the component of the vector $x$ along the spine of the cone $\mathbf{Y}^1 \times \mathbb{R}^{n-1} \subset \mathbb{R}^2 \times \mathbb{R}^{n-1}$, and $|z|$ is the distance of $x$ from the spine. We observe that, if $\mathbf{p} \colon \mathbb{R}^{n+1} \to \mathbb{R}^{n-1}$ denotes the orthogonal projection operator onto the spine, then the \emph{slice} of $B_{r_0}$ with respect to $\mathbf{p}$ at $y \in \mathbb{R}^{n-1}$ is given by \begin{equation} \label{fading disks} B_{r_0} \cap \mathbf{p}^{-1}(y) = \begin{cases} (0,y) + B^2_{\sqrt{r_0^2-|y|^2}} & \mbox{if $|y| < r_0$} \\ \emptyset & \mbox{otherwise}\,, \end{cases} \end{equation} where $B^2_{\rho}$ is the disc of radius $\rho$ in $\mathbb{R}^2 \times \{0\}$. Analogously, the slice of $(\mathbf{Y}^1 \times \mathbb{R}^{n-1}) \cap B_{r_0}$ with respect to $\mathbf{p}$ at $y$ is \begin{equation} \label{fading Ys} (\mathbf{Y}^1 \times \mathbb{R}^{n-1}) \cap B_{r_0} \cap \mathbf{p}^{-1}(y) = \begin{cases} (0,y) + \mathbf{Y}^1 \cap B^2_{\sqrt{r_0^2-|y|^2}} & \mbox{if $|y| < r_0$}\\ \emptyset &\mbox{otherwise}\,. \end{cases} \end{equation} For $\tau \in \left(0,1/2\right)$ to be chosen later, we pick $g \in C^\infty_{c}(\left[0,\infty\right))$ such that \begin{eqnarray*} g \equiv \tau \;\; \mbox{in $\left[ 0,\tau \right]$}\,, & \qquad & g \equiv 0 \;\; \mbox{in $\left[1,\infty\right)$}\,,\\ 0<g(t) \leq \sqrt{1-t^2} \;\; \mbox{in $\left[0,1\right)$}\,, & \qquad & g' \leq 0 \mbox{ and }|g'|\leq 2\,\tau \;\;\mbox{everywhere}\,, \end{eqnarray*} and set $g_{r_0}(s) = r_0\,g(s/r_0)$, see \begin{figure}\input{gr0.pstex_t}\caption{\small{The dampening function $g_{r_0}$. The set $U_{r_0}$ is obtained by rotating the graph of $g_{r_0}$ around the spine of $\mathbf{Y}^1\times\mathbb{R}^{n-1}$.}}\label{fig gr0}\end{figure} Figure \ref{fig gr0}. Next, let $U_{r_0}$ define the open tubular neighborhood \begin{equation} U_{r_0} = \Big\{ x=(z,y) \in \mathbb{R}^{n+1} \,\colon\, \mbox{$|y|<r_0$ and $|z| < g_{r_0}(|y|)$} \Big\}\,, \end{equation} and notice that $U_{r_0} \subset B_{r_0}$ by \eqref{fading disks} and the properties of $g$. Finally, we define the set \begin{equation}\label{deltar0} \Delta_{r_0} = \Big\{ x=(z,y) \in \mathbb{R}^{n+1} \, \colon \, \mbox{$|y|<r_0$ and $z \in Y^*_{g_{r_0}(|y|)}$} \Big\}\,. \end{equation} We claim that $\Delta_{r_0}$ is an open subset of $U_{r_0}$ (thus of $B_{r_0}$), with $\H^n$-rectifiable boundary, and such that \begin{eqnarray} \label{first containment} (\mathbf{Y}^1 \times \mathbb{R}^{n-1}) \cap U_{r_0}&\subset&\Delta_{r_0}\,, \\ \label{Delta volume} |\Delta_{r_0}| &\le& C(n)\,v_0\,r_0^{n+1}\,, \\ \label{Delta perimeter deficit} \H^n(\partial\Delta_{r_0})&\le& 2\,\H^n\Big((\mathbf{Y}^1\times\mathbb{R}^{n-1}) \cap U_{r_0}\Big)-C(n)\,c_0\,r_0^n\,. \end{eqnarray} Only \eqref{Delta volume} and \eqref{Delta perimeter deficit} require a detailed proof. To compute the volume of $\Delta_{r_0}$, we apply Fubini's theorem, step one, and the definition of $g_{r_0}$, to get \[ |\Delta_{r_0}| = \int_{B_{r_0}^{n-1}} |Y^*_{g_{r_0}(|y|)}| \, dy = v_0\, \int_{B_{r_0}^{n-1}} g_{r_0}(|y|)^2\,dy = v_0 \, r_0^{n+1}\, \int_{B_1^{n-1}} g(|w|)^2 \, dw\,. \] Similarly, we use the coarea formula (see e.g. 
\cite[Theorem 2.93 and Remark 2.94]{AFP}) to write \begin{equation} \label{coarea formula} \begin{split} \H^n(\partial \Delta_{r_0}) &- 2\, \H^n((\mathbf{Y}^1\times\mathbb{R}^{n-1}) \cap U_{r_0}) \\ &= \int_{B_{r_0}^{n-1}}\,dy \int_{\partial Y^*_{g_{r_0}(|y|)}} \frac{d\H^1(z)}{\mathrm{C}_{n-1}(\nabla^{\partial\Delta_{r_0}}\mathbf{p}(z))} - 2\,\int_{B_{r_0}^{n-1}} \H^1(\mathbf{Y}^1 \cap B^2_{g_{r_0}(|y|)}) \, dy \end{split} \end{equation} where $\nabla^{\partial\Delta_{r_0}}\mathbf{p}$ is the tangential gradient of $\mathbf{p}$ along $\partial \Delta_{r_0}$, and where $\mathrm{C}_{n-1}(L)$ is the $(n-1)$-dimensional coarea factor of a linear map $L \colon \mathbb{R}^n \to \mathbb{R}^{n-1}$. Standard calculations show that, for every $y \in B_{r_0}^{n-1}$, \[ \mathrm{C}_{n-1}(\nabla^{\partial\Delta_{r_0}} \mathbf{p}(z)) = \left( 1 + |g_{r_0}'(|y|)|^2 \right)^{-\frac{n-1}{2}}\qquad \mbox{for $\H^1$-a.e. $z \in \partial Y^*_{g_{r_0}(|y|)}$}\,, \] so that \eqref{coarea formula} allows to estimate \begin{eqnarray*} \H^n(\partial \Delta_{r_0}) & - & 2\, \H^n((\mathbf{Y}^1\times\mathbb{R}^{n-1}) \cap U_{r_0}) \\ &\leq & \int_{B_{r_0}^{n-1}} \left( ( 1 + 4\,\tau^2 )^{\frac{n-1}{2}} \, \H^1(\partial Y^*_{g_{r_0}(|y|)}) - 2\,\H^1(\mathbf{Y}^1 \cap B^2_{g_{r_0}(|y|)}) \right) \,dy\\ &=& \left( (1+4\,\tau^2)^{\frac{n-1}{2}} \,\H^1(\partial Y^*_1) - 2\, \H^1(\mathbf{Y}^1 \cap B_1^2) \right) \, r_0^n \, \int_{B_1^{n-1}} g(|w|) \, dw \,, \end{eqnarray*} and thus by \eqref{e:2D gain} and provided $\tau$ is sufficiently small depending on $n$, $c_0$ and $\H^1(\mathbf{Y}^1\cap B_1^2)$, \[ \H^n(\partial \Delta_{r_0}) - 2\, \H^n((\mathbf{Y}^1\times\mathbb{R}^{n-1}) \cap U_{r_0}) \leq - \frac{c_0}{2} \,r_0^n\, \int_{B_1^{n-1}} g(|w|) \, dw\,, \] which gives \eqref{Delta perimeter deficit}. \medskip \noindent {\it Step three:} By step two and the properties of $\Phi$, we have that $\Phi(\Delta_{r_0})$ is an open subset of $\Phi(U_{r_0}) \subset \Phi(B_{r_0})=A$ with $\H^n$-rectifiable boundary $\partial\Phi(\Delta_{r_0})=\Phi(\partial\Delta_{r_0})$, and such that \begin{equation} \label{refined containment} K \cap \Phi(U_{r_0})=\Phi\Big((\mathbf{Y}^1\times\mathbb{R}^{n-1})\cap U_{r_0}\Big) \subset \Phi(\Delta_{r_0}) \end{equation} thanks to \eqref{first containment}. Moreover, up to possibly taking a smaller value for $r_0$, \eqref{Delta volume} and \eqref{Delta perimeter deficit}, together with the area formula and \eqref{e:properties of diffeo}, guarantee the existence of constants $v_{\mathbf{Y}} = v_{\mathbf{Y}}(n) > 0$ and $c_{\mathbf{Y}} = c_{\mathbf{Y}}(n) >0$ such that \begin{align} \label{Phi Delta volume} |\Phi(\Delta_{r_0})| \;\; &\le \;\; v_{\mathbf{Y}}\,r_0^{n+1}\,, \\ \label{Phi Delta perimeter deficit} \H^n(\partial (\Phi(\Delta_{r_0}))) - 2\, \H^n(K \cap \Phi(U_{r_0})) \;\; &\leq \;\; - 3\,c_{\mathbf{Y}} \, r_0^n\,. \end{align} \medskip \noindent {\it Step four:} We recall the following construction from \cite[Proof of Theorem 2.8]{kms2}; see \begin{figure}\input{vt.pstex_t}\caption{\small{The volume-fixing variations constructed in step four. The surface $S_t$ has been depicted with a bold line.}}\label{fig Vt}\end{figure} Figure \ref{fig Vt}. Fix a point $x^*\in\Omega \cap \partial^*E$, and let $\nu^*=\nu_E(x^*)$. 
Theorem \ref{thm basic regularity} then guarantees the existence of a radius $r^* > 0$ such that, denoting by $Q^*$ the cylinder of center $x^*$, axis $\nu^*$, height and radius $r^*$, and by $D^*$ its $n$-dimensional cross-section passing through $x^*$, we have $\mathrm{cl}\,(Q^*) \cap \mathrm{cl}\,(\Phi(B_{r_0}))=\emptyset$ and \begin{eqnarray} \label{cilindro filling} E \cap \mathrm{cl}\,(Q^*)&=&\Big\{z+h\,\nu^*\,\colon\, z\in \mathrm{cl}\,(D^*)\,, \; -r^*\leq h<v(z)\Big\}\,, \\ \label{cilindro set} K\cap\mathrm{cl}\,(Q^*)=\partial E\cap\mathrm{cl}\,(Q^*)&=&\Big\{z+v(z)\,\nu^*\,\colon\, z\in \mathrm{cl}\,(D^*)\Big\}\,, \end{eqnarray} for a smooth function $v \colon \mathrm{cl}\,(D^*) \to \mathbb{R}$ solving, for $\lambda \le 0$ (the non-positivity of $\lambda$ is not important here, but it holds thanks to the main result in \cite{kms2}), \begin{equation} \label{pa E is cmc} -{\rm div}\,\bigg(\frac{\nabla v}{\sqrt{1+|\nabla v|^2}}\bigg)=\l\quad\mbox{on $D^*$}\,,\qquad \max_{\mathrm{cl}\,(D^*)}|v|\le \frac{r^*}2\,. \end{equation} We choose a smooth function $w \colon\mathrm{cl}\,(D^*)\to\mathbb{R}$ with \begin{equation} \label{volume fixing variation} w=0\quad\mbox{on $\partial D^*$}\,,\qquad w>0\quad\mbox{on $D^*$}\,,\qquad\int_{D^*}w=1\,, \end{equation} and then define, for $t>0$, an open set $V_t$ by setting \begin{equation} \label{cave} V_t=\Big\{ z+h\,\nu^*\,\colon\, z\in D^*\,,v(z)-t\,w(z) < h < v(z)\Big\}\,. \end{equation} There is $t_0>0$ (depending only on $r^*$ and on the choice of $w$) such that, for $t<t_0$, we have $V_t\subset E \cap Q^*$, with \begin{equation} \partial V_t\cap\partial Q^*=K\cap \partial Q^*=\left\lbrace z+v(z)\,\nu^*\,\colon\, z\in \partial D^*\right\rbrace\,. \end{equation} Furthermore, if we let $S_t$ denote the closed set \begin{equation} \label{d:S_surface} S_t = \Big\{ z + (v(z) - t w(z))\, \nu^* \, \colon \, z \in \mathrm{cl}\,(D^*) \Big\}\,, \end{equation} it is easily seen that for $t<t_0$ \begin{equation} \label{volume fixing volume and perimeter} |V_t|=t\,,\qquad \H^n(S_t) = \H^n (S_t \cap \mathrm{cl}\,(Q^*)) = \H^n(\partial E \cap \mathrm{cl}\,(Q^*)) - \lambda\, t + {\rm O}(t^2)\,, \end{equation} where we have used $\int_{D^*} w=1$ and the fact that, by $w=0$ on $\partial D^*$ and \eqref{pa E is cmc}, \[ \frac{d}{dt}\bigg|_{t=0}\int_{D^*}\sqrt{1+|\nabla (v-t\,w)|^2}=-\int_{D^*}\frac{\nabla v\cdot\nabla w}{\sqrt{1+|\nabla v|^2}}=\int_{D^*}w\,{\rm div}\,\bigg(\frac{\nabla v}{\sqrt{1+|\nabla v|^2}}\bigg)=-\l\,. \] In particular, setting \begin{equation} \label{E deflated} E_t = E \setminus \mathrm{cl}\,(V_t)\,, \end{equation} we have (see \cite[Equation (3.37)]{kms2}) \begin{equation} \label{properties of Et} |E_t| = |E| -t\,, \qquad \partial E_t \cap \mathrm{cl}\,(Q^*) = S_t\,, \end{equation} so that \eqref{volume fixing volume and perimeter} reads \begin{equation} \label{perimeter estimate bulk} \H^n ( \partial E_t \cap\mathrm{cl}\,( Q^*) ) = \H^n(\partial E \cap \mathrm{cl}\,(Q^*)) + |\lambda|\,t + {\rm O}(t^2)\,. \end{equation} Finally, if needed, we further reduce the value of $r_0$ (this time also depending on $|\l|$) in order to ensure \begin{equation} \label{fixing radius} \H^n(\partial E_t \cap \mathrm{cl}\,(Q^*)) \leq \H^n(\partial E \cap \mathrm{cl}\,(Q^*)) + c_{\mathbf{Y}}\,r_0^n \qquad\forall\,t \leq 2\,v_{\mathbf{Y}}\,r_0^{n+1}\,. \end{equation} \medskip \noindent {\it Step five:} Without loss of generality, we can assume that $r^*<{\rm dist}(x^*, \mathrm{cl}\,(K) \setminus \partial^*E)/2$.
In this way, provided $\eta$ is small enough in terms of $r^*$, we can enforce that \begin{equation} \label{oddio}\mbox{$I_\eta(\mathrm{cl}\,(K)\setminus \partial^*E)$ and $\mathrm{cl}\,(Q^*)$ lie at positive distance.} \end{equation} We thus apply the construction of Lemma \ref{l:one_sided_fattening} to $(K,E)$ with the open set $U = \Phi(U_{r_0})$, and correspondingly define the function $u$ and the sets $M_0,M_1,M,A_0,A_1$. After choosing $\delta$ and $\eta$ sufficiently small in terms of $r_0$, we can achieve \begin{align} \label{condition1} &(\mathrm{cl}\, (A_0) \cup \mathrm{cl}\,(A_1) ) \cap \mathrm{cl}\, (Q^*) = \emptyset\,, \qquad |A_0| + |A_1| \leq \frac{v_{\mathbf{Y}}\,r_0^{n+1}}{4}\,,\\ \label{condition2} &\H^n\left( \{ x + u(x)\,\nu(x) \, \colon \, x \in M \} \right) \leq \H^n(M) + c_{\mathbf{Y}}\,r_0^n\,. \end{align} Since $\mathrm{cl}\,(A_0)\cup\mathrm{cl}\,(A_1)\subset I_\eta(K \setminus \partial E)$, the first condition in \eqref{condition1} is immediate from \eqref{oddio}, while the second condition follows from $|I_\eta(K)|\to|\mathrm{cl}\,(K)|=0$ as $\eta\to 0^+$. Finally, \eqref{condition2} is satisfied for $\delta$ sufficiently small (in terms of $r_0$, $n$ and $\H^n(K)$) thanks to \eqref{estimate M up}. \medskip \noindent {\it Step six:} We apply step four with \begin{equation} \label{volume deficit} t = |\Phi(\Delta_{r_0})| + |A_0| - |A_1| \in \left( 0, 2\,v_{\mathbf{Y}}\, r_0^{n+1} \right]\,. \end{equation} In particular, \eqref{fixing radius} holds for the corresponding set $E_t$, and we can finally define the competitor \begin{equation} \label{the competitor!} G = \Phi(\Delta_{r_0}) \cup F\,, \qquad \mbox{where $F=A_0 \cup \left( E_t \setminus \mathrm{cl}\,(A_1) \right)$}\,. \end{equation} We now verify that $G$ satisfies the properties \eqref{contra:the set G} and \eqref{contra:the estimate}. First, we observe that $\Phi (\Delta_{r_0}) \subset \Phi (U_{r_0})$, whereas, by Lemma \ref{l:one_sided_fattening} and given that $E_t \subset E$, one has $F \subset \Omega \setminus \mathrm{cl}\,(\Phi(U_{r_0}))$, so that $\Phi(\Delta_{r_0})$ and $F$ are two \emph{disjoint} open subsets of $\Omega$. In particular, $G \subset \Omega$ is open and, as a consequence of \eqref{properties of Et} and \eqref{volume deficit}, \begin{equation} \label{volume of competitor} \begin{split} |G| = |\Phi(\Delta_{r_0})| + |F| \;&=\; |\Phi(\Delta_{r_0})| + |A_0| + |E_t| - |A_1|\\ &=\; |\Phi(\Delta_{r_0})| + |A_0| - |A_1| + |E| - t \\ &=\, |E|\,, \end{split} \end{equation} where we have used that $|\mathrm{cl}\,(A_1)|=|A_1|$, since $\partial A_1$ is $\H^n$-rectifiable and thus Lebesgue negligible. Since $\partial G\subset\partial[\Phi(\Delta_{r_0})]\cup\partial F$, recalling the last inclusion in \eqref{recalling} (which in the present case holds with $E_t$ in place of $E$) and noticing that $\mathrm{cl}\,(\Phi(\Delta_{r_0})) \subset \Omega$, we obtain \begin{equation} \label{first inclusion} \Omega\cap\partial G\,\, \subset\,\,\partial \Phi(\Delta_{r_0})\cup\Big\{\Omega\cap\Big(\partial A_0 \cup \partial A_1 \cup \partial E_t\Big)\Big\}\,, \end{equation} and, in particular, $\partial G$ is $\H^n$-rectifiable.
Moreover, for $k=0,1$, by \eqref{opening 0} and by $I_\eta(K \setminus \partial E)\cap \mathrm{cl}\,(Q^*)=\emptyset$, we get \[ \Omega \cap \partial A_k \subset \big( K \setminus ( \Phi(U_{r_0}) \cup Q^* ) \big) \cup \Big\{ x + u(x)\,\nu(x)\,\colon\,x \in M_k \Big\} \] while \begin{eqnarray*} \Omega \cap \partial E_t & \subset & [\Omega \cap \partial E_t \cap \mathrm{cl}\, (Q^*)] \cup [(\Omega \cap \partial E) \setminus \mathrm{cl}\,(Q^*)] \\ &\subset&S_t \cup [K \setminus (\Phi (U_{r_0}) \cup Q^*)] \end{eqnarray*} so that the $\subset$-inclusion in the following identity \begin{equation} \label{boundary identity} \Omega \cap \partial G = \partial \Phi(\Delta_{r_0}) \cup (K \setminus ( \Phi(U_{r_0}) \cup Q^* )) \cup S_t \cup \Big\{ x + u(x)\,\nu(x)\,\colon\, x \in M \Big\} \end{equation} follows from \eqref{first inclusion}. To complete the proof of \eqref{boundary identity} we will show that \begin{eqnarray} \label{2inclusion:Delta} \partial \Phi(\Delta_{r_0}) & \subset & \Omega \cap \partial G\,,\\ \label{2inclusion:bdry ball} K \cap \partial (\Phi(U_{r_0})) &\subset & \Omega \cap \partial G\,, \\ \label{2inclusion:M} M \cup \big\{ x + u(x)\,\nu(x) \, \colon \, x \in M \big\}&\subset & \Omega \cap \partial G\,, \\ \label{2inclusion:singular} \Sigma \setminus ( \partial E \cup \mathrm{cl}\, (\Phi (U_{r_0})) ) &\subset & \Omega \cap \partial G\,, \\ \label{2inclusion:bdry set out cyl} (\Omega \cap \partial E) \setminus \mathrm{cl}\,(Q^*) &\subset & \Omega \cap \partial G\,, \\ \label{2inclusion:S} S_t &\subset & \Omega \cap \partial G\,. \end{eqnarray} \textit{Proof of \eqref{2inclusion:Delta}:} this readily follows from the fact that $\mathrm{cl}\,(\Phi(\Delta_{r_0})) \subset \Omega \cap \mathrm{cl}\, (G)$, together with $F\cap \mathrm{cl}\, (\Phi(\Delta_{r_0}))=\emptyset$. \textit{Proof of \eqref{2inclusion:bdry ball}:} since $K \cap \partial (\Phi(U_{r_0})) \subset \Omega \setminus G$, we only have to prove that $K \cap \partial (\Phi(U_{r_0})) \subset \mathrm{cl}\, (G)$. Since $K \cap \mathrm{cl}\, (\Phi(U_{r_0})) = \Phi(\mathbf{Y}^1\times\mathbb{R}^{n-1}) \cap \mathrm{cl}\, (\Phi(U_{r_0}))$, any $x \in K \cap \partial (\Phi(U_{r_0}))$ is a limit of points $\Phi(z_h)$ with $z_h \in (\mathbf{Y}^1 \times \mathbb{R}^{n-1}) \cap U_{r_0} \subset \Delta_{r_0}$ by \eqref{first containment}. In particular, $x$ is a limit of points in $\Phi(\Delta_{r_0}) \subset G$. \textit{Proof of \eqref{2inclusion:M}, \eqref{2inclusion:singular}, and \eqref{2inclusion:bdry set out cyl}:} since $E_t \setminus \mathrm{cl}\,(Q^*)=E\setminus \mathrm{cl}\,(Q^*)$, the sets appearing on the left-hand sides of \eqref{2inclusion:M}, \eqref{2inclusion:singular}, and \eqref{2inclusion:bdry set out cyl} are all subsets of $\Omega \cap \partial F \setminus \mathrm{cl}\,(\Phi(U_{r_0}))$ as a consequence of \eqref{opening 3}, \eqref{fix 2}, and \eqref{fix 3}, respectively. \textit{Proof of \eqref{2inclusion:S}:} By construction $G\cap Q^*=E_t\cap Q^*$ so that $Q^*\cap\partial G=Q^*\cap\partial E_t$; since $S_t\subset\mathrm{cl}\,(Q^*)\cap\partial E_t$ we conclude the proof of \eqref{2inclusion:S}, and thus of \eqref{boundary identity}. \medskip \noindent {\it Conclusion:} We first prove \eqref{contra:the estimate}. Without loss of generality, we may assume that $r^*$ is such that $\H^n(\partial^*E\cap\partial Q^*)=0$.
By \eqref{boundary identity}, \eqref{Phi Delta perimeter deficit}, \eqref{fixing radius}, \eqref{condition2}, and the fact that $M\subset K\setminus(\partial E\cup\mathrm{cl}\,(\Phi(U_{r_0})))$, we find \begin{eqnarray*} \H^n(\Omega \cap \partial G) & \leq & \H^n(\partial \Phi(\Delta_{r_0})) \\ &&+ \H^n((\Omega \cap \partial^*E) \setminus Q^*) + \H^n ((K \setminus \partial^*E)\setminus \Phi (U_{r_0})) \\ &&+ \H^n (S_t) + \H^n(\{x + u(x) \, \nu(x) \, \colon \, x \in M\}) \\ &\le& 2\,\H^n(K\cap\Phi(U_{r_0}))-3\,c_{\mathbf{Y}}\,r_0^n \\ &&+\H^n((\Omega \cap \partial^*E) \setminus Q^*)+ \H^n ((K \setminus \partial^*E)\setminus \Phi (U_{r_0})) \\ &&+\H^n(\partial^*E\cap\mathrm{cl}\,(Q^*))+c_{\mathbf{Y}}\,r_0^n+\H^n(M)+c_{\mathbf{Y}}\,r_0^n \\ &\leq & 2\, \H^n(K \setminus \partial^*E) + \H^n (\Omega \cap \partial^*E) - c_{\mathbf{Y}}\,r_0^n\,, \end{eqnarray*} that is \eqref{contra:the estimate}. To complete the argument we finally prove that $\Omega \cap \partial G$ is $\mathcal{C}$-spanning $W$. To this aim, pick $\gamma \in \mathcal{C}$. If $\gamma \cap K \setminus ( \Phi(U_{r_0}) \cup Q^* ) \neq \emptyset$, then also $\gamma \cap \partial G \ne \emptyset$ by \eqref{boundary identity}. If $\gamma \cap K \cap Q^* \ne \emptyset$, then also $\gamma \cap \partial E \cap Q^* \ne \emptyset$, and thus also $\gamma \cap S_t \ne \emptyset$ as a consequence of \cite[Lemma 2.3]{kms} since $S_t$ is a diffeomorphic image of $\partial E \cap \mathrm{cl}\,(Q^*)$: hence, $\gamma \cap \partial G \ne \emptyset$, again by \eqref{boundary identity}. We can therefore assume that $\gamma \cap K \setminus \Phi(U_{r_0}) = \emptyset$, and thus, since $K$ is $\mathcal{C}$-spanning $W$, that there exists $x \in \gamma \cap K \cap \Phi(U_{r_0}) \subset \gamma \cap \Phi(\Delta_{r_0})$, where in the last inclusion we have exploited \eqref{refined containment}. Since $\Phi(\Delta_{r_0})$ is contractible and, as consequence of $\ell<\infty$, $\gamma$ is homotopically non-trivial in $\Omega$, $\gamma$ must necessarily intersect $\mathbb{R}^{n+1} \setminus \Phi(\Delta_{r_0})$, and thus, by continuity, $\gamma \cap \partial \Phi(\Delta_{r_0})\ne\emptyset$. Since $\partial \Phi(\Delta_{r_0})\subset\Omega\cap\partial G$, we have completed the proof. \end{proof} \section{Regularity theory and conclusion of the proof of Theorem \ref{t:main}}\label{s:graph} \subsection{Blow-ups of stationary varifolds}\label{section varifolds} We say that $V_0$ is an {\bf integral $n$-cone} in $\mathbb{R}^{n+1}$ if $V_0=\mathbf{var}\,(\mathbf{C},\theta_0)$ for a closed locally $\H^n$-rectifiable cone $\mathbf{C}$ in $\mathbb{R}^{n+1}$ (so that $\l\,x\in\mathbf{C}$ for every $x\in\mathbf{C}$ and $\l>0$), and a zero-homogenous multiplicity function $\theta_0$ (so that $\theta_0(\l\,x)=\theta_0(x)$ for every $x\in\mathbf{C}$ and $\l>0$). The importance of integral cones lies in the fact that if $V$ is a stationary integral $n$-varifold in some open set $U$, $x_0\in{\rm spt}\,V$ and $r_j\to0^+$ as $j\to\infty$, then, up to extracting a subsequence of $r_j$, there exists an integral $n$-cone $V_0$ such that \[ (\iota_{x_0,r_j})_\sharp V \rightharpoonup V_0\,, \] in the varifold convergence (duality with $C^0_c(U\times G_n^{n+1})$), where $\iota_{x,r}(y)=(y-x)/r$ for $x,y\in\mathbb{R}^{n+1}$ and $r>0$; moreover, $V_0$ is stationary in $\mathbb{R}^{n+1}$, and the collection of such limit stationary integral $n$-cones for $V$ at $x_0$ is denoted by \[ {\rm Tan}(V,x_0)\,. 
\] We recall that if $V_0=\mathbf{var}\,(\mathbf{C},\theta_0)\in{\rm Tan}(V,x_0)$, then \[ \Theta_V(x_0)=\Theta_{V_0}(0)\ge\Theta_{V_0}(y)\,,\qquad\forall y\in\mathbf{C}\,. \] Correspondingly, the {\bf spine} of the integral $n$-cone $V_0=\mathbf{var}\,(\mathbf{C},\theta_0)$ is defined as \[ S(V_0)=\Big\{y\in\mathbb{R}^{n+1}:\Theta_{V_0}(y)=\Theta_{V_0}(0)\Big\}\,; \] as it turns out, $S(V_0)$ is a linear space in $\mathbb{R}^{n+1}$, and it can actually be characterized as the largest linear space $L$ of $\mathbb{R}^{n+1}$ such that $V_0$ is invariant by translations in $L$, i.e. $(\tau_{v})_{\sharp}V_0=V_0$ for every $v\in L$, where $\tau_v(y)=y+v$ for all $y\in\mathbb{R}^{n+1}$. It is easily seen that if $\dim\,S(V_0)=k\in\{0,...,n\}$ and, without loss of generality, $S(V_0)=\{0\}^{n-k+1}\times\mathbb{R}^k$, then there exist a closed $(n-k)$-cone $\mathbf{C}_0$ in $\mathbb{R}^{n-k+1}$ and a zero-homogeneous multiplicity function $\phi_0$ on $\mathbf{C}_0$ such that \[ \mathbf{C}=\mathbf{C}_0 \times \mathbb{R}^k\,, \qquad \theta_0(z,y)=\phi_0(z) \quad \mbox{for $\H^{n-k}$-a.e. $z \in \mathbf{C}_0$, for every $y \in \mathbb{R}^k$}\,, \] and such that $W_0=\mathbf{var}\,(\mathbf{C}_0,\phi_0)$ is a stationary integral $(n-k)$-cone in $\mathbb{R}^{n-k+1}$ with \[ \Theta_{W_0}(0)=\Theta_{V_0}(0)\,, \qquad S(W_0)=\{0\}\,. \] The concept of spine leads to defining the notion of {\bf $k$-dimensional stratum} of a stationary integral $n$-varifold $V$ as \[ \mathcal S^k(V)=\Big\{x\in{\rm spt}\,V: \dim S(V_0)\le k\,,\quad\forall V_0\in{\rm Tan}(V,x)\Big\}\,, \] where the classical dimension reduction argument of Federer, see \cite[Appendix A]{SimonLN}, shows that \begin{equation} \label{dimension reduction} \dim_\H(\mathcal S^k(V))\le k\qquad \forall\,k=0,...,n\,. \end{equation} Moreover, we have the following key result by Naber and Valtorta. \begin{theorem}[{\cite[Theorem 1.5]{NV_varifolds}}]\label{thm nv} If $V$ is an integral stationary $n$-varifold in an open set $U$ of $\mathbb{R}^{n+1}$, then $\mathcal S^k(V)$ is countably $k$-rectifiable in $U$ for every $k=0,...,n$. \end{theorem} \subsection{Regularity of Almgren minimal sets and proof of Theorem \ref{t:main}}\label{section regularity of Almgren min} We recall that $M$ is an Almgren minimal set in an open set $U\subset\mathbb{R}^{n+1}$ if $M\subset U$ is closed relatively to $U$ and \begin{equation} \label{almgren minimizing set} \H^n(M\cap B_r(x))\le\H^n(f(M)\cap B_r(x)) \end{equation} whenever $f$ is a Lipschitz map with $\{f\ne{\rm id}\,\}\subset\joinrel\subset B_r(x)\subset\joinrel\subset U$ and $f(B_r(x))\subset B_r(x)$. An immediate consequence of \eqref{almgren minimizing set} is that the multiplicity-one $n$-varifold $V=\mathbf{var}\,(M,1)$ associated to $M$ is stationary in $U$. The Almgren minimality of $M$ implies that the set of tangent varifolds to $V$ is simpler than it could be in general: indeed, varifold tangent cones to Almgren minimal sets have multiplicity one, and their supports are Almgren minimal cones: \begin{theorem}[{\cite[Corollary II.2]{taylor76}}]\label{thm taylor1} If $M$ is an Almgren minimal set in $U\subset\mathbb{R}^{n+1}$, $x_0\in M$, and $V_0=\mathbf{var}\,(\mathbf{C},\theta_0)\in{\rm Tan}(\mathbf{var}\,(M,1),x_0)$, then $\theta_0=1$ on $\mathbf{C}$, and $\mathbf{C}$ is an Almgren minimal cone in $\mathbb{R}^{n+1}$. 
\end{theorem} In particular, setting \begin{eqnarray*} &&{\rm Tan}(M,x_0)=\Big\{\mathbf{C}\subset\mathbb{R}^{n+1}:V_0=\mathbf{var}\,(\mathbf{C},1)\in{\rm Tan}(\mathbf{var}\,(M,1),x_0)\Big\}\,, \\ &&\mbox{and, correspondingly, $S(\mathbf{C})=S(V_0)$ for every $\mathbf{C}\in {\rm Tan}(M,x_0)$}\,, \end{eqnarray*} we have that \[ \mathcal S^k(M)=\Big\{x_0\in M:\dim S(\mathbf{C})\le k\,,\quad\forall \mathbf{C}\in{\rm Tan}(M,x_0)\Big\} \] is countably $k$-rectifiable in $\mathbb{R}^{n+1}$ thanks to Theorem \ref{thm nv}. \begin{remark}[Smoothness criterion]\label{remark smooth} {\rm If ${\rm Tan}(M,x_0)$ contains an $n$-dimensional plane, then $M$ is a classical minimal surface in a neighborhood of $x_0$ as a consequence of Allard's regularity theorem \cite{Allard} and of the fact that $V=\mathbf{var}\,(M,1)$ is an integral stationary $n$-varifold. As a consequence, the {\bf singular set} $\Sigma$ of $M$ in $U$, defined as the maximal closed subset of $M$ such that $M\setminus\Sigma$ is a smooth minimal surface in $U$, can be characterized as the set of those $x_0\in M$ such that ${\rm Tan}(M,x_0)$ contains no plane.} \end{remark} The next important fact is that one can completely characterize Almgren minimal cones in $\mathbb{R}^2$ and $\mathbb{R}^3$: \begin{theorem}[{\cite[Proposition II.3]{taylor76}}]\label{thm taylor2} If $\mathbf{C}$ is an Almgren minimal cone in $\mathbb{R}^2$, then, up to rotations, either $\mathbf{C}=\{0\}\times\mathbb{R}$ or $\mathbf{C}=\mathbf{Y}^1$. If $\mathbf{C}$ is an Almgren minimal cone in $\mathbb{R}^3$, then, up to rotations, either $\mathbf{C}=\{0\}\times\mathbb{R}^2$, or $\mathbf{C}=\mathbf{Y}^1\times\mathbb{R}$, or $\mathbf{C}=\mathbf{T}^2$. \end{theorem} \begin{corollary}\label{corollary taylor 3} If $M$ is an Almgren minimal set in $U\subset\mathbb{R}^{n+1}$ and $\mathbf{C}\in{\rm Tan}(M,x_0)$ for some $x_0\in M$, then, up to rotations, either $\mathbf{C}=\{0\}\times\mathbb{R}^n$, or $\mathbf{C}=\mathbf{Y}^1\times\mathbb{R}^{n-1}$, or $\mathbf{C}=\mathbf{T}^2\times\mathbb{R}^{n-2}$ or $\dim S(\mathbf{C})\le n-3$. \end{corollary} \begin{proof} One needs to notice that if $\mathbf{C}=\mathbf{C}_0\times\mathbb{R}^k$ is an Almgren minimal cone in $\mathbb{R}^{n+1}$, then $\mathbf{C}_0$ is an Almgren minimal cone in $\mathbb{R}^{n-k+1}$, and combine this fact with Theorem \ref{thm taylor1} and Theorem \ref{thm taylor2}. \end{proof} If $M$ is an Almgren minimal set in $U$, $\mathbf{C}$ is an Almgren minimal cone in $\mathbb{R}^{n+1}$, $\a\in(0,1)$ and $x_0\in M$, then we say that $M$ {\bf admits ambient parametrization of class $C^{1,\a}$ over $\mathbf{C}$ at $x_0$}, if there exist $r>0$, an open neighborhood $A$ of $x_0$, and a $C^{1,\a}$-diffeomorphism $\Phi:\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$ such that $\Phi(0)=x_0$, $\nabla\Phi(0)={\rm Id}\,$, and \begin{equation} \label{ambient parametrization} \Phi\big(B_r\cap\mathbf{C}\big)=M\cap A\,. \end{equation} The main result contained in \cite{taylor76} can be formulated as follows: \begin{theorem}[{\cite{taylor76}}]\label{thm main taylor} If $M$ is an Almgren minimal set in $U\subset\mathbb{R}^3$ and $x_0\in M$, then either $M$ is a classical minimal surface in a neighborhood of $x_0$, or $M$ admits an ambient parametrization of class $C^{1,\a}$ over $\mathbf{C}$ at $x_0$, where, modulo isometries, $\mathbf{C}\in\{\mathbf{Y}^1\times\mathbb{R},\mathbf{T}^2\}$.
\end{theorem} \begin{remark} {\rm The analysis of Almgren minimal sets in $\mathbb{R}^2$ is noticeably simpler, and it yields the stronger conclusions that $M$ is locally {\it isometric} either to a line or to $\mathbf{Y}^1$: a detailed proof can be easily obtained, for example, by minor adaptations of \cite[Section 30.3]{maggiBOOK}.} \end{remark} We are finally in the position to prove Theorem \ref{t:main}. \begin{proof}[Proof of Theorem \ref{t:main}] Let $(K,E)$ be a generalized minimizer of $\psi(\varepsilon)$. By Theorem \ref{proposition Lipschitz minimizing}, $M=K\setminus\mathrm{cl}\,(E)$ is an Almgren minimal set in $U=\Omega\setminus\mathrm{cl}\,(E)$. By Corollary \ref{corollary taylor 3} we have that $M=R\cup\Sigma$, where $R$ is a smooth, stable minimal hypersurface in $U\setminus\Sigma$, and $\Sigma$ is a relatively closed subset of $M$ such that if $x_0\in\Sigma$ and $\mathbf{C}\in{\rm Tan}(M,x_0)$, then either $\mathbf{C}=\mathbf{Y}^1\times\mathbb{R}^{n-1}$ (modulo isometries), or $\dim\,S(\mathbf{C})\le n-2$. \medskip If $\mathbf{C}=\mathbf{Y}^1\times\mathbb{R}^{n-1}\in{\rm Tan}(M,x_0)$, then \[ V_0=\mathbf{var}\,(\mathbf{C},1)\in{\rm Tan}(V,x_0) \] where $V=\mathbf{var}\,(M,1)$ is an integral stationary $n$-varifold in $U$. By Simon's $Y$-regularity theorem \cite{Simon_cylindrical}, see e.g. \cite[Theorem 4.6]{ColomboEdelenSpolaor} for a handy statement, $M$ can be locally parameterized over $\mathbf{Y}^1\times\mathbb{R}^{n-1}$ near $x_0$, in the sense that there exist $r>0$, an open neighborhood $A$ of $x_0$, and a homeomorphism $\Phi:(\mathbf{Y}^1\times\mathbb{R}^{n-1})\cap B_r\to M\cap A$ with $\Phi(0)=x_0$ and mapping the spine of $\mathbf{Y}^1\times\mathbb{R}^{n-1}$ into $\Sigma\cap A$, such that, denoting by $\{H_i\}_{i=1}^3$ the three $n$-dimensional half-planes whose union gives $\mathbf{Y}^1\times\mathbb{R}^{n-1}$, the restriction of $\Phi$ to $H_i\cap B_r$ is a $C^{1,\a}$-diffeomorphism between hypersurfaces with boundary. An application of Whitney's extension theorem (which is usually mentioned without details in the literature, see e.g. the comments in \cite[Pag. 528]{taylor76} and \cite[Pag. 650]{Simon_cylindrical}; we notice that a simplification of the proof of \cite[Theorem 3.1]{CiLeMaIC1} gives the desired result) allows one to extend $\Phi$ into an ambient parametrization of $M$ over $\mathbf{Y}^1\times\mathbb{R}^{n-1}$ in a neighborhood of $x_0$. However, Theorem \ref{prop:noYx} excludes the existence of such an ambient parametrization. Therefore we conclude that $\mathbf{Y}^1\times\mathbb{R}^{n-1}$ cannot belong modulo isometries to ${\rm Tan}(M,x_0)$ for any $x_0\in\Sigma$. As a consequence, $\dim S(\mathbf{C})\le n-2$ for every $\mathbf{C}\in{\rm Tan}(M,x_0)$, and thus $\Sigma=\mathcal S^{n-2}(M)$. By Federer's dimensional reduction argument \eqref{dimension reduction}, we conclude in particular that \[ \H^{n-1}(\Sigma)=0\,. \] In summary, $V=\mathbf{var}\,(M,1)$ is a stationary integral $n$-varifold in $U$, whose regular part is stable thanks to \eqref{minimality KE against diffeos}, and whose singular part is $\H^{n-1}$-negligible. The regularity theory of Schoen, Simon and Wickramasekera \cite{SchoenSimon81,Wic} then allows us to conclude that $\Sigma$ is empty if $1\le n\le 6$, is locally finite in $U$ if $n=7$, and coincides with $\mathcal{S}^{n-7}(V)$ if $n \ge 8$. In particular, if $n \ge 8$ then $\Sigma$ is countably $(n-7)$-rectifiable in $U$ by Theorem \ref{thm nv}. This completes the proof of the theorem.
\end{proof} We close with a few technical comments on how the regularity theory for varifolds and Almgren minimal sets has been applied in the above argument. \begin{remark}\label{rmk reg if n12} {\rm In the physical cases $n=1$ and $n=2$, which are clearly the most important ones for the soap film capillarity model, one does not need to use the full power of the regularity theory contained in \cite{Simon_cylindrical,SchoenSimon81,Wic}. Indeed, once $M=K\setminus\mathrm{cl}\,(E)$ has been shown to be an Almgren minimal set in $\Omega\setminus\mathrm{cl}\,(E)$, Taylor's theorem (i.e., Theorem \ref{thm main taylor} above) shows that if $\Sigma$ is non-empty, then $M$ admits an ambient parametrization over $\mathbf{Y}^1\times\mathbb{R}^{n-1}$ at some of its singular points, thus triggering a contradiction with Theorem \ref{prop:noYx}.} \end{remark} \begin{remark}\label{rmk no wic} {\rm The following argument allows one to use \cite{SchoenSimon81} in place of \cite{Wic} (notice that \cite{Wic} relies on \cite{SchoenSimon81}). Going back to the application of Corollary \ref{corollary taylor 3} to $M=K\setminus\mathrm{cl}\,(E)$, and after having excluded the existence of $Y$ points thanks to \cite{Simon_cylindrical} and Theorem \ref{prop:noYx}, we are in the position to say that if $\mathbf{C}\in{\rm Tan}(M,x_0)$, then either $\mathbf{C}=\mathbf{T}^2\times\mathbb{R}^{n-2}$ modulo isometries or $\dim\,S(\mathbf{C})\le n-3$. In the former case, a direct parametrization argument away from the spine of $\mathbf{C}$ (in the spirit of \cite[Lemma 4.8]{ColomboEdelenSpolaor}) implies the existence of $Y$-points near $x_0$, and a contradiction with Theorem \ref{prop:noYx}. We thus conclude that $\dim\,S(\mathbf{C})\le n-3$ for every $\mathbf{C}\in{\rm Tan}(M,x_0)$, $x_0\in\Sigma$, and thus that $\H^{n-2}(\Sigma)=0$. By \cite{SchoenSimon81}, an integral stationary $n$-varifold $V$ in $\mathbb{R}^{n+1}$ whose regular part is stable and whose singular set is $\H^{n-2}$-negligible is such that the singular set is empty if $1\le n\le 6$, and coincides with $\mathcal{S}^{n-7}(V)$ if $n \ge 7$ (and thus it is countably $(n-7)$-rectifiable by Naber-Valtorta).} \end{remark} \section{Local finiteness of the Hausdorff measure of the singular set}\label{appendix NV local} In this section we sketch the arguments needed to improve the countable $(n-7)$-rectifiability of $\Sigma$, proved in Theorem \ref{t:main}, into local finiteness of the $(n-7)$-dimensional Minkowski content, and thus, in particular, into local $\H^{n-7}$-rectifiability; see Remark \ref{remark locally finite}. Towards this goal, we will need to introduce the following notion of {\bf quantitative stratification} of the singular set of a stationary integral varifold. \medskip Let ${\rm dist}_\mathbf{var}\,$ be a distance function on the space of $n$-dimensional varifolds in $B_1\subset \mathbb{R}^{n+1}$ which induces the varifold convergence. Let $V$ be a stationary integral $n$-varifold in a ball $B_r(x)\subset \mathbb{R}^{n+1}$ with $x \in {\rm spt}(V)$. For any $\delta >0$, we say that $V$ is {\bf $(k,\delta)$-almost symmetric in $B_r(x)$} if there exists a $k$-symmetric integral $n$-cone $V_0$ (i.e. $\dim\,S(V_0)\ge k$) such that \[ {\rm dist}_\mathbf{var}\,((\iota_{x,r})_\sharp V \llcorner B_1, V_0\llcorner B_1) < \delta\,.
\] For $k \in \{0,\ldots,n\}$ and $\delta>0$, we define the $(k,\delta)$-{\bf quantitative stratum} $\mathcal{S}^k_\delta(V)$ by \begin{equation*} \begin{split} \mathcal{S}^{k}_{\delta}(V) = \Big\{ x \in {\rm spt}(V)\, \colon \, & \mbox{$V$ is \emph{not} $(k+1,\delta)$-almost symmetric in $B_r(x)$} \\ & \mbox{for all $r>0$ such that $V$ is stationary in $B_r(x)$} \Big\}\,. \end{split} \end{equation*} We can now recall the following theorem from \cite{NV_varifolds}: \begin{theorem}[{See \cite[Theorem 1.4]{NV_varifolds}}] \label{thm:nv} Let $\delta, \Lambda >0$. There exists $C_\delta=C(n,\Lambda,\delta)>0$ such that if $V$ is an integral stationary $n$-varifold in $B_2\subset\mathbb{R}^{n+1}$ with $\|V\|(B_2)\leq \Lambda$ then \begin{equation} \label{e:minkowski estimate nv} \Big| I_r(\mathcal{S}^k_\delta(V)) \cap B_1 \Big| \leq C_\delta \, r^{n+1-k} \qquad \mbox{for all $0 < r < 1$}\,. \end{equation} In particular, $\H^{k}(\mathcal{S}^k_\delta(V) \cap B_1) \leq C_\delta$. Furthermore, $\mathcal{S}^k_\delta(V)$ is countably $k$-rectifiable. \end{theorem} \begin{remark} The countable $k$-rectifiability of $\mathcal{S}^k(V)$ claimed in Theorem \ref{thm nv} is in fact a corollary of the countable $k$-rectifiability of the quantitative strata $\mathcal{S}^k_\delta(V)$ together with the fact that \begin{equation} \label{strata back together} \mathcal{S}^k (V)= \bigcup_{\delta >0} \mathcal{S}^k_\delta(V)\,. \end{equation} \end{remark} We are now in the position to show that, under the assumptions of Theorem \ref{t:main}, if $n\ge 7$, then $\Sigma$ has locally finite $(n-7)$-dimensional Minkowski content, and thus it is locally $\H^{n-7}$-finite. Since we can cover any open set compactly contained in $\Omega \setminus \mathrm{cl}\, (E)$ by a finite number of balls $B_{3r_*}(x_i)$ such that $B_{r_*}(x_i)$ are pairwise disjoint and $B_{9r_*}(x_i) \subset \Omega \setminus \mathrm{cl}\, (E)$, we can directly focus on obtaining an upper bound on the $(n-7)$-dimensional Minkowski content of $\Sigma$ in $B$ whenever $B$ is an open ball with $3B \subset \Omega \setminus \mathrm{cl}\, (E)$, where $3B$ denotes the concentric ball to $B$ with three times the radius. To this end we claim that \[ \mbox{$\exists \delta > 0$ such that $\Sigma \cap 2B \subset \mathcal{S}^{n-7}_\delta(V) \cap 2B$\,.} \] Indeed, thanks to Theorem \ref{thm:nv} this claim implies \[ \Big| I_r(\Sigma) \cap B \Big| \leq C_\delta\,r^{8} \qquad \mbox{for all $0<r<{\rm radius}(B)$}\,, \] and thus $\H^{n-7}(\Sigma\cap B)\le C_\delta$ for a constant $C_\delta=C(n,\H^n(K \cap 2B),\delta)$, from which the local $\H^{n-7}$-finiteness of $\Sigma$ follows. To prove the claim we argue by contradiction and assume the existence of a sequence $\delta_h \to 0^+$ and points $x_h \in \Sigma \cap 2B$ such that $x_h \notin \mathcal{S}^{n-7}_{\delta_h}(V)$. Assuming that ${\rm radius}(B)=1$ for simplicity, so that $V$ is stationary in $B_1(x_h)$ for every $h$, the definition of quantitative strata then yields a sequence $r_h$ of scales $0 < r_h < 1$ such that $V$ is $(n-6,\delta_h)$-almost symmetric in $B_{r_h}(x_h)$: in other words, there are integral $n$-cones $W_h$ with $\dim S(W_h) \ge n-6$ such that, setting $K_h = (K-x_h)/r_h$ and $V_h = \mathbf{var}\,(K_h,1)$, we have ${\rm dist}_{\mathbf{var}\,}(V_h \llcorner B_1, W_h \llcorner B_1) \leq \delta_h$. 
Since the weights $\|V_h\|(B_1)$ are uniformly bounded as a consequence of the monotonicity formula, each $V_h$ is stationary in $B_1$, and $\delta_h \to 0^+$, a (not relabeled) subsequence of the varifolds $V_h \llcorner B_1$ converges, as $h \to \infty$ and in the sense of varifolds, to a stationary integral $n$-varifold which is the restriction to $B_1$ of an Almgren minimal cone $\mathbf{C}$ in $\mathbb{R}^{n+1}$ with $\dim S(\mathbf{C}) \ge n-6$. By Remark \ref{remark smooth}, $\mathbf{C}$ cannot be a plane, as otherwise $K$ would be smooth in a neighborhood of $x_h$ for all sufficiently large $h$, a contradiction to $x_h \in \Sigma$. In particular, $\mathbf{C}$ is singular at the origin, and since $\dim S(\mathbf{C}) \ge n-6$ it must be $\H^{n-6}(\mathrm{Sing}(\mathbf{C}))=\infty$, if $\mathrm{Sing}(\mathbf{C})$ denotes the set of singular points of $\mathbf{C}$. We claim that \begin{equation} \label{a priori zero} \H^{n-1}(\mathrm{Sing}(\mathbf{C}))=0\,. \end{equation} If this is true, then we can apply again \cite{Wic} and conclude that $\dim_{\H}(\mathrm{Sing}(\mathbf{C}))\le n-7$, a contradiction. We prove \eqref{a priori zero} by showing that $\mathbf{C}$ cannot have points of type $Y$. Otherwise, there would be a point $y \in \mathbf{C} \cap B_1$ such that, modulo rotations, the (unique) tangent cone $\mathbf{C}_y$ to $\mathbf{C}$ at $y$ is $\mathbf{Y}^1 \times \mathbb{R}^{n-1}$. Since varifold convergence of stationary integral varifolds implies Hausdorff convergence of their supports, for every $\delta > 0$ there exists $\sigma \in \left( 0, {\rm dist}(y, \partial B_1) \right)$ such that, for all sufficiently large $h$, \[ \mathrm{hd}\,({\rm spt}((\iota_{y,\sigma})_\sharp V_h) \cap B_{1} , (\mathbf{Y}^1 \times \mathbb{R}^{n-1}) \cap B_1 ) \le \delta \] where $\mathrm{hd}\,$ denotes the Hausdorff distance. By Simon's $Y$-regularity theorem, $K_h$ admits an ambient parametrization of class $C^{1,\alpha}$ over $\mathbf{Y}^1 \times \mathbb{R}^{n-1}$ in $B_{\sigma/2}(y)$, and thus, in turn, there is a point in $K$ at which $K$ admits an ambient parametrization of class $C^{1,\alpha}$ over $\mathbf{Y}^1\times\mathbb{R}^{n-1}$, a contradiction to Theorem \ref{prop:noYx}. \qed \bibliographystyle{is-alpha}
\section{Introduction}\label{sec_intro} The main theorem of a book by Cox~\cite{cox} is a beautiful criterion of the solvability of the diophantine equation $p=x^2+ny^2$. The specific statement is \thmus{ Let $n$ be a positive integer. Then there is a monic irreducible polynomial $f_n(x)\in\mathbb Z[x]$ of degree $h(-4n)$ such that if an odd prime $p$ divides neither $n$ nor the discriminant of $f_n(x)$, then $p=x^2+ny^2$ is solvable over $\mathbb Z$ if and only if $\fracl{-n}{p}{}=1$ and $f_n(x)=0$ is solvable over $\mathbb Z/p\mathbb Z$. Here $h(-4n)$ is the class number of primitive positive definite binary forms of discriminant $-4n$. Furthermore, $f_n(x)$ may be taken to be the minimal polynomial of a real algebraic integer $\alpha$ for which $L=K(\alpha)$ is the ring class field of the order $\mathbb Z[\sqrt{-n}]$ in the imaginary quadratic field $K=\mathbb Q(\sqrt{-n})$. } Maciak treated the same problem over rational function fields in his Ph.D. thesis \cite{maciak2010primes}, and gave a similar criterion of the integral solvability of $p=x^2+ny^2$. One can consider the integral solvability of the generalized equation \eq{\label{eq_qr} ax^2+bxy+cy^2+g=0 } over rings of integers of global fields, which amounts to the integral representability of $-g$ by the binary quadratic form $ax^2+bxy+cy^2$. There are some results considering the problem over number fields. By using classical results in class field theory, the first and third authors~\cite{rcf} gave a criterion of the integral solvability of the equation $p = x^2 + ny^2$ for some $n$ over a class of imaginary quadratic fields of $\mathbb Q$, where $p$ is a prime element. Recently, Harari~\cite{bmob} showed that the Brauer-Manin obstruction is the only obstruction to the existence of integral points of a scheme over the ring of integers of a number field, whose generic fiber is a principal homogeneous space (torsor) of a torus. Wei and Xu \cite{multi-norm-tori,multip-type} then showed that there exist idele groups which are the so-called $\mathbf X$-\emph{admissible subgroups} for determining the integral points for multi-norm tori, and interpreted the $\mathbf X$-admissible subgroup in terms of finite Brauer-Manin obstruction. In \cite[Section 3]{multi-norm-tori} Wei and Xu also showed how to apply this method to binary quadratic diophantine equations. As applications, they gave some explicit criteria of the solvability of equations of the form $x^2\pm dy^2=a$ over $\mathbb Z$ in \cite[Sections 4 and 5]{multi-norm-tori}, by constructing explicit admissible subgroups. Later Wei \cite{wei_diophantine} applied the method in \cite{multi-norm-tori} to give some additional criteria of the solvability of the diophantine equation $x^2-dy^2=a$ over $\mathbb Z$ for some $d$. He also determined which integers can be written as a sum of two integral squares in some of the quadratic fields $\mathbb Q(\sqrt{\pm p})$ (in \cite{wei1}), $\mathbb Q(\sqrt{-2p})$ (in \cite{wei2}) and so on. In \cite{lv2017intrepqr}, the author et al. applied the method in \cite{multi-norm-tori} to diophantine equations of the form \eqref{eq_qr} over $\mathbb Z$ and gave a criterion of the solvability with some additional assumptions, by constructing explicit admissible subgroups for \eqref{eq_qr}. In this text, we treat the equation \eqref{eq_qr} over $k[t]$, the ring of integers of the rational function field $k(t)$. We use a similar but slightly different argument from the one in \cite{lv2017intrepqr} to solve the same problem over $k[t]$.
We generalize the method in \cite{lv2017intrepqr} to construct explicit admissible subgroups for the equation \eqref{eq_qr}. See Lemma \ref{lem_lambda_inv_img}. Specifically, the main result of this text is: \thmus{ Let ${K_\Pin^+}$ be the class field corresponding to $E^\tm{\Xi_\Pin^+}$ and \eqn{ \mathbf X=\Spec(\fo_F[x,y]/(ax^2+bxy+cy^2+g)). } Then $\mathbf X(\fo_F)\neq\emptyset$ if and only if there exists \eqn{ \prod_{\mathfrak p\in\Omega_F}(x_\mathfrak p, y_\mathfrak p)\in\prod_{\mathfrak p\in\Omega_F}\mathbf X(\fo_{F_\mathfrak p}) } such that \eqn{ \psi_{{K_\Pin^+}/E}(\tilde f_E(\prod_\mathfrak p(x_\mathfrak p,y_\mathfrak p)))=1. }} In the above theorem, ${\Xi_\Pin^+}$ is an open subgroup of the idele group $\mathbb I_E$ of $E$ such that $E^\tm{\Xi_\Pin^+}$ is of finite index, $\tilde f_E$ is a map from $\prod_\mathfrak p\mathbf X(\fo_{F_\mathfrak p})$ to $\mathbb I_E$ which is constructed by using the fact that the generic fiber of $\mathbf X$ admits the structure of a torsor of a torus, and $\psi_{{K_\Pin^+}/E}: \mathbb I_E\rightarrow \Gal({K_\Pin^+}/E)$ is the Artin reciprocity map. The condition $ \psi_{{K_\Pin^+}/E}(\tilde f_E(\prod_\mathfrak p(x_\mathfrak p,y_\mathfrak p)))=1$ is called the Artin condition. See Sections \ref{sec_nota} and \ref{sec_rff} for details. In Section \ref{sec_artin_cond}, we introduce the notation and the general result from \cite{multi-norm-tori} that we mainly use in this text, in a form modified to focus on our goal. Then we give our result on the equation \eqref{eq_qr} over $k[t]$ in Section \ref{sec_rff}. The results state that the integral local solvability and the Artin condition (see Remark \ref{rk_artin_cond}) completely describe the global integral solvability. In particular, we recover the main theorems of Maciak \cite{maciak2010primes} from our result. Finally, we end this text with some concrete examples showing explicit criteria for solvability. \section{Solvability by the Artin Condition}\label{sec_artin_cond} \subsection{Notation}\label{sec_nota} Let $F$ be a global field of characteristic different from $2$, $\fo_F$ the ring of integers of $F$, $\Omega_F$ the set of all places in $F$. Let $F_\mathfrak p$ be the completion of $F$ at $\mathfrak p$ and $\fo_{F_\mathfrak p}$ the valuation ring of $F_\mathfrak p$ for each $\mathfrak p\in\Omega_F$. We also write $\fo_{F_\mathfrak p}=F_\mathfrak p$ for infinite places $\mathfrak p$. The adele ring (resp. idele group) of $F$ is denoted by $\bA_F$ (resp. $\mathbb I_F$). Let $a,b,c$ and $g$ be elements in $\fo_F$ and suppose that $-d=b^2-4ac$ is not a square in $F$. Let $E=F(\sqrt{-d})$, a quadratic extension of $F$. Since the characteristic of $F$ is not $2$, the extension $E/F$ is separable. Let \eq{\label{eq_XX} \mathbf X=\Spec(\fo_F[x,y]/(ax^2+bxy+cy^2+g)) } be the affine scheme defined by the equation $ax^2+bxy+cy^2+g=0$ over $\fo_F$. The equation \eq{\label{eq_bqf} ax^2+bxy+cy^2+g=0 } is solvable over $\fo_F$ if and only if $\mathbf X(\fo_F)\neq\emptyset$. Now we denote \aln{ \tilde x&= 2ax+by, \\ \tilde y&= y, \\ n&=-4ag. } Then we can write \eqref{eq_bqf} as \eq{\label{eq_bqf_n} \tilde x^2+d\tilde y^2=n. } Denote by $R_{E/F}(\mathbf G_m)$ the Weil restriction of $\mathbf G_{m,E}$ to $F$. Let \eqn{ \varphi: R_{E/F}(\mathbf G_m)\longrightarrow\mathbf G_m } be the homomorphism of algebraic groups which represents \eqn{ x\longmapsto N_{E/F}(x): (E\otimes_FA)^\tm\longrightarrow A^\tm} for any $F$-algebra $A$. Define the torus $T=\ker\varphi$. Let $X_F$ be the generic fiber of $\mathbf X$.
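For the reader's convenience we record the elementary identity behind \eqref{eq_bqf_n}: since $-d=b^2-4ac$, any solution $(x,y)$ of \eqref{eq_bqf} satisfies \eqn{ \tilde x^2+d\tilde y^2=(2ax+by)^2+(4ac-b^2)y^2=4a(ax^2+bxy+cy^2)=-4ag=n. }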
We can identify elements $u, v\in A$ in $T(A)$ (resp. $x,y\in A$ in $X_F(A)$) as $u+\sqrt{-d}v\in E\otimes_FA$, with $u^2+dv^2 = 1$ (resp. $\tilde x+\sqrt{-d}\tilde y\in E\otimes_FA$ with $\tilde x^2+d\tilde y^2=n$). Then $X_F$ is naturally a $T$-torsor by the action: \aln{ T(A)\tm X_F(A) &\longrightarrow X_F(A)\\ (u+\sqrt{-d}v, \tilde x+\sqrt{-d}\tilde y) &\longmapsto (u+\sqrt{-d}v) (\tilde x+\sqrt{-d}\tilde y). } Note that $T$ has an integral model $\mathbf T=\Spec(\fo_F[x,y]/(x^2+dy^2-1))$ and we can view $\mathbf T(\fo_{F_\mathfrak p})$ as a subgroup of $T(F_\mathfrak p)$. Denote by $\lambda$ the embedding of $T$ into $R_{E/F}(\mathbf G_m)$. Clearly $\lambda$ induces a natural injective group homomorphism \eqn{ \lambda_E: T(\bA_F)\longrightarrow\mathbb I_E. } Now we assume that \eq{\label{eq_nonempty} X_F(F)\neq\emptyset, } i.e. $X_F$ is a trivial $T$-torsor. Fixing a rational point $P\in X_F(F)$, for any $F$-algebra $A$, we have an isomorphism \gan{ \xymatrix{\phi_P: X_F(A)\ar[r]^-{\sim} &T(A)}\\ \qquad\qquad x \longmapsto P^{-1}x } induced by $P$. Since we can view $\prod_{\mathfrak p\in\Omega_F}\mathbf X(\fo_{F_\mathfrak p})$ as a subset of $X_F(\bA_F)$, the composition \eqn{ f_E=\lambda_E\phi_P: \prod_\mathfrak p\mathbf X(\fo_{F_\mathfrak p})\longrightarrow \mathbb I_E} makes sense, mapping $x$ to $P^{-1}x$ in $\mathbb I_E$. Note that $P$ is in $E^\tm\subset \mathbb I_E$ since it is a rational point over $F$. It follows that we can define the map $\tilde f_E$ to be the composition \eqn{\xymatrix{ \prod_\mathfrak p\mathbf X(\fo_{F_\mathfrak p}) \ar@{->}[r]^-{ f_E} &\mathbb I_E \ar@{->}[r]^-{\tm P} &\mathbb I_E. }} It can be seen that the restriction to $\mathbf X(\fo_{F_\mathfrak p})$ of $\tilde f_E$ is defined by \eq{\label{eq_tilde_f_E} \tilde f_E[(x_\mathfrak p,y_\mathfrak p)]= \cs{ (\tilde x_\mathfrak p+\sqrt{-d}\tilde y_\mathfrak p, \tilde x_\mathfrak p-\sqrt{-d}\tilde y_\mathfrak p) \in E_\fP\tm E_{\bar\fP} &\text{if }\mathfrak p=\fP\bar\fP\text{ splits in }E/F,\\ \tilde x_\mathfrak p+\sqrt{-d}\tilde y_\mathfrak p \in E_\fP &\text{otherwise}, }} where $\fP$ and $\bar\fP$ (resp. $\fP$) are places of $E$ above $\mathfrak p$ and $\tilde x_\mathfrak p=2ax_\mathfrak p+by_\mathfrak p$, $\tilde y_\mathfrak p=y_\mathfrak p$. Let $\Xi$ be an open subgroup of $\mathbb I_E$ such that $E^\tm\Xi$ is of finite index (in the case of number fields this finiteness is automatic, since every open subgroup of the idele class group of a number field has finite index). Let $K_\Xi$ be the class field corresponding to $E^\tm \Xi$ under class field theory, such that the Artin map gives the isomorphism \eqn{ \xymatrix{\psi_{K_\Xi/E}: \mathbb I_E/E^\tm\Xi\ar[r]^-{\sim} &\Gal(K_\Xi/E). }} For any $\prod_{\mathfrak p\in\Omega_F}(x_\mathfrak p, y_\mathfrak p)\in\prod_{\mathfrak p\in\Omega_F}\mathbf X(\fo_{F_\mathfrak p})$, noting that $P$ is in $E^\tm$, we have \eq{\label{eq_tilde_f} \psi_{K_\Xi/E}(f_E(\prod_\mathfrak p(x_\mathfrak p,y_\mathfrak p)))=1\text{ if and only if } \psi_{K_\Xi/E}(\tilde f_E(\prod_\mathfrak p(x_\mathfrak p,y_\mathfrak p)))=1. } \rk{\label{rk_hasse_min} If $\prod_{\mathfrak p\in\Omega_F}\mathbf X(\fo_{F_\mathfrak p})\neq\emptyset$, then the assumption \eqref{eq_nonempty} holds automatically by the Hasse-Minkowski theorem on quadratic equations. Hence we can pick an $F$-point $P$ of $X_F$ and obtain $\phi_P$. But note that the map $\tilde f_E$ is independent of $P$.
} \subsection{A general result}\label{sec_main} For the integral points of the scheme $\mathbf X$ over $\fo_F$ defined in \eqref{eq_XX}, we have the following general result, which is a corollary of \cite[Corollary 1.6]{multi-norm-tori}. \prop{\label{prop_artin_cond} Let symbols be as before and suppose that \eq{\label{eq_lambda_inv_img} \lambda_E^{-1}(E^\tm\Xi) \subseteq T(F)\prod_\mathfrak p \mathbf T(\fo_{F_\mathfrak p}). } Then $\mathbf X(\fo_F)\neq\emptyset$ if and only if there exists \eqn{ \prod_{\mathfrak p\in\Omega_F}(x_\mathfrak p, y_\mathfrak p)\in\prod_{\mathfrak p\in\Omega_F}\mathbf X(\fo_{F_\mathfrak p}) } such that \eq{\label{eq_artin_cond} \psi_{K_\Xi/E}(\tilde f_E(\prod_\mathfrak p(x_\mathfrak p,y_\mathfrak p)))=1. }} \pf{ Note that $\lambda_E( T(F)) \subseteq E^\tm$ in $\mathbb I_E$. If $\mathbf X(\fo_F)\neq\emptyset$, pick $x\in\mathbf X(\fo_F)$; then $\phi_P(x)=P^{-1}x\in T(F)$, so that $\tilde f_E(x)=\lambda_E(\phi_P(x))\,P\in E^\tm$, and hence $\psi_{K_\Xi/E}(\tilde f_E(x))=1$. Conversely, suppose there exists $x\in \prod_\mathfrak p\mathbf X(\fo_{F_\mathfrak p})$ such that $\psi_{K_\Xi/E}\tilde f_E(x)=1$ (here $\tilde f_E$ makes sense by Remark \ref{rk_hasse_min}). Then by \eqref{eq_tilde_f} we have $\lambda_E\phi_P(x)=f_E(x)\in E^\tm\Xi$. Thus by assumption \eqref{eq_lambda_inv_img} we have $\phi_P(x) \in T(F)\prod_\mathfrak p \mathbf T(\fo_{F_\mathfrak p})$, which is to say there are $\tau\in T(F)$ and $\sigma\in\prod_\mathfrak p\mathbf T( \fo_{F_\mathfrak p})$ such that $\tau\sigma=\phi_P(x)=P^{-1}x$, i.e. $\tau\sigma(P)=x$. Since $P\in X_F(F)$ and \eq{\label{eq_in_stab} g\mathbf X(\fo_{F_\mathfrak p})=\mathbf X(\fo_{F_\mathfrak p})\text{ for all }g\in\mathbf T(\fo_{F_\mathfrak p}), } it follows that \eqn{ \tau(P)=\sigma^{-1}(x)\in X_F(F)\cap \prod_\mathfrak p\mathbf X(\fo_{F_\mathfrak p})=\mathbf X(\fo_F). } } \rk{\label{rk_artin_cond} The condition \eqref{eq_artin_cond} is called the \emph{Artin condition} in, for example, the papers of Wei~\cite{wei_diophantine,wei1,wei2}. If the assumption in the proposition holds, the integral local solvability and the Artin condition completely describe the global integral solvability. As a result, in cases where $K_\Xi$ is known it is possible to calculate the Artin condition, and give explicit criteria for the solvability. Actually, the assumption \eqref{eq_lambda_inv_img} is in line with the definition of an $\mathbf X$-admissible subgroup in \cite{multi-norm-tori}. } Let $L=\fo_F+\fo_F\sqrt{-d}$ in $E$ and $L_\mathfrak p=L\otimes_{\fo_F}\fo_{F_\mathfrak p}$ in $E_\mathfrak p=E\otimes_F F_\mathfrak p$. Then $\prod_\mathfrak p L_\mathfrak p^\tm$ is an open subgroup of $\mathbb I_E$. Let $S\subseteq \Omega_E$ be a finite set of places of $E$, $U_\fP\subseteq \fo_{E_\fP}^\tm$ be an open subgroup of $\fo_{E_\fP}^\tm$ and \eqn{ W_\fP = \cs{ U_\fP &\text{ for }\fP\in S,\\ \fo_{E_\fP}^\tm &\text{ for }\fP\notin S.
}} We define the open subgroup of $\mathbb I_E$ \eq{\label{eq_Xi_W} \Xi_W =\left(\prod_\mathfrak p L_\mathfrak p^\tm\right) \bigcap\left( \prod_\fP W_\fP\right) = \prod_\mathfrak p\left( L_\mathfrak p^\tm\cap \prod_{\fP\mid\mathfrak p} W_\fP\right ), } and assume $E^\tm\Xi_W$ is also of finite index in $\mathbb I_E$ (since $\prod_\fP W_\fP$ contains \eqn{ \prod_{\fP\mid \mathfrak m} (1+\fP^{a_\fP}) \tm \prod_{\fP\nmid \mathfrak m} \fo_{E_\fP}^\tm } for some modulus $\mathfrak m=\prod_\fP \fP^{a_\fP}$ of $E$, this finiteness is automatic in the number field case). We define $K_W/E$ to be the class field corresponding to $E^\tm\Xi_W$. Under some additional assumptions, we prove that $\Xi=\Xi_W$ satisfies the assumption \eqref{eq_lambda_inv_img} in Proposition \ref{prop_artin_cond}, that is, \eqn{ \lambda_E^{-1}(E^\tm\Xi_W) \subseteq T(F)\prod_\mathfrak p \mathbf T(\fo_{F_\mathfrak p}). } \lemm{\label{lem_lambda_inv_img} Let $S$ and $W_\fP$ be as before. Suppose that for every $u\in\fo_F^\tm$, the equation \eqn{ N_{E/F}(\alpha)=u,\quad \alpha\in L^\tm } is solvable or the equation \eqn{ N_{E_\mathfrak p/F_\mathfrak p}(\alpha)=u,\quad \alpha\in L_\mathfrak p^\tm \cap \prod_{\fP\mid\mathfrak p} W_\fP } is not solvable for some place $\mathfrak p$. Then the assumption \eqref{eq_lambda_inv_img} in Proposition \ref{prop_artin_cond} is true. } \pf{ Recall that $T=\ker(R_{E/F}(\mathbf G_m)\rightarrow\mathbf G_m)$ and $\mathbf T$ is the group scheme defined by the equation $x^2+dy^2=1$ over $\fo_F$. Therefore we have \eqn{ T(F)=\set{\beta\in E^\tm|N_{E/F}(\beta)=1} } and \eqn{ \mathbf T(\fo_{F_\mathfrak p})=\set{\beta\in L_\mathfrak p^\tm|N_{E_\mathfrak p/F_\mathfrak p}(\beta)=1}. } Suppose $t\in T(\bA_F)$ is such that $\lambda_E(t)\in E^\tm\Xi_W$. Write $t=\beta i$ with $\beta\in E^\tm$ and $i\in \Xi_W$. Since $t\in T(\bA_F)$ we have \eqn{ N_{E/F}(\beta)N_{E/F}(i)=N_{E/F}(\beta i)=1. } It follows that \eqn{ N_{E/F}(i)=N_{E/F}(\beta^{-1})\in F^\tm\cap \prod_\mathfrak p\fo_{F_\mathfrak p}^\tm=\fo_F^\tm. } So we have $N_{E/F}(i)=u$ for some $u\in\fo_F^\tm$. Note that $i\in \Xi_W$, and thus at each $\mathfrak p$ we have \eqn{ N_{E_\mathfrak p/F_\mathfrak p}(i_\mathfrak p)=u,\quad i_\mathfrak p=(i_\fP)_{\fP\mid\mathfrak p} \in L_\mathfrak p^\tm \cap \prod_{\fP\mid\mathfrak p} W_\fP. } In particular the second alternative in the assumption fails for this $u$, so the assumption tells us that the equation \eqn{ N_{E/F}(\alpha)=u,\quad \alpha\in L^\tm } is solvable. Let $\alpha_0$ be such a solution and let \aln{ \gamma &=\beta \alpha_0\\ \text{and }j&=i\alpha_0^{-1}. } Then $N_{E/F}(\gamma)=N_{E/F}(j)=1$. Note that $\alpha_0\in L^\tm$, and we have $\gamma\in T(F)$ and $j\in\prod_\mathfrak p\mathbf T(\fo_{F_\mathfrak p})$. It follows that $t=\beta i=\gamma j\in T(F)\prod_\mathfrak p\mathbf T(\fo_{F_\mathfrak p})$. This finishes the proof. } \rk{ In \cite{lv2017intrepqr}, the admissible subgroup for the equation \eqref{eq_qr} is simply chosen to be $\prod_\mathfrak p L_\mathfrak p^\tm$, which is generalized by the above lemma, where we intersect $\prod_\mathfrak p L_\mathfrak p^\tm$ with the open subgroup $\prod_\fP W_\fP$ (see \eqref{eq_Xi_W}). This allows us to deal with more difficult base fields and parameters for \eqref{eq_qr}. The previous method for doing this \cite{multi-norm-tori,wei_diophantine,wei2} is to construct a Kummer extension $\Theta/E$ of low degree and choose the admissible group to be $E^\tm\prod_\mathfrak p L_\mathfrak p^\tm\cap E^\tm N_{\Theta/E}\mathbb I_\Theta^\tm$.
} Using this Lemma, we obtain the following corollary of Proposition \ref{prop_artin_cond}. \coru{\label{cor_artin_cond} Let symbols be as before and suppose that $E^\tm\Xi_W$ is of finite index in $\mathbb I_E$ and that $S$ and $W_\fP$ satisfy the assumption in Lemma \ref{lem_lambda_inv_img}, that is, for every $u\in\fo_F^\tm$, the equation \eqn{ N_{E/F}(\alpha)=u,\quad \alpha\in L^\tm } is solvable or the equation \eqn{ N_{E_\mathfrak p/F_\mathfrak p}(\alpha)=u,\quad \alpha\in L_\mathfrak p^\tm \cap \prod_{\fP\mid\mathfrak p} W_\fP } is not solvable for some place $\mathfrak p$. Then $\mathbf X(\fo_F)\neq\emptyset$ if and only if there exists \eqn{ \prod_{\mathfrak p\in\Omega_F}(x_\mathfrak p, y_\mathfrak p)\in\prod_{\mathfrak p\in\Omega_F}\mathbf X(\fo_{F_\mathfrak p}) } such that \eqn{ \psi_{K_W/E}(\tilde f_E(\prod_\mathfrak p(x_\mathfrak p,y_\mathfrak p)))=1, } where $K_W$ is the class field corresponding to $E^\tm\Xi_W$ with $\Xi_W$ defined in \eqref{eq_Xi_W}. } \section{The integral representation of binary quadratic forms over $k[t]$} \label{sec_rff} Now we consider the case of main interest, $F=k(t)$, where $k=\mathbb F_q$ is a finite field of characteristic $p\neq2$. Hence we are interested in the diophantine equation $ax^2+bxy+cy^2+g=0$ over $\fo_F=k[t]$. Suppose that $-d=b^2-4ac$ is not a square in $F$. Set $E=F(\sqrt{-d})$ and $L=\fo_F+ \fo_F\sqrt{-d}$ as in previous sections. Let ${\p_\infty}$ be the place of $k(t)$ at infinity and suppose further that $E/F$ is ``imaginary", that is, \eq{\label{eq_as_unique_infty} \text{there is a unique place ${\fP_\infty}$ in $E$ lying over ${\p_\infty}$.} } Let $\sgn(\cdot)$ be a sign function of ${\fP_\infty}$ and define \eq{\label{eq_Ep} {E_\Pin^+}=\set{\alpha\in {E_\Pin^\tm}| \sgn(\alpha)=1}, } $S=\set{{\fP_\infty}}$, and $U_{\fP_\infty}={E_\Pin^+}$. Let ${\Xi_\Pin^+}$ be the subgroup $\Xi_W$ defined in \eqref{eq_Xi_W} for the chosen $S$ and $U_{\fP_\infty}$. \thm{\label{thm_rff} With the above notation, we have: \enmt{ \itc{a} The open subgroup $E^\tm{\Xi_\Pin^+}$ is of finite index in $\mathbb I_E$. \itc{b} Let ${K_\Pin^+}$ be the class field corresponding to $E^\tm{\Xi_\Pin^+}$ and \eqn{ \mathbf X=\Spec(\fo_F[x,y]/(ax^2+bxy+cy^2+g)). } Then $\mathbf X(\fo_F)\neq\emptyset$ if and only if there exists \eqn{ \prod_{\mathfrak p\in\Omega_F}(x_\mathfrak p, y_\mathfrak p)\in\prod_{\mathfrak p\in\Omega_F}\mathbf X(\fo_{F_\mathfrak p}) } such that \eqn{ \psi_{{K_\Pin^+}/E}(\tilde f_E(\prod_\mathfrak p(x_\mathfrak p,y_\mathfrak p)))=1. }}} \pf{ We first show that $\mathbb I_E/E^\tm{\Xi_\Pin^+}$ is finite. By the choice of $S$ and $U_{\fP_\infty}$ we know that \eqn{ {\Xi_\Pin^+}= {E_\Pin^+}\tm \prod_{\mathfrak p\neq{\p_\infty}} L_\mathfrak p^\tm. } Define $(\fo_E)_\mathfrak p=\fo_E\otimes_{\fo_F}\fo_{F_\mathfrak p}$ in $E_\mathfrak p=E\otimes_F F_\mathfrak p$ and \eqn{ {\tilde\Xi_\Pin^+} = {E_\Pin^+}\tm \prod_{\mathfrak p\neq{\p_\infty}} (\fo_E)_\mathfrak p^\tm = {E_\Pin^+}\tm \prod_{\fP\neq{\fP_\infty}} \fo_{E_\fP}^\tm. } Since we have the exact sequence \eqn{ 1\longrightarrow \fo_E^\tm/L^\tm\longrightarrow {\tilde\Xi_\Pin^+}/{\Xi_\Pin^+}\longrightarrow E^\tm{\tilde\Xi_\Pin^+}/E^\tm{\Xi_\Pin^+}\longrightarrow 1 } and \eqn{ {\tilde\Xi_\Pin^+}/{\Xi_\Pin^+}=\prod_{\mathfrak p\mid [\fo_E:L]}(\fo_E)_\mathfrak p^\tm/L_\mathfrak p^\tm } is finite, we know that $E^\tm{\tilde\Xi_\Pin^+}/E^\tm{\Xi_\Pin^+}$ is finite. Therefore we only need to show that $\mathbb I_E/E^\tm{\tilde\Xi_\Pin^+}$ is finite.
Define $\mathbb I_E^+=\set{i\in\mathbb I_E| i_{\fP_\infty}\in{E_\Pin^+}}$ and $E^+=\mathbb I_E^+\cap E^\tm$. Then naturally we have an isomorphism $\mathbb I_E^+/E^+{\tilde\Xi_\Pin^+}\cong Cl^+(\fo_E)$, where $Cl^+(\fo_E)$ is the \emph{narrow class group} with respect to $({\fP_\infty},\sgn)$ (cf. \cite[p. 200]{goss1996basicff}). And we have $\mathbb I_E=E^\tm\mathbb I_E^+$ by the approximation theorem (cf. \cite{cassels-nt}). Hence we have $\mathbb I_E^+/E^+{\tilde\Xi_\Pin^+}\cong \mathbb I_E/E^\tm{\tilde\Xi_\Pin^+}$ and thus $\mathbb I_E/E^\tm{\tilde\Xi_\Pin^+}\cong Cl^+(\fo_E)$ is finite. This completes the proof for (a). For the assertion (b) we will apply Corollary \ref{cor_artin_cond}. Let $u\neq1$ be an element of $\fo_F^\tm=k^\tm$ (for $u=1$ the first equation in the assumption is trivially solvable by $\alpha=1$). Let $\mathfrak p={\p_\infty}$, so that $L_\mathfrak p^\tm \cap \prod_{\fP\mid\mathfrak p} W_\fP={E_\Pin^+}$, since ${\fP_\infty}$ is the only place above ${\p_\infty}$. Assume that there is $\alpha \in {E_\Pin^+}$ such that $u=N_{E_\mathfrak p/F_\mathfrak p}(\alpha)=\alpha\bar\alpha$. By the definition of ${E_\Pin^+}$ \eqref{eq_Ep} we know that $\sgn(\alpha)=1$. Since $v_{\fP_\infty}(\alpha)=v_{\fP_\infty}(\bar\alpha)=\frac{1}{2}v_{\fP_\infty}(u)=0$ where $v$ is the normalized exponential valuation, we see that $\alpha$ is a unit of $\fo_{E_{\fP_\infty}}$ with $\sgn(\alpha)=1$, which is to say $\alpha\in 1+{\fP_\infty}$. Therefore $\bar\alpha\in 1+\overline{\fP_\infty}=1+{\fP_\infty}$, and hence we have $\alpha,\bar\alpha\in{E_\Pin^+}$ with $\sgn(\alpha)=\sgn(\bar\alpha)=1$. It follows that $u=\sgn(u)=\sgn(\alpha)\sgn(\bar\alpha)=1$, which is a contradiction and shows that the equation \eqn{ N_{E_\mathfrak p/F_\mathfrak p}(\alpha)=u,\quad \alpha\in L_\mathfrak p^\tm \cap \prod_{\fP\mid\mathfrak p} W_\fP } is not solvable for $\mathfrak p=\mathfrak p_\infty$. Thus (b) follows from Corollary \ref{cor_artin_cond}. } In contrast to the class field ${K_\Pin^+}$ corresponding to ${\Xi_\Pin^+}$, we denote by ${K_\Pin}$ the Hilbert class field of $E$ with respect to ${\fP_\infty}$, i.e. ${K_\Pin}$ is the class field corresponding to the open subgroup $E^\tm{\Xi_\Pin}$ of finite index in $\mathbb I_E$, where \eqn{ {\Xi_\Pin} = {E_\Pin^\tm}\tm \prod_{\fP\neq{\fP_\infty}} \fo_{E_\fP}^\tm. } We will use ${K_\Pin}$ later; note that ${K_\Pin^+}\supseteq{K_\Pin}$ since $E^\tm{\Xi_\Pin^+}\subseteq E^\tm{\Xi_\Pin}$. We use the above Theorem to derive a result similar to the main theorems of Maciak in \cite{maciak2010primes} concerning the equation $l=x^2+Dy^2$. First we need some notation and facts from \cite{maciak2010primes}. Recall that we set $F=k(t)$, $k=\mathbb F_q$ and hence $\fo_F=k[t]$. Let $D\in k[t]$ be squarefree of positive degree, let $l\in k[t]$ be irreducible with $l\nmid D$, and consider the equation $l=x^2+Dy^2$ over $k[t]$. For this equation, we have $a=1$, $b=0$, $c=D$, $g=-l$ and $-d=b^2-4ac=-4D$. Since $2\in k^\tm$, we can simplify the equation \eqref{eq_bqf_n} by canceling $4$ on both sides. We set \aln{ n&=-4ag/4=l,\\ \tilde x&=(2ax+by)/2= x,\\ \tilde y&=y. } We can apply Theorem \ref{thm_rff} for $d = D$ to this example, as we still have $E = F(\sqrt{-d})$, $\tilde x^2+d\tilde y^2=n$ and \eqref{eq_in_stab}. Since $D$ is squarefree of positive degree, we know that $\fo_F+ \fo_F\sqrt{-D}=\fo_E$. Suppose that $\deg D$ is odd or $\lc(-D)$ ($\lc$ means the leading coefficient) is not a square in $k^\tm$, which is to say the assumption \eqref{eq_as_unique_infty} holds.
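For the reader's convenience we sketch why this condition guarantees \eqref{eq_as_unique_infty}. In $F_{\p_\infty}=k((t^{-1}))$ we may write \eqn{ -D=\lc(-D)\,t^{\deg D}\,w,\qquad w\in 1+t^{-1}k[[t^{-1}]], } and $w$ is a square in $F_{\p_\infty}$ by Hensel's lemma, as the characteristic of $k$ is not $2$. Hence $-D$ is a square in $F_{\p_\infty}$, i.e. ${\p_\infty}$ splits in $E/F$, if and only if $\deg D$ is even and $\lc(-D)\in k^{\tm2}$. Otherwise ${\p_\infty}$ is ramified (when $\deg D$ is odd) or inert (when $\deg D$ is even and $\lc(-D)\notin k^{\tm2}$), and in either case ${\fP_\infty}$ is the unique place of $E$ above ${\p_\infty}$.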
In what follows we make the further assumption that \eq{\label{eq_as_solv_infty} \text{the equation $l=x^2+Dy^2$ is solvable over the completion $F_{\p_\infty}$, i.e. $l$ is a norm from ${E_\Pin^\tm}$,} } i.e., $\deg l$ is even if $\deg D$ is. Let $d_\infty$ be the ramification index of ${\fP_\infty}\mid{\p_\infty}$ and define $\deg^*l=\frac{\deg l}{d_\infty}$ as in \cite{maciak2010primes}, which is a positive integer by the above assumption. Following \cite[Lemma 3.3.3]{maciak2010primes}, since $v_{\fP_\infty}(g/\sqrt{-D})=1$, in what follows we fix $\sgn$ to be the sign function with respect to the uniformizer $g/\sqrt{-D}$. \thm{\label{thm_h+} Let $k$, $D$ and $l$ in $k[t]$ be as before. Then we have \enmt{ \itc{a} if $\sgn(l)(-1)^{\deg^*l}\in k^{\tm2}$, then $l=x^2+Dy^2$ is solvable over $k[t]$ if and only if $\fracn{l}{r}=1$ for each monic irreducible factor $r\mid D$ and $l$ splits completely in ${K_\Pin^+}$; \itc{b} if $\sgn(l)(-1)^{\deg^*l}\not\in k^{\tm2}$, then $l=x^2+Dy^2$ is solvable over $k[t]$ if and only if $\fracn{l}{r}=1$ for each monic irreducible factor $r\mid D$, $l$ splits completely in ${K_\Pin}$ and the relative degree of $l$ in ${K_\Pin^+}$ is $2$. }} \pf{ In line with Theorem \ref{thm_rff}, recall that $F=k(t)$, $E=F(\sqrt{-D})$, $L=\fo_F+ \fo_F\sqrt{-D}=\fo_E$ and ${K_\Pin^+}$ is the class field corresponding to ${\Xi_\Pin^+}$. We know by \eqref{eq_tilde_f_E} that \eq{\label{thm_recover.eq_tilde_f_E} \tilde f_E[(x_\mathfrak p,y_\mathfrak p)]= \cs{ (x_\mathfrak p+\sqrt{-D}y_\mathfrak p, x_\mathfrak p-\sqrt{-D}y_\mathfrak p) &\text{if }\mathfrak p\text{ splits in }E/F,\\ x_\mathfrak p+\sqrt{-D}y_\mathfrak p &\text{otherwise}. }} Then by Theorem \ref{thm_rff}, the equation $l=x^2+Dy^2$ is solvable over $k[t]$ if and only if there exists a local solution \eqn{ \prod_{\mathfrak p\in\Omega_F}(x_\mathfrak p, y_\mathfrak p)\in\prod_{\mathfrak p\in\Omega_F}\mathbf X(\fo_{F_\mathfrak p}) } such that \eqn{ \psi_{{K_\Pin^+}/E}(\tilde f_E(\prod_\mathfrak p(x_\mathfrak p,y_\mathfrak p)))=1. } Next we calculate these conditions in detail. By a simple calculation we know the local condition \eqn{ \prod_{\mathfrak p}\mathbf X(\fo_{F_\mathfrak p})\neq\emptyset } is equivalent to \al{ &\text{$l$ splits completely in $E$},\label{thm_h+.eq_local_l}\\ &\fracn{l}{r}=1,\text{ for each monic irreducible factor $r\mid D$,}\label{thm_h+.eq_local_r} } where we have removed the local condition at ${\p_\infty}$ since it automatically holds by the assumption \eqref{eq_as_solv_infty}. For the Artin condition, let $\prod_\mathfrak p(x_\mathfrak p,y_\mathfrak p)\in\prod_\mathfrak p\mathbf X(\fo_{F_\mathfrak p})$ be a local solution. Then \eq{\label{thm_recover.eq_local_decomp} (x_\mathfrak p+\sqrt{-D}y_\mathfrak p)(x_\mathfrak p-\sqrt{-D}y_\mathfrak p) = l \text{ in }E_\fP\text{ with }\fP\mid\mathfrak p. } Let $\frl=l\fo_F$. Thus for all $\mathfrak p\nmid \frl{\p_\infty}$, $\tilde f_E[(x_\mathfrak p,y_\mathfrak p)]\in L_\mathfrak p^\tm$ by \eqref{thm_recover.eq_tilde_f_E} and \eqref{thm_recover.eq_local_decomp}. It follows that \eqn{ \psi_{{K_\Pin^+}/E}(\tilde f_E[(x_\mathfrak p,y_\mathfrak p)])=1\quad\text{for all $\mathfrak p\nmid \frl{\p_\infty}$}, } where $\tilde f_E[(x_\mathfrak p,y_\mathfrak p)]$ is regarded as an element in $\mathbb I_E$ such that the component above $\mathfrak p$ is given by the value of $\tilde f_E[(x_\mathfrak p,y_\mathfrak p)]$ and $1$ otherwise. For $\mathfrak p=\frl$, by the local condition we already know that $l$ splits completely in $E/F$.
Hence \eqref{thm_recover.eq_local_decomp} tells us that one of $v_\frl(x_\frl\pm\sqrt{-D}y_\frl)$ is $1$ and the other $0$. Suppose $v_\frl(x_\frl+\sqrt{-D}y_\frl)=1$ and let $\frl=\fL\bar\fL$ in $E$. Note that $L=\fo_E$ and $L_\frl^\tm = (\fo_E)_\frl^\tm = \fo_{E_\fL}^\tm\tm \fo_{E_{\bar\fL}}^\tm$, so both $\fL$ and $\bar\fL$ are unramified in ${K_\Pin^+}/E$. It follows that \eqn{ \sigma_\fL:=\psi_{{K_\Pin^+}/E}(\tilde f_E[(x_\frl,y_\frl)])=\psi_{{K_\Pin^+}/E}(l_\fL)\in\Gal({K_\Pin^+}/E) } where $l_\fL$ is in $\mathbb I_E$ such that its $\fL$ component is $l$ and the other components are $1$, and $\sigma_\fL$ denotes the Frobenius automorphism of $\fL$ in ${K_\Pin^+}/E$. For $\mathfrak p={\p_\infty}$, let $\alpha= x_{\p_\infty}+\sqrt{-D}y_{\p_\infty}$. Then $l=\alpha\bar\alpha$ and $\sgn(\bar\alpha)=(-1)^{\deg^*l}\sgn(\alpha)$ (\cite[Proposition 4.4.2]{maciak2010primes}). Thus \eqn{ \sgn(\alpha)=\pm\sqrt{\sgn(l)(-1)^{\deg^*l}}. } Let \eqn{ \sigma_{\fP_\infty} =\psi_{{K_\Pin^+}/E}(\tilde f_E[(x_{\p_\infty},y_{\p_\infty})])\in\Gal({K_\Pin^+}/E). } Note that since $L=\fo_E$, we have \eqn{ {\Xi_\Pin^+}= {E_\Pin^+}\tm \prod_{\mathfrak p\neq{\p_\infty}} L_\mathfrak p^\tm={E_\Pin^+}\tm \prod_{\fP\neq{\fP_\infty}} \fo_{E_\fP}^\tm. } It follows that $(\alpha)_{\fP_\infty}\in E^\tm{\Xi_\Pin^+}$ if and only if $\sgn(\alpha u)=1$ for some $u\in \fo_E^\tm=k^\tm$. And we also note that $\sgn(l)(-1)^{\deg^*l}\in k^\tm$ since $\sgn(l)=\lc(l)\lc(-D)^{-\deg^*l}$ by \cite[(3.3.3) p. 38]{maciak2010primes}. Using these facts, we distinguish two cases: \enmt{ \itc{i} $\sgn(l)(-1)^{\deg^*l}\in k^{\tm2}$. Then $\sgn(\alpha)=\pm\sqrt{\sgn(l)(-1)^{\deg^*l}}\in k^\tm$ and thus $(\alpha)_{\fP_\infty}\in E^\tm{\Xi_\Pin^+}$. It follows that $\sigma_{\fP_\infty}=1$. \itc{ii} $\sgn(l)(-1)^{\deg^*l}\not\in k^{\tm2}$. But we already know that $\sgn(l)(-1)^{\deg^*l}\in k^\tm$. It follows that $\sgn(\alpha)\not\in k^\tm$ and $\sgn(\alpha^2)\in k^\tm$, which is to say that $(\alpha)_{\fP_\infty}$ is of order $2$ in $\mathbb I_E/E^\tm{\Xi_\Pin^+}\cong \Gal({K_\Pin^+}/E)$ and $\sigma_{\fP_\infty}$ is an element of order $2$. } So we obtain that the Artin condition $\psi_{{K_\Pin^+}/E}(\tilde f_E(\prod_\mathfrak p(x_\mathfrak p,y_\mathfrak p)))=1$ is equivalent to \eq{\label{thm_h+.eq_artin} \sigma_\fL\sigma_{\fP_\infty}=1. } At this point we know that $l=x^2+Dy^2$ is solvable over $k[t]$ if and only if \eqref{thm_h+.eq_local_l}, \eqref{thm_h+.eq_local_r} hold and there is a local solution \eqn{ \prod_{\mathfrak p\in\Omega_F}(x_\mathfrak p, y_\mathfrak p)\in\prod_{\mathfrak p\in\Omega_F}\mathbf X(\fo_{F_\mathfrak p}) } such that the corresponding \eqref{thm_h+.eq_artin} holds. We first prove (a). Suppose that $\sgn(l)(-1)^{\deg^*l}\in k^{\tm2}$. Then $\sigma_{\fP_\infty}=1$. If $l=x^2+Dy^2$ is solvable over $k[t]$ then by \eqref{thm_h+.eq_local_l} we see that $l$ splits in $E/F$. Moreover, \eqref{thm_h+.eq_artin} implies $\sigma_\fL=\sigma_\fL\sigma_{\fP_\infty}=1$, i.e. $\fL$ splits completely in ${K_\Pin^+}/E$. Thus we know \eqref{thm_h+.eq_local_r} holds and $l$ splits completely in ${K_\Pin^+}$. Conversely, if \eqref{thm_h+.eq_local_r} holds and $l$ splits completely in ${K_\Pin^+}$, then so does $l$ in $E$, i.e. \eqref{thm_h+.eq_local_l} holds. Since $l$ splits completely in ${K_\Pin^+}$, $\sigma_\fL\sigma_{\fP_\infty}=\sigma_\fL=1$ so $\eqref{thm_h+.eq_artin}$ also holds. This completes the proof for (a). To show (b), suppose that $\sgn(l)(-1)^{\deg^*l}\not\in k^{\tm2}$. Then $\sigma_{\fP_\infty}$ is of order $2$.
If $l=x^2+Dy^2$ is solvable over $k[t]$ then we also see that $l$ splits in $E/F$. And by \eqref{thm_h+.eq_artin} we know that $\sigma_\fL=\sigma_{\fP_\infty}^{-1}$ is of order $2$. Also, \eq{\label{thm_h+.eq_spin_res} \sigma_{\fP_\infty}|_{K_\Pin}=\psi_{{K_\Pin}/E}(\tilde f_E[(x_{\p_\infty},y_{\p_\infty})])=1 } since $\tilde f_E[(x_{\p_\infty},y_{\p_\infty})]\in{\Xi_\Pin}$; together with \eqref{thm_h+.eq_artin} this gives $\sigma_\fL|_{K_\Pin}=\sigma_{\fP_\infty}^{-1}|_{K_\Pin}=1$, which implies that $l$ splits completely in ${K_\Pin}$. So $l$ splits completely in ${K_\Pin}$ and the relative degree of $l$ in ${K_\Pin^+}$ is $2$. Conversely, if $l$ splits completely in ${K_\Pin}$ and the relative degree of $l$ in ${K_\Pin^+}$ is $2$, then \eqref{thm_h+.eq_local_l} holds for the same reason as in the proof for (a). Then it suffices to show $\eqref{thm_h+.eq_artin}$ for some local solution. Actually, since $l$ splits completely in ${K_\Pin}$, we have $\sigma_\fL|_{K_\Pin}=1$ by the property of the Frobenius automorphism. It follows that $\sigma_\fL\in\Gal({K_\Pin^+}/{K_\Pin})$. Also $\sigma_\fL$ is of order $2$ since $\sigma_\fL|_{K_\Pin}=1$ and the relative degree of $l$ in ${K_\Pin^+}$ is $2$. On the other hand, by \eqref{thm_h+.eq_spin_res}, we have $\sigma_{\fP_\infty}\in\Gal({K_\Pin^+}/{K_\Pin})$. Note that $\Gal({K_\Pin^+}/{K_\Pin})$ is cyclic (cf. \cite[Theorem 3.4.7]{maciak2010primes}). Therefore $\Gal({K_\Pin^+}/{K_\Pin})$ has a unique subgroup of order $2$, which we write as $\set{\pm1}$, and $\sigma_\fL=\sigma_{\fP_\infty}=-1\in \set{\pm1}$. It follows that $\sigma_\fL\sigma_{\fP_\infty}=1$, i.e. that $\eqref{thm_h+.eq_artin}$ holds. The proof for (b) is finished. } \rk{ If we assume that \eq{\label{eq_as_hil_rec} \text{$\deg D$ is odd or $D$ contains no odd degree irreducible factor}, } then \cite[Theorems 4.4.3 and 4.4.5]{maciak2010primes} are special cases of the above theorem for $\sgn(l)=1$. To see this we only need to show that the condition \eqref{thm_h+.eq_local_r} in (a) and (b) is automatically satisfied. Actually, if the other conditions in (a) or (b) hold, we know that $l$ splits completely in ${K_\Pin}$. For monic irreducible $r\mid D$, there exists $u\in \fo_F^\tm$ such that $\fracn{l}{r}=\fracn{ur}{l}$ by the assumption $\sgn(l)=1$. Let $r^*=ur$. We see that $\sqrt{r^*}\in{K_\Pin}$ if and only if \eqn{ \psi(i)(\sqrt{r^*})=\sqrt{r^*}\quad\text{for all } i\in {\Xi_\Pin} } under the Artin map $\psi$ of $E$, which is equivalent to the product of quadratic Hilbert symbols \eqn{ \prod_\fP \fracn{r^*, i_\fP}{\fP}=1\quad\text{for all $i=(i_\fP)_\fP\in{\Xi_\Pin}$}. } Clearly $E(\sqrt{r^*})/E$ is unramified at $\fP\nmid (r){\fP_\infty}$, and using the fact that ${\p_\infty}$ is unramified in $F(\sqrt{P})/F$ if $P$ is a polynomial of even degree and that one of $r^*$ and $D/r^*$ is of even degree (by assumption \eqref{eq_as_hil_rec}), we know that ${\fP_\infty}$ is also unramified in $E(\sqrt{r^*})/E$. It follows that $\fracn{r^*, i_\fP}{\fP}=1$ for all $\fP\nmid r$. Since $i\in {\Xi_\Pin} = {E_\Pin^\tm}\tm \prod_{\mathfrak p\neq{\p_\infty}} L_\mathfrak p^\tm$ and $L_\mathfrak p=L\otimes_{\fo_F}\fo_{F_\mathfrak p}$, there exist $a_\mathfrak p,b_\mathfrak p\in F_\mathfrak p$ such that \eqn{ \cs{ (i_\fP,i_{\bar\fP}) = (a_\mathfrak p+\sqrt{-D}b_\mathfrak p, a_\mathfrak p-\sqrt{-D}b_\mathfrak p) &\text{if }\mathfrak p=\fP\bar\fP\text{ splits in }E/F,\\ i_\fP = a_\mathfrak p+\sqrt{-D}b_\mathfrak p &\text{otherwise}. }} It follows that \eqn{ \prod_{\fP\mid r} \fracn{r^*, i_\fP}{\fP}=\fracn{r^*, a_\mathfrak p^2+Db_\mathfrak p^2}{r}=1, } where the last equality comes from \cite[(3.4) Proposition]{neukirch_alnt}.
Thus we have $\sqrt{r^*}\in{K_\Pin}$ and then $l$ splits in $F(\sqrt{r^*})$, which is to say $\fracn{l}{r}=\fracn{r^*}{l}=1$ for each monic irreducible factor $r\mid D$. This ensures the local condition \eqref{thm_h+.eq_local_r}. } We now give an example where the explicit criterion is obtained using Theorem \ref{thm_rff}. \eg{ Let $k=\mathbb F_3$ and $g\in k[t]$, and write \eqn{ g=u\tm (t-1)^{s_1}\tm (t^2-t-1)^{s_2}\tm \prod_{i=1}^r p_i^{m_i}, } where $u\in k^\tm$, $s_1,s_2\ge0$, $m_i\ge1$, and $p_1,p_2,\dots,p_r \neq t-1, t^2-t-1$ are distinct monic irreducible polynomials in $k[t]$. Define \eqn{ D=\set{p\in\set{p_1,\dots,p_r}| \text{$\fracn{-(t-1)(t^2-t-1)}{p}=1$ and $\fracn{t^2-t-1}{p}=-1$}}. } Then the diophantine equation \eq{\label{eq_eg} -x^2+txy-(t^3-t^2+1)y^2+g=0 } is solvable over $k[t]$ if and only if \enmt{ \itc{1} $\fracn{g/p^{v_p(g)}}{p} = (-1)^{v_p(g)}$, for $p=t-1$ or $t^2-t-1$, \itc{2} $\fracn{-(t-1)(t^2-t-1)}{p}=1$, for $p\nmid (t-1)(t^2-t-1)$ with odd $m_p:= v_p(g)$, \itc{3} and $s_1+s_2+\sum_{p\in D}v_p(g)\equiv0\pmod2$. }} \pf{ In this example, we have $a=-1$, $b=t$, $c=-(t^3-t^2+1)$ and $-d=b^2-4ac=-(t-1)(t^2-t-1)$. Since $\deg d$ is odd, the assumption \eqref{eq_as_unique_infty} holds. In order to apply Theorem \ref{thm_rff}, let $F=k(t)$, $E=F(\sqrt{-d})$, $L=\fo_F+ \fo_F\sqrt{-d}$, and let ${K_\Pin^+}$ be the class field corresponding to ${\Xi_\Pin^+}$. We may identify $\mathfrak p$ with the unique monic irreducible $p$ for all $\mathfrak p\neq {\p_\infty}$ and write $\infty={\p_\infty}$. Thus we also denote $\mathfrak p\in\Omega_F$ by $p\le\infty$. Let \aln{ n&=-4ag=g,\\ \tilde x&=2ax+by= x+ty,\\ \tilde y&=y } and we know by \eqref{eq_tilde_f_E} that \eqn{ \tilde f_E[(x_p,y_p)]= \cs{ (\tilde x_p+\sqrt{-d}\tilde y_p, \tilde x_p-\sqrt{-d}\tilde y_p) &\text{if }p\text{ splits in }E/F,\\ \tilde x_p+\sqrt{-d}\tilde y_p &\text{otherwise}, }} where $\tilde x_p=x_p+ty_p$, $\tilde y_p=y_p$. Since $d$ is squarefree and $\deg d=3$ is odd, we know that $L=\fo_E$ and ${K_\Pin^+}={K_\Pin}$ is the Hilbert class field corresponding to \eqn{ {\Xi_\Pin} = {E_\Pin^\tm}\tm \prod_{\fP\neq{\fP_\infty}} \fo_{E_\fP}^\tm } (cf. \cite[Theorem 3.4.7]{maciak2010primes}). Moreover, the computational result using Drinfeld modules in \cite[Example 4.3.3]{maciak2010primes} tells us that we can write ${K_\Pin}=E(\alpha)$ where the minimal polynomial of $\alpha$ is $x^2-(t^2-t-1)$. Thus we have \eqn{ \Gal({K_\Pin}/E)=\langle-1\rangle\cong\mathbb Z/2\mathbb Z. } Then by Theorem \ref{thm_rff}, the equation \eqref{eq_eg} is solvable over $k[t]$ if and only if there exists a local solution \eqn{ \prod_p(x_p, y_p)\in\prod_p \mathbf X(\fo_{F_p}) } such that \eqn{ \psi_{{K_\Pin}/E}(\tilde f_E(\prod_p(x_p,y_p)))=1. } Next we calculate these conditions in detail. By a simple calculation we know the local condition is equivalent to (1) and (2). For the Artin condition, let $\prod_p(x_p,y_p)\in\prod_p\mathbf X(\fo_{F_p})$ be a local solution. Then \eq{\label{eg.eq_local_decomp} (\tilde x_p+\sqrt{-d}\tilde y_p)(\tilde x_p-\sqrt{-d}\tilde y_p) = g \text{ in }E_\fP\text{ with }\fP\mid p } and since ${K_\Pin}/E$ is unramified, for any $p\neq \infty$ we have \eq{\label{eq_psi_1} 1=\cs{ \psi_{{K_\Pin}/E}(p_\fP)\psi_{{K_\Pin}/E}(p_{\bar\fP}), &\text{ if }p=\fP\bar\fP\text{ splits in }E/F,\\ \psi_{{K_\Pin}/E}(p_\fP), &\text{ if }p=\fP\text{ is inert in }E/F. }} We calculate $\psi_{{K_\Pin}/E}(\tilde f_E[(x_p,y_p)])$ separately: \enmt{ \item If $p=t-1$ or $t^2-t-1$, then $p=\fP^2$ in $E/F$. Suppose $\fP=\pi\fo_{E_\fP}$ for $\pi\in\fo_{E_\fP}$.
Noting that ${K_\Pin}/E$ is unramified and that $\fP^2=p\,\fo_E$ is principal in $E$ but $\fP$ is not, we have $\psi_{{K_\Pin}/E}((\pi)_{\fP})=-1$. By \eqref{eg.eq_local_decomp} we have \eqn{ v_\fP(\tilde x_p+\sqrt{-d}\tilde y_p)=v_\fP(\tilde x_p-\sqrt{-d}\tilde y_p) = \frac{1}{2}v_\fP(g)=v_p(g). } It follows that \aln{ \psi_{{K_\Pin}/E}(\tilde f_E[(x_p,y_p)]) &= \psi_{{K_\Pin}/E}((\tilde x_p+\sqrt{-d}\tilde y_p)_\fP)\\ &= (-1)^{v_\fP (\tilde x_p+\sqrt{-d}\tilde y_p)} = (-1)^{v_p(g)}, } that is, $(-1)^{s_1}$ for $p=t-1$ and $(-1)^{s_2}$ for $p=t^2-t-1$. \item If $\fracn{-d}{p}=1$ then by \eqref{eq_psi_1} we can distinguish the following two cases. \enmt{[(i)] \item If $\fracn{t^2-t-1}{p}=1$, i.e. $x^2-(t^2-t-1)\equiv0 \pmod p$ is solvable, then $\psi_{{K_\Pin}/E}(p_\fP)=\psi_{{K_\Pin}/E}(p_{\bar\fP})=1$ and $\psi_{{K_\Pin}/E}(\tilde f_E[(x_p,y_p)])=1$. \item If $\fracn{t^2-t-1}{p}=-1$, i.e. $x^2-(t^2-t-1)\equiv0 \pmod p$ is not solvable, then $\psi_{{K_\Pin}/E}(p_\fP)=\psi_{{K_\Pin}/E}(p_{\bar\fP})=-1$. It follows that \aln{ \psi_{{K_\Pin}/E}(\tilde f_E[(x_p,y_p)]) &= \psi_{{K_\Pin}/E}((\tilde x_p+\sqrt{-d}\tilde y_p)_\fP) \psi_{{K_\Pin}/E}((\tilde x_p-\sqrt{-d}\tilde y_p)_{\bar\fP})\\ &= (-1)^{v_\fP(\tilde x_p+\sqrt{-d}\tilde y_p) +v_{\bar\fP}(\tilde x_p-\sqrt{-d}\tilde y_p)}=(-1)^{v_p(g)}, } since \aln{ v_\fP(\tilde x_p+\sqrt{-d}\tilde y_p) &+ v_{\bar\fP}(\tilde x_p-\sqrt{-d}\tilde y_p)\\ &= v_p(\tilde x_p+\sqrt{-d}\tilde y_p) +v_p(\tilde x_p-\sqrt{-d}\tilde y_p)=v_p(g). } } \item If $\fracn{-d}{p}=-1$ then $p$ is inert in $E/F$. By \eqref{eq_psi_1} we have $\psi_{{K_\Pin}/E}(\tilde f_E[(x_p,y_p)])=1$. \item At last if $p=\infty$, since ${E_\Pin^\tm}\subseteq{\Xi_\Pin}$, we have $\psi_{{K_\Pin}/E}(\tilde f_E[(x_\infty,y_\infty)])=1$. } Putting the above argument together, we know the Artin condition is \eq{\label{eg.eq_artin} s_1+s_2+\sum_{p\in D}v_p(g)\equiv0\pmod2. } The proof is complete if we put the local conditions (1) and (2) and the Artin condition \eqref{eg.eq_artin} together. }
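To make the criterion concrete, we include a short computational sketch in Python (entirely our own illustration; it is not code from \cite{maciak2010primes} or any other cited work). It evaluates the Legendre symbols via Euler's criterion, $\fracn{h}{p}\equiv h^{(3^{\deg p}-1)/2} \pmod p$, and checks conditions (2) and (3) for a given factorization of $g$; condition (1) at the two ramified primes must be checked separately. \begin{verbatim}
# Minimal sketch: polynomials over F_3 as little-endian coefficient tuples.
P = 3

def trim(f):
    f = [c % P for c in f]
    while f and f[-1] == 0:
        f.pop()
    return tuple(f)

def polymul(f, g):
    r = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            r[i + j] = (r[i + j] + a * b) % P
    return trim(r)

def polymod(f, m):
    f = list(f)
    while len(f) >= len(m):
        c = f[-1]                        # m is monic, so no inversion needed
        s = len(f) - len(m)
        for i, a in enumerate(m):
            f[s + i] = (f[s + i] - c * a) % P
        f = list(trim(f))
    return trim(f)

def legendre(h, p):
    """(h/p) for monic irreducible p and h coprime to p, by Euler's criterion."""
    r, base, e = (1,), polymod(h, p), (P ** (len(p) - 1) - 1) // 2
    while e:                             # square-and-multiply mod p
        if e & 1:
            r = polymod(polymul(r, base), p)
        base = polymod(polymul(base, base), p)
        e >>= 1
    return 1 if r == (1,) else -1

p1 = (2, 1)                              # t - 1
p2 = (2, 2, 1)                           # t^2 - t - 1
minus_d = polymul((2,), polymul(p1, p2))  # -(t-1)(t^2-t-1), since -1 = 2 in F_3

def conditions_2_and_3(s1, s2, factors):
    """factors: list of (p_i, m_i); condition (1) is not checked here."""
    total = s1 + s2
    for p, m in factors:
        if m % 2 == 1 and legendre(minus_d, p) != 1:
            return False                 # condition (2) fails
        if legendre(minus_d, p) == 1 and legendre(p2, p) == -1:
            total += m                   # p belongs to the set D
    return total % 2 == 0                # condition (3)
\end{verbatim} For instance, with $g=(t-1)(t^2-t-1)$, i.e. $s_1=s_2=1$ and no further prime factors, the call \texttt{conditions\_2\_and\_3(1, 1, [])} returns \texttt{True}, in accordance with condition (3).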
\section{Introduction} A wealth of studies have explored the evolution of primordial circumstellar disks in star forming regions. These mostly focused on single systems, since they are easier to observe and, from a theoretical point of view, less complex than multiple systems and hence easier to model. However, 42\% of all late-type field stars are bound in binary systems \citep{fis92} and the binary frequency is even higher for stars of solar mass and above \citep{duq91}. Assuming that most stars are formed in clusters, these numbers are lower limits for the initial binary frequency shortly after formation, because tidal interaction in the cluster will disrupt many systems. It can hence be assumed that binaries constitute the most important channel of star formation. Disk evolution in binaries is controversially discussed in the literature. The total mass of a disk is likely reduced as long as the binary separation is less than $\sim$3 times the typical disk size in the star forming region in focus \citep{arm99}. This seems to agree with observational studies \citep[e.g.][]{bou06,mon07,cie09} that show that the frequency of disks in binary systems is significantly lower than that in single stars for systems separated by 100\,AU and less---a hint at an overall faster evolution of circumstellar disks in binaries, since, in addition to the dynamically removed outer parts, the innermost regions, as measured by means of accretion and hot dust, are also missing. Other studies, like the mid-IR observations of \citet{pas08}, however, detect no difference in disk evolution between single stars and binaries. Primordial disks are not only indicators for the formation of the star itself, but also contain the material for the formation of planetary systems. If disks in binaries evolve differently from those in single stars, then the properties of the population of planets found in binaries will likely reflect those differences. Today, more than 43 planets are known to orbit one of the components of a binary system \citep{mug09}, most of which are separated by 30\,AU and more. Interestingly, systems with both components orbited by their own planet are disproportionately rarely observed---to the knowledge of the authors no such system has been published so far. Although not entirely free from observational selection effects, this suggests a differential evolution of the individual disks of a binary system. Investigating the presence and appearance of disks around the higher- and lower-mass components of a large number of binaries will hence help us to understand the occurrence of planets in binaries and to constrain the planet formation process in terms of the available formation time. \section{Methods \& Goals of the Project} This contribution describes a study of the Orion Nebula Cluster (ONC) investigating a sample of 22 binaries for signs of accretion and dust disk presence around each separate component. Our observations of binaries in the ONC will address, among others, the following questions: \begin{itemize} \item What is the frequency of circumstellar disks in primaries and secondaries of young binaries? \item Do disks around secondaries disappear sooner than around primaries? \item How do disks evolve in the presence of a stellar companion? \end{itemize} In order to approach the answers to the above questions we are using photometric as well as spectral information on the individual components of all target binaries.
$JHK$ photometry allows us to assess near infrared excess emission and hence the presence of warm dust in an optically thick inner disk. Spectroscopy in the $K$-band enables us to determine spectral types as well as to identify the actively accreting binary components of the sample. The latter is achieved through measuring the strength of Brackett gamma (Br$\gamma$) emission produced when accreting from the inner disk. \section{Targets \& Observations} \subsection{Sample Selection} The Orion Nebula Cluster is one of the nearest young star clusters \citep[414$\pm$7\,pc;][]{men07}. It is $\lesssim$1($\pm2$)\,Myr in age \citep{hil97} and has been intensively investigated for its circumstellar disks \citep[e.g.][]{hil98,da_10} and its binary content \citep{sim99,khl06,rei07}. The ONC hence features excellent conditions to study the evolution of circumstellar disks in binaries and allows us to compare our findings to results from single stars. We selected 22 visual binaries in the ONC. The observed projected separations range from 0.25 to 1.1\,arcsec, which corresponds to roughly 100--400\,AU at the distance of the ONC. Magnitude differences of the binary components range from 0.1 to $\sim$3\,mag in $H$ and $K$-band. Since all of the targets are likely to be members of the ONC \citep{rei07} and the separations are small, we can assume that the targeted binaries are gravitationally bound companions. Membership and hence physical binarity is further supported by analysis of the observed spectra and photometry, which suggest late spectral types at moderate luminosity, ruling out background giants. \subsection{NIR Photometry \& Spectroscopy} Due to their close separations, all target binaries were observed with the help of Adaptive Optics (AO). We imaged all targets with VLT/NACO in $J$ and $H$ bands (see Fig.~\ref{fig1}), which will be combined with data from the literature to provide full $JHK$ photometry of all separate binary components in the sample. Furthermore, AO-assisted spectroscopy of all targets was taken with NACO (R$\sim$1400, 16 sources) and Gemini/NIFS (R$\sim$5000, 6 sources), providing separate $K$-band spectra of all binary components. \begin{figure}[t] \plotone{daemgen_s_fig1.eps} \caption{\label{fig1}NACO $H$-band images of 9 of our 22 binary targets in the ONC.} \end{figure} \subsection{Reduction} All spectra and images were reduced and extracted with custom {\em IDL} and {\em IRAF} procedures. Telluric correction was achieved through division by telluric standard stars which were observed close in time and at an airmass similar to that of each target. All telluric standards are of spectral type B0--B9. Hence, in order to preserve the intrinsic Br$\gamma$ information of the target components, the telluric standard spectra had to be cleaned of the significant Br$\gamma$ absorption feature. This was achieved by fitting a Moffat line model to the absorption feature and dividing it out, thus removing the Br$\gamma$ line before the telluric division. In order to derive spectral type, extinction, and veiling for each component, the spectra were matched with template spectra from the IRTF Spectral Library \citep{ray09,cus05}. Best estimates were found from a 3-parameter least-$\chi^2$ fit (Fig.~\ref{fig2}). \begin{figure}[t] \plotone{daemgen_s_fig2.eps} \caption{\label{fig2}In red: NACO spectra of both components of one of our target binaries. In black: artificially extincted and veiled template spectra \citep{ray09,cus05} used to find spectral types, extinction A$_\mathrm{V}$, and veiling r$_\mathrm{K}$.
The best fit spectral types are noted on the right and the best A$_\mathrm{V}$ and r$_\mathrm{K}$ are reported in the box in the lower left.} \end{figure} \section{First Results: Brackett Gamma Emission Statistics} In the following, we will discuss only the spectroscopy results from a subset of the 22 observed targets. The reduced dataset consists of 16 fully reduced NACO spectra and will be complemented by the rest of the targets and observational modes as soon as the data are coherently reduced and extracted. Fig.~\ref{fig3} shows a census of Br$\gamma$ emission, as an indicator of ongoing accretion, observed in the components that were targeted with NACO. Components with significant Br$\gamma$ emission (peak to noise ratio $>$3) were considered accreting. Typical measured equivalent widths of successful detections were in the range of 0.5 to 5\,\AA, although one strong emitter was also detected with W(Br$\gamma$)=18.5$\pm$2.6\,\AA. We see a clear preference for systems with non-accreting components (9 out of 16 binaries). Mixed pairs with one accreting component as well as systems with both components accreting are relatively rare. From these statistics we can draw the following conclusions: \begin{figure}[t] \plotone{daemgen_s_fig3.eps} \caption{\label{fig3}Accretion activity, as assessed through Br$\gamma$ emission, in the primary and/or secondary component of the 16 binaries observed with NACO spectroscopy. A thorough discussion of the detection limits is presented in Daemgen et al. (2011, {\em in prep.}).} \end{figure} \subsection{Accretion disks exist in primaries and/or secondaries.} As was observed in other studies of various star-forming regions \citep[e.g.][]{mon07,pra03}, we find all combinations of accreting and non-accreting primary and secondary components. Furthermore, we observe a preference for neither the higher-mass (typically the primary) nor the lower-mass component to be the one more likely to exhibit an accretion disk. \subsection{ONC binaries are less frequently accreting than single stars.} We see that most of the binary components do not show significant signs of accretion. Assuming that, as a consequence of the star formation process, initially all target components were in a state of accretion, most of the components---typically both components of a binary---stop accreting within the short lifetime of the ONC, i.e.\ $\sim$1\,Myr. The statistics of Br$\gamma$ emission in the components (10 out of 32) would imply an accretion disk frequency of only $\sim$31$\pm$10\%. When compared to the disk frequency of single systems in the ONC \citep[55\%--90\%;][]{hil98} it seems that a binary companion forces accretion to stop earlier than it would in a single system. We caution that the above frequency of binary accretion disks cannot directly be compared to the single star disk frequency, since the latter was measured by means of infrared excess and not accretion. The 31\% accretion disk frequency calculated above can, however, serve as a lower limit for the disk frequency in binaries: \citet{fed10} show that accretion disks in star forming regions are less frequently observed than the infrared excess (indicating hot circumstellar dust) of stars in the same regions. For associations younger than 5\,Myr, the accretion disk frequency is on average 9\% lower than the disk frequency detected by infrared excess.
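The counting statistics used above (and the random-pairing expectations used in Sect.~\ref{sec4.3} below) follow from simple binomial estimates; the following short Python sketch, assuming purely binomial errors and random pairing, reproduces them.
\begin{verbatim}
# Binomial estimate of the accretion disk frequency (10 of 32 components)
# and the random-pairing expectations for the 16 binaries of Sect. 4.3.
from math import sqrt

n_acc, n_comp, n_bin = 10, 32, 16
f = n_acc / n_comp                      # ~0.31
sigma = sqrt(f * (1.0 - f) / n_comp)    # ~0.08, of the order of the quoted 10%
both = n_bin * f**2                     # ~1.6, close to the ~1.5 quoted
mixed = n_bin * 2.0 * f * (1.0 - f)     # ~6.9, i.e. the ~7 quoted
print(f"f = {f:.2f} +/- {sigma:.2f}; expect {both:.1f} double, {mixed:.1f} mixed")
\end{verbatim}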
Additionally, Br$\gamma$ emission through accretion is considerably fainter than the simultaneously produced H$\alpha$ emission, the standard accretion measure also used by \citet{fed10}. Hence, objects might exist in the sample which would be detected as accreting by means of H$\alpha$ but not with Br$\gamma$, if the equivalent width is buried in the noise. The equivalent width of Br$\gamma$ can be estimated to be about $1/4$ of the H$\alpha$ value (compare \citealt{edw94} and \citealt{naj96}). Hence, an H$\alpha$ equivalent width of 10\,\AA\ implies $W($Br$\gamma)\approx2.5$\,\AA. This is within the detection limits of most of our target components. Accordingly, although accreting targets might be missed by our Br$\gamma$ measurement, their number should be small. Taking into account these considerations, the data would still point to an underrepresentation of accretion disks in binaries when comparing their frequency (31$\pm$10\%) to the first order estimation of the accretion disk fraction of single stars in the ONC ($\equiv$ dust excess frequency reduced by 9\%, i.e.\ 46\%--81\%). \subsection{\label{sec4.3}Binarity {\em does} influence disk evolution even for separations $>$100\,AU.} Our ONC sample consists of binaries with separations of 100--400\,AU. Models \citep{may05} and results from other star-forming regions \citep[Ophiuchus, Taurus;][]{duc10} typically predict significant differences between disk evolution in single stars and in binaries only for close systems ($<$100\,AU). However, the statistics in Fig.~\ref{fig3} are not compatible with a sample composed of unrelated, randomly paired sources: assuming a disk frequency of 31\%, we statistically expect to find $\sim$1.5 sources with both components accreting and $\sim$7 sources with one component accreting. Despite the low-number statistics, this is not what we observe in our sample; rather, pairs of non-accreting components appear to be preferred. This finding, in addition to the overall low disk frequency, strongly supports an impact of binarity on disk evolution even in systems as wide as 100--400\,AU. \section{Outlook} This is the {\em largest and most complete spatially resolved spectroscopic study of sub-arcsecond Orion binaries} to date. The assessment of spatially resolved $JHK$ photometry and NIR spectroscopy will allow us to infer not only the presence of disk accretion as described above, but also that of hot circumstellar dust, as well as the spectral type, mass, age, and luminosity of each individual component. The complete dataset will diagnose the differential evolution of disks in ONC binaries, complement multi-region studies, and provide input to binary disk evolution and planet formation theories.
\begin{document}

\begin{titlepage}
\title{A D-brane inspired Trinification model}
\author{G. K. Leontaris${}^{*}$ and J. Rizos\\[1.0cm]
{\normalsize\sl Theoretical Physics Division, Ioannina University, GR-45110 Ioannina, Greece}}
\maketitle
\begin{abstract}
We describe the basic features of model building in the context of intersecting D-branes. As an example, a D-brane inspired construction with $U(3)_C\times U(3)_L\times U(3)_R$ gauge symmetry is proposed, the analogue of the Trinification model, and its unification properties and some low-energy implications for the fermion masses are analysed.
\end{abstract}
\vfill
{\it ${}^{*}$Talk presented at the ``Corfu Summer Institute'', Corfu, Greece, September 4--14, 2005}
\end{titlepage}

\section{Introduction} Model building establishes the connection between the mathematical formulation of a physics theory and the known (experimentally discovered) world of elementary particles. In High Energy Physics the ``known world'' is defined as the one described by the Standard Model (SM).
The SM spectrum consists of three flavors of LH lepton doublets and quarks, the corresponding RH electrons and up and down quarks, plus the gauge bosons and the Higgs field (although the latter has not yet been discovered). During the last three decades we have learned a lot from the attempts to extend the SM. As early as 1974, the observation that gauge couplings converge at large scales suggested unification of the three forces at a scale $M_U\sim 10^{15}$ GeV. This observation led to the incorporation of the SM into a gauge group with higher symmetry, e.g.\ $SU(5)$, etc.\ (Grand Unification). A number of Grand Unified Theories (GUTs) (the Pati-Salam model, the SO(10) GUT, etc.)\footnote{For an early review see \cite{Langacker:1980js}} also predicted the existence of the RH neutrino. Its existence can lead to a light Majorana mass (through the see-saw mechanism), which is now confirmed by the present experimental data. Next, the incorporation of supersymmetry~\cite{Nilles:1983ge} into the game of unification had a big success: the solution of the hierarchy problem, which was fatal in all non-supersymmetric GUTs. However, the cost to pay was the doubling of the spectrum, with the inclusion of superpartners and many arbitrary parameters. For example, the number of arbitrary parameters one counts in the Minimal Supersymmetric Standard Model (MSSM) is well above 100. String theory model building went a bit further: for instance, one could calculate the Yukawa couplings from the first principles of the theory~\cite{Leontaris:1999ce}. Non-renormalizable contributions of the form $\epsilon^n Q u^c h$, $\epsilon=\frac{\langle\phi\rangle}{M_S}$, (and similar terms for the other masses), where $\phi$ is a singlet and $M_S$ the string scale, were also calculated to any order. This made it possible to determine the structure of the fermion mass matrices in terms of a few known expansion parameters\cite{Ibanez:1994ig}. This kind of mass texture gives rise to successful hierarchies of the form $m_u:m_c:m_t$ $\sim \epsilon^6:\epsilon^2:1$ up to order one coefficients (and similarly for the other fermions). A number of other phenomenological problems, however, were not resolved even in the string context. For example, there was no systematic way of eliminating baryon violating operators of any dimension, since, even if they are absent at tree level (due to the existence of a possible symmetry), they generally appear in higher order corrections. Another aesthetically and experimentally unpleasant fact is that the string scale, in all models obtained from the heterotic string theory, is of the order of the Planck scale~\cite{Kaplunovsky:1987rp}. Moreover, the non-discovery of the superpartners at the energies where they were expected is another headache. It now appears that many of the above unanswered questions and puzzles might have a solution in models built in the context of branes immersed in higher dimensions~\cite{Polchinski:1996na}. Indeed, the models built in this context offer possibilities for solving the above problems: a class of them allows a low unification scale of the order of a few TeV~\cite{Bachas:1998kr}, so that supersymmetry is not necessary since there is no hierarchy problem; further, in certain cases (type I string theory), the presence of internal magnetic fields\cite{Bachas:1995ik} provides a concrete realization of split supersymmetry~\cite{Antoniadis:2004dt}, so that intermediate or higher string scales are also possible in this case\cite{split}.
Further, the gauge group structure obtained in these models contains $U(1)$ global symmetries, one of them associated with the baryon number, so that baryon number violation is prohibited to all orders in perturbation theory. \section{Intersecting branes} In the construction of D-brane models the basic ingredient is the brane stack, i.e.\ a certain number of parallel, almost coincident D-branes. A single D-brane carries a $U(1)$ gauge symmetry which is the result of the reduction of the ten-dimensional Yang-Mills theory. A stack of $N$ parallel branes gives rise to a $U(N)$ gauge theory, where the gauge bosons correspond to open strings having both their ends attached to branes of the stack. The compact space is taken to be a six-dimensional torus $T^6=T^2\times T^2\times T^2$; however, for simplicity, let us first consider the intersections on a single $T^2$. D-branes in flat space lead to non-chiral matter. Chirality arises when they are wrapped on a torus. In this case chiral fermions sit at singular points in the transverse space, while the number of fermion generations, and of the other fermions arising at brane intersections, is related to the two distinct numbers of wrappings of the branes around the two circles $R_1, R_2$ of the torus~\cite{Blumenhagen:2000wh,Aldazabal:2001cn}\footnote{This picture is the dual of D-branes with magnetic flux~\cite{Bachas:1995ik}}. Consider intersecting D6-branes filling the four-dimensional space-time with $(n,m)$ wrapping numbers. (This means that the brane wraps $n$ times around the circle of radius $R_1$ and $m$ times around that of radius $R_2$.) For two stacks $N_a, N_b$, we denote the respective wrappings by $(n_a,m_a)$ and $(n_b,m_b)$. The gauge group is $U(N_a)\times U(N_b)$, while the fermions (which live at the intersections) belong to the bi-fundamental $(N_a,\bar N_b)$ (or $(\bar N_a, N_b)$). We may also obtain representations in $(N_a,N_{b^*})$ from strings attached to the branes $a,b^*$, where $b^*$ is the mirror of the $b$-brane under the $\Omega\,R$ operation, $\Omega$ being the world-sheet parity operation and $R$ a geometrical action. The number of the intersections on the two-torus is given by $I_{ab}=n_am_b-m_an_b$ for $(N_a,\bar N_b)$, and $I_{ab^*}=n_am_b+m_an_b$ for $(N_a,N_{b^*})$. These also equal the number of the chiral fermion representations obtained at the intersections. Additional pairs can be constructed by acting on the vector $\vec v=(n,m)$ with $SL(2,Z)$. The latter preserves the intersection numbers, and therefore it also preserves the number of chiral fermions. The $SL(2,Z)$ elements act as \begin{eqnarray} \left(\begin{array}{c} n'\\m'\\\end{array}\right)&=&\left(\begin{array}{cc} a&b\\c&d\\\end{array}\right)\,\left(\begin{array}{c}n\\m \\\end{array}\right) \end{eqnarray} (Since $ad-bc=1$, it follows that $n_a'm_b'-m_a'n_b'=n_am_b-m_an_b$.) The gauge couplings of the theory are given as follows: let $g$ be the metric on the torus \begin{eqnarray} g&=&(2\pi)^2\left(\begin{array}{cc} R_1^2&R_1R_2\cos\theta\\ R_1R_2\cos\theta&R_2^2\\ \end{array} \right) \label{2tmetric} \end{eqnarray} where $\theta$ is the angle between the two vectors defining the torus lattice.
The length of the $\vec v=(n,m)$ wrapping is then $\ell_{nm}=\sqrt{g_{ab}v^av^b}$, i.e., \begin{eqnarray} \ell_{nm}&=&2\pi\sqrt{n^2R_1^2+m^2R_2^2+2nmR_1R_2\cos\theta} \label{lengthnm} \end{eqnarray} The gauge coupling $g_a$ of the $a^{th}$ group is given by \begin{eqnarray} \frac{4\pi^2}{g_a^2}&=&\frac{M_S}{\lambda_{II}}\ell_{n_am_a} \end{eqnarray} where $M_S$ is the string scale, $\lambda_{II}$ is the type-II string coupling and $\ell_{n_am_a}$ is given by (\ref{lengthnm}). Yukawa coefficients are also calculable in these constructions in terms of geometric quantities (areas) on the torus\cite{Blumenhagen:2000wh,Aldazabal:2001cn}. For example, the size of the Yukawa coupling $y_{ijk}$ for a square torus is \begin{eqnarray} y_{ijk}&=& e^{-\frac{R_1R_2}{\alpha'}A_{ijk}} \end{eqnarray} where $A_{ijk}$ is the area of the world-sheet connecting three vertices, scaled by the area of the torus. We may generalize these results by considering compactifications on a 6-torus factorized as $T^6=T^2\times T^2\times T^2$. For example, if we denote by $(n_a^i,m_a^i)$ the wrapping numbers of the D6$_a$ brane around the $i^{th}$ torus, the number of intersections is given by the product of the intersections in each of them \begin{eqnarray} I_{ab}&=&\prod_{i=1}^3(n_a^im_b^i-n_b^im_a^i), \;\; {\rm for} \;(N_a,\bar N_b) \\ I_{ab^*}&=&\prod_{i=1}^3(n_a^im_b^i+n_b^im_a^i), \;\; {\rm for} \;(N_a, N_b^*) \end{eqnarray} while cancellation of the $U(N)$ anomalies requires that the spectrum should satisfy $\sum_b\,I_{ab}N_b=0$. To satisfy this criterion, one usually has to add additional matter states. For example, the fulfilment of this requirement in deriving the Standard Model in~\cite{Ibanez:2001nd} led to the introduction of the right-handed neutrinos. We should note that further consistency conditions, such as the cancellation of RR-tadpoles in theories with open string sectors~\cite{Blumenhagen:2000wh}, should be satisfied by the $n_a^i,m_a^i$.\footnote{See for example the recent review~\cite{Uraga} and references therein.} Since D-brane constructions generate $U(N)$ symmetries, with $U(N)_a\ra SU(N)_a\times U(1)_a$, we conclude that several $U(1)$ factors appear in these models. For example, the derivation of the SM may be obtained from a set of $U(3)$, $U(2)$ and several $U(1)$ brane stacks, leading to the symmetry\cite{Antoniadis:2002en,Gioutsos:2005uw} \begin{eqnarray} {U(3)}_C\times{U(2)}_L\times\prod_{i=1}^n{U(1)}^i= {SU(3)}_C\times{SU(2)}_L\times U(1)_C\times U(1)_L\times\prod_{i=1}^n{U(1)}^i \label{ggg} \end{eqnarray} Fermion and Higgs representations are charged under $U(1)_{C,L,i}$. Some of these $U(1)$ symmetries play a particular role in the low energy effective theory. For example, $U(1)_C$ is related to baryon number, since all quarks carry the same $U(1)_C$-charge. Thus, all global symmetries of the SM are gauge symmetries in the context of D-brane constructions. Further, the $U(1)$ factors have mixed anomalies with the non-abelian groups $SU(N_a)$ given by $A_{ab}=(I_{ab}-I_{a^*b})N_a/2$. It happens that one linear combination remains anomaly free and contributes to the hypercharge generator, which in general is a linear combination of the form \begin{eqnarray} Q_Y&=&\sum_m^{n+2} c_m\,Q_m \end{eqnarray} where the $c_m$ are to be specified in terms of the hypercharge assignment of the particle spectrum of a given D-brane construction. The remaining $U(1)$ combinations carry anomalies which are cancelled by a generalized Green-Schwarz mechanism, and the corresponding gauge bosons become massive.
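The counting rules above are elementary to put into code. The following Python sketch, with arbitrary illustrative wrapping numbers and radii (not those of a specific model), evaluates the wrapping length and the $T^6$ intersection numbers, and checks the $SL(2,Z)$ invariance.
\begin{verbatim}
# Illustrative check of the formulas above; all wrapping numbers and radii
# are arbitrary examples, not those of a specific model.
import numpy as np

def length(n, m, R1, R2, theta):
    """l_nm = 2*pi*sqrt(n^2 R1^2 + m^2 R2^2 + 2 n m R1 R2 cos(theta))."""
    return 2*np.pi*np.sqrt(n**2*R1**2 + m**2*R2**2 + 2*n*m*R1*R2*np.cos(theta))

def I(wa, wb, sign=-1):
    """I_ab (sign=-1) or I_ab* (sign=+1) on T^2 x T^2 x T^2."""
    return int(np.prod([na*mb + sign*nb*ma for (na, ma), (nb, mb) in zip(wa, wb)]))

wa = [(1, 1), (1, 0), (1, -1)]        # (n^i, m^i) for stack a
wb = [(1, 0), (0, 1), (1, 1)]         # for stack b
print(length(1, 1, 1.0, 2.0, np.pi/2), I(wa, wb), I(wa, wb, sign=+1))

# An SL(2,Z) action on one torus (det = 1) preserves the intersection number:
g = np.array([[2, 1], [1, 1]])
va, vb = np.array(wa[0]), np.array(wb[0])
wa2, wb2 = g @ va, g @ vb
assert va[0]*vb[1] - va[1]*vb[0] == wa2[0]*wb2[1] - wa2[1]*wb2[0]
\end{verbatim}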
These symmetries, however, persist in the low-energy theory as global symmetries. \begin{figure}[h] \centering \includegraphics[scale=.4]{T2wrapping.eps} \caption{Schematic representation of a $(1,1)$ D-brane wrapping on a $T^2$ torus.} \label{t2w} \end{figure} \section{A $U(3)^3$ brane inspired model} We now present a specific non-supersymmetric example with gauge symmetry $U(3)^3$, which can be considered as the analogue of the ``Trinification'' model proposed a long time ago~\cite{Glashow:1984gc,Rizov:1981dp}.\footnote{For further explorations, as well as supersymmetric and string versions of the Trinification model, see \cite{Babu:1986gi}--\cite{Choi:2003ag}.} We will therefore describe here the steps one has to follow for a viable D-brane construction. To generate this group one needs three stacks of D-branes, each stack containing 3 parallel, almost coincident branes in order to form the $U(3)$ symmetry. We write the complete gauge symmetry as $$ U(3)_C\times U(3)_L\times U(3)_R,$$ so that the first $U(3)$ is related to the $SU(3)$ color, the second involves the weak $SU(2)_L$ and the third is related to a possible intermediate $SU(2)_R$ gauge group. Since $U(3)\ra SU(3)\times U(1)$, we conclude that, in addition to the $SU(3)^3$ gauge group, the D-brane construction also contains three extra abelian $U(1)$ symmetries. The $U(1)$ symmetry obtained from the color $U(3)_C\ra SU(3)_C\times U(1)_C$ is related to the baryon number~\cite{Antoniadis:2002en}. All baryons have the same charge under $U(1)_C$ and consequently this $U(1)$ is identified with a gauged baryon number symmetry. There are two more abelian factors from the chains $U(3)_L\ra SU(3)_L\times U(1)_L$, $U(3)_R\ra SU(3)_R\times U(1)_R$, so the final symmetry can be written \begin{eqnarray} SU(3)_c\times SU(3)_L\times SU(3)_R\times U(1)_C\times U(1)_L\times U(1)_R\label{333111} \end{eqnarray} The abelian $U(1)_{C,L,R}$ factors have mixed anomalies with the non-abelian $SU(3)^3$ gauge part, which are determined by the contributions of the three fermion generations. There is an anomaly free combination, namely~\cite{Leontaris:2005ax} \begin{eqnarray} U(1)_{{\cal Z}'}&=& U(1)_C+U(1)_L+U(1)_R\label{aas} \end{eqnarray} which contributes to the hypercharge, while the two remaining combinations are anomalous; these anomalies are cancelled by a generalized Green-Schwarz mechanism. \begin{figure}[h] \centering \includegraphics[scale=.9]{u333brane.eps} \caption{Schematic representation of a $U(3)_C\times U(3)_L\times U(3)_R$ D-brane configuration and the matter fields of the model.} \label{u333} \end{figure} The possible representations which arise in this scenario should accommodate the standard model particles and the necessary Higgs fields to break the $U(3)^3$ symmetry down to the SM. The spectrum of a D-brane model involves two kinds of representations: those obtained when the two string ends are attached to two different branes, and those with both ends on the same brane stack. In figure \ref{u333} we show the minimum number of irreps required to accommodate the fermions and appropriate Higgs fields.
Under (\ref{333111}), the states obtained from strings attached to two different branes have the following quantum numbers\footnote{A schematic representation of the intersections on a $T^2$ torus which result in three fermion families is shown in figure \ref{u333int}; in a realistic scenario, however, one should solve the complete system of equations for all states arising in such constructions on $T^2\times T^2\times T^2$.} \begin{eqnarray} {\cal Q^{\hphantom{c}}}&=&(3,\bar 3,1)_{(+1,-1,\hphantom{+}0)}\label{QL}\\ {\cal Q}^c&=&(\bar 3,1,3)_{(-1,\hphantom{+}0,+1)}\label{QR}\\ {\cal L}^{\hphantom{c}}&=&(1,3,\bar 3)_{(\hphantom{+}0,+1,-1)}\label{Le}\\ {\cal H}^{\hphantom{c}}&=&(1,3,\bar 3)_{(\hphantom{+}0,+1,-1)}\label{Hi} \end{eqnarray} while the states arising from strings with both ends on the same 3-stack are \begin{eqnarray} {\cal H}_{\cal L}&=&(1,3,1)_{(0,-2,0)}\label{HL} \\ {\cal H}_{\cal R}&=&(1,1,3)_{(0,0,-2)}\label{HR} \end{eqnarray} Under $ SU(3)_L\times SU(3)_R$ $\ra$ $\left[SU(2)_L\times U(1)_{L'}\right]\,\times\,\left[U(1)_{R'}\times U(1)_{\Omega}\right] $ and the $U(1)_{{\cal Z}'}$, we employ the hypercharge embedding \begin{eqnarray} Y=-\frac{1}{6} X_{L'}+\frac 13 X_{R'}+\frac 16{\cal Z}'\label{HC} \end{eqnarray} where $X_{L'}, X_{R'},{\cal Z}'$ represent the generators of the corresponding $U(1)$ factors. Under the symmetry $SU(3)_C\times SU(2)_L\times U(1)_Y\times U(1)_{\Omega}$, the representations (\ref{QL})--(\ref{HR}) decompose as follows \begin{eqnarray} {\cal Q}^{\hphantom{c}}&=& q\left(3,2;\frac{1}{6},0_{\Omega}\right)+g\left(3,1;-\frac{1}{3},0_{\Omega}\right)\label{QL1}\\ {\cal Q}^c&=& d^c\left(\bar3,1;\frac{1}{3},1_{\Omega}\right)+u^c\left(\bar3,1;-\frac{2}{3},0_{\Omega}\right) +g^c\left(\bar3,1;\frac{1}{3},-1_{\Omega}\right)\label{QR1}\\ {\cal L}^{\hphantom{c}} &=& \ell^+\left(1,2;-\frac{1}{2},1_{\Omega}\right)+\ell^-\left(1,2;-\frac{1}{2},-1_{\Omega}\right) +{\ell^c}\left(1,2;+\frac{1}{2},0_{\Omega}\right)\nonumber\\ &~&+\nu^{c+}(1,1;0,1_{\Omega})+\nu^{c-}(1,1;0,-1_{\Omega})+e^c(1,1;1,0_{\Omega}) \label{L1} \\ {\cal H} &=&(1,3,\bar 3)=h^{d+}\left(1,2;-\frac{1}{2},1\right)+h^{d-}\left(1,2,-\frac{1}{2},-1\right) +{h^u}\left(1,2;\frac{1}{2},0\right)\nonumber\\ &~&+{e_H^c}(1,1;1,0)+{\nu_{H}^{c+}}(1,1;0,1)+{\nu_{H}^{c-}}(1,1;0,-1),\;\;\; \label{Hsm} \\ {\cal H}_{\cal L} &=&(1,3,1)=\hat h_L^{+}\left(1,2;-\frac{1}{2},0\right)+\hat\nu_{{\cal H}_{\cal L}}\left(1,1;1,0\right)\label{hl} \\ {\cal H}_{\cal R} &=&(1,1,3)= {\hat e_H^c}(1,1;1,0)+{\hat\nu_{{\cal H}_{\cal R}}^{c+}}(1,1;0,1)+\hat\nu_{{\cal H}_{\cal R}}^{c-}(1,1;0,-1)\label{hr} \end{eqnarray} Representation (\ref{QL1}) includes the left-handed quark doublets and an additional colored triplet with the quantum numbers of the down quark, while representation (\ref{QR1}) contains the right-handed partners of (\ref{QL1}). Further, (\ref{L1}) involves the lepton doublet, the right-handed electron and its corresponding neutrino, two additional $SU(2)_L$ doublets and another neutral state, called neutreto~\cite{Glashow:1984gc}. The Higgs sector consists of (\ref{Hsm}), which is the same representation as that of the lepton fields, and of the left and right triplets (\ref{hl}) and (\ref{hr}), respectively.
\begin{figure}[h] \centering \includegraphics[scale=.6]{333inters.eps} \caption{Schematic representation of the intersections on a $T^2$ for a three-generation $U(3)_C\times U(3)_L\times U(3)_R$ D-brane model.} \label{u333int} \end{figure} \subsection{Mass scales, symmetry breaking and Yukawa couplings} \subsubsection{Mass scales} The reduction of the ${SU(3)}^3\times{U(1)}^3$ to the SM is in general associated with three different scales corresponding to the $SU(3)_R$, $SU(3)_L$ and $U(1)_{{\cal Z}'}$ symmetry breaking. We will assume here for simplicity that the $SU(3)_{L,R}$ and $U(1)_{{\cal Z}'}$ symmetries break simultaneously at a common scale $M_R$; hence the model is characterized by only two large scales, the string/brane scale $M_S$ and the scale $M_R$. Clearly, the $M_R$ scale cannot be higher than $M_S$, i.e., $M_R\le M_S$, and the equality holds if the $SU(3)_R\times SU(3)_L$ symmetry breaks directly at $M_S$. In a D-brane realization of the proposed model, since the three $U(3)$ gauge factors originate from three brane stacks that span different directions of the higher dimensional space, the corresponding gauge couplings $\alpha_{C,L,R}$ are not necessarily equal at the string scale $M_S$. However, in certain constructions, at least two D-brane stacks can be superposed and the associated couplings are equal~\cite{Antoniadis:2002en}. In our bottom-up approach, a crucial role in the determination of the scales $M_{R,S}$ is played by neutrino physics. More precisely, in order to obtain the correct scale for the light neutrino masses, which are obtained through a see-saw mechanism and are found to be of the order $m_{\nu}\sim m_W^2/M_S$, the string scale $M_S$ should be in the range $M_S\sim 10^{13}$--$10^{15}$ GeV. In order to determine the range of $M_S,M_R$, we use as inputs the low energy data for $\alpha_3,\alpha_{em}$ and $\sin^2\theta_W$ and perform a one-loop renormalization group analysis. The cases $\alpha_L=\alpha_R$ and $\alpha_R=\alpha_C$ presented in Table \ref{ytab3a} are found to be consistent with the neutrino data. \begin{table}[!h] \centering \begin{tabular}{|c|c|l|} \hline model&$M_R/{\rm GeV}$ &$M_S/{\rm GeV}$\\ \hline $\alpha_L=\alpha_R$&$ 1.7\times 10^{9}$&$ >1.7\times 10^{9}$\\ \hline $\alpha_L=\alpha_C$&$ <2.3\times 10^{16}$&$ >2.3\times 10^{16}$\\ \hline $\alpha_C=\alpha_R$&$ <2.3\times 10^{11}$&$ >2.3\times 10^{11}$\\ \hline \end{tabular} \caption{\label{ytab3a} Upper and lower bounds for the $SU(3)_R$ breaking scale ($M_R$) and the corresponding string scale ($M_S$) for the three cases $\alpha_L=\alpha_R$, $\alpha_L=\alpha_C$ and $\alpha_C=\alpha_R$. } \end{table} In particular, we find that the case $\alpha_L=\alpha_R$ predicts a constant $M_R$, i.e., independent of the common gauge coupling $\alpha\equiv\alpha_L=\alpha_R$, with $M_S$ also in the required region. For $\alpha_R=\alpha_C$, we also obtain $M_S\ge 2.3\times10^{11}$ GeV.\footnote{The case $\alpha_L=\alpha_C$ is ruled out by the neutrino data, since it predicts $M_S> 10^{16}$ GeV.} \subsubsection{Symmetry breaking} The Higgs states (\ref{Hsm})--(\ref{hr}) are sufficient to break the original gauge symmetry $U(3)^3$ down to the Standard Model~\cite{Leontaris:2005ax}; however, according to ref.~\cite{Glashow:1984gc}, a non-trivial KM mixing and quark mass relations would require at least two Higgs fields in $(1,3,\bar3)$. We should mention, however, that in string or intersecting-brane models Yukawas are calculable in terms of geometric quantities (such as the torus area); thus, from this point of view, a second Higgs is not necessary.
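Before turning to the symmetry breaking in detail, a quick numerical check of the see-saw estimate $m_\nu\sim m_W^2/M_S$ from the mass-scale discussion above can be sketched in a few lines of Python; the value $m_W\approx 80$ GeV is an input of our choosing, not taken from the text.
\begin{verbatim}
# See-saw estimate m_nu ~ m_W^2 / M_S over the quoted string-scale range.
m_W = 80.0                           # GeV, assumed input
for M_S in (1e13, 1e14, 1e15):       # GeV
    m_nu_eV = (m_W**2 / M_S) * 1e9   # convert GeV to eV
    print(f"M_S = {M_S:.0e} GeV -> m_nu ~ {m_nu_eV:.4f} eV")
\end{verbatim}
For $M_S=10^{13}$--$10^{15}$ GeV this gives $m_\nu\sim 0.6$ eV down to $\sim 0.006$ eV, i.e.\ the sub-eV range invoked above.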
To break the symmetry and provide the various matter multiplets with masses, we assume two Higgs fields in $(1,3,\bar 3)$ and a pair ${\cal H}_{\cal L}=(1,3,1)$, ${\cal H}_{\cal R}=(1,1,3)$ with the following vevs: \begin{eqnarray} {\cal H}_1&\ra&\langle h^u_1 \rangle =u_1,\; \langle h^{d-}_{1} \rangle =u_2,\; \langle {{\nu^{c+}_{H}}_1} \rangle =U,\nonumber\\ {\cal H}_2&\ra&\langle h^u_2 \rangle =v_1,\; \langle {h^{d-}}_2 \rangle =v_2,\; \langle {h^{d+}}_2 \rangle =v_3,\; \langle {\nu^{c-}_{H}}_2 \rangle =V_1,\; \langle {\nu^{c+}_{H}}_2 \rangle =V_2\nonumber\\ {\cal H}_{\cal L}&\ra&\langle\hat \nu_{H_L}^{}\rangle=A_L\nonumber\\ {\cal H}_{\cal R}&\ra&\langle\hat\nu_{H_R}^{}\rangle=A_R\nonumber \end{eqnarray} The vevs $U,V_{1,2}$ and $A_{L,R}$ are taken to be of the order of $M_R$, while $u_{1,2}$ and $v_{1,2,3}$ are of the order of the electroweak scale. \subsubsection{Fermion masses} In the present $U(3)^3$ construction, due to the existence of the additional $U(1)_{C,L,R}$ symmetries, the following Yukawa coupling is present in the tree-level Yukawa potential \begin{eqnarray} \lambda_{Q,a}^{ij}{\cal Q}_i\,{\cal Q}_j^c\,{\cal H}_a, \;\;{a=1,2} \end{eqnarray} It can provide quark masses as well as masses for the extra triplets. For the up quarks \begin{eqnarray} m_{uu^c}^{ij}&=&\lambda_{Q,1}^{ij}u_1+\lambda_{Q,2}^{ij}v_1\label{uqm} \end{eqnarray} For the down-type quarks $d_i,d_j^c, g_i, g_j^c$, we obtain a $6\times 6$ down-type quark mass matrix in flavour space, of the form \begin{eqnarray} m_{d}&=&\left(\begin{array}{cc} m_{dd^c}&M_{gd^c}\\ m_{dg^c}&M_{gg^c}\label{mdg} \end{array} \right) \end{eqnarray} where $m_{dd^c}=\lambda_{Q,1}^{ij}\,u_2+\lambda_{Q,2}^{ij}\,v_2$ and $m_{dg^c}=\lambda_{Q,2}^{ij}\,v_3$ are $3\times 3$ matrices with entries of the electroweak scale, while $M_{gd^c}=\lambda_{Q,2}^{ij}\,V_1$, $M_{gg^c}=\lambda_{Q,1}^{ij}\,U+\lambda_{Q,2}^{ij}\,V_2$ are of the order $M_R$. The diagonalization of the non-symmetric mass matrix (\ref{mdg}) will lead to a light $3\times 3$ mass matrix for the down quarks and a heavy analogue of the order of the $SU(3)_R$ breaking scale. The extra $U(1)_{C,L,R}$ factors do not allow for a tree-level coupling for the lepton fields. The lowest order allowed leptonic Yukawa terms arise at fourth order. These are \begin{eqnarray} \frac{f_{ij}^{ab}}{M_S}\, {\cal H}_a^\dagger \,{\cal H}_b^\dagger\,{\cal L}_i\,{\cal L}_j + \frac{\zeta_{ij}}{M_S}\,{\cal H}_{\cal L} \,{\cal H}_{\cal R}^\dagger\,{\cal L}_i\,{\cal L}_j \end{eqnarray} where $f_{ij}^{ab},\zeta_{ij}$ are order one Yukawa couplings, and $a,b=1,2$. These terms provide the charged leptons with masses suppressed by a factor $M_R/M_S$ compared to the quark masses. Thus, a natural quark-lepton hierarchy arises in this model. They further imply light Majorana masses for the three neutrino species through a see-saw mechanism. All the remaining states (lepton-like doublets and neutral singlets) obtain masses of the order $M_R^2/M_S$~\cite{Leontaris:2005ax}. \section{Conclusions} In this talk, we have described the basic features of model building in the context of intersecting D-branes. As an example, we have analysed a D-brane analogue of the trinification model which can be generated by three separate stacks of D-branes. Each of the three stacks is formed by three identical branes, resulting in a $U(3)_C\times U(3)_L\times U(3)_R$ gauge symmetry for the model. Since $U(3)\ra SU(3)\times U(1)$, this symmetry is equivalent to the standard $SU(3)^3$ trinification gauge group supplemented by three abelian factors $U(1)_{C,L,R}$.
The main characteristics of the model are:

$\bullet$ The three $U(1)$ factors define a unique anomaly-free combination $U(1)_{{\cal Z}'}=U(1)_{C}+U(1)_L+U(1)_R$, as well as two other anomalous combinations whose anomalies can be cancelled by a generalized Green-Schwarz mechanism.

$\bullet$ The Standard Model fermions are represented by strings attached to two different brane stacks and belong to $(3,\bar 3,1)+(\bar 3,1,3)+(1,3,\bar 3)$ representations, as in the trinification model.

$\bullet$ The scalar sector contains Higgs fields in $(1,3,\bar 3)$ (the same representation that accommodates the lepton fields), as well as Higgs fields in the $(1,3,1)$ and $(1,1,3)$ representations, which can arise from strings with both ends attached to the same brane stack. The Higgs fields break the $SU(3)_L\times SU(3)_R$ part of the gauge symmetry down to $U(1)_{em}$; they further provide a natural quark-lepton hierarchy, since quark masses are obtained from tree-level couplings while, due to the extra $U(1)$ symmetries, charged leptons are only allowed to receive masses from fourth-order Yukawa terms.

$\bullet$ The $SU(3)_R$ breaking scale is found to be $M_R> 10^9$ GeV, while a string scale $M_S\sim 10^{13}$--$10^{15}$ GeV is predicted, which suppresses the light Majorana masses through a see-saw mechanism down to the sub-eV range, as required by neutrino physics. \vfill {\bf Acknowledgements}. {\it This research was co-funded by the European Union in the framework of the program ``Pythagoras I'' (no.~1705 project 23) of the ``Operational Program for Education and Initial Vocational Training'' of the 3rd Community Support Framework of the Hellenic Ministry of Education, funded by 25\% from national sources and by 75\% from the European Social Fund (ESF).} \newpage
\section{Introduction} In a previous paper (M\'endez et al. 1993, in what follows MKCJ93) a procedure was described for the numerical simulation of the bright end of the planetary nebula luminosity function (PNLF). The purpose was to model the PNLF in such a way as to make it possible to study the effects, on the observed bright end of the PNLF, of the following: (1) sample size; (2) time elapsed since the last episode of substantial star formation; and (3) incomplete nebular absorption of stellar H-ionizing photons. This motivation (in particular the desire to study optical thickness effects) led to the choice of a method characterized by the avoidance of nebular models for flux calculations. In fact, the spectacular images of several nearby PNs produced by the Hubble Space Telescope have recently added to the current uncertainties concerning the transition time between the AGB and the moment when the central star becomes hot enough to ionize the nebula, supporting our feeling that current nebular modeling cannot accurately predict how well the nebula is able to absorb all the H-ionizing photons from the central star, and how this ability will evolve with time. This explains our selection of an approach based, as much as possible, on {\it observed\/} properties of PNs and their central stars. In the present paper we introduce several improvements in the PNLF simulations, in preparation for the moment when better and deeper PNLFs will be obtainable with the new 8m-class telescopes now under construction. The purpose of these improved PNLF simulations is to further develop the PNLF as a tool, not only for the accurate measurement of extragalactic distances, but also for studies of the initial-to-final mass relation and related mass loss processes in luminous galaxies with and without recent star formation. There has been abundant research on the reliability of the bright end of the PNLF as a secondary distance indicator (see e.g. Jacoby 1997), showing that there is excellent agreement between Cepheid and PNLF distances. This might be interpreted as an indication that the bright end of the PNLF is not significantly affected by population characteristics. However, it is also possible (and perhaps more plausible) to argue (e.g. Feldmeier et al. 1997) that most PN searches in galaxies with recent star formation have been made in such a way as to avoid severe contamination with H\,{\sc ii} regions, discriminating in this way against the inclusion of the potentially most massive central stars, which would tend to be closer to such regions. The best way of resolving this ambiguity is through PN searches that do not discriminate against H\,{\sc ii} regions, like those in NGC 300 (Soffner et al. 1996). We believe that a reduction of the statistical noise through increased sample sizes, as well as an extension of the PNLF towards fainter magnitudes, should render population effects detectable through differences in the shape of the PNLF, as described in MKCJ93. Better simulations will allow us to predict more confidently what population effects should be detectable in practice. In parallel, the constraints derived from comparisons of simulated with observed PNLFs will test how successful we are in modeling the PNLF using an approach partly based on random numbers, and will allow us to further refine the PNLF simulations. In the end, even if population effects are not detectable, we may be able to understand why.
Sections 2 to 5 describe the improvements we have introduced in our simulations, and in Sect.\,5 we show that most PNs in any real population must leak stellar H-ionizing photons. Sect.\,6 describes a few consistency checks that have been made or can be made in principle. In Sect.\,7 we discuss the shape of the PNLF, and in Sect.\,8 we describe an attempt to determine maximum post-AGB final masses in the Large Magellanic Cloud (LMC) and in the bulge of M 31. \section{The improvements} The basic idea of the procedure for the PNLF simulation is the same as in MKCJ93, to which we refer the reader for details. We generate a set of PNs with random post-AGB ages and central star masses. The ages are given by a uniform random distribution from 0 to 30\,000 years, counting from the moment when the central star has $T_{\rm eff} = $ 25\,000 K. The central star masses are given by an exponential random distribution selected to reproduce the observed white dwarf mass distribution in our Galaxy (see Fig.\,4 in MKCJ93). More recent research keeps showing an exponential tail in the white dwarf mass distribution (see e.g. Bragaglia et al. 1995), and therefore we are confident that this feature remains valid in cases with more or less constant star formation. As we will explain later, cases without recent star formation are simulated by truncating the mass distribution at a certain maximum final mass. For each PN in the set we have, then, a pair of random numbers giving mass and age of the central star. These random numbers are input to a routine that gives the corresponding luminosity $L$ and $T_{\rm eff}$ of the central star. Our first improvement is to replace the analytical representation of post-AGB tracks used in MKCJ93 by an interpolating routine that gives a better approximation. Knowing $L$ and $T_{\rm eff}$ we calculate, using recombination theory, the H$\beta$ luminosity that the nebula would emit if it were completely optically thick in the H Lyman continuum. Then we generate a random number, subject to several conditions (derived from observations of PNs and their central stars, see MKCJ93), for the absorbing factor $\mu$, which gives the fraction of stellar ionizing luminosity absorbed by the nebula. Using the absorbing factor we correct the H$\beta$ luminosity. Our second improvement concerns the generation of suitable absorbing factors for PNs on cooling tracks. Finally, we produce [O\,{\sc iii}] $\lambda$5007 fluxes from the previously derived H$\beta$ fluxes. This is done by generating another random number, again subject to several conditions, for the intensity of $\lambda$5007 relative to H$\beta$. The introduction of more stringent conditions for this intensity ratio, derived from a larger database, is our third improvement. In what follows we describe all these changes in more detail. We believe that the description will be more easily followed if we discuss the $\lambda$5007 intensities before dealing with the absorbing factor $\mu$. Stellar masses and luminosities are expressed in solar units. \section{Improved representation of post-AGB tracks} As in MKCJ93, we base our simulations on the H-burning post-AGB tracks of Sch\"onberner (1989) and Bl\"ocker \& Sch\"onberner (1990). We have added new H-burning tracks recently published by Bl\"ocker (1995). Since we were not satisfied with the analytical representation used in MKCJ93, especially concerning ages and luminosities on the white dwarf cooling tracks, we decided to implement some interpolation procedures. 
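Before turning to the interpolation details, the random generation step recalled at the beginning of this section can be summarized in a few lines of Python; this is a minimal sketch, and the exponential-tail parameters below are placeholders of our choosing, not the MKCJ93 values.
\begin{verbatim}
# Minimal sketch of the random (mass, age) generation described above.
# m_min, m_max and beta are placeholder values, not the MKCJ93 parameters.
import numpy as np

rng = np.random.default_rng(42)
n_pn = 1000
m_min, m_max, beta = 0.55, 0.70, 0.1      # solar masses

ages = rng.uniform(0.0, 30_000.0, n_pn)   # yr, counted from Teff = 25,000 K
masses = m_min + rng.exponential(beta, n_pn)
keep = masses <= m_max                    # truncation: no recent star formation
masses, ages = masses[keep], ages[keep]
\end{verbatim}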
In addition to the tracks already mentioned, in order to guide the extrapolation for masses above 0.94 solar masses, we used an additional track for 1.2 solar masses, obtained by slightly modifying the ages in Paczynski's (1971) track so as to make them more consistent with the deceleration of the evolution that takes place along the Bl\"ocker \& Sch\"onberner 0.836 solar mass cooling track. We used the available evolutionary tracks to construct a look-up table giving log $T_{\rm eff}$ and log $L$ for 3000 ages between 0 and 30\,000 years and for 260 masses between 0.55 and 1.2 solar masses. Let us explain the construction of this look-up table; we used different methods for the required calculations in different regions of the log $L$ - log $T_{\rm eff}$ diagram. For temperatures between 25\,000 and 72\,000 K, where the tracks run almost horizontally, we plotted (1) log (age) and (2) log $L$ as functions of mass for a given $T_{\rm eff}$, using information derived from the known tracks; fitted log (age) and log $L$ curves as functions of mass; and derived age and $L$ for the 260 masses. The procedure was repeated for 40 temperatures in this region. For luminosities below log $L$ = 3.0, which is the region of the white dwarf cooling tracks, we plotted (1) log (age) and (2) log $T_{\rm eff}$ as functions of mass for a given $L$; fitted log (age) and log $T_{\rm eff}$ curves; and derived age and $T_{\rm eff}$ for the 260 masses. The procedure was repeated for 30 luminosities in this region. For the remaining region (the knees of the tracks) we produced fits along 40 straight lines radiating from a point with log $T_{\rm eff}$ = 4.86 and log $L$ = 3.0. These lines cross the tracks at approximately right angles. In this case it was of course necessary to plot log (age), log $L$ and log $T_{\rm eff}$ as functions of mass, and fit curves for the 3 parameters. From these curves we obtained age, $L$ and $T_{\rm eff}$ for the 260 masses, along each of the 40 lines we had defined. Finally, the full look-up table (log $T_{\rm eff}$ and log $L$ for 260 masses and 3000 ages) was completed for the missing ages using interpolation along each track. \begin{figure} \psfig{figure=imodf1.ps,height=8.5cm,angle=90} \caption[]{ The solid lines are post-AGB evolutionary tracks (H-burning) for 6 different central star masses, taken from Sch\"onberner (1989) and Bl\"ocker (1995). The unlabeled track is for 0.625 solar masses. We used also the track for 0.644 solar masses, but did not plot it to avoid overcrowding. The plus signs indicate central star luminosities and temperatures calculated, from our look-up table, for the same 6 masses at 100 post-AGB ages between 1 and 30\,000 years (300-yr intervals). All ages are counted from the moment when the central star has a temperature of 25\,000 K. For the 3 upper tracks we have added 30 ages between 1 and 300 yr (10-yr intervals) to obtain a better coverage of the fast evolution towards higher temperatures. } \end{figure} \begin{figure} \psfig{figure=imodf2.ps,height=8.5cm,angle=90} \caption[]{ The resulting values of luminosity and temperature for the central stars of 1500 randomly generated PNs, using the same exponential mass distribution as in MKCJ93. } \end{figure} The look-up table was used in the following way: after generating the two random numbers giving mass and age, the four neighboring values of age and mass in the table were identified, and the values for log $T_{\rm eff}$ and log $L$ were derived using bilinear interpolation. 
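The bilinear interpolation step just described is straightforward to implement; the sketch below uses a dummy table standing in for the real look-up table of log $T_{\rm eff}$ and log $L$.
\begin{verbatim}
# Sketch of the bilinear look-up described above; the table contents here
# are dummy values standing in for the real (260 x 3000) grid.
import numpy as np

mass_grid = np.linspace(0.55, 1.2, 260)       # solar masses
age_grid = np.linspace(0.0, 30_000.0, 3000)   # yr
logT = np.random.default_rng(0).uniform(4.4, 5.4, (260, 3000))  # dummy

def lookup(m, t):
    """Bilinear interpolation of log Teff at an interior point (m, t)."""
    i = np.searchsorted(mass_grid, m) - 1
    j = np.searchsorted(age_grid, t) - 1
    u = (m - mass_grid[i]) / (mass_grid[i+1] - mass_grid[i])
    v = (t - age_grid[j]) / (age_grid[j+1] - age_grid[j])
    return ((1-u)*(1-v)*logT[i, j] + u*(1-v)*logT[i+1, j]
            + (1-u)*v*logT[i, j+1] + u*v*logT[i+1, j+1])

print(lookup(0.6, 12_345.0))
\end{verbatim}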
Fig.\,1 shows examples of evolutionary tracks generated for several masses, and Fig.\,2 shows the positions of 1500 randomly generated central stars in the log $L$ - log $T_{\rm eff}$ diagram. These two figures can be compared with Figs.\,5 and 6 of MKCJ93. In addition to a better representation of the post-AGB evolutionary tracks, the new procedure has the advantage that it is not tied to a specific set of post-AGB models; it can be applied (i.e. a new look-up table can be easily generated) for any preferred set of tracks. We considered, but finally decided against, the inclusion of He-burning post-AGB tracks for our simulations. It would be good to allow for a certain percentage of such tracks, because the evolution is slower than for a H-burning star of the same mass. The basic problem is that the He-burning tracks are affected by loops and jumps produced respectively by the late thermal pulse and by the reignition of the H-shell (see e.g. Vassiliadis \& Wood 1994, or Bl\"ocker 1995). It would be necessary to make an enormous number of evolutionary calculations in order to accurately reproduce the complexities of this behavior, which is dependent on the phase, in the thermal pulse cycle, at which the star leaves the AGB. No interpolation procedure appears to be viable in this case. The situation would probably be simpler in the particular case of H-deficient central stars, which represent about 30\% of the well-observed central stars in our Galaxy (see e.g. M\'endez 1991). Obviously such stars must be He-burners, but in this case no H-shell reignition is expected, because no H is left, and the tracks may be more easily simulated. The problem is that the evolutionary status of these objects is not yet fully understood, and no reliable tracks are available. In summary, given present knowledge we are not likely to gain much from any attempt to include He-burners in our simulations, but we remark that this is a potential source of uncertainty in PNLF modeling. \section{The observed distribution of $\lambda$5007 relative intensities} One weak point in MKCJ93 was the rather schematic representation of the distribution of intensities of $\lambda$5007 relative to H$\beta$. We have generated a new distribution, by comparison with two observed distributions: one for 118 PNs in the LMC (data taken from Wood et al. 1987; Meatheringham et al. 1988; Jacoby et al. 1990; Meatheringham and Dopita 1991a, 1991b; Vassiliadis et al. 1992) and another one for 983 PNs in our Galaxy, taken from the Strasbourg-ESO Catalogue of Galactic PNs (Acker et al. 1992). In about 80 cases, where the Catalogue's $\lambda$5007 intensity was not given or was unreliable, we have taken it from other sources, listed in the Catalogue. \begin{figure} \psfig{figure=imodf3.ps,height=6.8cm,angle=90} \caption[]{ Histograms of the intensity of $\lambda$5007 relative to H$\beta$, on the scale $I$(H$\beta$) = 100. The dashed line indicates the histogram for 983 objects in our Galaxy. The other two histograms have been normalized to this number. The dotted line is the histogram for 118 LMC objects. The full line is our simulated distribution, generated as described in the text. } \end{figure} Fig.\,3 shows the observed distributions compared with our simulated distribution, which is produced by first generating a Gaussian centered at an intensity of 1000 (on the usual scale of $I$(H$\beta$) = 100) with FWHM=300.
Then, following the discussion in Section 4 of MKCJ93 about low $\lambda$5007 values (see also Stasinska 1989), we decrease the $\lambda$5007 intensity to 50\% of its randomly generated value for central stars on heating tracks which have masses smaller than 0.57 solar masses and $T_{\rm eff} > $ 75\,000 K. In addition, for central stars with temperatures below 60\,000 K we do not use the generated random number, but use instead Eq.\,(5) in MKCJ93. In this way we minimize the risk of randomly creating a PN with a $\lambda$5007 intensity which is incompatible with the properties of its central star. Finally, we must compensate for an obvious selection effect: the observed distributions in our Galaxy and in the LMC are not likely to include PNs with very low-$L$ central stars, all of which have high temperatures; therefore, before plotting our simulated distribution we eliminate all PNs with central stars fainter than log $L$ = 2.4 (see Fig.\,2). We would like to emphasize that the evolutionary tracks do not produce enough low-$T_{\rm eff}$ central stars to explain the observed number of PNs with low $\lambda$5007 intensities. In order to illustrate this point, we can use our simulations. Let us adopt a limit of 500 for the intensity of $\lambda$5007. According to Eq.\,(5) in MKCJ93, this limit corresponds to a stellar $T_{\rm eff}$ = 43\,000 K. If we restrict again our attention to PNs with central stars brighter than log $L$ = 2.4, our simulations produce 15\% of these central stars with $T_{\rm eff}$ below the limit, while 25\% of the PNs have $\lambda$5007 fainter than 500. This means that about 40\% of the PNs with faint $\lambda$5007 should have hot central stars. Since there is insufficient direct information about $T_{\rm eff}$, we have tried to test this prediction using the Strasbourg-ESO Catalogue and a recent catalogue of $\lambda$4686 line intensities (Tylenda et al. 1994). We have found that, of 275 PNs in the Strasbourg-ESO Catalogue with an intensity of $\lambda$5007 below 500, at least 20\% show the He\,{\sc ii} $\lambda$4686 nebular emission, which indicates a stellar temperature above 45\,000 K. This percentage is less than the 40\% predicted, but on the other hand it is probably a lower limit, because in some cases a weak $\lambda$4686 line may be present but not detected in the surveys (in the catalogue of Tylenda et al. we find, among the 275 PNs with weak $\lambda$5007, a further 25\% for which the upper limit of the $\lambda$4686 intensity is 5 or higher, on the scale H$\beta=100$). Therefore, although the test cannot be considered to be sufficient for any definitive conclusion, given all the uncertainties involved, we think it indicates that our simulations have produced a useful first approximation to the observed variety of PN spectra. As shown in Fig.\,3, we now reproduce the observed distributions quite well. Notice in particular the Gaussian tail towards high intensities, which was not correctly reproduced in MKCJ93 (those simulations produced an excess of PNs below 1500, and no PN above 1500). \section{The absorbing factor $\mu$: reproducing the observed PNLF to fainter magnitudes} Assume for a moment that all the PNs are completely optically thick. We can make a simulation based on this assumption, and compare the resulting PNLF with the observed PNLF of the LMC: see Fig.\,4. The failure of the completely optically thick assumption is quite evident. 
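The recipe for the $\lambda$5007 intensities described above translates directly into code. In the Python sketch below, the function eq5\_ratio is only a placeholder for Eq.\,(5) of MKCJ93, which is not reproduced here; it is pinned to the single calibration point quoted above, $I(5007)=500$ at $T_{\rm eff}=43\,000$ K.
\begin{verbatim}
# Sketch of the lambda5007/Hbeta generation described above. eq5_ratio is
# a placeholder for Eq. (5) of MKCJ93, pinned only to the point
# I(5007) = 500 at Teff = 43,000 K quoted in the text.
import numpy as np

rng = np.random.default_rng(1)
SIGMA = 300.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # Gaussian FWHM = 300

def eq5_ratio(teff):
    return 500.0 * teff / 43_000.0        # placeholder, not the real Eq. (5)

def i5007(mass, teff, on_heating_track):
    """lambda5007 intensity on the I(Hbeta) = 100 scale."""
    if teff < 60_000.0:
        return eq5_ratio(teff)            # low-Teff stars: no random number
    r = rng.normal(1000.0, SIGMA)         # Gaussian centered at 1000
    if on_heating_track and mass < 0.57 and teff > 75_000.0:
        r *= 0.5                          # low-mass, hot heating-track stars
    return r
\end{verbatim}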
The simulated PNLF, calculated for a sample size of 1000 objects (which is Jacoby's (1980) estimate for the total number of PNs in the LMC), reaches much brighter magnitudes. In order to force agreement by horizontally shifting the observed PNLF until it fits the bright end of the simulated PNLF, it would be necessary to adopt an implausible distance of 66 kpc for the LMC. An attempt to fit the observed LMC PNLF at the right distance, by {\it vertically\/} shifting the simulated PNLF, would lead to a sample size of about 300, implausibly close to the total number of known PNs in the LMC. This makes it clear that the existence of many optically thin PNs at the bright end of the PNLF is an essential feature of our simulations. The introduction of the absorbing factor $\mu$ in MKCJ93, using information independently derived from model atmosphere studies of central stars of PNs in our Galaxy (M\'endez et al. 1992), immediately led to satisfactory fits of the PNLFs of the LMC and M 31. We have not modified the procedure described in MKCJ93 for the generation of $\mu$ for PNs with high-luminosity central stars (namely, those on heating tracks). Here we just remind the reader that many, but not all, of the PNs with cooler central stars are allowed to have $\mu$=1, and that the procedure defines, for central stars on heating tracks and with temperatures above 40\,000 K, a random distribution of $\mu$ from 0.05 up to a parameter $\mu_{\rm max}$. If $\mu_{\rm max}$=1, then a small percentage of the bright PNs with hotter central stars can have $\mu$ close to 1. \begin{figure} \psfig{figure=imodf4.ps,height=6.8cm,angle=90} \caption[]{ A PNLF simulation for the LMC, assuming that all PNs are completely optically thick (for all objects $\mu$=1). The diamonds represent the observed $\lambda$5007 PNLF of the LMC, from data collected in the literature (see text, Sect.\,4). The LMC is assumed to be at a distance of 50 kpc, and we adopt an average logarithmic extinction at H$\beta$, $c$=0.19 (Soffner et al. 1996). The magnitudes fainter than $-3$ are obviously affected by severe incompleteness. We have adopted the extension towards fainter magnitudes (plus signs) from Fig.\,1 of Ciardullo (1995). The solid line is the simulated PNLF, with data binned, like the observed ones, into 0.2 mag intervals. The sample size is 1000, and the maximum final mass is 0.7 solar masses (the exponential central star mass distribution is truncated at this mass). The choice of this maximum final mass will be explained in Sect.\,8. Even with this truncation, the simulated PNLF has too many bright PNs and too few of the very faint ones. The discrepancy is solved by forcing many PNs to have small $\mu$ values, as shown in Fig.\,5. } \end{figure} Now we want to extend the simulated PNLF towards fainter magnitudes. This requires some additional information about $\mu$ for PNs with central stars on cooling tracks. It is clear from Fig.\,4 that not all PNs on cooling tracks can have $\mu=1$, because that assumption produces an enormous hump in the PNLF at $M(5007) \sim 0$, and consequently also a pronounced deficit of very faint PNs. The required information about $\mu$ cannot be easily obtained from central star studies, because in this region of the HR diagram the central stars are intrinsically faint.
We have therefore decided to adopt a simple procedure for the random generation of values of $\mu$ at low central star luminosities, and to adjust it by requiring agreement of the simulated PNLF at fainter magnitudes with the statistically complete PNLF inferred from PN observations in nearby galaxies (see e.g. Fig.\,1 in Ciardullo 1995). \begin{figure} \psfig{figure=imodf5.ps,height=6.8cm,angle=90} \caption[]{ Final simulation of the LMC PNLF. The same observed LMC PNLF used in Fig.\,4 is now compared with our final simulation, which includes many PNs with small $\mu$ values. Most of the PNs with high values of $\mu$ have low-temperature central stars. We have adopted $\mu_{\rm max}$ = 1 (which allows some PNs on heating tracks and with temperatures above 40\,000 K to be almost completely optically thick), sample size = 1000, and maximum final mass 0.7 solar masses. } \end{figure} The procedure we have implemented is quite simple. We estimate for each mass how much time it takes for the central star to reach the turnover point of its evolutionary track, that is to say, the beginning of the cooling track. For all ages larger than this time, the absorbing factor $\mu$ is set equal to a random number uniformly distributed between 0.1 and 1, and this number is multiplied by a factor $(1 - ({\rm age(years)} / 30\,000))$. In this way we ensure that all absorbing factors tend to 0 as the nebula dissipates. Somewhat surprisingly, this simple procedure works very well. Fig.\,5 compares the \lq\lq observed'' LMC PNLF with our final simulation, where we have set $\mu_{\rm max}$ = 1 for PNs with hot central stars on heating tracks. The sample size is 1000, in agreement with Jacoby's (1980) estimate, and we have selected a maximum final mass equal to 0.7 solar masses (see MKCJ93). This choice of maximum final mass will be explained in Sect.\,8. It is interesting to note that the factor $(1 - ({\rm age} / 30\,000))$ is necessary to achieve a reasonable representation of the PNLF. If we suppress this factor, then our simulations give a result very similar to that shown in Fig.\,4.
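A minimal sketch of this recipe (in Python/numpy; the clipping of $\mu$ at zero for ages beyond 30\,000 years is our reading of the prescription that all absorbing factors tend to 0):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def mu_cooling_track(age_years):
    """Absorbing factor for a central star that has already passed
    the turnover point of its evolutionary track.

    mu is uniform on [0.1, 1], damped by (1 - age/30000) so that all
    absorbing factors tend to 0 as the nebula dissipates; clipping
    at 0 for ages beyond 30000 yr is our assumption.
    """
    mu = rng.uniform(0.1, 1.0) * (1.0 - age_years / 30000.0)
    return max(mu, 0.0)
\end{verbatim}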
The new value of $\mu_{\rm max}$ for the LMC is higher than that determined in MKCJ93 (0.6). There are two reasons for this change: the new distribution of $\lambda$5007 intensities, and the adoption of a higher reddening for the LMC, discussed by Soffner et al. (1996). In the next section we subject our new simulation to a few consistency checks. \section{Consistency checks} In Fig.\,6 we have plotted the H$\beta$ LMC PNLF. In the same way as with the $\lambda$5007 PNLF, the simulation provides a good fit at the bright end, and severe incompleteness begins about one magnitude fainter. This indicates that the simulation is producing suitable values of the $\lambda$5007 / H$\beta$ ratio at the bright end of the PNLF. We can directly check this by extracting from the master simulation (208\,000 objects) a random subsample of 1000 objects. In this subsample the average $I$(5007) for the 15 brightest PNs is 1181, in good agreement with the average intensity of 1213 for the 15 brightest observed PNs in the LMC. It is also interesting to know the values of $\mu$ for the 15 brightest PNs in the simulated subsample: they are between 0.38 and 0.94, with an average of 0.74. Thus we see that, although $\mu_{\rm max}$=1, the bright end of the PNLF is {\it not\/} dominated by PNs with $\mu$ values very close to 1. \begin{figure} \psfig{figure=imodf6.ps,height=6.8cm,angle=90} \caption[]{ The observed H$\beta$ LMC PNLF, compared with a PNLF built from the same simulation used in Fig.\,5: $\mu_{\rm max}$ = 1, sample size = 1000, and maximum final mass 0.7 solar masses. There is a good fit to the bright end of the H$\beta$ PNLF, although here a somewhat smaller sample size (800) would give a better fit. We find severe incompleteness in the observed PNLF for magnitudes fainter than $-1$. Thus the situation is very similar to that of Fig.\,5. } \end{figure} This result leads us to rediscuss the interpretation of the observed presence of low-excitation PNs among those with the brightest values of $M_\beta$ in the LMC (Figure 4a of Dopita et al. 1992). MKCJ93 concluded that this behavior had to be explained by a low value of $\mu_{\rm max}$=0.6. But now we see that this behavior can be produced by chance in cases where $\mu_{\rm max}$ is higher. We have run more than 100 simulations for a sample size of 1000 objects and $\mu_{\rm max}$=1, and we find that in 30\% of these simulations there are 2 PNs with $I(5007) < 500$ among those PNs with the 9 brightest values of $M_\beta$ (these numbers, 2 among 9, are what we find in our LMC database). For comparison, we have also run more than 100 simulations for 1000 objects with {\it all\/} values of $\mu$ = 1, and in this case less than 7\% of all simulations show 2 low-excitation PNs among the 9 brightest in H$\beta$. In this way we see that any argument based on plots of $I(5007)$ as a function of $M_\beta$ makes sense only if applied to a sufficiently large sample of galaxies. For example, if we were to find that of 10 galaxies the {\it majority\/} show low-excitation PNs among the brightest in H$\beta$, then we would be led to believe that something must be inconsistent in our simulations. This is a test that may become possible in the near future. For the moment we consider that all the available information is consistent with the following general rule: many optically thin PNs, but $\mu_{\rm max}$=1, as implemented in our simulations. One could argue that the use of $I(5007)$-$M_\beta$ plots to decide about $\mu_{\rm max}$ would be further compromised by the possible existence of low-excitation PNs with hot central stars, as described in Sect.\,4 above. Although this is true, we should point out that such objects are generally expected to have very low values of $\mu$, and are therefore unlikely to be among the brightest PNs in H$\beta$. For example, in the case of the LMC we have verified that the brightest low-excitation objects (SMP3, SMP5) do not show He\,{\sc ii} $\lambda$4686 nebular emission. This implies that the central star temperatures are indeed low. \begin{figure} \psfig{figure=imodf7.ps,height=6.8cm,angle=90} \caption[]{ The statistically complete $\lambda$5007 PNLF in M 31 (samples A + B of Ciardullo et al. 1989), adopting a distance of 770 kpc, compared with a simulated PNLF with $\mu_{\rm max}$ = 1, sample size = 1000, and maximum final mass 0.63 solar masses. The choice of maximum final mass will be explained in Sect.\,8. The change of slope at absolute $\lambda$5007 magnitude $-2.3$, predicted by our simulations, would seem to be reproduced by the data. } \end{figure} \begin{figure} \psfig{figure=imodf8.ps,height=6.8cm,angle=90} \caption[]{ The $\lambda$5007 PNLF of M 31 (all the 429 PNs measured by Ciardullo et al.
1989), compared with a simulated PNLF with $\mu_{\rm max}$ = 1, sample size = 2100, and maximum final mass 0.63 solar masses. There is significant incompleteness at magnitudes fainter than $-1.5$. Notice again the possible change of slope at $-2.3$. } \end{figure} \section{On the shape of the PNLF} Let us compare our simulated PNLF with the formula used by Ciardullo et al. (1989, see their Eq.\,(2); their formula behaves like the plus signs in our Figs.\,4 and 5). There is an interesting difference: while the formula of Ciardullo et al. gives an ever increasing PNLF towards fainter magnitudes, our simulation shows a roughly constant, or even decreasing, PNLF between $M(5007)$ = $-3.5$ and $-2.3$, and only starts increasing again for magnitudes fainter than $-2.3$. This change of slope can be considered a \lq\lq prediction'' of our simulation procedure. The LMC data are not suitable to test this prediction, because of the severe incompleteness at the relevant values of $M(5007)$. But the M 31 sample remains statistically complete until $M(5007) = -1.5$, and indeed Figs.\,7 and 8 would seem to give a hint of support to our simulation (see also the combined PNLF shown by Jacoby 1997). However, the evidence is insufficient. It would be important to verify if the slope change at $ -2.3$ is present in other galaxies (this verification will become possible with 8-m telescopes) because (1) it would allow us to test how reliable our PNLF generation is, possibly indicating whether further adjustments are needed; (2) it would give more confidence about how to use the shape of the PNLF for distance determinations and for the study of population characteristics (to be discussed in the next section). The shape of the simulated H$\beta$ PNLF would be better suited for shape tests (see Fig.\,6); unfortunately the nebular recombination lines (H$\alpha$ would be the natural choice) are more difficult to measure than $\lambda$5007. \section{On maximum final masses} Fig.\,9 shows our PNLF simulations for different maximum final masses. The reason for the different shapes is easy to understand: the more massive central stars tend to accumulate at the brightest (resp. faintest) magnitudes when they are on heating (resp. cooling) tracks. Therefore, when we eliminate them, the relative percentage of central stars at intermediate magnitudes (from $-4$ to 0) increases. This dependence of the PNLF shape on the maximum final mass is what may allow us to learn about maximum final masses in many different galaxies when suitably equipped 8-m telescopes become available. It will be a slow trial-and-error process, because at the same time we will have to test whether or not the simulated PNLFs produce acceptable and consistent fits. For example, one critical assumption needed to derive maximum final masses is that we can use the same value of $\mu_{\rm max}$ for all galaxies. For the moment, the fact that in the present work we find the same $\mu_{\rm max}$ for the LMC and M 31 encourages us to expect this parameter to be valid everywhere. But we still have to verify if $\mu_{\rm max}$ = 1 is statistically consistent with the morphology of excitation diagrams (plots of $I(5007)$ as a function of $M_\beta$) made from observations in several different galaxies. Notice that the effect of the maximum final mass is quite different from the sample size effect: Fig.\,10 shows, for comparison, simulated PNLFs that correspond to several sample sizes.
Moreover, since a change in distance produces a {\it horizontal\/} displacement of the PNLF, there is no obstacle to a simultaneous determination of distance, sample size and maximum final mass, provided only that the sample size is large enough to produce small statistical fluctuations at the bright end of the PNLF. \begin{figure} \psfig{figure=imodf9.ps,height=6.8cm,angle=90} \caption[]{ Simulated PNLFs for $\mu_{\rm max}$ = 1, sample size = 1000, and three maximum final masses: 1.19 (full line), 0.70 (dotted) and 0.63 solar masses (dashed). In each case the exponential mass distribution has been truncated at the limiting mass. } \end{figure} \begin{figure} \psfig{figure=imodf10.ps,height=6.8cm,angle=90} \caption[]{ Simulated PNLFs for $\mu_{\rm max}$ = 1, maximum final mass = 0.70 solar masses, and three sample sizes: 1000, 3000, 5000. } \end{figure} \begin{figure} \psfig{figure=imodf11.ps,height=6.8cm,angle=90} \caption[]{ The observed PNLF in the LMC compared with simulations for different maximum final masses. We have overplotted on Fig.\,9 the same LMC PNLF used before. The fit is difficult, but it would seem that the best agreement is obtained for a maximum final mass of 0.70 solar masses (dotted). Notice that the curve for 0.63 solar masses (dashed) produces too many PNs at $-3.7$. } \end{figure} \begin{figure} \psfig{figure=imodf12.ps,height=6.8cm,angle=90} \caption[]{ The observed PNLF in M 31 compared with simulations for different maximum final masses. We have overplotted on Fig.\,9 the same M 31 PNLF used in Fig.\,7. The best agreement is obtained for a maximum final mass of 0.63 solar masses (dashed). } \end{figure} Using our improved PNLF simulations, we have tried to estimate the maximum final masses in the LMC and the bulge of M 31. Our preliminary estimates are 0.70 and 0.63 solar masses in the LMC and M 31, respectively; see Figs.\,11 and 12. This would be consistent with the existence of more recent star formation in the LMC. However, this result is only tentative, due to the small number of available bright PNs, and a confirmation would be desirable. An improvement of the statistics in the case of the LMC is rather unlikely, because we cannot expect to find many more of the bright PNs; we think it is more promising to attempt the collection of PN samples in more luminous galaxies with recent star formation, and to combine those samples in order to further increase the sample size. Although there is quite a lot of information published or in press about PNLFs in galaxies with recent star formation (see e.g. Jacoby 1997, Feldmeier et al. 1997), it is not suitable for the kind of test we would like to make. Four conditions have to be fulfilled: (1) there must be abundant evidence of recent star formation. This does not necessarily imply a restriction to spiral and irregular galaxies: consider e.g. the blue bulge of the lenticular galaxy NGC 5102 (McMillan et al. 1994); (2) the PN searches must have been made without avoiding the regions of recent star formation (for example, we cannot use PNs found in the halos of edge-on spiral galaxies like NGC 891 (Ciardullo et al. 1991) and NGC 4565 (Jacoby et al. 1996), or in the bulges of M 31 and M 81); (3) a statistically complete sample must have been established, extending at least 1.5 mag fainter than the bright end of the PNLF, in order to reach the section of the PNLF where the shape effects are predicted to be most easily detectable; (4) to use PNLF distances for this kind of study would be equivalent to running in circles.
A Cepheid distance (or any other universally accepted method of distance determination) must be available, in order to eliminate any effects derived from the assumption that the PNLF is universal, which is used in the application of the maximum likelihood method for PNLF distance determinations. It turns out that, of the more than 30 galaxies where many PNs have been found (Jacoby 1997 gives the most up-to-date summary), none simultaneously satisfies the 4 conditions. Of course this does not affect in any significant way the conclusion that Cepheid distances and PNLF distances are in excellent agreement; we simply remark that the available samples do not allow us to properly study population effects. So far, the galaxy that comes closest to fulfilling the 4 conditions is NGC 300 (Soffner et al. 1996), but the sample size must be substantially increased before NGC 300 can provide a convincing test. \section{Conclusion} One positive aspect of these PNLF simulations is that they make some testable predictions, like the possible change of slope at $M(5007)=-2.3$, or the relation between the $\lambda$5007 and H$\beta$ (or H$\alpha$) PNLFs, or the morphology of the plots of $I(5007)$ as a function of $M_\beta$ or $M_\alpha$. Several 8-m-class telescopes will soon become available, probably producing explosive progress in the field of extragalactic PNs. Since there will be inevitable efforts to detect many of these objects and obtain their spectra, given their usefulness for distance, kinematic and abundance studies, we can be assured that the predictions will be testable in the near future, leading perhaps to further improvements of the simulated PNLFs. Given deeper PN searches and larger sample sizes, we will then have better chances to decide if there are indeed detectable population effects in the shape of the PNLF, or, if there are not, to understand the astrophysical reasons for their absence. In any case, through such work the PNLF may become a more reliable tool for even more accurate extragalactic distance determinations. \section{Acknowledgements} This work has been supported by the Deutsche Forschungsgemeinschaft through Grant SFB (Sonderforschungsbereich) 375. We are grateful to J.J. Feldmeier, R. Ciardullo and G.H. Jacoby for showing us data before publication.
\section{Acknowledgments} This work is supported by the U.S. NSF grants DMR-0605696 and DMR-0611562, the DOE grant DE-FG02-06ER46305 (HW, DNS), and the NSF MRSEC program, Grant No. DMR-0819860 (FDMH). We also thank the KITP for support through the NSF grant PHY05-51164.
\section{Introduction} \label{sec:intro} Commitment schemes are powerful cryptographic primitives. In a bit commitment scheme, $\mathsf {Alice}$, the committer, is supposed to commit a bit $b \in \{0,1\}$ to $\mathsf {Bob}$ in such a way that after the {\em commit phase} she cannot change her choice of the committed bit. This is referred to as the binding property. Also, at this stage $\mathsf {Bob}$ should not be able to figure out what the committed bit is. This is referred to as the concealing property. Later, in the {\em reveal phase}, $\mathsf {Alice}$ is supposed to reveal the bit $b$ and convince $\mathsf {Bob}$ that this was indeed the bit which she committed earlier. Bit commitment schemes have been very well studied in both the classical and quantum models, since the existence of such schemes implies several interesting results in cryptography. It has been shown that bit commitment schemes imply the existence of {\em quantum oblivious transfer}~\cite{yao:oblivious}, which in turn provides a way to do any two-party secure computation~\cite{killian:oblivious}. They are also useful in constructing {\em zero knowledge proofs}~\cite{goldreich:crypto} and imply another very useful cryptographic primitive called secure {\em coin tossing}~\cite{blum:coin}. Unfortunately, strong negative results are known about them in case $\mathsf {Alice}$ and $\mathsf {Bob}$ are assumed to possess arbitrary computational power and information-theoretic security is required. In this paper we are concerned with this setting of information-theoretic security, with unbounded computational resources available to cheating parties. Classically, bit commitment schemes are known to be impossible. In the quantum setting several schemes were proposed, but later several impossibility results were shown~\cite{mayer:imp,hklo:bitcomm1,hklo:bitcomm2,Dariano07}. Negative results were also shown for approximate implementations of bit commitment schemes~\cite{terry:bitcomm,Dariano07}, in which trade-offs between the cheating probabilities of $\mathsf {Alice}$ and $\mathsf {Bob}$, referred to as binding-concealing trade-offs, were established. Interestingly, however, Kent~\cite{kent:relative} has shown that bit commitment can be achieved using relativistic constraints. We point out, though, that in this work we do not take relativistic considerations into account; our setting is non-relativistic. Now suppose that, instead of wanting to commit a bit $b \in \{0,1\}$, $\mathsf {Alice}$ wants to commit an entire string $x \in \{0,1\}^n$. One way to do this might be to commit all the bits of $x$ separately. Binding-concealing trade-offs of such schemes will be limited by the binding-concealing trade-offs allowable for bit commitment schemes. But might there exist cleverer schemes which allow for better binding and concealing properties? This question was originally raised by Kent~\cite{kent:bitcomm}. Let us begin by formally defining a quantum string commitment protocol. Our definition is similar to the one considered by Buhrman et al.~\cite{harry:QSC} \begin{definition}[Quantum string commitment] \label{def:QSC} Let $P = \{p_x: x \in \{0,1\}^n \}$ be a probability distribution and let $B$ be a {\em measure of information} (we define several measures of information later). An $(n,a,b)-B-\mathsf{ QSC}$ protocol for $P$ is a {\em quantum communication protocol~\cite{yao:oblivious,hklo:bitcomm2}} between $\mathsf {Alice}$ and $\mathsf {Bob}$.
$\mathsf {Alice}$ gets an input $x \in \{0,1\}^n$ (chosen according to the distribution $P$), which is supposed to be the string to be committed. The starting joint state of the qubits of $\mathsf {Alice}$ and $\mathsf {Bob}$ is some pure state. There are no intermediate measurements during the protocol and $\mathsf {Bob}$ has a final checking $\mathsf{POVM}$ measurement $\{M_y | y \in \{0,1\}^n\} \cup \{I - \sum_y M_y\}$ (see Sec.~\ref{sec:prelim} for the definition of a $\mathsf{POVM}$) to determine the value of the string committed by $\mathsf {Alice}$ or to detect her cheating. The protocol runs in two phases, the commit phase followed by the reveal phase. The following properties need to be satisfied. \begin{enumerate} \item {\bf (Correctness)} Let $\mathsf {Alice}$ and $\mathsf {Bob}$ act honestly. Let $\rho_x$ be the state of $\mathsf {Bob}$'s qubits at the end of the reveal phase of the protocol when $\mathsf {Alice}$ gets input $x$. Then $\forall x,y$, ${\mathsf{Tr}} M_y \rho_x = 1$ if $x=y$ and $0$ otherwise. \item {\bf (Concealing)} Let $\mathsf {Alice}$ act honestly and $\mathsf {Bob}$ be possibly cheating. Let $\sigma_x$ be the state of $\mathsf {Bob}$'s qubits after the commit phase when $\mathsf {Alice}$ gets input $x$. Then the $B$ information of the ensemble ${\cal E} = \{p_x, \sigma_x \}$ is at most $b$. In particular, this is also true when both $\mathsf {Alice}$ and $\mathsf {Bob}$ act honestly. \item {\bf (Binding)} Let $\mathsf {Bob}$ act honestly and $\mathsf {Alice}$ be possibly cheating. Let $c \in \{0,1\}^n$ be a string in a special cheating register $C$ with $\mathsf {Alice}$ that she keeps independent of the rest of the registers till the end of the commit phase. Let $\rho_c'$ be the state of $\mathsf {Bob}$'s qubits at the end of the reveal phase when $\mathsf {Alice}$ has $c$ in the cheating register. Let $\tilde{p}_c \stackrel{\mathsf{def}}{=} {\mathsf{Tr}} M_c \rho_c'$. Then, $$ \sum_{c \in \{0,1\}^n } p_c \tilde{p}_c \quad \leq \quad 2^{a-n}.$$ \end{enumerate} \end{definition} The idea behind the above definition is as follows. At the end of the reveal phase of an honest run of the protocol, $\mathsf {Bob}$ figures out $x$ from $\rho_x$ by performing the $\mathsf{POVM}$ measurement $\{M_x\} \cup \{ I - \sum_x M_x \}$. He accepts the committed string to be $x$ iff $M_x$ succeeds, and this happens with probability ${\mathsf{Tr}} M_x \rho_x$. He declares $\mathsf {Alice}$ cheating if $I - \sum_x M_x$ succeeds. Thus, due to the first condition, at the end of an honest run of the protocol $\mathsf {Bob}$ accepts the committed string to be exactly the input string of $\mathsf {Alice}$ with probability 1. The second condition above takes care of the concealing property, stating that the amount of $B$ information about $x$ that a possibly cheating $\mathsf {Bob}$ gets is bounded by $b$. In bit-commitment protocols, the concealing property was quantified in terms of the probability with which $\mathsf {Bob}$ can guess $\mathsf {Alice}$'s bit. Buhrman et al.~\cite{harry:QSC} in fact do consider $\mathsf {Bob}$'s probability of guessing $\mathsf {Alice}$'s input string as quantifying the concealing property. However, in the proof of their trade-off result, they consider a related notion of information as a quantification of the concealing property. In this paper, we use various notions of information to quantify the concealing property of the protocol. The third condition guarantees the binding property.
It ensures that if a cheating $\mathsf {Alice}$ wants to postpone committing or wants to change her choice at the end of the commit phase, then she cannot, with good probability, make an honest $\mathsf {Bob}$ accept many different strings of her choice. A few points regarding the above definition are important to note. We assume that the combined state of $\mathsf {Alice}$ and $\mathsf {Bob}$ at the beginning of the protocol is a pure state. Given this assumption, it can be assumed without loss of generality (due to the arguments of~\cite{yao:oblivious,hklo:bitcomm2}) that it remains a pure state till the end of the protocol (in an honest run). This is because $\mathsf {Alice}$ and $\mathsf {Bob}$ need not apply any intermediate measurements before $\mathsf {Bob}$ applies the final checking $\mathsf{POVM}$ at the end of the protocol. Our impossibility result makes critical use of this fact and fails to hold if the starting combined state is not a pure state. However, there are no restrictions on the starting pure state shared between $\mathsf {Alice}$ and $\mathsf {Bob}$; it could even be an entangled state between them. The impossibility result in~\cite{harry:QSC} has also been shown under this assumption. This assumption has also been made in showing impossibility results for bit-commitment schemes~\cite{mayer:imp,hklo:bitcomm1,hklo:bitcomm2}. The main reason why these arguments do not work, both for bit commitment and string commitment schemes, if the combined state is not a pure state is that the {\em Local Transition Theorem} (Thm.~\ref{thm:loctrans} mentioned later) fails to hold for mixed states. It is conceivable that better $\mathsf{ QSC}$ schemes exist when $\mathsf {Alice}$ and $\mathsf {Bob}$ are forced (say, by some third party) to start in some mixed state, and it will be interesting to see whether this is the case. See~\cite{Dariano07} for an extension of impossibility results for bit commitment to a very large class of protocols. \subsection{Measures of information} As we will see later, the notion of information used in the above definition is very important; let us therefore briefly define the various notions of information that we will be concerned with in this paper. The following notion of information, referred to as the quantum mutual information or the Holevo-$\chi$ information, is one of the most commonly used. \begin{definition}[Holevo-$\chi$ information] Given a quantum state $\rho$, the {\em von Neumann} entropy of $\rho$ is defined as $\mathsf{S}(\rho) \stackrel{\mathsf{def}}{=} - {\mathsf{Tr}} \rho \log_2 \rho$. Given quantum states $\rho, \sigma$, the {\em Kullback-Leibler divergence} or {\em relative entropy} between them is defined as $\mathsf{S}(\rho \| \sigma) \stackrel{\mathsf{def}}{=} {\mathsf{Tr}} \rho (\log_2 \rho - \log_2 \sigma)$. Given an ensemble ${\cal E} = \{p_x, \rho_x \}$, let $\rho \stackrel{\mathsf{def}}{=} \sum_x p_x \rho_x$; then its Holevo-$\chi$ information is defined as $$\chi({\cal E}) \quad \stackrel{\mathsf{def}}{=} \quad \sum_x p_x (\mathsf{S}(\rho) - \mathsf{S}(\rho_x)) \quad = \quad \sum_x p_x \mathsf{S}(\rho_x \| \rho). $$ \label{def:holevo} \end{definition}
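As an illustration (not part of any protocol above), the Holevo-$\chi$ information of a small ensemble can be computed directly from eigenvalues; a minimal Python/numpy sketch:

\begin{verbatim}
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]        # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def holevo_chi(probs, states):
    """chi(E) = S(sum_x p_x rho_x) - sum_x p_x S(rho_x)."""
    rho = sum(p * r for p, r in zip(probs, states))
    return von_neumann_entropy(rho) - sum(
        p * von_neumann_entropy(r) for p, r in zip(probs, states))

# Example: the ensemble {(1/2, |0><0|), (1/2, |+><+|)} has chi ~ 0.6009.
ket0 = np.array([[1.0], [0.0]])
ketp = np.array([[1.0], [1.0]]) / np.sqrt(2.0)
chi = holevo_chi([0.5, 0.5], [ket0 @ ket0.T, ketp @ ketp.T])
\end{verbatim}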
The following notion captures the amount of information that can be made available to the real world through measurements on the quantum encoding of a classical random variable. \begin{definition}[Accessible information] Let ${\cal E} = \{p_x,\rho_x\}$ be an ensemble and let $X$ be a classical random variable such that $\Pr(X=x) \stackrel{\mathsf{def}}{=} p_x$. Let $Y^{\mathcal{M}}$, correlated with $X$, be the classical random variable that represents the result of a $\mathsf{POVM}$ measurement ${\mathcal{M}}$ performed on ${\cal E}$. The {\em accessible information\/}~$I_{{\mathrm{acc}}}({\cal E})$ of the ensemble~${\cal E}$ is then defined to be \begin{equation} \label{eqn-acc} I_{{\mathrm{acc}}}({\cal E}) \quad \stackrel{\mathsf{def}}{=} \quad \max_{{\mathcal{M}}} I(X:Y^{\mathcal{M}}). \end{equation} \end{definition} As mentioned before, Buhrman et al. used $\mathsf {Bob}$'s probability of guessing $\mathsf {Alice}$'s input string as the measure of concealment of the protocol. However, in the proofs of their impossibility result, they used the following notion of information. \begin{definition}[$\xi$ information~\cite{harry:QSC}] The $\xi$ information of an ensemble ${\cal E} = \{p_x, \rho_x \}$ is defined as $$\xi({\cal E}) \quad \stackrel{\mathsf{def}}{=} \quad n + \log_2 \sum_x {\mathsf{Tr}}(p_x\rho^{-1/2} \rho_x)^2 $$ where $\rho = \sum_x p_x \rho_x$. \end{definition} Let $q_x$ be the probability that $\mathsf {Bob}$ correctly guesses $\mathsf {Alice}$'s input string $x$ (with $\mathsf {Alice}$ honest) before the start of the reveal phase. Buhrman et al.~\cite{harry:QSC} showed that any $(n,a,b)-\mathsf{ QSC}$ protocol with $\sum_{x \in \{0,1\}^n} q_x \leq 2^b$ is also an $(n,a,b)-\xi-\mathsf{ QSC}$ protocol. Hence their impossibility results for $(n,a,b)-\xi-\mathsf{ QSC}$ protocols implied the same impossibility results for $(n,a,b)-\mathsf{ QSC}$ protocols with $\sum_{x \in \{0,1\}^n} q_x \leq 2^b$. In this paper we also consider a notion of {\em divergence information}. It is based on the following notion of distance between two quantum states, considered by Jain, Radhakrishnan and Sen~\cite{jain:substate}. \begin{definition}[Observational divergence~\cite{jain:substate}] Let $\rho, \sigma$ be two quantum states. The observational divergence between them, denoted $\mathsf{D}(\rho \| \sigma)$, is defined as $$\mathsf{D}(\rho \| \sigma) \quad \stackrel{\mathsf{def}}{=} \quad \max_{\mathsf{M:POVM~element}} {\mathsf{Tr}} M \rho \log_2 \frac{{\mathsf{Tr}} M \rho} {{\mathsf{Tr}} M \sigma}.$$ \end{definition} The definition of the divergence information of an ensemble is similar to that of the Holevo-$\chi$ information, except that the notion of distance between quantum states used is now the observational divergence instead of the relative entropy. \begin{definition}[Divergence information] \label{def:divinf} Let ${\cal E} = \{p_x,\rho_x\}$ be an ensemble and let $\rho \stackrel{\mathsf{def}}{=} \sum_x p_x \rho_x$. Its divergence information is defined as $${\cal D}({\cal E}) \quad \stackrel{\mathsf{def}}{=} \quad \sum_x p_x \mathsf{D}(\rho_x \| \rho).$$ \end{definition}
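When all the states involved commute (equivalently, for classical distributions), the maximization over $\mathsf{POVM}$ elements can be carried out exactly: ${\mathsf{Tr}} M \rho \log_2 ({\mathsf{Tr}} M \rho / {\mathsf{Tr}} M \sigma)$ is jointly convex in $({\mathsf{Tr}} M \rho, {\mathsf{Tr}} M \sigma)$ and hence convex in $M$, so the maximum over $0 \leq M \leq I$ is attained at a vertex, i.e. at the indicator of a subset of outcomes. A brute-force Python sketch for this special (classical) case only, exponential in the support size and intended purely as an illustration:

\begin{verbatim}
import numpy as np
from itertools import combinations

def observational_divergence_classical(p, q):
    """D(p||q) = max over subsets S of p(S) log2(p(S)/q(S)),
    valid when the two states commute (classical distributions)."""
    best = 0.0
    for r in range(1, len(p) + 1):
        for s in combinations(range(len(p)), r):
            ps = sum(p[i] for i in s)
            qs = sum(q[i] for i in s)
            if ps > 0.0 and qs == 0.0:
                return float("inf")    # divergence is unbounded
            if ps > 0.0 and qs > 0.0:
                best = max(best, ps * np.log2(ps / qs))
    return best

def divergence_information_classical(probs, dists):
    """Divergence information sum_x p_x D(q_x || q_avg) of a
    classical ensemble, with q_avg the average distribution."""
    avg = np.average(dists, axis=0, weights=probs)
    return sum(p * observational_divergence_classical(d, avg)
               for p, d in zip(probs, dists))
\end{verbatim}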
\subsection{Previous results} The impossibility of a strong string commitment protocol, in which both $a,b$ are required to be 0, is immediately implied by the impossibility of strong bit-commitment protocols. The question of a trade-off between $a$ and $b$ was studied by Buhrman et al. They studied this trade-off both in the scenario of a single execution of the protocol and in the asymptotic regime with several parallel executions of the protocol. In the scenario of a single execution of the protocol they showed the following result. \begin{theorem}[\cite{harry:QSC}] \label{thm:harrysingle} For a single execution of the protocol of an $(n,a,b)$-${\xi}$-$\mathsf{ QSC}$, $a + b + 5 \log_2 5 - 4 \geq n$. \end{theorem} This then (as argued before) implied a similar trade-off for an $(n,a,b)$-$\mathsf{ QSC}$ with $\sum_{x \in \{0,1\}^n} q_x \leq 2^b$ (where $q_x$ is the probability that $\mathsf {Bob}$ correctly guesses $\mathsf {Alice}$'s input string $x$, with $\mathsf {Alice}$ honest, before the start of the reveal phase). In the asymptotic regime they showed the following result in terms of the Holevo-$\chi$ information. \begin{theorem}[\cite{harry:QSC}] \label{thm:avgasym} Let $\Pi$ be an $(n, *, b)-\chi-\mathsf{ QSC}$ scheme. Let $\Pi_m$ represent $m$ parallel executions of $\Pi$. Let $a_m$ represent the binding parameter of $\Pi_m$ and let $a \stackrel{\mathsf{def}}{=} \lim_{m \rightarrow \infty} \frac{a_m}{m}$. Then, $ a + b \geq n $. \end{theorem} There are two reasons why Thm.~\ref{thm:avgasym} may appear stronger than Thm.~\ref{thm:harrysingle}. First, there is no additive constant; second, for many ensembles ${\cal E}$, $\chi({\cal E}) \leq \xi({\cal E})$, as we show in Sec.~\ref{sec:separation}. In fact, as we also show in Sec.~\ref{sec:separation}, there exist ensembles ${\cal E}$ for which $\xi({\cal E})$ is exponentially (in $n$) larger than $\chi({\cal E})$. Along with these impossibility results, Buhrman et al. interestingly also showed that if the measure of information considered is the accessible information, the above trade-offs no longer hold. For example, there exists a $\mathsf{ QSC}$ scheme where $a = 4 \log_2 n + O(1)$ and $b = 4$ when the measure of information is the accessible information. This therefore asserts that the choice of the measure of information is crucial to (im)possibility. Previously, Kent~\cite{kent:bitcomm} also exhibited trade-offs for some schemes between $\mathsf {Alice}$'s probability of cheating and the amount of accessible information that $\mathsf {Bob}$ gets about the committed string. However, he did not allow $\mathsf {Alice}$ to cheat arbitrarily; in particular, $\mathsf {Alice}$ could not have started with a superposition of strings in the input register. Therefore the schemes that he considered were not truly $\mathsf{ QSC}$s as we have defined them. \subsection{Our results} We show the following binding-concealing trade-off for $\mathsf{ QSC}$s. \begin{theorem} \label{thm:avg} For a single execution of the protocol of an $(n,a,b)-{\cal D}-\mathsf{ QSC}$ scheme, $$ a + b + 8 \sqrt{b + 1} + 16 \quad \geq \quad n. $$ \end{theorem} It was shown by Jain, Radhakrishnan and Sen~\cite{jain:substate} that for any two states $\rho, \sigma$, $\mathsf{D}(\rho \| \sigma) \leq \mathsf{S}(\rho \| \sigma) + 1$, which implies from Defn.~\ref{def:holevo} and~\ref{def:divinf} that for any ensemble ${\cal E}, {\cal D}({\cal E}) \leq \chi({\cal E}) + 1$. This immediately gives us the following impossibility result in terms of the Holevo-$\chi$ information. \begin{theorem} \label{thm:avgchi} For a single execution of the protocol of an $(n,a,b)-\chi-\mathsf{ QSC}$ scheme, $$ a + b + 8 \sqrt{b + 2} +17 \quad \geq \quad n. $$ \end{theorem} We also consider the notion of {\em maximum possible divergence information} (similar to the notion of maximum possible Holevo-$\chi$ information considered by Jain~\cite{jain:remote}) of an {\em encoding} $E: x \mapsto \rho_x$.
For a probability distribution $\mu \stackrel{\mathsf{def}}{=} \{p_x\}$ over $\{0,1\}^n$, let the ensemble ${\cal E}_{\mu}(E) \stackrel{\mathsf{def}}{=} \{p_x, \rho_x \}$. Let $\rho_{\mu} \stackrel{\mathsf{def}}{=} \sum_x p_x \rho_x$. \begin{definition}[Maximum possible divergence information] The {\em maximum possible divergence information} of an encoding $E: x \mapsto \rho_x$ is defined as $\tilde{{\cal D}}(E) \stackrel{\mathsf{def}}{=} \max_{\mu} {\cal D}({\cal E}_{\mu}(E))$. \end{definition} We show the following theorem, which states that if the maximum possible divergence information in the qubits of $\mathsf {Bob}$ at the end of the commit phase is small, then $\mathsf {Alice}$ can in fact cheat with good probability for any string $x \in \{0,1\}^n$, and not just on average. \begin{theorem} \label{thm:worst} For a $\mathsf{ QSC}$ scheme let $\sigma_x$ be as in Defn.~\ref{def:QSC} when $\mathsf {Alice}$ and $\mathsf {Bob}$ act honestly in the commit phase. If, for the encoding $E:x \mapsto \sigma_x$, $\tilde{{\cal D}}(E) \leq b$, then for all strings $c \in \{0,1\}^n$, $$ \tilde{p}_c \quad \geq \quad 2^{-(b + 8 \sqrt{b + 1} + 16)}, $$ where $\tilde{p}_c$ (as in Defn.~\ref{def:QSC}) represents the probability that a cheating $\mathsf {Alice}$ successfully reveals the string $c$ (held in her cheating register). \end{theorem} Again using the fact that for any two states $\mathsf{D}(\rho \| \sigma) \leq \mathsf{S}(\rho \| \sigma) + 1$, we immediately get the following theorem in terms of the maximum possible Holevo-$\chi$ information $\tilde{\chi}(E)$ (which is defined analogously to the maximum possible divergence information, with the divergence replaced by the relative entropy). \begin{theorem} \label{thm:worstchi} For a $\mathsf{ QSC}$ scheme let $\sigma_x$ be as in Defn.~\ref{def:QSC} when $\mathsf {Alice}$ and $\mathsf {Bob}$ act honestly in the commit phase. If, for the encoding $E:x \mapsto \sigma_x$, $\tilde{\chi}(E) \leq b$, then for all strings $c \in \{0,1\}^n$, $$ \tilde{p}_c \quad \geq \quad 2^{-(b + 8 \sqrt{b + 2} + 17)}, $$ where $\tilde{p}_c$ (as in Defn.~\ref{def:QSC}) represents the probability that a cheating $\mathsf {Alice}$ successfully reveals the string $c$ (held in her cheating register). \end{theorem} Let us now discuss some aspects of our results. \begin{enumerate} \item In Thm.~\ref{thm:avgchi} the trade-off between $a$ and $b$ is similar (up to lower order terms of $b$) to the one shown by Buhrman et al.~\cite{harry:QSC} as in Thm.~\ref{thm:harrysingle}. However, the fact that $b$ in Thm.~\ref{thm:avgchi} represents the Holevo-$\chi$ information instead of the $\xi$-information (as in Thm.~\ref{thm:harrysingle}) makes it significantly stronger in certain cases, as follows. We show in Sec.~\ref{sec:separation} that for any ensemble ${\cal E} \stackrel{\mathsf{def}}{=} \{2^{-n}, \rho_x \}$, where for all $x$, $\rho_x$ commutes with $\rho \stackrel{\mathsf{def}}{=} \sum_x 2^{-n}\rho_x$, we have $\xi({\cal E}) \geq \chi({\cal E})$. In fact, as we also show in Sec.~\ref{sec:separation}, there exist ensembles ${\cal E}$ for which $\xi({\cal E})$ is exponentially (in $n$) larger than $\chi({\cal E})$. Thm.~\ref{thm:avgchi} therefore becomes much stronger than Thm.~\ref{thm:harrysingle} for ensembles where $\xi({\cal E}) \gg \chi({\cal E})$. \item As mentioned before, Jain, Radhakrishnan and Sen~\cite{jain:substate} have shown that for any ensemble ${\cal E}, {\cal D}({\cal E}) \leq \chi({\cal E}) + 1$.
Recently, however, Jain, Nayak and Su~\cite{JainNS08} have shown that there exist ensembles ${\cal E}$ such that $\chi({\cal E}) \gg {\cal D}({\cal E})$ ($\chi({\cal E}) = \Omega(\log_2 n \cdot {\cal D}({\cal E}))$ for some ensembles ${\cal E}$ supported on $\{0,1\}^n$). For ensembles where this holds, Thm.~\ref{thm:avg} becomes much stronger than Thm.~\ref{thm:avgchi}. \item As we show in Sec.~\ref{sec:proof}, our one shot result Thm.~\ref{thm:avgchi} immediately implies the asymptotic result Thm.~\ref{thm:avgasym} of Buhrman et al. \item No counterparts of Thm.~\ref{thm:worst} and Thm.~\ref{thm:worstchi} were shown by Buhrman et al., and they are therefore completely new. \item If $b$ is large then the cheating attack (that we present) of $\mathsf {Alice}$ would succeed with low probability (like $2^{-b}$). However, as we show in a remark in Sec.~\ref{sec:proof}, in case $\mathsf {Alice}$'s cheating attack succeeds with low probability, she would still be able to \lq\lq reverse'' her cheating operations and reveal, with a high probability, at least some $x' \in \{0,1\}^n$ to $\mathsf {Bob}$. That is, with a high probability, $\mathsf {Alice}$ will be able to prevent herself from being detected cheating by $\mathsf {Bob}$. \item It is easily seen that, up to lower order terms in $b$, the above trade-offs are achieved by trivial protocols. For Thm.~\ref{thm:avg} above consider the following protocol. In the commit phase, $\mathsf {Alice}$ sends the first $b$ bits of the $n$-bit string $x$. In this case $\mathsf {Bob}$ gets to know $b$ bits of divergence information about $x$. In the reveal phase a cheating $\mathsf {Alice}$ can now reveal any of the $2^{n-b}$ strings $x$ (consistent with the first $b$ bits being the ones sent) with probability 1. Hence $a = \log_2 2^{n-b} = n-b$. For Thm.~\ref{thm:worst} above, let $\mathsf {Alice}$ send one of the $2^b$ strings $s \in \{0,1\}^b$ uniformly to $\mathsf {Bob}$, representing the first $b$ bits of $x$. The condition of Thm.~\ref{thm:worst} is satisfied. Now if in the reveal phase she wants to commit any $x$, she can do so with probability $2^{-b}$ (in the event that the sent $s$ is consistent with $x$). \end{enumerate} In the next section we state some quantum information theoretic facts that will be useful in the proofs of the impossibility results that we present in Sec.~\ref{sec:proof}. \section{Preliminaries} \label{sec:prelim} All logarithms in this paper are taken with base 2 unless otherwise specified. Let ${\cal H}, {\cal K}$ be finite dimensional Hilbert spaces. For a linear operator $A$, let $|A| = \sqrt{A^{\dagger}A}$ and let ${\mathsf{Tr}} A$ denote the trace of $A$. Given a state $\rho \in {\cal H}$ and a pure state $\ket{\phi} \in {\cal H} \otimes {\cal K}$, we call $\ket{\phi}$ a {\em purification} of $\rho$ iff ${\mathsf{Tr}}_{{\cal K}} \ketbra{\phi} = \rho $. A {\em positive operator-valued measurement $(\mathsf{POVM})$ element} $M$ is a positive semi-definite operator such that $I - M$ is also positive semi-definite, where $I$ is the identity operator. A $\mathsf{POVM}$ is defined as follows. \begin{definition}[$\mathsf{POVM}$] An $m$ valued $\mathsf{POVM}$ measurement ${\mathcal{M}}$ on a Hilbert space ${\cal H}$ is a set of operators $\{M_i, i \in [m]\}$ on ${\cal H}$ such that $\forall i$, $M_i$ is positive semi-definite and $\sum_{i \in [m]} M_i = I$, where $I$ is the identity operator on ${\cal H}$.
A classical random variable $Y^{{\mathcal{M}}}$ representing the result of the measurement ${\mathcal{M}}$ on a state $\rho$ is an $m$ valued random variable such that $\forall i \in [m], \Pr[Y^{{\mathcal{M}}} = i ] \stackrel{\mathsf{def}}{=} {\mathsf{Tr}} M_i\rho$. \end{definition} The following fact follows easily from the definition of the von Neumann entropy. \begin{lemma} \label{lem:sadd} Let $\rho_1, \rho_2$ be quantum states. Then $\mathsf{S}(\rho_1 \otimes \rho_2) = \mathsf{S}(\rho_1) + \mathsf{S}(\rho_2)$. \end{lemma} We make central use of the following information-theoretic result, called the substate theorem, due to Jain, Radhakrishnan, and Sen~\cite{jain:substate}. \begin{theorem}[Substate theorem, \cite{jain:substate}] \label{thm:substate} Let ${\cal H}, {\cal K}$ be two finite dimensional Hilbert spaces and $\dim({\cal K}) \geq \dim({\cal H})$. Let ${\mathbb C}^2$ denote the two dimensional complex Hilbert space. Let $\sigma, \tau$ be density matrices in ${\cal H}$ such that $\mathsf{D}(\sigma \| \tau) < \infty$. Let $\ket{\overline{\sigma}}$ be a purification of $\sigma$ in ${\cal H} \otimes {\cal K}$. Then, for $r > 1$, there exist pure states $\ket{\phi}, \ket{\theta} \in {\cal H} \otimes {\cal K}$ and $\ket{\overline{\tau}} \in {\cal H} \otimes {\cal K} \otimes {\mathbb C}^2$, depending on $r$, such that $\ket{\overline{\tau}}$ is a purification of $\tau$ and ${\mathsf{Tr}} |\ketbra{\overline{\sigma}} - \ketbra{\phi}| \leq \frac{2}{\sqrt{r}}$, where \begin{displaymath} \ket{\overline{\tau}} \stackrel{\mathsf{def}}{=} \sqrt{\frac{r-1}{r 2^{r k}}} \, \ket{\phi}\ket{1} + \sqrt{1 - \frac{r-1}{r 2^{r k}}} \, \ket{\theta}\ket{0} \end{displaymath} and $k \stackrel{\mathsf{def}}{=} \mathsf{D}(\sigma \| \tau) + 6 \sqrt{\mathsf{D}(\sigma \| \tau) + 1} + 4$. \end{theorem} \paragraph{Remarks:} \begin{enumerate} \item In the above theorem, if the last qubit in $\ket{\overline{\tau}}$ is measured in the computational basis, then the probability of obtaining 1 is $(1 - 1/r) 2^{-r k}$. \item In a proof below we will let $\sigma \stackrel{\mathsf{def}}{=} \rho_c$, $\tau \stackrel{\mathsf{def}}{=} \rho_B$ and $\ket{\overline{\sigma}} \stackrel{\mathsf{def}}{=} \ket{\phi_c}$; these states will be defined there. \end{enumerate} The following theorem is implicit in~\cite{jozsa:loctrans,mayer:imp,hklo:bitcomm1,hklo:bitcomm2}, although not explicitly referred to by this name. \begin{theorem}[Local transition theorem] \label{thm:loctrans} Let $\rho$ be a quantum state in ${\cal K}$. Let $\ket{\phi_1}$ and $\ket{\phi_2}$ be two purifications of $\rho$ in ${\cal H} \otimes {\cal K}$. Then there is a local unitary transformation $U$ acting on ${\cal H}$ such that $(U \otimes I) \ket{\phi_1} = \ket{\phi_2}$. \end{theorem} We also need the following theorem, which follows from arguments similar to those used by Jain~\cite{jain:remote} for an analogous theorem about the relative entropy. \begin{theorem} \label{thm:remote} Let $X$ be a finite set. Let $E: x \mapsto \rho_x$ be an encoding. If $\tilde{{\cal D}}(E) \leq b$, then there exists a distribution $\mu \stackrel{\mathsf{def}}{=} \{q_x \}$ on $X$ such that $$\forall x \in X, \quad \mathsf{D}(\rho_x \| \rho) \quad \leq \quad b,$$ where $\rho \stackrel{\mathsf{def}}{=} \sum_x q_x \rho_x$. \end{theorem} The following theorem was shown by Helstrom~\cite{Helstrom67}.
\begin{theorem} \label{thm:dist} Given two quantum states $\rho$ and $\sigma$, the probability of identifying the correct state is at most $\frac{1}{2} + \frac{{\mathsf{Tr}} |\rho - \sigma |}{4}$; in other words, the probability of distinguishing them is at most $\frac{{\mathsf{Tr}} |\rho - \sigma |}{2}$. \end{theorem} \section{Proofs of impossibility} \label{sec:proof} \noindent \begin{proofof}{Thm.~\ref{thm:avg}} Let us consider a $\mathsf{ QSC}$ scheme and let $\mathsf {Alice}$ get input $x$. After an honest run of the commit phase, let $\ket{\phi_x}$ be the combined state of $\mathsf {Alice}$ and $\mathsf {Bob}$ and $\rho_x$ be the state of $\mathsf {Bob}$'s qubits. Let ${\cal E} = \{p_x, \rho_x\}$. From the concealing property of the $\mathsf{ QSC}$ it follows that ${\cal D}({\cal E}) \leq b$. Let $c$ be the string in the cheating register $C$ of $\mathsf {Alice}$. Consider a cheating run of the protocol by $\mathsf {Alice}$ in which she starts with the superposition $\sum_x \sqrt{p_x}\ket{x}$ in the input register and proceeds with the rest of the commit phase as in the honest protocol. Let $\mathsf {Bob}$ be honest throughout our arguments. Since the input is classical and $\mathsf {Alice}$ can keep a copy of it, we can assume without loss of generality that the operations of $\mathsf {Alice}$ in the honest run do not disturb the input register. Let $\ket{\psi}$ be the combined state of $\mathsf {Alice}$ and $\mathsf {Bob}$ in this cheating run at the end of the commit phase. Let $A, B$ correspond to $\mathsf {Alice}$'s and $\mathsf {Bob}$'s systems respectively. Now it can be seen that in the cheating run, at the end of the commit phase, the qubits of $\mathsf {Bob}$ are in the state $\rho_B \stackrel{\mathsf{def}}{=} {\mathsf{Tr}}_A \ketbra{\psi} = \sum_x p_x \rho_x$. Let $r > 1$ be a parameter to be chosen later. Let us now invoke the substate theorem (Thm.~\ref{thm:substate}) by setting $\sigma \stackrel{\mathsf{def}}{=} \rho_c$, $\ket{\overline{\sigma}} \stackrel{\mathsf{def}}{=} \ket{\phi_c}$ and $\tau \stackrel{\mathsf{def}}{=} \rho_B$. Let $\ket{\psi_c} \stackrel{\mathsf{def}}{=} \ket{\overline{\tau}}$ be obtained from Thm.~\ref{thm:substate} such that the extra single qubit register ${\mathbb C}^2$ is also with $\mathsf {Alice}$. Since ${\mathsf{Tr}}_{A} \ketbra{\psi_c} = {\mathsf{Tr}}_{A} \ketbra{\psi} = \rho_B$, from the Local Transition Theorem (Thm.~\ref{thm:loctrans}) there exists a unitary transformation $A_c$ acting just on $\mathsf {Alice}$'s system $A$ such that $(A_c \otimes I_B)\ket{\psi} = \ket{\psi_c}$, where $I_B$ is the identity transformation on $\mathsf {Bob}$'s system. Now the cheating $\mathsf {Alice}$ (whose intention is to reveal the string $c$) applies the transformation $A_c$ to $\ket{\psi}$ and then continues with the rest of the reveal phase as in the honest run. Let $\ket{\phi_c'} \stackrel{\mathsf{def}}{=} \ket{\phi}$ be obtained from Thm.~\ref{thm:substate}; hence ${\mathsf{Tr}} |\ketbra{\phi_c} - \ketbra{\phi_c'}| \leq 2/\sqrt{r}$. Now it can be seen that when $\mathsf {Bob}$ makes the final checking $\mathsf{POVM}$, the probability of success $\tilde{p}_c$ for $\mathsf {Alice}$ is at least $(1 - 1/r)2^{-rk_c}(1 - 1/\sqrt{r})$, where $ k_c = \mathsf{D}(\rho_c \| \rho_B) + 6 \sqrt{\mathsf{D}(\rho_c \| \rho_B) + 1} + 4$. One way to see this is to imagine that $\mathsf {Alice}$ first measures the single qubit register ${\mathbb C}^2$ and then proceeds with the rest of the reveal phase.
Now imagine that she obtains one on this measurement, which by Thm.~\ref{thm:substate} happens with probability $(1 - 1/r)2^{-rk_c}$. Also, once she obtains one, the joint state of $\mathsf {Alice}$ and $\mathsf {Bob}$ is $\ket{\phi_c'}$, whose trace distance from $\ket{\phi_c}$ is at most $2/\sqrt{r}$. Since the trace distance is preserved by unitary operations and can only decrease for subsystems, and since after this $\mathsf {Alice}$ follows the rest of the reveal phase honestly, we can conclude the following: the final state of $\mathsf {Bob}$'s qubits will be within trace distance $2/\sqrt{r}$ of his state at the end of a completely honest run of the protocol in which $\mathsf {Alice}$ starts with $c$ in the input register. Hence it follows from Thm.~\ref{thm:dist} that $\mathsf {Bob}$ will accept at the end with probability at least $1 - 1/\sqrt{r}$, since he accepts with probability 1 in the completely honest run of the protocol. Hence the overall cheating probability $\tilde{p}_c$ of $\mathsf {Alice}$ is at least $(1 - 1/r)2^{-rk_c}(1 - 1/\sqrt{r})$. Although here we have imagined $\mathsf {Alice}$ doing an intermediate measurement on the single qubit register ${\mathbb C}^2$, it is not necessary: she will have the same cheating probability if she proceeds with the rest of the honest protocol after just applying the cheating transformation $A_c$, since the final qubits of $\mathsf {Bob}$ will be in the same state in either case. Now, \begin{eqnarray*} 2^{a-n} & \geq & \sum_c p_c \tilde{p}_c \\ & \geq & (1 - 1/r)(1 - 1/\sqrt{r}) \left( \sum_c p_c 2^{-r(\mathsf{D}(\rho_c \| \rho_B) + 6 \sqrt{\mathsf{D}(\rho_c \| \rho_B) + 1} + 4)} \right) \\ & \geq & (1 - 1/r)(1 - 1/\sqrt{r})2^{\sum_c -r p_c(\mathsf{D}(\rho_c \| \rho_B) + 6 \sqrt{\mathsf{D}(\rho_c \| \rho_B) + 1} + 4)} \\ & \geq & (1 - 1/r)(1 - 1/\sqrt{r}) 2^{ -r(b + 6 \sqrt{b + 1} + 4) } \end{eqnarray*} The first inequality comes from the definition of $a$ in Defn.~\ref{def:QSC}, and the second from the lower bound on $\tilde{p}_c$ derived above. The third inequality comes from the convexity of the exponential function, and the fourth from the definition of $b$ in Defn.~\ref{def:QSC}, Defn.~\ref{def:divinf} and the concavity of the square root function. Now when $b > 15$, we let $r = 1 + \frac{1}{b}$ and therefore \begin{eqnarray*} (1 - 1/r)(1 - 1/\sqrt{r}) 2^{ -r(b + 6 \sqrt{b + 1} + 4) } & \geq & \frac{0.5}{(b + 1)^2} 2^{ -(b + 6 \sqrt{b + 1} + 7) } \\ & \geq & 2^{ -(b + 8 \sqrt{b + 1} + 8) } \end{eqnarray*} When $b \leq 15$, we let $r = 1 + 1/15$ and therefore \begin{eqnarray*} (1 - 1/r)(1 - 1/\sqrt{r}) 2^{ -r(b + 6 \sqrt{b + 1} + 4) } & \geq & 2^{ -(b + 6 \sqrt{b + 1} + 16) } \end{eqnarray*} Therefore we always get $2^{a - n} \geq 2^{ -(b + 8 \sqrt{b + 1} + 16) }$, which finally implies $$ a + b + 8 \sqrt{b + 1} + 16 \geq n .$$ \end{proofof}
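The choices of $r$ in the above proof can also be checked numerically. A small Python sketch (not part of the proof) that verifies, working in $\log_2$ space to avoid underflow, that $(1 - 1/r)(1 - 1/\sqrt{r})\, 2^{-r(b + 6\sqrt{b+1} + 4)} \geq 2^{-(b + 8\sqrt{b+1} + 16)}$ over a range of $b$:

\begin{verbatim}
import numpy as np

def log2_cheating_bound(b, r):
    """log2 of (1 - 1/r)(1 - 1/sqrt(r)) 2^{-r(b + 6 sqrt(b+1) + 4)}."""
    return (np.log2(1.0 - 1.0 / r) + np.log2(1.0 - 1.0 / np.sqrt(r))
            - r * (b + 6.0 * np.sqrt(b + 1.0) + 4.0))

def log2_claimed(b):
    """log2 of the claimed lower bound 2^{-(b + 8 sqrt(b+1) + 16)}."""
    return -(b + 8.0 * np.sqrt(b + 1.0) + 16.0)

for b in np.linspace(0.0, 500.0, 5001):
    r = 1.0 + 1.0 / b if b > 15.0 else 1.0 + 1.0 / 15.0
    assert log2_cheating_bound(b, r) >= log2_claimed(b)
\end{verbatim}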
\vspace{0.1cm} \begin{proofof}{Thm.~\ref{thm:avgasym}} Let $b_m$ represent the concealing parameter for $\Pi_m$. It is easy to verify from Lem.~\ref{lem:sadd} and the definition of the Holevo-$\chi$ information, Defn.~\ref{def:holevo}, that $b = b_m/m$. Thm.~\ref{thm:avgchi}, when applied to $\Pi_m$, then implies \begin{eqnarray*} & & a_m + b_m + 8 \sqrt{b_m + 2} + 17 \geq mn \\ & \Rightarrow & \lim_{m \rightarrow \infty} \frac{1}{m}(a_m + b_m + 8 \sqrt{b_m + 2} + 17) \geq n \\ & \Rightarrow & a + b \geq n \end{eqnarray*} \end{proofof} \vspace{0.1cm} \noindent \begin{proofof}{Thm.~\ref{thm:worst}} Let $\mu = \{\lambda_x\}$ be the distribution on $\{0,1\}^n$ obtained from Thm.~\ref{thm:remote}. Consider a cheating strategy of $\mathsf {Alice}$ in which she puts the superposition $\sum_x \sqrt{\lambda_x} \ket{x}$ in the register where she keeps the commit string. Let $c$ be the string in the cheating register of $\mathsf {Alice}$. Now, by the same arguments as above, the probability of success $\tilde{p}_c$ for $\mathsf {Alice}$ is at least $(1 - 1/\sqrt{r})(1 - 1/r)2^{-rk_c}$, with $ k_c, \rho_c, \rho$ as before. Since $\mathsf{D}(\rho_c \| \rho) \leq b$ for all $c$, it follows (by setting $r$ appropriately) that $\forall c, \tilde{p}_c \geq 2^{-(b + 8 \sqrt{b + 1} + 16)}. $ \end{proofof} \paragraph{Remark:} Let us now see how, with a good probability overall, $\mathsf {Alice}$ will be able to prevent herself from being detected cheating by $\mathsf {Bob}$. Let $\mathsf {Alice}$ have $c$ in the cheating register. Let $r_c$ be the probability of getting one on performing the two outcome measurement (obtained from Thm.~\ref{thm:substate}) after the commit phase, as in the cheating strategy described in the proof of Thm.~\ref{thm:avg} above. In case she gets one, she proceeds with the cheating strategy. In case she gets zero, she tries to roll back so that she can successfully reveal at least some string to $\mathsf {Bob}$. For this she does the following. \begin{enumerate} \item She applies the transformation $A_c^\dagger$ (that is, the inverse of $A_c$). \item She measures the input register in the computational basis; say she obtains $x'$. \item She proceeds with the rest of the reveal phase as if her actual input was $x'$. \end{enumerate} Assume that $\mathsf {Alice}$ obtains zero on performing the two-outcome measurement as in the cheating strategy described above, which happens with probability $1 - r_c$. Now it can be verified that the trace distance between $\ketbra{\psi_c}$ and the combined state of $\mathsf {Alice}$ and $\mathsf {Bob}$ after obtaining zero on performing the measurement is at most $2\sqrt{r_c}$. Since $A_c^\dagger$ is unitary, this implies that the combined state of $\mathsf {Alice}$ and $\mathsf {Bob}$ after applying $A_c^\dagger$ will be within trace distance $2\sqrt{r_c}$ of $\ketbra{\psi}$. Now we can argue as before that $\mathsf {Alice}$ can reveal some string successfully to $\mathsf {Bob}$ with probability at least $1 - \sqrt{r_c}$. Therefore overall, the probability that $\mathsf {Alice}$ will be able to reveal some string is at least $r_c + (1 - r_c)(1 - \sqrt{r_c}) \geq 1 - \sqrt{r_c}$. Now since typically $r_c$ is quite small (like $2^{-b}$), $1 - \sqrt{r_c}$ is quite close to 1. \subsection*{Acknowledgment} We thank Harry Buhrman, Matthias Christandl, Hoi-Kwong Lo, Jaikumar Radhakrishnan, and Pranab Sen for discussions. We also thank anonymous referees for suggestions on an earlier draft.
\section{Introduction} Many recent efforts seek to integrate renewable energy resources with the power grid to reduce the carbon footprint. The high variability associated with wind and solar power can be balanced using \glspl{der} providing ancillary services such as frequency regulation. Consequently, there is a growing interest among market operators in DER aggregations with flexible generation and load capabilities to balance fluctuations in grid frequency and minimize \glspl{ace}. The fast ramping rates and minimal marginal standby cost put many DERs at an advantage over conventional generators and make them suitable for participation in the frequency regulation market. The fast ramping rates reduce the required power capacity of DERs to only 10\% of an equivalent generator to balance a frequency drop within 30~s~\cite{ZAO-LMC-LA-MTM:19}. However, most individual DERs have small capacities, typically on the order of kWs compared to tens of MW for conventional frequency control resources. Commanding the required thousands to millions of DERs to replace existing frequency regulation resources over a large balancing area entails aggregating DERs that are distributed at end points all over the grid on customer premises. The dynamic nature, large number, and distributed location of DERs require coordination. This contrasts with existing frequency regulation implementations with conventional energy resources~\cite{MKM:14}. For example, \ac{caiso} requires all generators to submit their bids once per regulation interval. Then, the setpoints are assigned centrally to all resources every 2-4~s without any consideration of operational costs~\cite{CAISO:12}. While distributed control has the potential to enable DER participation in the frequency regulation market (e.g.,~\cite{PS-CYC-JC:18-acc}), there is a general lack of large-scale testing to prove its effectiveness for widespread adoption by system operators. The 2017 National Renewable Energy Laboratory Workshop on Autonomous Energy Grids~\cite{NREL:17} concluded that ``A major limitation in developing new technologies for autonomous energy systems is that there are no large-scale test cases (...). These test cases serve a critical role in the development, validation, and dissemination of new algorithms''. The results of this paper are the outcome of a project under the ARPA-e \ac{nodes} program\footnote{\url{https://arpa-e.energy.gov/arpa-e-programs/nodes}}, which postulates DER aggregations as virtual power plants that enable variable renewable penetrations of at least 50\%. The vision of the NODES program was to employ state-of-the-art tools from control systems, computer science, and distributed systems to optimally respond to dynamic changes in the grid by leveraging DERs while maintaining customer quality of service. The NODES program required testing with at least 100 DERs at power. Here, we demonstrate the challenges and opportunities of testing on a heterogeneous fleet of DERs for eventual operationalization of optimal distributed control at frequency regulation time scales. \textit{Literature Review.} To the best of our knowledge, real-world testing of frequency regulation by DERs has been limited. A \ac{v2g} \ac{ev}~\cite{WK-VU-KH-KK-SL-SB-DB-NP:08} and two \ac{bess}~\cite{MS-DS-AS-RT-RL-PCK:13} provided frequency regulation. 76 bitumen tanks were integrated with a simplified power system model to provide frequency regulation via a decentralized control algorithm in~\cite{MC-JW-SJG-CEL-NG-WWH-NJ:16}.
In buildings, a decentralized control algorithm controlled lighting loads in a test room~\cite{JL-WZ-YL:17}, and centralized frequency control was applied to an \ac{ahu}~\cite{YL-PB-SM-TM:15,EV-ECK-JM-GA-DSC:18}, an inverter and four household appliances~\cite{BL-SP-SA-MVS:18}, and four heaters in different rooms~\cite{LF-TTG-FAQ-AB-IL-CNJ:18}. A laboratory home with an EV and an AHU, together with a number of simulated homes, was considered for demand response in~\cite{KB-XJ-DV-WJ-DC-BS-JW-HS-ML:16} through an aggregator at a 10~s timescale. Technologies for widespread, but centrally controlled, cycling of air conditioners directly by utilities (cf.~\cite{SDGE}) and aggregators are commonplace for peak shifting, but operate over time scales of minutes to hours. Industrial solutions enabling heterogeneous DERs to track power signals also exist, but they are either centralized (cf.~\cite{SC-PA:16}) or require all-to-all communication~\cite{AT-SZ-SR:17}. Our literature review exposes the following limitations: (i) centralized control or need for all-to-all communication~\cite{WK-VU-KH-KK-SL-SB-DB-NP:08,MS-DS-AS-RT-RL-PCK:13,YL-PB-SM-TM:15,EV-ECK-JM-GA-DSC:18,BL-SP-SA-MVS:18,LF-TTG-FAQ-AB-IL-CNJ:18,KB-XJ-DV-WJ-DC-BS-JW-HS-ML:16,SDGE,SC-PA:16,AT-SZ-SR:17}, which does not scale to millions of DERs; (ii) small numbers of DERs~\cite{WK-VU-KH-KK-SL-SB-DB-NP:08,MS-DS-AS-RT-RL-PCK:13,YL-PB-SM-TM:15,EV-ECK-JM-GA-DSC:18,BL-SP-SA-MVS:18,LF-TTG-FAQ-AB-IL-CNJ:18,KB-XJ-DV-WJ-DC-BS-JW-HS-ML:16}; (iii) lack of diversity in DERs~\cite{WK-VU-KH-KK-SL-SB-DB-NP:08,MS-DS-AS-RT-RL-PCK:13,MC-JW-SJG-CEL-NG-WWH-NJ:16,JL-WZ-YL:17,YL-PB-SM-TM:15,EV-ECK-JM-GA-DSC:18,LF-TTG-FAQ-AB-IL-CNJ:18}, with associated differences in tracking time scales and accuracy. No trial has been reported that demonstrates generalizability to a real scenario with (i) scalable distributed control and a (ii) large number of (iii) heterogeneous DERs. \textit{Statement of Contributions.} To advance the field of real-world testing of DERs for frequency control, we conduct a series of tests using a group of up to 69 active and 107 passive heterogeneous DERs on the University of California, San Diego (UCSD) microgrid~\cite{BW-JD-DW-JK-NB-WT-CR:13}. To the best of the authors' knowledge, this is the first work to consider such a large, diverse portfolio of real physical DERs for secondary frequency response. As such, the major contributions of this work are: \begin{itemize} \item A detailed account of the testbed, including the DER actuation and sampling interfaces, the distributed optimization setup, and the communication framework. \item A description of techniques to work around technical barriers, provision of lessons learned, and suggestions for future improvement. \item An evaluation of the performance of both the cyber and physical layers, including an evaluation of the eligibility requirements for, and the economic benefit of, participating in the ancillary services market. \end{itemize} \textit{Paper Overview.} Frequency regulation is simulated on the UCSD microgrid using real controllable DERs (Section \ref{ders}) to follow the \ac{pjm} RegD signal~\cite{PJM-signal:19} interpolated from 0.5~Hz to 1~Hz (Section~\ref{regulation-signal}).
The DER setpoint tracking is formulated as a power allocation problem at every regulation instant (Section \ref{optimization-statements}), and uses three types of provably convergent distributed algorithms from~\cite{ADDG-CNH-NHV:12,AC-JC:16-allerton,AC-BG-JC:17-sicon,TA-CYC-SM:18-auto} to solve the optimization problem; see Appendix~\ref{sec:appendix}. Setpoints are computed in a distributed manner on multiple Raspberry Pi's communicating via ethernet switches (Section \ref{computing-setup}). The setpoints are implemented on up to 176 DERs at power using dedicated command interfaces via TCP/IP communication (Section \ref{actuation-interface}), the DER power outputs monitored (Section \ref{power-measurements}), and their tracking performance evaluated (Section \ref{error-metrics}). Results (Section~\ref{sec:results}) for the various test scenarios described in Section~\ref{sec:test-scenario} show that the test system tracks the signal with reasonable error despite delays in response and inaccurate tracking behavior of some groups of DERs, and qualifies for participation in the PJM ancillary services market. \section{Problem Setting} This paper validates real-world DER controllability for participation in secondary frequency regulation through demonstration tests implemented on a real distribution grid. The tests showcase the ability of aggregated DERs to function as a single market entity that responds to frequency regulation requests from the \ac{iso} by optimally coordinating DERs. The goal is to monitor and actuate a set of real controllable DERs to collectively track a typical \ac{agc} signal issued by the ISO. Three different distributed coordination schemes optimize the normalized contribution of each DER to the cumulative active power signal. Unlike simulated models, the use of real power hardware exposes implementation challenges associated with measurement noise, sampling errors, data communication problems, and DER response. To that end, precise load tracking is pursued at timescales that differ by DER type, consistent with individual DER responsiveness and communication latencies, yet meet frequency regulation requirements in aggregation. The 69~kV substation and 12~kV radial distribution system owned by UCSD to operate the 5~km$^2$ campus served as the demonstration testbed. It has diverse energy resources with real-time monitoring and control capabilities, allowing for active load tracking. This includes over 3~MW of solar \ac{pv} systems, 2.5~MW/5~MWh of BESS, building \ac{hvac} systems in 14 million square feet of occupied space, and over 200 \ac{v1g}~\revision{\cite{CAISO:14}} and V2G EV chargers. The demonstration tests used a representative population of up to 176 such heterogeneous DERs to investigate the tracking behavior of specific DER types as well as their cooperative tracking abilities. While the available DER capacity at UCSD far exceeds the minimum requirements for an ancillary service provider set by most ISOs (typically $\sim$ 1~MW), logistical considerations and controller capabilities dictated the choice of a DER population with less aggregate power capacity (up to 184~kW) for this demonstration. Since this magnitude of power is insufficient to measurably impact the actual grid frequency, we chose to simulate frequency regulation by following a frequency regulation signal. \section{Test Elements}\label{sec:elements} Here, we elaborate on the different elements of the validation tests.
These include the optimization formulation employed to compute DER setpoints (Section~\ref{optimization-statements}), the reference AGC signal (Section~\ref{regulation-signal}) and the types of DERs used to track it (Section~\ref{ders}), the computing platform (Section~\ref{computing-setup}), the actuation (Section~\ref{actuation-interface}) and monitoring interfaces (Section~\ref{power-measurements}), and the performance metrics used to assess the cyber and physical layers and eligibility for market participation (Section~\ref{error-metrics}). \subsection{Optimization Formulation}\label{optimization-statements} The optimization model for AGC signal tracking using DERs can be mathematically stated as a separable resource allocation problem subject to box constraints as follows: \begin{equation}\label{eq:opt} \begin{aligned} \underset{p\in\ensuremath{\mathbb{R}}^n}{\text{min}} \ &f(p) = \sum_{i=1}^n f_i(p_i), \\ \text{s.t.} \ &\sum_{i=1}^n p_i = \ensuremath{\subscr{P}{ref}}, \\ &p_i\in [\ensuremath{\underline{p}}_i, \ensuremath{\overline{p}}_i], \quad \forall i\in\N = \{1,\dots,n\}. \end{aligned} \end{equation} The agents $i\in\N$ each have local ownership of a decision variable $p_i\in\ensuremath{\mathbb{R}}$, representing an active power generation or consumption quantity (setpoint), a local convex cost function $f_i$, and local box constraints $[\ensuremath{\underline{p}}_i,\ensuremath{\overline{p}}_i]$, representing active power capacity limits. $\ensuremath{\subscr{P}{ref}}$ is a given active power reference value determined by the ISO and transmitted to a subset of the agents as problem data, see e.g.~\cite{CAISO:18}. $\ensuremath{\subscr{P}{ref}}$ is a signal that changes over time, so a new instance of~\eqref{eq:opt} is solved in \revision{real-time} 1~s intervals corresponding to these changes. \revision{Note that with just 1~s difference between the instances, the box constraints might also change due to the limited ramp rates of DERs. In this work we consider them constant and assume~\eqref{eq:opt} is feasible.} For the validation tests, we used two types of cost functions: constant and quadratic. Constant functions were used for the Ratio-Consensus (RC) solver~\revision{\cite{ADDG-CNH-NHV:12}}, which turns the optimization into a feasibility problem. Quadratic functions were used for the primal-dual based (PD)~\revision{\cite{AC-JC:16-allerton,AC-BG-JC:17-sicon}} and Distributed Approximate Newton Algorithm (DANA)~\revision{\cite{TA-CYC-SM:18-auto}} methods. \revision{In short, RC prescribes dynamics which seek to achieve \emph{consensus} on a \emph{ratio} of operating capacity with respect to $\ensuremath{\underline{p}}_i,\ensuremath{\overline{p}}_i$ so that the agents achieve $\sum_i p_i = \ensuremath{\subscr{P}{ref}}$. PD and DANA are each Lagrangian-based dynamics; in particular, PD is gradient-based (``first-order'') and DANA is Newton-based (``second-order''). See Appendix~\ref{sec:appendix} for more technical detail on these algorithms.} The quadratic functions were artificially chosen to produce satisfactorily diverse and representative solutions \revision{to~\eqref{eq:opt}} for each DER population; a toy centralized solution of~\eqref{eq:opt} is sketched below.
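For concreteness, the following sketch solves one instance of~\eqref{eq:opt} for quadratic costs $f_i(p_i) = a_i p_i^2$ by bisection on the dual variable of the coupling constraint. It is an illustration only (the weights and capacities are made up), not the testbed implementation:

```python
import numpy as np

def allocate(p_lo, p_hi, a, p_ref, tol=1e-9):
    """Solve min sum_i a_i p_i^2 s.t. sum_i p_i = p_ref, p_lo <= p <= p_hi.
    KKT stationarity gives p_i(lam) = clip(-lam / (2 a_i), p_lo_i, p_hi_i);
    the total is nonincreasing in lam, so bisect for sum_i p_i(lam) = p_ref."""
    total = lambda lam: np.clip(-lam / (2 * a), p_lo, p_hi).sum()
    lo, hi = -1e6, 1e6                       # bracket for the multiplier
    assert total(hi) <= p_ref <= total(lo), "p_ref outside feasible range"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) > p_ref else (lo, mid)
    return np.clip(-0.5 * (lo + hi) / (2 * a), p_lo, p_hi)

# Five hypothetical DERs with +/-5 kW boxes and distinct quadratic weights.
a = np.array([0.5, 0.8, 1.0, 1.5, 2.0])
p = allocate(np.full(5, -5.0), np.full(5, 5.0), a, p_ref=7.5)
print(p, p.sum())   # setpoints summing to 7.5 kW (up to tol)
```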
\revision{Costs associated with a physical or economic metric (e.g.\ deviation from a building setpoint for AHUs, user-specified charging demands for V1G and V2Gs, and resistive losses in a BESS) are of great interest, but are far from trivial to model and thus not the focus of this study.} We split the total time period of the signal $\ensuremath{\subscr{P}{ref}}$ into three equal segments, and implemented RC, PD, and DANA in that order. Box constraints $[\ensuremath{\underline{p}}_i,\ensuremath{\overline{p}}_i]$ \revision{are given in Table~\ref{table:devicerating} and} were centered at zero for simplicity; \revision{for example, an AHU $i$ with 2~kW capacity has $[\ensuremath{\underline{p}}_i,\ensuremath{\overline{p}}_i] = [-1,1]$, while a V2G $j$ with $\pm$5~kW capacity has $[\ensuremath{\underline{p}}_j,\ensuremath{\overline{p}}_j] = [-5,5]$.} \subsection{Regulation Signal}\label{regulation-signal} The 40~min RegD signal published by PJM~\cite{PJM-signal:19} served as the reference AGC signal for the validation tests, and was used to obtain the value of $\ensuremath{\subscr{P}{ref}}$ in~\eqref{eq:opt}. The normalized RegD signal, contained in $[-1,1]$ \revision{(see Figure~\ref{fig:regd})}, was interpolated from 0.5~Hz to 1~Hz. The signal was then adjusted by subtracting the normalized contributions of building loads and PV systems, cf.\ Section~\ref{ders}. Finally, the normalized signal was scaled by a factor proportional to the total DER capacity $\sum_i (\ensuremath{\overline{p}}_i - \ensuremath{\underline{p}}_i)$ before being sent to the optimization solvers. More precisely, \begin{equation}\label{eq:norm-sig} \ensuremath{\subscr{P}{ref}} = \beta \frac{\sum_i (\ensuremath{\overline{p}}_i - \ensuremath{\underline{p}}_i)}{\| P_\text{RegD} + P_\text{PV} - P_\text{b} \|_{\infty} }\left(P_\text{RegD} + P_\text{PV} - P_\text{b}\right), \end{equation} where $P_\text{RegD}$ refers to the normalized RegD signal data, $P_\text{PV}$ and $P_\text{b}$ respectively refer to the normalized PV generation and building load data obtained from the UCSD ION server as described in Section \ref{power-measurements}, and $0 < \beta < 1$ is an arbitrary scaling constant. \revision{Note that this results in a different target signal $\ensuremath{\subscr{P}{ref}}$ for the different test scenarios considered in Section~\ref{sec:test-scenario} due to the different power ratings of the DERs (cf. Section~\ref{ders}) used across the tests.} For most test scenarios, $\beta = 0.75$ to prevent extreme setpoints that would require all DERs to operate at either $\ensuremath{\overline{p}}_i$ or $\ensuremath{\underline{p}}_i$ simultaneously, which may be infeasible in some time steps due to slower signal update times, see Table~\ref{table:devices}. Each $P$ in~\eqref{eq:norm-sig} is a vector with 2401 elements corresponding to each 1~s time step's instance of~\eqref{eq:opt} over the 40~min time horizon. \begin{figure}[hbt!] \centering \includegraphics[width=\linewidth]{RegD_sig.eps} \caption{\revision{Normalized PJM RegD signal.}}\label{fig:regd} \vspace*{-2ex} \end{figure}
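The construction in~\eqref{eq:norm-sig} is easy to express in code. The sketch below uses placeholder traces (the real RegD, PV, and building-load data are not reproduced here); only the shape of the computation follows the text:

```python
import numpy as np

def build_target(p_regd, p_pv, p_b, cap_total, beta=0.75):
    """Scale the net normalized signal per Eq. (2):
    P_ref = beta * cap_total * net / ||net||_inf, net = RegD + PV - building."""
    net = np.asarray(p_regd) + np.asarray(p_pv) - np.asarray(p_b)
    return beta * cap_total * net / np.max(np.abs(net))

# Placeholder 2401-sample (1 Hz, 40 min) traces.
t = np.arange(2401)
p_regd = np.sin(2 * np.pi * t / 600.0)      # stand-in for the RegD signal
p_pv = np.full(t.shape, 0.10)               # stand-in normalized PV output
p_b = np.full(t.shape, 0.05)                # stand-in normalized building load
p_ref = build_target(p_regd, p_pv, p_b, cap_total=184.0)
print(p_ref.shape, np.max(np.abs(p_ref)))   # peak magnitude = 0.75 * 184 kW
```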
\subsection{DERs}\label{ders} The reference AGC signal was to be collectively tracked using DERs consisting of HVAC AHUs, BESS, V1G and V2G EVs, PV systems, and whole-building loads. Since PV systems and (non-AHU) building loads were not controllable, they participated in the test as passive DERs. Consequently, the active DERs were commanded to track a modified target signal derived by subtracting the net active power output of the passive DERs from the reference AGC signal and applying appropriate scaling (cf.\ Section \ref{regulation-signal}). \revision{Table~\ref{table:devicerating}} lists the typical net power capacity $\ensuremath{\overline{p}}_i - \ensuremath{\underline{p}}_i$ of the different active DER types. \begin{table}[tbh] \centering \caption{\revision{Typical power rating of active DER types}}\label{table:devicerating} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{DER Type} & \textbf{AHU} & \textbf{V1G EV} & \textbf{V2G EV} & \textbf{BESS} \\ \hline \makecell{\textbf{Typical power} \\ \textbf{rating per DER type}} & 2~kW & \makecell{3.3~kW (Tests 0 \& 1), \\ 4.9~kW (Test 2)} & \revision{$\pm$} 5~kW & \revision{$\pm$} 3~kW \\ \hline \end{tabular} \end{table} The contribution of each active DER to the target signal was defined with respect to a baseline power, around which $[\ensuremath{\underline{p}}_i,\ensuremath{\overline{p}}_i]$ was centered, to enable tracking of both positive and negative ramps in the target signal. For DERs like V2G EVs and BESS, which were capable of power adjustments in both directions, the baseline was 0~kW. The baseline for V1G EVs was defined to be halfway between their allowed minimum and maximum charging rates, where the former was restricted by the SAE J1772 charging standard to 1.6~kW. Similarly, the baseline for AHUs was defined to be half of their power draw when on. Further, since AHUs were limited to binary on-off operational states, the continuous and arbitrarily precise AHU setpoints obtained by solving \eqref{eq:opt} were rounded to the closest discrete setpoint obtained from a combination of on-off states before actuation (a toy version of this rounding step is sketched at the end of this subsection). AHU control was restricted, by UCSD Facilities Management, to specifying only DER setpoints and duration of actuation; since building automation controllers could not be modified, model-based designs were impossible. This was to avoid malfunctioning or disruptions to real physical infrastructure in the networked building management system that also controls lighting, security, and fire protection systems.
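The on-off rounding for AHUs can be illustrated as follows. Relative to the half-power baseline, each unit contributes plus or minus half its draw; the sketch below brute-forces the closest achievable total for a handful of hypothetical 2~kW units (the exact rounding used on the testbed is not documented beyond the description above):

```python
import numpy as np
from itertools import product

def round_to_onoff(target_kw, unit_kw):
    """Round a continuous aggregate AHU setpoint (relative to baseline) to the
    nearest total reachable by whole on/off units: on = +w/2, off = -w/2.
    Exhaustive search; a toy for a handful of units only."""
    half = np.asarray(unit_kw) / 2.0
    best = min(product((-1, 1), repeat=len(half)),
               key=lambda s: abs(float(np.dot(s, half)) - target_kw))
    return best, float(np.dot(best, half))

states, total = round_to_onoff(1.3, [2.0, 2.0, 2.0])  # three 2 kW AHUs
print(states, total)   # one unit off, two on -> 1.0 kW
```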
\begin{figure*}[tbh] \centering \includegraphics[width=\linewidth]{commDiag.png} \caption{Communication architecture for computation and actuation of control policies.}\label{fig:network-diagram} \end{figure*} \subsection{Computing Setup}\label{computing-setup} The DER active power setpoints \revision{were computed for the entire 40-min test horizon prior to any device actuation using a set of 9 Linux-based nodes. The nodes C1-C9} communicate with each other over an undirected ring topology, cf.\ Fig.~\ref{fig:network-diagram}. As one of the sparsest network topologies, where message passing occurs only between a small number of neighbors, the ring topology presents a challenging scenario for distributed control. Since there were more active DERs than computing nodes, the 9 nodes were mapped subjectively to the 69 active DERs such that nodes C1-C2 computed the actuation setpoints for the AHUs, C3 for the V1G EVs, C4-C8 for the V2G EVs, and C9 for the BESS. \revision{The computing steps are summarized in Algorithm \ref{alg:computing}.} Each computing node generated actuation commands as CSV files containing the power setpoints for its respective group of DERs at a uniform update rate of 1~Hz. Preliminary testing revealed different response times across DER types, with AHUs and V1G EVs exhibiting slower responses than the other active DER types. DERs with response times greater than 1~s were subject to a stair-step control signal with a signal update time consistent with DER responsiveness and constant setpoints during intermediate time steps. Table~\ref{table:devices} lists the signal update times for the different DER types. \floatname{algorithm}{\color{black}Algorithm} \begin{algorithm} \caption{\revision{Computing process}} \label{alg:computing} \color{black} \begin{algorithmic}[1] \Require {Map $f: C_i \rightarrow$ {DER-type}} \State Initialize time of last solution update $t_{\texttt{sol-update}_i} = 0$, initial setpoints for DERs mapped to computing node $C_i$ as $P_{f(C_i)}, \forall i\in\{1,\dots,9\}$ \For {$k=0,\dots,2400$} \For {$i=1,\dots,9$} \If{$k - t_{\texttt{sol-update}_i} == t_{\texttt{signal-update}_i}$} \State {Solve~\eqref{eq:opt} to update $P_{f(C_i)}(k)$} \State $t_{\texttt{sol-update}_i} = k$ \EndIf \State $P_{f(C_i)}(k) \gets P_{f(C_i)}(t_{\texttt{sol-update}_i})$ \If{$\operatorname{mod}(k,60) == 0$} \State Send $P_{f(C_i)}(k)$ to DER type, $f(C_i)$ \EndIf \EndFor \EndFor \end{algorithmic} \end{algorithm} \subsection{Actuation Interfaces and Communication Framework}\label{actuation-interface} The actuation commands were issued using fixed-IP computers through dedicated interfaces that varied by DER type, as depicted in Fig.~\ref{fig:network-diagram}. The setpoints for AHUs were issued through a custom Visual Basic program that interfaced with the Johnson Controls Metasys building automation software. The power rate of the BESS was set via API-based communication with a dedicated computer that controlled the battery inverter. The charging rates of the V1G and V2G EVs were adjusted through the proprietary smart EV charging platforms of the charging station operators. EVs using ChargePoint\textsuperscript{\tiny\textregistered} V1G stations were manually controlled via the load shedding feature of ChargePoint's station management software. The actuation of EVs using PowerFlex\textsuperscript{\tiny\textregistered} V1G chargers and Nuvve\textsuperscript{\tiny\textregistered} V2G chargers was automated, and commands were issued via API-based communication. \subsection{Power Measurements}\label{power-measurements} The active power of all DERs was metered at 1~Hz. The power outputs of \revision{individual} PV systems and building loads were obtained prior to the test from their respective ION meters by logging data from the UCSD ION \ac{scada} system \revision{and aggregated to obtain the total power output of all PVs and building loads}. A moving average filter with a 20~s time horizon was used to remove noise from the \revision{aggregate} measured data for these passive DERs. The V2G EV and BESS power data were acquired using the same interfaces that were used for their actuation, which logged data from dedicated power meters. Since neither the AHUs nor the ChargePoint V1G EVs had dedicated meters, they were monitored via their respective building ION meters by subtracting a baseline building load from the building meter power output. Assuming a constant baseline building load, any change in the meter outputs can be attributed to the actuation of the AHUs and V1G EVs. This assumption is justifiable considering the tests were conducted from 0400 PT to 0600 PT on a weekend, when building occupancy was likely zero and the building load remained largely unchanged. Noise in the ION meter outputs, observed as frequent 15--30~kW spikes in the measured data for the AHUs (Fig.~\ref{fig:trial0trial1}) and ChargePoint V1G EVs, was treated by removing outliers and passing the resulting signal through a 4~s horizon moving average filter. Here, outliers refer to points that change in excess of 50\% of the mean of the 40~min signal in a 1~s interval.
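The outlier treatment and smoothing just described can be sketched as follows (the exact outlier-replacement rule used on the testbed is not specified, so spikes are simply held at the previous sample here):

```python
import numpy as np

def clean_meter_signal(x, window_s=4, outlier_frac=0.5):
    """De-noise a 1 Hz meter trace: treat any one-step change exceeding
    outlier_frac * mean(|x|) as a spike (hold the previous sample), then
    apply a moving-average filter of length window_s."""
    x = np.asarray(x, dtype=float)
    thresh = outlier_frac * np.mean(np.abs(x))
    y = x.copy()
    for t in range(1, len(y)):
        if abs(y[t] - y[t - 1]) > thresh:
            y[t] = y[t - 1]
    return np.convolve(y, np.ones(window_s) / window_s, mode="same")
```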
\subsection{Performance Metrics}\label{error-metrics} The performance of the distributed implementation (the cyber layer) was measured by the normalized \ac{mse} between the distributed and the true (i.e., exact) centralized optimization solutions. The true solutions were computed for each instance of~\eqref{eq:opt} using a centralized CVX solver in MATLAB~\cite{website:cvx}. The MSE was normalized by dividing by the mean of the squares of the true solutions. The tracking performance of the DERs was evaluated through (i) the \ac{rmse} in tracking \begin{equation}\label{eq:rmse-calc} \text{RMSE} = \sqrt{\frac{\sum_{t=1}^T (P_t^{\text{prov}}-P_t^{\text{tar}})^2}{\sum_{t=1}^T (P_t^{\text{tar}})^2}}, \end{equation} where $P_t^{\text{prov}}$ is the total power that was provided (measured), and $P_t^{\text{tar}}$ is the target (commanded) regulation power at time step $t\in\{1,\dots,T=2401\}$; and (ii) the tracking delay, computed as the time shift of the measured signal which yields the lowest RMSE between the commanded and measured signals. \revision{The sum of the delays due to local computation and communication between the computing nodes is capped by the algorithm computation time, and would be less than 1~s. Therefore, these delays are not explicitly considered in the tracking delay calculation, and the computed tracking delay only includes the device response times and measurement delays.} The PJM Performance Score~$S$ following~\cite[Section 4.5.6]{PJM:20} was computed as a test of eligibility to participate in the ancillary services market, and is given by the mean of a Correlation Score~$S_c$, a Delay Score~$S_d$, and a Precision Score~$S_p$: \begin{align*} S_c &= \frac{1}{T-1}\sum_{t=1}^T \frac{(P_t^{\text{prov}} - \mu^{\text{prov}})(P_t^{\text{tar}} - \mu^{\text{tar}})}{\sigma^{\text{prov}}\sigma^{\text{tar}}}, \\ S_d &= \bigg\lvert \frac{\delta - 5 \text{ min}}{5 \text{ min}} \bigg\rvert, \quad S_p = 1 - \frac{1}{T} \sum_{t=1}^T \bigg\lvert \frac{P_t^{\text{prov}} - P_t^{\text{tar}}}{\mu^{\text{tar}}} \bigg\rvert, \\ S &= 1/3(S_c + S_d + S_p), \end{align*} where $P_t^{\text{prov}}$ and $P_t^{\text{tar}}$ are as in~\eqref{eq:rmse-calc}, $\mu^{\text{prov}}, \mu^{\text{tar}}$ and $\sigma^{\text{prov}}, \sigma^{\text{tar}}$ denote their respective means and standard deviations, and $\delta$ is the corresponding maximum delay in DER response for which $S_c$ is maximized. A performance score of at least 0.75 is required for participation in the PJM ancillary services market.
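The metrics above translate directly into code. The following sketch ($T=2401$ samples at 1~Hz; the standard-deviation convention in the correlation score is assumed to be the population one) computes the tracking RMSE, the delay as the best time shift, and the three PJM scores:

```python
import numpy as np

def rmse(provided, target):
    """Relative tracking RMSE per the equation above."""
    p, g = np.asarray(provided, float), np.asarray(target, float)
    return np.sqrt(np.sum((p - g) ** 2) / np.sum(g ** 2))

def tracking_delay(provided, target, max_shift=300):
    """Delay = shift (s) of the measured trace minimizing the RMSE."""
    errs = [rmse(provided[s:], target[:len(target) - s])
            for s in range(max_shift + 1)]
    return int(np.argmin(errs))

def pjm_scores(provided, target, delta_s):
    """Correlation, delay, and precision scores (cf. PJM Manual 12)."""
    p, g = np.asarray(provided, float), np.asarray(target, float)
    s_c = np.sum((p - p.mean()) * (g - g.mean())) / ((len(p) - 1) * p.std() * g.std())
    s_d = abs((delta_s - 300.0) / 300.0)      # 5 min = 300 s
    s_p = 1.0 - np.mean(np.abs((p - g) / g.mean()))
    return s_c, s_d, s_p, (s_c + s_d + s_p) / 3.0
```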
\section{\revision{Test Scenarios}}\label{sec:test-scenario} \revision{In this section, we describe the test scenarios carried out on the UCSD microgrid, elaborating on the challenges we faced and the differences across the tests, summarized by type of DER in Table~\ref{table:devices}.} \subsection{Commonalities} A series of three tests were conducted on December 12, 2018 (Test~0), April 14, 2019 (Test~1) and December 17, 2019 (Test~2). All three tests involved a 40~min preparatory run followed by a 40~min final test. \revision{Table~\ref{table:devices} lists the types of DERs across the tests. All tests were carried out during non-operational hours (between 0400 PT and 0540 PT) to avoid potential disruptions to building occupants, with the exception of the V1G EVs in Test 2, which were tested at the start of the work day (0900 - 1010 PT) to maximize fleet EV availability (cf.\ Section~\ref{test-2}).} Daytime PV output data from February 24, 2019 was used as a proxy for an actual daytime PV signal. \begin{table}[tbh] \centering \caption{\revision{Characteristics of each test by DER type.}}\label{table:devices} \begin{tabular}{|>{\color{black}}c|>{\color{black}}c|>{\color{black}}c|>{\color{black}}c|>{\color{black}}c|} \hline \textbf{DER Type} & \textbf{AHU} & \textbf{V1G EV} & \textbf{V2G EV} & \textbf{BESS} \\ \hline \textbf{\# DERs - Test 0} & 7 & 4 & 5 & 1 \\ \hline \textbf{\# DERs - Test 1} & 34 & 29 & 5 & 1 \\ \hline \textbf{\# DERs - Test 2} & 34 & 17 & 6 & 1 \\ \hline \textbf{Signal updates} & 1 m & \begin{tabular}[c]{@{}c@{}}5 m \\ (Tests 0 \& 1),\\ 1 m \\ (Test 2)\end{tabular} & 1 s & 20 s \\ \hline \textbf{DER Actuation} & \begin{tabular}[c]{@{}c@{}}Synchronous\\ (Tests 0 \& 1),\\ Two-stage: Stage 1 \\(Test 2)\end{tabular} & \multicolumn{3}{>{\color{black}}c|}{\begin{tabular}[c]{@{}c@{}}Synchronous \\ (Tests 0 \& 1),\\ Two-stage: Stage 2 \\ (Test 2)\end{tabular}} \\ \hline \textbf{Operation Mode} & Automatic & \begin{tabular}[c]{@{}c@{}}Manual \\(Tests 0 \& 1),\\ Automatic \\(Test 2)\end{tabular} & \multicolumn{2}{>{\color{black}}c|}{Automatic} \\ \hline \textbf{Time of test} & 0400 - 0500 PT & \multicolumn{3}{>{\color{black}}c|}{\begin{tabular}[c]{@{}c@{}}0400 - 0500 PT (Tests 0 \& 1),\\ 0900 - 1010 PT (Test 2)\end{tabular}} \\ \hline \textbf{Computing setup} & \multicolumn{4}{>{\color{black}}c|}{\begin{tabular}[c]{@{}c@{}}Semi-centralized using ROS (Tests 0 \& 1),\\ Fully distributed using Raspberry Pi (Test 2)\end{tabular}} \\ \hline \end{tabular} \end{table} \subsection{Test~0}\label{test-0} Test~0 was a preliminary calibration that \revision{was used to examine the response times and tracking behavior of every DER type and to detect issues related to communication and actuation. \subsubsection{DERs} Test~0 used only a representative sample of 17 DERs. The V1G and V2G population was composed of UCSD fleet EVs plugged in at ChargePoint and Nuvve charging stations, respectively. \subsubsection{Computing Setup} 9 laptops running the Robot Operating System (ROS) communicated via a local Wi-Fi hotspot to implement the distributed coordination algorithms and compute the DER setpoints.} \subsubsection{\revision{Actuation}} \revision{All DERs were actuated synchronously.} \subsection{Test~1}\label{test-1} Test~1 was identical to Test~0 except in the number of DERs utilized. \subsubsection{DERs} \revision{Test~1 used a larger population of 69 active DERs and 107 passive DERs.} \subsubsection{Computing Setup} \revision{The same semi-centralized ROS-based computing setup as in Test~0 was used in Test~1.} Given that the available power capacity of fast-responding DERs such as the V2G EVs and BESS was smaller than that of the slow-responding DERs, the steep ramping demands of the target signal were met by upscaling the power of the fast-responding DERs when solving for the contributions of individual DERs. Another option would have been to reduce the number of slow-responding DERs, but the funding agency stipulated prioritizing the number and types of heterogeneous DERs over accuracy in signal tracking.
A real DER aggregator would instead require a more balanced capacity of slow and fast DERs to ensure feasibility of tracking these ramp features. \subsubsection{\revision{Actuation}} \revision{All DERs were actuated synchronously. Since the ChargePoint V1G EVs in Test~1 were operated via manual input of DER setpoints (an interface to their API had not been developed yet), to avoid overloading the (human) operators they were divided into three groups and actuated in a staggered fashion, such that each of the three groups maintained a signal update time of 5~min but the groups were commanded 1~min apart from each other.} \subsection{Test~2}\label{test-2} Test~2 also used the entire population of DERs, but substituted the cumbersome V1G population with more capable V1G chargers and used a new distributed computing setup and method of actuation based on lessons learned from Test~1. \subsubsection{DERs} The V1G EVs used in Test~1 performed poorly owing to an unreliable actuation interface that experienced seemingly random stalling and lacked automated control capabilities. Therefore, 17 PowerFlex V1G charging stations at one location replaced the distributed 29 V1G charging stations used in Test~1. Since the PowerFlex interface did not permit actuating individual stations, the 17 charging stations participated in the test as a single aggregate DER. The 0930--1010 PT timing of the V1G EV part of the test coincided with the start of the workday and a V1G EV population that had only recently plugged in and therefore had ample remaining charging capacity. The EVs were contributed by UCSD employees and visitors randomly plugging in at the PowerFlex charging stations just before the start of the trial. An aggregate signal of 15~kW to 19~kW was distributed equally amongst the 17 EVs. \revision{In addition to the new V1G EVs, the V2G population in Test~2 was replaced with a different set of Nuvve chargers to resolve a tracking/noise issue during discharge-to-grid observed in Test~1, and expanded to include an additional charger.} \subsubsection{Computing Setup} \revision{Test~2 featured a fully distributed architecture that consisted of a network of Raspberry Pi's that asynchronously communicated with each other via an ethernet switch.} In addition, a modified synchronization technique was implemented in the software, which improved the fidelity and robustness of message-passing. This upgraded message-passing framework and synchronization technique for both software and hardware resulted in significantly faster communication between nodes. \subsubsection{\revision{Actuation}} \revision{The order of AHU actuation was modified in Test~2 to allow for device settling time and prevent interference. In particular, in Tests~0 and 1, individual AHUs were ordered and actuated using a protocol that was not cognizant of settling times or building groupings, while the protocol was revised in Test~2 to systematically command the entire population of AHUs in a manner which maximized the time between consecutive actuations of an individual unit.} Test~2 also featured a two-stage approach to actuation that was a result of the DER tracking behavior in Test~1. Some DERs, such as the BESS, V1G EVs and V2G EVs, tracked quickly and accurately, whereas others, such as the AHUs, tracked poorly. The overall tracking performance in Test~2 was improved by using ``well-behaved'' DERs to compensate for AHU tracking errors, by incorporating the error signal from actuating the AHUs in Stage 1 into the cumulative target signal for the BESS, V1G EVs and V2G EVs in Stage 2. Although synchronous actuation of all participating DERs is preferred in practice, the two-stage approach highlights the significance of systematic characterization of DERs in minimizing~ACE.
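One plausible reading of the two-stage scheme in code: the stage-2 target for the fast DERs is their own share of the reference plus whatever error the stage-1 AHUs left behind, which collapses to the reference minus the measured AHU output:

```python
import numpy as np

def stage2_target(p_ref, ahu_target, ahu_measured):
    """Cumulative stage-2 target for the BESS, V1G and V2G EVs: their share
    of the reference plus the stage-1 AHU tracking error."""
    ahu_error = np.asarray(ahu_target) - np.asarray(ahu_measured)
    fast_share = np.asarray(p_ref) - np.asarray(ahu_target)
    return fast_share + ahu_error    # equals p_ref - ahu_measured
```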
\section{\revision{Test Results}} \label{sec:results} \subsection{Distributed Optimization/Cyber-Layer Results} In Table~\ref{table:error}, we present the MSE results of our 1~s real-time Raspberry Pi distributed optimization solutions (the ``cyber layer'' of the system). \begin{table}[htb] \centering \caption{Normalized mean-squared-error of distributed solutions obtained from real-time 1~s intervals compared to centralized solver solution for Test~2 (Section \ref{error-metrics})}\label{table:error} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{DER Type} & \textbf{RC} & \textbf{PD} & \textbf{DANA} & \textbf{all} \\ \hline AHU & $0$ & $1.4\times 10^{-7}$ & $2.8\times 10^{-9}$ & $4.6\times 10^{-8}$ \\ \hline V1G EVs & $0$ & $7.0 \times 10^{-8}$ & $1.7\times 10^{-9}$ & $2.3\times 10^{-8}$ \\ \hline V2G EVs & $0$ & $6.6\times 10^{-5}$ & $5.0\times 10^{-7}$ & $2.1\times 10^{-5}$ \\ \hline BESS & $0$ & $2.0\times 10^{-6}$ & $9.1\times 10^{-8}$ & $6.5\times 10^{-7}$ \\ \hline Total & $0$ & $1.8\times 10^{-5}$ & $1.1\times 10^{-7}$ & $4.9\times 10^{-6}$ \\ \hline \end{tabular} \end{table} RC converged to the exact solution in all instances. This is unsurprising, as the RC problem formulation does not account for individual DER costs and thus is a much simpler problem with a closed-form solution. For PD and DANA, we obtained excellent convergence, with errors on the order of $0.001\%$ in the worst cases. In general, DANA tended to converge faster than PD \revision{in the sense that the obtained solutions were more accurate under the same fixed 1~s computation time}. For our application with 1~s real-time windows, accuracy and convergence differences did not affect the physical-layer results in any tangible way, but applications with more stringent accuracy or speed requirements may benefit from using a faster algorithm like DANA. The differences between DER populations can be largely attributed to the faster time scale of the V2G EVs (and to a lesser extent the BESS), see Table~\ref{table:devices}. Since the V2G EVs were responsible for the high-frequency component of $\ensuremath{\subscr{P}{ref}}$, the solver was required to converge to new solutions at every time step, which induced more error compared to the slow V1G EVs and AHUs with relatively static solutions. \subsection{Physical-Layer Test Results}\label{ssec:phys-results} We now present the results of the tracking performance pertaining to the physical layer of the experiment. We provide only some selected plots for Test~0 and Test~1 in Fig.~\ref{fig:trial0trial1}, and a complete set of plots for each Test~2 DER population in Fig.~\ref{fig:trial2}. The error and tracking delay data defined in Section \ref{error-metrics} are given in Table~\ref{table:phys-error} for Test~1 and Test~2. Data for Test~0 is omitted due to its preliminary nature. The optimal shift described in Section \ref{error-metrics} is applied to each time series, and hence some areas in the plots may appear as if the provided signal anticipated the target.
\begin{figure}[hbt!] \centering \includegraphics[width=\linewidth]{trial0_trial1_2.eps} \caption{\revision{Selected plots from Tests~0 and~1.} \textbf{Top:} AHU response in Test~0. \revision{Note the poor tracking and spikes in the measured response.} \textbf{Middle:} V2G response in Test~1. \revision{Note the inaccuracy in tracking during discharge-to-grid phases.} \textbf{Bottom:} Total response in Test~1. \revision{Note the large-magnitude, low-frequency features demonstrating some broad tracking behavior, but overall poor performance.} }\label{fig:trial0trial1} \vspace*{-2ex} \end{figure} \begin{figure}[hbt!] \centering \includegraphics[width=\linewidth]{trial2_2.eps} \caption{\revision{Test~2 results.} From \textbf{top to bottom}, AHU, V2G EVs, V1G EVs, BESS, and total responses. \revision{Note the substantially improved AHU, V2G, and total tracking performance compared to Figure~\ref{fig:trial0trial1}.}}\label{fig:trial2} \vspace*{-2ex} \end{figure} Signal tracking accuracy in Test~0 was generally poor despite the small number of DERs employed, largely due to inexperience in actuating the AHUs and V1Gs. In particular, Fig.~\ref{fig:trial0trial1} reveals some oscillations in the AHU response. It is overall difficult to determine whether even the large-feature, low-frequency components of the signal were tracked. Further, data gathering for the V1Gs and AHUs was done via noisy and unreliable building ION meters, which motivated the need for the outlier treatment (Section~\ref{power-measurements}) in Tests~1 and~2, and resulted in the smoother and better tracking signal in the top plot of Fig.~\ref{fig:trial2}. Test~1 yielded a 111\% RMSE for the AHUs. We speculate that the small 4~s delay \revision{in Test~1} is not representative of the actual AHU delay, due to random correlations dominating the time shift at this large error. This is confirmed by the much better AHU response in Test~2 with an RMSE of 12\%, where the 105~s delay is more likely to be representative of the true AHU actuation delay. Given the poor visibility into the AHU and V1G controllers explained in Section~\ref{sec:test-scenario}, it is challenging to identify the source of the poor tracking behavior. We speculate that DER metering at the building level rather than the DER level was a major source of error for the AHUs and V1Gs in Test~1. This was largely resolved in Test~2 by utilizing a different population of V1Gs with dedicated meters and by modifying the actuation scheme for the AHUs to be less susceptible to metering errors, as described in Section~\ref{test-2}. Additionally, the actuation-interface stalling for the V1G EVs, described in Section~\ref{test-1}, was dominant in Test~1, resulting in poor tracking for the V1Gs. The actuation-interface issues were resolved in Test~2 by utilizing an automated control scheme for the V1Gs, which led to significantly lower error. The BESS emerged as the star performer, achieving very accurate tracking across all tests with no delay. The V2G EVs also performed relatively well, aside from a signal overshoot issue observed during the discharge cycle in Test~1, seen in Fig.~\ref{fig:trial0trial1}. The issue was resolved in Test~2 by using V2G EV charging stations from a different manufacturer (Princeton Power), as described in Section \ref{test-2}. The V2G charging stations deployed for these tests were pre-commercial or early commercial models that had a few operating issues, such as the overshoot issue during Test~1.
The inability of the AHUs to respond to steep, short ramps (Fig.~\ref{fig:trial2}) could be due to slow start-up sequences programmed into the building automation controllers to increase device longevity, or due to transients associated with driving their AC induction electric motors. Tackling this would require dynamic models and parameter identification of the signal response and delay. With the new V1G EV population in Test~2, the tracking delay was reduced from 40~s to 10~s and the tracking accuracy improved significantly. The 1~kW bias seen in Fig.~\ref{fig:trial2} is likely due to rounding errors arising from the inability of the PowerFlex charging stations to accept non-integer setpoints. The superior performance of the BESS and V2Gs motivated the two-stage actuation scheme described in Section~\ref{test-2}, which contributed to reducing the total RMSE from 50\% in Test~1 to 10\% in Test~2 (compare the bottom plots of Figs.~\ref{fig:trial0trial1} and~\ref{fig:trial2}). The two-stage approach allows a sufficiently large proportion of accurately tracking DERs to compensate for the errors of the first stage, where tracking is worse. In this way, poorly-tracking DERs, such as AHUs, can still contribute by loosely tracking some large-feature, low-frequency components of the target signal. The low-frequency contribution reduces the required total capacity of the strongly-performing DERs in the second stage, leading to more fine-tuned signal tracking in aggregation. Some recommended rules of thumb for the two-stage approach are: (i) the total capacity of the first-stage DERs is less than or equal to the total capacity of the second-stage DERs; (ii) the DERs in the first stage are capable of tracking with $<$~50\% RMSE; (iii) the DER cost functions are such that deviation from the baseline is cheaper for first-stage DERs than for second-stage DERs. Rule (iii) allocates a significant portion of the target signal initially to the first-stage DERs, freeing up DER capacity in the second stage for error compensation. \begin{table}[htb] \caption{\textbf{Left:} Relative root mean-squared-error of tracking error by DER type. \textbf{Right:} Delay (optimal time-shift) of DER responses in seconds.}\label{table:phys-error} \centering \begin{tabular}{|c|c|c|} \hline \textbf{DER Type} & \textbf{Test~1} & \textbf{Test~2} \\ \hline AHU & 1.11 & 0.12\\ \hline V1G EVs & 0.68 & 0.077 \\ \hline V2G EVs & 0.30 & 0.060 \\ \hline BESS & 0.054 & 0.018 \\ \hline Total & 0.50 & 0.097 \\ \hline \end{tabular} \qquad \begin{tabular}{|c|c|c|} \hline \textbf{DER Type} & \textbf{Test~1} & \textbf{Test~2} \\ \hline AHU & 4 & 105 \\ \hline V1G EVs & 40 & 10 \\ \hline V2G EVs & 5 & 3 \\ \hline BESS & 0 & 0 \\ \hline Total & N/A & N/A \\ \hline \end{tabular} \end{table} \subsection{Economic Benefit Analysis} Here, we evaluate the economic benefit of the proposed test system, which is vital for wider-scale adoption of DERs as a frequency regulation resource in real electricity markets. To this end, we take an approach similar to~\cite{YL-PB-SM-TM:15} to first demonstrate that the testbed is eligible to participate in the PJM ancillary services market. Following the PJM Manual 12~\cite{PJM:20} (Section~\ref{error-metrics}), we compute a Correlation Score $S_c$ = 0.98, a Delay Score $S_d$ = 0.65, and a Precision Score $S_p$ = 0.91 from the data for Test~2, and obtain a Performance Score $S = 0.85\geq 0.75$, which confirms eligibility to participate in the PJM ancillary services market.
Next, we compute the estimated annual revenue assuming that the resources are available throughout the day. \revision{Using PJM's ancillary service market data\footnote{\url{https://dataminer2.pjm.com/feed/reg_prices/definition}} with our total (active) DER capacity of 184~kW and performance score of 0.85, the capability and performance credits for this population of resources (cf.~\cite[Section 4]{PJM:19}) would respectively be \$135 and \$11 for July 9, 2020. Extrapolating these daily credits over a year, $(\$135 + \$11)\times 365$, gives an estimated total annual revenue of \$53,290.} Note that the 184~kW DER capacity employed in this work represents less than 5\% of the total DER capacity and less than 0.5\% of the total capacity of the UCSD microgrid, cf.~\cite{BW-JD-DW-JK-NB-WT-CR:13}. As such, the revenue would increase significantly if more microgrid resources were utilized for regulation, even with reduced availability. \section{Conclusions} We have presented one of the first real-world demonstrations of secondary frequency response in a distribution grid using up to 176 heterogeneous DERs. The DERs include AHUs, V1G and V2G EVs, a BESS, and passive building loads and PV generators. The computation setup utilizes state-of-the-art distributed algorithms to find the solution of a power allocation problem. We show that the real-time distributed solutions are close to the true centralized solution in an MSE sense. Tests with real, controllable DERs at power closely track the given active-power reference signal in aggregation. \revision{Further, our economic benefit analysis shows a potential annual revenue of \revision{\$53K} for the chosen DER population. These tests highlight the importance of dedicated and noise-free measurement sensors and of a well-understood and reliable DER control interface for precise signal tracking.} \revision{Extensions of this work are ongoing under DERConnect\footnote{\url{https://sites.google.com/ucsd.edu/derconnect/home}}, a new project at UCSD that aims to develop a testbed consisting of 2500 DERs that allows for online implementation of various distributed algorithms.} As is already recognized by the power systems community and federal funding agencies such as ARPA-e and the National Science Foundation, large-scale power-in-the-loop testing is needed for transitioning distributed technologies to real distribution systems. We hope that this work spurs further testing and, ultimately, widespread adoption of coordinated resource control algorithms by relevant players in industry. \appendices \section{Distributed Coordination Algorithms}\label{sec:appendix} In this section we describe the algorithms used in our distributed computing platform to solve~\eqref{eq:opt}. \emph{Ratio-Consensus (RC)}: The ratio-consensus algorithm of~\cite{ADDG-CNH-NHV:12} computes equitable contributions from all DERs without DER-specific cost functions (i.e., with constant DER costs).
The ratio-consensus algorithm for providing $\ensuremath{\subscr{P}{ref}}$ is given by \begin{alignat*}{2} y_i[k+1] &= \sum_{j\in\N_i} \frac{1}{\vert \N_i\vert}y_j[k], & z_i[k+1] &= \sum_{j\in\N_i} \frac{1}{\vert \N_i\vert}z_j[k], \\ y_i[0] &= \begin{cases} \frac{\ensuremath{\subscr{P}{ref}}}{\vert\I\vert} - \ensuremath{\underline{p}}_i, & i\in\I, \\ -\ensuremath{\underline{p}}_i, & i\notin\I, \end{cases} & z_i[0] &= \ensuremath{\overline{p}}_i - \ensuremath{\underline{p}}_i, \end{alignat*} where $k$ is the iteration number, $y_i$ and $z_i$ are two auxiliary variables maintained by each agent, $\N_i$ denotes the neighboring DERs of DER $i$, and $\ensuremath{\underline{p}}_i$ and $\ensuremath{\overline{p}}_i$ are the minimum and maximum power levels for DER $i$ from the problem formulation in Section~\ref{optimization-statements}. $\I$ denotes the subset of DERs which know the value of the reference signal. One can see that \begin{equation*} \begin{aligned} p_i^\star &= \ensuremath{\underline{p}}_i + \underset{k\rightarrow\infty}{\lim} \frac{y_i[k]}{z_i[k]}(\ensuremath{\overline{p}}_i - \ensuremath{\underline{p}}_i) \\ &= \ensuremath{\underline{p}}_i + \frac{\ensuremath{\subscr{P}{ref}} - \sum_i \ensuremath{\underline{p}}_i}{\sum_i (\ensuremath{\overline{p}}_i - \ensuremath{\underline{p}}_i)}(\ensuremath{\overline{p}}_i - \ensuremath{\underline{p}}_i), \end{aligned} \end{equation*} where $p_i^\star$ is then the power assignment for DER $i$. \emph{Primal-Dual (PD)}: Both these dynamics and DANA (described next) take into account the cost functions of the DER types when computing the power setpoints, i.e., the $f_i$ are nonconstant. These functions are modeled as quadratics, which is a common choice in generator dispatch~\cite{AW-BW-GS:12}. The dynamics are based on the discretization of the primal-dual dynamics~\cite{AC-BG-JC:17-sicon} for the augmented Lagrangian of the equivalent reformulated problem, see~\cite{AC-JC:16-allerton}, and have a linear rate of convergence to the optimizer. The algorithm is given by \begin{equation*} \begin{aligned} \begin{bmatrix} \dot{p}_i \\ \dot{y}_i \\ \dot{\lambda}_i \end{bmatrix} = \begin{bmatrix} -\left(f_i'(p_i)+ \lambda_i + p_i \sum_{j\in\N_i} L_{ij}y_j - \ensuremath{\subscr{P}{ref}}/n \right) \\ -\left( \sum_{j\in\N_i} L_{ij} (\lambda_j + p_j - \ensuremath{\subscr{P}{ref}}/n) + \sum_{j\in\N_i^2} L_{ij}^2 y_j \right) \\ p_i + \sum_{j\in\N_i} L_{ij} y_j - \ensuremath{\subscr{P}{ref}}/n \end{bmatrix}, \end{aligned} \end{equation*} where $L$ is the Laplacian matrix of the communication graph (see~\cite{FB-JC-SM:09}), $y_i$ is an auxiliary variable, and $\lambda_i$ is the dual variable associated with agent $i$. The update step is followed by a projection of the primal variable $p_i$ onto the box-constrained local feasible set. These dynamics converge from any set of initial conditions. Since the algorithm evolves in continuous time, we use an Euler discretization with a fixed step size to implement it in discrete time.
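The RC iteration is simple enough to simulate directly. The sketch below runs it on a 9-node undirected ring, assuming $\N_i$ includes node $i$ itself (so the uniform weights $1/\vert\N_i\vert = 1/3$ are doubly stochastic and the ratio $y_i/z_i$ converges at every node); the capacities and reference value are made up:

```python
import numpy as np

def ratio_consensus(p_lo, p_hi, p_ref, informed, iters=300):
    """Ratio-consensus on a ring: y_i/z_i -> (p_ref - sum p_lo)/sum(p_hi - p_lo)."""
    y = np.where(informed, p_ref / informed.sum(), 0.0) - p_lo
    z = p_hi - p_lo
    for _ in range(iters):
        y = (np.roll(y, 1) + y + np.roll(y, -1)) / 3.0   # average over {i-1, i, i+1}
        z = (np.roll(z, 1) + z + np.roll(z, -1)) / 3.0
    return p_lo + (y / z) * (p_hi - p_lo)

p_lo, p_hi = np.full(9, -2.0), np.full(9, 2.0)
informed = np.zeros(9, dtype=bool); informed[0] = True   # only C1 knows p_ref
p = ratio_consensus(p_lo, p_hi, p_ref=6.3, informed=informed)
print(p.sum())   # -> approximately 6.3
```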
\emph{Distributed Approximate Newton Algorithm (DANA)}: The DANA algorithm of~\cite{TA-CYC-SM:18-auto} has an improved rate of convergence compared to PD. It solves the equivalent reformulated problem \begin{equation}\label{eq:DANA-opt} \begin{aligned} \underset{z\in\ensuremath{\mathbb{R}}^n}{\text{min}} \ &f(p^0 + Lz) = \sum_{i=1}^n f_i(p_i^0 + L_i z), \\ \text{subject to} \ &\ensuremath{\underline{p}} - p^0 - Lz \leq \ensuremath{\mathbf{0}}_n, \\ &p^0 + Lz - \ensuremath{\overline{p}} \leq \ensuremath{\mathbf{0}}_n, \end{aligned} \end{equation} where $p^0$ is a vector of the initial power levels of all the DERs with $\sum_i p_i^0 = \ensuremath{\subscr{P}{ref}}$, and $z$ is the new optimization variable. The continuous-time dynamics are given by \begin{equation*} \begin{aligned} \dot{z} &= -A_q\nabla_z \Lagr (z,\lambda), \\ \dot{\lambda} &= [\nabla_\lambda \Lagr (z,\lambda)]^+_\lambda , \end{aligned} \end{equation*} where $\Lagr$ is the Lagrangian of~\eqref{eq:DANA-opt} and $A_q$ is a positive definite weighting on the gradient direction which provides distributed second-order information. For brevity, we do not provide the full details of the algorithm here; they can be found in~\cite{TA-CYC-SM:18-auto}. The cost functions are again taken to be quadratic with strictly positive leading coefficients. \section*{Acknowledgements} We would like to thank numerous people in the UCSD community and beyond for their generous contributions of time and resources to enable such an ambitious project to come together. We extend thanks to: (i) Aaron Ma and Jia (Jimmy) Qiu for assisting with hardware setup and software development for the distributed computation systems; (ii) Kevin Norris for coordinating the fleet vehicles; (iii) Abdulkarim Alamad for overseeing V1G drivers in Test~2; (iv) Kelsey Johnson for managing the Nuvve contributions; (v) Ted Lee, Patrick Kelly, and Steven Low for managing the PowerFlex contribution; (vi) Marco Arciniega, Martin Greenawalt, James Gunn, Josh Kavanagh, Jennifer Rodgers, Patricia Roman and Lashon Smith from UCSD parking for reserving EV charging station parking spaces; (vii) Charles Bryant, Harley Crace, John Denhart, Nirav Desai, John Dilliott, Mark Gaus, Martin Greenawalt, Gerald Hernandez, Brandon Hirsch, Mark Jurgens, Josh Kavanagh, Jose Moret, Chuck Morgan, Curt Lutz, Cynthia Wade, Raymond Wampler and Ed Webb for contributing their EVs in Test~1; (viii) Adrian Armenta, Adrian Gutierrez and Minghua Ong who helped with ChargePoint manual control; (ix) Bob Caldwell (Centaurus Prime), Gregory Collins, Charles Bryant, and Robert Austin for programming and enabling the AHU control; (x) Gary Matthews and John Dilliott for permitting the experimentation on ``live'' buildings and vehicles; and (xi) Antoni Tong and Cristian Cortes-Aguirre for supplying the BESS. Finally, we would like to extend a sincere thanks to the ARPA-e NODES program for its financial support and to its leadership, including Sonja Glavaski, Mario Garcia-Sanz, and Mirjana Marden, for their vision and push for the development of large-scale power-in-the-loop testing environments. \bibliographystyle{IEEEtran}
\section*{Introduction}\label{S:one} It was revealed by the works of Murthy \cite{Mur} and Griffin \cite{Gri} that the Curry-Howard isomorphism, which establishes a correspondence between natural deduction style proofs in intuitionistic logic and terms of the typed $\lambda$-calculus, can be extended to the case of classical logic as well. Since their discovery, many calculi have appeared aiming to give an encoding of proofs formulated either in classical natural deduction or in classical sequent calculus. The $\l \mu$-calculus presented by Parigot in \cite{Par5} finds its origin in the so-called Free Deduction (FD). Parigot resolves the deterministic nature of intuitionistic natural deduction: unlike in the case of intuitionistic natural deduction, when eliminating an instance of a cut in FD, there can be several choices for picking out the subdeductions to be transformed. By introducing variables of a new kind, the so-called $\mu$-variables, Parigot distinguishes formulas that are not active at the moment but to which the current continuation can be passed over. Besides the usual $\beta$-reduction, Parigot introduces a new reduction rule called the $\mu$-rule, corresponding to the structural cut eliminations made necessary by the new forms of cuts arising from the rule connected with the $\mu$-variables. The result is a calculus, the $\l \mu$-calculus (Parigot \cite{Par1}), which is in correspondence with classical natural deduction. The $\mu'$-rule is the symmetric counterpart of the $\mu$-rule. It was introduced by Parigot \cite{Par2} with the intention of keeping the unicity of representation of data (Nour \cite{Nou1}); the price was, however, that confluence is lost. In the presence of other simplification rules besides $\mu$ and $\mu'$, even the strong normalization property is lost (Batty\'anyi \cite{Batt}). Historically, the first calculus reflecting the symmetry of classical propositional logic was the $\ls$-calculus of Berardi and Barbanera \cite{Ber-Bar}, establishing a formulae-as-types connection with natural deduction in classical logic. The calculus $\ls$ uses an involutive negation which is not defined as $A\rightarrow \bot$. There are negated and non-negated atomic types, and the main connective is not the arrow but the classical $\wedge$ and $\vee$. Berardi and Barbanera make use of the natural symmetry of classical logic expressed by the de Morgan laws in defining negated types. In their paper, Berardi and Barbanera proved that $\ls$ is strongly normalizing with a symmetric version of the Tait-Girard reducibility method (Tait \cite{Tai}). In this paper, leaning on the combinatorial proof applied by David and Nour in \cite{Dav-Nou4}, we prove that $\ls$ is strongly normalizing. The novelty of our proof is the application of so-called zoom-in sequences of redexes, which was inspired by the work in Raamsdonk et al. \cite{Sor}. We prove strong normalizability by verifying that the set of strongly normalizing terms is closed under substitution. From the assumption that $U[x:=V]$ is not strongly normalizing while $U$ and $V$ are strongly normalizing, we can identify a subterm $U'$ of a reduct of $U$ such that $U'[x:=V]$ is also not strongly normalizing. The reduction sequence leading to $U'$ is a so-called zoom-in sequence of redexes: each subsequent element is a subterm of the one-step reduct of the preceding one. We prove that zoom-in sequences have useful invariant properties, which makes it relatively easy for us to set the stage for the main theorem.
Due to its intrinsic symmetry in dealing with the typing relation, the $\ls$-calculus also proves to be very close to the calculus named classical combinatory logic (CCL) by Nour. Nour \cite{Nou} defined a calculus of combinators which is equivalent to the full classical propositional logic in natural deduction style. A translation is then given in both directions between $\ls$ and CCL. Curien and Herbelin introduced the $\overline{\lambda}\mu\tilde{\mu}$-calculus (Curien and Herbelin \cite{Cur-Her}), which established a correspondence, via the Curry-Howard isomorphism, between classical Gentzen-style sequent calculus and a term calculus. The $\overline{\lambda}\mu\tilde{\mu}$-calculus possesses a rather strong symmetry: it has right-hand side and left-hand side terms (also referred to as environments). The strong normalization of the calculus was proved by Polonovski \cite{Poi}, and a proof formalizable in first-order Peano arithmetic was found by David and Nour \cite{Dav-Nou3}. As to the connection between the $\l \mu$- and the $\overline{\lambda}\mu\tilde{\mu}$-calculus, Curien and Herbelin \cite{Cur-Her} defined a translation both for the call-by-value and the call-by-name part of the $\lambda \mu$-calculus into the $\overline{\lambda}\mu\tilde{\mu}$-calculus. Rocheteau \cite{Roc} completed this work by defining simulations between the two calculi in both directions. In this paper we define the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus, which is the $\overline{\lambda}\mu\tilde{\mu}$-calculus extended with negation, and we describe translations between the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus and the $\ls$-calculus. As a consequence, we obtain that, if one of the calculi is strongly normalizable, then the other one necessarily admits this property. The proof applied in the paper is an adaptation of that of David and Nour \cite{Dav-Nou3}. David and Nour \cite{Dav-Nou3} gave arithmetic proofs, that is, proofs formalizable in first-order Peano arithmetic, for the strong normalizability of the $\overline{\lambda}\mu\tilde{\mu}$- and Parigot's symmetric $\l \mu$-calculus. It is demonstrated that the set of strongly normalizable terms is closed under substitution. The goal is achieved by implicitly applying an alternating substitution to find out which part of the substitution would be responsible for the failure of strong normalization, provided the term into which we substitute and the terms being substituted in are strongly normalizing. In this paper we reach the same goal by identifying a minimal non strongly normalizing sequence of redexes provided an infinite reduction sequence is given. We call this sequence of redexes a minimal zoom-in reduction sequence. The idea of zoom-in sequences was inspired by Raamsdonk et al. \cite{Sor}, where perpetual reduction strategies are defined in order to locate the minimal non strongly normalizing subterms of the elements of an infinite reduction sequence. Again, alternating substitutions are defined inductively starting from two sets of terms, and it is proven that zoom-in reduction sequences do not lead out of these substitutions. With this in hand, the method of David and Nour \cite{Dav-Nou3} can be applied. We prove the strong normalization of the \smash{$\ls$}-calculus, though, with some slight modifications, our proof works in the case of the $\overline{\lambda}\mu\tilde{\mu}$-calculus as well (Batty\'anyi \cite{Batt}). However, instead of repeating the proof here, we give a translation of the \smash{$\ls$}-calculus into the $\overline{\lambda}\mu\tilde{\mu}$-calculus, and vice versa. In fact, to make the connection more visible, we define the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus, which is the $\overline{\lambda}\mu\tilde{\mu}$-calculus extended with terms expressing negated types. Hence, we also obtain a new proof of strong normalization of the $\overline{\lambda}\mu\tilde{\mu}$-calculus.
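Continuing the illustrative Haskell encoding sketched above, the notion of strong normalization itself can be made concrete: the compatible closure of the root reduction is finitely branching, so, by K\"onig's lemma, a term is strongly normalizing exactly when the depth of its reduction tree is bounded. The helper below is only a sketch of such a bounded check (it reuses the hypothetical \texttt{Term} and \texttt{step} from the previous listing) and plays no role in the proofs.
\begin{verbatim}
-- All one-step reducts of a term: a root step, if any, plus the
-- reductions inside the immediate subterms (the compatible closure
-- of `step`).
allSteps :: Term -> [Term]
allSteps t = maybe [] (: []) (step t) ++ inner t
  where
    inner (Pair p q) = [Pair p' q | p' <- allSteps p]
                    ++ [Pair p q' | q' <- allSteps q]
    inner (Inj i p)  = [Inj i p' | p' <- allSteps p]
    inner (Lam x p)  = [Lam x p' | p' <- allSteps p]
    inner (Star p q) = [Star p' q | p' <- allSteps p]
                    ++ [Star p q' | q' <- allSteps q]
    inner _          = []

-- True iff every reduction sequence from t has length at most d.
-- A term is strongly normalizing iff this holds for some d.
snUpTo :: Int -> Term -> Bool
snUpTo 0 t = null (allSteps t)
snUpTo d t = all (snUpTo (d - 1)) (allSteps t)
\end{verbatim}
For typable terms, Theorem \ref{SN1} below guarantees that such a bound always exists.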
The paper is organized as follows. In the first section we introduce the \smash{$\ls$}-calculus of Berardi and Barbanera, and, as the first step towards strong normalization, prove that the permutation rules can be postponed. In the next section we show that the $\beta$, $\beta^\bot$, $\pi$ and $\pi^\bot$ rules together are strongly normalizing. Section \ref{section3} introduces the $\overline{\lambda}\mu\tilde{\mu}$-calculus defined by Curien and Herbelin, and we augment the calculus with negation in order to make the comparison of the \smash{$\ls$}- and the $\overline{\lambda}\mu\tilde{\mu}$-calculi simpler. Section \ref{section4} provides translations between the \smash{$\ls$}- and the $\overline{\lambda}\mu\tilde{\mu}^*$-calculi such that the strong normalization of one of the calculi implies that of the other. The last section contains conclusions with regard to the results of the paper. \section{The $\ls$-calculus} The ${\lambda }^{{\tiny\textit{Sym}}}$-calculus was introduced by Berardi and Barbanera \cite{Ber-Bar}. It is organized entirely around the duality in classical logic. It has a negation ``built-in'': the negation of $A$ is not defined as $A\rightarrow\bot$. Rather, each type is related to its natural negated type by the notion of duality introduced by negation in classical logic. In fact, Berardi and Barbanera defined a calculus equivalent to first-order Peano arithmetic. However, we only consider here its propositional part, denoted by $\ls$, since all the other calculi treated in this work are concerned with propositional logic. \begin{defi}\label{int:typels} The set of types is built from two sets of base types $\mathcal{A}=\{ a,b,\ldots \}$ (atomic types) and ${\mathcal A}^{\bot} =\{ a^{\bot},b^{\bot},\ldots \}$ (negated atomic types). \begin{enumerate} \item The set of m-types is defined by the following grammar $$A := \alpha \mid {\alpha}^{\bot} \mid A\wedge A\mid A\vee A$$ where $\alpha $ ranges over $\mathcal{A}$ and ${\alpha}^{\bot}$ over ${\mathcal A}^{\bot}$. \item The set of types is defined by the following grammar $$C := A\mid \bot.$$ \item We define the negation of an m-type as follows \begin{tabular}{ l l l l} $(\alpha )^{\bot}=\alpha ^\bot \;$ & $(\alpha^{\bot})^{\bot}=\alpha\;$ & $(A\wedge B)^{\bot}=A^{\bot}\vee B^{\bot}\;$ & $(A\vee B)^{\bot}=A^{\bot}\wedge B^{\bot}.$ \end{tabular}\\ In this way we get an involutive negation, i.e. for every $m$-type $A$, $(A^{\bot})^{\bot}=A$. \newpage \item The complexity of a type is defined inductively as follows. \begin{itemize} \item[] $cxty(A)=0$, if $A \in \mathcal{A} \cup \mathcal{A}^{\bot} \cup \{ \bot \}$. \item[] $cxty(A_{1}\wedge A_{2})=cxty(A_{1}\vee A_{2})=cxty(A_{1})+cxty(A_{2})+1$. \end{itemize} Then, for every $m$-type $A$, $cxty(A)=cxty(A^{\bot})$. \end{enumerate} \end{defi} \begin{defi}\hfill \begin{enumerate} \item We denote by $Var$ the set of term-variables. The set of terms ${\mathcal T}$ of the $\ls$-calculus together with their typing rules are defined as follows.
In the definition below the type of a variable must be an $m$-type and $\Gamma$ denotes a context (the set of declarations of variables). $$\hspace{.7cm} var\;\;\; \displaystyle\frac{}{\Gamma, x:A\;\vdash\;x:A}\vspace{.1cm}$$ \begin{tabular}{l l} $\langle \: ,\rangle\;\;\; \displaystyle\frac {\Gamma\;\vdash\;P_{1}:A_1\;\;\;\;\; \Gamma\;\vdash\;P_{2}:A_2}{\Gamma\;\vdash\;\langle P_{1},P_{2}\rangle :A_{1}\wedge A_{2}}$ & $\;\;\; \sigma_{i}\;\;\; \displaystyle\frac {\Gamma\;\vdash\;P_{i}:A_{i}}{\Gamma\;\vdash\;{\sigma_{i}}\; (P_{i}):A_{1}\vee A_{2}}\;\;i\in\{1,2\}$\vspace{.4cm}\\ $\lambda \;\;\; \displaystyle\frac {\Gamma, x:A\;\vdash\;P:\bot}{\Gamma\;\vdash\;\lambda xP:A^{\bot}}$ & $\;\;\;\; \star\;\;\; \displaystyle\frac {\Gamma\;\vdash\;P_{1}: A^{\bot}\;\;\;\;\;\Gamma\;\vdash\;P_{2}:A}{\Gamma\;\vdash\;(P_{1}\star P_{2}):\bot}$ \end{tabular} \item We say that $M$ has type $A$ if there is a context $\Gamma$ such that $\Gamma\;\vdash\;M:A$. For a given element $\Gamma\;\vdash\;M:A$ of the typability relation, we regard the type $A$ as fixed. \item As usual, we denote by $Fv(M)$ the set of the free variables of the term $M$. \item The complexity of a term of ${\mathcal T}$ is defined as follows. \begin{itemize} \item[] $cxty(x)=0$, \item[] $cxty(\langle P_1 , P_2\rangle)= cxty((P_1\star P_2)) = cxty(P_1)+cxty(P_2)$, \item[] $cxty(\lambda xP)= cxty(\sigma_i(P)) = cxty(P)+1$, for $i\in \{ 1,2\}$. \end{itemize} \end{enumerate} \end{defi} \begin{defi}\hfill \begin{enumerate} \item The reduction rules are enumerated below. \begin{tabular}{ l l l l l} $(\beta)$ & $(\lambda xP\star Q)$ & $\rightarrow_{\beta}$ & $P[x:=Q]$ & $\;$ \\ $(\beta^{\bot})$ & $(Q \star \lambda xP)$ & $\rightarrow_{\beta^{\bot}}$ & $P[x:=Q]$ & $\;$ \\ $(\eta)$ & $\lambda x(P\star x)$ & $\rightarrow_{\eta}$ & $P$ & if $x\notin Fv(P)$\\ $(\eta^{\bot})$ & $\lambda x(x\star P)$ & $\rightarrow_{\eta^{\bot}}$ & $P$ & if $x\notin Fv(P)$\\ $(\pi)$ & $(\langle P_{1},P_{2}\rangle \star \sigma_{i}(Q_{i}))$ & $\rightarrow_{\pi}$ & $(P_{i}\star Q_{i})$ & $i \in \{1,2\}$\\ $(\pi^{\bot})$ & $(\sigma_{i}(Q_{i})\star \langle P_{1},P_{2}\rangle)$ & $\rightarrow_{\pi^{\bot}}$ & $(Q_{i}\star P_{i})$ & $i \in \{1,2\}$\\ $(Triv)$ & $E[P]$ & $\rightarrow_{Triv}$ & $P$ & $(*)$\\ \end{tabular}\medskip \noindent{\rm (*)} where $E[-]$ is a context of type $\bot$ with $E[-]\neq[-]$, $P$ has type $\bot$, and $E[-]$ does not bind any free variable of $P$. \item Let us take the union of the above rules. Let $\rightarrow$ stand for the compatible closure of this union and, as usual, $\rightarrow^*$ denote the reflexive and transitive closure of $\rightarrow$. The notions of reduction sequence, normal form and normalization are defined with respect to $\rightarrow$. \item Let $M,N$ be terms. Assume $M \rightarrow^* N$. The length (i.e. the number of steps) of the reduction $\rightarrow^*$ is denoted by $lg(M \rightarrow^* N)$. \end{enumerate} \end{defi} \noindent We enumerate below some theoretical properties of the $\ls$-calculus following Berardi and Barbanera \cite{Ber-Bar} and de Groote \cite{de Gro2}.\newpage \begin{prop}[Type-preservation property] If $\Gamma\;\vdash\; P:A$ and $P \rightarrow^* Q$, then $\Gamma \;\vdash\; Q:A$. \end{prop} \begin{prop}[Subformula property] If $\Pi$ is a derivation of $\Gamma \;\vdash\; P:A$ and $P$ is in normal form, then every type occurring in $\Pi$ is a subformula of a type occurring in $\Gamma$, or a subformula of $A$. \end{prop}
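Before turning to strong normalization, here is a small worked example, for illustration only, of how the main reduction rules interact; we assume the terms are typed so that each $\star$ is well-formed: \[ (\langle P_{1},P_{2}\rangle \star \sigma_{2}(\lambda xQ)) \;\rightarrow_{\pi}\; (P_{2}\star \lambda xQ) \;\rightarrow_{\beta^{\bot}}\; Q[x:=P_{2}]. \] The symmetric rules $\pi^{\bot}$ and $\beta$ permit the mirror-image reduction of the same term written with its two components swapped.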
\begin{thm}[Strong normalization]\label{SN1} If $\Gamma \;\vdash\; P:A$, then $P$ is strongly normalizable, i.e. every reduction sequence starting from $P$ is finite. \end{thm} Berardi and Barbanera proved Theorem \ref{SN1} for the extension of the $\ls$-calculus equivalent to first-order Peano arithmetic. The proof of this result by Berardi and Barbanera \cite{Ber-Bar} is based on reducibility candidates, but the definition of the interpretation of a type relies on non-arithmetical fixed-point constructions. We present a syntactical and arithmetical proof of the strong normalization of the $\ls$-calculus in Section 3. The proof was inspired by a method of David and Nour \cite{Dav-Nou3}. First we establish that the permutation rules $\eta$, $\eta^\bot$ and $Triv$ can be postponed so that we can restrict our attention solely to the rules $\beta$, $\beta^\bot$, $\pi$, and $\pi^\bot$. \subsection{Permutation rules} First of all, we prove that the $\eta$- and $\eta^{\bot}$-reductions can be postponed w.r.t. $\beta$, $\beta^\bot$, $\pi$, and $\pi^\bot$. \begin{defi}\hfill \begin{enumerate} \item Let $\lambda_{\beta\pi}$-calculus denote the calculus with only the reduction rules $\rightarrow_\beta$, $\rightarrow_{\b^{\bot}}$, $\rightarrow_\pi$, and $\rightarrow_{\pi^{\bot}}$. \item Let $\rightarrow_{\beta \pi}$ stand for the union of $\rightarrow_\beta, \rightarrow_{\b^{\bot}}, \rightarrow_\pi , \rightarrow_{\pi^{\bot}}$ and let $M\rightarrow_{e}N$ denote the fact that $M\rightarrow_{\eta}N$ or $M\rightarrow_{\eta^{\bot}}N$. \item We denote by $\rightarrow_{\beta_{0}}$ (resp. by $\rightarrow_{\beta^{\bot}_{0}}$) the $\beta$-reduction $(\lambda xM\star N)\ra_\b M[x:=N]$ (resp. the $\beta^{\bot}$-reduction $(N\star \lambda xM)\ra_{\b^{\bot}} M[x:=N]$), where $x$ occurs at most once in $M$. \item We use the standard notation $\rightarrow^+$ and $\rightarrow^*$ for the transitive and reflexive, transitive closure of a reduction, respectively. \end{enumerate} \end{defi} \noindent We examine the behaviour of a $\ra_e$ rule followed by a $\ra_\b$ or a $\ra_{\b_0}$ rule in Lemmas \ref{ch3:ete} and \ref{ch3:bn}. \begin{lem}\label{ch3:ete} If $U \ra_e V \ra_\b W$, then $U\ra_\b V'\rightarrow^*_e W$ or $U\ra_{\b_0} V'\ra_\b W$ for some $V'$. \end{lem} \begin{proof} We assume that $\ra_e$ is an $\eta$-reduction; the case of an $\eta^{\bot}$-reduction is similar. The proof is by induction on $cxty(U)$. The only interesting case is $U=(U_{1}\star U_{2})$. We consider only some of the subcases. \begin{enumerate} \item $U_{1}=\lambda x(U_{3}\star x)$, with $x\notin Fv(U_3)$, and $V=(U_3\star U_2) \rightarrow_{\beta}U_4[y:=U_2]=W$, where $U_3=\lambda yU_4$. In this case $U=(\lambda x(U_{3}\star x)\star U_{2})\ra_{\b_0} (U_{3}\star U_{2})\ra_\b U_{4}[y:=U_{2}]=W$, so $\rightarrow_{\eta} \ra_\b$ turns into $\ra_{\b_0} \ra_\b$. \item $U_{1}=\lambda xU_{3}$, $U_{3}\rightarrow_{\eta} U_{4}$ and $V=(\lambda xU_4\star U_2) \rightarrow_{\beta}U_{4}[x:=U_{2}]=W$. Then $U\ra_\b V'=U_{3}[x:=U_{2}]\rightarrow_{\eta}U_{4}[x:=U_{2}]=W$. \item $U_{1}=\lambda xU_{3}$, $U_{2}\rightarrow_{\eta} U_{4}$ and $V=(\lambda xU_3\star U_4) \rightarrow_{\beta}U_{3}[x:=U_{4}]=W$. Then $U\ra_\b V'=U_{3}[x:=U_{2}]\rightarrow^*_{\eta}U_{3}[x:=U_{4}]$.\qedhere \end{enumerate} \end{proof} \begin{lem}\label{ch3:bn} If $U \ra_e V\ra_{\b_0} W$, then $U\ra_{\b_0} W$ or $U\ra_{\b_0} V'\ra_e W$ or $U\ra_{\b_0} V'\ra_{\b_0} W$ for some $V'$. \end{lem} \begin{proof} By induction on $cxty(U)$.
We assume $U=(U_1\star U_2)$ and we consider some of the more interesting cases. \begin{enumerate} \item $U_{1}=\lambda x(U_{3}\star x)$, with $x\notin Fv(U_3)$, and $V=(U_3\star U_2) \rightarrow_{\beta_0}U_4[y:=U_2]=W$, where $U_3=\lambda yU_4$. In this case $U=(\lambda x(U_{3}\star x)\star U_{2})\ra_{\b_0} (U_{3}\star U_{2})\ra_{\b_0} U_{4}[y:=U_{2}]=W$, thus $\rightarrow_{\eta} \ra_{\b_0}$ turns into $\ra_{\b_0} \ra_{\b_0}$. \item $U_{1}=\lambda xU_{3}$, $U_{3}\rightarrow_{\eta} U_{4}$ and $V=(\lambda xU_4\star U_2) \rightarrow_{\beta_0}U_{4}[x:=U_{2}]=W$. Then $U\ra_{\b_0} V'=U_{3}[x:=U_{2}]\rightarrow_{\eta}U_{4}[x:=U_{2}]=W$. \item $U_{1}=\lambda xU_{3}$, $U_{2}\rightarrow_{\eta} U_{4}$ and $V=(\lambda xU_3\star U_4) \rightarrow_{\beta_0}U_{3}[x:=U_{4}]=W$. Then $U\ra_{\b_0} V'=U_{3}[x:=U_{2}]\rightarrow_{\eta}U_{3}[x:=U_{4}]$ provided $x$ occurs in $U_3$. Otherwise $U\ra_{\b_0} U_3=W$.\qedhere \end{enumerate} \end{proof} \noindent We easily obtain the following lemma on the behaviour of several $\ra_e$ rules followed by a $\ra_\b$ or a $\ra_{\b_0}$ rule. \begin{lem}\label{ch3:bnk} If $U{\rightarrow^*_e} V\ra_{\b_0} W$, then $U{\ra_{\b_0}}^+ V'{\rightarrow^*_e} W$ for some $V'$, and \\ $lg(U{\ra_{\b_0}}^+ V'{\rightarrow^*_e} W)\leq lg(U{\rightarrow^*_e} V\ra_{\b_0} W)$. \end{lem} \begin{proof} By induction on $lg(U \rightarrow^*_e V\ra_{\b_0} W)$, using Lemma \ref{ch3:bn}. \end{proof} \begin{lem}\label{ch3:incb} If $U{\rightarrow^*_e} V\ra_\b W$, then $U{\ra_\b}^+ V'\rightarrow^*_e W$ for some $V'$. \end{lem} \begin{proof} By induction on $lg(U{\rightarrow^*_e} V\ra_\b W)$. Use Lemmas \ref{ch3:ete} and \ref{ch3:bnk}. \end{proof} \begin{lem}\label{ch3:incbb} If $U{\rightarrow^*_e} V\ra_{\b^{\bot}} W$, then $U{\ra_{\b^{\bot}}}^+ V'{\rightarrow^*_e} W$ for some $V'$. \end{lem} \begin{proof} Similar to that of the previous lemma. \end{proof} We investigate now how a $\ra_e$ rule behaves when followed by a $\ra_\pi$ or $\rightarrow_{\pi^{\bot}}$ rule. \begin{lem}\label{ch3:etp} If $U\ra_e V\ra_\pi W$ (resp. $U\ra_e V\rightarrow_{\pi^{\bot}} W$), then $U\ra_\pi V'\ra_e W$ or $U\ra_\pi W$ (resp. $U\rightarrow_{\pi^{\bot}} V'\ra_e W$ or $U\rightarrow_{\pi^{\bot}} W$) for some $V'$. \end{lem} \begin{proof} Observe that in case of $U\ra_e V\ra_\pi W$ the following possibilities can occur: either $U=\lambda x(V\star x)$ and $V\ra_\pi W$ or $U=(\langle P_1,P_2\rangle \star \sigma_i(Q))$ and $V=(\langle P_1',P_2'\rangle \star \sigma_i(Q'))$, where exactly one of $P_i\ra_e P_i'$, $Q\ra_e Q'$ holds, and the other two terms are left unchanged. From this, the statement easily follows. \end{proof} \begin{lem}\label{ch3:inc} If $U\rightarrow^*_e V\rightarrow_{\beta \pi} W$, then $U\rightarrow^{+}_{\beta \pi} V'\rightarrow_e W$ for some $V'$. \end{lem} \begin{proof} By Lemmas \ref{ch3:incb}, \ref{ch3:incbb} and \ref{ch3:etp}. \end{proof} \begin{lem} \label{ch3:exch} If $U\rightarrow^*_e V\rightarrow^*_{\beta \pi} W$, then $U\rightarrow^+_{\beta \pi} V'\rightarrow^*_e W$ for some $V'$. \end{lem} \begin{proof} Follows from the previous lemma. \end{proof} We are now in a position to prove the main result of the section. \begin{lem}\label{ch3:sne} The $\eta$- and the $\eta^{\bot}$-reductions are strongly normalizing. \end{lem} \begin{proof} Each $\eta$- and $\eta^{\bot}$-reduction step performed on $M$ strictly decreases the complexity of $M$, hence every such reduction sequence is finite.
\end{proof} \begin{defi}\hfill \begin{enumerate} \item Let $\lambda_{\beta \pi \eta}$-calculus denote the calculus obtained from the $\lambda_{\beta\pi}$-calculus by adding the $\eta$- and $\eta^{\bot}$-reductions to it. \item Let $\rightarrow_{\beta \pi \eta}$ denote the union of $\rightarrow_\beta$, $\rightarrow_{\beta^\bot}$, $\rightarrow_\pi$, $\rightarrow_{\pi^\bot}$, $\rightarrow_\eta$ and $\rightarrow_{\eta^\bot}$. \item Assume $M$ is a term strongly normalizable in the $\lambda_{\beta \pi}$-calculus. Then we denote by $\eta_{\beta \pi}(M)$ the length of the longest reduction sequence $\rightarrow^*_{\beta\pi}$ starting from $M$. \end{enumerate} \end{defi} \begin{cor}\label{ch3:b+e} If the $\lambda_{\beta\pi}$-calculus is strongly normalizing, then the $\lambda_{\beta \pi \eta}$-calculus is also strongly normalizing. \end{cor} \begin{proof} Let $M$ be a term; we prove by induction on $\eta_{\beta \pi} (M)$ that $M$ is strongly normalizable in the $\lambda_{\beta \pi \eta}$-calculus. Assume $S$ is an infinite ${\beta \pi \eta}$-reduction sequence starting from $M$. If $S$ begins with a $\rightarrow_{\beta \pi}$, then the induction hypothesis applies. In the case when $S$ contains only $\rightarrow_e$-reductions, we are done by Lemma \ref{ch3:sne}. Otherwise there is an initial segment $M\rightarrow_e^+ M'\rightarrow_{\beta \pi}N$. By Lemma \ref{ch3:inc}, we have $M\rightarrow^+_{\beta \pi}M''\rightarrow^*_{e}N$. Thus, we can apply the induction hypothesis to $M''$. \end{proof} In the rest of the section we deal with the rule $Triv$. For strong normalization, it is enough to show that $\rightarrow_{Triv}$ can be postponed w.r.t. $\rightarrow _{\beta\pi\eta}$. \begin{lem}\label{ch3:exct} If $U\rightarrow^*_{Triv} V\rightarrow_{\beta\pi\eta} W$, then $U\rightarrow _{\beta\pi\eta}^+ V'\rightarrow^*_{Triv} W$ for some $V'$. \end{lem} \begin{proof} It is enough to prove that if $U\rightarrow_{Triv} V\rightarrow_{\beta\pi\eta} W$, then $U\rightarrow _{\beta\pi\eta} V'\rightarrow_{Triv} W$ for some $V'$. Observe that if $U=E[V]\rightarrow_{Triv}V\rightarrow_{\beta\pi\eta} W$, then $V:\bot$ and $W:\bot$, from which the statement follows. \end{proof} \begin{lem}\label{ch3:tsn} The reduction $\rightarrow_{Triv}$ is strongly normalizing. \end{lem} \begin{proof} The reduction $\rightarrow_{Triv}$ on $M$ reduces the complexity of $M$. \end{proof} \begin{cor}\label{ch3:snlsfull} If the $\lambda_{\beta\pi}$-calculus is strongly normalizing, then the ${\lambda}^{\tiny{\textit{Sym}}}_{Prop}$-calculus is also strongly normalizing. \end{cor} \begin{proof} By Corollary \ref{ch3:b+e} and Lemmas \ref{ch3:exct} and \ref{ch3:tsn}. \end{proof} \section{Strong normalization of the $\lambda_{\beta\pi}$-calculus} In this section, we give an arithmetical proof for the strong normalization of the $\lambda_{\beta\pi}$-calculus. In the sequel we detail the proofs for the $\beta$- and $\pi$-reductions only; all the proofs below can be extended with the cases of the $\beta^{\bot}$- and $\pi^{\bot}$-reduction rules in a straightforward way. We intend to examine how substitution behaves with respect to strong normalizability. The first milestone towards this goal is Lemma \ref{ch3:nsnus}. Before stating the lemma, we formulate some auxiliary statements. \begin{defi}\hfill \begin{enumerate} \item Let $SN_{\beta\pi}$ denote the set of strongly normalizable terms of the $\lambda_{\beta\pi}$-calculus. \item Let $M\in SN_{\beta\pi}$; then $\eta c(M)$ stands for the pair $\langle \eta_{\beta\pi}(M) , cxty(M)\rangle$.
\end{enumerate} \end{defi} \begin{lem}\label{ch3:nsn} Let us suppose $M\in SN_{\beta\pi}$, $N\in SN_{\beta\pi}$ and $(M\star N)\notin SN_{\beta\pi}$. Then there are $P\in SN_{\beta\pi}$, $Q\in SN_{\beta\pi}$ such that $M \rightarrow^*_{\beta\pi} P$ and $N \rightarrow^*_{\beta\pi} Q$ and $(P\star Q)\notin SN_{\beta\pi}$ is a redex. \end{lem} \begin{proof} By induction on $\eta c(M)+\eta c(N)$. Assume $M\in SN_{\beta\pi}$, $N\in SN_{\beta\pi}$ and $(M\star N)\notin SN_{\beta\pi}$. When $(M\star N)\rightarrow (M'\star N)$ or $(M\star N)\rightarrow (M\star N')$, then the induction hypothesis applies. Otherwise $(M\star N)\rightarrow P\notin SN_{\beta\pi}$, and we have the result. \end{proof} \begin{defi}\hfill \begin{enumerate} \item A proper term is a term differing from a variable. \item For a type $A$, $\Sigma_A$ denotes the set of simultaneous substitutions of the form $[x_{1}:=N_{1},\ldots ,x_{k}:=N_{k}]$ where $N_i$ ($1 \leq i \leq k$) is proper and has type $A$. \item A simultaneous substitution $\sigma \in \Sigma_A$ is said to be in $SN_{\beta\pi}$, if, for every $x\in dom(\sigma )$, $\sigma (x)\in SN_{\beta\pi}$ holds. \end{enumerate} \end{defi} \begin{lem} \label{ch3:ml} Let $M,N$ be terms such that $M \rightarrow^*_{\beta\pi} N$. \begin{enumerate} \item If $N=\lambda xP$, then $M=\lambda xP_1$ with $P_1 \rightarrow^*_{\beta\pi} P$. \item If $N=\langle P , Q\rangle$, then $M=\langle P_1 , Q_1 \rangle$ with $P_1 \rightarrow^*_{\beta\pi} P$ and $Q_1 \rightarrow^*_{\beta\pi} Q$. \item If $N=\sigma_i (P)$, then $M=\sigma_i (P_1)$ with $P_1 \rightarrow^*_{\beta\pi} P$, for $i\in \{ 1,2\}$. \end{enumerate} \end{lem} \begin{proof} Straightforward. \end{proof} We remark that in the presence of the $\rightarrow_{\eta}$ and $\rightarrow_{\eta^{\bot}}$ rules the above lemma would not work. For example, $\lambda x(y\star x)\rightarrow_{\eta}y$. \begin{lem}\label{ch3:mx} If $M\in SN_{\beta\pi}$ and $x \in Var$, then $(M\star x)\in SN_{\beta\pi}$ (resp. $(x\star M)\in SN_{\beta\pi}$). \end{lem} \begin{proof} Let us suppose $M\in SN_{\beta\pi}$ and $(M\star x)\notin SN_{\beta\pi}$. By Lemma \ref{ch3:nsn}, we must have $M \rightarrow^*_{\beta\pi} \lambda y M_1\in SN_{\beta\pi}$ such that $(M\star x) \rightarrow^*_{\beta\pi} (\lambda yM_1\star x)\rightarrow_{\beta\pi} M_1[y:=x]$ and $M_1[y:=x]\notin SN_{\beta\pi}$. Being a subterm of a reduct of $M\in SN_{\beta\pi}$, we also have $M_1\in SN_{\beta\pi}$. Moreover, $M_1[y:=x]$ is obtained from $M_1$ by $\alpha$-conversion, hence $M_1[y:=x]\in SN_{\beta\pi}$, a contradiction. \end{proof} \begin{defi}\hfill \begin{enumerate} \item Let $M,N$ be terms. \begin{enumerate} \item We denote by $M \leq N$ (resp. $M < N$) the fact that $M$ is a sub-term (resp. a strict sub-term) of $N$. \item We denote by $M \prec N$ the fact that $M \leq P$ for some $N \rightarrow^+_{\beta\pi} P$ or $M < N$. We denote by $\preceq$ the reflexive closure of $\prec$. \item Let $R$ be a ${\beta\pi}$-redex. We write $M \rightarrow^R N$ if $N$ is the term $M$ after the reduction of $R$. \end{enumerate} \item Let ${\mathcal R} = [R_1,\dots ,R_n]$ where $R_i$ is a ${\beta\pi}$-redex $(1 \leq i \leq n)$. Then ${\mathcal R}$ is called zoom-in if, for every $1\leq i< n$, $R_i\rightarrow^{R_i} R_i'$ and $R_{i+1} \leq R_i'$. Moreover, ${\mathcal R}$ is minimal, if, for each $R_i=(P_i\star Q_i)$, we have $P_i$, $Q_i\in SN_{\beta\pi}$ and $(P_i\star Q_i)\notin SN_{\beta\pi}$. We write $M \rightarrow^{\mathcal R} N$, if $M \rightarrow^{R_1} ... \rightarrow^{R_n} N$. \end{enumerate} \end{defi}
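To illustrate the notion (this small example is ours and is not used later), let $P,Q,S\in {\mathcal T}$ with $x\notin Fv(P)\cup Fv(Q)$, and consider the $\beta$-redex $$R_1=(\lambda x(\langle P,Q\rangle\star \sigma_1(x))\star S).$$ Reducing $R_1$ gives $R_1\rightarrow^{R_1} R_1'=(\langle P,Q\rangle\star \sigma_1(S))$, and $R_2=R_1'$ is itself a $\pi$-redex with $R_2\leq R_1'$; hence ${\mathcal R}=[R_1,R_2]$ is a zoom-in sequence, and reducing $R_2$ yields $(P\star S)$.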
\noindent For the purpose of proving the strong normalization of the calculus, it is enough to show that the set of strongly normalizable terms is closed under substitution. To this end, we show that, if $U$, $S\in SN_{\beta\pi}$ and $U[x:=S]\notin SN_{\beta\pi}$, then there is a term $W\leq U$ of a special form such that $W\in SN_{\beta\pi}$ and $W[x:=S]\notin SN_{\beta\pi}$. Moreover, we show that the sequence of redexes leading to $W$ is not completely general: it is a zoom-in sequence in the sense defined above. Reducing the outermost redexes of a zoom-in sequence preserves some useful properties, which is the statement of Lemma \ref{ch3:zoom}. \begin{lem} \label{ch3:nsnus} Let $U$, $S\in SN_{\beta\pi}$ and suppose $U[x:=S]\notin SN_{\beta\pi}$. Then there are terms $P,V\preceq U$ and a zoom-in minimal ${\mathcal R}$ such that $U[x:=S] \rightarrow^{\mathcal R} V[x:=S]$, $(x\star P) \leq V$ (or $(P\star x) \leq V$), $P[x:=S]\in SN_{\beta\pi}$ and $(x\star P)[x:=S]\notin SN_{\beta\pi}$ (or $(P\star x)[x:=S]\notin SN_{\beta\pi}$). \end{lem} \begin{proof} The proof goes by induction on $\eta c(U)$. If $U$ is other than an application, we can apply the induction hypothesis. Assume $U=(U_1\star U_2)$ with $U_i[x:=S]\in SN_{\beta\pi}$ $(i\in \{1,2\})$ and $U[x:=S]\notin SN_{\beta\pi}$. By Lemma \ref{ch3:nsn} and the induction hypothesis we may assume that $(U_1\star U_2)[x:=S]\rightarrow_{\rho} U'\notin SN_{\beta\pi}$, where $\rho\in \{\beta,\beta^\bot,\pi,\pi^\bot\}$. Let us suppose $\rho=\beta$, the other cases can be treated similarly. If $U_1=\lambda yU_1'$, then the induction hypothesis applies to $U_1'[y:=U_2]$. Otherwise $U_1=x$, and we have obtained the result. \end{proof} Next we define an alternating substitution: we start from two sets of terms of complementary types and the substitution is defined in a way that we keep track of which newly added sets of substitutions come from which of the two sets. The reason for this is that Lemma \ref{ch3:nsnus} in itself is not enough for proving the strong normalizability of \smash{$\ls$} even if we considered the $\beta$ and $\beta^\bot$ rules alone. We have to show that, if we start from a term $(U_1\star U_2)$ with $U_1$, $U_2\in SN_{\beta\pi}$, and we assume that $U_1[x:=U_2]\notin SN_{\beta\pi}$, then there are no deep interactions between the terms which come from $U_1$ and those which come from $U_2$. We can identify a subterm of a reduct of $U_1$ which is the cause for being non $SN_{\beta\pi}$, when performing a substitution with $U_2$. \begin{defi}\label{ch3:lssimsub}\hfill \begin{enumerate} \item A set $\mathcal{A}$ of proper terms is called $\preceq$-closed from below if, for all terms $U,U'$, if $U'\preceq U$, $U\in \mathcal{A}$ and $U'$ is proper, then $U'\in \mathcal{A}$. \item Let $\mathcal{A},\mathcal{B}$ be sets $\preceq$-closed from below and $A$ a type. We define simultaneously two sets of substitutions \begin{enumerate} \item $\Pi_{A}(\mathcal{B})\subseteq \Sigma_{A}$ and $\Theta_{A^{\bot}}(\mathcal{A})\subseteq \Sigma_{A^{\bot}}$ as follows. \begin{itemize} \item[] \begin{itemize} \item $\emptyset \in \Pi_{A}(\mathcal{B})$, \item $[y_1:=V_1\tau_1,\dots ,y_m:=V_m\tau_m]\in \Pi_A(\mathcal{B})$ if $V_i\in \mathcal{B}$ such that $type(V_i)=A$ and $\tau_i\in \Theta_{A^{\bot}}(\mathcal{A})$ $(1\leq i\leq m)$. \end{itemize} \item[] \begin{itemize} \item $\emptyset \in\Theta_{A^{\bot}}(\mathcal{A})$.
\item $[x_1:=U_1\rho_1,\dots ,x_m:=U_m\rho_m]\in\Theta_{A^{\bot}}(\mathcal{A})$ if $U_i\in \mathcal{A}$ such that $type(U_i)=A^{\bot}$ and $\rho_i\in \Pi_A(\mathcal{B})$ $(1\leq i\leq m)$. \end{itemize} \end{itemize} \item Let $\mathcal{S}_{A}(\mathcal{A},\mathcal{B}) =\{U\rho \;|\; U\in \mathcal{A} \textrm{ and } \rho \in \Pi_A(\mathcal{B})\}\cup \{V\tau \;|\; V\in \mathcal{B} \textrm{ and } \tau \in\Theta_{A^{\bot}}(\mathcal{A})\}$. It is easy to see that, from $U\leq V$ and $V\in \mathcal{S}_{A}(\mathcal{A},\mathcal{B})$, it follows that $U\in \mathcal{S}_{A}(\mathcal{A},\mathcal{B})$. \end{enumerate} \end{enumerate} \end{defi} \begin{lem}\label{ch3:zoom} Let $n$ be an integer, $A$ a type of length $n$ and ${\mathcal R} =[R_1,\dots ,R_m]$ a zoom-in minimal sequence of redexes. Assume the property $H$ ``if $U$, $V\in SN_{\beta\pi}$ and $cxty(type(V))<n$, then $U[x:=V]\in SN_{\beta\pi}$'' holds. If $R_1\in\mathcal{S}_{A}(\mathcal{A},\mathcal{B})$ for some sets $\mathcal{A}$ and $\mathcal{B}$ $\preceq$-closed from below, then $R_m\in \mathcal{S}_{A}(\mathcal{A},\mathcal{B})$. \end{lem} \begin{proof} The proof goes by induction on $m$. We prove the induction step from $m=1$ to $m=2$; the proof is the same for arbitrary $m\in \mathbb{N}$. We only treat the more interesting cases. Assume $R_1\in \mathcal{S}_{A}(\mathcal{A},\mathcal{B})$.\enlargethispage{\baselineskip} \begin{enumerate} \item $R_1=(\lambda xQ\star S) \rightarrow_{\beta} R_1'=Q[x:=S]$ and $R_2\leq R_1'$. \begin{enumerate} \item Suppose $R_1=U\rho$ for some $U\in \mathcal{A}$ and $\rho \in \Pi_A(\mathcal{B})$. Then $U=(U_1\star U_2)$ with $U_1\rho =\lambda xQ$ and $U_2\rho = S$, and, since $\rho\in \Sigma_{A}$, $U_1$ must be proper. Then we have $U_1=\lambda xU_1'$ and $U_1'\rho = Q$ for some $U_1'$. Now, $R_1'=U_1'[x:=U_2]\rho\in \mathcal{S}_{A}(\mathcal{A},\mathcal{B})$, which yields $R_2\in \mathcal{S}_A(\mathcal{A},\mathcal{B})$. \item Assume now $R_1=V\tau$. Then $V=(V_1\star V_2)$ with $V_1\tau =\lambda xQ$ and $V_2\tau = S$, and, since $\tau\in \Sigma_{A^\bot}$, $V_2$ must be proper. If $V_1$ is proper, then, as before, we obtain the result. Otherwise $V_1\tau =U\rho =\lambda xQ$. Since $U\in \mathcal{A}$ is proper, $U=\lambda xU_1$ and $U_1\rho =Q$ for some $U_1$. Then $U_1\rho_1\in \mathcal{S}_{A}(\mathcal{A},\mathcal{B})$ with $\rho_1=\rho + [x:=V_2\tau]$, since $type(V_2\tau)=type(S)=A$. This implies $R_2\in \mathcal{S}_{A}(\mathcal{A},\mathcal{B})$. \end{enumerate} \item $R_1=(\langle Q_1,Q_2\rangle\star \sigma_1(S))\rightarrow_{\pi}(Q_1\star S) =R_1'$ and $R_2\leq R_1'$. \begin{enumerate} \item Assume $R_1=U\rho$ for some $U\in \mathcal{A}$ and $\rho \in \Pi_A(\mathcal{B})$. Then $U_1\rho=\langle Q_1,Q_2\rangle$ and $U_2\rho=\sigma_1(S)$. \begin{itemize} \item[-] Let $U_1$ and $U_2$ be proper. Then $U_1=\langle U_1',U_1''\rangle$ and $U_2=\sigma_1(U_2')$ such that $U_1'\rho=Q_1$, $U_1''\rho=Q_2$ and $U_2'\rho=S$. We have $R_1'=(U_1'\star U_2')\rho\in\mathcal{S}_{A}(\mathcal{A},\mathcal{B})$, which yields the result. \item[-] Assume $U_2\in Var$. Then $V\tau =\sigma_1(S)$, and $cxty(type(S))<$ $cxty(type(\sigma_1(S)))=n$. Then assumption $H$ and the fact that $\langle Q_1,Q_2\rangle\in SN_{\beta\pi}$, together with Lemma \ref{ch3:mx}, lead to $(Q_1\star S)\in SN_{\beta\pi}$, which is not possible. Since $\rho\in \Sigma_A$, $U_1\in Var$ is impossible. \end{itemize} \item Assume $R_1=V\tau$ for some $V\in \mathcal{B}$ and $\tau \in \Theta_{A^{\bot}}(\mathcal{A})$.
Then $V_1\tau=\langle Q_1,Q_2\rangle$ and $V_2\tau=\sigma_1(S)$, where $V=(V_1\star V_2)$. \begin{itemize} \item[-] Let $V_1$ and $V_2$ be proper. Then $V_1=\langle V_1',V_1''\rangle$ and $V_2=\sigma_1(V_2')$ such that $V_1'\tau=Q_1$, $V_1''\tau=Q_2$ and $V_2'\tau=S$. We have $R_1'=(V_1'\star V_2')\tau\in\mathcal{S}_{A}(\mathcal{A},\mathcal{B})$. \item[-] Assume $V_1\in Var$. Then $U\rho =\langle Q_1,Q_2\rangle$, where $cxty(type(Q_1))<$ $cxty(type(\langle Q_1,Q_2\rangle))=n$. Then assumption $H$ and the fact that $S\in SN_{\beta\pi}$, together with Lemma \ref{ch3:mx}, lead to $(Q_1\star S)\in SN_{\beta\pi}$, which is not possible. Since $\tau\in\Sigma_{A^{\bot}}$, the case of $V_2\in Var$ is impossible.\qedhere \end{itemize} \end{enumerate} \end{enumerate} \end{proof} \noindent The next lemma identifies the subterm of $U$ responsible for the non strong normalizability of $U[x:=V]$. \begin{lem} \label{ch3:lsmain} Let $n$ be an integer and $A$ a type of length $n$. Assume the property $H$ ``if $U$, $V\in SN_{\beta\pi}$ and $cxty(type(V))<n$, then $U[x:=V]\in SN_{\beta\pi}$'' holds. \begin{enumerate} \item Let $U$ be a proper term, $\sigma \in \Sigma_{A}$ and $a\notin Im(\sigma)$. If $U\sigma, P\in SN_{\beta\pi}$ and $U\sigma [a:=P]\notin SN_{\beta\pi}$, then there exist $U'$ with $(U'\star a)\preceq U$ and $\sigma'\in \Sigma_{A}$ such that $U'\sigma'\in SN_{\beta\pi}$ and $(U'\sigma'\star a)[a:=P] \notin SN_{\beta\pi}$. \item Let $U$ be a proper term, $\sigma \in \Sigma_{A^{\bot}}$ and $a\notin Im(\sigma)$. If $U\sigma, P\in SN_{\beta\pi}$ and $U\sigma [a:=P]\notin SN_{\beta\pi}$, then there exist $U'$ with $(a\star U')\preceq U$ and $\sigma'\in \Sigma_{A^\bot}$ such that $U'\sigma'\in SN_{\beta\pi}$ and $(a\star U'\sigma')[a:=P] \notin SN_{\beta\pi}$. \end{enumerate} \end{lem} \begin{proof} Let us consider only case (1). We identify the reason of $U\sigma [a:=P]$ being non strongly normalizable: we find a subterm $(U'\star a)$ of a reduct of $U$ such that, for a substituted instance of $(U'\star a)$, $(U'\star a)\sigma'\in SN_{\beta\pi}$ and $(U'\star a)\sigma'[a:=P]\notin SN_{\beta\pi}$. This will contradict a minimality assumption concerning $U$ in the next lemma. For this we define two sets of substitutions as in Definition \ref{ch3:lssimsub} with the sets $\mathcal{A}$ and $\mathcal{B}$ as below. We note that property $H$ of the previous lemma implicitly ensures that the type of $U$ and the types of the elements in $\sigma$ can be assumed to be of the same length.\enlargethispage{\baselineskip} Let $$\mathcal{A}=\{M\,|\,M\preceq U \textrm{ and } M \textrm{ is proper} \},$$ $$\mathcal{B}=\{V\,|\,V\preceq \sigma (b)\textrm{ for some } b \in dom(\sigma ) \textrm{ and } V \textrm{ is proper} \}.$$ Then $U\sigma \in \mathcal{S}_{A}(\mathcal{A},\mathcal{B})$. By Lemma \ref{ch3:nsnus}, there exists a minimal zoom-in ${\mathcal R}=[R_1,\dots,R_n]$ and there are terms $U^*$ and $V\preceq U\sigma$ such that $U\sigma [a:=P]\rightarrow^{\mathcal R} V[a:=P]$ and $(U^*\star a)\leq V$ and $(U^*\star a)\in SN_{\beta\pi}$ and $(U^*\star a)[a:=P]\notin SN_{\beta\pi}$ or $(a\star U^*)\leq V$ and $(a\star U^*)\in SN_{\beta\pi}$ and $(a\star U^*)[a:=P]\notin SN_{\beta\pi}$. Assume the former. By Lemma \ref{ch3:zoom}, $(U^*\star a)\in \mathcal{S}_{A}(\mathcal{A},\mathcal{B})$. Then $(U^*\star a)=S\rho$ for some $S \in \mathcal{A}$ or $(U^*\star a)=W\tau$ for some $W\in \mathcal{B}$. Since $a\notin Im(\sigma )$, the latter is impossible.
The former case, however, yields $S=(U'\star a)$ with $U'\rho=U^*$ for some $U'\in \mathcal{A}$, which proves our assertion. \end{proof} The next lemma states closure of strong normalizability under substitution. \begin{lem}\label{ch3:sn} If $M,N\in SN_{\beta\pi}$, then $M[x:=N]\in SN_{\beta\pi}$. \end{lem} \begin{proof} We are going to prove a slightly more general statement. Suppose $M, N_{i}\in SN_{\beta\pi}$ are proper, $type(N_{i})=A$ $(1\leq i\leq k)$. Let $\tau _{i}\in \Sigma_{A^{\bot}}$ be such that $\tau _{i}\in SN_{\beta\pi}$ $(1\leq i\leq k)$ and let $\rho =[x_{1} :=N_{1}\tau _{1},\ldots ,x_{k} :=N_{k}\tau _{k}]$. Then we have $M \rho \in SN_{\beta\pi}$. The proof is by induction on $(cxty(A), \eta_{\beta\pi}(M), cxty(M), \Sigma_i \; \eta_{\beta\pi}(N_i), \Sigma_i \; cxty(N_i))$ where, in $\Sigma_i \; \eta_{\beta\pi}(N_i)$ and $\Sigma_i \; cxty(N_i)$, we count each occurrence of the substituted variable. For example, if $k=1$ and $x_1$ has $n$ occurrences, then $\Sigma_i \; \eta_{\beta\pi}(N_i)=n\cdot \eta_{\beta\pi}(N_1)$. The only nontrivial case is when $M=(M_{1}\star M_{2})$ and $M\rho\notin SN_{\beta\pi}$. By the induction hypothesis $M_{i}\rho\in SN_{\beta\pi}$ $(i\in \{1,2\})$. We select some of the typical cases. \begin{enumerate} \item[(A)] $M_{1}\rho \rightarrow_{\beta\pi} \lambda zM'$ and $M'[z:=M_2\rho]\notin SN_{\beta\pi}$. \begin{enumerate} \item[1.] $M_{1}$ is proper; then there is an $M_{3}$ such that $M_{1}=\lambda zM_{3}$ and $M_{3}\rho \rightarrow_{\beta\pi} M'$. In this case $(M_{3}[z:=M_{2}])\rho \notin SN_{\beta\pi}$ and since $\eta_{\beta\pi}(M_{3}[z:=M_{2}])<\eta_{\beta\pi}(M)$, the induction hypothesis gives the result. \item[2.] $M_{1}\in Var$. Then $M_{1}=x\in dom(\rho)$, $\rho(x)=N_{j}\tau_{j}\rightarrow_{\beta\pi} \lambda zM'$ for some $j$ $(1\leq j\leq k)$. Since $N_{j}$ is proper, there is an $N'$ such that $N_{j}=\lambda zN'$, $N'\tau_{j}\rightarrow_{\beta\pi} M'$. Then $N'\tau_{j}[z:=M_{2}\rho]\notin SN_{\beta\pi}$ and $type(z)={type({N_{j}})}^{\bot}=type(\tau_{j})$, so, by the previous lemma, we have $N''\prec N'$ and $\tau '$ such that $(N''\tau '\star M_{2}\rho )\notin SN_{\beta\pi}$. Now we have $(N''\tau '\star M_{2}\rho )=(y\star M_{2}\rho )[y:=N''\tau ']$, $type(N'')={type(\tau ')}^{\bot}=A$ and $\eta c(N'')<\eta c(N_{j})$, which contradicts the induction hypothesis. \end{enumerate} \item[(B)] $M_{1}\rho \rightarrow_{\beta\pi} \langle M',M''\rangle $ and either $(M'\star M_2\rho)\notin SN_{\beta\pi}$ or $(M''\star M_2\rho)\notin SN_{\beta\pi}$. Suppose the former. \begin{enumerate} \item[1.] $M_{1},M_{2}$ are proper; then there are $M_{3},M_{4}$ such that $M_{1}=\langle M_{3},M_{4}\rangle$ and $M_{3}\rho \rightarrow_{\beta\pi} M'$, or $M_{4}\rho \rightarrow_{\beta\pi} M''$. Assume the former. Then we have $(M_{3}\star M_{2})\rho \notin SN_{\beta\pi}$ and $\eta_{\beta\pi}((M_{3}\star M_{2}))<\eta_{\beta\pi}(M)$, a contradiction. \item[2.] $M_{1}=x\in dom(\rho)$, then $\rho (x)=N_{j}\tau_{j}\rightarrow_{\beta\pi} \langle M',M''\rangle$, $N_{j}$ is proper and $N_{j}=\langle U,V\rangle $, $U\tau_{j}\rightarrow_{\beta\pi} M'$ or $V\tau_{j}\rightarrow_{\beta\pi} M''$. Now $(U\tau_{j}\star M_2\rho)=(y\star M_2\rho)[y:=U\tau_{j}]\notin SN_{\beta\pi}$, but $cxty(type(U))<cxty(type(N_{j}))$, a contradiction again. \item[3.] $M_{2}\in Var$. This is similar to the previous case. By the same argument as in part (A)-2.-(a) of the proof of the previous lemma, $M_{1}$ and $M_{2}$ cannot both be variables.
This completes the proof of the lemma.\qedhere \end{enumerate} \end{enumerate} \end{proof} \begin{thm} \label{ch3:snt} The $\lambda_{\beta\pi}$-calculus is strongly normalizing. \end{thm} \begin{proof} It is enough to show that $M$, $N\in SN_{\beta\pi}$ implies $(M\star N)\in SN_{\beta\pi}$, since the other term forming operations obviously preserve strong normalizability. Supposing $M,N\in SN_{\beta\pi}$, Lemma \ref{ch3:mx} gives $(M\star x)\in SN_{\beta\pi}$, which yields, by the previous lemma, $(M\star N)=(M\star x)[x:=N]\in SN_{\beta\pi}$.\end{proof} \newpage \section{The $\overline{\lambda}\mu\tilde{\mu}$- and the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus}\label{section3} In this section, we introduce the $\overline{\lambda}\mu\tilde{\mu}$-calculus together with one of its extensions, the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus, by which we establish a translation of the $\ls$-calculus and thus obtain the strong normalization of the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus as a consequence. \subsection{The $\overline{\lambda}\mu\tilde{\mu}$-calculus} The $\overline{\lambda}\mu\tilde{\mu}$-calculus was introduced by Curien and Herbelin (\cite{Her} and \cite{Cur-Her}). We examine here the calculus defined by Curien and Herbelin \cite{Cur-Her}, which is simply typed. The $\overline{\lambda}\mu\tilde{\mu}$-calculus was invented for representing proofs in classical Gentzen-style sequent calculus: under the Curry-Howard correspondence a version of Gentzen-style sequent calculus is obtained as a system of simple types for the $\overline{\lambda}\mu\tilde{\mu}$-calculus. Moreover, the system presents a clear duality between call-by-value and call-by-name evaluations. \begin{defi} There are three kinds of terms, defined by the following grammar, and there are two kinds of variables. We assume that we use the same set of variables in the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus, too. In the literature, different authors use different terminology. Here, we will call them either $c$-terms, or $l$-terms or $r$-terms. Similarly, the variables will be called either $l$-variables (and denoted as $x,y,...$) or $r$-variables (and denoted as $a, b, ...$). \[ \begin{array}{ccccccccc} p &::= & \lfloor t , e \rfloor & \, &\, &\, &\, &\, &\, \\ t &::= & x &\mid & \lambda x t &\mid &\mu \alpha p &\, &\, \\ e &::= &\alpha &\mid & (t.e) &\mid & \tilde{\mu}x p &\, &\, \end{array} \] As usual, we denote by $Fv(u)$ the set of the free variables of the term $u$. \end{defi} \begin{defi} The types are built from atomic formulas (or, in other words, atomic types) with the connective $\rightarrow$. We assume that the same set of type variables ${\mathcal A}$ is used in the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus, also. The typing system is a sequent calculus based on judgements of the following form. \begin{center} $p : (\Gamma \;\rhd\; \triangle )$ $\;\;\;\;\;\;\;\;\;$ $\Gamma \;\rhd\; t : A\; |\; \triangle$ $\;\;\;\;\;\;\;\;\;$ $\Gamma\; |\; e : A \;\rhd\; \triangle$ \end{center} where $\Gamma$ (resp. $\triangle$) is a set of declarations of the form $x : A$ (resp. $a : A$), $x$ (resp. $a$) denoting a $l$-variable (resp. an $r$-variable) and $A$ representing a type, such that $x$ (resp. $a$) occurs at most once in an expression of $\Gamma$ (resp. $\triangle$) of the form $x:A$ (resp. $a: A$). We say that $\Gamma$ is an $l$-context and $\triangle$ is an $r$-context.
The typing rules are as follows \[ \begin{array}{ll} \begin{minipage}[t]{180pt} $Var_1 \;\;\;\displaystyle\frac{}{\Gamma, x : A \;\rhd\; x : A\; |\; \triangle }$\\ \end{minipage} & \begin{minipage}[t]{180pt} $Var_2 \;\;\;\displaystyle\frac{}{\Gamma \; |\; \alpha : A\;\rhd\; \alpha : A , \triangle }$\\[0.3cm] \end{minipage} \\ \begin{minipage}[t]{180pt} $\lambda \;\;\;\displaystyle\frac{\Gamma, x : A \;\rhd\; t : B\; |\; \triangle}{\Gamma\;\rhd\; \lambda x t : A \rightarrow B\; |\; \triangle}$\\ \end{minipage} & \begin{minipage}[t]{180pt} $(.) \;\;\;\displaystyle\frac{\Gamma \;\rhd\; t : A\; |\; \triangle \;\;\; \Gamma \; |\; e : B \;\rhd\; \triangle}{\Gamma \; |\; (t.e) : A \rightarrow B \;\rhd\; \triangle}$\\[0.1cm] \end{minipage} \end{array} \] $$\lfloor ,\rfloor \;\;\;\displaystyle\frac{\Gamma \;\rhd\; t : A \; |\; \triangle \;\;\; \Gamma \; |\; e : A\;\rhd\; \triangle}{\lfloor t , e \rfloor : (\Gamma \;\rhd\; \triangle )}$$ \[ \begin{array}{ll} \begin{minipage}[t]{180pt} $\mu \;\;\;\displaystyle\frac{p : (\Gamma \;\rhd\; \alpha : A , \triangle)}{\Gamma \;\rhd\; \mu \alpha p : A\; |\; \triangle}$\\ \end{minipage} & \begin{minipage}[t]{180pt} $\tilde{\mu} \;\;\;\displaystyle\frac{p : (\Gamma , x : A \;\rhd\; \triangle)}{\Gamma \; |\; \tilde{\mu} x p : A\;\rhd\; \triangle}$\\ \end{minipage} \end{array} \] \end{defi} \newpage \begin{defi}\hfill \begin{enumerate} \item The cut-elimination procedure (on the logical side) corresponds to the reduction rules (on the terms) given below.\\ \begin{tabular}{ l l l l l} $(\beta)$ & $\lfloor \lambda x t , (t'.e) \rfloor$ & $\hookrightarrow_{\;\beta}$ & $\lfloor t', \tilde{\mu} x \, \lfloor t , e \rfloor \rfloor$ \\ $(\mu)$ & $\lfloor \mu \alpha p , e \rfloor$ & $\hookrightarrow_{\;\mu}$ & $p[\alpha := e]$ & $\;$ \\ $(\tilde{\mu})$ & $\lfloor t , \tilde{\mu} x p \rfloor$ & $\hookrightarrow_{\;\tilde{\mu}}$ & $p[x:= t]$ & \\ $(s_l)$ & $\mu \alpha \lfloor t , \alpha \rfloor$ & $\hookrightarrow_{\;s_l}$ & $t$ & ${\rm if} \; \alpha \not \in Fv(t)$\\ $(s_r)$ & $\tilde{\mu} x \lfloor x , e \rfloor$ & $\hookrightarrow_{\;s_r}$ & $e$ & ${\rm if} \; x \not \in Fv(e)$\\ \end{tabular}\medskip \item Let us take the union of the above rules. Let $\hookrightarrow$ stand for the compatible closure of this union and, as usual, $\hookrightarrow^*$ denote the reflexive and transitive closure of $\hookrightarrow$. The notions of reduction sequence, normal form and normalization are defined with respect to $\hookrightarrow$. \end{enumerate} \end{defi} We present below some theoretical properties of the $\overline{\lambda}\mu\tilde{\mu}$-calculus (Herbelin \cite{Her}, Curien and Herbelin \cite{Cur-Her}, de Groote \cite{de Gro2}, Polonovski \cite{Poi} and David and Nour \cite{Dav-Nou4}). \begin{prop}[Type-preservation property]\label{ch4:pres} If $\Gamma\;\rhd\; t:A\;|\;\triangle$ (resp. $\Gamma\;|\;e:A\;\rhd\;\triangle$, resp. $p:(\Gamma\;\rhd\;\triangle)$) and $t \hookrightarrow^* t'$ (resp. $e\hookrightarrow^* e'$, resp. $p \hookrightarrow^* p'$), then $\Gamma\;\rhd\; t':A\;|\;\triangle$ (resp. $\Gamma\;|\;e':A\;\rhd\;\triangle$, resp. $p':(\Gamma\;\rhd\;\triangle)$). \end{prop} \begin{prop}[Subformula property]\label{ch4:subform} If $\Pi$ is a derivation of $\Gamma\;\rhd\; t:A\;|\;\triangle$ (resp. $\Gamma\;|\;e:A\;\rhd\;\triangle$, resp. $p:(\Gamma\;\vdash\;\triangle)$) and $t$ (resp. $e$, resp. $p$) is in normal form, then every type occurring in $\Pi$ is a subformula of a type occurring in $\Gamma \cup \triangle$, or a subformula of $A$ (only for $t$ and $e$). \end{prop}
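As a small illustration of how cut elimination proceeds in this calculus (this example is ours and plays no role in the development), a $(\beta)$-step followed by a $(\tilde{\mu})$-step implements the familiar $\beta$-reduction of the $\lambda$-calculus at the level of commands: \[ \lfloor \lambda x t , (t'.e) \rfloor \;\hookrightarrow_{\;\beta}\; \lfloor t' , \tilde{\mu} x \, \lfloor t , e \rfloor \rfloor \;\hookrightarrow_{\;\tilde{\mu}}\; \lfloor t , e \rfloor [x:=t'] . \]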
\end{prop} \begin{thm}[Strong normalization property]\label{SN2} If $\Gamma\;\rhd\; t:A\;|\;\triangle$ (resp. $\Gamma\;|\;e:A\;\rhd\;\triangle$, resp. $p:(\Gamma\;\rhd\;\triangle)$), then $t$ (resp. $e$, resp. $p$) is strongly normalizable, i.e. every reduction sequence starting from $t$ (resp. $e$, resp. $p$) is finite. \end{thm} The proof of Theorem \ref{SN2} can be found in the thesis of Polonovski \cite{Poi}, as well as in the work of David and Nour \cite{Dav-Nou4}, where an arithmetical proof is presented. \subsection{The $\overline{\lambda}\mu\tilde{\mu}^*$-calculus} Since we work in a sequent calculus, where negation is implicitly built in the rules, the typing rules of the $\overline{\lambda}\mu\tilde{\mu}$-calculus do not handle negation. However, for a full treatment of propositional logic we found it more convenient to introduce rules concerning negation. Since $c$-terms, which could have been candidates for objects of type $\bot$, are distinctly separated from terms, adding new term- and type-forming operators seems to be the easiest way to define negation. \begin{defi}\label{ch4:defterms}\hfill \begin{enumerate} \item The terms of the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus are defined by the following grammar. \[ \begin{array}{ccccccccc} p &::= & \lfloor t , e \rfloor & \, &\, &\, &\, &\, &\, \\ t &::= & x &\mid & \lambda x \, t &\mid &\mu \alpha \, p &\mid &\overline{e} \\ e &::= &\alpha &\mid & (t.e) &\mid & \tilde{\mu}x \, p &\mid &\widetilde{t} \end{array} \] As an abuse of terminology, in the sequel when speaking about the syntactic elements of the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus, we may not distinguish $l$-, $r$- and $c$-terms, we may speak about terms in general. We denote by $\mathfrak{T}$ the set of terms of the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus. \newpage \item The complexity of a term of $\mathfrak{T}$ is defined as follows. \begin{itemize} \item[] $cxty(x)=cxty(\alpha)=0$, \item[] $cxty(\lambda x t)= cxty(\widetilde{t}) = cxty(t)+1$, \item[] $cxty(\overline{e}) = cxty(e) +1$, \item[] $cxty(\mu \alpha p)= cxty(\tilde{\mu}x \, p) = cxty(p)+1$, \item[] $cxty(\lfloor t , e \rfloor )= cxty((t.e) ) = cxty(t)+cxty(e)$. \end{itemize} \end{enumerate} \end{defi} \begin{defi} The type inference rules are the same as in the $\overline{\lambda}\mu\tilde{\mu}$-calculus with two extra rules added for the types of the complemented terms. Moreover, we introduce an equation between types (for all types $A$, $(A^\bot)^\bot = A$) to ensure that our negation is involutive. \[ \begin{array}{ll} \begin{minipage}[t]{180pt} $\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\overline{.}\;\;\;\displaystyle\frac{\Gamma\; |\; e : A \rhd \triangle}{\Gamma \rhd \overline{e}:A^\bot\; |\; \triangle }$\\ \end{minipage} & \begin{minipage}[t]{180pt} $\widetilde{.}\;\;\; \displaystyle\frac{\Gamma \rhd t : A\; |\; \triangle }{\Gamma \; |\; \widetilde{t}:A^\bot\rhd \triangle }$\\ \end{minipage} \end{array} \] We also define the complexity of types in the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus. \begin{enumerate} \item[] $cxty(A)=0$ for atomic types, \item[] $cxty(A\rightarrow B)=cxty(A)+cxty(B)+1$, \item[] $cxty(A^{\bot})=cxty(A)$. \end{enumerate} That is, the complexity of a type $A$ provides us with the number of arrows in $A$. The presence of negation makes it necessary for us to introduce new rules handling negation. 
\end{defi} \begin{defi} Besides the reduction rules already present in $\overline{\lambda}\mu\tilde{\mu}$, we endow the calculus with some new rules to handle the larger set of terms. In what follows, $cl$ stands for ``complementer rule''. We shall refer to the $cl_{1,l}$- and $cl_{1,r}$-rules by a common notation as the $cl_{1}$-rules. \begin{tabular}{ l l l l l} $(cl_{1,l})$ & $\overline{\widetilde{t}}$ & $\hookrightarrow_{\; cl_{1,l}}$ & $t$ \\ $(cl_{1,r})$ & $\widetilde{\overline{e}}$ & $\hookrightarrow_{\; cl_{1,r}}$ & $e$ & $\;$ \\ $(cl_{2})$ & $\lfloor \overline{e} , \widetilde{t}\rfloor$ & $\hookrightarrow_{\; cl_{2}}$ & $\lfloor t , e\rfloor$ & \\ \end{tabular}\medskip \noindent In the sequel, we continue to apply the notation $\hookrightarrow$ and $\hookrightarrow^*$ in relation with this new calculus. \end{defi} Obviously, the statements analogous to Propositions \ref{ch4:pres} and \ref{ch4:subform} are still valid. \section{Relating the \texorpdfstring{$\ls$}{lambda-Sym-Prop}-calculus to the \texorpdfstring{$\overline{\lambda}\mu\tilde{\mu}^*$}{blambda-mu-bmu}-calculus}\label{section4} Rocheteau \cite{Roc} defined a translation between the $\overline{\lambda}\mu\tilde{\mu}$-calculus and the $\l \mu$-calculus, treating both the call-by-value and the call-by-name aspects of $\overline{\lambda}\mu\tilde{\mu}$. In this section, we give a translation (in both directions) between the \smash{$\ls$}-calculus and the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus, which is a version of the $\overline{\lambda}\mu\tilde{\mu}$-calculus extended with negation. The translations are such that strong normalization of one calculus follows from that of the other in both directions. We omit issues of evaluation strategies, however. At the end of the section we give an exact description of the correspondence between the two translations. Before presenting the translations, let us introduce some definitions and notation below. We assume that the two calculi have the same sets of variables and atomic types. Moreover, as an abuse of notation, if $\overline{{\alpha}}:A^{\bot}$ stems from the $r$-variable $\alpha:A$ in the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus, then we suppose that in the \smash{$\ls$}-calculus $\overline{{\alpha}}$ denotes a variable with type $A^{\bot}$. \subsection{A translation of the \texorpdfstring{$\overline{\lambda}\mu\tilde{\mu}^*$}{blambda-mu-bmu}-calculus into the \texorpdfstring{$\ls$}{lambda-Sym-Prop}-calculus} \begin{defi}\label{ch5:pi}\hfill \begin{enumerate} \item Let us consider the $\ls$-calculus. For $i\in\{ 1 , 2\}$, we write ${\pi}_i(y)=\lambda z(y\star {\sigma}_i(z))$. Then, we can observe that $y : A_1 \wedge A_2 \vdash {\pi}_i(y):A_i$, for $i\in\{ 1 , 2\}$. \item We define a translation $.^\mathfrak{e}:\mathfrak{T} \longrightarrow \mathcal{T}$ as follows. \[ p^\mathfrak{e}=(u^\mathfrak{e}\star v^\mathfrak{e}) \;\;\; \textrm{ if $\;\;p=\lfloor v , u\rfloor$}. \] \[ t^\mathfrak{e}=\left\{ \begin{array}{ll} x & \;\;\;\textrm{ if $\;\;t=x$},\\ \lambda y(\lambda x({\pi}_2(y)\star u^\mathfrak{e})\star {\pi}_1(y)) & \;\;\;\textrm{ if $\;\;t=\lambda xu$}, \\\lambda \overline{\alpha}(e^\mathfrak{e}\star t^\mathfrak{e})& \;\;\;\textrm{ if $\;\;t=\mu \alpha\lfloor t , e\rfloor$},\\u^\mathfrak{e}&\;\;\;\textrm{ if $\;\;t=\overline{u}$}.\end{array}\right.
\] \[ e^\mathfrak{e}=\left\{ \begin{array}{ll} \overline{\alpha} & \;\;\;\textrm{ if $\;\;e=\alpha$},\\ \langle t^\mathfrak{e} , h^\mathfrak{e}\rangle & \;\;\;\textrm{ if $\;\;e=t.h$}, \\ \lambda x(e^\mathfrak{e}\star t^\mathfrak{e})& \;\;\;\textrm{ if $\;\;e=\tilde{\mu} x\lfloor t , e\rfloor$}, \\h^\mathfrak{e}&\;\;\;\textrm{ if $\;\;e=\widetilde{h}$}. \end{array}\right. \] \item The translation $.^\mathfrak{e}$ also applies to types. \begin{itemize} \item $A^\mathfrak{e}=A$, where $A$ is an atomic type, \item $(A^\bot)^\mathfrak{e}=(A^\mathfrak{e})^\bot$, \item $(A\rightarrow B)^\mathfrak{e}=(A^\mathfrak{e})^\bot\vee B^\mathfrak{e}$. \end{itemize} \item Let $\Gamma$, $\triangle$ be $l$- and $r$-contexts, respectively. Then $\Gamma^\mathfrak{e}=\{x:A^\mathfrak{e}\; | \; x:A\in \Gamma\}$ and similarly for $\triangle$. Furthermore, for any $r$-context $\triangle$, let $\triangle^{\bot}=\{\overline{\alpha}: A^\bot \;|\; \alpha:A\in \triangle\}$. \end{enumerate} \end{defi} \begin{lem} \label{ch5:typelmtsls} \begin{enumerate} \item If $\Gamma \;{\rhd} \; t : A\; |\; \triangle$, then $\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^{\bot}\;{\vdash} \; t^\mathfrak{e}:A^\mathfrak{e}$. \item If $\Gamma \; |\; e : A\;{\rhd} \; \triangle$, then $\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^{\bot}\;{\vdash} \; e^\mathfrak{e}: (A^\mathfrak{e})^\bot$. \item If $p:(\Gamma\;{\rhd} \;\triangle )$, then $\Gamma^\mathfrak{e}, (\triangle^\mathfrak{e})^{\bot}\;{\vdash} \;p^\mathfrak{e}:\bot$. \end{enumerate} \end{lem} \begin{proof} The above statements are proved simultaneously by induction on the length of the $\overline{\lambda}\mu\tilde{\mu}^*$-deduction. We remark that $.^{\mathfrak{e}}$ is defined in Definition \ref{ch5:pi} exactly in the way to make the assertions of the lemma true. Let us examine some of the more interesting cases. \begin{enumerate} \item Suppose \[ \frac{\Gamma , x:A\;{\rhd}\;u:B\;|\;\triangle}{\Gamma \;{\rhd}\;\lambda xu:A\rightarrow B\;|\;\triangle}. \] Then we have, by the induction hypothesis and Definition \ref{ch5:pi}, \[\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot , x:A^\mathfrak{e} , y:A^\mathfrak{e}\wedge (B^\mathfrak{e})^\bot \;{\vdash}\;u^\mathfrak{e}:B^\mathfrak{e},\] \[\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot , y:A^\mathfrak{e}\wedge (B^\mathfrak{e})^\bot \;{\vdash}\;{\pi}_1(y):A^\mathfrak{e},\] \[\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot , y:A^\mathfrak{e}\wedge (B^\mathfrak{e})^\bot \;{\vdash}\;{\pi}_2(y):(B^\mathfrak{e})^\bot.\] Thus we can conclude \[\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot , x:A^\mathfrak{e} , y:A^\mathfrak{e}\wedge (B^\mathfrak{e})^\bot \;{\vdash}\;({\pi}_2(y)\star u^\mathfrak{e}):\bot, \] \[\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot , y:A^\mathfrak{e}\wedge (B^\mathfrak{e})^\bot\;{\vdash}\;\lambda x({\pi}_2(y)\star u^\mathfrak{e}): (A^\mathfrak{e})^\bot. \] From which, we obtain \[\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot\;{\vdash}\; \lambda y(\lambda x({\pi}_2(y)\star u^\mathfrak{e})\star {\pi}_1(y)): (A^\mathfrak{e})^\bot\vee B^\mathfrak{e}. \] \item Assume now \[\frac{\Gamma\;{\rhd}\;t:A\;|\;\triangle \;\;\;\;\; \Gamma\;|\;e:B\;{\rhd}\;\triangle} {\Gamma\;|\;t.e:A\rightarrow B\;{\rhd}\;\triangle}.
\] Then we have \[ \frac{\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot\;{\vdash}\;t^\mathfrak{e}:A^\mathfrak{e} \;\;\;\;\; \Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot\;{\vdash}\;e^\mathfrak{e}:(B^\mathfrak{e})^\bot} {\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot\;{\vdash}\; \langle t^\mathfrak{e} , e^\mathfrak{e}\rangle :A^\mathfrak{e}\wedge (B^\mathfrak{e})^\bot}. \] \item From \[ \frac{\Gamma\;{\rhd}\;t:A\;|\;\triangle \;\;\;\;\; \Gamma\;|\;e:A\;{\rhd}\;\triangle} {\lfloor t , e\rfloor :(\Gamma\;{\rhd}\;\triangle)}, \] we obtain \[ \frac{\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot\;{\vdash}\;t^\mathfrak{e}:A^\mathfrak{e} \;\;\;\;\; \Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot\;{\vdash}\;e^\mathfrak{e}:(A^\mathfrak{e})^\bot} {\Gamma^\mathfrak{e} , (\triangle^\mathfrak{e})^\bot\;{\vdash}\; (e^\mathfrak{e}\star t^\mathfrak{e}):\bot}. \] \end{enumerate} \end{proof} \noindent Our next aim is to prove that $\overline{\lambda}\mu\tilde{\mu}^*$ can be simulated by the \smash{$\ls$}-calculus. To this end we introduce a new notion of equality in the $\ls$-calculus. \begin{defi} We define an equivalence relation $\sim$ on $\mathcal{T}$, which is the smallest relation compatible with the term forming rules and containing all pairs of the form $((M\star N),(N\star M))$. \begin{itemize} \item $x\sim x$, \item if $M\sim M'$, then $\lambda xM\sim \lambda xM'$ and $\sigma_i(M)\sim \sigma_i(M')$ for $i\in \{1, 2\}$, \item if $M\sim M'$ and $N\sim N'$, then $\langle M , N\rangle \sim \langle M' , N'\rangle$ and $(M \star N)\sim (M'\star N')$ and $(M \star N)\sim (N'\star M')$. \end{itemize} We say that $M$ and $N$ are equal up to symmetry provided $M \sim N$. \end{defi} \begin{lem} \label{ch4:sbstsim} \label{ch4:ppsim} Let $M,M',N,N' \in \mathcal{T}$. \begin{enumerate} \item If $M\sim M'$ and $N\sim N'$, then $M[x:=N]\sim M'[x:=N']$. \item If $M\sim M'$ and $M'\rightarrow N$, then there is $N'$ for which $M\rightarrow N'$ and $N\sim N'$. \end{enumerate} \end{lem} \begin{proof} 1. By induction on $cxty(M)$. 2. By 1. \end{proof} \begin{lem} \label{ch5:subls} Let $u,t,e \in \mathfrak{T}$. Then $(u[x:=t])^\mathfrak{e}=u^\mathfrak{e}[x:=t^\mathfrak{e}]$ and $(u[a:=e])^\mathfrak{e}=u^\mathfrak{e}[\overline{a}:=e^\mathfrak{e}]$. \end{lem} \begin{proof} By induction on $cxty(u)$. \end{proof} Now we can formulate our assertion about the simulation of the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus by the $\ls$-calculus. \begin{thm} \label{ch5:simlmtsls}~ Let $v,w \in \mathfrak{T}$. \begin{enumerate} \item If $v\hookrightarrow_r w$ and $r \in \{ \beta \, , \, \mu \, , \, {\tilde{\mu}} \, , \, {s_l} \, , \, {s_r} \}$, then $\;v^\mathfrak{e}\rightarrow^+w^\mathfrak{e}$. \item If $v\hookrightarrow_r w$ and $r\in\{{cl_{1,l}} \, , \, {cl_{1,r}} \, , \, {cl_2} \}$, then $v^\mathfrak{e}\sim w^\mathfrak{e}$. \end{enumerate} \end{thm} \begin{proof} \begin{enumerate} \item Let us only treat the typical cases. \begin{enumerate} \item If $v=\lfloor \lambda xu , (t.e)\rfloor \hookrightarrow_{\beta} \lfloor t , \tilde{\mu} x\lfloor u , e\rfloor \rfloor =w$, then $v^\mathfrak{e} = (\langle t^\mathfrak{e} , e^\mathfrak{e}\rangle \star \lambda y(\lambda x({\pi}_2(y)\star u^\mathfrak{e})\star {\pi}_1(y)))\rightarrow_{\beta^{\bot}} {} (\lambda x({\pi}_2(\langle t^\mathfrak{e} , e^\mathfrak{e}\rangle )\star u^\mathfrak{e})\star {\pi}_1(\langle t^\mathfrak{e} , e^\mathfrak{e}\rangle ))\rightarrow^* {} (\lambda x(e^\mathfrak{e}\star u^\mathfrak{e})\star t^\mathfrak{e})=w^\mathfrak{e}$.
\item If $v=\lfloor \mu ap , e\rfloor \hookrightarrow_{\mu} p[a:=e]=w$, then, by Lemma \ref{ch5:subls}, $v^\mathfrak{e}=(e^\mathfrak{e}\star \lambda ap^\mathfrak{e}) \rightarrow_{\beta^{\bot}}p^\mathfrak{e}[a:=e^\mathfrak{e}]=w^\mathfrak{e}$.
\item If $v=\mu a\lfloor w , a\rfloor \hookrightarrow_{s_l} w$, $a\notin w$, then $v^\mathfrak{e}=\lambda a (a\star w^\mathfrak{e})\rightarrow_{\eta^\bot}w^\mathfrak{e}$. \end{enumerate}
\item \begin{enumerate} \item If $v=\overline{\widetilde{u}}\hookrightarrow_{cl_{1,l}}u=w$, then $v^\mathfrak{e}=(\overline{\widetilde{u}})^\mathfrak{e}= u^\mathfrak{e}=w^\mathfrak{e}$. \item If $v=\lfloor \overline{v} , \widetilde{u}\rfloor \hookrightarrow_{cl_2} \lfloor u , v\rfloor =w$, then $v^\mathfrak{e}=\lfloor \overline{v} , \widetilde{u}\rfloor^\mathfrak{e} =(u^{\mathfrak{e}}\star v^{\mathfrak{e}})\sim w^\mathfrak{e}$.\qedhere \end{enumerate} \end{enumerate} \end{proof}
\begin{cor} The $\overline{\lambda}\mu\tilde{\mu}^*$-calculus is strongly normalizable. \end{cor}
\begin{proof} Let $\sigma$ be a reduction sequence in the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus, say $\sigma$ is $v_0\hookrightarrow v_1\hookrightarrow\ldots \hookrightarrow v_n$, and assume $\sigma$ contains some number $k\geq 0$ of $\beta$-, $\mu$-, $\tilde{\mu}$-, $s_l$- or $s_r$-reductions. By Theorem \ref{ch5:simlmtsls}, $v_0^\mathfrak{e}$, $v_1^\mathfrak{e},\ldots$, $v_n^\mathfrak{e}$ forms a sequence of \smash{$\ls$}-terms, where either $v_i^\mathfrak{e}\rightarrow v_{i+1}^\mathfrak{e}$ or $v_i^\mathfrak{e}\sim v_{i+1}^\mathfrak{e}$ $(0\leq i\leq n-1)$ and, to every $\beta$-, $\mu$-, $\tilde{\mu}$-, $s_l$- or $s_r$-reduction, there corresponds a reduction step in the \smash{$\ls$}-calculus. By Lemma \ref{ch4:ppsim}, we obtain that $\sim$ can be postponed, that is, there are $w_0$, $w_1,\ldots$, $w_{k+1}$ in $\mathcal{T}$ such that $w_0=v_0^\mathfrak{e}$, $w_{k+1}=v_n^\mathfrak{e}$ and $w_0\rightarrow\ldots \rightarrow w_k\sim w_{k+1}$. This means that we can establish a reduction sequence of length $k$ starting from $v_0^\mathfrak{e}$ in the \smash{$\ls$}-calculus. Hence, by Theorem \ref{ch3:snt} and Corollary \ref{ch3:snlsfull}, an infinite reduction sequence starting from $v_0$ can contain only a finite number of $\beta$-, $\mu$-, $\tilde{\mu}$-, $s_l$- or $s_r$-reductions. Thus a tail of it would form an infinite reduction sequence in the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus consisting entirely of $cl_{1,l}$-, $cl_{1,r}$- and $cl_2$-reductions, which is impossible, since each of these reductions strictly decreases the size of the term. \end{proof}
\subsection{A translation of the \texorpdfstring{$\ls$}{lambda-Sym-Prop}-calculus into the \texorpdfstring{$\overline{\lambda}\mu\tilde{\mu}^*$}{blambda-mu-bmu}-calculus}
Now we are going to deal with the converse relation. That is, we will present a translation of the $\ls$-calculus into the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus which faithfully reflects the typability relations of one calculus in the other. Then we prove that our translation is in fact a simulation of the $\ls$-calculus in the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus.
\begin{defi}\label{ch5:mumutilde}\hfill \begin{enumerate} \item The translation $.^\mathfrak{f}: \mathcal{T} \longrightarrow \mathfrak{T} $ is defined as follows.
\[ M^{\mathfrak{f}}=\left\{ \begin{array}{ll} x & \;\;\;\textrm{ if $\;\;M=x$},\\ \lfloor Q^\mathfrak{f} , \widetilde{P^\mathfrak{f}}\rfloor & \;\;\; \textrm{ if $\;\;M=(P\star Q)$},\\ \overline{\tilde{\mu} xN^\mathfrak{f}}& \;\;\;\textrm{ if $\;\;M=\lambda xN$},\\ \overline{(P^\mathfrak{f} .\widetilde{Q^\mathfrak{f}})}&\;\;\;\textrm{ if $\;\;M=\langle P , Q\rangle$},\\ \lambda x\mu \beta\lfloor N^\mathfrak{f} , \widetilde{x}\rfloor & \;\;\;\textrm{ if $\;\;M=\sigma_1(N)$, $x\notin Fv(N^\mathfrak{f})$ and $\beta\notin Fv(\lfloor N^\mathfrak{f} , \widetilde{x}\rfloor)$},\\ \lambda xN^\mathfrak{f} & \;\;\;\textrm{ if $\;\;M=\sigma_2(N)$ and $x\notin Fv(N^\mathfrak{f})$}. \end{array}\right. \] \item The translation $.^\mathfrak{f}$ applies to the types as follows. \begin{itemize} \item ${\alpha}^\mathfrak{f}=\alpha$, \item ${({\alpha}^\bot )}^\mathfrak{f}={\alpha}^\bot$, \item $(A\wedge B)^\mathfrak{f} =(A^\mathfrak{f} \rightarrow (B^\mathfrak{f})^\bot )^\bot$, \item $(A\vee B)^\mathfrak{f} = (A^\mathfrak{f})^\bot \rightarrow B^\mathfrak{f}$. \end{itemize} We remark that $.^\mathfrak{f}$ maps the terms of the $\ls$-calculus with type $\bot$ to $c$-terms of the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus, which have no types. We also have, for all types $A$, ${(A^\bot )}^\mathfrak{f} = (A^\mathfrak{f})^\bot$. Therefore the translation $.^\mathfrak{f}$ maps equal types to equal types. \end{enumerate} \end{defi} \begin{lem} \label{ch5:typelslmts} \begin{enumerate} \item If $\Gamma \;{\vdash} \; M : A$ and $A\neq \bot$, then $\Gamma^\mathfrak{f}\;{\rhd} \; M^\mathfrak{f} :A^\mathfrak{f}$. \item If $\Gamma \;{\vdash} \; M : \bot$, then $M^\mathfrak{f} :(\Gamma^\mathfrak{f}\;{\rhd} \;)$. \end{enumerate} \end{lem} \begin{proof} The proof proceeds by a simultaneous induction on the length of the derivation in the $\ls$-calculus. We can observe again that the notion of $.^\mathfrak{f}$ in Definition \ref{ch5:mumutilde} is conceived in a way to make the statements of the lemma true. Let us only examine some of the typical cases of the first assertion. \begin{enumerate} \item Suppose \[ \frac{\Gamma , x : A\;{\vdash}\; u : \bot} {\Gamma\;\vdash\;\lambda xu : A^\bot}. \] Then, applying the induction hypothesis, \[ \dfrac{ \dfrac{u^\mathfrak{f} :(\Gamma^\mathfrak{f} , x : A^\mathfrak{f} \;{\rhd}\;)} {\Gamma^\mathfrak{f} \;|\;\tilde{\mu} xu^\mathfrak{f} : A^\mathfrak{f} \;\rhd\;}} {\Gamma^\mathfrak{f} \;\rhd\;\overline{\tilde{\mu} xu^\mathfrak{f} }:(A^\mathfrak{f})^\bot}. \] \item If \[ \frac{\Gamma \;{\vdash}\; u:A} {\Gamma \;{\vdash}\; \sigma_1(u):A\vee B}, \] then, we obtain \[ \dfrac{ \dfrac{ \dfrac{ \dfrac{\;} {\Gamma^\mathfrak{f} , x:(A^\mathfrak{f})^\bot \;{\rhd}\;u^\mathfrak{f} : A^\mathfrak{f} \;|\;\beta :B^\mathfrak{f}} \;\;\;\;\; \dfrac{\Gamma^\mathfrak{f} , x:(A^\mathfrak{f})^\bot \;{\rhd}\;x:(A^\mathfrak{f})^\bot \;|\;\beta :B^\mathfrak{f}} {\Gamma^\mathfrak{f} , x:(A^\mathfrak{f})^\bot \;|\;\widetilde{x}: A^\mathfrak{f} \;{\rhd}\;\beta : B^\mathfrak{f}}} {\lfloor u^\mathfrak{f} , \widetilde{x}\rfloor : (\Gamma^\mathfrak{f} , x:(A^\mathfrak{f})^\bot \;{\rhd}\;\beta :B^\mathfrak{f})}} {\Gamma^\mathfrak{f} , x:(A^\mathfrak{f})^\bot \;{\rhd}\; \mu \beta \lfloor u^\mathfrak{f} , \widetilde{x}\rfloor :B^\mathfrak{f}}} {\Gamma^\mathfrak{f} \;{\rhd}\;\lambda x\mu \beta \lfloor u^\mathfrak{f} , \widetilde{x}\rfloor : (A^\mathfrak{f})^\bot \rightarrow B^\mathfrak{f}}. 
\] \item From \[ \frac{\Gamma \;{\vdash}\; u : A^\bot \;\;\;\;\; \Gamma \;{\vdash}\; v: A} {\Gamma \;{\vdash}\; (u\star v):\bot}, \] we obtain \[ \dfrac{ \dfrac{\Gamma^\mathfrak{f} \;{\rhd}\; u^\mathfrak{f}:(A^\mathfrak{f})^\bot} {\Gamma^\mathfrak{f} \;|\; \widetilde{u^\mathfrak{f}}:A^\mathfrak{f}\;{\rhd}\;} \;\;\;\;\;\; \dfrac{\;} {\Gamma^\mathfrak{f} \;{\rhd}\; v^\mathfrak{f}:A^\mathfrak{f}}} {\lfloor v^\mathfrak{f} , \widetilde{u^\mathfrak{f}}\rfloor :(\Gamma^\mathfrak{f} \;{\rhd}\;)}. \] \end{enumerate} \end{proof}
\noindent Now we turn to the proof of the simulation of the $\ls$-calculus in the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus.
\begin{lem} \label{ch5:sublb} Let $M,N \in \mathcal{T}$. Then ${(M[x:=N])}^\mathfrak{f} =M^\mathfrak{f} [x:=N^\mathfrak{f}]$. \end{lem}
\begin{proof} By induction on $cxty(M)$. \end{proof}
\begin{thm} \label{ch5:simlslmts} Let $M,N \in \mathcal{T}$. If $M \rightarrow N$, then $M^\mathfrak{f} \hookrightarrow^+N^\mathfrak{f}$. \end{thm}
\begin{proof} Let us prove some of the more interesting cases.
\begin{enumerate} \item If $M=(\lambda xP\star Q)\rightarrow_{\beta}P[x:=Q]=N$, then, applying Lemma \ref{ch5:sublb}, $M^\mathfrak{f} = \lfloor Q^\mathfrak{f} , \widetilde{\overline{\tilde{\mu} xP^\mathfrak{f} }}\rfloor \hookrightarrow_{cl_{1,r}} {}\lfloor Q^\mathfrak{f} , \tilde{\mu} xP^\mathfrak{f} \rfloor \hookrightarrow_{\tilde{\mu}} {} P^\mathfrak{f} [x:=Q^\mathfrak{f} ]=N^\mathfrak{f}$.
\item If $M=(Q\star \lambda xP)\rightarrow_{\beta^{\bot}}P[x:=Q]=N$, then $M^\mathfrak{f} = \lfloor \overline{\tilde{\mu} xP^\mathfrak{f} } , \widetilde{Q^\mathfrak{f}}\rfloor \hookrightarrow_{cl_{2}} {} \lfloor Q^\mathfrak{f}, \tilde{\mu} xP^\mathfrak{f} \rfloor \hookrightarrow_{\tilde{\mu}} {} P^\mathfrak{f} [x:=Q^\mathfrak{f} ]=N^\mathfrak{f}$.
\item If $M=(\langle P , Q\rangle \star \sigma_1(R))\rightarrow_{\pi}(P\star R)=N$, then $M^\mathfrak{f} = \lfloor \lambda x\mu b\lfloor R^\mathfrak{f} , \widetilde{x}\rfloor , \widetilde{\overline{(P^\mathfrak{f} .\widetilde{Q^\mathfrak{f}})}}\rfloor \hookrightarrow_{cl_{1,r}} {} \lfloor \lambda x\mu b\lfloor R^\mathfrak{f} , \widetilde{x}\rfloor , (P^\mathfrak{f} .\widetilde{Q^\mathfrak{f}})\rfloor \hookrightarrow_{\beta}$ $\lfloor P^\mathfrak{f} , \tilde{\mu} x\lfloor \mu b\lfloor R^\mathfrak{f} , \widetilde{x}\rfloor , \widetilde{Q^\mathfrak{f}}\rfloor \rfloor \hookrightarrow_{\tilde{\mu}} {} \lfloor \mu b\lfloor R^\mathfrak{f} , \widetilde{P^\mathfrak{f}}\rfloor , \widetilde{Q^\mathfrak{f}}\rfloor \hookrightarrow_{\mu} {} \lfloor R^\mathfrak{f} , \widetilde{P^\mathfrak{f}}\rfloor = N^\mathfrak{f}$.\qedhere \end{enumerate} \end{proof}
We could just as well have demonstrated that the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus is strongly normalizable by applying the method presented in Section 3, as accomplished by Batty\'anyi \cite{Batt}. The following result states that in this case the strong normalizability of the \smash{$\ls$}-calculus would arise as a direct consequence of that of the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus.
\begin{cor} If the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus is strongly normalizable, then the same is true for the $\ls$-calculus as well. \end{cor}
\begin{proof} By Theorem $\ref{ch5:simlslmts}$. \end{proof}
\subsection{The connection between the two translations}
In this subsection we examine the connection between the two translations.
We prove that both compositions $.^{\mathfrak{e}^\mathfrak{f}}:\mathfrak{T} \longrightarrow \mathfrak{T}$ and $.^{\mathfrak{f}^\mathfrak{e}}:\mathcal{T} \longrightarrow \mathcal{T}$ are such that we can get back the original terms (exactly in the second case, and up to an embedding of the connectives in the first) by performing some steps of reduction on $u^{\mathfrak{e}^\mathfrak{f}}$ or on $M^{\mathfrak{f}^\mathfrak{e}}$, respectively. That is, the following theorems are valid. The case of $.^{\mathfrak{f}^\mathfrak{e}}$ is the easier one. First we describe the effect of $.^{\mathfrak{f}^\mathfrak{e}}$ on the typing relations.
\begin{lem} If $\Gamma \;{\vdash} \; M : A$, then $\Gamma\;{\vdash} \; {M^\mathfrak{f}}^\mathfrak{e}:A$. \end{lem}
\begin{proof} Combining Lemmas \ref{ch5:typelslmts} and \ref{ch5:typelmtsls}. \end{proof}
\begin{thm}\label{comp:fe} Let $M \in \mathcal{T}$. Then ${M^{\mathfrak{f}}}^\mathfrak{e} \rightarrow^* M$. \end{thm}
\begin{proof} By induction on $cxty(M)$. We consider only the more interesting cases.
\begin{enumerate} \item If $M=(P\star Q)$, then ${(P\star Q)^\mathfrak{f}}^\mathfrak{e}=\lfloor Q^\mathfrak{f},\widetilde{P^\mathfrak{f}}\rfloor^\mathfrak{e}=({P^\mathfrak{f}}^\mathfrak{e}\star{Q^\mathfrak{f}}^\mathfrak{e})\rightarrow^*(P\star Q)$.
\item If $M=\langle P,Q\rangle$, then ${\langle P,Q\rangle^\mathfrak{f}}^\mathfrak{e}=\overline{({P^\mathfrak{f}.\widetilde{Q^\mathfrak{f}}})}^\mathfrak{e}=\langle {P^\mathfrak{f}}^\mathfrak{e},{Q^\mathfrak{f}}^\mathfrak{e}\rangle\rightarrow^* \langle P,Q\rangle$.
\item If $M=\sigma_1(N)$, then ${\sigma_1(N)^\mathfrak{f}}^\mathfrak{e}={\lambda x\mu \beta\lfloor N^\mathfrak{f},\widetilde{x}\rfloor}^\mathfrak{e}=\lambda y(\lambda x({\pi}_2(y)\star {(\mu \beta\lfloor N^\mathfrak{f}, \widetilde{x}\rfloor)}^\mathfrak{e})\star {\pi}_1(y))=\\ \lambda y(\lambda x({\pi}_2(y)\star \lambda \overline{\beta}(x\star {N^\mathfrak{f}}^\mathfrak{e}))\star {\pi}_1(y))\rightarrow_{\beta^\bot}\lambda y (\lambda x(x\star {N^\mathfrak{f}}^\mathfrak{e})\star {\pi}_1(y))\rightarrow_{\eta^\bot}\\ \lambda y({N^\mathfrak{f}}^\mathfrak{e}\star {\pi}_1(y)) \rightarrow_{\beta}\lambda y(y\star \sigma_1({N^\mathfrak{f}}^\mathfrak{e}))\rightarrow_{\eta^\bot} \sigma_1({N^\mathfrak{f}}^\mathfrak{e})\rightarrow^*\sigma_1(N)$.\qedhere \end{enumerate} \end{proof}
\noindent We now examine the composition ${.^\mathfrak{e}}^\mathfrak{f}:\mathfrak{T}\rightarrow\mathfrak{T}$ for an arbitrary $u$. First we make the following observation.
\begin{lem} \begin{enumerate} \item If $\Gamma \;{\rhd} \; t : A\; |\; \triangle$, then $\Gamma, \triangle^\bot\;{\rhd} \; {t^\mathfrak{e}}^\mathfrak{f}:A$. \item If $\Gamma \; |\; e : A\;{\rhd} \; \triangle$, then $\Gamma,\triangle^\bot\;{\rhd} \; {e^\mathfrak{e}}^\mathfrak{f}: A^\bot$. \item If $p:(\Gamma\;{\rhd} \;\triangle )$, then ${p^\mathfrak{e}}^\mathfrak{f}:(\Gamma,\triangle^\bot\;{\rhd} )$. \end{enumerate} \end{lem}
\begin{proof} Combining Lemmas \ref{ch5:typelmtsls} and \ref{ch5:typelslmts}. \end{proof}
Theorem \ref{comp:fe} states that, if $M$ is an \smash{$\ls$}-term, then $M$ can be related to ${M^{\mathfrak{f}}}^\mathfrak{e}$ by the reductions in the \smash{$\ls$}-calculus. We note that we are not able to obtain $u$ from ${{u^{\mathfrak{e}}}}^\mathfrak{f}$ in such a way. We can find a term $T$ instead such that ${{u^{\mathfrak{e}}}}^\mathfrak{f}\hookrightarrow^* T(u)$. The function $T$ can intuitively be considered as a description of how the \smash{$\ls$}-connectives can be embedded into the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus.
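Before making this precise, let us illustrate both composites on basic commands; the computations below follow directly from Definitions \ref{ch5:pi} and \ref{ch5:mumutilde} and are meant only as an illustration, not as part of the formal development. For $M=(x\star y)$ we have $M^\mathfrak{f}=\lfloor y , \widetilde{x}\rfloor$, hence ${M^\mathfrak{f}}^\mathfrak{e}=(x\star y)=M$, so in this case no reduction step at all is needed in Theorem \ref{comp:fe}. In the other direction, for the command $u=\lfloor x , \alpha\rfloor$ we obtain $u^\mathfrak{e}=(\overline{\alpha}\star x)$ and ${u^\mathfrak{e}}^\mathfrak{f}=\lfloor x , \widetilde{\overline{\alpha}}\rfloor$: the variable $\alpha$ of the $r$-context returns as the ordinary variable $\overline{\alpha}$ placed under $\widetilde{\;\cdot\;}$, so $u$ is recovered only up to this re-coding of the connectives. The term $\lfloor x , \widetilde{\overline{\alpha}}\rfloor$ is precisely the value $T(u)$ of the function $T$ introduced below.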
It turns out that the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus does not translate the \smash{${\ls}$}-terms as smoothly as was the case in the other direction.
\begin{defi}\label{comp:T} We define a function $T$ assigning a $\overline{\lambda}\mu\tilde{\mu}^*$-term to a $\overline{\lambda}\mu\tilde{\mu}^*$-term.\\\\ \begin{tabular}{ll} \begin{minipage}[t]{220pt} \begin{itemize} \item $T(x)=x$, \item $T(\lambda xu)=\overline{\tilde{\mu} y\lfloor T(u)[x:=p_1(y)] , \widetilde{p_2(y)}\rfloor}$, \item $T(\mu \alpha p)=\overline{\tilde{\mu} \overline{\alpha}T(p)}$, \item $T(\overline{u})=T(u)$, \end{itemize} \end{minipage} & \begin{minipage}[t]{150pt} \begin{itemize} \item $T(\alpha)=\overline{\alpha}$, \item $T((u.v))=\langle T(u) , T(v)\rangle$, \item $T(\tilde{\mu} xp)=\overline{\tilde{\mu} xT(p)}$, \item $T(\widetilde{h})=T(h)$, \item $T(\lfloor t ,e\rfloor)=\lfloor T(t) ,\widetilde{T(e)}\rfloor$. \end{itemize} \end{minipage} \end{tabular} \end{defi}
\begin{thm} Let $u \in \mathfrak{T}$. We have ${u^{\mathfrak{e}}}^\mathfrak{f}\hookrightarrow^* T(u)$. \end{thm}
\begin{proof} By induction on $cxty(u)$. We consider only some of the cases.
\begin{enumerate} \item If $u=\lambda xv$, then ${u^{\mathfrak{e}}}^\mathfrak{f} = (\lambda y(\lambda x({\pi}_2(y)\star v^\mathfrak{e})\star {\pi}_1(y)))^\mathfrak{f} = \overline{\tilde{\mu} y\lfloor p_1(y),\widetilde{\overline{\tilde{\mu} x\lfloor {v^{\mathfrak{e}}}^\mathfrak{f},\widetilde{p_2(y)}\rfloor}}\rfloor} \hookrightarrow_{cl_{1,r}}$ \\ $\overline{\tilde{\mu} y\lfloor p_1(y),\tilde{\mu} x\lfloor {v^{\mathfrak{e}}}^\mathfrak{f},\widetilde{p_2(y)}\rfloor\rfl} \hookrightarrow_{\tilde{\mu}} \overline{\tilde{\mu} y\lfloor {v^{\mathfrak{e}}}^\mathfrak{f}[x:=p_1(y)],\widetilde{p_2(y)}[x:=p_1(y)]\rfloor} \hookrightarrow^* T(u)$.\medskip
\item If $u=\tilde{\mu} x\lfloor t,v\rfloor$, then ${u^{\mathfrak{e}}}^\mathfrak{f} = (\lambda x(v^\mathfrak{e}\star t^\mathfrak{e}))^\mathfrak{f} =\overline{\tilde{\mu} x\lfloor {t^\mathfrak{e}}^\mathfrak{f},\widetilde{{v^\mathfrak{e}}^\mathfrak{f}}\rfloor} \hookrightarrow^* \overline{\tilde{\mu} x\lfloor T(t),\widetilde{T(v)}\rfloor} =\overline{\tilde{\mu} xT(\lfloor t,v\rfloor)}=T(u)$.\qedhere \end{enumerate} \end{proof}
\begin{rem} We remark that we cannot expect $T(u)$ to be expressible with the help of $\mathfrak{T}$. Namely, we can show that, if \smash{$=_{\tiny{\overline{\lambda}\mu\tilde{\mu}^*}}$} denotes the reflexive, transitive closure of the compatible union of the reduction relations in the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus, then none of the assertions below are valid.
\begin{enumerate} \item There exists a $\overline{\lambda}\mu\tilde{\mu}^*$-term $\Phi$ such that, for every $c$-term $c$, $T(c)=_{\tiny{\overline{\lambda}\mu\tilde{\mu}^*}}\Phi(c)$. \item There exists a $\overline{\lambda}\mu\tilde{\mu}^*$-term $\Phi_1$ such that, for every $l$-term $t$, $T(t)=_{\tiny{\overline{\lambda}\mu\tilde{\mu}^*}}\Phi_1(t)$. \item There exists a $\overline{\lambda}\mu\tilde{\mu}^*$-term $\Phi_2$ such that, for every $r$-term $e$, $T(e)=_{\tiny{\overline{\lambda}\mu\tilde{\mu}^*}}\Phi_2(e)$. \end{enumerate} \end{rem}
\section{Conclusion}
The paper is mainly devoted to an arithmetical proof of the strong normalization of the \smash{$\ls$}-calculus introduced by Berardi and Barbanera \cite{Ber-Bar}. The proof is an adaptation of the work of David and Nour \cite{Dav-Nou3}.
The novelty of our paper is the application of the method of zoom-in sequences of redexes: we achieve the main theorem by identifying the minimal non-strongly normalizing redexes of an infinite reduction sequence, which we call a zoom-in sequence of redexes. The idea of zoom-in sequences was inspired by the notion of perpetual reduction strategies introduced by Raamsdonk et al. \cite{Sor}. Following the proof of the strong normalization of the \smash{$\ls$}-calculus, the $\overline{\lambda}\mu\tilde{\mu}$-calculus is introduced, which was defined by Curien and Herbelin \cite{Cur-Her}. The same proof of strong normalization as we have presented for the \smash{$\ls$}-calculus would also work for the calculus of Curien and Herbelin, as was shown by Batty\'anyi \cite{Batt}. However, instead of adapting the proof method for the $\overline{\lambda}\mu\tilde{\mu}$-calculus, we designed a translation of the \smash{$\ls$}-calculus into the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus and vice versa, where the $\overline{\lambda}\mu\tilde{\mu}^*$-calculus is the $\overline{\lambda}\mu\tilde{\mu}$-calculus augmented with terms explicitly expressing negation and with rules handling them. The translation allows us to assert strong normalization for the $\overline{\lambda}\mu\tilde{\mu}^*$- and, hence, for the $\overline{\lambda}\mu\tilde{\mu}$-calculus. On the technical side, we remark that there were two main difficulties that rendered the proof a little more involved. First, we had to work with an alternating substitution defined inductively starting from two sets of terms. The reason was that we had to prove a more general statement to locate the supposedly non-strongly normalizing part of a term emerging as a result of a substitution. Simple substitutions would not have been enough for our purpose. The second difficulty was that in order to establish a key property of zoom-in sequences in Lemma \ref{ch3:zoom} we had to move forward the Hypothesis ``H'' from the main theorem, thus making the application of the hypothesis implicit in the sequel. We think that the elimination of both problems would considerably enhance the paper's intelligibility. It seems promising to investigate whether the present method of verifying strong normalization can be applied to systems other than simply typed logical calculi, for example, proof nets (Laurent \cite{Lau}). Other fields of interest could be intuitionistic and classical typed systems with explicit substitutions (Rose \cite{Ros}). To handle these systems, the present proof must be simplified; we have to pay attention, for example, to the fact that the substitutions in our proof are defined by two sets of terms of different types. Finally, we remark that it is a natural requirement of a proof formalizable in first-order arithmetic that it enable us to find an upper bound for the lengths of the reduction sequences. In its present form, our proof does not make this possible; this raises a further demand for the simplification of the results.
\section*{Acknowledgment}
\noindent We wish to thank Ren\'e David and the anonymous referees for helpful discussions and remarks.
\section*{Introduction}
Hartshorne~\cite{hart-cdav,hart-as} pioneered the systematic study of the cohomological properties of pairs consisting of a projective scheme which is regular in a neighbourhood of a local complete intersection subscheme with ample normal bundle. Ample subvarieties of projective varieties were defined by Ottem~\cite{ottm}, based on Totaro's work~\cite{tot} on cohomological ampleness. Inspired by {\textit{op.\,cit.}}, but in a totally different framework (cf.~\cite[Introduction]{hlc-subvar}), the author considered the weaker $q$-ampleness property. Moreover, there has recently been an increased interest in studying and understanding various positivity properties of higher codimensional subvarieties and cycles, see \textit{e.g.}~\cite{ful+leh} and the references therein. Our goal is to introduce partially ample subschemes. Besides general properties and examples, important features are their connectedness properties. The main result is a contribution to a question raised by Fulton-Hansen~\cite{fult+hans}, which is known to fail in its original form. We prove that appropriate partial ampleness of a subvariety yields the connectedness of its pre-image under a morphism; in fact, it implies the G3 property, in Hironaka-Matsumura's terminology. The article consists of three sections. In the first one, we introduce the relevant definitions and \emph{compress} those properties which carry over from~\cite{ottm}; for details, the reader should consult the original reference. Next we present a class of situations where Fulton-Hansen's question does admit a positive answer.
\begin{thm-nono}{(cf.~\ref{thm:f-1},~\ref{thm:q})} Let $V,X$ be irreducible projective varieties, $V\srel{f}{\to} X$ a morphism. Let $Y\subset X$ be a closed subscheme. \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] If $f$ is surjective and $\mathop{\rm cd}\nolimits(X\sm Y)\les\dim X-2$, then $f^{-1}(Y)$ is connected. \item[\rm(ii)] Suppose $Y$ is $\big(\dim f(V)+\dim(Y)-\dim(X)-1\big)$-ample. \item[] Then $f^{-1}(Y)$ is connected and $\pi_1^{alg}(f^{-1}(Y))\to\pi_1^{alg}(V)$ is surjective. \end{enumerate} \end{thm-nono}
When $f$ is an embedding, the theorem yields a connectedness criterion for intersections, reminiscent of a problem posed by Hartshorne (cf.~\cite{hart-as,petr}) which is still wide open. An example from {\textit{op.\,cit.}}\, shows that the first part of our result is optimal (cf. Remark~\ref{rmk:optim}). To our knowledge, such numerical conditions are not available in this generality. Existing results (cf. Fulton-Hansen~\cite{fult+hans,hans}, Faltings~\cite{falt-homog}, Debarre~\cite{debr2}) hold for subvarieties of various homogeneous spaces. Also, \emph{our result goes beyond the applicability of Ottem's work}. On the one hand, for a subvariety, the requirement to be partially ample is obviously less restrictive than to be ample. On the other hand---at a deeper level---even in the basic case of two subvarieties of an ambient space, one of them being ample, \cite{ottm} yields only the non-emptiness of their intersection, but gives \emph{no information about the connectedness} without further smoothness assumptions. Since the image of a morphism can have arbitrarily bad singularities, it is not possible to conclude anything regarding the connectedness of pre-images. Precisely for this reason, the first part of the Theorem is essential: it holds in full generality.
In the last section, we show the ubiquity of partially ample---\emph{mostly not ample}---subvarieties by analysing several classes of examples: vanishing loci of sections in vector bundles (cf. {\S}\ref{ssct:glob-gen}); Bialynicki-Birula decompositions (cf. {\S}\ref{ssct:fixed}); rational homogeneous spaces (cf. {\S}\ref{ssct:subvar-homog}). \section{{\textit{q}-ample and \textit{p}-positive subvarieties}} \begin{m-notation}\label{not:XYN} Let $X$ be a projective scheme defined over an algebraically closed field ${\Bbbk}$ of characteristic zero. Let $Y$ be a closed subscheme; we denote the maximal dimension of its components by $\dim Y$, and assume that they are all at least $1$-dimensional. Let $\eI_Y\subset\eO_X$ be the sheaf of ideals defining $Y$; for $m\ges0$, $Y_m$ is the subscheme defined by $\eI_{Y}^{m+1}$. The formal completion of $X$ along $Y$ is $\hat X_Y:=\disp\varinjlim Y_m$; for any coherent sheaf $\eG$ on $X$, it holds \begin{equation}\label{eq:XY} H^t(\hat X_Y,\eG)=\varprojlim H^t(Y_m,\eG). \end{equation} Let $\tld X:=\Bl_Y(X)\srel{\si}{\to}X$ be the blow-up of $\eI_Y$ and $E_Y\subset\tld X$ the exceptional divisor. If $X$ is Cohen-Macaulay and $Y$ is locally complete intersection---\emph{lci} for short---, its normal sheaf $\eN_{Y/X}:=(\eI_Y/\eI_Y^2)^\vee$ is locally free. A variety is a reduced and irreducible scheme. The symbol $\cst{A}$ stands for a constant depending on the quantity $A$. Further necessary notions are recalled in the appendix. \end{m-notation} \subsection{Basic properties}\label{sct:1-property} Let the situation be as above. \begin{m-definition}\label{def:q1} We denote $$ \delta:=\codim_X(Y)=\min\{\codim_XY'\mid Y'\;\text{irreducible component of}\;Y\}. $$ \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] {\rm(cf. \cite[Definition 3.1]{ottm})} We say that $Y$ is \emph{$q$-ample} if $\eO_{\tld X}(E_Y)$ is $(q+\delta-1)$-ample. That is, for any coherent sheaf $\tld\eF$ on $\tld X$ it holds: \begin{equation}\label{eq:q11} H^{t}(\tld X,\tld\eF\otimes\eO_{\tld X}(mE_Y))=0,\;\forall\,t\ges q+\delta,\;\forall\,m\ges\cst{\tld\eF}. \end{equation} \item[\rm(ii)] We say that $Y$ is (has the property) $p^\pos$---that is, \emph{$p$-positive}---if it holds: \begin{equation}\label{eq:q12} H^t(X,\eF\otimes\eI_Y^m)=0,\;\forall\,t\les p,\;\forall\,m\ges\cst{\eF}, \end{equation} for all locally free sheaves $\eF$ on $X$. \end{enumerate} \end{m-definition} Partial ampleness behaves well under restrictions, while positivity yields connectedness results. The notions are dual under certain regularity assumptions. \begin{m-remark} \nit{\rm(i)} For $q=0$, one recovers the ample subschemes~\cite{ottm}. \nit{\rm(ii)} For a closed point $y\in Y$, it holds \begin{equation}\label{eq:les} \delta_{Y,y}-1\les\dim\si^{-1}(y)\les q+\delta-1, \end{equation} where $\delta_{Y,y}$ is the codimension of the irreducible component of $Y$ containing $y$ (cf.~\cite[Proposition 3.4]{ottm}). See~\ref{lm:conn} for equidimensionality criteria of partially ample subvarieties. \end{m-remark} \begin{m-proposition}\label{prop:N+cd} \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] A subscheme is $q$-ample if and only if so is its integral closure. \item[\rm(ii)] The $q$-ampleness property of lci subschemes is open relative to projective, flat morphisms. \item[\rm(iii)] $Y$ is $q$-ample if and only if it holds: \item[] {\rm(a)} $\eO_{E_Y}(E_Y)$ is $(\delta+q-1)$-ample; {\rm(b)} $\mathop{\rm cd}\nolimits(X\sm Y){\les}\,\delta+q-1.$ (Note:$\;\mathop{\rm cd}\nolimits(X\sm Y){\ges}\,\delta-1$.) 
\end{enumerate} \end{m-proposition}
\begin{m-proof} (i)+(ii) See \cite[Proposition~6.8, Theorem~6.1]{ottm}.
\nit(iii) Verify~\eqref{eq:q11} for $\tld\eF_{E_Y}$---$\tld\eF={\tld\eA}^{-k}$, $k\ges 1$, and $\tld\eA\in\Pic(\tld X)$ ample---by using the sequence $0\to\tld\eF((m-1)E_Y)\to\tld\eF(mE_Y)\to\tld\eF_{E_Y}(mE_Y)\to0;$ proceed as in~\cite[Theorem 5.4]{ottm}. \end{m-proof}
\begin{m-proposition}\label{prop:p>0} The following statements are equivalent: \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] $Y\subset X$ is $p^\pos$;\qquad {\rm(ii)} $E_Y\subset\tld X$ is $p^\pos$; \item[\rm(iii)] The condition~\eqref{eq:q11} holds for $\eF=\eA^{k}$, $k\ges1$, where $\eA\in\Pic(X)$ is ample. \item[\rm(iv)] For all locally free sheaves $\eF$ on $X$, the properties below are satisfied:\\[1ex] $\begin{array}{rl} {\rm(a)}& H^t\big(X,\eI_Y^m/\eI_Y^{m+1}\otimes\eF\big)=0,\;\forall\,m\ges\cst{\eF},\;0\les t\les p-1, \\[0ex] {\rm(b)}& \res^X_{Y}:H^t(X,\eF)\to H^t(\hat X_Y,\eF) \text{ is }\;\biggl\{ \begin{array}{l} \text{an isomorphism, for }t\les p-1,\\[1ex] \text{injective, for }t=p. \end{array} \Big. \end{array}$ \end{enumerate} \end{m-proposition}
\begin{m-proof} (i)$\Leftrightarrow$(iii) Observe that $\eF$ fits into $0{\to}\eF{\to}\eA^k\otimes{\Bbbk}^N{\to}\eG{\to}0$, with $k,N>0$, and $\eG$ locally free. Then $H^t(\eF\otimes\eI_Y^m)\cong H^{t-1}(\eG\otimes\eI_Y^m)$---so $t$ decreases---and we repeat the process.
\nit(i)$\Leftrightarrow$(ii) Let $\eA\in\Pic(X)$ be ample such that $\tld\eA:=\si^*\eA(-E_Y)$ is ample on $\tld X$. Now note that $\,H^t(\tld X,\tld\eA^{k}(-mE_Y)){=} H^t(\tld X,\si^*\eA^{k}(-(k+m)E_Y)){=} H^t(X,\eA^k\otimes\eI_Y^{k+m})$, for $m{\gg}0$.
\nit(i)$\Leftrightarrow$(iv) For (a), we use $0{\to}\eI_Y^{m+1}{\to}\eI_Y^m{\to}\eI_Y^m/\eI_Y^{m+1}{\to}0$; for (b), twist by $\eF$ the following sequence and use~\eqref{eq:XY}: $0{\to}\eI_Y^{m+1}{\to}\eO_X{\to}\eO_{Y_m}{\to}0$. Conversely, for $t\les p-1$ and $m\gg\cst{\eF}$, using $0{\to}\eI_Y^m/\eI_Y^{m+1}{\to}\eO_{Y_m}{\to}\eO_{Y_{m-1}}{\to}0$ we deduce that $H^t(Y_m,\eF){\to} H^t(Y_{m-1},\eF)$ are injective and eventually isomorphic, so $H^t(\hat X_Y,\eF)=H^t(Y_m,\eF)$ for $m$ large enough. Thus $H^t(X,\eF){\to} H^t(Y_m,\eF)$ are isomorphisms, so $H^t(\eI_Y^m\otimes \eF)=0$. It remains to treat $t=p$. For $m\ges\cst{\eF}$, $H^p(\eI_Y^{m+1}\otimes\eF){\to} H^p(\eI_Y^{m}\otimes\eF)$ are injective, eventually isomorphic to a vector space $H(p,\eF)$. The previous step and the sequence $0{\to}\eI_Y^{m+1}{\to}\eO_X{\to}\eO_{Y_m}{\to}0$ imply that $H(p,\eF)=\Ker\big(H^p(X,\eF){\to} H^p(\hat X_Y,\eF)\big)=0$. \end{m-proof}
\begin{m-proposition}\label{prop:q1} We consider the conditions: $$\text{{\rm(A)} $Y$ is a $(\dim Y-p)$-ample subscheme; \qquad{\rm(P)} $Y$ is $p^\pos$.}$$ The following statements hold: \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] If $\tld X$ is a Cohen-Macaulay scheme, then $\,\text{\rm(A)}\,\Rightarrow\,\text{\rm(P)}$; \item[\rm(ii)] If $\tld X$ is Gorenstein, then $\,\text{\rm(A)}\,\Leftrightarrow\,\text{\rm(P)}$. \item[\rm(iii)] For $X$ smooth and $Y$ lci, one has the equivalence: \begin{equation}\label{eq:equiv-p} \text{$Y$ is $p^\pos$} \quad\Leftrightarrow\quad \bigg\{\begin{array}{l} \text{the normal bundle $\eN_{Y/X}$ is $(\dim Y-p)$-ample,} \\[1ex] \text{the cohomological dimension $\mathop{\rm cd}\nolimits(X\sm Y)\les\dim X-(p+1)$}.
\end{array} \end{equation} \end{enumerate} \end{m-proposition} \begin{proof} (i) Since $\eO_{\tld X}(-E_Y)$ is relatively ample, for $m\gg0$, it holds: $$ H^t(X,\eF\otimes\eI_Y^m) \cong H^t(\tld X,\eF\otimes\eO_{\tld X}(-mE_Y)) \cong H^{\dim X-t}(\tld X,\omega_{\tld X}\otimes\eF^\vee\otimes\eO_{\tld X}(mE_Y)). $$ \nit(ii) The equation above shows that~\eqref{eq:q11} holds for $\omega_{\tld X}\otimes\eL$, $\eL\in\Pic(X)$; let us prove for a coherent sheaf $\tld\eF$ on $\tld X$. Take $\eA\in\Pic(X)$ ample such that $\eA(-E_Y)$ is ample on $\tld X$, and $c>0$ such that $(\tld\eF\otimes\omega_{\tld X}^{-1})\otimes\eA(-E_Y)^c$ is globally generated. The recursion~\cite[Lemma~2.1]{ottm} applied to $\;0\to\tld\eF_1:=\Ker(\veps)\to \big(\omega_{\tld X}\otimes\eA^{-c}\otimes\eO_{\tld X}(cE_Y)\big)^{\oplus N} \srel{\veps}{\to}\tld\eF\to0,$ for some $N>0$, yields $H^j(\tld\eF(mE_Y))\subset H^{j+1}(\tld\eF_1(mE_Y))$, $j\ges\codim Y+q$, $m\gg0$. \nit(iii) Apply Proposition~\ref{prop:N+cd}, since $\tld X$ is Gorenstein. \end{proof} The Cohen-Macaulay (resp. Gorenstein) property of blow-ups has been investigated by several authors; combinatorial conditions are determined in~\cite{kaw,hyry}. A situation which covers many geometric applications is when $X$ is Cohen-Macaulay (resp. smooth) and $Y$ is lci. The proposition above breaks the estimation of the amplitude into a local and a global problem. The former is easier but, in general, the cohomological dimension is difficult to control (cf.~\cite{andr+grau,ogus,falt-homog}); in~\cite{hlc+taj} we obtained upper bounds---those of interest---in the presence of affine stratifications. Below is a manageable situation which will be used in Section~\ref{sct:expl}. \begin{m-proposition}\label{prop:x-b} Let $V$ be a projective scheme, $\tld X\srel{\phi}{\to}V$ a morphism such that $\eO_{\tld X}(E_Y)$ is $\phi$-relatively ample. Then $Y\subset X$ is $q$-ample, for $\,q:=1+\dim\phi(\tld X)-\codim_X(Y).$ \end{m-proposition} \begin{proof} For a coherent sheaf $\tld\eF$ on $\tld X$, one has $R^t\phi_*(\tld\eF\otimes\eO_{\tld X}(mE_Y)){=}0,\,t>0, m\gg0$, so\\ $H^{j}\big(\tld X,\tld\eF\otimes\eO_{\tld X}(mE_Y)\big){=}H^j\big(\,V,\phi_*(\tld\eF\otimes\eO_{\tld X}(mE_Y))\,\big){=}0,\,j\ges q+\delta>\dim\phi(\tld X).$ \end{proof} \subsection{Elementary operations}\label{sct:N+cd} We study the behaviour of partial ampleness under various natural operations: intersection, pull-back, product. \begin{m-proposition}\label{prop:pull-back} Let $X'\srel{f}{\to}X$ be a morphism, $d:=$ the maximal dimension of its fibres, and $Y':=X'\times_XY=f^{-1}(Y)\subset X'$. \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] If $Y\subset X$ is $q$-ample, then $Y'\subset X'$ is $(q+d)$-ample. \item[\rm(ii)] If $f$ is flat and surjective and $Y$ is $p^\pos$, then so is $Y'$. \end{enumerate} \end{m-proposition} \begin{m-proof} (i) The universality property of the blow-up yields the commutative diagram $$ \xymatrix@R=1.2em@C=4em{ \tld X'=\Bl_{Y'}(X')\ar[d]\ar[r]^-{\tld f}&\tld X=\Bl_{Y}(X)\ar[d] & \\ X'\ar[r]^-f&X& \tld f^*\eO_{\tld X}(E_{Y})=\eO_{\tld X'}(E_{Y'}). } $$ Moreover, $\tld X'\subset X'\times_X\tld X$ is a closed subscheme, so the maximal dimension of the fibres of $\tld f$ is still $d$. For a coherent sheaf $\tld\eG$ on $\tld X'$, the projection formula implies $\, R^i\tld f_*(\tld\eG\otimes\eO_{\tld X'}(mE_{Y'})) =R^i\tld f_*\tld\eG\otimes\eO_{\tld X}(mE_{Y}),\; R^{j}\tld f_*\tld\eG=0,\,j>d, $ and the condition~\eqref{eq:q11} follows. \nit(ii) Since $f$ is flat, the diagram is Cartesian. 
Take $\tld\eA'\in\Pic(\tld X')$ ample such that $R^j\tld f_*\tld{\eA'}^k=0,$ for $k,j>0$; then $\tld\eF_k:=\tld f_*\tld{\eA'}^k$ is locally free on $\tld X$, by Grauert's criterion. For $t\les p$, one has $H^t(\tld X',\tld{\eA'}^{k}(-mE_{Y'}))=H^t(\tld X,\tld\eF_k(-mE_Y))$, and we conclude by~\ref{prop:p>0}. \end{m-proof} \begin{m-proposition}\label{prop:product} Suppose $X$ is smooth. \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] Let $Y_1,Y_2\subset X$ be respectively $q_1$-, $q_2$-ample lci subvarieties such that $\;\codim(Y_1\cap Y_2)=\codim(Y_1)+\codim(Y_2).$ Then $Y_1\cap Y_2\subset X$ is $(q_1+q_2)$-ample. \item[\rm(ii)] Suppose $Y_j\subset X_j$ are lci and $p_j^\pos$, for $j=1,2$. Then $Y_1\times Y_2\subset X_1\times X_2$ is $\min\{p_1,p_2\}^\pos$. \end{enumerate} \end{m-proposition} \begin{m-proof} (i) Note that $Y_1\cap Y_2$ is lci in $X$; the sub-additivity property~\cite[Theorem 3.1]{arap} applied to the normal bundle sequence implies that $\eN_{Y_1\cap Y_2/X}$ is $(q_1+q_2)$-ample. The Mayer-Vietoris sequence for $X\sm(Y_1\cup Y_2)$ yields the bound on the cohomological dimension. \nit(ii) The inequality $\mathop{\rm cd}\nolimits(X_1\times X_2\sm Y_1\times Y_2)<\dim(X_1\times X_2)-\min\{p_1,p_2\}$ follows from the Mayer-Vietoris sequence for $X_1\times X_2\sm Y_1\times Y_2=\big((X_1\sm Y_1)\times X_2\big)\cup\big(X_1\times(X_2\sm Y_2)\big)$. It remains to show that $\eN_{Y_1\times Y_2/X_1\times X_2}$ is $q$-ample, for $q=\dim Y_1+\dim Y_2-\min\{p_1,p_2\}$. We use the equivalent characterization in Definition~\ref{def:q-line}: let $\eA_1,\eA_2$ be ample line bundles on $X_1,X_2$, respectively, and $\eA_1\boxtimes\eA_2$ the tensor product of their pull-backs. Then, for $a\gg0$, $k\ges 1$, and $t>\max\{q_1+\dim X_2,q_2+\dim X_1\}$, it holds:\\[1ex] $\begin{array}{l} H^t\big(Y_1\times Y_2,(\eA_1^{-k}\boxtimes\eA_2^{-k})\otimes\Sym^a(\eN_{Y_1/X_1}\boxplus\eN_{Y_2/X_2})\big) \\[1.5ex] \null\kern1em= \uset{\genfrac{}{}{0pt}{}{t_1+t_2=t,}{a_1+a_2=a}}{\bigoplus} H^{t_1}\big(Y_1,\eA_1^{-k}\otimes\Sym^{a_1}(\eN_{Y_1/X_1})\big)\otimes H^{t_2}\big(Y_2,\eA_2^{-k}\otimes\Sym^{a_2}(\eN_{Y_2/X_2})\big)=0. \end{array}$ \end{m-proof} \subsection{Weak positivity}\label{ssct:aprox-q} Our goal is to prove a transitivity result for the $p^\pos$-property. \begin{m-definition}\label{def:ap>0} The subscheme $Y\subset X$ is $p^\apos$---that is, \emph{weakly $p$-positive}---if there is a decreasing sequence of sheaves of ideals $\{\eJ_m\}_{m}$ with the following properties: \begin{equation}\label{eq:ap} \begin{array}{lll} \bullet\;& \forall\,m,n\ges1,\;\exists\,m'>m,\,n'>n\text{ such that } \eJ_{m'}\subset\eI_Y^m,\;\eI_Y^{n'}\subset\eJ_{n}; & \\[1ex] \bullet\;& \text{for any locally free sheaf $\eF$ on $X,\;$} \\[.5ex]& \exists\,\cst{\eF}\ges1\text{ such that }H^t\big(X,\eF\otimes{\eJ_{m}}\big)=0, \;\forall\,t\les p\;\forall\,m\ges\cst{\eF}. &\kern10ex\null \end{array} \end{equation} Obviously, $Y$ is $p^\apos$ if and only if so is $Y_{\text{red}}$. \end{m-definition} \begin{m-lemma}\label{lm:cd-apos} Suppose $X$ is smooth, $Y$ is lci and $p^\apos$. Then $\,\mathop{\rm cd}\nolimits(X\sm Y){\les}\dim X{-}(p+1).$ \end{m-lemma} \begin{m-proof} Since $X$ is smooth, \cite[Proposition III.3.1]{hart-as} states: $$ \mathop{\rm cd}\nolimits(X\sm Y)< c\;\Leftrightarrow\; H^t(X\sm Y,\eL)=0,\,\forall \eL\in\Pic(X),\,\forall t\ges c. $$ We have $X\sm Y\cong\tld X\sm E_Y$ and $H^t(\tld X\sm E_Y,\eL)=\varinjlim H^t(\tld X,\eL(mE_Y))$, cf.~\cite[(5.1)]{ottm}. 
Since $\Pic(\tld X)\cong\Pic(X)\oplus\mbb ZE_Y$, one has $\omega_{\tld X}\otimes\eL^{-1}\cong {\euf M}(lE_Y)$ for some ${\euf M}\in\Pic(X)$, $l\in\mbb Z$. By applying Serre duality---as $\tld X$ is Gorenstein---we find:\\ $\varprojlim H^j(X,{\euf M}\otimes\eI_Y^m)=\varprojlim H^j(X,{\euf M}\otimes\eJ_n)=0, \,\forall\, {\euf M}\in\Pic(X),\,j\les p.$ \end{m-proof}
\begin{m-corollary}\label{cor:pic} Let $Y\subset X$ be complex, smooth varieties. If $Y$ is $p^\apos$, then the following statements hold: \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] $H^t(X;\bbQ)\to H^t(Y;\bbQ)\;\text{is}\;\biggl\{ \begin{array}{rl} \text{an isomorphism, for}&t\les p-1; \\[1ex] \text{injective, for}&t=p. \end{array}$ \item[\rm(ii)] For $p\ges 3$, the following maps are isomorphisms: $$\Pic(X)\otimes\bbQ\to\Pic(\hat{X}_Y)\otimes\bbQ\to\Pic(Y)\otimes\bbQ.$$ \item[\rm(iii)] For $p=\dim Y-q$ and $X,Y$ as in~\ref{prop:x-b}, the previous properties hold with $\mbb Z$-coefficients. \end{enumerate} \end{m-corollary}
\begin{m-proof} (i) Use the previous lemma and~\cite[Corollary 5.2]{ottm}.
\nit(ii) One has $H^j(\eO_X)\srel{\cong}{\to}H^j(\hat X_Y;\cO_{\hat X_Y})$, $j=1,2$. On the other hand, the sequence $$ H^1(Y;\mbb Z)\to H^1(\hat X_Y;\cO_{\hat X_Y})\to \Pic(\hat X_Y) \to H^2(Y;\mbb Z)\to H^2(\hat X_Y;\cO_{\hat X_Y}) $$ is exact (cf.~\cite[Lemma 8.3]{hart-cdav}). Now use (i) and the exponential sequences of $X,Y$.
\nit(iii) Note that $\eO_{\tld X}(E_Y)$ is $\dim\phi(\tld X)$-positive, so~\ref{thm:mats-bott}(iii) applies. Indeed, consider an embedding $\tld X\srel{\iota}{\to}\mbb P^N\times V$ over $V$, such that $\eO_{\tld X}(m_0E_Y)=\iota^*(\eO_{\mbb P^N}(1)\boxtimes\euf M)$, for some $m_0>0$, $\euf M\in\Pic(V)$. Now take the Fubini-Study metric on $\eO_{\mbb P^N}(1)$ and an arbitrary one on $\euf M$. \end{m-proof}
\begin{m-proposition}\label{prop:approx-p} Let $X$ be smooth. Suppose $Z\subset Y$ is $p^\pos$, $Y\subset X$ is $r^\pos$, and they are both irreducible and lci. Then the following statements hold: \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] $Z\subset X$ is $\big(p-(\dim Y-r)\big)^\pos$; more precisely, one has: \begin{equation}\label{eq:trans-p} \bigg\{ \begin{array}{l} \text{$\eN_{Z/X}$ is $\big(\dim Y+\dim Z-(r+p)\big)$-ample,} \\[1ex] \text{$\mathop{\rm cd}\nolimits(X\sm Z)\les\dim X-(\min\{r,p\}+1)$}. \end{array} \end{equation} \item[\rm(ii)] If $Z, Y$ are smooth, then $Z\subset X$ is ${\mn{\{p,r\}}}^\apos$. \end{enumerate} \end{m-proposition}
\begin{m-proof} (i) The first claim follows from~\ref{prop:N+cd}. For the second, let $U_Z{:=}X\sm Z,\, U_Y{:=}X\sm Y$, and $\eG$ be a coherent sheaf on $X$. The left- and right-hand sides of the exact sequence $$ \ldots\to H^i_{Y\sm Z}(U_Z,\eG)\to H^i(U_Z,\eG)\to H^i(U_Y,\eG)\to\ldots, $$ vanish for $i\ges\dim X-p$ and $i\ges \dim X-r$, respectively (cf.~\cite[Proposition 6.4]{ottm}).
\nit(ii) Both $Y,Z$ are ${\mn{\{p,r\}}}^\pos$, so we may assume without loss of generality that $p=r$. Consider $\xi_1,\dots,\xi_u,\zeta_1,\dots,\zeta_v\in\eO_{X,z}$, whose images in $\hat{\cal O}_{X,z}$ yield independent variables, such that $\eI_{Y,z}=\lran{\bsymb{\xi}}=\lran{\xi_1,\dots,\xi_u}$ and $\eI_{Z,z}=\lran{\bsymb{\xi},\bsymb{\zeta}}=\lran{\xi_1,\dots,\xi_u,\zeta_1,\dots,\zeta_v}$. For $l\ges a$, a direct computation yields $\,\eI_{Y,z}^a\cap\eI_{Z,z}^l =\ouset{i=a}{l}{\sum}\lran{\bsymb{\xi}}^{i}\cdot\lran{\bsymb{\zeta}}^{l-i} =\eI_{Y,z}^a\cdot\eI_{Z,z}^{l-a},$ which implies $(\eI_{Z,z}^l+\eI_{Y,z}^{a})/{\eI_{Y,z}^{a}}\cong\eI_{Z,z}^{l}/\eI_{Y,z}^a\cdot\eI_{Z,z}^{l-a}$.
We obtain the exact sequences: \begin{equation}\label{eq:la} 0\to \frac{\eI_Y^a}{\eI_Y^{a+1}}\otimes\biggl(\frac{\eI_Z}{\eI_Y}\biggr)^{l-a} \!\to \frac{\eI_Z^l+\eI_Y^{a+1}}{\eI_Y^{a+1}} \to \frac{\eI_Z^l+\eI_Y^{a}}{\eI_Y^{a}} \to0,\quad\forall\,l\ges a+1. \end{equation} The left side is an $\eO_Y$-module: $\eI_Z/\eI_Y=\eI_{Z\subset Y}$ is the ideal of $Z\subset Y$; $\;\eI_Y^a/\eI_Y^{a+1}=\Sym^a\eN_{Y/X}^\vee$. Let $\eF$ be locally free on $X$. By the $p^\pos$-property, there is a linear function $l(k)=\cst{}_1\!\cdot\,k+\cst{}_2$ (with $\cst{}_1,\cst{}_2$ independent of $\eF$) and $k_\eF,l_\eF\in\mbb N$, such that: $$ \begin{array}{rl} H^t(\eF\otimes\eI_{Y}^k)=0, & \;\forall\,t\les p,\;\forall\,k\ges k_\eF, \\[1ex] H^t(\eF_Y\otimes\eI_{Z\subset Y}^{l})=0, & \;\forall\,t\les p,\;\forall\,l\ges l_\eF, \\[1ex] H^t(\eF_Y\otimes\Sym^a\eN_{Y/X}^\vee\otimes\eI_{Z\subset Y}^{l-a})=0, & \;\forall\,t\les p,\;\forall\,a\les k,\;\forall\,l\ges l(k). \end{array} $$ The last claim is a consequence of the uniform $q$-ampleness and the sub-additivity property of the amplitude (cf.~\cite[Theorems 7.1]{tot}, \cite[Theorem 3.1]{arap}): \begin{itemize}[leftmargin=5ex] \item[--] There is a function $\text{linear}(r)$ such that, for any locally free sheaf $\eF$ whose regularity satisfies $\max\{1, \mathop{\rm reg}\nolimits(\eF_Y)\}\les r$, it holds: $\;H^t(\eF_Y\otimes\eI_{Z\subset Y}^l)=0,\;\forall\,t\les p,\; l\ges{\rm linear}(r).$ \item[--] If $a\les k$, then ${\rm reg}(\eF_Y\otimes\Sym^a\eN_{Y/X}^\vee)\les{\rm linear}(k)$. \end{itemize} Recursively for $a=1,\ldots,k$, and starting by $\frac{\eI_Z^l+\eI_Y}{\eI_Y}=\eI_{Z\subset Y}^l$, \eqref{eq:la} yields: \\[.5ex] \centerline{ $ H^t\Big( \eF\otimes\frac{\eI_Z^l+\eI_Y^{k}}{\eI_Y^{k}} \Big)=0,\;\forall t\les p,\;\forall\,l\ges l(k). $ }\\[.5ex] Now tensor $0\to\eI_Y^k\to\eI_Z^l+\eI_Y^{k}\to\frac{\eI_Z^l+\eI_Y^{k}}{\eI_Y^{k}}\to0$ by $\eF$ and deduce: $$ H^t\big(\eF\otimes(\eI_Y^k+\eI_Z^l)\big)=0, \;\forall\,t\les p,\;\;\forall\,k\ges k_\eF,\;\forall\,l\ges l(k). $$ The subschemes defined by $\eI_Y^k+\eI_Z^l$ are `asymmetric' thickenings of $Z$ in $X$. The ideals $\eJ_k:=\eI_Y^k+\eI_Z^{k+l(k)}$ satisfy~\eqref{eq:ap}: $\eJ_{k'}\subset\eI_Z^k,\;k'\ges k,\;\; \eI_Z^{m'}\subset\eJ_m,\;m'\ges m+l(m).$ \end{m-proof} \section{Connectedness properties}\label{ssct:G3} \begin{m-notation}\label{not:VfX} Let $V,X$ be irreducible projective varieties, $V\srel{f}{\to} X$ a morphism, and $Y\subset X$ a closed subscheme. \end{m-notation} The issue regarding the connectedness of pre-images of subschemes by morphisms was raised by Fulton-Hansen in the late~70s. \begin{conj-nono}{(cf.~\cite[p.\,161]{fult+hans})} Suppose that $\dim f(V)+\dim Y>\dim X$ and the normal bundle $\eN_{Y/X}$ is ample. Then $f^{-1}(Y)$ is connected. \end{conj-nono} Despite its elementary nature, it turns out that the question is surprisingly difficult to answer. It is known that, in this form, the conjecture is false; a counterexample can be found in \cite{hart-as}. However, it does hold for subvarieties of various homogeneous spaces (cf.~\cite{fult+hans,hans,falt-homog,debr2}), so it is interesting to find a framework which yields a positive answer. \subsection{Connectedness of pre-images}\label{ssct:conn} \begin{m-theorem}\label{thm:f-1} Suppose $\mathop{\rm cd}\nolimits(X\sm Y)\leq\dim(X)-2$ and $f$ is surjective. Then $f^{-1}(Y)$ is connected, in particular so is $Y$. \end{m-theorem} The statement generalizes~\cite[Corollary III.3.9]{hart-as} in two directions. 
First, it brings morphisms into the picture; this is important in view of the Fulton-Hansen problem. Second, there is no assumption on the smoothness of the varieties; this is crucial, since one cannot control the regularity of the image of an arbitrary morphism.
\begin{m-proof} We may assume that $V$ is smooth. Otherwise, let $V'\srel{\si}{\to}V$ be a (surjective) resolution of singularities; if $(f\si)^{-1}(Y)$ is connected, then so is $f^{-1}(Y)=\si\big((f\si)^{-1}(Y)\big)$. In order to prove that $Z:=f^{-1}(Y)$ is connected, it suffices to show that \begin{equation}\label{eq:res} \res_Z:H^0(V,\eO_V)\to H^0(\hat V_{Z},\eO_{\hat V_Z}) \end{equation} is an isomorphism. By formal duality~\cite[Theorem~III.3.3]{hart-as}, the right-hand side is isomorphic to $H_Z^{\dim V}(V,\omega_V)^\vee$ and the dual of $\res_Z$ fits into the exact sequence: $$ H^{\dim V-1}(V\setminus Z,\omega_V)\to H_Z^{\dim V}(V,\omega_V)\to H^{\dim V}(V,\omega_V)\to H^{\dim V}(V\setminus Z,\omega_V). $$ The rightmost cohomology group vanishes, by Lichtenbaum's theorem. We claim that the leftmost group vanishes too. This is a consequence of Leray's spectral sequence for $f$, combined with Koll\'ar's higher direct image theorem~\cite[Theorem 2.1]{kolr-I}. Indeed, the cohomology group $H^{a+b}(V\setminus Z,\omega_V)$ can be computed using the spectral sequence whose $E_2$-term is $H^a(X\setminus Y,R^bf_*\omega_V)$. With the \textit{ad hoc} notation $$ v:=\dim V,\; x:=\dim X,\;\text{so}\;v-x=\dim(\text{generic fibre of}\;f), $$ Koll\'ar's theorem states that $R^bf_*\omega_V=0,\;\text{for}\;b\ges v-x+1.$ The restriction of $f$ to $V\setminus Z$ is proper, so the higher direct images are coherent (in fact torsion free, by {\textit{loc.\,cit.}}). The assumption on the cohomological dimension of $X\setminus Y$ yields $H^a(X\setminus Y,R^bf_*\omega_V)=0,\;\text{for}\;a\geq x-1.$ \end{m-proof}
\begin{m-corollary}\label{cor:etale} The induced homomorphism $\pi_1^{alg}(f^{-1}(Y))\to\pi_1^{alg}(V)$ between the algebraic fundamental groups is surjective. \end{m-corollary}
\begin{m-proof} For any \'etale morphism $W\srel{g}{\to}V$, $(fg)^{-1}(Y)=g^{-1}\big(f^{-1}(Y)\big)$ is connected. \end{m-proof}
In general, for arbitrary $V$, there is no control on the homomorphism~\eqref{eq:res}.
\begin{m-proposition}\label{thm:rtl} Let the situation be as above and suppose moreover that $V$ is normal and has rational singularities. Then $\res_Z:H^0(V,\eO_V)\to H^0(\hat V_{Z},\eO_{\hat V_Z})$ is an isomorphism. \end{m-proposition}
Subschemes satisfying this property are called G1 in Hironaka-Matsumura~\cite{hir+mats}.\smallskip
\begin{m-proof} The argument is the same as above. Kempf's criterion implies that $V$ is Cohen-Macaulay and, at the first step of the previous proof, the resolution $V'\srel{\si}{\to} V$ has the property that $\si_*\omega_{V'}=\omega_V$. Consequently, formal duality---needed for dualizing~\eqref{eq:res}---holds on $V$. (Cohen-Macaulayness suffices for the Serre duality in the proof of~\cite[Theorem~III.3.3]{hart-as}.) For $Z':=\si^{-1}(Z)$, one has: $\,H^{j}(V'\setminus Z',\omega_{V'})\cong H^{j}(V\setminus Z,\omega_V),\;j=v-1,v;$ this transfers the computation from $V$ to $V'$, which is smooth. \end{m-proof}
\begin{m-remark}\label{rmk:optim} \begin{enumerate}[leftmargin=5ex] \item Lichtenbaum's theorem states that $\mathop{\rm cd}\nolimits(X\sm Y)\leq x-1$, but it is unclear when this bound is attained.
Our result implies that, contrary to intuition, the cohomological dimension does not drop by removing effective divisors; its maximality is not due to the existence of `disjoint divisors'. Indeed, suppose $\mathop{\rm cd}\nolimits(X\sm Y)=x-1$ and $D\subset X$ is a (complete) effective divisor, disjoint from $Y$. Then it still holds $\mathop{\rm cd}\nolimits(X\sm(Y\cup D))=x-1$; otherwise, by Theorem~\ref{thm:f-1}, $Y\cup D$ would be connected, contradicting the disjointness. The observation is false if the divisor is allowed to intersect $Y$: $\mathop{\rm cd}\nolimits(\mathbb P^2\sm[1{:}0{:}0])=1$ and it remains the same by removing a line disjoint from $[1{:}0{:}0]$. However, by removing a line passing through the point, one obtains the affine $2$-plane, whose cohomological dimension vanishes.
\item The Fulton-Hansen conjecture is false in general; a counterexample is due to Hartshorne (cf.~\cite[p.\,199]{hart-as}). Let $V{\srel{f}{\to}}X$ be an \'etale (surjective) morphism, $Y'$ a $\delta$-co\-dimensional, general complete intersection in $V$, with $2\delta\,{>}\dim V$; let $Y{:=}f(Y')$. Then $\eN_{Y/X}$ is ample, $f^{-1}(Y)$ is disconnected. What (necessarily) fails is $\mathop{\rm cd}\nolimits(X\sm Y){\leq}\dim X{-}2$. This shows that Theorem~\ref{thm:f-1} is optimal. \end{enumerate} \end{m-remark}
\subsection{Application to partially ample subvarieties}\label{ssct:applic}
We start by discussing the equidimensionality of partially ample subvarieties. In general, they need not be equidimensional: consider $X:=\mbb P^2$ and $Y:=\{x=0\}\cup\{y=z=0\}$. Then $\tld X=\Bl_Y(\mbb P^2)$ is isomorphic to the blow-up $\wtld{\mbb P^2}$ of $\mbb P^2$ at $[1:0:0]$, with exceptional divisor $E$, and $\eO_{\tld X}(E_Y)=\eO_{\mbb P^2}(1)\otimes\eO_{\wtld{\mbb P^2}}(E)$. A short computation shows that $Y$ is $1$-ample.
\begin{m-lemma}\label{lm:conn} Let $X$ be a projective variety, and suppose that $X, Y$ are both Cohen-Macaulay. If $Y$ is either $1^\apos$ or $(\dim Y-1)$-ample in $X$, then $Y$ is equidimensional. \end{m-lemma}
\begin{m-proof} Suppose $Y$ is $1^\apos$. Then it is G1: the isomorphism $H^0(\eO_X)\cong H^0(\eO_{\hat X_Y})$ holds `in finite time': with notation~\ref{def:ap>0}, one has $H^0(\eJ_m)=H^1(\eJ_m)=0$, $m\gg0$. In particular, $Y$ is connected. The same holds if $Y$ is $(\dim Y-1)$-ample. Now conclude by using the unmixedness property: local Cohen-Macaulay rings are equidimensional. \end{m-proof}
Recall that partial ampleness yields an upper bound for the cohomological dimension of the complement of a subvariety. Thus we obtain a convenient class of subvarieties for which Theorem~\ref{thm:f-1} does apply. Below is our main result: partial ampleness is a \emph{numerical condition} which ensures that pre-images are connected. We stress that, in the generality below, there are \emph{no similar statements} in the literature.
\begin{m-theorem}\label{thm:q} Let the situation be as in~\ref{not:VfX}. \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] Let $Y\subset X$ be a $\big(\dim f(V)+\dim(Y)-\dim(X)-1\big)$-ample closed subscheme. Then the following properties hold: $$\text{$f^{-1}(Y)\subset V$ is connected;\qquad $\pi_1^{alg}\big(f^{-1}(Y)\big)\to\pi_1^{alg}(V)$ is surjective.}$$ \item[\rm(ii)] In particular, suppose $Y\subset X$ is a $(\dim Y-1)$-ample subscheme and $f$ is surjective. Then $f^{-1}(Y)$ is connected.
\end{enumerate} \end{m-theorem}
\begin{m-proof} Observe that $\mathop{\rm cd}\nolimits\big(\,f(V)\sm(Y\cap f(V))\,\big)\les\mathop{\rm cd}\nolimits(X\sm Y)\les\dim f(V)-2$, and apply~\ref{thm:f-1} to the surjective morphism $f:V\to f(V)$. \end{m-proof}
\begin{m-corollary}\label{cor:fh} Let $V, Y$ be closed subschemes of $X$. Suppose $V$ is connected and $Y$ is $q$-ample in $X$, with $0\les q\les\dim V+\dim Y-\dim X-1.$ Then $Y\cap V$ is non-empty and $\big(\dim(Y\cap V)-1\big)$-ample in $V$, hence connected. \end{m-corollary}
The statement is reminiscent of a problem of Hartshorne \cite[Ch.~III, Conjecture~4.5]{hart-as}, concerning the connectedness and non-emptiness of the intersection of smooth subvarieties with ample normal bundles. According to the survey~\cite{petr}, this issue is currently still wide open.\smallskip
\begin{m-proof} The intersection $Y\cap V$ is non-empty, because $$\;\mathop{\rm cd}\nolimits(V\sm Y\cap V)\les\mathop{\rm cd}\nolimits(X\sm Y)\les\codim_XY+q-1=\dim V-2,\quad\text{(cf.~\ref{prop:N+cd}).}$$ Actually one has $\dim(Y\cap V)\ges1$, since otherwise $\mathop{\rm cd}\nolimits(V\sm Y\cap V)=\dim V-1$: there are effective divisors avoiding a finite number of points. \end{m-proof}
\begin{m-example} Suppose $Y\subset X$ is ample. Then, for any subvariety $V\subset X$ of dimension at least $\codim(Y)+1$, the intersection $Y\cap V$ is non-empty---this is already proved in~\cite{ottm}---and also connected. The Lefschetz-type hyperplane theorem in {\textit{op.\,cit.}}, Corollary~5.2---in particular the connectedness of the intersection---requires the smoothness of $V\sm(Y\cap V)$. Therefore, by pursuing this path, one cannot deduce the connectedness of pre-images, as we do in Theorem~\ref{thm:q}, because it is not possible to control the smoothness of the image of an arbitrary morphism. \end{m-example}
So far we used only the bound on the cohomological dimension of $X\sm Y$. The partial ampleness of $Y\subset X$ actually carries more information.
\begin{m-theorem} \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] Let the situation be as in~\ref{cor:fh}, with $X,V$ smooth; suppose $Y{\subset} X$ and $Y{\cap} V{\subset} V$ are lci. (The intersection is automatically lci if $\codim_X(V{\cap} Y){=}\codim_XV{+}\codim_XY.$) \item[] Then $V{\cap} Y$ is G3 in $V$. In particular, a $1^\pos$ lci subscheme of a smooth variety is G3. \item[\rm(ii)] Let the situation be as in~\ref{not:VfX}, with $f(V)$ smooth and $Y\cap f(V)$ lci. \item[] If $Y$ is $\big(\dim f(V)+\dim(Y)-\dim(X)-1\big)$-ample, then $f^{-1}(Y)$ is G3 in $V$. \end{enumerate} \end{m-theorem}
For the definition of the G3-property, the reader is invited to consult~\cite{hir+mats}.\smallskip
\begin{m-proof} (i) First note that if $V\cap Y$ has the expected codimension, then each of its components has at most that codimension, so $V\cap Y$ is equidimensional; thus $V\cap Y\subset V$ is lci. Back to the general case, the commutative diagram below shows that the exceptional divisor $E_{Y\cap V}$ is $(\dim V-2)$-ample: $$ \xymatrix@R=1.2em{ \Bl_{Y\cap V}(V)\;\ar@{^(->}[r]\ar[d]&\Bl_{Y}(X)\ar[d] \\ V\;\ar@{^(->}[r]&X. } $$ Hence $\eN_{V{\cap} Y/V}$ is $\big(\dim(V{\cap} Y)-1\big)$-ample by~\ref{prop:q1}(iii), so $V{\cap} Y$ is G2 in $V$ (cf.~\cite[\S3]{hlc-subvar}). As $\mathop{\rm cd}\nolimits(V\sm V{\cap} Y)\les\dim V-2$, Speiser's result~\cite[Corollary V.2.2]{hart-as} yields the conclusion.
\nit(ii) The proof of~\ref{thm:q} above and the previous step imply that $Y\cap f(V)$ is G3 in $f(V)$.
It remains to apply~\cite[Theorem~2.7]{hir+mats}, since $f^{-1}(Y)=f^{-1}(Y\cap f(V))$. \end{m-proof}
In Hartshorne's counterexample~\ref{rmk:optim}, $Y{\subset} X$ is not G3, but it is G2. Also, $Y$ does not possess the $1^\pos$-property. This indicates that, for being G3, the $1^\pos$-property is close to optimal; see~\cite[Proposition~V.2.1]{hart-as}. It has the advantage of being a numerical condition.
\section{Examples of partially ample subvarieties}\label{sct:expl}
We show that partially ample subvarieties occur in a variety of situations: \begin{enumerate} \item zero loci of sections in vector bundles; \item sources of Bialynicki-Birula decompositions; \item subvarieties of rational homogeneous varieties. \end{enumerate}
\subsection{Vanishing loci of sections}\label{ssct:glob-gen}
Throughout this section, $\eN$ is a vector bundle of rank $\nu$ on the smooth projective variety $X$.
\subsubsection{$q$-ample vector bundles}\label{sssct:q-vb}
\begin{m-proposition} \label{prop:q21} Suppose $\eN$ is $q$-ample and $Y$ is the zero locus of a \emph{regular} section $s$ of it. Then $Y\subset X$ is a $q$-ample subvariety. \end{m-proposition}
\begin{proof} We verify~\eqref{eq:q12} for a vector bundle $\eF$ on $X$. Since $s$ is regular, $Y$ is lci, $\codim_X(Y)=\nu$, so~\ref{prop:q1}(ii) applies. One has the resolution (cf. \cite[Theorem 3.1]{bu+ei}) \begin{equation}\label{eq:koszul-m} 0\to L^\nu_m(\eN^\vee)\to\ldots\to L^j_m(\eN^\vee)\to\ldots\to \Sym^m(\eN^\vee)\srel{s^m\ort}{-\kern-1ex-\kern-1ex\lar}\eI_Y^m\to 0,\;\;\forall\,m\ges 1, \end{equation} where $L^j_m(\eN^\vee):= \Img\Bigl( \Sym^{m-1}(\eN^\vee)\otimes\overset{j}{\hbox{$\bigwedge$}}\,\eN^\vee \srel{\phi^j_m}{-\kern-1ex\lar} \Sym^{m}(\eN^\vee)\otimes\overset{j-1}{\hbox{$\bigwedge$}}\,\eN^\vee \Bigr), 1\les j\les\nu.$ The general linear group is linearly reductive and $\phi^j_m$ is equivariant, so $L^j_m(\eN^\vee)$ is a direct summand of $\Sym^m(\eN^\vee)\otimes\overset{j-1}{\hbox{$\bigwedge$}}\,\eN^\vee$. For $m\gg0$, one has: \\ \centerline{ $H^{t+j-1}(X,\eF\otimes\oset{j-1}{\bigwedge}\eN^\vee\otimes\Sym^m\eN^\vee)=0$,\; for $1\les j\les\nu,\;\;t+\nu-1\les\dim X-q-1$. }\\[1ex] It follows that $H^t(X,\eF\otimes\eI_Y^m)=0$, for $0\les t\les\dim Y-q$. \end{proof}
\subsubsection{Globally generated vector bundles}\label{ssct:x-b}
Henceforth we assume that $\eN$ is globally generated; thus the notions of $q$-ampleness and Sommese-$q$-ampleness agree (cf.~\ref{prop:q2}). Let $Y\subset X$ be lci of codimension $\delta$, the zero locus of $s\in\Gamma(\eN):=H^0(X,\eN)$. We \emph{do not require} $s$ to be regular, so we allow $\delta<\nu$. We are going to use~\ref{prop:x-b} to estimate the ampleness of $Y$. We observe that the blow-up fits into the diagram \begin{equation}\label{eq:tld-x} \xymatrix@R=1.2em@C=1.75em{ \tld X\ar@{^(->}[r]\ar[d]_-\si\ar@<-5pt>[rrd]_-\phi & \mbb P(\eN) {=}\, \mbb P\Bigl( \mbox{$\overset{\nu-1}{\bigwedge}$}\eN^\vee\otimes\det(\eN) \Bigr)\ar@{^(->}[r] & X\times\mbb P\Bigl( \mbox{$\overset{\nu-1}{\bigwedge}$}\Gamma(\eN)^\vee \Bigr)\ar[d] \\ X&&\mbb P:=\mbb P\Bigl( \mbox{$\overset{\nu-1}{\bigwedge}$}\Gamma(\eN)^\vee \Bigr), } \end{equation} and it holds \begin{equation}\label{eq:o1} \eO_{\tld X}(E_Y)=\eO_{\mbb P(\eN)}(-1)\big|_{\tld X}= \bigl(\det(\eN)\boxtimes\eO_{\mbb P}(-1)\bigr)\big|_{\tld X}. \end{equation}
\begin{m-proposition}\label{prop:p} Suppose $\det(\eN)$ is ample.
If the dimension of the generic fibre of $\phi$ over its image is $p+1$, then $\eO_{\tld X}(E_Y)$ is $\dim\phi(\tld X)$-positive, and $Y$ is $(\dim Y-p)$-ample. \end{m-proposition} \begin{proof} The assumptions of \ref{prop:x-b} are satisfied. \end{proof} Note that the proposition applies also when only some symmetric power $\Sym^a\eN$ is globally generated. Then $s\in\Gamma(\eN)$ induces $s^a\in\Gamma(\Sym^a\eN)$ and $\eI_{\{s^a=0\}}=\eI_{\{s=0\}}^a$. By~\ref{prop:N+cd}(i), the amplitude of $\{s^a=0\}$ coincides with that of $\{s=0\}$. \subsubsection{Special Schubert subvarieties of the Grassmannian} \label{sssct:spec-grass} Let $W\subseteq\Gamma(\eN)$ be a vector subspace generating $\eN$, $\dim W=\nu+u+1$. It is equivalent to a morphism $f:X\to\Grs(W;\nu)$ to the Grassmannian of $\nu$-dimensional quotients; $\det(\eN)$ is ample when $f$ is finite. Henceforth let $X=\Grs(W;\nu)$; it is isomorphic to $\Grs(u+1;W)$, the $(u+1)$-dimensional subspaces of $W$; let $\eN$ be the universal quotient. The morphism $\phi$ in \eqref{eq:tld-x} is explicit: \begin{equation}\label{eq:q} \mbb P(\eN)\to\mbb P,\quad (x,\lran{e_x})\mt \det(\eN_x/\lran{e_x})^\vee\subset \oset{\nu-1}{\bigwedge}\eN_x^\vee\subset \oset{\nu-1}{\bigwedge}W^\vee. \end{equation} ($\lran{e_x}$ stands for the line generated by $e_x\in\eN_x$, $x\in\Grs(W;\nu)$.) The restriction to the Grassmannian corresponds to the commutative diagram \begin{equation}\label{eq:wn} \xymatrix@R=1.2em{ 0\ar[r]&\eO_{\Grs(W;\nu)}\ar[r]^-{s}\ar@{=}[d]& W\otimes\eO_{\Grs(W;\nu)}\ar[r]\ar@{->>}[d]^-{\;\beta}& W/\lran{s}\otimes\eO_{\Grs(W;\nu)}\ar[r]\ar@{->>}[d]&0 \\ &\eO_{\Grs(W;\nu)}\ar[r]^-{\beta s}&\eN\ar[r]&\eN/\lran{\beta s}\ar[r]&0. } \end{equation} Thus $\phi$ is the desingularization of the rational map \begin{equation}\label{eq:q-grs} g_s:\Grs(W;\nu)\dashto\Grs(W/\lran{s};\nu-1),\quad [W\surj N]\mt [W/\lran{s}\;\surj N/\lran{\beta s}], \end{equation} followed by the Pl\"ucker embedding of $\Grs(W/\lran{s};\nu-1)$. The indeterminacy locus of $\phi$ is $\Grs(W/\lran{s};\nu)$, so the latter is $u^\pos$ in $\Grs(W;\nu)$. The observation can be generalized. \begin{m-corollary}\label{cor:Yl} For $\ell\les\nu$, fix an $\ell$-dimensional subspace $\Lambda_\ell\subset W$. Consider the Schubert subvariety $Y_\ell:=\{U\in\Grs(u+1;W)\mid U\cap\Lambda_\ell\neq0\}.$ Then $Y_\ell$ is $\big(\ell(u+1)-1\big)^\pos.$ \end{m-corollary} Thus the Chow ring of the Grassmannian is generated by partially ample subvarieties. \begin{m-proof} Note that $Y_\ell$ is $(\nu-\ell+1)$-codimensional and is the vanishing locus of $$ s_\ell:\eO\cong\det(\Lambda_\ell\otimes\eO)\to\oset{\ell}{\bigwedge} W\otimes\eO\to\oset{\ell}{\bigwedge}\eN. $$ We are in the situation~\ref{prop:x-b}. The diagram \eqref{eq:tld-x} corresponds to the rational map $$ \phi:\Grs(u+1;W)\dashto\Grs(u+1;W/\Lambda_\ell),\quad U\mt(U+\Lambda_\ell)/\Lambda_\ell, $$ followed by a large Pl\"ucker embedding; its indeterminacy locus is precisely $Y_\ell$. Since $\phi$ is surjective, a dimension count yields the conclusion. \end{m-proof} \begin{m-remark}\label{rmk:sommese-weak} \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] Propositions~ \ref{prop:q21} and~\ref{prop:p} deal with complementary situations: $\eO_{\mbb P(\eN^\vee)}(1)$ is the pull-back of an ample line bundle, while $\eO_{\tld X}(E_Y)$ is relatively ample.
\item[\rm(ii)] The criterion \ref{prop:q21} is not optimal: by~\ref{prop:q2}, for $X=\Grs(\nu+u+1;\nu)$, the universal quotient $\eN$ is $q$-ample, with $q=\dim\mbb P(\eN^\vee)-\dim\mbb P^{\nu+u}=\dim X-(u+1)$. So $Y=\Grs(\nu+u;\nu)$, the zero locus of a section of $\eN$, is $(u+1-\nu)^\pos$; this may be negative and the estimate irrelevant. On the other hand, \ref{prop:x-b} implies that $Y$ is $u^\pos$. Moreover, for $\ell\neq1,\nu$, the section $s_\ell$ above is \emph{not regular}, so~\ref{prop:q21} does not apply anyway. \item[\rm(iii)] Subvarieties obtained as zero loci of sections in globally generated vector bundles and pull-backs of Schubert cycles appear in the recent work~\cite[\S3]{ful+leh}, in the definition of the pliant cone of a projective variety, which is a full-dimensional subcone of the nef cone---an object of central interest. The discussion above, together with Proposition~\ref{prop:pull-back}, implies that these elements of the pliant cone are in fact partially ample. \end{enumerate} \end{m-remark} \subsection{Sources of torus actions}\label{ssct:fixed} Let $X$ be a smooth projective variety with a faithful action $\lda:G_m\times X\to X$ of the multiplicative group $G_m={\Bbbk}^\times$. This determines the well-known Bialynicki-Birula---BB for short---decomposition of $X$ (cf.~\cite{bb}): \begin{itemize}[leftmargin=5ex] \item[$\bullet$] The fixed locus $X^\lda$ of the action is a disjoint union $\underset{s\in S_{\rm BB}}{\bigsqcup}\kern-1exY_s$ of smooth subvarieties. For $s\in S_{\rm BB}$, $Y_s^+:=\{x\in X\mid\underset{t\to 0}{\lim}\,\lda(t, x)\in Y_s\}$ is locally closed in $X$ (a BB-cell) and it holds: $\;X=\underset{s\in S_{\rm BB}}{\bigsqcup}\kern-1exY_s^+.$ \item[$\bullet$] The \emph{source} $Y:=Y_{\rm source}$ and the \emph{sink} $Y_{\text{\rm sink}}$ of the action are uniquely characterized by the conditions: $Y^+=Y_{\rm source}^+\subset X$ is open and $Y_{\rm sink}^+=Y_{\rm sink}$. \end{itemize} A linearization of the action in a sufficiently ample line bundle yields a $G_m$-equivariant embedding $X\subset\mbb P^N_{\Bbbk}$. There are homogeneous coordinates ${\textbf{z}}_{0}\in{\Bbbk}^{N_0+1},\ldots,{\textbf{z}}_{r}\in{\Bbbk}^{N_r+1}$ such that the $G_m$-action on $\mbb P^N_{\Bbbk}$ is: \begin{equation}\label{eq:c*} \lda\big(t,[{\textbf{z}}_{0},{\textbf{z}}_{1},\ldots,{\textbf{z}}_{r}]\big) =[{\textbf{z}}_{0},t^{m_1}{\textbf{z}}_{1},\ldots,t^{m_r}{\textbf{z}}_{r}],\quad\text{with}\;0<m_1<\ldots<m_r. \end{equation} The source and sink of $\mbb P^N$ and $X$ are, respectively: \begin{equation}\label{eq:YP} \begin{array}{lcll} \mbb P^N_{\rm source}=\{[{\textbf{z}}_{0},0,\ldots,0]\}, && \mbb P^N_{\rm sink}=\{[0,\ldots,0,{\textbf{z}}_{r}]\}, \\[1ex] Y=Y_{\rm source}=X\cap \mbb P^N_{\rm source}, && Y_{\rm sink}=X\cap \mbb P^N_{\rm sink}, \\[1ex] Y^+=X\cap (\mbb P^N_{\rm source})^+, && (\mbb P^N_{\rm source})^+ =\{[{\textbf{z}}]=[{\textbf{z}}_{0},{\textbf{z}}_{1},\ldots,{\textbf{z}}_{r}]\mid{\textbf{z}}_{0}\neq 0\}. \end{array} \end{equation} Let $m$ be the lowest common multiple of $\{m_\rho\}_{\rho=1,\dots,r}$ and $l_\rho:=m/m_\rho$. Let us denote ${\textbf{z}}_{\rho}^{l_\rho}:=(z_{\rho 0}^{l_\rho},\dots,z_{\rho N_\rho}^{l_\rho})$ and $\eI\subset\eO_{\mbb P^N}$ the sheaf of ideals generated by ${\textbf{z}}_1^{l_1},\dots,{\textbf{z}}_r^{l_r}$.
The choice of the exponents makes the assignment \begin{equation}\label{eq:phi} \mbb P^N\dashto\mbb P^{N'},\quad [{\textbf{z}}_0,{\textbf{z}}_1,\dots,{\textbf{z}}_r]\mt[{\textbf{z}}_1^{l_1},\dots,{\textbf{z}}_r^{l_r}], \end{equation} well-defined. It defines a $G_m$-invariant rational map whose indeterminacy locus is the subscheme determined by $\eI$. Then $\eJ:=\eI\otimes\eO_{X}$ defines the subscheme $Y_\eJ\subset X$ whose reduction is $(Y,\eO_Y)$. We have the diagram: \begin{equation}\label{eq:blup} \xymatrix@C=4em@R=1.2em{ \tld X:=\Bl_{Y_\eJ}(X)\ar[r]^-{\tld\iota}\ar[d]_-{\si} \ar@/^4ex/@<+.5ex>[rr]|{\;\phi_X\;} &\Bl_\eI(\mbb P^N)\ar[r]^-{\phi}\ar[d]^-{B} &\mbb P^{N'}\ar@{=}[d] \\ X\ar[r]^-\iota&\mbb P^N\ar@{-->}[r]&\mbb P^{N'} } \end{equation} \begin{m-lemma}\label{lm:B} The diagram \eqref{eq:blup} has the following properties: \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] The exceptional divisor of $B$ is $\phi$-relatively ample, hence the exceptional divisor of $\si$ is $\phi_X$-relatively ample. \item[\rm(ii)] The morphism $\phi:\Bl_{\eI}(\mbb P^{N})\to\mbb P^{N'}$ is $G_m$-invariant and \begin{equation}\label{eq:estim} \dim\phi_X(\tld X)=\dim\phi_X(X\sm Y^+)\les\dim (X\sm Y^+). \end{equation} \end{enumerate} \end{m-lemma} \begin{m-proof} (i) The subscheme determined by $\eI$ is the vanishing locus of a section in a direct sum of ample line bundles over $\mbb P^N$, so~\ref{prop:p} applies. \nit(ii) It holds: $\dim\phi_X(\Bl_\eJ(X))=\dim\ovl{\phi_X(X\sm Y)}\;\text{and}\;\; \phi_X(X\sm Y)=\phi_X(X\sm Y^+)\cup\phi_X(Y^+\sm Y).$ For $[{\textbf{z}}_0,{\textbf{z}}']\in Y^+\sm Y$ and $t\in G_m$, the $G_m$-invariance of $\phi_X$ yields: $$ \phi_X\big([{\textbf{z}}_0,{\textbf{z}}']\big)=\phi_X\big(t\times[{\textbf{z}}_0,{\textbf{z}}']\big) =\phi_X\big(\lim_{t\to\infty}t\times[{\textbf{z}}_0,{\textbf{z}}']\big). $$ But $\disp\lim_{t\to\infty}t\times[{\textbf{z}}_0,{\textbf{z}}']=[0,{\textbf{z}}'']\in X\sm Y^+$, which implies $\phi_X(Y^+\sm Y)\subset\phi_X(X\sm Y^+)$. \end{m-proof} Now we can estimate the ampleness of the source $Y$. \begin{m-theorem}\label{thm:y-sink} Let $X$ be a smooth $G_m$-variety with source $Y$, and $p:=\codim(X\sm Y^+)-1.$ The following statements hold: \begin{enumerate}[leftmargin=5ex] \item[\rm(i)] The thickening $Y_\eJ$ of $Y$ in \eqref{eq:blup} is a $(\dim Y-p)$-ample subscheme of $X$; in particular, $Y$ is a $p^\apos$ subvariety. \item[\rm(ii)] If $G_m$ acts on $\eN_{Y/X}$ by scalar multiplication, then $Y\subset X$ is $p^\pos$. \end{enumerate} \end{m-theorem} \begin{m-proof} (i) We apply Proposition~\ref{prop:x-b}: $Y_\eJ$ is a $q$-ample subscheme, with $$ q=1+\dim\phi(\tld X)-\codim_X(Y)\srel{\eqref{eq:estim}}{\les} 1+\dim(X\sm Y^+)-\codim_X(Y). $$ \nit(ii) In this case we have $Y^+\cong\unbar{\sf N}:= \Spec\big(\Sym^\bullet\eN_{Y/X}^\vee\big)$, cf. \cite[Remark, p.~491]{bb}. Thus $\unbar{\sf N}\subset X$ is open and $G_m$ acts, fibrewise over $Y$, by scalar multiplication. The inclusions $\unbar{\sf N}\subset\unbar{\sf N}_{\mbb P^N_{\rm source}/\mbb P^N} = \{\,[{\textbf{z}}_{N_0},{\textbf{z}}']\mid{\textbf{z}}_{N_0}\neq0\,\}\subset\mbb P^N$ are $G_m$-equivariant. The scalar multiplication on the coordinates ${\textbf{z}}'$ exists globally on $\mbb P^N$, so $X\subset\mbb P^N$ is invariant. Hence the exponents $l_\rho$ in \eqref{eq:phi} are all equal to one, $\eJ=\eI_Y\subset\eO_X$.
\end{m-proof} \begin{m-remark}\label{rmk:cd} $X\sm Y^+\subset X\sm Y$ is closed, so~\ref{lm:cd-apos} implies $\,\mathop{\rm cd}\nolimits(X\sm Y)=\dim(X\sm Y^+).$ This simple answer contrasts with the elaborate techniques of~\cite{ogus,falt-homog}. \end{m-remark} \begin{m-example}\label{expl:o-gr} \nit(i) Endow $W\cong{\Bbbk}^{w+1}$, $w+1$ even, with a non-degenerate, symmetric bilinear form $\beta$. Let $X:=\oGrs(u+1;W)$ be the orthogonal Grassmannian of $(u+1)$-dimensional isotropic subspaces; in particular, $w+1\ges2(u+1)$. Choose a Lagrangian decomposition $W={\Bbbk}^{(w+1)/2}\oplus{\Bbbk}^{(w+1)/2}$ such that $$ \beta=\left[\begin{array}{cc}0&\bone_{(w+1)/2}\\ \bone_{(w+1)/2}&0\end{array}\right], \quad\text{($\bone$ stands for the identity matrix)}. $$ Consider the action $G_m\srel{\lda}{\to}\SO_{(w+1)/2},\,\lda(t)=\diag\big[t^{-1},\bone_{(w-1)/2},t,\bone_{(w-1)/2}\big],$ whose source is $Y=\{U\mid s:=(1,0,\ldots,0)\in U\}\cong\oGrs(u;w-1)$. For $U\in Y$, $\lda$ acts with weight $t$ on $\eN_{Y/X,U}=\Hom(\lran{s},\lran{s}^\perp/U)$. $$ X\sm Y^+=\big\{U\in X\mid s\not\in\underset{t\to 0}{\lim}\lda(t)U\big\} = \{U\mid U\subset W':={\Bbbk}^{(w-1)/2}\oplus{\Bbbk}^{(w+1)/2}\}. $$ Let $\lran{s'}:=\Ker(\beta{{\upharpoonright}_{W'}})$: if $w=2u+1$, then $s'\in U$ for all $U\in X\sm Y^+$; for larger $w$, this is not the case. It follows: $$ \codim(X\sm Y^+)=\bigg\{ \begin{array}{cl} u&\text{if } w=2u+1;\\ u+1&\text{if }w\ges2u+3, \end{array} \;\Rightarrow\; \text{$Y\subset X$ is: } \bigg\{ \begin{array}{cl} (u-1)^\pos&\text{if }w=2u+1;\\ u^\pos&\text{if }w\ges2u+3. \end{array} $$ \nit(ii) With the previous notation, let $\omega$ be the skew-symmetric bilinear form $$ \omega=\left[\begin{array}{cc}0&\bone_{(w+1)/2}\\ -\bone_{(w+1)/2}&0\end{array}\right]. $$ Let $X:=\spGrs(u+1;W)$ be the symplectic Grassmannian of $(u+1)$-dimensional isotropic subspaces of $W$. The action of $\,\lda:G_m\to\Sp_{(w+1)/2},$ $\lda(t)=\diag\big[t^{-1},\bone_{(w-1)/2},t,\bone_{(w-1)/2}\big]$ has the source $Y=\{U\mid s:=(1,0,\ldots,0)\in U\}\cong\spGrs(u;w-1)$. Note that $\eN_{Y/X,U}\cong\Hom(\lran{s},W/U)$, so $G_m$ acts by weight $t^2$ on $\Hom(\lran{s},W/\lran{s}^\perp)$ and weight $t$ on the complement. As before, it holds $\codim(X\sm Y^+)=1+u$. Hence $Y$ is $u^\apos$; more precisely, there is a non-reduced scheme with support $Y$ which is $u^\pos$. \end{m-example} \subsection{Subvarieties of homogeneous varieties}\label{ssct:subvar-homog} Results due to Faltings, Barth-Larsen, and Ogus show that the subvarieties of homogeneous spaces enjoy positivity properties. \begin{thm-nono}{} Given a rational homogeneous variety $X=G/P$, with $G$ semi-simple; let $\ell$ be the minimal rank of its simple factors. Let $Y\subset X$ be a smooth subvariety of codimension $\delta$. The following statements hold: \begin{enumerate}[leftmargin=5ex] \item[\rm(i)]{\rm(cf. \cite[Satz~5, Satz~7]{falt-homog})} $Y$ is $(\ell-2\delta+1)^\pos$. \item[\rm(ii)] $Y\subset\mbb P^n\mbox{ is }p^\pos\;\Leftrightarrow\; \res_Y^t:H^t(\mbb P^n;\mbb Q)\to H^t(Y;\mbb Q) \mbox{ is an isomorphism},\;\forall\,t<p.$ \end{enumerate} \end{thm-nono} \begin{m-proof} (i) $\mathop{\rm cd}\nolimits(X\sm Y)\les\dim X-\ell+2\delta-2$ and $\eT_X$ is $(\dim X-\ell)$-ample. Since $\eN_{Y/X}$ is a quotient of $\eT_X$, we conclude by~\ref{prop:N+cd}. \nit(ii) $\eN_{Y/\mbb P^n}$ is ample.
By \cite[Theorem 4.4, 2.13]{ogus}, $\mathop{\rm cd}\nolimits(\mbb P^n\sm Y)<n-p$ if and only if $\res_Y^t$ is an isomorphism and the local cohomological dimension of $Y\subset\mbb P^n$ is at most $n-p$. The latter equals $\codim_{\mbb P^n}Y=n-\dim Y$, since $Y$ is smooth. \end{m-proof} Note that the techniques in~\cite{falt-homog} used for proving (i) above are involved, yet the estimate is not optimal: \ref{cor:Yl} shows that the special Schubert cycles are actually much more positive.
\section{Introduction}\label{sc:Introduction} The traditional theory of enhanced $^1$H NMR (nuclear magnetic resonance) relaxation of water due to paramagnetic transition-metal ions and lanthanide ions in aqueous solutions originates from Solomon \cite{solomon:pr1955}, Bloembergen and Morgan \cite{bloembergen:jcp1957,bloembergen:jcp1961}, a.k.a. the Solomon-Bloembergen-Morgan (SBM) model. The extended SBM model \cite{kowalewski:jcp1985} accounts for paramagnetic relaxation of inner-sphere water in the paramagnetic-ion complex \cite{solomon:pr1955,bloembergen:jcp1957,bloembergen:jcp1961}, outer-sphere water \cite{torrey:pr1953,hwang:JCP1975}, a.k.a. the Hwang-Freed (HF) model, the contact term \cite{bloembergen:jcp1957,morgan:jcp1959,bernheim:jcp1959,bloembergen:jcp1961}, the Curie term \cite{fries:jcp2003,helm:pnmrs2006}, and the electron-spin relaxation \cite{bloembergen:jcp1961,korringa:pr1962,westlund:book1995,fries:jcp2003,kowalewski:book,helm:pnmrs2006,belorizky:jcp2008}. The extended SBM model is most widely used in the interpretation of paramagnetic enhanced $^1$H relaxation of water due to Gd$^{3+}$-based contrast agents \cite{koenig:jcp1975,southwood:jcp1980,banci:1985,lauffer:cr1987,koenig:pnmrs1970,micskei:mrc1993,strandberg:jmr1996,powell:jacs1996,caravan:cr1999,rast:jcp01,bertini:book2001,borel:jacs2002,zhou:jmr2004,zhou:sap2005,yazyev:jcp2007,yazyev:ejic2008,lindgren:pccp2009,luchinat:jbio2014,aime:mp2019,fragai:cpc2019,li:jacs2019,washner:cr2019} used in clinical MRI (magnetic resonance imaging). The SBM model also forms the basis for the interpretation of paramagnetic relaxation in water-saturated sedimentary rocks \cite{kleinberg:jmr1994,foley:JMR1996,straley:sca2002,zhang:petro2003,korb:pre2009,mitchell:pnmrs2014,faux:pre2015,saidian:fuel2015}. The inner-sphere water constitutes the ligands of the Gd$^{3+}$ complex. The SBM inner-sphere model assumes a rigid Gd$^{3+}$--$^1$H dipole-dipole pair undergoing rotational diffusion, which according to the Debye model results in mono-exponential decay of the autocorrelation function. The Debye model was previously used in the Bloembergen-Purcell-Pound (BPP) model \cite{bloembergen:pr1948} for like spins (e.g. $^1$H--$^1$H pairs), and then adopted in the SBM model for unlike spins (e.g. Gd$^{3+}$--$^1$H pairs). The SBM inner-sphere model also takes the electron-spin relaxation time into account, resulting in 6 free parameters for the $^1$H NMR relaxivity $r_1$. The inner-sphere relaxation is generally considered the largest contribution to relaxivity. The outer-sphere water is less tightly bound than the inner-sphere water. The HF outer-sphere model assumes that the Gd$^{3+}$ ion and H$_2$O are two force-free hard-spheres undergoing translational diffusion, which results in stretched-exponential decay of the autocorrelation function \cite{torrey:pr1953,hwang:JCP1975}. The HF outer-sphere model adds an additional 2 free parameters, bringing the total to 8 free parameters for the extended SBM model. Relaxation from the contact term is negligible for $^1$H NMR in Gd$^{3+}$--aqua \cite{powell:jacs1996,yazyev:jcp2007}, and is therefore neglected. Likewise, the Curie term is negligible in the present case \cite{fries:jcp2003,helm:pnmrs2006}. The application of the SBM and HF models to Gd$^{3+}$--aqua therefore requires fitting 8 free parameters over a large frequency range in measured $r_1$ dispersion (NMRD).
In chelated Gd$^{3+}$ complexes, an order parameter plus a shorter correlation time \cite{lipari:jacs1982,lipari:jacs1982b} are added to the rotational motion of the complex \cite{fragai:cpc2019,aime:mp2019}, or to the electron-spin relaxation \cite{strandberg:jmr1996,zhou:sap2005,lindgren:pccp2009}, taking the total to 10 free parameters. This over-parameterized inversion problem often requires guidance in setting a range of values for the free parameters. It has also often been speculated that the model for electron-spin relaxation is inadequate \cite{kowalewski:jcp1985}. Atomistic molecular dynamics (MD) simulations can help elucidate some of these issues. MD simulations were previously used for like spins, e.g. $^1$H--$^1$H dipole-dipole pairs, such as liquid-state alkanes, aromatics, and water \cite{singer:jmr2017,asthagiri:seg2018,singer:jcp2018,asthagiri:jpcb2020}, as well as methane over a large range of densities \cite{singer:jcp2018b}. In all these cases, good agreement was found between simulated and measured $^1$H NMR relaxation and diffusion, without any adjustable parameters in the interpretation of the simulations. With the simulations thus validated against measurements, simulations can then be used to separate the intramolecular (i.e. rotational) from intermolecular (i.e. translational) contributions to relaxation, and to explore the corresponding $^1$H--$^1$H dipole-dipole autocorrelation functions in detail. For instance, MD simulations revealed for the first time that water and alkanes do not conform to the BPP model of a mono-exponential decay in the rotational autocorrelation function, except for highly symmetric molecules such as neopentane. More complex systems such as viscous polymers \cite{singer:jpcb2020} and heptane confined in a polymer matrix \cite{parambathu:jpcb2020} have also been simulated, which again showed good agreement with measurements and led to insights into the distribution in dynamic molecular modes. In this report, we extend these MD simulation techniques to Gd$^{3+}$--$^1$H dipole-dipole pairs, i.e. unlike spins, in a Gd$^{3+}$--aqua complex. The simulations show good agreement with measurements in the range $f_0 \simeq $ 5 $\leftrightarrow$ 500 MHz, without any adjustable parameters in the interpretation of the simulations, and without any relaxation models. These findings show potential for predicting $r_1$ at high frequencies in chelated Gd$^{3+}$ contrast agents used for clinical MRI. At the very least, the simulations could reduce the number of free parameters in the SBM and HF models, and help put constraints on their inherent assumptions for clinical MRI applications. \section{Methodology} \label{sc:Methodolgy} \subsection{Molecular simulation}\label{subsc:MD} To model the Gd$^{3+}$--aqua system, we use the AMOEBA polarizable force field to describe solvent water \cite{oldamoeba} and the ion \cite{Clavaguera:2006ee}. The Gd$^{3+}$ ion has an incomplete set of $4f$ orbitals, but this incomplete shell lies under the filled $5s$ and $5p$ orbitals. Thus ligand-field effects \cite{Asthagiri:jacs04} are not expected to play a part in hydration and a spherical model of the ion ought to be adequate. However, polarization effects are expected to be important given the large charge on the cation. Experimental NMR studies on Gd$^{3+}$--aqua use concentrations of about 0.3~mM. This amounts to having one GdCl$_3$ molecule in a solvent bath in a cubic cell about 176~{\AA} in size.
Such large simulations are computationally rather demanding with the AMOEBA polarizable force field. Further, at such dilutions, the ions essentially ``see'' only the water around them, and since understanding the behavior of water around the ion is of first interest, here we study a single Gd$^{3+}$ ion in a bath of 512 water molecules. (Note that within the Ewald formulation for electrostatic interactions, there is an implicit neutralizing background. This background does not impact the forces between the ion and the water molecules that are of interest here.) The partial molar volume of Gd$^{3+}$ in water at 298.15 K has been estimated by Marcus (1985) to be $-59.6$~cc/mol. We use this to fix the length of the cubic simulation cell to $24.805$~{\AA}. (From constant pressure simulations in a system of 2006 water molecules we find a Gd$^{3+}$ partial molar volume of about $-63$~cc/mol, in good agreement with the value suggested by Marcus. However, in this work, we will use the value suggested by Marcus.) All the simulations were performed using the OpenMM-7.5.1 package \cite{Eastman2017}. The van der Waals forces were switched to zero from 11~{\AA} to 12~{\AA}. The real space electrostatic interactions were cut off at 9~{\AA} and the long-range electrostatic interactions were accounted for using the particle mesh Ewald method with a relative error tolerance of 10$^{-5}$ for the forces. In the polarization calculations the (mutually) induced dipoles were converged to 10$^{-5}$~Debyes. The equations of motion were integrated using the ``middle'' leapfrog propagation algorithm with a time step of 1~fs; this combination of method and time step provides excellent energy conservation in constant energy simulations. Exploratory work shows that the NMR relaxation in the Gd$^{3+}$-aqua system is sensitive to the system temperature. We therefore additionally use a Nos{\'e}-Hoover chain \cite{nose:1984,hoover:1985} with three thermostats \cite{Zhang2019} to simulate the system at 298.15~K. The collision frequency of the thermostat was set to 100 fs to ensure canonical sampling. We carried out extensive tests with and without thermostats to ensure that the Nos\'e-Hoover thermostat does not affect dynamical properties. Our conclusions are consistent with an earlier study on good practices for calculating transport properties in simulations \cite{Maginn2018}. Initially, the system was equilibrated under $NVT$ conditions at 298.15 K for over 200~ps. We then propagated the system under the same $NVT$ conditions for 8~ns, saving frames every 0.1~ps for analysis. We used the last 6.5536~ns of simulation (equal to 65536 ($=2^{16}$) frames) for analysis. The mean temperature in the production phase was $298.15$~K with a standard error of the mean of 0.03~K. \subsection{Structure and dynamics} Fig. \ref{fg:Gr_Nr} shows the ion-water radial distribution function. The location and magnitude of the peak are in good agreement with earlier studies based on either \emph{ab initio} \cite{yazyev:jcp2007} or empirical force field-based \cite{lindgren:pccp2009} simulations. Consistent with those studies, we find that the first sphere (i.e. inner sphere) contains between $q = 8\leftrightarrow 9$ water molecules, with a mean of $q \simeq $ 8.5 consistent with published values \cite{koenig:jcp1975,banci:1985,powell:jacs1996,strandberg:jmr1996,rast:jcp01,lindgren:pccp2009,luchinat:jbio2014}.
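In passing, the coordination statistics quoted above are straightforward to recover from the saved frames by directly counting water oxygens near the ion. The following is a minimal sketch (Python with the MDAnalysis library, which we assume for illustration; the topology/trajectory file names and atom-selection strings are hypothetical placeholders); the 3.3~{\AA} cutoff anticipates the inner-sphere definition used in the residence-time analysis below.
\begin{verbatim}
# Sketch: mean inner-sphere coordination number q from saved frames.
# MDAnalysis is an assumed dependency; file names are placeholders.
import numpy as np
import MDAnalysis as mda
from MDAnalysis.lib.distances import distance_array

R_CUT = 3.3  # inner-sphere radius (Angstrom), first shell of g(r)

u = mda.Universe("gd_aqua.pdb", "gd_aqua.dcd")   # hypothetical files
gd = u.select_atoms("name Gd")
ow = u.select_atoms("name O and resname HOH")

counts = []
for ts in u.trajectory:
    # minimum-image distances from the ion to all water oxygens
    d = distance_array(gd.positions, ow.positions, box=ts.dimensions)
    counts.append(np.count_nonzero(d[0] < R_CUT))

print("mean q =", np.mean(counts))   # expect roughly 8.5 here
\end{verbatim}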
\begin{figure}[!ht] \begin{center} \includegraphics[width=1\columnwidth]{GadCleanO_ILT_Gr} \end{center} \caption{Radial distribution function $g(r)$ of water oxygen around the Gd$^{3+}$ ion; the function $n(r)$ gives the coordination number.} \label{fg:Gr_Nr} \end{figure} \subsection{Residence time analysis} To estimate the residence time of water molecules in the inner sphere, we need to keep track of each water molecule as it moves into/out of the inner sphere. To this end, we follow the residence time of a given water molecule $w$ using an indicator function $\chi_w$ that equals 1 if the water molecule is in the inner sphere and zero otherwise. The inner sphere is defined as a sphere of radius $r \leq 3.3$~{\AA} around the Gd$^{3+}$ ion; this corresponds to the first hydration shell of the ion (Fig.~\ref{fg:Gr_Nr}). We perform this analysis for all the water molecules that visit the inner sphere at least once during the simulation. (We emphasize that the time here is discrete because configurations are saved only every 100~fs.) Fig.~\ref{fg:restime} shows the trace of the indicator function for a particular water molecule. \begin{figure}[h!] \includegraphics[width=0.95\columnwidth]{ResTime2} \caption{The trace of the indicator function $\chi_w$ for part of the simulation trajectory (red curve). Note the several transient escapes of the water molecule out of the inner sphere. The dashed (blue) curve is obtained by ``windowing'' the raw data as noted in the text. $\tau$ is the length of time the water molecule spends continuously inside the inner sphere (one such domain, $\tau_i$, is shown).} \label{fg:restime} \end{figure} Note that during the approximately 4~ns window (from the $\approx 8$~ns trajectory), the water molecule makes several excursions out of the inner sphere before permanently leaving the inner sphere around 4.5~ns. The length of time that the water molecule spends continuously inside the inner sphere is denoted by $\tau_i$. As Fig.~\ref{fg:restime} shows, there can be several such islands of continuous occupation. To test whether a transient excursion is a bona fide escape from the inner sphere, we window average the data as follows: we consider a transient escape as a bona fide escape only if it persists for a defined number of consecutive time points. Fig.~\ref{fg:restime} shows the trace (blue curve) for such a ``windowing'' with a window length of 200~fs, i.e.\ two consecutive frames in the trajectory of configurations. As can be expected, windowing extends the length of time that the molecule is defined to be inside the inner sphere. For all the water molecules that visit the inner sphere, we accumulate the set $\{\tau_i\}$ and then construct the histogram $h(\tau)$. Figure~\ref{fg:residence} shows the $h(\tau)$ for the raw data and the data with a window length of 200~fs. \begin{figure}[h!] \includegraphics[width=1\columnwidth]{GadCleanO_ILT_Hist} \caption{Distribution of continuous occupancy times $\tau$, $h(\tau)$, for the raw data and a 200 fs window.} \label{fg:residence} \end{figure} The $h(\tau)$ curve shows that there are many cases where water only transiently resides in the inner sphere. But we also find several water molecules that spend upwards of 0.5 ns continuously within the inner sphere.
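The windowing rule is simple to implement: escape gaps shorter than the window are filled, and the continuous occupancy times are then just the run lengths of the (gap-filled) indicator trace. A minimal sketch (plain numpy; it assumes $\chi_w$ is stored as a 0/1 array sampled every 100~fs):
\begin{verbatim}
import numpy as np

DT = 0.1  # ps between saved frames (configurations every 100 fs)

def occupancy_times(chi, window=2):
    """Continuous occupancy times tau_i (ps) from a 0/1 trace chi_w.

    An excursion out of the inner sphere is treated as a bona fide
    escape only if it persists for at least `window` consecutive
    frames; window=2 corresponds to the 200 fs window in the text.
    """
    chi = np.asarray(chi, dtype=int).copy()
    n, i = len(chi), 0
    while i < n:                       # fill transient escape gaps
        if chi[i] == 0:
            j = i
            while j < n and chi[j] == 0:
                j += 1
            if j - i < window and 0 < i and j < n:
                chi[i:j] = 1           # short, bracketed gap: fill it
            i = j
        else:
            i += 1
    padded = np.concatenate(([0], chi, [0]))
    edges = np.flatnonzero(np.diff(padded))
    return (edges[1::2] - edges[::2]) * DT  # run lengths of 1s, ps

# accumulate {tau_i} over all visiting waters, then histogram h(tau)
\end{verbatim}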
Specifically, for the data that has not been smoothed by ``windowing'', we find three instances of water molecules continuously spending between 1 and 1.5~ns inside the inner sphere (and these also happen to be three distinct water molecules [data not shown]); with windowing using a 200~fs window, the upper limit is extended to just over 2~ns. Thus residence times $\tau_m$ between $1 \leftrightarrow 2$ ns are predicted for a complete rejuvenation of the inner sphere population. This time-scale is in accord with the range of published $^{17}$O NMR data that suggest residence times $\tau_m \simeq 1.0 \leftrightarrow 1.5$ ns \cite{southwood:jcp1980,powell:hel1993,micskei:mrc1993,helm:ccr99,borel:jacs2002}. \subsection{$^1$H NMR relaxivity} \label{subsc:Relaxation} The enhanced $^1$H NMR relaxation rate $1/T_{1}$ for water is given by the average over the $N = 1024$ $^1$H nuclei in the $L=24.805$ {\AA} box containing one paramagnetic Gd$^{3+}$ ion: \begin{align} \frac{1}{T_{1}} &= \frac{1}{N}\sum_{i = 1}^{N}\frac{1}{T_{1i}}, \label{eq:R1} \\ \frac{1}{T_{2}} &= \frac{1}{N}\sum_{i = 1}^{N}\frac{1}{T_{2i}}, \label{eq:R1_2} \end{align} where $T_{1i}$ is the $T_1$ relaxation time for the $i$'th $^1$H nucleus. The gadolinium molar concentration is given by $[Gd] =[H]/N$, where $[H] $ is the molar concentration of $^1$H in the simulation box. Equivalently, $[H] = 2[W]$ where $[W] = $ 55,705 mM is the molar concentration of H$_2$O at 25$^{\circ}$C. This leads to the following expression for the relaxivity in units of (mM$^{-1}$s$^{-1}$): \begin{align} r_1 &= \frac{1}{[Gd]} \frac{1}{T_1} = \frac{1}{[H]}\sum_{i = 1}^{N}\frac{1}{T_{1i}}, \label{eq:R2}\\ r_2 &= \frac{1}{[Gd]} \frac{1}{T_2} = \frac{1}{[H]}\sum_{i = 1}^{N}\frac{1}{T_{2i}}. \label{eq:R2_2} \end{align} Note that $r_{1,2}$ are independent of $N$ (or box size $L$, equivalently). The ``fast-exchange'' regime ($T_{1,2} \gg \tau_m$) is assumed \cite{korringa:pr1962}, and the chemical shift term in $r_2$ \cite{helm:pnmrs2006} is neglected for simplicity. Note also that the $^1$H--$^1$H dipole-dipole relaxation \cite{singer:jmr2017} is not considered in these simulations. The computation of $T_{1i,2i}$ originates from the Gd$^{3+}$--$^1$H dipole-dipole autocorrelation function $G(t)$ \cite{bloembergen:pr1948,torrey:pr1953,abragam:book,mcconnell:book,cowan:book,kimmich:book} shown in Fig. \ref{fg:Gt_Jw}(a), where $t$ is the lag time. This autocorrelation is well suited for computation using MD simulations \cite{peter:jbnmr2001,case:acr2002}. Using the convention in the text by McConnell \cite{mcconnell:book}, $G_i(t)$ in units of (s$^{-2}$) is determined by: \begin{multline} G_i(t) = \frac{1}{4} \! \left(\frac{\mu_0}{4\pi}\right)^2 \! \hbar^2 \gamma_I^2\gamma_S^2 S(S+1) \times\\ \left<\frac{(3\cos^{2}\!\theta_{i}\!(t+t')-1)}{r_{i}^3\!\left(t+t'\right)} \frac{(3\cos^{2}\!\theta_{i}\!(t')-1)}{r_{i}^3\!(t')} \right>_{\!\! t'}, \label{eq:R3} \end{multline} for the $i$'th $^1$H nucleus. $\theta_{i}$ is the angle between the Gd$^{3+}$--$^1$H vector ${\bf r}_{i} $ and the applied magnetic field ${\bf B}_0 $. $\mu_0$ is the vacuum permeability, $\hbar$ is the reduced Planck constant. $\gamma_I/2\pi = 42.58$ MHz/T is the nuclear gyro-magnetic ratio for $^1$H with spin $I = 1/2$, and $\gamma_S = 658\,\gamma_I$ is the electron gyro-magnetic ratio for Gd$^{3+}$ with spin $S = 7/2$. Note that in Eq. \ref{eq:R3} we assume a spherically symmetric (i.e.
isotropic) system, and therefore $G_i^m(t) = G_i(t)$ is independent of the order $m$, which amounts to saying that the direction of the applied magnetic field ${\bf B}_0 = B_0 \bf z$ is arbitrary. This assumption was verified in Ref. \cite{lindgren:pccp2009}. For simplicity, we therefore use the $m = 0$ harmonic $Y_2^0(\theta,\phi) = \sqrt{5/16\pi}\, (3\cos^{2}\theta-1) $ for the MD simulations \cite{singer:jmr2017}. \begin{figure}[!ht] \begin{center} \includegraphics[width=1\columnwidth]{GadCleanO_ILT_Jw_Gt_Combo} \end{center} \caption{(a) MD simulations of the autocorrelation function $G(t)$, where 1 in 10 data points are shown for clarity. Also shown is the modes expansion (Eq. \ref{eq:ILT1}) of $G(t)$, and the SBM model (Eq. \ref{eq:HS1}) with $\tau_R = \left< \tau\right> $ = 30 ps. (b) Spectral density functions $J(\omega)$ from FFT (fast Fourier transform) (Eq. \ref{eq:R10}), including the $f = 0$ data point represented as a horizontal line placed at low frequency. Also shown is the modes expansion (Eq. \ref{eq:ILT6}), and the $f^{-2}$ power-law expected in the SBM model at high frequencies.} \label{fg:Gt_Jw} \end{figure} The second-moment $\Delta\omega_i^2$ (i.e. strength) of the dipole-dipole interaction is defined as follows \cite{cowan:book}: \begin{align} G_i(0) &= \frac{1}{3} \Delta\omega_i^2. \label{eq:R4} \end{align} Assuming the angular term in Eq. \ref{eq:R3} is uncorrelated with the distance term at $t = 0$, the relation $\left<(3\cos^{2}\!\theta_{i}\!(\tau)-1)^2\right>_\tau = 4/5$ (which is independent of $i$) reduces the second moment to: \begin{align} \Delta\omega_i^2 = \frac{3}{5} \! \left(\frac{\mu_0}{4\pi}\right)^2 \! \hbar^2 \gamma_I^2\gamma_S^2 S(S+1) \left< \! \frac{1}{r_{i}^6(t')} \!\right>_{\!\! t'}. \label{eq:R5} \end{align} The next step is to take the (two-sided even) fast Fourier transform (FFT) of the $G_i(t)$ to obtain the spectral density function: \begin{align} J_i(\omega) &= 2\int_{0}^{\infty} \! G_i(t) \cos(\omega t)\, dt. \label{eq:R10} \end{align} The relaxation rates are then determined for unlike spins \cite{mcconnell:book}: \begin{align} \frac{1}{T_{1i}} &= J_i(\omega_0) + \frac{7}{3} J_i(\omega_e), \label{eq:R11} \\ \frac{1}{T_{2i}} &= \frac{2}{3}J_i(0) + \frac{1}{2}J_i(\omega_0) + \frac{13}{6} J_i(\omega_e), \label{eq:R12} \end{align} assuming $\omega_e \gg \omega_0$, where $\omega_0 = \gamma_I B_0 = 2\pi f_0$ is the $^1$H NMR resonance frequency, and $\omega_e = 658\,\omega_0$ is the electron resonance frequency. The expressions for $T_{1i,2i}$ are then summed in Eqs. \ref{eq:R2} and \ref{eq:R2_2} to compute $r_1$ and $r_2$, respectively. We also define the following quantities summed over the $N = 1024$ $^1$H nuclei: \begin{align} G(t) &= \sum\limits_{i = 1 }^{N} \! G_i(t), \label{eq:R7}\\ \Delta\omega^2 &=\sum\limits_{i = 1 }^{N} \Delta\omega_i^2 \label{eq:R8}\\ J(\omega) &=\sum\limits_{i = 1 }^{N} J_i(\omega). \label{eq:R9} \end{align} Note that the summed quantities $G(t)$, $\Delta\omega^2$, and $J(\omega)$ are independent of $N$ (or box size $L$, equivalently). The $G(t)$ simulation data is plotted in Fig.~\ref{fg:Gt_Jw}(a), while the $J(\omega)$ simulation data is plotted in Fig.~\ref{fg:Gt_Jw}(b). We also define the average correlation time $\left<\tau\right>$ as the normalized integral \cite{cowan:book}: \begin{align} \left<\tau\right> &= \frac{1}{G(0)}\int_{0}^{\infty}\! G(t)\,dt. \label{eq:tau} \end{align} The low frequency (i.e.
extreme narrowing) limit $r_1(0) = r_2(0)=r_{1,2}(0)$ can then be expressed as: \begin{align} r_{1,2}(0) =\frac{1}{[H]}\frac{20}{9} \Delta\omega^2\!\left<\tau\right>. \label{eq:R15} \end{align} Note how $r_{1,2}(0)$ (for unlike Gd$^{3+}$--$^1$H spin pairs) is a factor 2/3 less than the equivalent expression for like $^1$H--$^1$H spin pairs \cite{singer:jmr2017}. We assume the ``fast-exchange'' regime in the above formulation, which is discussed in more detail in Section \ref{subsc:SBM}. The fast-exchange regime can be inferred directly from measurements since $r_1$ increases with decreasing temperature \cite{koenig:jcp1975}. Investigations are underway to extend the simulations to the slow-exchange regime. The above analysis also assumes the electron spin is a point-dipole centered at the Gd$^{3+}$ ion \cite{kowalewski:jcp1985}. Given that the simulation agrees with measurements in the range $f_0 \simeq $ 5 $\leftrightarrow$ 500 MHz, the point-dipole approximation is considered valid in the present case for $^1$H \cite{helm:pnmrs2006}. \subsection{Expansion of $G(t)$ in terms of molecular modes}\label{subsc:ILT} The FFT result for $J(\omega)$ in Fig. \ref{fg:Gt_Jw}(b) is sparse. Besides the $f$ = 0 data point, the lowest frequency FFT data point is given by the resolution $\Delta f = 1/(2t_{max}) = 10^3$~MHz, where $t_{max} = 500$ ps is the longest lag time in $G(t)$. As an alternative to zero-padding the FFT, we seek to model $G(t)$ in terms of molecular modes. To this end, we expand $G(t)$ as \begin{align} G(t) &= \int_{0}^{\infty}\! P(\tau) \exp\left(-\frac{t}{\tau}\right) d\tau, \label{eq:ILT1} \end{align} where $P(\tau)$ is the underlying distribution in molecular correlation times, $\tau$. We solve this Fredholm integral equation of the first kind to recover the $P(\tau)$ distribution. Since $G(t)$ is available only at discrete time intervals, the inversion is an ill-posed problem. We address this by using Tikhonov regularization \cite{singer:jcp2018,singer:prb2020}, with the vector $\textbf{P}$ given by \begin{eqnarray} {\bf P} = \underset{{\bf P}\geq0}{\mathrm{arg\, min}}\,\, || {\bf G} - K \,{\bf P}||^2 + \alpha ||{\bf P}||^2. \label{eq:ILT4} \end{eqnarray} Here $\textbf{G}$ is the column vector representation of the autocorrelation function $G(t)$, $\textbf{P}$ is the column vector representation of the distribution function $P(\tau)$, $\alpha$ is the regularization parameter, and $K$ is the kernel matrix: \begin{align} K = K_{ij} = \exp\left(-\frac{t_i}{\tau_{j}}\right). \label{eq:ILT5} \end{align} The results for $P(\tau)$ are shown in Fig. \ref{fg:Ptau}, from which the following are determined: \begin{align} \left<\tau\right> &= \frac{1}{G(0)}\int_{0}^{\infty}\! P(\tau)\, \tau \,d\tau, \label{eq:ILT2} \\ G(0) &= \int_{0}^{\infty}\! P(\tau) d\tau= \frac{1}{3} \Delta\omega^2. \label{eq:ILT3} \end{align} The spectral density $J(\omega)$ is then determined from the Fourier transform (Eq. \ref{eq:R10}) of $G(t)$ (Eq. \ref{eq:ILT1}): \begin{align} J(\omega) &= \int_{0}^{\infty}\! \frac{2\tau}{1+(\omega\tau)^2} P(\tau) d\tau \, ,\label{eq:ILT6} \end{align} from which $T_{1,2}$ at any desired $f_0$ can be determined (Eqs. \ref{eq:R11} and \ref{eq:R12}). More in-depth discussions of the above procedure, loosely termed the ``inverse Laplace transform'', can be found in Refs. \cite{parambathu:jpcb2020,singer:jpcb2020,singer:jcp2018b,asthagiri:jpcb2020,wang:prb2021,imai:jpsj2021} and the supplementary material in Refs.~\cite{singer:jcp2018,singer:prb2020}.
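Since the penalty term in Eq.~\ref{eq:ILT4} can be absorbed into an augmented least-squares system, $||{\bf G} - K{\bf P}||^2 + \alpha||{\bf P}||^2 = ||[{\bf G};{\bf 0}] - [K;\sqrt{\alpha}\,I]{\bf P}||^2$, the constrained minimization reduces to an ordinary non-negative least-squares problem. A minimal sketch (Python/scipy; the $\tau$ grid follows the binning described next):
\begin{verbatim}
# Sketch: Tikhonov-regularized modes expansion of G(t).
# min ||G - K P||^2 + alpha ||P||^2 over P >= 0 is equivalent to
# NNLS on the augmented system [K; sqrt(alpha) I] P = [G; 0].
import numpy as np
from scipy.optimize import nnls

def modes_expansion(t, G, alpha=0.1,
                    tau_min=0.05, tau_max=500.0, nbins=200):
    tau = np.logspace(np.log10(tau_min), np.log10(tau_max), nbins)
    K = np.exp(-np.outer(t, 1.0 / tau))     # K_ij = exp(-t_i/tau_j)
    K_aug = np.vstack([K, np.sqrt(alpha) * np.eye(nbins)])
    G_aug = np.concatenate([G, np.zeros(nbins)])
    P, _ = nnls(K_aug, G_aug)               # P_j >= 0 enforced
    return tau, P

def spectral_density(omega, tau, P):
    # J(omega) from the recovered modes (discrete form of the
    # spectral-density integral above)
    return np.sum(2.0 * tau * P / (1.0 + (omega * tau) ** 2))
\end{verbatim}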
We hasten to add that inverting Eq.~\ref{eq:ILT1} to recover $P$ is formally not a Laplace inversion \cite{fordham:diff2017}, but this terminology is common in the literature. Possible alternatives to the above formulation are also discussed in Ref. \cite{asthagiri:jpcb2020}. $P(\tau)$ is binned from $\tau_{min} = 0.05$ ps to $\tau_{max} = 500$ ps using 200 logarithmically spaced bins. In the present case of low-viscosity fluids ($\eta \simeq $ 1 cP), the choice of $\tau_{max} $ does not impact $J(\omega)$, and is therefore {\it not} a free parameter in the analysis in terms of molecular modes. The constant ``div'' in Fig. \ref{fg:Ptau} is a ``division'' on a log-scale. More specifically, div $= \log_{10}(\tau_{i+1}) - \log_{10}(\tau_{i})$ is independent of the bin index $i$, and ensures unit area when $P(\tau)$ is of unit height and a decade wide \cite{singer:prb2020}. \begin{figure}[!ht] \begin{center} \includegraphics[width=1\columnwidth]{GadCleanO_ILT_Ptau_Norm} \end{center} \caption{Probability density function $P(\tau)$ obtained from the expansion of molecular modes (Eq. \ref{eq:ILT1}) of $G(t)$. The average correlation time $\left<\tau\right>$ (Table \ref{tb:MD}) and predicted translational correlation time $\tau_T$ (Table \ref{tb:Deff}) are shown.} \label{fg:Ptau} \end{figure} As shown in Fig. \ref{fg:Gt_Jw}(a), the residual between the $G(t)$ data and the fit using molecular modes is not dominated by Gaussian noise. As such, the regularization parameter is fixed to $\alpha =$ 10$^{-1}$ in accordance with previous studies \cite{singer:jcp2018,singer:jcp2018b,singer:jpcb2020,parambathu:jpcb2020,asthagiri:jpcb2020}. As shown in Fig. \ref{fg:Gt_Jw}(b), we find that $\alpha =$ 10$^{-1}$ gives excellent agreement with the parameter-free $J(\omega)$ from FFT, which validates the analysis in terms of molecular modes. The results in Fig. \ref{fg:Gt_Jw}(b) further emphasize the following advantages: (1) the expansion (Eq.~\ref{eq:ILT1}) filters out the noise while still honoring the FFT data (including the $f = 0$ data point), (2) Eq.~\ref{eq:ILT6} provides $J(\omega)$ for any desired $f$ value, and (3) the expansion in terms of molecular modes leads to physical insight into the distribution $P(\tau)$ of molecular correlation times $\tau$. \subsection{Diffusion}\label{subsc:Diffusion} An independent computation of translational diffusion $D_T$ was performed from the MD simulations. We calculate the mean square displacement $\left<\!\Delta r^2\right>$ of the water oxygen and Gd$^{3+}$ ion as a function of the diffusion evolution time $t$ ($<$ 10 ps), and additionally average over a sample of 50 molecules to ensure adequate statistical convergence. \begin{figure}[h!] \begin{center} \includegraphics[width=1\columnwidth]{GadCleanO_ILT_Diffusion} \end{center} \caption{MD simulations of mean-square displacement $\left<\!\Delta r^2 \right>$ versus time $t$ for bulk water, water in Gd$^{3+}$--aqua, and Gd$^{3+}$ in Gd$^{3+}$--aqua. Solid lines show the fitting region used to obtain the translational diffusion coefficient $D_{sim}$ from Eq. \ref{eq:D1} for $t > 2 $ ps, and dashed lines show the early-time regime not used in the fit. Legend indicates $D_T$ values which include the correction term (Eq. \ref{eq:D2}).} \label{fg:Diff} \end{figure} As shown in Fig.~\ref{fg:Diff}, at long times the slope of the linear diffusive regime gives the translational self-diffusion coefficient $D_{sim}$ according to Einstein's relation: \begin{equation} D_{sim} = \frac{1}{6} \frac{\delta \!
\left<\!\Delta r^2\right>}{\delta t} \, \label{eq:D1} \end{equation} where $D_{sim}$ is the diffusion coefficient obtained in the simulation using periodic boundary conditions in a cubic cell of length $L$. In the linear regression procedure, the early ballistic regime and part of the linear regime are excluded to obtain a robust estimate of $D_{sim}$. Following Yeh and Hummer \cite{yeh:jpcb2004} (see also D\"unweg and Kremer \cite{kremer:jcp93}), we obtain the diffusion coefficient for an infinite system $D_{T}$ from $D_{sim}$ using \begin{equation} D_{T} = D_{sim} + \frac{k_B T}{6 \pi \eta} \frac{\xi}{L} \label{eq:D2} \end{equation} where $\eta$ is the shear viscosity and $\xi = 2.837297$ \cite{yeh:jpcb2004} is the same quantity that arises in the calculation of the Madelung constant in electrostatics. (In the electrostatic analog of the hydrodynamic problem, $\xi/L$ is the potential at the charge site in a Wigner lattice.) For the system sizes considered in this study, $L \simeq 25$~{\AA}, the correction factor constituted $\simeq13\%$ of $D_0$, and $\simeq16\%$ of $D_W$. The correction factor was not applied to $D_{Gd}$. \subsection{Measurements} \label{subsc:measurements} We prepared a Gd$^{3+}$--aqua solution in de-ionized water at $[Gd]$ = 0.3 mM and measured $T_{1,meas}$ at a controlled temperature of 25$^{\circ}$C, using static fields at $f_0 = 2.3$ MHz with an Oxford Instruments GeoSpec2, at $f_0 = 20$ MHz with a Bruker Minispec, and at $f_0 = 500$ MHz with a Bruker 500 MHz Spectrometer. The measured relaxivity $r_1$ was determined as such: \begin{align} r_1 = \frac{1}{[Gd]}\left(\!\frac{1}{T_{1,meas}} - \frac{1}{T_{1,bulk}}\!\right). \label{eq:M1} \end{align} where $T_{1,bulk} = $ 3.13 s was found for bulk water (not de-oxygenated \cite{shikhov:amr2016}) at 2.3 MHz and 500 MHz. Field cycling $r_1$ data at $[Gd]$ = 1 mM and 25$^{\circ}$C were taken from Luchinat {\it et al.} \cite{luchinat:jbio2014} (supplementary material) using a Stelar SpinMaster 1T. The field cycling results agreed with our measurements at $f_0 = $ 2.3 MHz and 20 MHz (within $\simeq$ 5\%), while our $f_0 = 500$ MHz data significantly extend the frequency range of the measurements. \section{Results and discussions} \label{sc:Results} In this section we compare the simulated relaxivity $r_1$ with measurements. The simulated relaxivity is then discussed in the context of the Solomon-Bloembergen-Morgan (SBM) and Hwang-Freed (HF) models. The zero-field electron-spin relaxation time is then determined from $r_1$ at low frequencies. \subsection{Comparison with measurements} The results of simulated relaxivity $r_1$ (Eq. \ref{eq:R2}) are shown in Fig. \ref{fg:R1_relaxivity}, alongside corresponding measurements of a Gd$^{3+}$--aqua solution at 25$^{\circ}$C using field cycling by Luchinat {\it et al.} \cite{luchinat:jbio2014} (supplementary material), and static fields from this work. A cross-plot of simulated versus measured $r_1$ results is also shown in Fig. \ref{fg:R1_cross}. The simulation is within $\simeq$ 8\% of measurements in the range $f_0 \simeq $ 5 $\leftrightarrow$ 500 MHz. Given that there are no adjustable parameters in the interpretation of the simulations, this agreement validates our simulations at high frequencies. We note that a similar degree of agreement (within $\simeq$ 7\%) was found in previous studies of liquid alkanes and water \cite{singer:jmr2017}. For convenience, Table \ref{tb:MD} lists the average correlation time $\left<\tau\right>$ (Eq.
\ref{eq:ILT2}), the square-root $\Delta \omega$ of the second moment (i.e. strength) (Eq. \ref{eq:ILT3}), and the residence time $\tau_m$ (Fig. \ref{fg:residence}). Note that these three quantities are model free. \begin{table}[!ht] \centering \begin{tabular}{ccc|cccc|c} \hline $^{{\strut}{}}$ $\left<\tau\right>$& $\Delta \omega/2\pi$& $\tau_m$ &$q$ & $r_{in}$ & $\Delta \omega_{in}/2\pi$& $T_{1in}(0)$ & $T_{e0}$\\ $^{{\strut}{}}$ (ps) & (MHz)& (ns)& & ({\AA})& (MHz)& ($\mu$s) &(ps) \\ \hline $^{{\strut}{}}$ 30 & 38.4 & 1 $\leftrightarrow$ 2 & 8.5 & 2.97 & 9.3 & 4.4 & 180 \end{tabular} \caption{Analysis of the simulation results including: (left) mean correlation time $\left<\tau\right>$ (Eq. \ref{eq:ILT2}), square-root of second moment $\Delta\omega$ (Eq. \ref{eq:ILT3}), and residence time $\tau_m$ (Fig. \ref{fg:residence}); (middle) approximate inner-sphere quantities including coordination number $q$ (Fig. \ref{fg:Gr_Nr}), Gd$^{3+}$--$^1$H distance $r_{in}$ (Eq. \ref{eq:SBM_R}), square-root of second moment $\Delta\omega_{in}$ (Eq. \ref{eq:SBM_SM}), and relaxation time $T_{1in}(0)$ at $f_0 = 0$ (Eq. \ref{eq:SBM_T1}); (right) zero-field electron-spin relaxation time $T_{e0} $ (Eq. \ref{eq:EL2}).}\label{tb:MD} \end{table} \begin{figure}[!ht] \begin{center} \includegraphics[width=1\columnwidth]{GadCleanO_ILT_R1} \end{center} \caption{Simulated $^1$H NMR relaxivity $r_1$ of Gd$^{3+}$--aqua solution at 25$^{\circ}$C, compared with static-field measurements (this work) and field-cycling measurements (Luchinat {\it et al.} \cite{luchinat:jbio2014}, supplementary material). Also shown is the average of the low-frequency ($f_0<0.5$ MHz) measurements.} \label{fg:R1_relaxivity} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.9\columnwidth]{GadCleanO_ILT_CrossPlot} \end{center} \caption{Cross-plot of measured $r_1$ (including static-field and field-cycling measurements) versus simulations taken from Fig. \ref{fg:R1_relaxivity}, all at 25$^{\circ}$C. Dashed line is the 1-1 unity line.} \label{fg:R1_cross} \end{figure} \subsection{SBM inner-sphere model}\label{subsc:SBM} The SBM inner-sphere model assumes a rigid Gd$^{3+}$--$^1$H pair undergoing rotational diffusion, leading to the following mono-exponential decay in the autocorrelation function: \begin{align} G_{SBM}(t) = G_{SBM}(0)\exp\left(-\frac{t}{\tau_R} \right). \label{eq:HS1} \end{align} This functional form is identical to the BPP model \cite{bloembergen:pr1948}, which is based on the Debye model, where the rotational-diffusion correlation time $\tau_R$ is defined as the average time it takes the rigid pair to rotate by 1 radian. $G_{SBM}(t)$ is plotted in Fig. \ref{fg:Gt_Jw}(a) assuming $\tau_R = \left<\tau\right>$ = 30 ps. As shown in Fig. \ref{fg:Gt_Jw}(a), the mono-exponential decay in $G_{SBM}(t)$ is not consistent with the multi-exponential (i.e., stretched) decay in $G(t)$. Equivalently, $J(\omega)$ in Fig. \ref{fg:Gt_Jw}(b) does not follow the $f^{-2}$ power-law behavior at large $f$. This is expected since the simulations implicitly include both inner-sphere and outer-sphere (see below) contributions. Currently the simulations do not separate inner-sphere from outer-sphere contributions; they therefore do not clarify the origin of the multi-exponential decay in $G(t)$. It is also possible that the inner-sphere $G(t)$ itself has a multi-exponential decay of its own.
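Whatever the microscopic origin of the individual modes, the model-free quantities in Table~\ref{tb:MD} already fix the extreme-narrowing relaxivity through Eq.~\ref{eq:R15}. A quick numerical consistency check (plain Python; inputs rounded from Table~\ref{tb:MD}) reproduces the simulated low-frequency plateau quoted in Section~\ref{subsc:ESR}:
\begin{verbatim}
# Consistency check of the extreme-narrowing limit (Eq. R15)
# using the model-free values of Table 1.
import numpy as np

dw  = 2 * np.pi * 38.4e6  # sqrt of second moment, rad/s (Table 1)
tau = 30e-12              # mean correlation time <tau>, s (Table 1)
H   = 2 * 55705.0         # [H] = 2[W], in mM

r10 = (20.0 / 9.0) * dw**2 * tau / H   # mM^-1 s^-1
print(round(r10, 1))  # ~34.8, near the simulated ~34.6 mM^-1 s^-1
\end{verbatim}
The small difference from the quoted $\simeq$ 34.6 mM$^{-1}$s$^{-1}$ simply reflects the rounding of the tabulated inputs.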
The functional form of $G_{SBM}(t)$ is based on the Debye model, which was previously shown to be inaccurate when used in the BPP model for liquid alkanes and water \cite{singer:jmr2017}. It would therefore not be surprising if the inner-sphere $G(t)$ were also multi-exponential in nature. Assuming that inner-sphere relaxation dominates, and therefore that the multi-exponential decay in $G(t)$ is due to inner-sphere dynamics alone, the correlation time $\left<\tau\right> \simeq$ 30 ps is consistent with published values from the SBM inner-sphere model, where a range of $\tau_R \simeq 23 \leftrightarrow 45$ ps is reported \cite{koenig:jcp1975,banci:1985,powell:jacs1996,strandberg:jmr1996,rast:jcp01,lindgren:pccp2009,luchinat:jbio2014}. Note that $\left<\tau\right> \simeq$ 30 ps is a factor $\simeq 10$ larger than that of bulk water ($\tau_R \simeq 2.7$ ps \cite{singer:jmr2017}), which is expected given the hindered rotational motion of the Gd$^{3+}$--aqua complex. Continuing with the assumption that inner-sphere relaxation dominates, one can approximate the following inner-sphere quantities: \begin{align}\frac{1}{r_{in}^6} &\simeq \frac{1}{2q}\sum_{i=1}^N \left< \! \frac{1}{r_{i}^6(t')} \!\right>_{\!\! t'} \label{eq:SBM_R} \\ \Delta\omega^2_{in} &\simeq \frac{1}{2q}\sum_{i=1}^N\Delta\omega_i^2 \label{eq:SBM_SM} \\ \frac{1}{T_{1in}} &\simeq \frac{1}{2q}\sum_{i=1}^N\frac{1}{T_{1i}}. \label{eq:SBM_T1} \end{align} The expressions are averaged over the $2q$ $^1$H nuclei in the inner sphere, where $q = 8.5$ is the H$_2$O inner-sphere coordination number determined from $n(r)$ (Fig. \ref{fg:Gr_Nr}) at $r \simeq 3.5$ {\AA}. Note again that these equations neglect the outer-sphere contribution, implying that $\Delta\omega_{in}^2$ is an upper bound, while $r_{in}$ and $T_{1in}(0) $ are lower bounds. According to Table \ref{tb:MD}, the resulting $r_{in} \simeq $ 2.97 {\AA} is consistent with published values from the inner-sphere SBM model, where a range of $r_{in} \simeq 3.0 \leftrightarrow 3.2$ {\AA} is reported \cite{koenig:jcp1975,banci:1985,powell:jacs1996,strandberg:jmr1996,rast:jcp01,lindgren:pccp2009,luchinat:jbio2014}. The product $\Delta\omega_{in} \left<\tau\right> \simeq 0.002 $ indicates that the Redfield-Bloch-Wangsness condition ($\Delta\omega \left<\tau\right> \ll 1$) is satisfied \cite{abragam:book,mcconnell:book,cowan:book}, which justifies the relaxivity analysis used here. The fast-exchange regime \cite{korringa:pr1962} can also be verified by noting that $T_{1in}(0) \simeq$ 4.4 $\mu$s at $f_0 = 0$ is three orders of magnitude larger than $\tau_m \simeq 1$ ns, i.e. $(T_{1in} + \tau_m) \simeq T_{1in} $ can be assumed. \subsection{HF outer-sphere model}\label{subsc:HF} We now discuss the outer-sphere contribution to relaxivity, although it is generally believed (though not proven) to be smaller than the inner-sphere contribution \cite{koenig:jcp1975,lauffer:cr1987,caravan:cr1999}. The outer-sphere contribution is expected to follow the Hwang-Freed (HF) model for the relative translational diffusion between Gd$^{3+}$ and H$_2$O assuming two force-free hard-spheres \cite{hwang:JCP1975}: \begin{multline} G_{HF}(t)=G_{HF}(0)\frac{54}{\pi}\int\limits_0^\infty \frac{x^2}{81 + 9 x^2 - 2 x^4 + x^6} \times \\ \exp\left(-x^2 \frac{t}{\tau_D} \right) dx. \label{eq:HS2} \end{multline} The translational-diffusion correlation time $\tau_D$ is defined as the average time it takes the molecule to diffuse by one hard-core diameter $d$.
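The integral in Eq.~\ref{eq:HS2} has no simple closed form, but it is inexpensive to evaluate numerically. A minimal sketch (Python/scipy) returning the normalized decay $G_{HF}(t)/G_{HF}(0)$, sidestepping the prefactor by dividing out the $t=0$ value:
\begin{verbatim}
# Sketch: normalized Hwang-Freed outer-sphere decay.
import numpy as np
from scipy.integrate import quad

def g_hf(t, tau_d):
    """G_HF(t)/G_HF(0) from the Hwang-Freed integral."""
    def integrand(x, tt):
        return (x**2 / (81 + 9*x**2 - 2*x**4 + x**6)
                * np.exp(-x**2 * tt / tau_d))
    num, _ = quad(integrand, 0.0, np.inf, args=(t,))
    den, _ = quad(integrand, 0.0, np.inf, args=(0.0,))
    return num / den

# example: decay after one tau_D, using tau_D ~ 99 ps (Table 2)
print(g_hf(99.0, 99.0))
\end{verbatim}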
$G_{HF}(t)$ is a multi-exponential decay by nature, and therefore one expects the total $G(t)$ to be stretched, the extent of which depends on the relative outer-sphere and inner-sphere contributions. The correlation time $\tau_D$ can be predicted as such: \begin{align}\tau_D &= \frac{d^2}{D_{W} + D_{Gd}} = \frac{9}{4}\tau_T, \label{eq:HS3} \end{align} where the simulated diffusion coefficients of H$_2$O ($D_W$) and Gd$^{3+}$ ($D_{Gd}$) in Gd$^{3+}$--aqua are taken from Fig. \ref{fg:Diff}, the results of which are listed in Table \ref{tb:Deff}. The hard-core diameter $d$ is taken from the local maximum at $r \simeq 4.6 $ {\AA} in the pair-correlation function $g(r)$ in Fig. \ref{fg:Gr_Nr} (which is attributed to outer-sphere water). Note that the resulting $\tau_D \simeq 99$ ps is a factor $\simeq 10$ larger than that of bulk water ($\tau_D \simeq 9.0$ ps \cite{singer:jmr2017}), which is expected given the larger hard-core diameter $d \simeq 4.6$ {\AA} compared with bulk water ($d \simeq $ 2.0 {\AA} \cite{singer:jmr2017}). \begin{table}[!ht] \centering \begin{tabular}{ccc|cc} \hline $^{{\strut}{}}$ $D_{W}$& $D_{Gd}$ & $d$ &$\tau_D$ & $\tau_T$ \\ $^{{\strut}{}}$ (${\rm \AA}^2$/ps) & (${\rm \AA}^2$/ps) & (${\rm \AA}$) & (ps) & (ps) \\ \hline $^{{\strut}{}}$ 0.19 & 0.03 & 4.6 & 99 & 44 \end{tabular} \caption{Diffusion coefficients of H$_2$O ($D_W$) and Gd$^{3+}$ ($D_{Gd}$) in Gd$^{3+}$--aqua, distance of closest approach ($d$) between Gd$^{3+}$ and outer-sphere H$_2$O according to $g(r)$ (Fig. \ref{fg:Gr_Nr}), and the resulting translational-diffusion correlation time $\tau_D \,(= 9/4\,\tau_T)$ from Eq. \ref{eq:HS3}.} \label{tb:Deff} \end{table} The value $\tau_D \simeq 99$ ps is compared to the distribution $P(\tau)$ in Fig. \ref{fg:Ptau}. More specifically, the translational correlation time $\tau_T \simeq 44$ ps is plotted in Fig. \ref{fg:Ptau}, where the relation $\tau_D = 9/4 \,\tau_T$ and the origin of the factor 9/4 are explained in \cite{cowan:book,singer:jcp2018}. $\tau_T $ lies within the $P(\tau)$ distribution, indicating that the outer sphere may contribute to relaxivity. Further investigations are underway to separate inner-sphere from outer-sphere contributions in the simulations, without assuming any models. We note that $P(\tau)$ in Fig. \ref{fg:Ptau} has a small contribution at short $\tau\simeq 10^{-1}$ ps, which is a result of the sharp drop in $G(t)$ over the initial $t \simeq 0.2 $ ps (Fig. \ref{fg:Gt_Jw}(a)). This molecular mode is also present for intramolecular relaxation in liquid alkanes and water, while it is absent for intermolecular relaxation. In the case of alkanes, the ubiquitous intramolecular mode at $\tau\simeq 10^{-1}$ ps is attributed to the fast-spinning methyl end-groups \cite{singer:jcp2018}. Investigations are underway to better understand the origin of this mode in Gd$^{3+}$--aqua, which can perhaps be explained using a two rotational-diffusion model such as that found in bulk water \cite{madhavi:jpcb2017}. The origin of the other modes in $P(\tau)$ at $\tau\simeq 10^{0}$ ps and $\simeq 10^{1}$ ps is also being investigated. Finally, we note that the $r_{1}$ dispersion in Fig. \ref{fg:R1_relaxivity} results in a mild increase in the ratio $T_{1}/T_{2} = r_{2}/r_{1} \simeq 7/6$ above $f_0 \gtrsim 10$ MHz, until $f_0 \gtrsim 6$ GHz where $T_{1}/T_{2} $ increases further. Combining Eqs.
\ref{eq:R11} and \ref{eq:R12} with $J(\omega_e) = 0$ (i.e., slow-motion regime) and $J(\omega_0) = J(0)$ (i.e., fast-motion regime) accounts for the factor $T_{1}/T_{2} = $ 7/6 within the frequency range $f_0 \simeq$ 10 MHz $\leftrightarrow$ 6 GHz. The ratio $T_{1}/T_{2} = 7/6$ was also used to explain data from water-saturated sandstones \cite{foley:JMR1996}. \subsection{Electron-spin relaxation} \label{subsc:ESR} At low frequencies ($f_0 \lesssim 0.5$ MHz), the difference between $r_1$ measurements ($\simeq$ 29.7 mM$^{-1}$s$^{-1}$) and simulation ($\simeq$ 34.6 mM$^{-1}$s$^{-1}$) can be reconciled by taking the zero-field electron-spin relaxation time $T_{e0} = T_{1e}(0) = T_{2e}(0)$ into account. Assuming that the correlation times $P(\tau)$ are uncorrelated with the electron-spin relaxation times, the following expression results \cite{bloembergen:jcp1961}: \begin{align} r_{1,2}'(0) & =\frac{1}{[H]}\frac{20}{9} \Delta\omega^2\!\left<\tau'\right>, \label{eq:EL1}\\ \frac{1}{\left<\tau'\right>} &=\frac{1}{\left<\tau\right>} +\frac{1}{T_{e0}}.\label{eq:EL2} \end{align} This is equivalent to introducing an exponential decay term $\exp(-t/T_{e0})$ inside the FFT integral of Eq. \ref{eq:R10}. The fitted value of $T_{e0} \simeq 180 $ ps is determined by matching $r_1'(0)$ to the average low-frequency ($f_0 \lesssim 0.5$ MHz) measurement (Fig. \ref{fg:R1_relaxivity}). The resulting $T_{e0}\simeq 180 $ ps is consistent with the published range of $T_{e0} \simeq 96 \leftrightarrow 160 $ ps \cite{koenig:jcp1975,banci:1985,powell:jacs1996,strandberg:jmr1996,rast:jcp01,lindgren:pccp2009,luchinat:jbio2014}. Investigations are underway to incorporate $T_{1e}(\omega_e)$ and $T_{2e}(\omega_e)$ dispersion \cite{kowalewski:jcp1985,rast:jcp01,borel:jacs2002,lindgren:pccp2009} for predicting $r_1$ at low frequencies. \section{Conclusions}\label{sc:Conclusions} Atomistic MD simulations of $^1$H NMR relaxivity $r_1$ for water in the Gd$^{3+}$--aqua complex at 25$^{\circ}$C show good agreement (within $\simeq $ 8\%) with measurements in the range $f_0 \simeq $ 5 $\leftrightarrow$ 500 MHz, without any adjustable parameters in the interpretation of the simulations, and without any relaxation models. This level of agreement validates the simulation techniques and analysis of Gd$^{3+}$--$^1$H dipole-dipole relaxation for unlike spins. The simulations show potential for predicting $r_1$ at high frequencies in chelated Gd$^{3+}$ contrast agents for clinical MRI, or at the very least the simulations could reduce the number of free parameters in the Solomon-Bloembergen-Morgan (SBM) inner-sphere and Hwang-Freed (HF) outer-sphere relaxation models. Simulations suggest residence times between $\tau_m \simeq 1 \leftrightarrow$ 2 ns for a complete rejuvenation of the inner sphere waters of Gd$^{3+}$. Further, the average coordination number is $q \simeq 8.5$. These observations are consistent with previously reported interpretations of experiments using the SBM model. The autocorrelation function $G(t)$ shows a multi-exponential decay, with an average correlation time of $\left<\tau\right>\simeq$ 30 ps. The multi-exponential nature of $G(t)$ is expected given that the simulation implicitly includes both inner-sphere and outer-sphere contributions.
The results are analyzed assuming that the inner-sphere relaxation dominates, yielding approximations for the average Gd$^{3+}$--$^1$H separation $r_{in} \simeq 2.97$ {\AA} and rotational correlation time $\tau_R = \left<\tau\right>\simeq$ 30 ps, both of which are consistent with previously published values obtained using the SBM model. A distance of closest approach (i.e., hard-core diameter) for Gd$^{3+}$--H$_2$O of $d\simeq 4.64$ {\AA} is determined from the local maximum in $g(r)$ (attributed to outer-sphere water), which, together with the simulated diffusion coefficients of Gd$^{3+}$ and H$_2$O, is used to estimate the translational-diffusion correlation time $\tau_D = 9/4 \,\tau_T \simeq 99$ ps in the HF outer-sphere model. Comparing $\tau_T$ to the distribution of molecular modes $P(\tau)$ (determined from the mode expansion of $G(t)$) indicates that the outer-sphere may contribute to relaxivity. Further investigations are underway to separate inner-sphere from outer-sphere contributions in the simulations, without assuming any models.

Below $f_0 \lesssim $ 5 MHz, the simulation overestimates $r_1$ compared to measurements; this discrepancy is used to estimate the zero-field electron-spin relaxation time $T_{e0}$. The resulting fitted value $T_{e0} \simeq 180$ ps is consistent with the published range of values. Further investigations are underway to incorporate dispersion in the electron-spin relaxation time.

\section*{Acknowledgments} \label{sc:Acknow} We thank Vinegar Technologies LLC, Chevron Energy Technology Company, and the Rice University Consortium on Processes in Porous Media for financial support. We gratefully acknowledge the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy (No.\ DE-AC02-05CH11231), and the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for high-performance computer time and support.
\section{Introduction} \emph{Linear rank-width} is a linear-type width parameter of graphs motivated by the rank-width of graphs~\cite{OumS06}. The \emph{vertex-minor} relation is a graph containment relation which was introduced by Bouchet~\cite{Bouchet1987a, Bouchet1987b, Bouchet88, Bouchet1988, Bouchet1989a} in his research on circle graphs and 4-regular Eulerian digraphs. The relation plays an important role in the theory of (linear) rank-width~\cite{Oum2004a, Oum05, Oum2006a, JKO2014, Oum12}, as (linear) rank-width cannot increase when taking vertex-minors of a graph. The first result of the Graph Minor series papers is that for a fixed tree $T$, every graph of sufficiently large path-width contains a minor isomorphic to $T$~\cite{RobertsonS83}, and this was later used by Blumensath and Courcelle~\cite{BlumensathC10} to define a hierarchy of \emph{incidence graphs} based on \emph{monadic second-order transductions}. In order to obtain a similar hierarchy for graphs, still based on monadic second-order transductions, Courcelle~\cite{CourcelleBanhoff08} asked whether for a fixed tree $T$, every bipartite graph of sufficiently large linear rank-width contains a vertex-minor isomorphic to $T$. In a more general setting, one can ask whether the same holds for all graphs. \begin{QUE}\label{que:tree} For every fixed tree $T$, is there an integer $f(T)$ satisfying that every graph of linear rank-width at least $f(T)$ contains a vertex-minor isomorphic to $T$? \end{QUE} We prove that Question~\ref{que:tree} is true if and only if it is true for prime graphs with respect to split decompositions~\cite{Cunningham1982}; in other words, it is true for every class of graphs whose prime induced subgraphs have linear rank-width bounded by some fixed constant. Prime graphs are the graphs having no vertex partition $(A,B)$ with $\abs{A},\abs{B} \ge 2$ such that the set of edges joining $A$ and $B$ induces a complete bipartite graph. Since every prime induced subgraph of a distance-hereditary graph has at most $3$ vertices~\cite{Bouchet88}, our result implies that Question~\ref{que:tree} is true for distance-hereditary graphs. \begin{theorem} Let $p\ge 3$ be an integer and let $T$ be a tree. Let $G$ be a graph such that every prime induced subgraph of $G$ has linear rank-width at most $p$. If $G$ has linear rank-width at least $40(p+2)\abs{V(T)}$, then $G$ contains a vertex-minor isomorphic to $T$. \end{theorem} To prove this theorem, we essentially prove that for a fixed tree $T$, every graph admitting a split decomposition whose decomposition tree has sufficiently large path-width contains a vertex-minor isomorphic to $T$. Combined with a relation between the linear rank-width of such graphs and the path-width of their canonical decompositions, we obtain Theorem~\ref{thm:largelrw}. The vertex-minor relation is indeed necessary because there are cographs (equivalently, $P_4$-free graphs) admitting split decompositions whose decomposition trees have arbitrarily large path-width~\cite{CorneilLB1981, GP2012}. In the second part, we investigate the set of forbidden distance-hereditary vertex-minors for graphs of bounded linear rank-width. Robertson and Seymour~\cite{RS2004} showed that for every infinite sequence $G_1, G_2, \ldots $ of graphs, there exist $G_i$ and $G_j$ with $i<j$ such that $G_i$ is isomorphic to a minor of $G_j$. In other words, graphs are \emph{well-quasi-ordered} under the minor relation. 
Interestingly, this property implies that for any class $\mathcal{C}$ of graphs closed under taking minors, the set of forbidden minors for $\mathcal{C}$ is finite. Oum~\cite{Oum2004a, Oum12} partially obtained an analogous result for the vertex-minor relation; for every infinite sequence $G_1$, $G_2, \ldots$ of graphs of bounded rank-width, there exist $G_i$ and $G_j$ with $i<j$ such that $G_i$ is isomorphic to a vertex-minor of $G_j$. We obtain the following as a corollary. \begin{theorem}[\cite{Oum2004a}]\label{thm:vertexminorwqo} For every class $\mathcal{C}$ of graphs with bounded rank-width that are closed under taking vertex-minors, there is a finite list of graphs $G_1$, $G_2, \ldots, G_m$ such that a graph is in $\mathcal C$ if and only if it has no vertex-minor isomorphic to $G_i$ for some $i\in \{1,2, \ldots, m\}$. \end{theorem} Theorem~\ref{thm:vertexminorwqo} implies that for every integer $k$, the class of all graphs of (linear) rank-width at most $k$ can be characterized by a finite list of vertex-minor obstructions. However, it gives neither an explicit number of necessary vertex-minor obstructions nor a bound on the size of such graphs. Oum~\cite{Oum05} proved that for each $k$, the size of a vertex-minor obstruction for graphs of rank-width at most $k$ is at most $(6^{k+1}-1)/5$. For linear rank-width, such an upper bound on the size of vertex-minor obstructions remains an open problem. Jeong, Kwon, and Oum~\cite{JKO2014} showed that there is a set of at least $2^{\Omega(3^k)}$ vertex-minor minimal graphs for the class of graphs of linear rank-width at most $k$, such that no graph $G$ in the set has a vertex-minor isomorphic to another graph $H$ in the set with $\abs{V(G)}=\abs{V(H)}$. \begin{figure} \tikzstyle{v}=[circle, draw, solid, fill=black, inner sep=0pt, minimum width=3pt] \centering \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,.8){}; \node[v](v2) at (0,1.6){}; \node[v](v3) at (-.8,-.5){}; \node[v](v4) at (.8,-.5){}; \node[v](v5) at (-1.6,-1){}; \node[v](v6) at (1.6,-1){}; \draw (v1)--(v3)--(v4)--(v1); \draw (v1)--(v2); \draw (v3)--(v5); \draw (v4)--(v6); \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,2){}; \node[v](v2) at (1-.2,2){}; \node[v](v3) at (1.5,1+.2){}; \node[v](v4) at (1.5,3-.2){}; \node[v](v5) at (2+.2,2){}; \node[v](v6) at (3,2){}; \draw (v1)--(v2)--(v3)--(v5)--(v6); \draw (v2)--(v4)--(v5); \end{tikzpicture}\quad\quad \begin{tikzpicture}[scale=0.5] \node[v](v1) at (1,3){}; \node[v](v2) at (2,2.3){}; \node[v](v3) at (1.6,1){}; \node[v](v4) at (0.4,1){}; \node[v](v5) at (0, 2.3){}; \draw (v1)--(v2)--(v3)--(v4)--(v5)--(v1); \end{tikzpicture} \caption{The three vertex-minor obstructions for graphs of linear rank-width at most $1$.} \label{fig:vmobslrw1} \end{figure} Adler, Farley, and Proskurowski~\cite{AdlerFP11} characterized exactly the three vertex-minor obstructions for graphs of linear rank-width at most $1$, depicted in Figure~\ref{fig:vmobslrw1}, two of which are distance-hereditary. In this paper, we give a set of graphs containing all vertex-minor obstructions that are distance-hereditary, using the characterization of the linear rank-width of distance-hereditary graphs given in the companion paper \cite{AdlerKK15}. This is analogous to the characterization of the acyclic minor obstructions for graphs of path-width at most $k$, investigated by Takahashi, Ueno and Kajitani~\cite{TakahashiUK94}, and Ellis, Sudborough and Turner~\cite{EllisST94}. 
We lastly remark that we can obtain simpler proofs of known characterizations of graphs of linear rank-width at most $1$~\cite{AdlerFP11,Bui-XuanKL13} using our characterization of the linear rank-width of distance-hereditary graphs. \section{Preliminaries} In this paper, graphs are finite, simple and undirected, unless stated otherwise. Our graph terminology is standard, see for instance \cite{Diestel05}. Let $G$ be a graph. We denote the vertex set of $G$ by $V(G)$ and the edge set by $E(G)$. An edge between $x$ and $y$ is written $xy$ (equivalently $yx$). For $X\subseteq V(G)$, we denote by $G[X]$ the subgraph of $G$ induced by $X$, and let $G\setminus X:=G[V(G)\setminus X]$. For brevity, we write $G\setminus x$ for $G\setminus \{x\}$. For a vertex $x$ of $G$, let $N_G(x)$ be the set of \emph{neighbors} of $x$ in $G$, and we call $|N_G(x)|$ the \emph{degree} of $x$ in $G$. An edge $e$ of $G$ is called a \emph{cut-edge} if its removal increases the number of connected components of $G$. A vertex $x$ is a \emph{pendant} vertex if the degree of $x$ is one. Two vertices $x$ and $y$ are \emph{twins} if $N_G(x)\setminus \{y\}=N_G(y)\setminus \{x\}$. A \emph{tree} is a connected acyclic graph. A \emph{leaf} of a tree is a vertex of degree one. A \emph{sub-cubic tree} is a tree such that each vertex has degree at most three. A \emph{path} is a tree where every vertex has degree at most two. The \emph{length} of a path is the number of its edges. A \emph{star} is a tree with a distinguished vertex, called its \emph{center}, adjacent to all other vertices. A \emph{complete graph} is a graph with all possible edges. A graph $G$ is called \emph{distance-hereditary} if for every pair of vertices $x$ and $y$ of $G$, the distance between $x$ and $y$ in $G$ equals the distance between $x$ and $y$ in any connected induced subgraph containing both $x$ and $y$~\cite{BandeltM86}. It is well-known that a graph $G$ is distance-hereditary if and only if $G$ can be obtained from a single vertex by repeatedly creating pendant vertices and twins \cite{HammerM90}. \subsection{Path-width and graph minors} A \emph{path decomposition} of a graph $G$ is a pair $(P,\cB)$, where $P$ is a path and $\cB=(B_t)_{t\in V(P)}$ is a family of subsets $B_t\subseteq V(G)$, satisfying \begin{enumerate} \item For every $v\in V(G)$ there exists a $t\in V(P)$ such that $v\in B_t$. \item For every $uv\in E(G)$ there exists a $t\in V(P)$ such that $\{u,v\}\subseteq B_t$. \item For every $v\in V(G)$ the set $\{t\in V(P)\mid v\in B_t\}$ is connected in $P$. \end{enumerate} The \emph{width} of a path decomposition $(P,\cB)$ is defined as $\operatorname{w}(P,\cB):=\max\{|B_t|\mid t\in V(P)\}-1$. The \emph{path-width} of $G$ is defined as \[\pw(G):=\min\{\operatorname{w}(P,\cB)\mid (P,\cB)\text{ is a path decomposition of }G\}.\] Given a graph $G$ and an edge $xy$ of $G$, the \emph{contraction} of the edge $xy$ is the graph, denoted by $G/xy$, with vertex set $V(G)\setminus \{y\}$ and edge set $\bigl(E(G)\setminus \{yz\mid z\in N_G(y)\}\bigr)\cup \{xz\mid z\in N_G(y)\setminus \{x\}\}$. A graph $H$ is a \emph{minor} of a graph $G$ if $H$ is obtained from a subgraph of $G$ by contractions of edges. It is well-known that if $H$ is a minor of $G$, then $\pw(H)\leq \pw(G)$ \cite{RobertsonS83}. The following is now a well-established result in the Graph Minor series. \begin{theorem}[\cite{BienstockRST91}]\label{thm:pathwidththeorem} For every forest $F$, every graph with path-width at least $|V(F)|-1$ has a minor isomorphic to $F$. 
\end{theorem} We finally recall the following theorem, which characterizes the path-width of trees and was used for computing their path-width in linear time, and also for computing the acyclic minor obstructions for path-width. \begin{theorem}[\cite{EllisST94,TakahashiUK94}]\label{thm:maintreeforpathwidth} Let $T$ be a tree and let $k\geq 1$. The following are equivalent. \begin{enumerate} \item $T$ has path-width at most $k$. \item For every vertex $x$ of $T$, at most two of the subtrees of $T\setminus x$ have path-width $k$ and all other subtrees of $T\setminus x$ have path-width at most $k-1$. \item $T$ has a path $P$ such that for each vertex $v$ of $P$ and each component $T'$ of $T\setminus v$ not containing a vertex of $P$, $\pw(T')\le k-1$. \end{enumerate} \end{theorem} \subsection{Linear rank-width and vertex-minors}\label{subsec:lrw-vm} For sets $R$ and $C$, an \emph{$(R,C)$-matrix} is a matrix whose rows and columns are indexed by $R$ and $C$, respectively. For an $(R,C)$-matrix $M$, $X\subseteq R$, and $Y\subseteq C$, let $M[X,Y]$ be the submatrix of $M$ whose rows and columns are indexed by $X$ and $Y$, respectively. \subsubsection*{\bf Linear rank-width} Let $G$ be a graph. We denote by $A_G$ the \emph{adjacency matrix} of $G$ over the binary field. For a graph $G$, we let $\cutrk_G^*:2^{V(G)}\times 2^{V(G)} \to \mathbb{Z}$ be such that $\cutrk_G^*(X,Y):=\rank(A_G[X,Y])$ for all $X,Y\subseteq V(G)$. The \emph{cut-rank function} of $G$ is the function $\cutrk_G:2^{V(G)}\rightarrow \mathbb{Z}$ where for each $X\subseteq V(G)$, \[\cutrk_G(X):= \cutrk_G^*(X,V(G)\setminus X).\] An ordering $(x_1, \ldots, x_n)$ of the vertex set $V(G)$ is called a \emph{linear layout} of $G$. If $\abs{V(G)}\ge 2$, then the \emph{width} of a linear layout $(x_1,\ldots, x_n)$ of $G$ is defined as \[\max_{1\le i\le n-1}\{\cutrk_G(\{x_1,\ldots,x_i\})\}.\] The \emph{linear rank-width} of $G$, denoted by $\lrw(G)$, is defined as the minimum width over all linear layouts of $G$ if $\abs{V(G)}\ge 2$, and otherwise, let $\lrw(G):=0$. Caterpillars and complete graphs have linear rank-width at most $1$. Ganian~\cite{Ganian10} gave a characterization of the graphs of linear rank-width at most $1$, and called them \emph{thread graphs}. Adler and Kant\'{e}~\cite{AdlerK13} showed that linear rank-width and path-width coincide on forests, and therefore, there is a linear time algorithm to compute the linear rank-width of forests. It is easy to see that the linear rank-width of a graph is the maximum over the linear rank-widths of its connected components. The following is folklore and admits an easy proof. \begin{lemma}\label{lem:folklore} Let $G$ be a graph and let $(x_1,\ldots,x_n)$ be a linear layout of $G$ of width $k\geq 1$. If the graph $G'$ is obtained from $G$ by creating a twin vertex $x$ to $x_1$ (resp. to $x_n$), then $(x,x_1,\ldots,x_n)$ (resp. $(x_1,\ldots,x_n,x)$) is a linear layout of $G'$ of width $k$. \end{lemma} \begin{proof} Assume first that $x$ is a twin vertex of $x_1$. Then $\cutrk_{G'}(\{x\})\leq 1$ trivially. Now, for each $1\leq i \leq n-1$, the row of $A_{G'}[\{x,x_1,\ldots,x_i\},\{x_{i+1},\ldots,x_n\}]$ indexed by $x$ is the same as the row indexed by $x_1$. Hence, \[\rank(A_{G'}[\{x,x_1,\ldots,x_i\},\{x_{i+1},\ldots,x_n\}])=\rank(A_{G'}[\{x_1,\ldots,x_i\},\{x_{i+1},\ldots,x_n\}])\leq k\] for each $1\leq i \leq n-1$. Now, the case when $x$ is a twin vertex of $x_n$ follows from the previous argument because $(x_n,\ldots,x_1)$ is also a linear layout of $G$ of width $k$, as the function $\cutrk_G$ is symmetric. 
\end{proof} \subsubsection*{\bf Vertex-minors} For a graph $G$ and a vertex $x$ of $G$, the \emph{local complementation at $x$} of $G$ is an operation to replace the subgraph induced by the neighbors of $x$ with its complement. The resulting graph is denoted by $G*x$. If $H$ can be obtained from $G$ by applying a sequence of local complementations, then $G$ and $H$ are called \emph{locally equivalent}. A graph $H$ is called a \emph{vertex-minor} of a graph $G$ if $H$ can be obtained from $G$ by applying a sequence of local complementations and deletions of vertices. \begin{lemma}[\cite{Oum05}] \label{lem:vm-rw} Let $G$ be a graph and let $x$ be a vertex of $G$. Then for every subset $X$ of $V(G)$, we have $\cutrk_G(X)=\cutrk_{G*x}(X)$. Therefore, every vertex-minor $H$ of $G$ satisfies that $\lrw(H) \leq \lrw(G)$. \end{lemma} For an edge $xy$ of $G$, let $W_1:=N_G(x)\cap N_G(y)$, $W_2:=(N_G(x)\setminus N_G(y))\setminus \{y\}$, and $W_3:=(N_G(y)\setminus N_G(x))\setminus \{x\}$. The \emph{pivoting on $xy$} of $G$, denoted by $G\wedge xy$, is the operation to complement the adjacencies between distinct sets $W_i$ and $W_j$, and swap the vertices $x$ and $y$. It is known that $G\wedge xy=G*x*y*x=G*y*x*y$ \cite{Oum05}. \subsection{Split decompositions and local complementations}\label{subsec:splitdecs} All the materials presented in this section are already discussed in the companion paper \cite{AdlerKK15} and we present here only the necessary definitions for completeness. Let $G$ be a connected graph. A \emph{split} in $G$ is a vertex partition $(X,Y)$ of $G$ such that $|X|,|Y|\geq 2$ and $\rank (A_G[X,Y]) = 1$. In other words, $(X,Y)$ is a split in $G$ if $|X|,|Y| \geq 2$ and there exist non-empty sets $X'\subseteq X$ and $Y'\subseteq Y$ such that $\{xy\in E(G) \mid x\in X, y\in Y\} = \{xy \mid x\in X', y\in Y'\}$. Notice that not all connected graphs have a split, and those that do not have a split are called \emph{prime} graphs. A \emph{marked graph} $D$ is a connected graph $D$ with a set of edges $M(D)$, called \emph{marked edges}, that form a matching such that every edge in $M(D)$ is a cut-edge. The ends of the marked edges are called \emph{marked vertices}, and the components of $(V(D), E(D)\setminus M(D))$ are called \emph{bags} of $D$. The edges in $E(D)\setminus M(D)$ are called \emph{unmarked edges}, and the vertices that are not marked vertices are called \emph{unmarked vertices}. If $(X,Y)$ is a split in $G$, then we construct a marked graph $D$ that consists of the vertex set $V(G) \cup \{x',y'\}$ for two distinct new vertices $x',y'\notin V(G)$ and the edge set $E(G[X]) \cup E(G[Y]) \cup \{x'y'\} \cup E'$ where we define $x'y'$ as marked and \begin{align*} E' &:= \{x'x\mid x\in X\ \textrm{and there exists $y\in Y$ such that $xy\in E(G)$}\} \cup\\ & \qquad \{y'y \mid y\in Y\ \textrm{and there exists $x\in X$ such that $xy\in E(G)$}\}. \end{align*} The marked graph $D$ is called a \emph{simple decomposition of} $G$. A \emph{split decomposition} of a connected graph $G$ is a marked graph $D$ defined inductively to be either $G$ or a marked graph defined from a split decomposition $D'$ of $G$ by replacing a component $H$ of $(V(D'),E(D')\setminus M(D'))$ with a simple decomposition of $H$. For a marked edge $xy$ in a split decomposition $D$, the \emph{recomposition of $D$ along $xy$} is the split decomposition $D':=(D\wedge xy) \setminus \{x,y\}$. For a split decomposition $D$, let $\origin{D}$ denote the graph obtained from $D$ by recomposing all marked edges. 
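Before moving on, we note that the notions defined so far are easy to experiment with computationally. The following is a minimal sketch (in Python with numpy; the function names are ours and not from the literature) of local complementation, pivoting, and the cut-rank function of Section~\ref{subsec:lrw-vm}, with all arithmetic over the binary field.

\begin{verbatim}
import itertools
import numpy as np

def local_complement(A, x):
    # G*x: complement the subgraph induced by the neighbors of x.
    A = A.copy()
    nbrs = np.flatnonzero(A[x])
    for u, v in itertools.combinations(nbrs, 2):
        A[u, v] ^= 1
        A[v, u] ^= 1
    return A

def pivot(A, x, y):
    # G ^ xy = G*x*y*x; x and y must be adjacent.
    assert A[x, y] == 1
    return local_complement(local_complement(local_complement(A, x), y), x)

def cut_rank(A, X):
    # cutrk_G(X): rank of A_G[X, V(G)\X] over GF(2),
    # computed by Gaussian elimination.
    rows = sorted(X)
    cols = [v for v in range(A.shape[0]) if v not in X]
    M = A[np.ix_(rows, cols)] % 2
    rank = 0
    for c in range(M.shape[1]):
        piv = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if piv is None:
            continue
        M[[rank, piv]] = M[[piv, rank]]  # swap rows
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]          # eliminate column c
        rank += 1
    return rank
\end{verbatim}

In accordance with Lemma~\ref{lem:vm-rw}, cut_rank(A, X) is unchanged when A is replaced by local_complement(A, x), for every vertex x and every set X; this is easy to verify on small adjacency matrices.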
Note that if $D$ is a split decomposition of $G$, then $\origin{D}=G$. Since each marked edge of a split decomposition $D$ is a cut-edge and all marked edges form a matching, if we contract all unmarked edges in $D$, then we obtain a tree. We call it the \emph{decomposition tree of $G$ associated with $D$} and denote it by $T_D$. To distinguish the vertices of $T_D$ from the vertices of $G$ or $D$, the vertices of $T_D$ will be called \emph{nodes}. Obviously, the nodes of $T_D$ are in bijection with the bags of $D$. Two bags of $D$ are called \emph{adjacent bags} if their corresponding nodes in $T_D$ are adjacent. A split decomposition $D$ of $G$ is called a \emph{canonical split decomposition} (or \emph{canonical decomposition} for short) if each bag of $D$ is either a prime, a star, or a complete graph, and $D$ is not the refinement of a decomposition with the same property. The following is due to Cunningham and Edmonds \cite{CunninghamE80}, and Dahlhaus \cite{Dahlhaus00}. \begin{theorem}[\cite{CunninghamE80,Dahlhaus00}] \label{thm:CED} Every connected graph $G$ has a unique canonical decomposition, up to isomorphism, and it can be computed in time $\cO(|V(G)| +|E(G)|)$. \end{theorem} By Theorem~\ref{thm:CED}, we may speak of the canonical decomposition of a connected graph $G$, because all canonical decompositions of $G$ are isomorphic. Let $D$ be a split decomposition of a connected graph $G$ with bags that are either primes, complete graphs or stars (it is not necessarily a canonical decomposition). The \emph{type of a bag} of $D$ is either $P$, $K$, or $S$ depending on whether it is a prime, a complete graph, or a star. The \emph{type of a marked edge} $uv$ is $AB$ where $A$ and $B$ are the types of the bags containing $u$ and $v$ respectively. If $A=S$ or $B=S$, then we can replace $S$ by $S_p$ or $S_c$ depending on whether the end of the marked edge is a leaf or the center of the star. \begin{theorem}[\cite{Bouchet88}]\label{thm:can-forbid} Let $D$ be a split decomposition of a connected graph with bags that are either primes, complete graphs, or stars. Then $D$ is a canonical decomposition if and only if it has no marked edge of type $KK$ or $S_pS_c$. \end{theorem} A canonical decomposition $D$ is an \emph{$\mathcal{S}$-decomposition} if every bag of $D$ is a star bag whose center is an unmarked vertex. We will use the following characterizations of trees and of distance-hereditary graphs. \begin{theorem}[\cite{Bouchet88}]\label{thm:Bouchet88}\hfill \begin{enumerate} \item A connected graph is distance-hereditary if and only if each bag of its canonical decomposition is of type K or S. \item A connected graph is a tree if and only if its canonical decomposition is an $\mathcal{S}$-decomposition. \end{enumerate} \end{theorem} We now relate the split decompositions of a graph to those of its locally equivalent graphs. Let $D$ be a split decomposition of a connected graph. A vertex $v$ of $D$ \emph{represents} an unmarked vertex $x$ (or is a \emph{representative} of $x$) if either $v=x$ or there is a path of even length from $v$ to $x$ in $D$ starting with a marked edge such that marked edges and unmarked edges appear alternately in the path. Two unmarked vertices $x$ and $y$ are \emph{linked} in $D$ if there is a path from $x$ to $y$ in $D$ such that unmarked edges and marked edges appear alternately in the path. \begin{lemma}[\cite{AdlerKK15}] \label{lem:represent} Let $D$ be a split decomposition of a connected graph. 
Let $v'$ and $w'$ be two vertices in the same bag of $D$, and let $v$ and $w$ be two unmarked vertices of $D$ represented by $v'$ and $w'$, respectively. The following are equivalent. \begin{enumerate} \item $v$ and $w$ are linked in $D$. \item $vw\in E(\origin{D})$. \item $v'w' \in E(D)$. \end{enumerate} \end{lemma} A \emph{local complementation} at an unmarked vertex $x$ in a split decomposition $D$, denoted by $D*x$, is the operation to replace each bag $B$ containing a representative $w$ of $x$ with $B*w$. Observe that $D*x$ is a split decomposition of $\origin{D}*x$, and $M(D) = M(D*x)$. Two split decompositions $D$ and $D'$ are \emph{locally equivalent} if $D$ can be obtained from $D'$ by applying a sequence of local complementations at unmarked vertices. \begin{lemma}[\cite{Bouchet88}]\label{lem:localdecom} Let $D$ be the canonical decomposition of a connected graph. If $x$ is an unmarked vertex of $D$, then $D*x$ is the canonical decomposition of $\origin{D}*x$. \end{lemma} \begin{remark}[\cite{AdlerKK15}] \label{rem:localdecom} If $D$ is a canonical decomposition and $D'=D*x$ for some unmarked vertex $x$ of $D$, then $T_{D'}$ and $T_D$ are isomorphic because $M(D)=M(D')$. Thus, for every node $v$ of $T_D$ associated with a bag $B$ of $D$, its corresponding node $v'$ in $T_{D'}$ is associated in $D'$ with either \begin{enumerate} \item $B$ if $x$ has no representative in $B$, or \item $B*w$ if $B$ has a representative $w$ of $x$. \end{enumerate} For easier arguments in several places, if $T_D$ is given for $D$, then we assume that $T_{D'}=T_D$ for every split decomposition $D'$ locally equivalent to $D$. For a canonical decomposition $D$ and a node $v$ of its decomposition tree, we write $\bag{D}{v}$ to denote the bag of $D$ corresponding to $v$. \end{remark} Let $x$ and $y$ be linked unmarked vertices in a split decomposition $D$, and let $P$ be the alternating path in $D$ linking $x$ and $y$. Observe that each bag contains at most one unmarked edge in $P$. Notice also that if $B$ is a bag of type $S$ containing an unmarked edge of $P$, then the center of $B$ is a representative of either $x$ or $y$. The \emph{pivoting on $xy$ of $D$}, denoted by $D\wedge xy$, is the split decomposition obtained as follows: for each bag $B$ containing an unmarked edge of $P$, if $v, w\in V(B)$ represent respectively $x$ and $y$ in $D$, then we replace $B$ with $B\wedge vw$. (It is worth noticing that by Lemma~\ref{lem:represent}, we have $vw\in E(B)$, hence $B\wedge vw$ is well-defined.) \begin{lemma}[\cite{AdlerKK15}] \label{lem:pivotdecom} Let $D$ be a split decomposition of a connected graph. If $xy\in E(\origin{D})$, then $D\wedge xy=D*x*y*x$. \end{lemma} As a corollary of Lemmas \ref{lem:localdecom} and \ref{lem:pivotdecom}, we get the following. \begin{corollary}[\cite{AdlerKK15}]\label{cor:pivotdecom} Let $D$ be the canonical decomposition of a connected graph. If $xy\in E(\origin{D})$, then $D\wedge xy$ is the canonical decomposition of $\origin{D}\wedge xy$. \end{corollary} \subsection{The linear rank-width of distance-hereditary graphs} \label{sec:dhandthread} We present here the characterization of the linear rank-width of distance-hereditary graphs in terms of their canonical decompositions \cite{AdlerKK15}. Let $G$ be a connected distance-hereditary graph and let $D$ be its canonical decomposition. 
For a bag $B$ of $D$ and a component $T$ of $D\setminus V(B)$, let us denote by $\zeta_b(D,B,T)$ and $\zeta_t(D,B,T)$ the adjacent marked vertices of $D$ that are in $V(B)$ and in $V(T)$, respectively. Observe that $\zeta_t(D,B,T)$ is not incident with any marked edge in $T$. So, when we take a sub-decomposition $T$ from $D$, we regard $\zeta_t(D,B,T)$ as an unmarked vertex of $T$. For an unmarked vertex $y$ in $D$ and a bag $B$ of $D$ containing a marked vertex that represents $y$, let $T$ be the component of $D\setminus V(B)$ containing $y$, and let $v$ and $w$ be adjacent marked vertices of $D$ where $v\in V(T)$ and $w\in V(B)$. We define the \emph{limb} $\limb:=\limb_D[B,y]$ with respect to $B$ and $y$ as follows: \begin{enumerate} \item if $B$ is of type $K$, then $\limb:=T*v\setminus v$, \item if $B$ is of type $S$ and $w$ is a leaf, then $\limb:=T\setminus v$, \item if $B$ is of type $S$ and $w$ is the center, then $\limb:=T\wedge vy \setminus v$. \end{enumerate} Since $v$ becomes an unmarked vertex in $T$, the limb is well-defined and it is a split decomposition. While $T$ is a canonical decomposition, $\limb$ may fail to be a canonical decomposition, because deleting $v$ may create a bag of size $2$. Let us analyze the cases when such a bag appears, and describe how to transform it into a canonical decomposition. Suppose that a bag $B'$ of size $2$ appears in $\limb$ by deleting $v$. If $B'$ has no adjacent bags in $\limb$, then $B'$ itself is a canonical decomposition. Otherwise we have two cases. \begin{enumerate} \item ($B'$ has one adjacent bag $B_1$.) \\ If $v_1\in V(B_1)$ is the marked vertex adjacent to a vertex of $B'$ and $r$ is the unmarked vertex of $B'$ in $\limb$, then we can transform the limb into a canonical decomposition by removing the bag $B'$ and replacing $v_1$ with $r$. \item ($B'$ has two adjacent bags $B_1$ and $B_2$.)\\ If $v_1\in V(B_1)$ and $v_2\in V(B_2)$ are the two marked vertices that are adjacent to the two marked vertices of $B'$, then we can first transform the limb into another decomposition by removing $B'$ and adding a marked edge $v_1v_2$. If the new marked edge $v_1v_2$ is of type $KK$ or $S_pS_c$, then by recomposing along $v_1v_2$, we finally transform the limb into a canonical decomposition. \end{enumerate} Let $\limbtil_D[B,y]$ be the canonical decomposition obtained from $\limb_D[B,y]$ in this way; we call it the \emph{canonical limb}. Let $\limbhat_D[B,y]$ be the graph obtained from $\limb_D[B,y]$ by recomposing all marked edges. See \cite[Figure 3]{AdlerKK15} for an illustration of canonical limbs. \begin{proposition}[\cite{AdlerKK15}]\label{prop:limb} Let $G$ be a connected distance-hereditary graph and let $D$ be its canonical decomposition. Let $v$ be a vertex of $T_D$ and let $y\in V(T)$ be an unmarked vertex represented by $\zeta_b(D,\bag{D}{v},T)$ for some component $T$ of $D\setminus V(\bag{D}{v})$. For every canonical decomposition $D'$ locally equivalent to $D$ and every unmarked vertex $y'$ represented by $\zeta_b(D,\bag{D}{v},T)$, the two canonical limbs $\limbtil_D[\bag{D}{v},y]$ and $\limbtil_{D'}[\bag{D}{v},y']$ are locally equivalent. \end{proposition} For a bag $B$ of $D$ and a component $T$ of $D\setminus V(B)$, we define $f_D(B,T)$ as the linear rank-width of $\limbhat_D[B,y]$ for some unmarked vertex $y\in V(T)$. By Proposition~\ref{prop:limb}, $f_D(B,T)$ depends neither on the choice of $y$ nor on the decomposition $D$, since we can take any decomposition locally equivalent to $D$. 
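Since $f_D(B,T)$ is itself the linear rank-width of a graph, any procedure that computes linear rank-width can in principle be used to evaluate it. For intuition (and for very small graphs only), linear rank-width can be computed by brute force directly from the definition in Section~\ref{subsec:lrw-vm}; the sketch below reuses cut_rank from the earlier listing and enumerates all layouts, so it is exponential in the number of vertices.

\begin{verbatim}
from itertools import permutations

def linear_rank_width(A):
    # Minimum over all linear layouts of the maximum cut-rank of a
    # proper prefix; 0 for graphs with fewer than two vertices.
    n = A.shape[0]
    if n < 2:
        return 0
    best = n
    for layout in permutations(range(n)):
        width = max(cut_rank(A, set(layout[:i])) for i in range(1, n))
        best = min(best, width)
    return best
\end{verbatim}

For example, this returns $1$ on any complete graph on at least two vertices and $2$ on the $5$-cycle, the rightmost obstruction in Figure~\ref{fig:vmobslrw1}.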
We can now state the characterization, which generalizes Theorem \ref{thm:maintreeforpathwidth}. \begin{theorem}[\cite{AdlerKK15}]\label{thm:mainchap2} Let $k$ be a positive integer and let $D$ be the canonical decomposition of a connected distance-hereditary graph $G$. Then the following are equivalent. \begin{enumerate} \item $G$ has linear rank-width at most $k$. \item For each bag $B$ of $D$, at most two components $T$ of $D\setminus V(B)$ satisfy $f_D(B,T)=k$, and every other component $T'$ of $D\setminus V(B)$ satisfies $f_D(B,T')\le k-1$. \item $T_D$ has a path $P$ such that for each node $v$ of $P$ and each component $C$ of $D\setminus V(\bag{D}{v})$ whose decomposition tree $T_C$ does not contain a node of $P$, $f_D(\bag{D}{v},C)\le k-1$. \end{enumerate} \end{theorem} \section{Path-width of decomposition trees}\label{sec:pwofcanonicaltrees} To prove Theorem~\ref{thm:largelrw}, we first observe a relation between the linear rank-width of a graph whose prime induced subgraphs have bounded linear rank-width and the path-width of its decomposition tree. \begin{proposition}\label{prop:generalupperbound} Let $p\ge 1$ be a positive integer. Let $G$ be a connected graph whose prime induced subgraphs have linear rank-width at most $p$, let $D$ be the canonical decomposition of $G$, and let $T_D$ be the decomposition tree of $D$. Then $\lrw(G) \le 2(p+2)(\pw(T_D)+1)$. \end{proposition} We need the following lemma. \begin{lemma}\label{lem:genupperbound} Let $B$ be a bag of a canonical decomposition $D$ with two unmarked vertices $a$ and $b$ and let $D_1,D_2,\ldots,D_\ell$ be the components of $D\setminus V(B)$. Let $k:=\max\{\lrw(\origin{D_i})\mid 1\leq i \leq \ell\}$. If $B$ has a linear layout of width at most $p\geq 1$ whose first and last vertices are $a$ and $b$ respectively, then $\origin{D}$ has a linear layout of width at most $2p+k$ whose first and last vertices are $a$ and $b$ respectively. \end{lemma} \begin{proof} Let $L_B:=(a,w_1,\ldots,w_m,b)$ be a linear layout of $B$ of width at most $p$. For each $1\le j\le m$, \begin{enumerate} \item if $w_j$ is an unmarked vertex, then let $L(w_j):=(w_j)$, and \item if $w_j$ is a marked vertex with a neighbor in the component $D_j$ of $D\setminus V(B)$, then let $L(w_j)$ be a linear layout of $\origin{D_{j}}\setminus \zeta_t(D, B, D_{j})$ having width at most $k$. \end{enumerate} We define the linear layout $L$ of $\origin{D}$ as \[L:=(a) \oplus L(w_1)\oplus L(w_2) \oplus \cdots \oplus L(w_m) \oplus (b).\] We claim that $L$ has width at most $2p+k$. It is sufficient to prove that for every $w\in V(\origin{D})\setminus \{a,b\}$, $\cutrk_{\origin{D}}(\{v: v\le_{L} w\})\le 2p +k$. Let $w\in V(\origin{D})\setminus \{a,b\}$ and let $S_w:=\{v:v\le_{L} w\}$ and $T_w:=V(\origin{D})\setminus S_w$. If $w$ is an unmarked vertex in $B$, then clearly, \[\cutrk_{\origin{D}}(S_{w})=\cutrk_{B}(\{v: v\le_{L_{B}} w\})\le p.\] Thus, we may assume that $w\notin V(B)$, and $w$ is contained in a component $D_{j}$. From the assumption we have the following. \begin{align*} &(1)\, \cutrk^*_{\origin{D}}(S_{w}, T_{w}\setminus V(\origin{D_{j}})) \\ &\le \max \{\cutrk_{B}(\{v: v\le_{L_{B}} \zeta_b(D,B,D_{j-1})\}), \cutrk_{B}(\{v: v\le_{L_{B}} \zeta_b(D,B,D_{j})\})\} \le p. \\ &(2)\, \cutrk^*_{\origin{D}}(S_{w}\setminus V(\origin{D_{j}}), T_{w}) \\ &\le \max \{\cutrk_{B}(\{v: v\le_{L_{B}} \zeta_b(D,B,D_{j-1})\}), \cutrk_{B}(\{v: v\le_{L_{B}} \zeta_b(D,B,D_{j})\})\} \le p. \\ &(3)\, \cutrk^*_{\origin{D}}(S_{w}\cap V(\origin{D_{j}}), T_{w}\cap V(\origin{D_{j}}))\le k. 
\end{align*} Therefore, we have \begin{align*} &\cutrk_{\origin{D}}(S_{w})\\ &\le \cutrk^*_{\origin{D}}(S_{w}, T_{w}\setminus V(\origin{D_j})) +\cutrk^*_{\origin{D}}(S_{w}\setminus V(\origin{D_j}), T_{w}) \\ &+ \cutrk^*_{\origin{D}}(S_{w}\cap V(\origin{D_{j}}), T_{w}\cap V(\origin{D_{j}}))\\ &\le p + p + k = 2p+k. \end{align*} We conclude that $L$ is a linear layout of $\origin{D}$ of width at most $2p+k$ whose first and last vertices are $a$ and $b$, respectively. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:generalupperbound}] We prove it by induction on $k:=\pw(T_D)$. If $k=0$, then $T_D$ consists of one node, and by the assumption, $\lrw(G)\le p\le 2(p+2)$. We may assume that $k\ge 1$. Since $\pw(T_D)=k$, by Theorem~\ref{thm:maintreeforpathwidth}, there exists a path $P:=v_1 \cdots v_n$ in $T_D$ such that for each node $v$ in $P$ and each component $T$ of $T_D\setminus v$ not containing a node of $P$, $\pw(T)\le k-1$. For each $1\le i\le n$, let $B_i:=\bag{D}{v_i}$. If $n=1$, then let $D_1:=D$; by adding to $B_1$ unmarked vertices that are twins of the first vertex and of the last vertex in an optimal linear layout of $B_1$, we may assume that $B_1$ has unmarked vertices $a_1$ and $b_1$ in $D$, and that the linear rank-width of $B_1$ is still at most $p$ (see Lemma \ref{lem:folklore}). Otherwise, we define $D_i$ for each $1\le i\le n$ as follows. For each $1\le i\le n-1$, let $b_i$ and $a_{i+1}$ be the marked vertices of $B_i$ and $B_{i+1}$, respectively, such that $b_ia_{i+1}$ is the marked edge connecting $B_i$ and $B_{i+1}$. If necessary, by adding to $B_1$ and $B_{n}$ unmarked vertices that are twins of the first vertex and of the last vertex in the corresponding optimal linear layouts, we may assume that $B_1$ and $B_{n}$ have unmarked vertices $a_1$ and $b_{n}$ in $D$, respectively, and that the linear rank-widths of $B_1$ and $B_{n}$ are still at most $p$ (see Lemma \ref{lem:folklore}). We define the following sub-decompositions. \begin{enumerate} \item Let $D_1$ be the component of $D\setminus V(B_2)$ containing the bag $B_1$. \item Let $D_{n}$ be the component of $D\setminus V(B_{n-1})$ containing the bag $B_{n}$. \item For each $1< i< n$, let $D_i$ be the component of $D\setminus (V(B_{i-1})\cup V(B_{i+1}))$ containing the bag $B_i$. \end{enumerate} Notice that the vertices $a_i$ and $b_i$ are unmarked vertices in $D_i$. Since the rank of any matrix can be increased by at most $1$ when we move one element from the column indices (resp. the row indices) to the row indices (resp. the column indices), $B_i$ admits a linear layout of width at most $p+2$ whose first and last vertices are $a_i$ and $b_i$, respectively. By the induction hypothesis and Lemma \ref{lem:genupperbound}, we have that $\origin{D_i}$ has a linear layout $L_i$ of width at most $2(p+2)(k+1)$ whose first and last vertices are $a_i$ and $b_i$, respectively. For each $i$, let $L_i'$ be the linear layout obtained from $L_i$ by removing the twin vertices added above (if any). Then it is not hard to check that \[ L'_1 \oplus\cdots \oplus L'_{n} \] is a linear layout of $G$ having width at most $2(p+2)(k+1)$. We conclude that $\lrw(G) \le 2(p+2)(\pw(T_D)+1)$. \end{proof} For distance-hereditary graphs, the following establishes a lower bound and a tight upper bound on linear rank-width with respect to the path-width of the canonical decomposition. 
\begin{proposition}\label{prop:lrwpw} Let $D$ be the canonical decomposition of a connected distance-hereditary graph $G$ and let $T_D$ be the decomposition tree of $D$. Then $\frac{1}{2}\pw(T_D) \le \lrw(G) \le \pw(T_D)+1$. \end{proposition} The upper bound is tight. For instance, every complete graph with at least two vertices has linear rank-width $1$, while its decomposition tree has path-width $0$. Also, for each odd integer $k=2n+1$ with $n\ge 1$, every complete binary tree of height $k$ (each path from a leaf to the root has length $k$) has linear rank-width $\lceil k/2 \rceil=n+1$, and its decomposition tree has path-width $\lceil (k-1)/2 \rceil=n$. (Note that the linear rank-width and the path-width of a tree are the same~\cite{AdlerK13}.) We will need the following lemmas. \begin{lemma}[Folklore]\label{lem:decpw} Let $G$ be a graph and let $uv\in E(G)$. Then $\pw(G)\le \pw(G/uv )+1$. \end{lemma} \begin{proof} Let $(P, \mathcal{B})$ be an optimal path-decomposition of $G/uv$. It is not hard to check that the path-decomposition obtained by adding $v$ to each bag containing $u$ is a path-decomposition of $G$. We conclude that $\pw(G)\le \pw(G/uv)+1$. \end{proof} \begin{lemma}\label{lem:decpw2} Let $G$ be a graph. Let $u$ be a vertex of degree $2$ in $G$ such that $v_1, v_2$ are the neighbors of $u$ in $G$ and $v_1v_2\notin E(G)$. Then $\pw(G)\le \pw(G/uv_1/uv_2 )+1$. \end{lemma} \begin{proof} Let $w$ be the contracted vertex in $G/uv_1/uv_2$, and let $(P, \mathcal{B})$ be an optimal path-decomposition of $G/uv_1/uv_2$ of width $t:=\pw(G/uv_1/uv_2)$. We may assume without loss of generality that no two consecutive bags are equal. We first obtain a path-decomposition $(P, \mathcal{B}')$ from $(P, \mathcal{B})$ by replacing $w$ with $v_1$ and $v_2$ in all bags containing $w$. Since no two consecutive bags in $(P, \mathcal{B})$ are equal, no two consecutive bags in $(P, \mathcal{B}')$ are equal. We first assume that there are two adjacent bags $B_1$ and $B_2$ in $(P, \mathcal{B}')$ each containing both $v_1$ and $v_2$. We obtain a path-decomposition $(P', \mathcal{B}'')$ from $(P, \mathcal{B}')$ by subdividing the edge between $B_1$ and $B_2$, and adding a new bag $B'=(B_1\cap B_2) \cup \{u\}$. Since $B_1$ and $B_2$ are not the same, $\abs{B_1\cap B_2}\le t+1$ and therefore, $\abs{B'}\le t+2$. Thus, $(P', \mathcal{B}'')$ is a path-decomposition of $G$ of width at most $t+1$, and $\pw(G)\le \pw(G/uv_1/uv_2)+1$. Now we assume that there is only one bag $B$ in $(P, \mathcal{B}')$ containing both $v_1$ and $v_2$. In this case, since $v_1v_2\notin E(G)$, we can obtain a path-decomposition of $G$ by replacing this bag $B$ with a sequence of two bags $B_1$ and $B_2$, where $B_1:=(B\setminus \{v_2\}) \cup \{u\}$ and $B_2:=(B\setminus \{v_1\}) \cup \{u\}$. This implies that $\pw(G)\le \pw(G/uv_1/uv_2)+1$. \end{proof} We are now ready to prove Proposition \ref{prop:lrwpw}. \begin{proof}[Proof of Proposition \ref{prop:lrwpw}] (1)~Let us first prove by induction on $k:=\lrw(G)$ that $\pw(T_D)\leq 2\lrw(G)$. If $k=0$, then $G$ consists of a single vertex, and $\pw(T_D)=0$. If $k=1$, then by Theorem~\ref{thm:charlrw1}, $T_D$ is a path. Therefore, $\pw(T_D)=0$ or $1$, and we have $\pw(T_D)\le 2k$. Thus, we may assume that $k\ge 2$. 
Since $\lrw(G)=k\ge 2$, by Theorem~\ref{thm:mainchap2}, there exists a path $P:=v_0v_1 \cdots v_nv_{n+1}$ in $T_D$ such that for each node $v$ in $P$ and each component $C$ of $D\setminus V(\bag{D}{v})$ whose decomposition tree $T_C$ does not contain a node of $P$, $f_D(\bag{D}{v},C)\le k-1$. Let $v$ be any node of $P$ and let $C$ be a component of $D\setminus V(\bag{D}{v})$ such that $T_C$ does not contain a node of $P$. Let $y$ be an unmarked vertex in $C$ and let $L_C:=\limbtil_D[\bag{D}{v},y]$. By the induction hypothesis, the decomposition tree $T_{L_C}$ of $L_C$ has path-width at most $2k-2$. We claim that $\pw(T_C)\le 2k-1$. By the definition of canonical limbs, $T_{L_C}$ is obtained from $T_C$ using one of the following operations: \begin{enumerate} \item Removing a node of degree $1$. \item Removing a node of degree $2$ with the neighbors $v_1, v_2$ and adding an edge $v_1v_2$. \item Removing a node of degree $2$ with the neighbors $v_1, v_2$ and identifying $v_1$ and $v_2$. \end{enumerate} The first two cases can be regarded as contracting one edge. So, $\pw(T_C)\le \pw(T_{L_C})+1\le (2k-2)+1=2k-1$ by Lemma~\ref{lem:decpw}. The last case corresponds to contracting two incident edges where the middle node has degree $2$ and its neighbors are not adjacent. By Lemma~\ref{lem:decpw2}, $\pw(T_C)\le \pw(T_{L_C})+1\le 2k-1$. Therefore, for each node $v$ of $P$ and each component $T'$ of $T_D\setminus v$ not containing a node of $P$, we have $\pw(T')\leq 2k-1$. By Theorem~\ref{thm:maintreeforpathwidth}, $T_D$ has path-width at most $2k$, as required. \medskip \noindent (2)~We now prove by induction on $k:=\pw(T_D)$ that $\lrw(G)\leq \pw(T_D)+1$. If $k=0$, then $T_D$ consists of one node, and $\lrw(G)=0$ or $1$. So, we have $\lrw(G)\le \pw(T_D)+1$. We assume that $k\ge 1$. Since $\pw(T_D)=k$, by Theorem~\ref{thm:maintreeforpathwidth}, there exists a path $P=v_0v_1 \cdots v_nv_{n+1}$ in $T_D$ such that for each node $v$ in $P$ and each component $T_C$ of $T_D\setminus v$ not containing a node of $P$, $\pw(T_C)\le k-1$. Let $v$ be any node of $P$ and let $C$ be a component of $D\setminus V(\bag{D}{v})$ such that its decomposition tree $T_C$ corresponds to a component of $T_D\setminus v$ that does not contain a node of $P$. By the induction hypothesis, $\origin{C}$ has linear rank-width at most $(k-1)+1=k$. By the definition of limbs, we can therefore conclude that $f_D(\bag{D}{v},C) \leq k$. By Theorem~\ref{thm:mainchap2}, we can conclude that $\lrw(G)\le k+1$. \end{proof} We could not confirm that the lower bound in Proposition~\ref{prop:lrwpw} is tight. We leave the following as an open question. \begin{QUE} Let $D$ be the canonical decomposition of a connected distance-hereditary graph $G$. Is it true that $\pw(T_D) \le \lrw(G)$? \end{QUE} \section{Containing a tree as a vertex-minor} We show that Question~\ref{que:tree} is true if it is true for prime graphs. To support this statement, we show the following. \begin{theorem}\label{thm:largelrw} Let $p\ge 1$ be a positive integer and let $T$ be a tree. Let $G$ be a graph such that every prime induced subgraph of $G$ has linear rank-width at most $p$. If $\lrw(G) \ge 40(p+2)\abs{V(T)}$, then $G$ contains a vertex-minor isomorphic to $T$. \end{theorem} We proved in the previous section that if every prime induced subgraph of a graph $G$ has linear rank-width bounded by a constant and $G$ has sufficiently large linear rank-width, then the decomposition tree of $G$ must have large path-width. 
In this section, we show that for a fixed tree $T$, if a graph $G$ admits a canonical decomposition whose decomposition tree has sufficiently large path-width, then $G$ contains a vertex-minor isomorphic to $T$. We first prove some lemmas that allow us to replace general trees in Theorem~\ref{thm:largelrw} with subcubic trees. For a tree $T$, we denote by $\phi(T)$ the sum of the degrees of the vertices of $T$ whose degree is at least $4$. Every subcubic tree $T$ satisfies $\phi(T)=0$. \begin{figure}[t]\centering \tikzstyle{v}=[circle, draw, solid, fill=black, inner sep=0pt, minimum width=2.5pt] \tikzset{photon/.style={decorate, decoration={snake}}} \begin{tikzpicture}[scale=0.3] \node [v] (a) at (0,0) {}; \node [v] (b) at (2,3) {}; \node [v] (c) at (-2,3) {}; \node [v] (d) at (-4,0) {}; \node [v] (e) at (-2,-3) {}; \node [v] (f) at (2,-3) {}; \node [v] (g) at (4,0) {}; \draw (1,0.5) node{$v$}; \draw (4,0.7) node{$v_1$}; \draw (2,3.7) node{$v_2$}; \draw (-2,3.7) node{$v_3$}; \draw (-4,0.7) node{$v_4$}; \draw (-2.2,-2.3) node{$v_5$}; \draw (2.2,-2.3) node{$v_6$}; \draw (a)--(b); \draw (a)--(c); \draw (a)--(d); \draw (a)--(e); \draw (a)--(f); \draw (a)--(g); \end{tikzpicture}\qquad\quad \begin{tikzpicture}[scale=0.3] \node [v] (a) at (0,0) {}; \node [v] (b) at (10,3) {}; \node [v] (c) at (-2,3) {}; \node [v] (d) at (-4,0) {}; \node [v] (e) at (-2,-3) {}; \node [v] (f) at (2,-3) {}; \node [v] (g) at (12,0) {}; \node [v] (p1) at (4,0) {}; \node [v] (p2) at (8,0) {}; \draw (1,0.5) node{$v$}; \draw (12,0.7) node{$v_1$}; \draw (10,3.7) node{$v_2$}; \draw (-2,3.7) node{$v_3$}; \draw (-4,0.7) node{$v_4$}; \draw (-2.2,-2.3) node{$v_5$}; \draw (2.2,-2.3) node{$v_6$}; \draw (4,0.7) node{$p_2$}; \draw (7.8,0.7) node{$p_1$}; \draw (p2)--(b); \draw (a)--(c); \draw (a)--(d); \draw (a)--(e); \draw (a)--(f); \draw (a)--(p1)--(p2)--(g); \end{tikzpicture} \caption{Splitting an edge in Lemma~\ref{lem:subcubicpivot1}.}\label{fig:splitting} \end{figure} \begin{lemma}\label{lem:subcubicpivot1} Let $k$ be a positive integer and let $T$ be a tree with $\phi(T)=k$. Then $T$ is a vertex-minor of a tree $T'$ with $\phi(T')\le k-1$ and $\abs{V(T')}=\abs{V(T)}+2$. \end{lemma} \begin{proof} Since $\phi(T)\ge 1$, $T$ has a vertex of degree at least $4$. Let $v\in V(T)$ be a vertex of degree at least $4$, and let $v_1, v_2, \ldots, v_m$ be its neighbors. We obtain $T'$ from $T$ by replacing the edge $vv_1$ with the path $vp_2p_1v_1$, removing the edge $vv_2$ and adding the edge $p_1v_2$. It is easy to verify that $(T'\wedge p_1p_2)\setminus p_1\setminus p_2=T$. We depict this procedure in Figure~\ref{fig:splitting}. Because $p_1$ and $p_2$ are vertices of degree at most $3$ in $T'$, and the degree of $v$ in $T'$ is one less than the degree of $v$ in $T$, we have $\phi(T')\le k-1$ (with equality when the degree of $v$ in $T$ is at least $5$; if it is exactly $4$, then $v$ no longer contributes to $\phi$ and $\phi(T')=k-4$). \end{proof} \begin{lemma}\label{lem:subcubicpivot2} Let $T$ be a tree. Then $T$ is a vertex-minor of a subcubic tree $T'$ with $\abs{V(T')}\le 5\abs{V(T)}$. \end{lemma} \begin{proof} By applying Lemma~\ref{lem:subcubicpivot1} repeatedly, $T$ is a vertex-minor of a subcubic tree $T'$ with $\abs{V(T')}\le \abs{V(T)}+2\phi(T)$. Since $\phi(T)\le 2\abs{E(T)}\le 2\abs{V(T)}$, we conclude that $\abs{V(T')}\le \abs{V(T)}+2\phi(T)\le 5\abs{V(T)}$. \end{proof} We recall from Theorem~\ref{thm:Bouchet88}(2) that a connected graph is a tree if and only if its canonical decomposition is an $\mathcal{S}$-decomposition. The basic strategy to prove Theorem~\ref{thm:largelrw} is the construction of the canonical decomposition of $T$ from the canonical decomposition of $G$. 
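The splitting operation in the proof of Lemma~\ref{lem:subcubicpivot1} is straightforward to implement. The sketch below (in Python, using the networkx package for brevity; integer vertex labels are assumed so that fresh labels can be generated) applies it repeatedly until the tree is subcubic, as in Lemma~\ref{lem:subcubicpivot2}.

\begin{verbatim}
import networkx as nx

def make_subcubic(T):
    # While some vertex v has degree >= 4: replace the edge v-v1 by
    # the path v-p2-p1-v1 and move the edge v-v2 to p1-v2.  The
    # original tree is recovered as a vertex-minor by pivoting on
    # p1p2 and deleting p1 and p2.
    T = T.copy()
    fresh = max(T.nodes) + 1
    while True:
        v = next((u for u in list(T.nodes) if T.degree(u) >= 4), None)
        if v is None:
            return T
        v1, v2 = sorted(T.neighbors(v))[:2]
        p1, p2 = fresh, fresh + 1
        fresh += 2
        T.remove_edges_from([(v, v1), (v, v2)])
        T.add_edges_from([(v, p2), (p2, p1), (p1, v1), (p1, v2)])
\end{verbatim}

Each iteration adds two vertices of degree at most three and decreases the degree of $v$ by one, so the loop terminates and, as in the proof of Lemma~\ref{lem:subcubicpivot2}, the resulting subcubic tree has at most $\abs{V(T)}+2\phi(T)$ vertices.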
Let us introduce some lemmas that describe how to recursively replace each bag with a star bag whose center is an unmarked vertex, without changing the decomposition tree too much. This will be used in the recursion step of the proof of Theorem~\ref{thm:largelrw}. \begin{lemma}\label{lem:findingpathinprime} Let $G$ be a prime graph on at least $5$ vertices, and let $a,b,c\in V(G)$. Then there exists a sequence $(x_1,\ldots,x_m)$ of vertices in $V(G)\setminus \{a,b\}$ such that $G*x_1*x_2*\cdots *x_m$ contains an induced path $acb$. \end{lemma} \begin{proof} Since every prime graph on at least $5$ vertices is $2$-connected, there is a path from $a$ to $b$ in $G$ of length at least $2$. Let $P$ be a shortest such path. We divide into cases depending on whether $c\in V(P)$. \vskip 0.2cm \noindent\emph{\textbf{Case 1.} $c\in V(P)$.} Let $P_1$ be the subpath of $P$ from $a$ to $c$ and let $P_2$ be the subpath of $P$ from $c$ to $b$. By applying local complementations at all internal vertices in $P_1$ and $P_2$, we may create a path $acb$. If $ab$ is an edge in the resulting graph, then by applying a local complementation at $c$, we can remove it. We create the required induced path $acb$ without applying local complementations at $a$ or at $b$. \vskip 0.2cm \noindent\emph{\textbf{Case 2.} $c\notin V(P)$.} By applying local complementations at all internal vertices of $P$ except one, we may assume that $G$ has a path of length $2$, say $azb$, where $z\neq c$. We take a minimal path $Q=q_1q_2 \cdots q_m$ from $q_1=c$ to the path $azb$. Since $c\notin \{a,b,z\}$, we have $m\ge 2$. Let $G_1:=G[V(Q)\cup \{a,b,z\}]$. If $m\ge 4$, then by replacing $G_1$ by $(G_1*q_2*q_3*\cdots *q_{m-2})\setminus \{q_2,\ldots,q_{m-2}\}$, we may assume that $m=2$ or $3$. First assume that $m=2$. By applying a local complementation at $z$, we may assume that either $abc$ is a triangle or among the possible edges $\{ab, bc, ca\}$, exactly one is present in $G_1$. If $abc$ is a triangle, then $G_1*c\setminus z$ is the induced path $acb$. If $ab\in E(G_1)$ and $bc,ca\notin E(G_1)$, then $G_1*z\setminus z=acb$. If one of $bc$ and $ca$ is an edge of $G_1$ and the other two of $\{ab,bc,ca\}$ are not edges of $G_1$, then $(G_1*c*z)\setminus z$ is the induced path $acb$. All of these operations create an induced path $acb$ without applying local complementations at $a$ or $b$, as required. Now suppose that $m=3$. In this case, we take $(G_1\wedge q_2q_3)\setminus \{q_2, q_3\}$. Since $a$ and $b$ are adjacent to $z$ but $c$ is not adjacent to $z$, we have $ac, bc\in E((G_1\wedge q_2q_3)\setminus \{q_2, q_3\})$. By applying a local complementation at $c$ if $ab$ is an edge, we can obtain an induced path $acb$, as required. \end{proof} A canonical decomposition $D$ is \emph{rooted} if we distinguish a leaf bag and call it the \emph{root} of $D$. Let $D$ be a rooted canonical decomposition with the root $R$. A bag $B$ is a descendant of a bag $B'$ if a vertex of $B'$ is in the unique path from a vertex of the root to a vertex of $B$. If $B$ is a descendant of $B'$ and $B$ and $B'$ are adjacent, then we call $B$ a \emph{child} of $B'$ and $B'$ the \emph{parent} of $B$. A bag in $D$ is called a \emph{non-root bag} if it is not the root bag. 
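The proof of Lemma~\ref{lem:findingpathinprime} is constructive, but on small graphs the required sequence of local complementations can also be found by exhaustive search over the graphs obtainable by local complementations at vertices other than $a$ and $b$. A sketch, reusing local_complement from the earlier listing (max_depth is an arbitrary search cutoff):

\begin{verbatim}
from collections import deque

def find_lc_sequence(A, a, b, c, max_depth=6):
    # Breadth-first search for x_1,...,x_m (all distinct from a, b)
    # such that G*x_1*...*x_m contains the induced path a-c-b.
    def is_induced_path_acb(M):
        return M[a, c] == 1 and M[c, b] == 1 and M[a, b] == 0
    queue = deque([(A.copy(), [])])
    seen = {A.tobytes()}
    while queue:
        M, seq = queue.popleft()
        if is_induced_path_acb(M):
            return seq
        if len(seq) == max_depth:
            continue
        for x in range(A.shape[0]):
            if x in (a, b):
                continue
            N = local_complement(M, x)
            key = N.tobytes()
            if key not in seen:
                seen.add(key)
                queue.append((N, seq + [x]))
    return None  # nothing found within max_depth
\end{verbatim}

By Lemma~\ref{lem:findingpathinprime}, on a prime graph with at least $5$ vertices the search succeeds once max_depth is sufficiently large.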
\begin{lemma}\label{lem:primetwomarked1} Let $D$ be a rooted canonical decomposition of a connected graph with root $R$ and let $B$ be a non-root bag of $D$ with $w$ its corresponding node in $T_D$ such that \begin{itemize} \item $D\setminus V(B)$ has exactly two components $T_1$ and $T_R$ with $R$ contained in $T_R$, \item the parent of $B$ is a star and $\zeta_t(D,B,T_R)$ is a leaf. \end{itemize} Then by possibly applying local complementations at unmarked vertices of $D$ contained in $D\setminus V(T_R)$ and possibly deleting some vertices in $B$, we can transform $D$ into a canonical decomposition $D'$ such that \begin{enumerate} \item $T_D=T_{D'}$ and $R$ is a bag of $D'$, and \item $\bag{D'}{w}$ is a star whose center is an unmarked vertex of $B$, and $T_R$ is the component of $D'\setminus V(\bag{D'}{w})$ containing $R$. \end{enumerate} \end{lemma} \begin{proof} If $B$ is a star bag or a complete bag, then it is easy to transform it into a star bag with its center at an unmarked vertex, preserving $T_R$. We may assume that $B$ is a prime bag. Let $v:=\zeta_b(D,B,T_R)$, $v_1:=\zeta_b(D,B, T_1)$ and choose an unmarked vertex $v_2$ of $B$. Let $B_1$ be the child of $B$. If $B_1$ is a star bag whose center is $\zeta_t(D,B,T_1)$, then by pivoting two linked unmarked vertices $w_1\in V(B)$ and $w_2$ represented by $v_2$, we may assume that $B_1$ is a star bag having $\zeta_t(D,B,T_1)$ as a leaf. Since $B$ is prime, by Lemma~\ref{lem:findingpathinprime}, we can modify $B$ into an induced path $vv_2v_1$ by only applying local complementations at unmarked vertices in $B$. Then we remove all the other vertices of $B$. Note that the marked edges incident with $B$ are still marked edges that cannot be recomposed. In the resulting canonical decomposition, the bag obtained from $B$ is a star bag whose center is an unmarked vertex, as required. Since the decomposition tree of the resulting decomposition is equal to $T_D$, we are done. \end{proof} \begin{lemma}\label{lem:primetwomarked2} Let $D$ be a rooted canonical decomposition of a connected graph with root $R$ and let $B$ be a non-root bag of $D$ such that \begin{enumerate} \item[(a)] $D\setminus V(B)$ has exactly three components $T_1,T_2$ and $T_R$ with $R$ contained in $T_R$, \item[(b)] the parent $P_1$ of $B$ is a star bag whose center is an unmarked vertex, and the parent $P_2$ of $P_1$ is a star bag whose center is an unmarked vertex, and \item[(c)] both $D\setminus V(P_1)$ and $D\setminus V(P_2)$ have exactly two components. \end{enumerate} For each $i\in \{1,2\}$, let $B_i$ be the child of $B$ contained in $T_i$ where $D\setminus V(B_i)$ has exactly two components. Then by possibly applying local complementations at unmarked vertices of $D$ contained in $D\setminus V(T_R)$ and possibly deleting some vertices in $B$ and recomposing some marked edges, we can transform $D$ into a canonical decomposition $D'$ containing a bag $P$ such that $D'\setminus V(P)$ contains exactly three components $F_R, F_1, F_2$ with \begin{enumerate} \item $F_R=T_R\setminus (V(P_1)\cup V(P_2) )$, \item for each $i\in \{1,2\}$, $V(F_i)=V(T_i)$ or $V(F_i)=V(T_i)\setminus V(B_i)$, and \item $P$ is a star bag whose center is an unmarked vertex. \end{enumerate} \end{lemma} \begin{proof} For each $i\in \{1,2\}$, let $x_i$ be the center of $P_i$. If $B$ is a star bag or a complete bag and has an unmarked vertex, then it is easy to transform it into a star bag with its center at an unmarked vertex, preserving $T_R$. 
Then we can remove the two bags $P_1$ and $P_2$ after pivoting on $x_1x_2$ and removing all the unmarked vertices contained in $P_1$ and $P_2$. Thus, we may assume that either $B$ is a prime bag on at least $5$ vertices, or $B$ has no unmarked vertex. Let $v:=\zeta_b(D,B,T_R)$ and let $w$ be the corresponding node of $B$ in $T_D$, and for each $i\in \{1,2\}$, let $v_i:=\zeta_b(D, B, T_i)$. \vskip 0.2cm \noindent\emph{\textbf{Case 1.} $B$ is not prime and has no unmarked vertex.} By applying a local complementation in $D$, we may assume that $B$ is a star and $v_1$ is its center without changing $T_R$. We can also assume that each of $P_1$ and $P_2$ has exactly one unmarked vertex $x_i$ by possibly deleting all the other leaf unmarked vertices in $P_1$ and $P_2$. Let $w_1w_2$ be the marked edge between $P_1$ and $P_2$ with $w_i\in V(P_i)$. Now we transform $D$ by pivoting $x_1x_2$ so that each $w_i$ is the center of $P_i$. It does not change $T_R\setminus (V(P_1)\cup V(P_2))$. The canonical decomposition $D'$ of $\origin{D\wedge x_1x_2}\setminus V(P_1)$ is obtained from $D\wedge x_1x_2$ as follows: delete the vertices of $V(P_1)$, add the marked edge $vw_2$ and then recompose the new marked edge $vw_2$, as it is of type $S_pS_c$, which is not valid by Theorem~\ref{thm:can-forbid}. Notice that $w$ is still in $T_{D'}$ and $\bag{D'}{w}$ is a star on the vertex set $\{v,v_1,v_2,x_2\}$ with center $v_1$. By pivoting $x_2$ with an unmarked vertex represented by $v_1$, the vertex $x_2$ becomes the center of $\bag{D'}{w}$. Now, by construction of $D'$, the components of $D'\setminus V(\bag{D'}{w})$ are respectively $T_R\setminus (V(P_1)\cup V(P_2) )$ and $F_1$ and $F_2$ with $V(F_i)=V(T_i)$. \vskip 0.2cm \noindent\emph{\textbf{Case 2.} $B$ is prime.} Since $B$ is prime, by Lemma~\ref{lem:findingpathinprime}, we can modify $B$ into an induced path $vv_1v_2$ by only applying local complementations at unmarked vertices in $B$ and unmarked vertices represented by $v_1$. Then we remove all the other vertices of $B$. Note that the marked edge connecting $B$ and $P_1$ is still a valid marked edge as $P_1$ is a star with $\zeta_t(D,B,T_R)$ a leaf. Now, for $i\in \{1,2\}$, the marked edge between $v_i$ and $\zeta_t(D,B,T_i)$ may not be valid; if so, we recompose it. In the resulting canonical decomposition $D'$, the node $w$ is still a node of $T_{D'}$ and $\zeta_b(D', \bag{D'}{w}, T_R)$ is a leaf. We are now reduced to \emph{\textbf{Case 1}}, from which we can construct the required canonical decomposition. \end{proof} Now we are ready to prove the main result of the section. For a tree $T$, let $\eta(T)$ be the tree obtained from $T$ by replacing each edge with a path of length $4$. \begin{proof}[Proof of Theorem~\ref{thm:largelrw}] Let $t:=\abs{V(T)}$ and suppose that $\lrw(G) \ge 40(p+2)t$. By Lemma~\ref{lem:subcubicpivot2}, there exists a subcubic tree $T'$ such that $T$ is a vertex-minor of $T'$ and $\abs{V(T')}\le 5t$. Note that $\abs{V(\eta(T'))}\le 20t$. Let $D$ be the canonical decomposition of $G$ and let $T_D$ be the decomposition tree of $D$. Since $\lrw(G) \ge 40(p+2)t$, by Proposition~\ref{prop:generalupperbound}, $\pw(T_D)\ge 20t-1$. Since $\abs{V(\eta(T'))}\le 20t$, from Theorem~\ref{thm:pathwidththeorem}, $T_D$ contains a minor isomorphic to $\eta(T')$. Since the maximum degree of $\eta(T')$ is $3$, $T_D$ contains a subgraph $T_1$ that is isomorphic to a subdivision of $\eta(T')$. Clearly, $T_1$ is an induced subgraph of $T_D$ as $T_1$ and $T_D$ are trees. 
Let $D'$ be the sub-decomposition of $D$ whose decomposition tree is $T_1$, and root $D'$ at a leaf bag $R$. Notice that $D'$ is a canonical decomposition of an induced subgraph of $G$. By possibly using Lemma \ref{lem:findingpathinprime}, we can moreover assume that $R$ is a star centered at a marked vertex. Now we inductively modify $D'$ into a canonical decomposition in which some bags having three adjacent bags are colored in blue, such that \begin{itemize} \item[(i)] $R$ exists in the resulting decomposition, \item[(ii)] for every colored bag $B$, $B$ is a star bag whose center is an unmarked vertex, \item[(iii)] for every colored bag $B$ and an ascendant bag $B'$ of $B$ having three adjacent bags, $B'$ is also colored, \item[(iv)] for every non-colored bag $B'$, $B'$ has a parent bag $P_1$ and $P_1$ has a parent bag $P_2$ where $P_1$ and $P_2$ have two adjacent bags, respectively, \item[(v)] there are at least $3$ bags between two non-colored bags, and \item[(vi)] the number of bags having three adjacent bags in $D'$ and in the resulting decomposition is the same. \end{itemize} We claim that we can obtain such a decomposition where all bags with three adjacent bags are colored in blue. We construct such a decomposition in a top-down manner. Note that $D'$ itself with all bags non-colored satisfies the above conditions because the decomposition tree of $D'$ is isomorphic to $\eta(T')$. Now, choose the first non-colored bag $B$ having three adjacent bags such that \begin{quote} either all its ascendant bags having three adjacent bags are colored, or it does not have an ascendant bag with three adjacent bags. \end{quote} From the condition, $B$ has a parent bag $P_1$ and $P_1$ has a parent bag $P_2$ where $P_1$ and $P_2$ have two adjacent bags, respectively. Note that $P_2\neq R$ in the case when $B$ is the bag closest to $R$, as the decomposition tree of $D'$ is isomorphic to a subdivision of $\eta(T')$. Let $T_R, T_1$ and $T_2$ be the components of $D'\setminus V(B)$ where $T_R$ contains the root bag. For each $i\in \{1,2\}$, let $B_i$ be the child of $B$ contained in $T_i$. By Lemma~\ref{lem:primetwomarked1}, we can modify $P_1$ and $P_2$ into star bags whose centers are unmarked vertices, preserving the decomposition tree and without modifying the component of $D'\setminus V(P_i)$ containing $R$. Then by Lemma~\ref{lem:primetwomarked2}, by possibly applying local complementations at unmarked vertices of $D'$ contained in $D'\setminus V(T_R)$ and possibly deleting some vertices in $B$ and recomposing some marked edges, we can transform $D'$ into a canonical decomposition $D''$ containing a bag $P$ such that $D''\setminus V(P)$ contains exactly three components $F_R, F_1, F_2$ with \begin{enumerate} \item $F_R=T_R\setminus (V(P_1)\cup V(P_2) )$, \item for each $i\in \{1,2\}$, $V(F_i)=V(T_i)$ or $V(F_i)=V(T_i)\setminus V(B_i)$, and \item $P$ is a star bag whose center is an unmarked vertex. \end{enumerate} We color $P$ in blue. Note that previously there are at least $3$ bags between two non-colored bags, therefore the new decomposition $D''$ still satisfies the property that \begin{itemize} \item[(iv)] every non-colored bag $B'$ has a parent bag $P_1$ and $P_1$ has a parent bag $P_2$ where $P_1$ and $P_2$ have two adjacent bags, respectively.
\end{itemize} Since by construction $D''$ satisfies all the other conditions (i)-(iii), (v) and (vi), we have therefore constructed from $D'$ a new decomposition $D''$ satisfying all the conditions (i)-(vi) and have decreased by one the number of non-colored bags having three adjacent bags. Therefore, we can construct from $D'$ a canonical decomposition $D''$ such that \begin{itemize} \item[(a)] every bag of $D''$ is a star bag whose center is an unmarked vertex, \item[(b)] the decomposition tree of $D''$ is isomorphic to a subdivision of $T'$. \end{itemize} By Theorem~\ref{thm:Bouchet88}, $\origin{D''}$ is a tree, and in fact, it is not hard to observe that $\origin{D''}$ contains an induced subgraph isomorphic to $T'$. Since $T$ is a vertex-minor of $T'$, we conclude that $G$ contains a vertex-minor isomorphic to $T$. \end{proof} \section{Distance-hereditary vertex-minor obstructions for graphs of bounded linear rank-width}\label{sec:obstructions} We generalize the constructions of vertex-minor obstructions for graphs of bounded linear rank-width in \cite{JKO2014} so that we can generate all vertex-minor obstructions that are distance-hereditary graphs. We use the characterization of distance-hereditary graphs described in Section~\ref{sec:dhandthread}. For a distance-hereditary graph $G$, a connected graph $G'$ is a \emph{one-vertex DH-extension} of $G$ if $G=G'\setminus v$ for some vertex $v\in V(G')$ and $G'$ is distance-hereditary. For convenience, if $G'$ is a \emph{one-vertex DH-extension} of $G$, and $D$ and $D'$ are canonical decompositions of $G$ and $G'$ respectively, then $D'$ is also called a \emph{one-vertex DH-extension} of $D$. For a set $\mathcal{D}$ of canonical decompositions, we define \[\mathcal{D}^{+}:=\mathcal{D}\cup \{D':D' \text{ is a one-vertex DH-extension of }D\in \mathcal{D}\}.\] For a set $\mathcal{D}$ of canonical decompositions, we define a new set $\Delta(\mathcal{D})$ of canonical decompositions $D$ as follows: \begin{quote} Choose three canonical decompositions $D_1, D_2, D_3$ in $\mathcal{D}$ and for each $1\le i\le 3$, take a one-vertex DH-extension $D_i'$ of $D_i$ with a new unmarked vertex $w_i$. We introduce a new bag $B$ of type $K$ or $S$ having three vertices $v_1, v_2, v_3$ such that the type of $B$ is compatible with the type of the bag in $D_i'$ containing $w_i$, and \begin{enumerate} \item if $v_i$ is in a complete bag, then we define $D_i'':=D_i'*w_i$, \item if $v_i$ is the center of a star bag, then we define $D_i'':=D_i'\wedge w_iz_i$ for some $z_i$ linked to $w_i$ in $D_i'$, \item if $v_i$ is a leaf of a star bag, then we define $D_i'':=D_i'$. \end{enumerate} \medskip Let $D$ be the canonical decomposition obtained by the disjoint union of $D_1'', D_2'', D_3''$ and $B$ by adding the marked edges $v_1w_1, v_2w_2, v_3w_3$. \end{quote} \medskip For each non-negative integer $k$, we recursively construct the sets $\Psi_k$ of canonical decompositions as follows. \begin{enumerate} \item $\Psi_0:=\{K_2\}$ ($K_2$ is the canonical decomposition of itself.) \item For $k\ge 0$, let $\Psi_{k+1}:=\Delta(\Psi_k^{+})$. \end{enumerate} We prove the following. \begin{theorem}\label{thm:mainobs} Let $k\ge 0$ be an integer. Every distance-hereditary graph of linear rank-width at least $k+1$ contains a vertex-minor isomorphic to a graph whose canonical decomposition is isomorphic to a decomposition in $\Psi_k$.
\end{theorem} We remark that for each positive integer $k$, starting with the set $\Psi_k$, we can construct a set of vertex-minor-minimal graphs $\mathcal{O}$ such that \begin{enumerate} \item every distance-hereditary graph of linear rank-width at least $k+1$ contains a vertex-minor isomorphic to a graph in $\mathcal{O}$, and \item for $G\in \mathcal{O}$, $\lrw(G)=k+1$ and each of its proper vertex-minors has linear rank-width at most $k$. \end{enumerate} Recall that we can compute the linear rank-width of any distance-hereditary graph in polynomial time \cite{AdlerKK15}. Let us now prove some intermediate lemmas. \begin{lemma}\label{lem:reduce} Let $D$ be the canonical decomposition of a connected distance-hereditary graph. Let $B_1$ and $B_2$ be two distinct bags of $D$, and for each $i\in \{1,2\}$, let $T_i$ be the component of $D\setminus V(B_i)$ such that \begin{itemize} \item $T_1$ contains $B_2$ and $T_2$ contains $B_1$, \item $\zeta_b(D, B_1, T_1)$ is not a center of a star bag, and \item $B_2$ is a star bag and $\zeta_b(D, B_2, T_2)$ is a leaf of $B_2$. \end{itemize} Then there exists a canonical decomposition $D'$ such that \begin{enumerate} \item $\origin{D}$ has $\origin{D'}$ as a vertex-minor, \item $D[V(T_2)\setminus V(T_1)]=D'[V(T_2)\setminus V(T_1)]$, \item $D[V(T_1)\setminus V(T_2)]=D'[V(T_1)\setminus V(T_2)]$, and \item either $D'$ has no bags between $B_1$ and $B_2$, or $D'$ has only one bag $B$ between $B_1$ and $B_2$ such that $\abs{V(B)}=3$ and $B$ is a star bag whose center is an unmarked vertex. \end{enumerate} \end{lemma} \begin{proof} If $B_1$ and $B_2$ are adjacent bags in $D$, then we are done. We assume that there exists at least one bag between $B_1$ and $B_2$ in $D$. Let $P=p_1p_2 \ldots p_{\ell}$ be the shortest path from $\zeta_b(D, B_1, T_1)=p_1$ to $\zeta_b(D, B_2, T_2)=p_{\ell}$ in $D$. Note that $\ell\ge 4$ as $D$ has at least one bag between $B_1$ and $B_2$. Let $C$ be a bag in $D$ that contains exactly two vertices $p_i$, $p_{i+1}$ of $P$. Then we remove $C$ and all components of $D\setminus V(C)$ that do not contain $B_1$ or $B_2$, and add a marked edge $p_{i-1}p_{i+2}$. Since this operation does not change the parts $D[V(T_2)\setminus V(T_1)]$ and $D[V(T_1)\setminus V(T_2)]$, applying this operation consecutively, we may assume that all bags of $D$ between $B_1$ and $B_2$ are star bags containing $3$ vertices of $P$. Suppose there exist two adjacent bags $C_1$ and $C_2$ in $D$ such that $p_i,p_{i+1},p_{i+2}\in V(C_1)$ and $p_{i+3},p_{i+4},p_{i+5}\in V(C_2)$. Take two unmarked vertices $x_{i+1}$ and $x_{i+4}$ of $D$ that are represented by $p_{i+1}$, $p_{i+4}$, respectively. By pivoting $x_{i+1}x_{i+4}$ in $D$, we can modify the two bags $C_1$ and $C_2$ so that $p_ip_{i+2}p_{i+3}p_{i+5}$ becomes a path. By the definition of the pivoting operation, this pivoting does not affect the parts $D[V(T_2)\setminus V(T_1)]$ and $D[V(T_1)\setminus V(T_2)]$. We remove $C_1$ and $C_2$ from $D$ (with all components of $D\setminus V(C_i)$ that do not contain $B_1$ or $B_2$), and add a marked edge $p_{i-1}p_{i+6}$. By the assumption that $\zeta_b(D, B_1, T_1)$ is not a center of a star bag, we know that the marked edge incident with $B_1$ is still not recomposable. Therefore, we obtain a canonical decomposition satisfying the conditions (1), (2), and (3), and the number of bags containing $P$ is decreased by two.
By recursively applying this procedure and removing redundant components, we end up with either no bags between $B_1$ and $B_2$, or exactly one star bag $B$ with $\abs{V(B)}=3$ whose center is an unmarked vertex. \end{proof} The next proposition shows how, using Lemma~\ref{lem:reduce}, we can replace a limb having linear rank-width at least $k\ge 1$ with a canonical decomposition in $\Psi_{k-1}^{+}$. \begin{proposition}\label{prop:replace} Let $D$ be the canonical decomposition of a connected distance-hereditary graph. Let $B$ be a star bag of $D$ and $v$ be a leaf of $B$. Let $T$ be a component of $D\setminus V(B)$ such that $\zeta_b(D, B, T)=v$, and let $w$ be an unmarked vertex of $D$ represented by $v$. Let $A$ be the canonical decomposition of a distance-hereditary graph. If $\limbhat_D[B,w]$ has a vertex-minor that is either $\origin{A}$ or a one-vertex DH-extension of $\origin{A}$, then there exists a canonical decomposition $D'$ on a subset of $V(D)$ such that \begin{enumerate} \item either $D'\setminus V(T)=D\setminus V(T)$ or $D'\setminus V(T)=(D\setminus V(T))*v$, and \item for some unmarked vertex $w'$ of $D'$ represented by $v$, $\limbtil_{D'}[B,w']$ is either $A$ or a one-vertex DH-extension of $A$. \end{enumerate} \end{proposition} \begin{proof} Suppose that there exists a sequence $x_1, x_2, \ldots, x_m$ of vertices of $\limbhat_D[B,w]$ and $S\subseteq V(\limbhat_D[B,w])$ such that $(\limbhat_D[B,w]*x_1*x_2* \ldots *x_m)\setminus S$ is either $\origin{A}$ or a one-vertex DH-extension of $\origin{A}$. So, there exists $Q\subseteq V(D)$ such that $(\limb_D[B,w]*x_1*x_2* \ldots *x_m)[Q]$ is a split decomposition of either $\origin{A}$ or a one-vertex DH-extension of $\origin{A}$. Since $\limb_D[B,w]$ is an induced subgraph of $D$, we have \[(\limb_D[B,w]*x_1*x_2* \ldots *x_m)[Q]=(D*x_1*x_2* \ldots *x_m)[Q].\] For convenience, let $D_1=D*x_1*x_2* \ldots *x_m$. Note that $D[V(B)]=D_1[V(B)]$. We choose a bag $B'$ in $D_1$ such that \begin{enumerate} \item $B'$ has a vertex of $Q$, and \item the distance between the corresponding nodes of $B'$ and $B$ in the decomposition tree of $D_1$ is minimum. \end{enumerate} Here, we want to shrink all the bags between $B'$ and $B$ using Lemma~\ref{lem:reduce}. Let $T_1$ be the component of $D_1\setminus V(B')$ containing the bag $B$ and let $T_2$ be the component of $D_1\setminus V(B)$ containing the bag $B'$. Let $y:=\zeta_b(D_1, B', T_1)$. From the choice of $B'$, we have $y\notin Q$; otherwise, there exists an unmarked vertex represented by $y$, and all vertices on the path from $y$ to it should be contained in $Q$, which contradicts the minimality of the distance from $B$ to $B'$. In addition, $y$ is not the center of a star bag because $D_1[Q]$ is connected and $B'$ has at least two vertices of $Q$. Applying Lemma~\ref{lem:reduce}, there exists a canonical decomposition $D_2$ such that \begin{enumerate} \item $\origin{D_1}$ has $\origin{D_2}$ as a vertex-minor, \item $D_1[V(T_2)\setminus V(T_1)]=D_2[V(T_2)\setminus V(T_1)]$, \item $D_1[V(T_1)\setminus V(T_2)]=D_2[V(T_1)\setminus V(T_2)]$, \item either $D_2$ has no bags between $B$ and $B'$, or it has exactly one bag $B_s$ between $B$ and $B'$ such that $\abs{V(B_s)}=3$, $B_s$ is a star bag whose center is an unmarked vertex, and the two leaves of $B_s$ are adjacent to $y$ and $v$, respectively. \end{enumerate} We obtain $D_3$ from $D_2$ by removing the vertices of $V(T_2)\setminus V(T_1)$ that are not contained in $Q\cup \{y\}$, and recomposing all new recomposable marked edges.
Since recomposable marked edges appear only in the part $V(T_2)\setminus V(T_1)$, we have $D_3[V(T_1)\setminus V(T_2)]=D_2[V(T_1)\setminus V(T_2)]$, and the bag between $B$ and $B'$ still exists in $D_3$. Let $B_2$ be the bag of $D_3$ containing $y$, and denote by $H$ the canonical decomposition of $D_2[Q]$. We distinguish two cases. \vskip 0.2cm \noindent\emph{\textbf{Case 1.} $D_3$ has no bags between $B$ and $B_2$.} In this case, $D_3$ itself is a required decomposition. Choose an unmarked vertex $z$ in $D_3$ that is represented by $v$. Then $\limbtil_{D_3}[B, z]=H$, which is either $A$ or a one-vertex DH-extension of $A$. \vskip 0.2cm \noindent\emph{\textbf{Case 2.} $D_3$ has one bag $B_s$ between $B$ and $B_2$ where $\abs{V(B_s)}=3$, $B_s$ is a star bag whose center $c$ is an unmarked vertex, and two leaves $c_1$, $c_2$ of $B_s$ are adjacent to $y$ and $v$, respectively.} Choose an unmarked vertex $z$ of $D_3$ represented by $c_1$. From the construction, we can easily observe that $\limbtil_{D_3}[B_s, z]=H$. If $H=A$, then we can regard $\limbtil_{D_3}[B, c]$ as a one-vertex DH-extension of $A$ with the new vertex $c$. Therefore, we may assume that $H$ is a one-vertex DH-extension of $A$ with a newly added vertex $a$ for some unmarked vertex $a$ of $H$. Note that since $y$ is not the center of a star bag, either $y$ is a leaf of a star bag or $B_2$ is a complete bag. If $B_2$ is a star whose center is an unmarked vertex in $D_3$, then we obtain a new decomposition $D_4$ by applying a local complementation at $c$ and removing $c$ and recomposing a marked edge incident with $B_s$. Note that $D_4$ is exactly the decomposition obtained from the disjoint union of the two components of $D_3\setminus V(B_s)$ by adding a marked edge $yv$, and thus it is canonical. Also, $z$ is represented by $v$ in $D_4$, and therefore $\limbtil_{D_4}[B, z]=H$. Thus, $D_4$ is a required decomposition. Now we may assume that at least two unmarked vertices of $D_3$ are represented by $c_1$. So, $c$ is linked to at least two vertices of $\origin{H}$ in $D_3$. Since $\origin{H}$ is a one-vertex DH-extension of the connected graph $\origin{A}$, $\origin{H}\setminus a=\origin{A}$ is connected. So, if we define $D_4$ as the canonical decomposition obtained from $D_3\setminus a$, then $D_4$ is connected and $\limbtil_{D_4}[B, c]$ can be regarded as a one-vertex DH-extension of $A$. Therefore, $D_4$ is a required decomposition. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:mainobs}] We prove it by induction on $k$. If $k=0$, then $\lrw(G)\ge 1$, so $G$ has an edge and thus contains $K_2$ as a vertex-minor. Therefore, we may assume that $k\ge 1$. Let $D$ be the canonical decomposition of $G$. Since $G$ has linear rank-width at least $k+1$, by Theorem~\ref{thm:mainchap2}, there exists a bag $B$ in $D$ with three components $T_1, T_2, T_3$ of $D\setminus V(B)$ such that $f_D(B, T_i)\ge k$ for each $1\le i\le 3$. For each $1\le i\le 3$, let $v_i:=\zeta_b(D, B, T_i)$ and $w_i:=\zeta_t(D, B, T_i)$, and $z_i$ be an unmarked vertex of $D$ that is represented by $v_i$ in $D$. By Proposition~\ref{prop:limb}, we may assume that $B$ is a star with center $v_3$. We may also assume that $B$ has exactly three vertices. Since $v_1$ and $v_2$ are leaves of $B$, for each $i\in \{1,2\}$, $\limb_D[B, z_i]=T_i\setminus w_i$ and by the induction hypothesis, there exists a canonical decomposition $D_i$ in $\Psi_{k-1}$ such that $\limbhat_D[B, z_i]$ has a vertex-minor isomorphic to $\origin{D_i}$.
Then by applying Proposition~\ref{prop:replace} twice, we can obtain a canonical decomposition $D'$ such that \begin{enumerate} \item $D'[V(B)]=D[V(B)]$, \item either $D'[V(T_3)]=T_3$ or $D'[V(T_3)]=T_3*w_3$, and \item for each $i\in \{1,2\}$, $\limbtil_{D'}[B, z_i']$ is isomorphic to a canonical decomposition in $\Psi_{k-1}^+$ for some unmarked vertex $z_i'$ of $D'$ represented by $v_i$. \end{enumerate} For each $i\in \{1,2,3\}$, let $T_i'$ be the component of $D'\setminus V(B)$ containing $z_i'$, and $w_i':=\zeta_t(D', D'[V(B)], T_i')$. Note that $T_1'\setminus w_1'$ and $T_2'\setminus w_2'$ are contained in $\Psi_{k-1}^+$. We choose an unmarked vertex $z_3'$ that is represented by $v_3$ in $D'$. If we apply local complementations at $z_3'$ and then at $z_2'$ in $D'$, then \begin{enumerate} \item $B$ is changed to a star with center $v_2$, \item $T_1'$ is the same as before, \item $T_2'$ is changed to $T_2'*w_2'*z_2'$, \item $T_3'$ is changed to $T_3'*z_3'*w_3'$. \end{enumerate} Now, we again apply Proposition~\ref{prop:replace} to $D'*z_3'*z_2'$, and obtain a canonical decomposition $D''$ such that \begin{enumerate} \item $D''[V(B)]=(D'*z_3'*z_2')[V(B)]$ and $D''[V(T_1')]=(D'*z_3'*z_2')[V(T_1')]$, \item either $D''[V(T_2')]=(D'*z_3'*z_2')[V(T_2')]$ or $(D'*z_3'*z_2')[V(T_2')]*w_2'$, and \item $\limbtil_{D''}[B, z_3'']$ is isomorphic to a canonical decomposition in $\Psi_{k-1}^+$ for some unmarked vertex $z_3''$ of $D''$ represented by $v_3$. \end{enumerate} Let $T_3''$ be the component of $D''\setminus V(B)$ containing $z_3''$, and $w_3'':=\zeta_t(D'', D''[V(B)], T_3'')$. Note that $T_3''\setminus w_3''\in \Psi_{k-1}^+$ and for $i\in \{1,2\}$, $z_i'$ is still represented by $v_i$ in $D''$. Now we claim that $D''\in \Psi_k$ or $D''*z_2'\in \Psi_k$. We distinguish two cases depending on whether $D''[V(T_2')]$ is equal to $(D'*z_3'*z_2')[V(T_2')]$ or to $(D'*z_3'*z_2')[V(T_2')]*w_2'$. \vskip 0.2cm \noindent\emph{\textbf{Case 1.} $D''[V(T_2')]=(D'*z_3'*z_2')[V(T_2')]$.} We observe that $B$ is a star with the center $v_2$ in $D''$, and the three components of $D''\setminus V(B)$ are $T_1'$, $T_2'*w_2'*z_2'$, and $T_3''$. In this case, $D''*z_2'\in \Psi_k$ because \begin{enumerate} \item $B$ is a complete bag in $D''*z_2'$, and \item the three components of $(D''*z_2')\setminus V(B)$ are $T_1'*w_1'$, $T_2'*w_2'$, and $T_3''*w_3''$, \end{enumerate} and the limbs of $D''*z_2'$ with respect to $B$ are $T_1'\setminus w_1'$, $T_2'\setminus w_2'$, $T_3''\setminus w_3''$, which are contained in $\Psi_{k-1}^+$. \vskip 0.2cm \noindent\emph{\textbf{Case 2.} $D''[V(T_2')]=(D'*z_3'*z_2')[V(T_2')]*w_2'$. } We observe that $B$ is a star with the center $v_2$ in $D''$, and the three components of $D''\setminus V(B)$ are $T_1'$, $T_2'*w_2'*z_2'*w_2'$, and $T_3''$. We can see that $D''\in \Psi_k$ because the limbs with respect to $B$ are $T_1'\setminus w_1'$, $T_2'\setminus w_2'$, $T_3''\setminus w_3''$, which are contained in $\Psi_{k-1}^+$. \vskip 0.2cm We conclude that $G$ has a vertex-minor isomorphic to $\origin{D''}$ where $D''\in \Psi_{k}$, as required. \end{proof} In order to prove that $\Psi_k$ is a minimal set of canonical decompositions of distance-hereditary vertex-minor obstructions for linear rank-width at most $k$, we need to prove that for every $D\in \Psi_k$, $\origin{D}$ has linear rank-width $k+1$ and all its proper vertex-minors have linear rank-width at most $k$.
However, this property does not hold: for instance, the triangle, which is a one-vertex DH-extension of $K_2\in \Psi_0$, has linear rank-width $1$, but its proper vertex-minor $K_2$ also has linear rank-width $1$. We conjecture that the following set $\Phi_k$ forms a minimal set of distance-hereditary vertex-minor obstructions, but we leave this as an open problem. \begin{enumerate} \item $\Phi_0:=\{K_2\}$. \item For $k\ge 0$, let $\Phi_{k+1}:=\Delta(\Phi_k)$. \end{enumerate} Our intuition is supported by the following. \begin{proposition}\label{prop:phik} Let $k\geq 0$ and let $D\in \Phi_k$. Then $\lrw(\origin{D}) = k+1$ and every proper vertex-minor of $\origin{D}$ has linear rank-width at most $k$. \end{proposition} We need the following two lemmas. \begin{lemma}\label{lem:locphi} Let $D\in \Phi_k$ and $v$ be an unmarked vertex in $D$. Then $D*v\in \Phi_k$. \end{lemma} \begin{proof} We proceed by induction on $k$. We may assume that $k\ge 1$. By the construction, there exists a bag $B$ of $D$ such that the three limbs $D_1$, $D_2$, $D_3$ in $D$ corresponding to the bag $B$ are contained in $\Phi_{k-1}$. Let $B':=B$ or $B':=B*v'$ be a bag of $D*v$ depending on whether $v$ has a representative $v'$ in $B$. Let $D_1'$, $D_2'$ and $D_3'$ be the three limbs of $D*v$ corresponding to the bag $B'$ such that $D_i'$ and $D_i$ came from the same component of $D\setminus V(B)$. Then by Proposition~\ref{prop:limb}, $D_i'$ is locally equivalent to $D_i$. So by the induction hypothesis, $D_i'\in \Phi_{k-1}$. Moreover, $D*v$ is the canonical decomposition obtained from the $D_i'$ following the construction of $\Phi_k$. Therefore, $D*v\in \Phi_k$. \end{proof} \begin{lemma}[\cite{Bouchet88}]\label{lem:bouchet} Let $G$ be a graph, $v$ be a vertex of $G$ and $w$ be an arbitrary neighbor of $v$. Then every elementary vertex-minor obtained from $G$ by deleting $v$ is locally equivalent to either $G\setminus v$, $G* v\setminus v$, or $G\wedge vw\setminus v$. \end{lemma} \begin{proof}[Proof of Proposition \ref{prop:phik}] By construction, it is not hard to prove by induction, with the help of Theorem \ref{thm:mainchap2}, that $\lrw(\origin{D})=k+1$ for every decomposition $D\in \Phi_k$. For the second statement, by Lemma~\ref{lem:locphi} and Lemma~\ref{lem:bouchet}, it is sufficient to show that if $D\in \Phi_k$ and $v$ is an unmarked vertex of $D$, then $\origin{D}\setminus v$ has linear rank-width at most $k$. We use induction on $k$ to prove it. We may assume that $k\ge 1$. Let $B$ be the bag of $D$ such that the three limbs of $D$ with respect to $B$ are contained in $\Phi_{k-1}$. Clearly there is no other bag having the same property. Since $B$ has no unmarked vertices, $v$ is contained in one of the limbs $D'$, and by the induction hypothesis, $\origin{D'}\setminus v$ has linear rank-width at most $k-1$. Therefore, by Theorem~\ref{thm:mainchap2}, $\origin{D}\setminus v$ has linear rank-width at most $k$. \end{proof} We finish by pointing out that it is proved in \cite{JKO2014} that the number of distance-hereditary vertex-minor obstructions for linear rank-width $k$ is at least $2^{\Omega(3^k)}$. One can easily check by induction that the number of graphs in $\Phi_k$ is bounded by $2^{O(3^k)}$. Therefore, we can conclude that the number of distance-hereditary vertex-minor obstructions for linear rank-width $k$ is $2^{\Theta(3^k)}$.
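For intuition on the $2^{O(3^k)}$ bound, here is a back-of-the-envelope count; it is a sketch of ours that assumes the number of pairwise non-isomorphic one-vertex DH-extensions of an $n$-vertex distance-hereditary graph is polynomial in $n$, and uses the fact that graphs in $\Phi_k$ have $O(3^k)$ vertices. Writing $a_k:=\abs{\Phi_k}$, an element of $\Phi_{k+1}=\Delta(\Phi_k)$ is determined by an ordered triple from $\Phi_k$, the type of the new bag $B$, and one extension per chosen decomposition, so that \[ a_{k+1} \le \mathrm{poly}(3^k)\, a_k^{3}, \qquad\text{hence}\qquad \log a_{k+1} \le 3\log a_k + O(k), \] and unrolling the recursion gives $\log a_k \le 3^k \log a_0 + O\bigl(\sum_{j=0}^{k-1} j\, 3^{k-1-j}\bigr)=O(3^k)$, since $\sum_{j\ge 0} j\,3^{-j}$ converges.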
\section{Simpler proofs for the characterizations of graphs of linear rank-width at most $1$}\label{sec:charlrw1} In this section, we obtain simpler proofs for known characterizations of the graphs of linear rank-width at most $1$ using Theorem~\ref{thm:mainchap2}. Theorem~\ref{thm:charlrw1} was originally proved by Bui-Xuan, Kant\'{e}, and Limouzy~\cite{Bui-XuanKL13}. \begin{theorem}[\cite{Bui-XuanKL13}]\label{thm:charlrw1} Let $G$ be a connected graph and let $D$ be the canonical decomposition of $G$. The following two are equivalent. \begin{enumerate} \item $G$ has linear rank-width at most $1$. \item $G$ is distance-hereditary and $T_D$ is a path. \end{enumerate} \end{theorem} \begin{proof} Suppose first that $T_D:=u_1-u_2- \cdots -u_m$ is a path. For each $1\le i\le m$, we take any ordering $L_i$ of unmarked vertices in $\bag{D}{u_i}$. We can easily check that $L_1\oplus L_2\oplus \ldots \oplus L_m$ is a linear layout of $G$ having width at most $1$. Conversely, suppose $G$ has linear rank-width at most $1$. From the known fact that a connected graph has rank-width at most $1$ if and only if it is distance-hereditary~\cite{Oum05}, $G$ is distance-hereditary. Suppose $T_D$ is not a path. Then there exists a bag $B$ of $D$ such that $B$ has at least three adjacent bags in $D$. Thus, $D\setminus V(B)$ has at least three components $T$ where $f_D(B, T)\ge 1$. By Theorem~\ref{thm:mainchap2}, $G$ has linear rank-width at least $2$, which is a contradiction. \end{proof} From Theorem~\ref{thm:charlrw1}, we have a linear time algorithm to recognize the graphs of linear rank-width at most $1$. \begin{theorem}\label{thm:recoglrw1} For a given graph $G$, we can recognize whether $G$ has linear rank-width at most $1$ or not in time $\mathcal{O}(\abs{V(G)}+\abs{E(G)})$. \end{theorem} \begin{proof} We first compute the canonical decomposition $D$ of each connected component of $G$ using the algorithm from Theorem~\ref{thm:CED}. It takes $\mathcal{O}(\abs{V(G)}+\abs{E(G)})$ time. Then we check whether $T_D$ is a path, and whether no bag is prime. By Theorem~\ref{thm:charlrw1}, if $T_D$ is a path and no bag is prime, then we conclude that $G$ has linear rank-width at most $1$, and otherwise, $G$ has linear rank-width at least $2$. Because the total number of bags in every canonical decomposition is $\mathcal{O}(\abs{V(G)})$, this check takes $\mathcal{O}(\abs{V(G)})$ time. \end{proof} The induced subgraph obstructions for graphs of linear rank-width at most $1$ were characterized by Adler, Farley, and Proskurowski~\cite{AdlerFP11}. The obstructions consist of the known obstructions for distance-hereditary graphs~\cite{BandeltM86}, and the set $\obt$ of the induced subgraph obstructions for graphs of linear rank-width at most $1$ that are distance-hereditary. See Figure~\ref{fig:obslrw1} for the list of obstructions $\alpha_i, \beta_j, \gamma_k$ in $\obt$ where $1\le i\le 4$, $1\le j\le 6$, $1\le k\le 4$. This set $\obt$ can be obtained from Theorem~\ref{thm:charlrw1} in a much simpler way than in the original work.
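To illustrate the final step of the algorithm in Theorem~\ref{thm:recoglrw1}, here is a minimal Python sketch (ours, not part of \cite{Bui-XuanKL13}); it assumes the canonical decomposition has already been computed, and represents it by a list of bag types together with the adjacency lists of the decomposition tree.
\begin{verbatim}
# Final check of Theorem recoglrw1, assuming the canonical split
# decomposition is given. The representation below is ours: one type
# per bag, and the adjacency lists of the decomposition tree T_D.

def has_lrw_at_most_one(bag_types, tree_adj):
    # G must be distance-hereditary: no bag may be prime.
    if any(t == "prime" for t in bag_types):
        return False
    # T_D must be a path: every node has degree at most 2.
    return all(len(nbrs) <= 2 for nbrs in tree_adj)

# Example: a path of three bags, none of them prime.
print(has_lrw_at_most_one(["star", "complete", "star"],
                          [[1], [0, 2], [1]]))   # True
\end{verbatim}
Both checks visit each bag once, consistent with the $\mathcal{O}(\abs{V(G)})$ cost claimed above.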
\begin{figure} \tikzstyle{v}=[circle, draw, solid, fill=black, inner sep=0pt, minimum width=3pt] \centering \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,.8){}; \node[v](v2) at (0,1.6){}; \node[v](v3) at (-.8,-.5){}; \node[v](v4) at (.8,-.5){}; \node[v](v5) at (-1.6,-1){}; \node[v](v6) at (1.6,-1){}; \draw (v1)--(v3)--(v4)--(v1); \draw (v1)--(v2); \draw (v3)--(v5); \draw (v4)--(v6); \node [label=$\alpha_1$] at (0,-2.7) {}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,2){}; \node[v](v2) at (1-.2,2){}; \node[v](v3) at (1.5,1+.2){}; \node[v](v4) at (1.5,3-.2){}; \node[v](v5) at (2+.2,2){}; \node[v](v6) at (3,2){}; \draw (v1)--(v2)--(v3)--(v5)--(v6); \draw (v2)--(v4)--(v5); \draw (v2)--(v5); \node [label=$\alpha_2$] at (1.5,0-.7) {}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.5] \node[v](v2) at (1-.2-.6,2){}; \node[v](v3) at (1.5,1+.2-.6){}; \node[v](v4) at (1.5,3-.2+.6){}; \node[v](v5) at (2+.2+.6,2){}; \node[v](v6) at (1.5,2){}; \node[v](v7) at (1,2.5){}; \draw (v2)--(v3)--(v4)--(v5)--(v2); \draw (v2)--(v4); \draw (v3)--(v5); \draw (v6)--(v7); \node [label=$\alpha_3$] at (1.5,0-.7) {}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.5] \node[v](v2) at (0.2,2){}; \node[v](v3) at (0.7-.3,1.5){}; \node[v](v4) at (2.3+.3,2.5){}; \node[v](v5) at (2.8,2){}; \node[v](v6) at (1.5,3){}; \node[v](v7) at (1.5,1){}; \draw (v2)--(v4)--(v5)--(v3)--(v2); \draw (v6)--(v2);\draw (v6)--(v3);\draw (v6)--(v4);\draw (v6)--(v5); \draw (v7)--(v2);\draw (v7)--(v3);\draw (v7)--(v4);\draw (v7)--(v5); \node [label=$\alpha_4$] at (1.5,0-.7) {}; \end{tikzpicture} \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,2){}; \node[v](v2) at (1-.2,2){}; \node[v](v3) at (1.5,1+.2){}; \node[v](v4) at (1.5,3-.2){}; \node[v](v5) at (2+.2,2){}; \node[v](v6) at (3,2){}; \draw (v1)--(v2)--(v3)--(v5)--(v6); \draw (v2)--(v4)--(v5); \node [label=$\beta_1$] at (1.5,0-.5) {}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,2){}; \node[v](v2) at (1-.2,2){}; \node[v](v3) at (1.5,1+.2){}; \node[v](v4) at (1.5,3-.2){}; \node[v](v5) at (2+.2,2){}; \node[v](v6) at (3,2){}; \draw (v1)--(v2)--(v3)--(v5)--(v6); \draw (v2)--(v4)--(v5); \draw (v3)--(v1)--(v4); \node [label=$\beta_2$] at (1.5,0-.5) {}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,2){}; \node[v](v2) at (1-.2,2){}; \node[v](v3) at (1.5,1+.2){}; \node[v](v4) at (1.5,3-.2){}; \node[v](v5) at (2+.2,2){}; \node[v](v6) at (3,2){}; \draw (v1)--(v2)--(v3)--(v5)--(v6); \draw (v2)--(v4)--(v5); \draw (v3)--(v1)--(v4); \draw (v3)--(v6)--(v4); \node [label=$\beta_3$] at (1.5,0-.5) {}; \end{tikzpicture} \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,2){}; \node[v](v2) at (1-.2,2){}; \node[v](v3) at (1.5,1+.2){}; \node[v](v4) at (1.5,3-.2){}; \node[v](v5) at (2+.2,2){}; \node[v](v6) at (3,2){}; \draw (v1)--(v2)--(v3)--(v5)--(v6); \draw (v2)--(v4)--(v5); \draw(v3)--(v4); \node [label=$\beta_4$] at (1.5,0-.5) {}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,2){}; \node[v](v2) at (1-.2,2){}; \node[v](v3) at (1.5,1+.2){}; \node[v](v4) at (1.5,3-.2){}; \node[v](v5) at (2+.2,2){}; \node[v](v6) at (3,2){}; \draw (v1)--(v2)--(v3)--(v5)--(v6); \draw (v2)--(v4)--(v5); \draw (v3)--(v1)--(v4); \draw(v3)--(v4); \node [label=$\beta_5$] at (1.5,0-.5) {}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,2){}; \node[v](v2) at (1-.2,2){}; \node[v](v3) at (1.5,1+.2){}; \node[v](v4) at (1.5,3-.2){}; \node[v](v5) at (2+.2,2){}; \node[v](v6) at (3,2){}; \draw 
(v1)--(v2)--(v3)--(v5)--(v6); \draw (v2)--(v4)--(v5); \draw (v3)--(v1)--(v4); \draw (v3)--(v6)--(v4); \draw(v3)--(v4); \node [label=$\beta_6$] at (1.5,0-.5) {}; \end{tikzpicture} \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,1){}; \node[v](v2) at (-.8,1.8){}; \node[v](v3) at (-1.6,.8){}; \node[v](v4) at (.8,1.8){}; \node[v](v5) at (1.6,.8){}; \node[v](v6) at (.6,.2){}; \node[v](v7) at (-.2,-.6){}; \draw (v1)--(v2)--(v3); \draw (v1)--(v4)--(v5); \draw (v1)--(v6)--(v7); \node [label=$\gamma_1$] at (0,-2.5) {}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,1){}; \node[v](v2) at (-.8,1.8){}; \node[v](v3) at (-1.6,.8){}; \node[v](v4) at (.8,1.8){}; \node[v](v5) at (1.6,.8){}; \node[v](v6) at (.6,.2){}; \node[v](v7) at (-.2,-.6){}; \draw (v1)--(v2)--(v3); \draw (v1)--(v4)--(v5); \draw (v1)--(v6)--(v7); \draw (v1)--(v3); \node [label=$\gamma_2$] at (0,-2.5) {}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,1){}; \node[v](v2) at (-.8,1.8){}; \node[v](v3) at (-1.6,.8){}; \node[v](v4) at (.8,1.8){}; \node[v](v5) at (1.6,.8){}; \node[v](v6) at (.6,.2){}; \node[v](v7) at (-.2,-.6){}; \draw (v1)--(v2)--(v3); \draw (v1)--(v4)--(v5); \draw (v1)--(v6)--(v7); \draw (v1)--(v3); \draw (v1)--(v5); \node [label=$\gamma_3$] at (0,-2.5) {}; \end{tikzpicture}\quad \begin{tikzpicture}[scale=0.5] \node[v](v1) at (0,1){}; \node[v](v2) at (-.8,1.8){}; \node[v](v3) at (-1.6,.8){}; \node[v](v4) at (.8,1.8){}; \node[v](v5) at (1.6,.8){}; \node[v](v6) at (.6,.2){}; \node[v](v7) at (-.2,-.6){}; \draw (v1)--(v2)--(v3); \draw (v1)--(v4)--(v5); \draw (v1)--(v6)--(v7); \draw (v1)--(v3); \draw (v1)--(v5); \draw (v1)--(v7); \node [label=$\gamma_4$] at (0,-2.5) {}; \end{tikzpicture} \caption{The induced subgraph obstructions for graphs of linear rank-width at most $1$ that are distance-hereditary.} \label{fig:obslrw1} \end{figure} \begin{table} \begin{center} \begin{tabular}[t]{|c| c| c| c| c|} \hline type of $B$ & type of $v_1w_1$ & type of $v_2w_2$ & type of $v_3w_3$ & induced subgraph \\ \hline A complete bag& $KS_p$ & $KS_p$ & $KS_p$ & $\alpha_1$ \\ & $KS_c$ & $KS_p$ & $KS_p$ & $\alpha_2$ \\ & $KS_c$ & $KS_c$ & $KS_p$ & $\alpha_3$ \\ & $KS_c$ & $KS_c$ & $KS_c$ & $\alpha_4$ \\ \hline A star bag & $S_cS_c$ & $S_pS_p$ & $S_pS_p$ & $\beta_1$ \\ with center at $v_1$ & $S_cS_c$ & $S_pS_p$ & $S_pK$ & $\beta_2$ \\ & $S_cS_c$ & $S_pK$ & $S_pK$ & $\beta_3$ \\ & $S_cK$ & $S_pS_p$ & $S_pS_p$ & $\beta_4$ \\ & $S_cK$ & $S_pS_p$ & $S_pK$ & $\beta_5$\\ & $S_cK$ & $S_pK$ & $S_pK$ & $\beta_6$\\ \hline A star bag & $S_pS_p$ & $S_pS_p$ & $S_pS_p$ & $\gamma_1$ \\ with center at & $S_pK$ & $S_pS_p$ & $S_pS_p$ & $\gamma_2$\\ a vertex & $S_pK$ & $S_pK$ & $S_pS_p$ & $\gamma_3$\\ other than $v_i$ & $S_pK$ & $S_pK$ & $S_pK$ & $\gamma_4$ \\ \hline \end{tabular} \end{center} \caption{Summary of all cases in Theorem~\ref{thm:charlrw2}}\label{table2} \end{table} \begin{theorem}[\cite{AdlerFP11}]\label{thm:charlrw2} Let $G$ be a connected graph and let $D$ be the canonical split decomposition of $G$. The following are equivalent. \begin{enumerate} \item $G$ has linear rank-width at most $1$. \item $G$ is distance-hereditary and $G$ has no induced subgraph isomorphic to a graph in $\{\alpha_1, \alpha_2, \alpha_3, \alpha_4, \beta_1, \beta_2, \beta_3, \beta_4, \beta_5, \beta_6, \gamma_1, \gamma_2, \gamma_3, \gamma_4\}$. \item $G$ has no pivot-minor isomorphic to $C_5$, $C_6$, $\alpha_1$, $\alpha_3$, $\alpha_4$, $\alpha_6$, $\gamma_1$, and $\gamma_3$.
\item $G$ has no vertex-minor isomorphic to $C_5$, $\alpha_1$, and $\gamma_1$. \end{enumerate} \end{theorem} \begin{proof} By Lemma~\ref{lem:vm-rw}, $((1)\rightarrow (4))$ is clear as $C_5$, $\alpha_1$ and $\gamma_1$ have linear rank-width $2$. We can easily confirm the directions $((4)\rightarrow (3)\rightarrow (2))$; see~\cite{AdlerFP11}. We give a proof of $((2)\rightarrow (1))$ by proving its contrapositive. Suppose that $G$ is distance-hereditary and has linear rank-width at least $2$. By Theorem~\ref{thm:charlrw1}, $T_D$ is not a path. Thus there exists a bag $B$ of $D$ such that $D\setminus V(B)$ has at least three components $T_1$, $T_2$, $T_3$. For each $i\in \{1,2,3\}$, let $v_i:=\zeta_b(D,B,T_i)$ and $w_i:=\zeta_t(D,B,T_i)$. We have three cases: $B$ is a complete bag, or $B$ is a star bag with the center at one of $v_1, v_2, v_3$, or $B$ is a star bag with the center at a vertex of $V(B)\setminus \{v_1, v_2, v_3\}$. If $B$ is a complete bag, then $G$ has an induced subgraph isomorphic to one of $\alpha_1, \alpha_2, \alpha_3, \alpha_4$ depending on the types of the marked edges $v_iw_i$. If $B$ is a star bag with the center at one of $v_1, v_2, v_3$, then $G$ has an induced subgraph isomorphic to one of $\beta_1, \beta_2, \ldots, \beta_6$. Finally, if $B$ is a star bag with the center at a vertex of $V(B)\setminus \{v_1, v_2, v_3\}$, then $G$ has an induced subgraph isomorphic to one of $\gamma_1, \gamma_2, \gamma_3, \gamma_4$. We summarize all the cases in Table~\ref{table2}. \end{proof} \section{Conclusion} In this paper we used the characterization of the linear rank-width of distance-hereditary graphs given in \cite{AdlerKK15} to \begin{enumerate} \item compute the set of distance-hereditary vertex-minor obstructions for linear rank-width $k$ and at the same time give a nearly tight bound on the number of distance-hereditary vertex-minor obstructions. \item prove that Question \ref{que:tree} holds if and only if it holds for prime graphs. \end{enumerate} Computing an upper bound on the size of vertex-minor obstructions for graphs of bounded linear rank-width is a challenging open question. Until now, only a bound on obstructions for graphs of bounded rank-width is known \cite{Oum05}. Secondly, resolving Question \ref{que:tree} in all graphs seems to require new techniques, as we currently have no idea how to reduce a graph of small rank-width but large linear rank-width to a distance-hereditary graph whose decomposition tree has large path-width. One may start with graphs of rank-width $2$.
\section{Sampling from the conditional Bernoulli distribution} \subsection{Problem statement} Let $x=(x_{1},\ldots,x_{N})$ be an $N$-vector in $\{0,1\}^{N}$, with sum $\sum_{n=1}^{N}x_{n}=I$. Let $(p_{1},\ldots,p_{N}) \in (0,1)^N$, and denote the associated ``odds'' by $w_{n}=p_{n}/(1-p_{n})$. Define the set $S_{z}=\{n\in[N]:x_{n}=z\}$ for $z\in\{0,1\}$, where $[N]=\{1,\ldots,N\}$, i.e. $S_z$ is the set of indices $n\in[N]$ at which $x_n=z$. We consider the task of sampling $x\in\{0,1\}^N$ from a distribution obtained by specifying an independent Bernoulli distribution with probability $p_n$ on each component $x_n$, and conditioning on $\sum_{n=1}^{N}x_{n} = I$ for some value $0\leq I\leq N$. This is known as the conditional Bernoulli distribution and will be denoted by $\ensuremath{\text{CB}}(p,I)$. The support of $\ensuremath{\text{CB}}(p,I)$ is denoted as $\mathbb{X}=\{x\in\{0,1\}^N: \sum_{n=1}^N x_n = I\}$. We assume that $1\leq I \leq N/2$, since we can always swap the labels ``0'' and ``1''. We consider the asymptotic regime where $N$ and $I$ go to infinity at the same rate. One can sample exactly from $\ensuremath{\text{CB}}(p,I)$ \citep{chen1994weighted,chen1997statistical}, for a cost of order $IN$, thus order $N^2$ in the context of interest here; see Appendix \ref{appx:exactsampling}. As an alternative, we consider a simple Markov chain Monte Carlo (MCMC) algorithm that leaves $\ensuremath{\text{CB}}(p,I)$ invariant \citep{chen1994weighted,liu1995bayesian}. Starting from an arbitrary state $x\in\mathbb{X}$, this MCMC performs the following steps at each iteration. \begin{enumerate} \item Sample $i_{0}\in S_{0}$ and $i_{1} \in S_{1}$ uniformly, independently from one another. \item Propose to set $x_{i_{0}} = 1$ and $x_{i_{1}} = 0$, and accept with probability $\min(1,w_{i_{0}}/w_{i_{1}})$. \end{enumerate} By keeping track of the sets $S_0$ and $S_1$, the algorithm can be implemented using a constant cost per iteration. The purpose of this article is to show that this Markov chain converges to its target distribution $\ensuremath{\text{CB}}(p,I)$ in the order of $N \log N$ iterations, under mild conditions on $p$ and $I$. As the cost per iteration is constant, this provides an overall competitive scheme to sample from $\ensuremath{\text{CB}}(p,I)$. \subsection{Approach and related works} We denote the transition kernel of the above Metropolis--Hastings algorithm by $P(x,\cdot)$, and a Markov chain generated using the algorithm by $(x^{(t)})_{t\geq0}$, starting from $x^{(0)}\sim\pi_{0}$. The initial distribution $\pi_0$ could correspond to setting $I$ components of $x^{(0)}$ to $1$, chosen uniformly without replacement, or setting $x_i = 1$ for $i=1,\ldots,I$ and the other components to $0$. If all probabilities in $p$ are identical, the chain is equivalent to the Bernoulli--Laplace diffusion model, which is well-studied \citep{diaconis1987time,donnelly1994approach,eskenazis2020}. In particular, \citet{diaconis1987time} showed that mixing of the chain occurs in the order of $N\log N$ iterations when $I$ is proportional to $N$, via a Fourier analysis of the group structure of the chain. A mixing time of the same order can be obtained with a simple coupling argument \citep{guruswami2000rapidly}. Here we consider the case where $p$ is a vector of realizations of random variables in $(0,1)$, and provide conditions under which the mixing time remains of order $N\log N$.
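For concreteness, here is a minimal Python sketch of one iteration of the algorithm described above (function and variable names are ours); maintaining $S_0$ and $S_1$ as arrays and swapping their entries in place is what gives the constant cost per iteration.
\begin{verbatim}
import random

def cb_mcmc_step(x, S0, S1, w):
    """One swap move of the Metropolis-Hastings kernel leaving CB(p, I)
    invariant; S0, S1 list the indices where x is 0, resp. 1, and
    w[n] = p[n] / (1 - p[n]) are the odds."""
    k0 = random.randrange(len(S0))   # uniform i0 in S0
    k1 = random.randrange(len(S1))   # uniform i1 in S1, independently
    i0, i1 = S0[k0], S1[k1]
    if random.random() < min(1.0, w[i0] / w[i1]):   # accept the swap
        x[i0], x[i1] = 1, 0
        S0[k0], S1[k1] = i1, i0      # O(1) bookkeeping of the two sets
    return x, S0, S1
\end{verbatim}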
As we will see in Section \ref{sec:cvgcoupling}, the coupling argument alone falls apart in the case of unequal probabilities $p$, but can be successfully combined with a partition of the pair of state spaces into favorable and unfavorable pairs, to be defined in Section \ref{sec:partition}. Bounds on the transitions from parts of the space are used to define a simple Markov chain on the partition labels, which allows us to obtain our bounds in Section \ref{sec:mixingtime}. The problem of sampling $\ensuremath{\text{CB}}(p,I)$ has various applications, such as survey sampling \citep{chen1994weighted}, hypothesis testing in logistic regression \citep{chen1997statistical,brostrom2000acceptance}, testing the hypothesis of proportional hazards \citep{brostrom2000acceptance}, and sampling from a determinantal point process \citep{hough2006determinantal,kulesza2012determinantal}. \section{Convergence rate via couplings \label{sec:cvgcoupling}} \subsection{General strategy} Consider two chains $(x^{(t)})$ and $(\tilde{x}^{(t)})$, each marginally evolving according to $P$, with initialization $x^{(0)}\sim \pi^{(0)}$ and $\tilde{x}^{(0)}\sim \ensuremath{\text{CB}}(p,I)$. Define the sets $\tilde{S}_{z}=\{n\in[N]:\tilde{x}_{n}=z\}$ for $z=0,1$. The Hamming distance between two states $x$ and $\tilde{x}$ is $d(x,\tilde{x}) = \sum_{n=1}^N \mathds{1}(x_n\neq \tilde{x}_n)$. Since $x,\tilde{x}\in\mathbb{X}$ sum to $I$, the distance $d(x,\tilde{x})$ must be an even number. If $d(x,\tilde{x})=D$ then $|S_{0}\cap\tilde{S}_{0}|=N-I-D/2$, $|S_{1}\cap\tilde{S}_{1}|=I-D/2$ and $|\tilde{S}_{0}\cap S_{1}|=|S_{0}\cap\tilde{S}_{1}|=D/2$, where $|\cdot|$ denotes the cardinality of a set. Let $d^{(t)}$ denote the distance between $x^{(t)}$ and $\tilde{x}^{(t)}$ at iteration $t$. Following e.g. \citet[Section 6.2,][]{guruswami2000rapidly}, in the case of identical probabilities $p=(p_1,\ldots,p_N)$, a path coupling strategy \citep{bubley1997path} gives an accurate upper bound on the mixing time of the chain. The strategy is to study the distance $d^{(t)}$ as the iterations progress. Denote the total variation distance between the law of $x^{(t)}$ and its limiting distribution $\ensuremath{\text{CB}}(p,I)$ by $\|x^{(t)}-\ensuremath{\text{CB}}(p,I)\|_{\text{TV}}$. By the coupling inequality and Markov's inequality, \begin{align*} \|x^{(t)}-\ensuremath{\text{CB}}(p,I)\|_{\text{TV}} \leq \mathbb{P}\left(x^{(t)}\neq\tilde{x}^{(t)}\right) & =\mathbb{P}\left(d^{(t)}>0\right) \leq\mathbb{E}\left[d^{(t)}\right]. \end{align*} If the contraction $\mathbb{E}[d^{(t)}|x^{(t-1)}=x, \tilde{x}^{(t-1)}=\tilde{x}]\leq(1-c)d^{(t-1)}$ holds for all $x,\tilde{x}\in\mathbb{X}$ with $c\in(0,1)$, by induction this implies that $\|x^{(t)}-\ensuremath{\text{CB}}(p,I)\|_{\text{TV}}$ is less than $(1-c)^t \mathbb{E}[d^{(0)}]$. Noting that $\mathbb{E}[d^{(0)}]\leq N$ and writing $\kappa = -(\log(1-c))^{-1}>0$, an upper bound on the $\epsilon$-mixing time, defined as the first time $t$ at which $\|x^{(t)}-\ensuremath{\text{CB}}(p,I)\|_{\text{TV}}\leq \epsilon$, is given by $\kappa \log(N/\epsilon)$. Thus we seek a contraction result, with $c$ as large as possible. Instead of considering all pairs $(x,\tilde{x})\in\mathbb{X}^2$, the path coupling argument allows us to restrict our attention to contraction from pairs of adjacent states. We write the set of adjacent states as $\bar{\mathbb{X}}_{adj} = \{(x,\tilde{x})\in\mathbb{X}^2: d(x,\tilde{x})=2\}$; see Appendix \ref{sec:pathcoupling} for more details on path coupling. 
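Before proceeding, let us record the arithmetic linking a contraction rate of order $N^{-1}$ to a mixing time of order $N\log N$ (a one-line computation, stated here for later reference): if $c=\bar{c}/N$ for a constant $\bar{c}>0$, then \[ \kappa=-\frac{1}{\log(1-\bar{c}/N)}\le \frac{N}{\bar{c}}, \qquad\text{so that}\qquad (1-c)^{t}\,N\le\epsilon \quad\text{whenever}\quad t\ge \frac{N}{\bar{c}}\,\log\frac{N}{\epsilon}, \] using $\log(1-u)\le-u$ for $u\in(0,1)$.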
\subsection{Contraction from adjacent states \label{sec:contraction}} We now introduce a coupling $\bar{P}$ of $P(x,\cdot)$ and $P(\tilde{x},\cdot)$, for any pair $(x,\tilde{x})\in\mathbb{X}^2$, although we will primarily be interested in the case $(x,\tilde{x})\in\bar{\mathbb{X}}_{adj}$. First, sample $i_{0},\tilde{i}_{0}$ from the following maximal coupling of the uniform distributions on $S_{0}$ and $\tilde{S}_{0}$: \begin{enumerate} \item with probability $|S_{0}\cap\tilde{S}_{0}|/(N-I)$, sample $i_{0}$ uniformly in $S_{0}\cap\tilde{S}_{0}$ and set $\tilde{i}_{0}=i_{0}$, \item otherwise sample $i_{0}$ uniformly in $S_{0}\setminus\tilde{S}_{0}$ and $\tilde{i}_{0}$ uniformly in $\tilde{S}_{0}\setminus S_{0}$, independently. \end{enumerate} We then sample $i_{1},\tilde{i}_{1}$ with a similar coupling, independently of the pair $(i_{0},\tilde{i}_{0})$. Using these proposed indices, swaps are accepted or rejected using a common uniform random number. These steps define a coupled transition kernel $\bar{P}((x,\tilde{x}),\cdot)$. Under $\bar{P}$, the distance between the chains can only decrease, so for $(x',\tilde{x}') \sim \bar{P}((x,\tilde{x}),\cdot)$ from $(x,\tilde{x}) \in \bar{\mathbb{X}}_{adj}$, the distance $d(x',\tilde{x}')$ is either zero or two. We denote the expected contraction from $(x,\tilde{x})$ by $c(x,\tilde{x})$, i.e. $\mathbb{E}[d(x',\tilde{x}')|x,\tilde{x}] = (1-c(x,\tilde{x})) d(x,\tilde{x})$. In the case $(x,\tilde{x}) \in \bar{\mathbb{X}}_{adj}$, we denote by $a$ the single index at which $x_{a}=0,\tilde{x}_{a}=1$, and by $b$ the single index at which $x_{b}=1,\tilde{x}_{b}=0$. An illustration of such states is in Table \ref{table:adjset}. Up to a re-labelling of $x$ and $\tilde{x}$, we can assume $w_a \leq w_b$. By considering all possibilities when propagating $(x,\tilde{x}) \in \bar{\mathbb{X}}_{adj}$ through $\bar{P}$, we find that \begin{align} \label{eq:contractionrate} c(x,\tilde{x}) & =\frac{1}{N-I}\times\frac{1}{I}\times \left[\left|1 - \frac{w_a}{w_b}\right| +\sum_{i_{1}\in S_{1}\cap\tilde{S}_{1}}\min\left(1,\frac{w_{a}}{w_{i_{1}}}\right)+\sum_{i_{0}\in S_{0}\cap\tilde{S}_{0}}\min\left(1,\frac{w_{i_{0}}}{w_{b}}\right)\right]. \end{align} The derivation of \eqref{eq:contractionrate} is in Appendix \ref{appx:contractionrate}. The next question is whether this contraction rate $c(x,\tilde{x})$ can be lower bounded by a quantity of order $N^{-1}$; if this is the case, a mixing time of order $N\log N$ would follow. \begin{table} \[ \begin{array}{cccccccccccc} &1& & & & a & b& & & & & N\\ x &0 & \ldots & 1 & \ldots & 0 & 1 &\ldots & 1 & 0 & \ldots & 0 \\ \tilde{x} & 0 & \ldots & 1 & \ldots & 1 & 0 & \ldots & 1 & 0 & \ldots & 0 \end{array} \] \caption{Adjacent states $(x,\tilde{x})\in \bar{\mathbb{X}}_{adj}$. They differ at indices $a$ and $b$ only, with $x_a=\tilde{x}_b=0$ and $x_b = \tilde{x}_a = 1$. The other components of $x$ and $\tilde{x}$ are identical, and equal to $0$ or $1$. \label{table:adjset}} \end{table} \subsection{Shortcomings} If the probabilities $p$ are identical, $c(x,\tilde{x})$ simplifies to $(N-2)/\{(N-I)I\}$ for all $(x,\tilde{x})\in \bar{\mathbb{X}}_{adj} $. Assuming that $I \propto N$, this is of order $N^{-1}$ and leads to a mixing time in $N \log N$ \citep{guruswami2000rapidly}. It follows from \citet{diaconis1987time} that this contraction rate is sharp in its dependency on $N$. The same conclusion holds in the case where $(p_n)$ are not identical but are bounded away from $0$ and $1$, i.e. 
$w_{n}\in [w_{lb},w_{ub}]$ with $0<w_{lb}<w_{ub}<\infty$ independent of $N$. In that case, we obtain the rate $c(x,\tilde{x}) \geq (N-2)/\{(N-I)I\}\cdot w_{lb}/w_{ub}$, which worsens as the ratio $w_{lb} / w_{ub}$ gets smaller. The main difficulty addressed in this article arises when $\min_n p_n$ and $\max_n p_n$ get arbitrarily close to $0$ and $1$ as $N$ increases. This scenario is common: for example, if $(p_n)$ are independent Uniform$(0,1)$, we have $\min_n w_{n} \sim N^{-1}$ and $\max_n w_{n} \sim N$. Thus for $w_a = \min_n w_{n}$ and $w_b = \max_n w_{n}$, the contraction in \eqref{eq:contractionrate} can be of order $N^{-2}$ when $I \propto N$, which leads to an upper bound on the mixing time of order $N^2 \log N$. To set our expectations appropriately, we follow the approach of \citet{biswas2019estimating} to obtain empirical upper bounds on the mixing time as $N$ increases. Details of the approach, which is itself based on couplings, are given in Appendix \ref{appx:empiricalmixing}. Figure \ref{fig:mixingtimeNlogN} shows the estimated upper bound on the mixing time, divided by $N \log N$, as a function of $N$, when $(p_n)$ are generated (once for each value of $N$) from independent Uniform(0,1) and $I$ is set to $N/2$. The figure suggests that the mixing time might scale as $N \log N$. \begin{figure}[t] \begin{centering} \subfloat[Meeting times $/ N\log N$]{\begin{centering} \includegraphics[width=0.45\textwidth]{{scaledmeetings}.pdf} \par\end{centering} } \hspace*{1cm} \subfloat[Estimated $1\%$-mixing times $/ N\log N$]{\begin{centering} \includegraphics[width=0.45\textwidth]{{mixingtimes}.pdf} \par\end{centering} } \par\end{centering} \caption{ Meeting times (\emph{left}) and estimated upper bounds on the mixing time of the chain $x^{(t)}$ targeting $\ensuremath{\text{CB}}(p,I)$ (\emph{right}), divided by $N\log N$, against $N$. Here the probabilities $p$ are independent Uniform(0,1) and $I=N/2$. \label{fig:mixingtimeNlogN} } \end{figure} Our contribution is to refine the coupling argument in order to establish an upper bound on the mixing time of order $N\log N$, under conditions which allow for example $(p_n)$ to be independent Uniform(0,1). A practical consequence of our result stated in Section \ref{sec:mixingtime} is that the simple MCMC algorithm is competitive compared to exact sampling strategies for $\ensuremath{\text{CB}}(p,I)$. \section{Proposed analysis} \subsection{Favorable and unfavorable states \label{sec:partition}} In the worst case scenario, $w_a$ might be of order $N^{-1}$ and $w_b$ of order $N$, resulting in a rate $c(x,\tilde{x})$ of order $N^{-2}$. However, this is not necessarily typical of a pair of states $(x,\tilde{x})\in\bar{\mathbb{X}}_{adj}$. This prompts us to partition $\bar{\mathbb{X}}_{adj}$ into ``unfavorable'' states, from which the probability of contracting is smaller than order $N^{-1}$, and ``favorable'' states, from which meeting occurs with probability of order $N^{-1}$. The precise definition of this partition will be made in relation to the odds $(w_n)$. Since $c(x,\tilde{x})$ in \eqref{eq:contractionrate} depends on $(w_n)$ and $I$, we will care about statements holding with high probability under the distribution of $(w_n)$ and $I$, which are described in Assumptions \ref{def:reasonableodds} and \ref{def:ones}.
Fortunately, we will see in Proposition \ref{prop:transitionfavorable} that favorable states can be reached from unfavorable ones with probability at least of order $N^{-1}$, while unfavorable states are visited from favorable ones with probability less than order $N^{-1}$. This will prove enough for us to establish a mixing time of order $N\log N$ in Theorem \ref{thm:convergencerate}. \begin{assumption}\label{def:reasonableodds} (Condition on the odds). The odds $(w_n)$ are such that there exist $\zeta>0$, $0<l<r<\infty$ and $\eta>0$ such that for all $N$ large enough, \[\mathbb{P}\left(\left|\left\{n\in[N]: w_n \notin (l, r)\right\}\right|\leq \zeta N\right)\geq 1-\exp(-\eta N).\] \end{assumption} This assumption states that, with exponentially high probability, at most a fixed proportion $\zeta$ of the odds fall outside an interval $(l,r)$ that can be chosen independently of $N$. The condition can be verified, for example, by applying Hoeffding's inequality to the indicators $\mathds{1}(w_n\notin(l,r))$ when the odds $(w_n)$ are independently and identically distributed on $(0,\infty)$ and $(l,r)$ is chosen such that $\mathbb{P}(w_1\notin(l,r))<\zeta$; it also holds under weaker conditions. The statement ``for all $N$ large enough'' means for all $N\geq N_0$ where $N_0\in\mathbb{N}$. \begin{assumption}\label{def:ones} (Conditions on $I$). There exist $0<\xi\leq 1/2$ and $\eta' >0$ such that for all $N$ large enough, \[\mathbb{P}\left(\xi N \le I \right)\geq 1 - \exp(- \eta' N).\] \end{assumption} This assumption formalizes what we mean by $I \propto N$, and is probabilistic rather than setting $I=\lfloor \xi N\rfloor$ for some $\xi\in (0,1/2]$. It implies that $\xi N^2/2 \le (N-I)I \le (1 - \xi) N^2/2$ with high probability. Recall that we have assumed $I\leq N/2$ without loss of generality. \begin{proposition} \label{prop:transitionfavorable} Suppose Assumptions \ref{def:reasonableodds} and \ref{def:ones} hold such that $\zeta < \xi$. Then we can define $\xi_{\mathrm{F}\to\mathrm{D}},\xi_{\mathrm{U}\to\mathrm{F}},\xi_{\mathrm{F}\to\mathrm{U}},\nu>0$ and $0<w_{lo}<w_{hi}<\infty$ such that, for all $N$ large enough, with probability at least $1-\exp(-\nu N)$, the sets of favorable and unfavorable states defined as \begin{align} \bar{\mathbb{X}}_\mathrm{U} &= \{ (x,\tilde{x})\in \bar{\mathbb{X}}_{adj}: w_{a}<w_{lo} \text{ and } w_{b}>w_{hi}\}, \label{eq:unfavorablestates}\\ \bar{\mathbb{X}}_\mathrm{F} &= \{(x,\tilde{x})\in \bar{\mathbb{X}}_{adj}: w_{a} \ge w_{lo} \ \text{ or } \ w_{b} \le w_{hi}\}, \label{eq:favorablestates} \end{align} and the ``diagonal'' set $ \bar{\mathbb{X}}_{\mathrm{D}} = \{(x,\tilde{x})\in \mathbb{X}^2: x = \tilde{x}\}$, satisfy the following statements under the coupling $\bar{P}$ described in Section \ref{sec:contraction}, \begin{align} \bar{P}((x,\tilde{x}), \bar{\mathbb{X}}_{\mathrm{D}}) &\geq \xi_{\mathrm{F}\to\mathrm{D}} /N,\quad \forall (x,\tilde{x}) \in \bar{\mathbb{X}}_\mathrm{F}, \label{eq:transit:favtomeet}\\ \bar{P}((x,\tilde{x}), \bar{\mathbb{X}}_\mathrm{F}) &\geq \xi_{\mathrm{U}\to\mathrm{F}}/N, \quad\forall (x,\tilde{x}) \in \bar{\mathbb{X}}_\mathrm{U}, \label{eq:transit:unfavtofav}\\ \bar{P}((x,\tilde{x}), \bar{\mathbb{X}}_\mathrm{U}) &\leq \xi_{\mathrm{F}\to\mathrm{U}}/N,\quad \forall (x,\tilde{x}) \in \bar{\mathbb{X}}_\mathrm{F}.\label{eq:transit:favtounfav} \end{align} \end{proposition} The proof in Appendix \ref{appx:proofs:transition} relies on a careful inspection of the various cases arising in the propagation of the coupled chains.
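To make the propagation of the coupled chains concrete, here is a minimal Python sketch of one transition of $\bar{P}$ from Section~\ref{sec:contraction} (names are ours; the index sets of the two chains are represented as Python sets, with $|S_0|=|\tilde{S}_0|$ and $|S_1|=|\tilde{S}_1|$ by construction).
\begin{verbatim}
import random

def coupled_index(S, St):
    """Maximal coupling of the uniform distributions on S and St
    (two sets of equal size)."""
    common = list(S & St)
    if random.random() < len(common) / len(S):
        i = random.choice(common)          # common draw
        return i, i
    return random.choice(list(S - St)), random.choice(list(St - S))

def coupled_step(x, xt, S0, S1, T0, T1, w):
    """One transition of the coupled kernel: coupled index proposals,
    then a common uniform for both accept/reject decisions."""
    i0, j0 = coupled_index(S0, T0)
    i1, j1 = coupled_index(S1, T1)
    u = random.random()                    # shared uniform
    if u < min(1.0, w[i0] / w[i1]):        # move of the first chain
        x[i0], x[i1] = 1, 0
        S0.remove(i0); S0.add(i1); S1.remove(i1); S1.add(i0)
    if u < min(1.0, w[j0] / w[j1]):        # move of the second chain
        xt[j0], xt[j1] = 1, 0
        T0.remove(j0); T0.add(j1); T1.remove(j1); T1.add(j0)
    return x, xt
\end{verbatim}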
The proposition provides bounds on the transition probabilities between the subsets $\bar{\mathbb{X}}_\mathrm{U}$, $\bar{\mathbb{X}}_\mathrm{F}$ and $\bar{\mathbb{X}}_\mathrm{D}$. \subsection{Chasing chain and mixing time \label{sec:mixingtime}} We relate the coupled chain $(x^{(t)},\tilde{x}^{(t)})$ to an auxiliary Markov chain denoted by $(Z^{(t)})$, defined on a space with three states $\{1,2,3\}$, associated with the subsets $\bar{\mathbb{X}}_\mathrm{U}$, $\bar{\mathbb{X}}_\mathrm{F}$ and $\bar{\mathbb{X}}_\mathrm{D}$, respectively. We introduce the Markov transition matrix \begin{align} Q = \begin{pmatrix} 1-\xi_{\mathrm{U}\to\mathrm{F}}/N & \xi_{\mathrm{U}\to\mathrm{F}}/N & 0\\ \xi_{\mathrm{F}\to\mathrm{U}}/N & 1-\xi_{\mathrm{F}\to\mathrm{U}}/N-\xi_{\mathrm{F}\to\mathrm{D}}/N & \xi_{\mathrm{F}\to\mathrm{D}}/N\\ 0 & 0 & 1 \end{pmatrix}, \label{eq:transition3states} \end{align} where the constants $\xi_{\mathrm{F}\to\mathrm{D}},\xi_{\mathrm{U}\to\mathrm{F}},\xi_{\mathrm{F}\to\mathrm{U}}>0$ are given by Proposition \ref{prop:transitionfavorable}, and we assume $N$ is large enough for each entry, including $1-\xi_{\mathrm{U}\to\mathrm{F}}/N$ and $ 1-\xi_{\mathrm{F}\to\mathrm{U}}/N-\xi_{\mathrm{F}\to\mathrm{D}}/N$, to be positive. We then observe that a Markov chain $(Z^{(t)})$ with transition $Q$ is such that there exists $r\in(0,1)$ independent of $N$ satisfying $\mathbb{P}(Z^{(N)} = 3 | Z^{(0)} = 1)\geq 1-r$; details can be found in Appendix \ref{appx:convergenceQ}. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{{Zdependencies}.pdf} \caption{Conditional dependencies of the processes $(x^{(t)},\tilde{x}^{(t)})$ and $(Z^{(t)})$.} \label{fig:Zdependencies} \end{figure} We now relate the auxiliary chain to $(x^{(t)},\tilde{x}^{(t)})$ using a strategy inspired by \citet{jacob2014wang}. Consider the variable $B^{(t)} \in \{1,2,3\}$ defined as $1$ if $(x^{(t)},\tilde{x}^{(t)}) \in \bar{\mathbb{X}}_\mathrm{U}$, $2$ if $(x^{(t)},\tilde{x}^{(t)}) \in \bar{\mathbb{X}}_\mathrm{F}$ and $3$ if $x^{(t)} = \tilde{x}^{(t)}$. The key idea is to construct the auxiliary chain $(Z^{(t)})$ on $\{1,2,3\}$, in such a way that it is (marginally) a Markov chain with transition matrix $Q$ in \eqref{eq:transition3states}, and also such that $Z^{(t)}\leq B^{(t)}$ for all $t$ almost surely; this is possible thanks to Proposition \ref{prop:transitionfavorable}. Thus the event $\{Z^{(t)} = 3\}$ will imply $\{B^{(t)} = 3\} = \{x^{(t)}=\tilde{x}^{(t)}\}$, and we can translate the hitting time of $(Z^{(t)})$ to its absorbing state into a statement about the meeting time of $(x^{(t)},\tilde{x}^{(t)})$. An explicit construction of $(Z^{(t)})$ is described in Appendix \ref{appx:constructionauxprocess}; Figure \ref{fig:Zdependencies} represents the dependency structure where $Z^{(t+1)}$ is constructed given $Z^{(t)}$, but also conditional upon $(x^{(t)},\tilde{x}^{(t)})$ and $(x^{(t+1)},\tilde{x}^{(t+1)})$ to ensure that the inequality $Z^{(t+1)}\leq B^{(t+1)}$ holds almost surely. The convergence of $(Z^{(t)})$ to its absorbing state translates into an upper bound on the mixing time of $(x^{(t)})$ of the order of $N\log N$ iterations, which is our main result. 
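The behavior of $(Z^{(t)})$ is also easy to check numerically: powering the matrix $Q$ of \eqref{eq:transition3states} (with placeholder values for the $\xi$ constants, which are ours) shows that $\mathbb{P}(Z^{(N)}=3\,|\,Z^{(0)}=1)$ stabilizes to a constant as $N$ grows, consistent with a bound of the form $1-r$ with $r$ independent of $N$.
\begin{verbatim}
import numpy as np

def absorption_prob(N, xi_UF=1.0, xi_FU=1.0, xi_FD=1.0):
    """P(Z^(N) = 3 | Z^(0) = 1) for the three-state chain Q;
    the xi constants are illustrative placeholders."""
    a, b, c = xi_UF / N, xi_FU / N, xi_FD / N
    Q = np.array([[1 - a, a,         0.0],
                  [b,     1 - b - c, c  ],
                  [0.0,   0.0,       1.0]])
    return np.linalg.matrix_power(Q, N)[0, 2]

for N in [100, 1000, 10000]:
    print(N, absorption_prob(N))
\end{verbatim}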
\begin{theorem} \label{thm:convergencerate} Under Assumptions \ref{def:reasonableodds} and \ref{def:ones} such that $\zeta < \xi$, there exist $\kappa>0$, $\nu>0$, $N_0\in\mathbb{N}$ independent of $N$ such that, for any $\epsilon\in(0,1)$, and for all $N\geq N_0$, with probability at least $1-\exp(-\nu N)$, we have \[\|x^{(t)} - \ensuremath{\text{CB}}(p,I)\|_{\mathrm{TV}}\leq \epsilon \quad \text{ for all }\quad t\geq \kappa N \log (N/\epsilon).\] \end{theorem} The proof of Theorem \ref{thm:convergencerate} is given in Appendix \ref{appx:upperboundmixing}. \section{Discussion} Using the strategy of \citet{biswas2019estimating}, we assess the convergence rate of the chain in the regime where $I$ is sub-linear in $N$. Figure \ref{fig:mixingtimeNlogN_smallI} shows the estimated upper bounds on the mixing time obtained in the case where $I$ is fixed to $10$ while $N$ grows, and where $(p_n)$ are independent Uniform(0,1) (generated once for each value of $N$). The figure might suggest that the mixing time grows at a slower rate than $N$ in this setting, and thus that MCMC is competitive relative to exact sampling. Understanding the small $I$ regime remains an open problem. \begin{figure}[t] \begin{centering} \subfloat[ Meeting times]{\begin{centering} \includegraphics[width=0.45\textwidth]{{scaledmeetings_Ifixed}.pdf} \par\end{centering} } \hspace*{1cm} \subfloat[Estimated $1\%$-mixing times divided by $N$]{\begin{centering} \includegraphics[width=0.45\textwidth]{{mixingtimes_Ifixed}.pdf} \par\end{centering} } \par\end{centering} \caption{ Meeting times (\emph{left}) and estimated upper bounds on the mixing time, divided by $N$ (\emph{right}), against $N$. The probabilities $p$ are independent Uniform(0,1) and $I=10$ for all $N$. \label{fig:mixingtimeNlogN_smallI} } \end{figure} Our approach relies on a partition of the state space and an auxiliary Markov chain defined on the subsets given by the partition. This technique bears a resemblance to partitioning the state space with more common drift and contraction conditions \citep{durmus2015quantitative,qin2019geometric}, but appears to be distinct. The present setting is similar to the question of sampling permutations via random swaps. For that problem, direct applications of the coupling argument result in upper bounds on the mixing time of the order of at least $N^2$. \citet{bormashenko2011coupling} devises an original variant of the path coupling strategy to obtain an upper bound in $N \log N$, which is the correct dependency on $N$; see \citet{berestycki2019cutoff} for recent developments leading to sharp constants. The proposed analysis captures the impact of the dimension $N$ faithfully. It does not, however, provide accurate constants or exact characterizations of how the mixing time depends on the distribution of the probabilities $(p_n)$ and the sum $I$. Yet our analysis already supports the use of MCMC over exact sampling strategies for conditional Bernoulli sampling, especially as part of encompassing MCMC algorithms such as that of \citet{yang2016computational} for Bayesian variable selection. \subsubsection*{Acknowledgments} This work was funded by CY Initiative of Excellence (grant ``Investissements d'Avenir'' ANR-16-IDEX-0008). Pierre E. Jacob gratefully acknowledges support by the National Science Foundation through grants DMS-1712872 and DMS-1844695. \small \bibliographystyle{plainnat}
\section{Introduction} \label{sec:Intro} The transverse momentum spectra of gauge bosons are well-trodden territory. They are important for measurements of, \emph{e.g.}~ Higgs production, as well as for probing the dynamics of QCD in Drell-Yan (DY) processes. There are calculations available at NNLL+NNLO accuracy using a variety of resummation schemes, both in the framework of soft collinear effective theory (SCET) \cite{Bauer:2000ew,Bauer:2000yr,Bauer:2001ct,Bauer:2001yt,Bauer:2002nz}, \emph{e.g.}~ \cite{Gao:2005iu,Idilbi:2005er,Mantry:2010mk,GarciaEchevarria:2011rb}, and in the Collins-Soper-Sterman (CSS) \cite{Collins:1984kg} formalism, \emph{e.g.}~ \cite{deFlorian:2000pr,deFlorian:2001zd,Catani:2010pd,Bozzi:2010xn,Becher:2010tm}, and even at N$^3$LL+NNLO \cite{Bizon:2017rah} (see also calculations in, \emph{e.g.}~\cite{Li:2016axz,Li:2016ctv,Vladimirov:2016dll,Gehrmann:2014yya,Luebbert:2016itl,Echevarria:2015byo,Echevarria:2016scs}). Joint resummation of threshold and transverse-momentum logs is even possible to NNLL and beyond (\emph{e.g.}~ \cite{Li:2016axz,Lustermans:2016nvk,Marzani:2016smx,Muselli:2017bad}). Why, then, do we wish to visit this subject anew? This has mostly to do with the peculiar structure of the factorized cross section, which makes the resummation of large logarithms an interesting problem. The cross section can be factorized in terms of a hard function, which lives at a virtuality $Q$, the invariant mass of the gauge boson, and soft and beam functions (or TMDPDFs), which describe the IR physics and live at the virtuality $q_T \ll Q$, the transverse momentum of the gauge boson. The soft and collinear emissions are the ones providing the recoil for the transverse momentum of the gauge boson. This automatically means that these functions are convolved with each other in transverse momentum space so that the $q_T$ of the gauge boson is a sum of the $q_T$ contributions from each emission: \begin{align} \label{txsec} \frac{ d \sigma} {d^2 q_T dy} &= \sigma_0 C_t^2(M_t^2,\mu) H (Q^2; \mu ) \int d^2 \vec{q}_{Ts} d^2 \vec{q}_{T1} d^2\vec{q}_{T2} \delta^2\bigl(\vec{q_T} - (\vec{q_{Ts}} +\vec{q}_{T1}+\vec{q}_{T2}) \bigr) \\ &\quad\times S( \vec{q}_{Ts} ;\mu,\nu) f_1^{\perp}\Bigl(\vec{q}_{T1}, x_1, p^-;\mu ,\nu\Bigr)f_2^{\perp}\Bigl(\vec{q}_{T2},x_2, p^+;\mu ,\nu\Bigr) \,, \nn \end{align} Here $s=(P_1+P_2)^2$ is the squared center-of-mass energy of the colliding protons with momenta $P_{1,2}$, and $Q^2$ and $y$ are the gauge boson invariant mass and rapidity. For the case of the Higgs, we have a Wilson coefficient $C_t$ after integrating out the top quark. (For DY we just set $C_t=1$ in \eq{txsec}, and consider explicitly only the $\gamma^*$ channel in this paper.) Here, $S$ is the soft function accounting for the contribution of soft radiation to $\vec{q}_T$, $f_{1,2}^\perp$ are the TMDPDFs (or beam functions) accounting for the contribution of radiation collinear to the incoming protons to $\vec{q}_T$, and they depend on kinematic variables $p^\mp = Qe^{\mp y} = x_{1,2}\sqrt{s}$. The peculiarity of the factorization is that even though the TMDPDFs form a part of the IR physics, they depend on the hard scale $Q$ (\emph{cf.} \cite{Chiu:2007dg}), which, as we shall see later, will play an important role in our resummation formalism. The hard function $H$ encodes virtual corrections to the hard scattering process, computed by a matching calculation from QCD to SCET. 
The scale $\mu$ is the renormalization scale normally encountered in the $\overline{\text{MS}}$ scheme and plays the role of separating hard modes (integrated out of SCET) from the soft and collinear modes, by their virtuality. The additional rapidity renormalization scale $\nu$, introduced in \cite{Chiu:2011qc,Chiu:2012ir}, arises from the need to separate soft and collinear modes, which share the same virtuality $\mu$, in their rapidity (\fig{modes}). The cross section itself is independent of these arbitrary virtuality and rapidity boundaries, but the renormalization group (RG) evolution of factorized functions from their natural scales, where they have no large logs, to arbitrary $\mu,\nu$ can be used to resum the large logs in the cross section. \begin{figure} \centerline{\scalebox{.55}{\includegraphics{modes.pdf}} \scalebox{.6}{\includegraphics{resum.pdf}}} \vskip-0.2cm \caption[1]{\emph{Left:} EFT modes and their scalings in light-cone momentum $k^\pm$ space. For the TMD cross sections we consider, the small parameter can be taken to be $\lambda \sim q_T/Q$ or $\sim 1/(Qb_0)$ in impact parameter $b$ space, where $b_0 = be^{\gamma_E}/2$. \emph{Right:} RG and rapidity RG evolution. $\mu$ runs between the hard and soft hyperbolas of virtuality shown in the left-hand figure, while $\nu$ runs between the soft and collinear modes which are separated only by rapidity. The evolution is path independent; one convenient path is shown here.} \label{fig:modes} \end{figure} \subsection{RG and RRG Evolution in Impact Parameter vs. Momentum Space} These functions obey the renormalization group (RG) equations in $\mu$ \begin{eqnarray} \mu \frac{d }{d\mu} F_i = \gamma_{\mu}^i F_i \end{eqnarray} where $F_i$ can be $C_t^2(M_t^2, \mu)$, $H(Q^2;\mu)$, $S(\vec{q}_{Ts};\mu,\nu)$ or $f_i^{\perp}(\vec{q}_{Ti}, Q, x_i;\mu ,\nu)$. The RG equations in $\nu$ have a more complicated convolution structure: \begin{eqnarray} \nu \frac{ d}{d\nu} G_i (\vec{q}_{T}; \nu) = \gamma_{\nu}^i( \vec{q}_{T}) \otimes G_i (\vec{ q_{T}} ;\nu) \end{eqnarray} where $G_i$ can be soft functions or TMDPDFs. The symbol $\otimes$ here indicates a convolution, defined as \begin{eqnarray} \gamma_{\nu} (\vec{q_T}) \otimes G( \vec{q_T})= \int \frac{d^2p_T}{(2 \pi)^2} \gamma_{\nu} (\vec{q_T} - \vec{p_T} ) G( \vec{p_T})\,. \end{eqnarray} Apart from the complicated structure of the RG equations, the anomalous dimensions themselves are not simple functions but are usually plus distributions \cite{Chiu:2012ir}, which makes it even harder to solve these equations directly in momentum space. A typical strategy to get around this is to Fourier transform to position (\emph{i.e.}~ impact parameter) space, defining \begin{eqnarray} \widehat G(\vec{b}) \equiv \int \frac{d^2 q_T}{(2\pi)^2} e^{i \vec{b} \cdot \vec{q_T}}G(\vec{q_T}) \,,\quad \widehat G(\vec{b}) \equiv \frac{1}{2\pi}\widetilde G(b)\,,\quad b\equiv \abs{\smash{\vec{b}}}\,, \end{eqnarray} the latter definitions accounting for the fact that all the distributions we encounter will have azimuthal symmetry in $\vec{q}_T$ or $\vec{b}$. This then gives ordinary multiplicative differential equations (instead of convolutions), and a closed form solution to the RG equations can be easily obtained. 
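As a check on these conventions, for azimuthally symmetric functions the transform above reduces to a one-dimensional Hankel transform, $\widetilde G(b) = \int_0^\infty dq_T\, q_T\, J_0(b\, q_T)\, G(q_T)$, which can be verified numerically against a test function with a known closed-form transform. A minimal sketch (the Gaussian test function and all numbers are purely illustrative):
\begin{verbatim}
import numpy as np
from scipy import integrate, special

# Sketch: for azimuthally symmetric G(q_T), the 2D Fourier transform
# reduces to the Hankel transform Gtilde(b) = int_0^inf dq q J0(b q) G(q).
# The test function G(q) = exp(-q^2) has the closed form
# Gtilde(b) = exp(-b^2/4)/2.
G = lambda q: np.exp(-q**2)

for b in [0.5, 1.0, 2.0, 4.0]:
    num, _ = integrate.quad(lambda q: q * special.j0(b*q) * G(q), 0, np.inf)
    print(b, num, 0.5*np.exp(-b**2/4))   # numerical vs exact: they agree
\end{verbatim}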
Moreover the cross section now takes the simpler structure, \begin{align} \label{xsec} \frac{ d \sigma} {dq_T^2 dy} &= \sigma_0 \pi (2\pi)^2 C_t^2(M_t^2, \mu) H (Q^2,\mu ) \int db\, b J_0( b q_T) \\ &\qquad \times \widetilde S( b ,\mu,\nu) \widetilde f_1^{\perp}( b, x_1,p^-; \mu ,\nu) \widetilde f_2^{\perp}( b, x_2, p^+; \mu ,\nu) \,,\nn \end{align} where $J_0$ is the $n=0$ Bessel function of the first kind. Note we have changed variables from $\vect{q}_T$ in \eq{txsec} to $q_T^2$ in \eq{xsec}. The $b$-space soft and beam functions $\widetilde S$ and $\widetilde f_i^\perp$ now obey multiplicative rapidity RGEs in $\nu$, \begin{equation} \nu\frac{d}{d\nu}\widetilde G_i = \gamma_\nu^i \widetilde G_i\,, \end{equation} whose anomalous dimensions and solutions we shall give below. Only the $b$ integration in \eq{xsec} stands in the way of having a simple product factorization of the momentum-space cross section. Finding a way to carry it out will be one of the main focuses of this paper. For perturbative values of $q_T$, the TMDPDFs can be matched onto the PDFs. The $b$-space cross section, defined as the following product of factors in the integrand of \eq{xsec}: \begin{equation} \label{bxsec} \widetilde\sigma(b,x_1,x_2;\mu,\nu) = H(Q^2,\mu) \widetilde S(b;\mu,\nu) \widetilde f_1^\perp(b,x_1,p^-;\mu,\nu) \widetilde f_2^\perp(b,x_2,p^+;\mu,\nu)\,, \end{equation} computed in fixed-order QCD perturbation theory, then contains logs of $Q b_0$ where $b_0 = b e^{\gamma_E}/2$ (see \eq{sing}). Schematically, the expansion takes the form \begin{equation} \label{logexpansion} (2\pi)^3\widetilde\sigma(b) = f_i(x_1)f_{\bar i}(x_2) \exp\bigg[ \sum_{n=0}^\infty\sum_{m=0}^{n+1}\Bigl(\frac{\alpha_s(\mu)}{4\pi}\Bigr)^n G_{nm}\ln^m Qb_0\biggr] \,, \end{equation} where $i,\bar i= g$ for Higgs production and $i=q$ for DY, and where we ignore effects of DGLAP evolution for the moment (we include them in \eq{sing} and in all our analysis below). This takes the typical form of a series of Sudakov logs. The number of coefficients $G_{nm}$ that need to be known is determined by the desired order of resummed accuracy. Using the heuristic power counting $\ln Qb_0\sim 1/\alpha_s$ in the region of large logs needing resummation, the leading log (LL) series includes the $\mathcal{O}(1/\alpha_s)$ terms $m=n+1$, the next-to-leading log (NLL) series the $\mathcal{O}(1)$ terms up to $m=n$, at NNLL the $\mathcal{O}(\alpha_s)$ terms up to $m=n-1$, etc. When we later talk about resummation in momentum space, we will define our accuracy by the corresponding terms in the $b$-space integrand that we have successfully inverse Fourier transformed (cf. \cite{Almeida:2014uva}). For a TMD cross section, the logs in the full QCD expansion \eq{logexpansion} are factored into logs from the hard and soft functions and TMDPDFs of ratios of the arbitrary virtuality and rapidity factorization scales $\mu,\nu$ and the physical virtuality and rapidity scales defining each mode. Each function contains logs: \begin{equation} C_t^2 = C_t^2\Bigl(\ln \frac{\mu^2}{M_t^2}\Bigr)\,,\ H = H\Bigl(\ln \frac{\mu^2}{Q^2}\Bigr)\,,\ \widetilde S = \widetilde S\Bigl(\ln\mu b_0,\ln\frac{\mu}{\nu}\Bigr)\,,\ \widetilde f^\perp = \widetilde f^\perp \Bigl(\ln \mu b_0,\ln\frac{\nu}{p^\pm}\Bigr)\,. \end{equation} These logs reflect the natural virtuality and rapidity scales where each function ``lives'' and where logs in each are minimized. 
For example, at one loop, the logs in the QCD result \eq{sing} split up into individual hard, soft, and collinear logs from \eqss{Cn}{Sn}{Iijcoefficients}, \begin{align} &-\mathbb{Z}_H \frac{\Gamma_0}{2} \ln^2 Qb_0 - \gamma_H^0 \ln Qb_0- \gamma_{C_t^2}^0 \ln M_tb_0 = - \mathbb{Z}_H \frac{\Gamma_0}{2} \ln^2\frac{\mu}{Q} + \gamma_H^0\ln\frac{\mu}{Q} + \gamma_{C_t^2}^0\ln\frac{\mu}{M_t} \nn\\ &\qquad\qquad\qquad + \mathbb{Z}_S\frac{\Gamma_0}{2}\Bigl(\ln^2 \mu b_0 + 2\ln\mu b_0 \ln\frac{\nu}{\mu}\Bigr) + \mathbb{Z}_f \Gamma_0 \ln\mu b_0\ln\frac{\nu^2}{Q^2} + 2\gamma_f^0 \ln\mu b_0\,, \end{align} where the individual anomalous dimension coefficients satisfy the constraints $\mathbb{Z}_H = -\mathbb{Z}_S = 2\mathbb{Z}_f$ and $\gamma_H^0+\gamma_{C_t^2}^0 + 2\gamma_f^0 = 0$. (For DY, $\gamma_{C_t^2}=0$.) RG evolution of each factor---hard, soft, and collinear---in both virtuality and rapidity space from scales where the logs are minimized, namely, $\mu_H\sim Q, \mu_T \sim M_t$ and, naively, $\mu_{S,f}\sim 1/b_0$ for the virtuality scales, while $\nu_S\sim \mu_S$ and $\nu_f\sim Q$ for the rapidity scales, to the common scales $\mu,\nu$ achieves resummation of the large logs, to an order of accuracy determined by the order to which the anomalous dimensions and boundary conditions for each function are known and included. This will be reviewed in further detail in \sec{resumxsec}. This, at least, is the procedure one would follow to resum logs in impact parameter space. In SCET language, it reproduces the result of the standard CSS resummation, obtained through traditional \cite{Collins:1984kg} or modern \cite{Collins:2011zzd} techniques, as well as through recent EFT treatments like \cite{Neill:2015roa}. Then the resummed $b$-space cross section is Fourier transformed back to momentum space via \eq{xsec}. The main issue with this procedure is that the strong coupling $\alpha_s(\mu)$ in the soft function and TMDPDFs is then evaluated at a $b$-dependent scale $\mu_{S,f}\sim 1/b_0$, which enters the nonperturbative regime at sufficiently large $b$ in the integral in \eq{xsec}. So the integrand must be cut off before reaching the Landau pole in $\alpha_s$. There are quite a few procedures in the literature to implement precisely such a cutoff by introducing models for nonperturbative physics, see \emph{e.g.}~ \cite{Qiu:2000hf,Sun:2013dya,DAlesio:2014mrz,Collins:2014jpa,Scimemi:2017etj}. Motivated by these observations, in this paper we explore the following main questions: \begin{itemize} \item Even though the natural scale for minimizing the logarithms in the soft function and TMDPDFs is a function of the impact parameter $b$, can we set scales directly in momentum space, after performing the $b$ integration, without an arbitrary cutoff of the $b$ integral? \item If that is possible, can we obtain a closed-form expression for the cross section that is accurate to any resummation order and ultimately saves computation time? \end{itemize} In \sec{resumxsec} we shall propose a way to answer the first question, and in \sec{analytic} we shall develop a method to answer the second. To aid the reader in quickly grasping the main points of our paper, we offer a more detailed-than-usual summary of these sections here, which is somewhat self-contained and can be used as a substitute for the rest of the paper upon a first reading. Readers interested in the details of our arguments can then delve into the main body of the paper. 
We emphasize that, except for a brief discussion near the end, we address only the perturbative computation of the cross section in this paper. \subsection{A hybrid set of scale choices for convergence of the $b$ integral} Regarding the first question, the issue with leaving the $\mu,\nu$ scales for the soft function and TMDPDFs unfixed before integrating over $b$ in \eq{xsec} is that the integral, while avoiding the Landau pole from long-distance/small-energy scales, is then plagued by a spurious divergence from \emph{large}-energy/\emph{short}-distance emissions \cite{Chiu:2012ir}, \emph{e.g.}~ at NLL accuracy: \begin{align} \label{singlelog} \frac{d \sigma}{d q_T^2 dy} &=\sigma_0 \pi (2\pi)^2 C_t^2(M_t^2, \mu_T)U^{\rm NLL}(\mu_H, \mu_T, \mu_L) H(Q^2;\mu_H) \int db\, b J_0(b q_T) \widetilde S(b;\mu_L,\nu_L) \\ &\qquad \times \widetilde f_1^\perp(b,x_1,p^-;\mu_L,\nu_H) \widetilde f_2^\perp(b,x_2,p^+;\mu_L,\nu_H) \exp\left[ -\Gamma_0 \frac{\alpha(\mu_L)}{\pi}\ln\left(\frac{\nu_H}{\nu_L}\right)\ln(\mu_L b_0) \right]\,, \nn \end{align} where $H,\widetilde S,\widetilde f^\perp$ are truncated to tree level at NLL, while $C_t$ is kept at its lowest order, $\mathcal{O}(\alpha_s)$. The hard scale $\mu_H$ is usually set to $i Q$ to implement what is called $\pi^2$ resummation to improve perturbative convergence \cite{Ahrens:2008qu,Ahrens:2008nc}. This integral, as we will see below in \eq{softNLL}, is still divergent. At this point, $\mu_L$ and $\nu_{H,L}$ are $b$-independent and cannot help with regulating the integral. What we need in \eq{singlelog} is a factor that damps away the integrand for both large \emph{and} small $b$. In this paper, we adopt the approach that there are already terms in the physical cross section itself that can play the role of this damping factor and that we should use them. Namely, at NLL$'$ order and beyond, the soft function evaluated at the low scales $\mu_L,\nu_L$ in the integrand of \eq{singlelog} contains logs of $\mu_L b_0$ that we can use to regulate the integral, see \eq{Sn}. Since $\widetilde S(b,\mu_L,\nu_L)$ no longer contains large logs (if $\mu_L,\nu_L$ are chosen near the natural soft scales), it is typically truncated to fixed order (see \tab{NkLL}). However, we know that the logs themselves still exponentiate, being predicted by the solution \eq{Sevolved2} to the RG and $\nu$RG equations. If we could keep the exponentiated one-loop double log in $\widetilde S$ in \eq{Sn} in the integrand of \eq{singlelog}, $\exp\bigl[\frac{\alpha_s(\mu_L)}{8\pi}\mathbb{Z}_S \Gamma_0 \ln^2\mu_L b_0\bigr]$, where $\mathbb{Z}_S = -4$, it would play precisely the role that we desire. Now, as we argue below, if we are going to keep this term exponentiated, we should also include a piece of the 2-loop rapidity evolution kernel $\sim \alpha_s^2 \ln^2\mu_L b_0 \ln(\nu_H/\nu_L)$ given by \eqs{UVdef}{gammanuexp} in the exponent, as it is of the same form and same power counting, so that the terms we wish to promote to the exponent of \eq{singlelog} at least at NLL order are: \begin{equation} \label{Sexp} \widetilde S_{\text{exp}} = \exp \biggl[ \frac{\alpha_s(\mu_L)}{4\pi}\mathbb{Z}_S \frac{\Gamma_0}{2} \ln^2 \mu_L b_0 + \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^2 \mathbb{Z}_S \Gamma_0 \beta_0 \ln^2\mu_L b_0 \ln \frac{\nu_H}{\nu_L}\biggr]\,. \end{equation} These are terms that would otherwise be truncated away at strict NLL accuracy. Since they are subleading, we are in fact free to choose to include them (and not other subleading terms that are formally of the same order). 
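To make the damping role of \eq{Sexp} concrete, the following minimal numerical sketch evaluates the factor as a function of $b$ (the inputs $\alpha_s(\mu_L)$, $\mu_L$, and $\nu_H/\nu_L$ are illustrative placeholders, with $\Gamma_0 = 4C_A$ for Higgs production and $\mathbb{Z}_S=-4$):
\begin{verbatim}
import numpy as np

# Sketch of the damping factor S_exp of eq. (Sexp), illustrative inputs only.
alpha_s, mu_L, log_nu = 0.2, 5.0, np.log(125.0/5.0)  # log_nu = ln(nu_H/nu_L)
CA, nf = 3.0, 5.0
Gamma0, beta0, ZS = 4.0*CA, 11.0*CA/3.0 - 2.0*nf/3.0, -4.0
a = alpha_s/(4*np.pi)

def S_exp(b):
    L = np.log(mu_L * b * np.exp(np.euler_gamma)/2.0)   # ln(mu_L b_0)
    return np.exp(ZS*Gamma0*(a*L**2/2.0 + a**2*beta0*L**2*log_nu))

for b in [0.01, 0.1, 2.0/np.exp(np.euler_gamma)/mu_L, 10.0, 100.0]:
    print(b, S_exp(b))
# S_exp = 1 at b_0 = 1/mu_L and is suppressed at both small and large b,
# providing exactly the two-sided damping described in the text.
\end{verbatim}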
While this is admittedly a bit \emph{ad hoc}, we take the view that it is no more arbitrary than any regulator or cutoff we might choose to introduce to \eq{singlelog}, and these are terms that actually exist in the expansion of the physical cross section. We can rephrase the choice to include the subleading terms \eq{Sexp} in \eq{singlelog} as part of our freedom to choose the precise scale $\nu_L$ in \eq{singlelog} (the variation of which anyway probes theoretical uncertainty due to missing subleading terms). Namely, if one were otherwise to choose $\nu_L\sim \mu_L$ in $\widetilde S$ in \eq{singlelog}, we propose then shifting that choice to: \begin{equation} \label{nuLshift} \nu_L\to \nu_L^* = \nu_L(\mu_L b_0)^{-1+p}\,,\qquad p = \frac{1}{2}\biggl[ 1 - \frac{\alpha_s(\mu_L)\beta_0}{2\pi}\ln\frac{\nu_H}{\nu_L}\biggr]\,, \end{equation} which we derive in \eqs{nuLstar}{eq:n}. This shifts the terms in the exponent of \eq{Sexp}, which would otherwise be truncated away, into the integrand of \eq{singlelog}, where they appear explicitly and can be used to regulate the $b$ integral. This particular choice of regulator factor in \eq{nuLshift} is motivated, furthermore, by the fact that it will allow us actually to evaluate the $b$ integral \eq{singlelog} (semi-)\emph{analytically}, as we show in \sec{analytic}. Maintaining a Gaussian form for the exponent in $\ln b$ inside the $b$ integral will be crucial to this strategy. Beyond NLL, we will choose to keep the same shifted scale choice \eq{nuLshift}, but to ensure that we do not introduce higher powers of logs of $\mu_L b_0$ than quadratic into the exponent of the integrand in \eq{singlelog}, we make one additional modification to how we treat the rapidity evolution kernel. Namely, in the all orders form of the rapidity evolution kernel: \begin{equation} \label{Vdef} V(\nu_L,\nu_H;\mu_L) = \exp\biggl[ \gamma_\nu^S(\mu_L)\ln\frac{\nu_H}{\nu_L}\biggr]\,, \end{equation} given in \eq{UVdef}, where the rapidity anomalous dimension takes the form \eq{gammanuexp}, \begin{align} \gamma_\nu^S(\mu_L) &= \quad \frac{\alpha_s(\mu_L)}{4\pi} \quad \biggl[\mathbb{Z}_S \Gamma_0 \ln \mu_L b_0 \biggr] \\ &\quad + \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^2 \biggl[ \mathbb{Z}_S \Gamma_0\beta_0 \ln^2 \mu_L b_0 +( \mathbb{Z}_S \Gamma_1 + 2\gamma_{RS}^0 \beta_0 )\ln \mu_L b_0 + \gamma_{RS}^1\biggr] \,, \nn \end{align} we divide the anomalous dimension into a purely ``conformal'' part containing only the diagonal pure cusp terms with a single log of $\mu_L b_0$ and the same for the non-cusp part $\gamma_{RS}$. We divide the rapidity evolution kernel \eq{Vdef} into corresponding ``conformal'' and ``non-conformal'' parts: \begin{equation} \label{Vdivision} V(\nu_L,\nu_H;\mu_L) = V_\Gamma(\nu_L,\nu_H;\mu_L) V_\beta(\nu_L,\nu_H;\mu_L)\,, \end{equation} where $V_\Gamma$ contains pure anomalous dimension coefficients, \begin{equation} \label{VGammaintro} V_\Gamma (\nu_L,\nu_H;\mu_L) = \exp\biggl\{ \ln\frac{\nu_H}{\nu_L} \sum_{n=0}^\infty \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^{n+1}(\mathbb{Z}_S \Gamma_n \ln\mu_L b_0 + \gamma_{RS}^n)\biggr\}\,, \end{equation} and $V_\beta$ contains all the terms with beta function coefficients, whose expansion is shown in \eq{Vbeta}. We will keep $V_\Gamma$ exponentiated as in \eq{VGammaintro}, and the shift $\nu_L\to \nu_L^*$ in \eq{nuLshift} will turn it into a Gaussian in $\ln\mu_L b_0$ and thus allow us to carry out the $b$ integral in \eq{singlelog} (updated beyond NLL). 
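As a quick one-loop cross-check of this mechanism (using only quantities already defined above), the shift \eq{nuLshift} acts on the rapidity logarithm as
\begin{equation*}
\ln\frac{\nu_H}{\nu_L^*} = \ln\frac{\nu_H}{\nu_L} + (1-p)\ln\mu_L b_0\,,\qquad 1-p = \frac{1}{2}\biggl[1 + \frac{\alpha_s(\mu_L)\beta_0}{2\pi}\ln\frac{\nu_H}{\nu_L}\biggr]\,,
\end{equation*}
so the one-loop cusp piece of the exponent of \eq{Vdef} becomes
\begin{align*}
\frac{\alpha_s(\mu_L)}{4\pi}\mathbb{Z}_S\Gamma_0 \ln\mu_L b_0 \ln\frac{\nu_H}{\nu_L^*}
&= \frac{\alpha_s(\mu_L)}{4\pi}\mathbb{Z}_S\Gamma_0 \ln\mu_L b_0 \ln\frac{\nu_H}{\nu_L} + \frac{\alpha_s(\mu_L)}{4\pi}\mathbb{Z}_S\frac{\Gamma_0}{2}\ln^2\mu_L b_0 \\
&\quad + \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^2 \mathbb{Z}_S\Gamma_0\beta_0 \ln^2\mu_L b_0\,\ln\frac{\nu_H}{\nu_L}\,,
\end{align*}
i.e.\ the unshifted single log plus precisely the two terms in the exponent of \eq{Sexp}, quadratic in $\ln\mu_L b_0$.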
However, to keep this Gaussian form of the exponent, we will then choose to truncate $V_\beta$ at fixed order. The logs of $\mu_L b_0$ in $V_\beta$ will give integrals in \eq{singlelog} that we can carry out by differentiating the basic result we obtain in \sec{analytic}. Admittedly, this expansion and truncation of $V_\beta$ is not part of any usual scheme for N$^k$LL resummation, but is our addition. In particular, $V_\beta$ still contains large logs of $\nu_H/\nu_L$ as seen in \eq{Vbeta}. This means that starting at NNLL order, we will not actually exponentiate \emph{all} the logs that appear at this accuracy, as usual log counting schemes in the exponent require. This is the price we choose to pay for the (semi-)analytic solution we obtain in \sec{analytic}, which requires a Gaussian exponent in $\ln b$ in the $b$ integrand. This is essentially an implementation of Laplace's method for evaluating the $b$ integral. As we argue in \ssec{NNLLbeyond}, our truncation of $V_\beta$ in fixed order is not as bad as failing to exponentiate large logs involving $\mu$ (which we do exponentiate) would be. There is never more than a single large log of $\nu_H/\nu_L$ appearing in the exponent of the rapidity evolution. Thus, the series of terms in the fixed-order expansion of the exponentiated $V_\beta$ is suppressed at every order by another power of $\alpha_s$.\footnote{In the $\mu$ evolution kernels, the exponents, \emph{e.g.}~ \eqs{KGammaexp}{etaexp}, themselves contain higher and higher powers of large logs of $\mu_H/\mu_L$, and truncating any part of it to fixed order would not be sensible. Truncating $V_\beta$ in \eq{Vbeta} to the same order as other corresponding genuinely fixed-order terms at N$^k$LL accuracy makes more sense. Loosely speaking, we maintain counting of logs in the exponent for most of the cross section, except for $V_\beta$, in which we revert to older log counting in the fixed-order expansion (N$^k$LL$_\text{E}$ vs. N$^k$LL$_\text{F}$ in \cite{Chiu:2007dg}).} Our expansion of $V_\beta$ should be viewed as an asymptotic expansion; indeed, we find that truncating it at a finite order yields a good numerical approximation to the resummed cross section (within the theoretical uncertainties otherwise present in the resummed cross section at N$^k$LL accuracy) in the perturbative region.\footnote{We expect that the way in which it breaks down for small $q_T$ will yield clues to the behavior of the nonperturbative contributions to the cross section, which however are not the subject of this paper.} Note, furthermore, that in the conformal limit, $V_\beta =1$, and the exponentiated part $V_\Gamma$ of the rapidity evolution would be exact. We should point out that, through the shift \eq{nuLshift}, we do introduce $b$ dependence into our choice of scale $\nu_L$, so we would not call our resummation scheme \emph{entirely} a momentum-space scheme. (See \cite{Ebert:2016gcn,Monni:2016ktx} for such proposed methods.) We do, however, leave the $\mu_L$ scale unfixed until \emph{after} the $b$ integration, and this still allows us to avoid integrating over a Landau pole in $\alpha_s(\mu_L)$ in \eq{xsec}. In \sec{resumxsec} we also use our freedom to determine exactly where $\mu_L\sim q_T$ should be in order to improve the convergence of the resummed perturbative series. We argue it should be set at a value such that other unresummed fixed-order logs make a minimal contribution to the final momentum-space cross section. 
For small values of $q_T$, this scale turns out to be shifted to slightly higher values $\mu_L\sim q_T + \Delta q_T$. Without making such a shift, we find instabilities in the evaluation of the cross section. This is similar in spirit to the shift $\mu_L\to q_T+q_T^*$ proposed in \cite{Becher:2011xn,Becher:2012yn}, though not identical in motivation, implementation, or interpretation in terms of nonperturbative screening. \subsection{A semi-analytic result for the $b$ integral with full analytic dependence on momentum-space parameters} \label{ssec:intro3} If we stopped there, our choice of scale \eq{nuLshift} might be no more than just another in a long series of proposed schemes to avoid the Landau pole in \eq{xsec}, and, in addition, our division of the rapidity evolution kernel \eq{Vdef} into an exponentiated and a fixed-order truncated part in \eq{Vdivision} would be quite unnecessary and inexplicable. However, what we find in \sec{analytic} is that all of these scheme choices together yield a form of the $b$-space integrand \eq{bxsec} that is Gaussian in $\ln b$ so that we can integrate it analytically into a fairly simple form, modulo a numerical approximation for the pure Bessel function in \eq{xsec}. The dependence on all physical parameters and scales such as $q_T,\mu_{H,L},\nu_{H,L}$ is obtained analytically. We now briefly summarize our procedure and results. With the division \eq{Vdivision} of the rapidity evolution kernel and the scale choice $\nu_L^*$ in \eq{nuLshift}, the momentum-space cross section \eq{xsec} can be written in the form, given in \eq{resummedIb}, \begin{equation} \label{qTcs} \frac{d\sigma}{dq_T^2 dy} = \frac{\sigma_0}{2} C_t^2(M_t^2, \mu_T) H(Q^2,\mu_H) U(\mu_L,\mu_H, \mu_T) I_b(q_T,Q;\mu_L,\nu_L^*,\nu_H)\,, \end{equation} where we isolated the $b$ integral, \begin{equation} \label{Ibintro} I_b(q_T,Q;\mu_L,\nu_L^*,\nu_H) \equiv \int_0^\infty db\,b J_0(bq_T) \widetilde F(b,x_1,x_2,Q;\mu_L,\nu_L^*,\nu_H) V_\Gamma(\nu_L^*,\nu_H;\mu_L)\,, \end{equation} where $\widetilde F$ contains the fixed-order terms, including powers of logs of $\mu_L b_0$, contained in the soft function, TMDPDFs, and the part $V_\beta$ in \eq{Vdivision} of the rapidity evolution kernel that we choose to truncate at fixed order. As we will show in \sec{analytic}, the exponentiated part of the rapidity evolution kernel $V_\Gamma$ in \eq{VGamma}, with the scale choice $\nu_L^*$ in \eq{nuLshift}, can be written in the form of a pure Gaussian in $\ln b$, \begin{equation} \label{VGauss} V_\Gamma = C e^{-A \ln^2(\mu_L b_0 \chi)}\,, \end{equation} where $C,A,\chi$ are functions of the scales $\mu_L,\nu_L,\nu_H$ and the rapidity anomalous dimension, given explicitly in \eq{ACUeta}. In particular $A\sim \Gamma[\alpha_s(\mu_L)]$. If we could figure out how to integrate this Gaussian against the Bessel function in \eq{Ibintro}, we would be done. Now, the terms in $\widetilde F$ in \eq{Ibintro} with nonzero powers of $\ln \mu_L b_0$ can be obtained from the basic result by differentiation, as we will derive in \ssec{fixedorderlimit}, so we really only need to figure out how to evaluate the basic integral, \begin{equation} \label{basicintegral} I_b^0= \int_0^\infty db\, b J_0(b q_T)\, e^{-A\ln^2(\Omega b)}\,, \end{equation} where $\Omega \equiv \mu_L e^{\gamma_E}\chi/2$. Now, our mathematical achievements in this paper do not reach so far as to evaluate \eq{basicintegral} analytically in its precise form. 
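Although \eq{basicintegral} resists exact analytic evaluation, it is straightforward to evaluate by brute-force quadrature, which provides a numerical reference against which closed-form results can be validated. A minimal sketch (the values of $A$ and $\Omega$ are illustrative placeholders; in the physical cross section they are fixed by \eq{ACUeta}):
\begin{verbatim}
import numpy as np
from scipy import integrate, special

# Brute-force reference for eq. (basicintegral):
#   I_b^0 = int_0^inf db b J0(b qT) exp(-A ln^2(Omega b)).
def Ib0_quad(qT, A, Omega, nzeros=200):
    f = lambda b: b * special.j0(b*qT) * np.exp(-A*np.log(Omega*b)**2)
    # integrate between consecutive zeros of J0 to tame the oscillations;
    # the tail beyond the last zero is damped by the Gaussian in ln b and
    # is neglected here (a sketch, not production code).
    pts = np.concatenate(([0.0], special.jn_zeros(0, nzeros)/qT))
    return sum(integrate.quad(f, lo, hi)[0]
               for lo, hi in zip(pts[:-1], pts[1:]))

print(Ib0_quad(qT=5.0, A=0.15, Omega=3.0))   # illustrative inputs
\end{verbatim}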
We will, however, develop a procedure to evaluate it in a closed form, with analytic dependence on $q_T,A,\Omega$ (and thus all scales and anomalous dimensions), to arbitrary numerical accuracy determined by the goodness of an approximation we use for the Bessel function. We find a basis in which to expand the pure Bessel function, in which just a few terms are sufficient to reach a precision better than needed for NNLL accuracy in the resummed cross section, and which can be systematically improved as needed. The details of this derivation are in \sec{analytic}, but we summarize the key steps here. The first step is to use a Mellin-Barnes representation for the Bessel function, \begin{equation} \label{Jzero} J_0(z) = \frac{1}{2 \pi i}\int_{c-i \infty}^{c+i \infty} dt \frac{\Gamma[-t]}{\Gamma[1+t]} \left(\frac{1}{2} z \right)^{2t} \,, \end{equation} where the contour lies to the left of the poles of the gamma function $\Gamma(-t)$, so $c<0$. The choice $c=-1$ turns out to be well behaved, and useful as it is closely related to the fixed-order limit of \eq{Ibintro} (see \ssec{largeqT}). This trades the $b$ integral in \eq{basicintegral} for the $t$ integral, and we obtain \begin{equation} \label{Ib0f} I_b^0 = -\frac{2}{\pi q_T^2}\frac{e^{-AL^2}}{\sqrt{\pi A}}\int_{- \infty}^{\infty} dx\, \Gamma(-c-ix)^2 \sin[\pi (c+ix)] e^{-\frac{1}{A}[x - i(c-t_0)]^2}\,, \end{equation} where we parametrized the contour in \eq{Jzero} as $t=c+ix$, with $t_0 = -1+AL$ and $L = \ln (2\Omega/q_T)$. We also used the reflection formula $\Gamma(-t)\Gamma(1+t) = -\pi\csc(\pi t)$. It may appear that we are no farther along than when we started with \eq{basicintegral}---we still have to do the $x$ integral. However, we now observe that thanks to the form of the Gaussian with a width $\sim \sqrt{A}$, which vanishes in the limit $\alpha_s\to 0$, we only need to know the rest of the integrand, in particular \begin{equation} \label{f} f(x)\equiv \Gamma(-c-ix)^2\,, \end{equation} in a fairly small region of $x$. In fact we shall not need it out to more than $\abs{x}\sim 1.5$ for any of our applications. Thus if we can find a good basis in which to expand $f$ where every term gives an analytically evaluable integral in \eq{Ib0f}, we shall be in good shape. Now, this would not have been a good strategy in \eq{Ibintro} for the Bessel function itself, as it is highly oscillatory out to fairly large $b$, and the Gaussian does not damp the integrand away quickly---its width only grows as $\alpha_s$ (i.e. $A$) goes to zero. However, inside \eq{Ib0f}, we find an expansion of $f$ (\eq{f}) in terms of Hermite polynomials $H_n$ to work very well: \begin{equation} \label{seriesexpansion} \Gamma(1-ix)^2 = e^{-a_0 x^2} \sum_{n=0}^\infty c_{2n} H_{2n}(\alpha x) + \frac{i \gamma_E}{\beta} e^{-b_0 x^2} \sum_{n=0}^\infty c_{2n+1} H_{2n+1}(\beta x) \,, \end{equation} where we now pick $c=-1$ and factor out Gaussians with widths set by $a_0,b_0$ which closely (but not exactly) resemble the real and imaginary parts of $\Gamma(1-ix)^2$ itself, near $x=0$. Their departures from an exact Gaussian are accounted for by the remaining series of Hermite polynomials. It would be natural to choose the scaling factors $\alpha,\beta$ for the Hermite polynomials to be $\alpha^2 = a_0$ and $\beta^2 = b_0$, but instead we leave them free, to be determined empirically to optimize fast convergence of the series. 
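A minimal sketch of such a fit, with arbitrary illustrative widths and scalings ($a_0=b_0=2$, $\alpha=\beta=1$, \emph{not} the optimized values), showing that a handful of Gaussian-weighted Hermite polynomials already reproduces $\Gamma(1-ix)^2$ well on the region $\abs{x}\lesssim 1.5$ that matters:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, eval_hermite

# Sketch: least-squares fit of Gamma(1-ix)^2 on |x| <= 1.5 in the
# Gaussian-weighted Hermite basis of eq. (seriesexpansion).
# Widths/scalings below are illustrative, not the optimized choices.
a0 = b0 = 2.0
alpha = beta = 1.0
gE = np.euler_gamma

x = np.linspace(-1.5, 1.5, 601)
f = gamma(1 - 1j*x)**2                  # complex arguments are supported

# even Hermites for the (even) real part, odd for the (odd) imaginary part
B_re = np.stack([np.exp(-a0*x**2)*eval_hermite(2*n, alpha*x)
                 for n in range(4)], axis=1)
B_im = np.stack([(gE/beta)*np.exp(-b0*x**2)*eval_hermite(2*n+1, beta*x)
                 for n in range(4)], axis=1)

c_even, *_ = np.linalg.lstsq(B_re, f.real, rcond=None)
c_odd,  *_ = np.linalg.lstsq(B_im, f.imag, rcond=None)

err = np.abs((B_re @ c_even - f.real) + 1j*(B_im @ c_odd - f.imag))
print(err.max())    # small already with four terms in each series
\end{verbatim}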
We find we can reach numerical accuracy sufficient for NNLL accuracy in the final cross section with just a few (3 or 4) terms in each series, real and imaginary. The coefficients $c_n$ in \eq{seriesexpansion} still have to be determined by the numerical integrals \eq{HermiteCoefficients}, which unfortunately prevents us from having a fully analytic result for the momentum-space cross section. However, the series \eq{seriesexpansion} with these numerical coefficients depends only on properties of the pure mathematical function $\Gamma(1-ix)^2$ itself---not on any physical parameters. The dependence on these we keep analytically. All that is left is to evaluate analytically the integral of each Hermite polynomial against the Gaussian in \eq{Ib0f}, leading to the result we derive in \eq{IbHermite}, \begin{equation} \label{Ibseriesintro} I_b^0 = \frac{2}{\pi q_T^2} \sum_{n=0}^{\infty} \Imag \biggl\{ c_{2n} \mathcal{H}_{2n}(\alpha,a_0) + \frac{i\gamma_E}{\beta} c_{2n+1} \mathcal{H}_{2n+1}(\beta,b_0) \biggr\}\,, \end{equation} where each term $\mathcal{H}_{n}$ is defined by the integral, \begin{equation} \label{Hnintro} \mathcal{H}_{n}(\alpha,a_0) = \frac{1}{\sqrt{\pi A}} e^{-A(L-i\pi/2)^2}\int_{-\infty}^\infty dx\, H_{n}(\alpha x) e^{-a_0 x^2-\frac{1}{A}( x + z_0)^2}\,, \end{equation} each of which has the closed form result, \begin{equation} \label{Hnanalytic} \mathcal{H}_n(\alpha,a_0) = \frac{(-1)^n n! \, e^{\frac{-A(L-i\pi/2)^2}{1+a_0 A}}}{(1+a_0 A)^{n+\frac{1}{2}}}\sum_{m=0}^{\floor*{n/2}}\frac{1}{m!}\frac{1}{(n-2m)!} \Bigl\{ [ A(\alpha^2 - a_0)-1] (1+a_0 A)\Bigr\}^m (2\alpha z_0)^{n-2m} \,, \end{equation} the first several of which are written out explicitly in \eq{Hnexplicit}. In \eqs{Hnintro}{Hnanalytic}, $z_0 = A(\pi/2 + iL)$, in terms of which the integral \eq{Ib0f} can be written; the shifted exponents arise from absorbing the sine function in \eq{Ib0f}. The results \eq{Hnanalytic} for the integrals \eq{Hnintro} are the primary mathematical result of our paper. The final and primary physics result of our paper, \eq{thefinalfinalresult}, the resummed cross section in momentum space, is given in terms of the analytic result \eq{Ibseriesintro} for $I_b^0$ above. While a first glance at these formulas may not be particularly illuminating, we would like to emphasize that the results \eq{Hnanalytic} of the integrals \eq{Hnintro} in terms of which the final result is written contain within them explicit dependence on all the physical parameters such as $q_T$ and the scales $\mu_{L,H},\nu_{L,H}$ that one would want to vary not only to evaluate the cross section but also to estimate its theoretical uncertainties. Our explicit analytic formula makes this very fast to compute, modulo only the numerically computed coefficients in \eq{seriesexpansion}; those, however, can be computed once and for all, for any TMD observable or kinematics. It is important to emphasize that the result \eq{thefinalfinalresult} we give for the resummed momentum-space cross section represents, then, a triple expansion: \begin{itemize} \item \emph{Perturbative expansion:} usual expansions in $\alpha_s$ of matching coefficients and resummed exponents in \eq{qTcs}, counting $\alpha_s \ln(\mu_H/\mu_L)\sim 1$ or $\alpha_s \ln(\nu_H/\nu_L)\sim 1$, and fixed-order tails (not shown in \eq{qTcs}). 
\item \emph{$V_\beta$ expansion:} The additional truncation of the $V_\beta$ part of the rapidity evolution kernel in \eqs{Vdivision}{VGammaVbeta} to a fixed order in $\alpha_s$, according to \tab{NkLL}, makes it possible to integrate the rapidity exponential \eq{VGauss}, which is Gaussian in $\ln b$; the truncation behaves as an asymptotic expansion. This expansion becomes exact in the conformal limit of QCD. \item \emph{Hermite expansion:} The integral of the Gaussian in \eq{VGauss} against the Bessel function in \eq{basicintegral} is performed in terms of analytic integrals, by expanding $J_0$ through the representation \eq{Jzero} and the series of Gaussian-weighted Hermite polynomials \eq{seriesexpansion}, truncated to a finite number of terms, as needed to achieve a numerical accuracy in the cross section better than the perturbative uncertainty already present. \end{itemize} These are the expansions we find necessary to obtain the analytical (up to the numerical Hermite coefficients) result for the cross section in \eq{thefinalfinalresult}. Each expansion is systematically and straightforwardly improvable. The last two expansions could be avoided if one is satisfied with a fully numerical evaluation of the $b$ integral in \eq{Ibintro}. We find the expansions worthwhile as they yield the faster and similarly accurate formula \eq{thefinalfinalresult}.\footnote{In our calculations, we found a factor of 5 improvement in speed with our formula for the $q_T$ distribution vs. numerically integrating \eq{Ibintro} at every $q_T$.} In the rest of the paper, we will do our best to make clear which expansion(s) are being used at each stage. \bigskip The remainder of our paper is organized as follows. Before concluding \sec{analytic}, we match our resummed result onto fixed-order perturbation theory in \ssec{matching} and obtain and illustrate our results resummed to NNLL accuracy and matched to $\mathcal{O}(\alpha_s)$ fixed order. In \sec{non_pert} we offer some comments about expected nonperturbative corrections to our perturbative predictions, and in \sec{compare} we survey other methods to resum TMD cross sections in the literature as compared to ours. We conclude in \sec{conclusions}. In the Appendices we offer an array of technical results we need to evaluate the integrals and cross sections in the rest of the paper, as well as some alternatives to particular choices of schemes or methods we made in the main body of the paper. \section{Resummed cross section} \label{sec:resumxsec} In this section, we first review RG and rapidity RG methods to resum logs of separated hard and soft/collinear virtuality scales and collinear and soft rapidity scales in TMD cross sections. We review a standard procedure in which scales are set in impact parameter space and the result is then inverse Fourier transformed to momentum space. Then we propose a hybrid scale setting scheme where the soft rapidity scale is chosen to depend on $b$, but the virtuality scales are chosen only \emph{after} we transform back to momentum space, allowing evaluation of the $b$ integral without encountering a Landau pole. We also organize the rapidity evolution kernel in a way that anticipates making use of it to perform the $b$ integral semi-analytically in \sec{analytic}. Finally, we address the choice of the soft virtuality scale itself in momentum space to ensure stable power counting of logs. \subsection{RGE and $\nu$RGE solutions} We defined the $b$-space cross section in \eq{bxsec}. 
The cross section is independent of the virtuality and rapidity factorization scales $\mu,\nu$, but each factor $H,\widetilde S,\widetilde f^\perp$ does depend on them, and contains logs of ratios of the scales $\mu,\nu$ to their ``natural'' virtuality or rapidity scales $\mu_H$, $(\mu_L,\nu_L)$ and $(\mu_L,\nu_H)$, at which no large logarithms exist. Thus we would like to evaluate each factor at these separate scales, and then use RG and $\nu$RG evolution to take them to the common scales $(\mu,\nu)$ at which the cross section is evaluated. The solutions to these evolution equations are in a form where the large logs of ratios of separated scales are resummed or exponentiated. \subsubsection{Hard function} The hard function $H=\abs{C}^2$ depends only on the virtuality scale $\mu$, and obeys the RGE, \begin{equation} \label{hardRGE} \mu\frac{d}{d\mu}C(Q^2,\mu) = \gamma_C(\mu)C(Q^2,\mu) \Rightarrow \mu\frac{d}{d\mu}H(Q^2,\mu) = \gamma_H(\mu)H(Q^2,\mu)\,, \end{equation} where the anomalous dimension takes the form, \begin{equation} \gamma_C(\mu) = -\frac{\mathbb{Z}_H}{2}\Gamma_{\text{cusp}}[\alpha_s(\mu)]\ln\frac{\mu}{Q} + \gamma_C[\alpha_s(\mu)] \,,\qquad \gamma_H = \gamma_C + \gamma_C^*\,, \end{equation} where $\Gamma_{\text{cusp}}$ is known as the cusp anomalous dimension, the proportionality constant $\mathbb{Z}_H = 4$, and $\gamma_C[\alpha_s]$ is the non-cusp part of the anomalous dimension. The cusp anomalous dimension can be written as an expansion in the strong coupling $\alpha_s(\mu)$: \begin{equation} \Gamma_{\text{cusp}}[\alpha_s(\mu)] = \sum_{i=0}^{\infty}{\left(\frac{\alpha_s(\mu)}{4\pi}\right)^{i+1} \Gamma_{i}} \end{equation} The RGE \eq{hardRGE} has the solution \begin{equation} \label{Hevolved} C(Q^2,\mu) = C(Q^2,\mu_H)U_C(\mu_H,\mu) \Rightarrow H(Q^2,\mu) = H(Q^2,\mu_H) U_H(\mu_H,\mu)\,, \end{equation} where the evolution kernel is \begin{align} \label{UHdef} U_C(\mu_H,\mu) &= \exp\biggl\{ \int_{\mu_H}^\mu \frac{d\mu'}{\mu'} \gamma_C(\mu')\biggr\} \\ &= \exp\biggl\{ -\frac{\mathbb{Z}_H}{2} K_\Gamma(\mu_H,\mu) - \frac{\mathbb{Z}_H}{2}\eta_\Gamma(\mu_H,\mu)\ln\frac{\mu_H}{Q} + K_{\gamma_C}(\mu_H,\mu)\biggr\}\,, \nn \end{align} and $U_H = \abs{U_C}^2$. The pieces $K_\Gamma,\eta_\Gamma,K_\gamma$ of the evolution kernel are given by: \begin{subequations} \label{Ketadef} \begin{align} K_\Gamma(\mu_0,\mu) &= \int_{\mu_0}^\mu \frac{d\mu'}{\mu'} \Gamma_{\text{cusp}}[\alpha_s(\mu')]\ln\frac{\mu'}{\mu_0} \\ \eta_\Gamma(\mu_0,\mu) &= \int_{\mu_0}^\mu \frac{d\mu'}{\mu'} \Gamma_{\text{cusp}}[\alpha_s(\mu')] \\ K_\gamma(\mu_0,\mu) &= \int_{\mu_0}^\mu \frac{d\mu'}{\mu'} \gamma[\alpha_s(\mu')] \,. \end{align} \end{subequations} Explicit expressions for these kernels up to NNLL accuracy are given in \appx{def-K}. For the case of Higgs production, we have another Wilson coefficient ($C_t^2$) obtained from integrating out the top quark. So, in addition to the hard function, we also have running for this coefficient: \begin{eqnarray} \mu \frac{d}{d\mu} C_t^2(M_t^2,\mu) = \gamma_{C_t^2} C_t^2(M_t^2,\mu) \end{eqnarray} The anomalous dimension takes the general form \begin{eqnarray} \gamma_{C_t^2} = \sum_{i=0}^{\infty} \left(\frac{\alpha_s(\mu)}{4\pi}\right)^{i+1} \gamma_i \end{eqnarray} where each $\gamma_i$ is a number. 
The RGE has the solution \begin{eqnarray} C_t^2(M_t^2,\mu) = C_t^2(M_t^2,\mu_T)U_{C_t^2}(\mu_T,\mu) \end{eqnarray} where the evolution kernel is \begin{eqnarray} \label{UTdef} U_{C_t^2}(\mu_T,\mu)= \text{exp} \Bigg \{\int_{\mu_T}^{\mu}\frac{d\mu'}{\mu'}\gamma_{C_t^2}\Bigg \}= \text{exp} \Bigg \{K_{\gamma_{C_t^2}}(\mu_T, \mu) \Bigg\} \end{eqnarray} where \begin{equation} K_{\gamma_{C_t^2}}(\mu_0,\mu) = \int_{\mu_0}^\mu \frac{d\mu'}{\mu'} \gamma_{C_t^2}[\alpha_s(\mu')] \end{equation} Explicit expressions for these kernels up to NNLL accuracy are given in \appx{def-K}. \subsubsection{Soft function and TMDPDFs} The soft function in $b$ space obeys the $\mu$- and $\nu$-RGEs, \begin{align} \mu\frac{d}{d\mu}\widetilde S(b;\mu,\nu) = \gamma_\mu^S(\mu,\nu) \widetilde S(b;\mu,\nu)\,, \ \nu\frac{d}{d\nu}\widetilde S(b;\mu,\nu) = \gamma_\nu^S (\mu,\nu) \widetilde S(b;\mu,\nu) \end{align} while the TMDPDFs/beam functions obey \begin{align} \mu\frac{d}{d\mu}\widetilde f_i^\perp(b,x_i,p^\pm;\mu,\nu) &= \gamma_\mu^f(\mu,\nu) \widetilde f_i^\perp(b,x_i,p^\pm;\mu,\nu)\,, \\ \nu\frac{d}{d\nu}\widetilde f_i^\perp(b,x_i,p^\pm;\mu,\nu) &= \gamma_\nu^f (\mu,\nu) \widetilde f_i^\perp(b,x_i,p^\pm;\mu,\nu)\,.\nn \end{align} The $\mu$ anomalous dimensions take the form: \begin{subequations} \label{gammamu} \begin{align} \label{gammamuS} \gamma_\mu^S(\mu,\nu) &= -\mathbb{Z}_S \Gamma_{\text{cusp}}[\alpha_s(\mu)] \ln\frac{\mu}{\nu} + \gamma_\mu^S[\alpha_s(\mu)]\,, \\ \label{gammamuf} \gamma_\mu^f(\mu,\nu) &= \mathbb{Z}_f \Gamma_{\text{cusp}}[\alpha_s(\mu)] \ln\frac{\nu}{p^\pm} + \gamma_\mu^f[\alpha_s(\mu)]\,, \end{align} \end{subequations} where $\mu$ and $\nu$ independence of the cross section requires $\mathbb{Z}_H = 2\mathbb{Z}_f = -\mathbb{Z}_S = 4$, and $\gamma_H[\alpha_s] = -\gamma_\mu^S[\alpha_s] - 2\gamma_\mu^f[\alpha_s]$. In $\gamma_\mu^f$ we recall that the large rapidity scales are given by $p^\pm = Qe^{\pm y} = x_{1,2}\sqrt{s}$ for the two colliding hard partons. Note $p^+ p^- = Q^2$. As for the form of the $\nu$ anomalous dimensions, at one-loop fixed order in perturbation theory, they take the values \begin{equation} \label{gammanu} \gamma_\nu^S = \mathbb{Z}_S \frac{\alpha_s(\mu)}{4 \pi}\Gamma_0 \ln \mu b_0\,,\quad \gamma_\nu^f = \mathbb{Z}_f \frac{\alpha_s(\mu)}{4 \pi}\Gamma_0 \ln\mu b_0\,, \end{equation} where \begin{equation} \label{mub} b_0 = \frac{be^{\gamma_E}}{2}\,, \end{equation} with $1/b_0$ the $\mu$-scale at which rapidity logs are minimized in $b$ space. Beyond $\mathcal{O}(\alpha_s)$, the form of the $\nu$ anomalous dimensions can be deduced from the consistency relation: \begin{equation} \frac{d}{d\ln\mu}\gamma_\nu^i(\mu,\nu) = \frac{d}{d\ln\nu} \gamma_\mu^i(\mu,\nu) = \mathbb{Z}_i \Gamma_{\text{cusp}}[\alpha_s(\mu)]\,. \end{equation} Solving this equation in $\mu$, we obtain \begin{equation} \label{gammanurun} \gamma_\nu^i (\mu,\nu) = \mathbb{Z}_i\int_{1/b_0}^\mu d\ln\mu' \Gamma_{\text{cusp}}[\alpha_s(\mu')] + \gamma_{Ri}[\alpha_s(1/b_0)]= \mathbb{Z}_i\eta_\Gamma(1/b_0,\mu) + \gamma_{Ri}[\alpha_s(1/b_0)]\,, \end{equation} where the boundary condition of the evolution at $1/b_0$ determines the non-cusp part $\gamma_{Ri}[\alpha_s]$ of the $\nu$ anomalous dimension. The independence of the cross section \eq{bxsec} on $\nu$ requires, again, $\mathbb{Z}_S = -2\mathbb{Z}_f$, and $\gamma_{RS} = -2\gamma_{Rf}$. 
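The consistency relation above is simple enough to check symbolically at one loop; a minimal sketch, holding $\alpha_s$ fixed (its running contributes at the next order):
\begin{verbatim}
import sympy as sp

# One-loop check of d(gamma_nu^S)/dln(mu) = d(gamma_mu^S)/dln(nu)
#                 = Z_S Gamma_cusp,
# with alpha_s held fixed. lmu, lnu, lb0 stand for ln(mu), ln(nu), ln(b_0).
lmu, lnu, lb0, alpha, Gamma0, ZS = sp.symbols('lmu lnu lb0 alpha Gamma0 ZS')
a = alpha/(4*sp.pi)

gamma_nu_S = ZS*a*Gamma0*(lmu + lb0)   # Z_S (alpha/4pi) Gamma_0 ln(mu b_0)
gamma_mu_S = -ZS*a*Gamma0*(lmu - lnu)  # cusp part of -Z_S Gamma ln(mu/nu)

print(sp.diff(gamma_nu_S, lmu))        # -> ZS*Gamma0*alpha/(4*pi)
print(sp.diff(gamma_mu_S, lnu))        # -> the same, as required
\end{verbatim}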
The solutions of the $\mu$ and $\nu$ RGEs for $\widetilde S$ and $\widetilde f^\perp$ are: \begin{subequations} \begin{align} \label{Sevolved} \widetilde S(b;\mu,\nu) &= \widetilde S(b;\mu_L,\nu_L)U_S(\mu_L,\mu;\nu)V_S(\nu_L,\nu;\mu_L) \\ &= \widetilde S(b;\mu_L,\nu_L)V_S(\nu_L,\nu;\mu)U_S(\mu_L,\mu;\nu_L) \nn \\ \label{fevolved} \widetilde f_i^\perp(b,x_i,p^\pm;\mu,\nu) &= \widetilde f_i^\perp(b,x_i,p^\pm;\mu_L,\nu_H)U_f(\mu_L,\mu;\nu)V_f(\nu_H,\nu;\mu_L) \\ &= \widetilde f_i^\perp(b,x_i,p^\pm;\mu_L,\nu_H)V_f(\nu_H,\nu;\mu)U_f(\mu_L,\mu;\nu_H)\,, \nn \end{align} \end{subequations} where each pair of equalities accounts for two equivalent paths for RG evolution in the two-dimensional $\mu,\nu$-space (see \fig{modes}). The evolution kernels $U_{S,f}$ in the $\mu$ direction are: \begin{subequations} \label{USUf} \begin{align} \label{US} U_S(\mu_L,\mu;\nu) &= \exp\biggl\{-\mathbb{Z}_S K_\Gamma(\mu_L,\mu) - \mathbb{Z}_S \eta_\Gamma(\mu_L,\mu)\ln \frac{\mu_L}{\nu} + K_{\gamma_S}(\mu_L,\mu)\biggr\} \\ \label{Uf} U_f(\mu_L,\mu;\nu) &= \exp\biggl\{\mathbb{Z}_f \eta_\Gamma(\mu_L,\mu)\ln\frac{\nu}{p^\pm} + K_{\gamma_f}(\mu_L,\mu)\biggr\} \,. \end{align} \end{subequations} Note that the $\mu$ anomalous dimension for $\widetilde f^\perp$ in \eq{gammamuf} does not have a log of $\mu$ in its cusp anomalous dimension term, so no $K_\Gamma$ term appears in its evolution kernel $U_f$ in \eq{Uf}. Meanwhile, the $\nu$ evolution kernels $V_{S,f}$ are given by integrals over $\nu$ of \eq{gammanurun}, \begin{subequations} \label{VSVf} \begin{align} \label{VS} V_S(\nu_L,\nu;\mu) &= \exp\biggl\{\biggl[\mathbb{Z}_S \eta_\Gamma(1/b_0,\mu) + \gamma_{RS}[\alpha_s(1/b_0)]\biggr]\ln\frac{\nu}{\nu_L}\biggr\} \\ \label{Vf} V_f(\nu_H,\nu;\mu) &=\exp\biggl\{\biggl[\mathbb{Z}_f \eta_\Gamma(1/b_0,\mu) + \gamma_{Rf}[\alpha_s(1/b_0)]\biggr]\ln\frac{\nu}{\nu_H}\biggr\} \,. \end{align} \end{subequations} \subsubsection{RG evolved cross section} We can now put these pieces together to express the cross section \eq{bxsec} in terms of the hard, soft, and beam functions evolved from their natural scales where logs in each are minimized, and thus logs in the whole cross section are resummed: \begin{align} \label{sigrun} \widetilde\sigma(b,x_1,x_2,Q;\mu_i,\nu_i;\mu,\nu) &=U_{\text{tot}}(\mu_i,\nu_i;\mu,\nu) \, C_t^2(M_t^2, \mu_T) H(Q^2;\mu_H)\widetilde S(b;\mu_L,\nu_L) \\ &\quad\times \widetilde{f}^\perp_1(b,x_1,p^-;\mu_L,\nu_H) \widetilde{f}^\perp_2(b,x_2,p^+;\mu_L,\nu_H) \,,\nn \end{align} where \begin{align} \label{Utotmunu} &U_{\text{tot}}(\mu_i,\nu_i;\mu,\nu) \equiv U_{C_t^2}(\mu_T, \mu)U_H(\mu_H, \mu) U_S(\mu_L, \mu; \nu)V_S(\nu_L,\nu;\mu_L)\,U_f^2(\mu_L, \mu; \nu) V_f^2(\nu_H,\nu;\mu_L) \nn\\ &\quad = \exp\biggl\{-\mathbb{Z}_H K_\Gamma(\mu_H,\mu) - \mathbb{Z}_S K_\Gamma(\mu_L, \mu) - \mathbb{Z}_H \eta_\Gamma(\mu_H,\mu)\ln\frac{\mu_H}{Q} +K_{\gamma_H}(\mu_H,\mu) + K_{\gamma_{C_t^2}}(\mu_T,\mu) \nn \\ & \qquad\quad +\eta_\Gamma(\mu_L,\mu)\Bigl[-\mathbb{Z}_S \ln\frac{\mu_L}{\nu} + 2\mathbb{Z}_f \ln\frac{\nu}{Q}\Bigr] + K_{\gamma_S}(\mu_L,\mu) + 2K_{\gamma_f}(\mu_L,\mu) \\ & \qquad\quad + \Bigl[ \mathbb{Z}_S \eta_\Gamma(1/b_0,\mu_L) + \gamma_{RS}[\alpha_s(1/b_0)]\Bigr] \ln\frac{\nu}{\nu_L} + 2 \Bigl[ \mathbb{Z}_f \eta_\Gamma(1/b_0,\mu_L) + \gamma_{Rf}[\alpha_s(1/b_0)]\Bigr] \ln\frac{\nu}{\nu_H} \biggr\} \nn . 
\end{align} Using the relations $\mathbb{Z}_H = -\mathbb{Z}_S = 2\mathbb{Z}_f = 4$, $\gamma_H +\gamma_{C_t^2}= -\gamma_\mu^S - 2\gamma_\mu^f$, and $\gamma_{RS} = -2\gamma_{Rf}$, we obtain the simpler expression, \begin{align} \label{Utot} U_{\text{tot}}(\mu_i,\nu_i;\mu,\nu) & = \exp\biggl\{4 K_\Gamma(\mu_L,\mu_H) - 4 \eta_\Gamma(\mu_L,\mu_H) \ln\frac{Q}{\mu_L} - K_{\gamma_H}(\mu_L,\mu_H) - K_{\gamma_{C_t^2}}(\mu_L,\mu_T) \nn\\ & \qquad +\Big[ -4\, \eta_\Gamma (1/b_0,\mu_L)+ \gamma_{R\, S}\big[\alpha_s(1/b_0) \big] \Big] \ln \frac{\nu_H}{\nu_L}\biggr\} \,,\end{align} in which we observe that the explicit dependence on the arbitrary scales $\mu$ and $\nu$ has exactly canceled out, leaving only the dependence on the natural scales $\mu_{H,T,L}$, $1/b_0$, and $\nu_{L,H}$ where the hard, soft, and beam functions live. Note $U_{C_t^2},K_{\gamma_{C_t^2}}$ are present only in the case of the Higgs. In \eq{Utot}, we envision that the rapidity evolution takes place at (or around) the scale $1/b_0$ (see \fig{modes}). Then we can actually just expand the evolution factor $\eta_\Gamma(1/b_0,\mu_L)$ and the rapidity anomalous dimension $\gamma_{RS}$ in a fixed-order expansion in $\alpha_s(\mu_L)$, to the order required for N$^k$LL accuracy. This is in fact what we will do below. Then it becomes useful to split up $U_{\text{tot}}$ in \eq{Utot} into two factors, \begin{equation} \label{UVsplit} U_{\text{tot}}(\mu_i,\nu_i;\mu,\nu) = U(\mu_L,\mu_H,\mu_T) V(\nu_L,\nu_H;\mu_L)\,, \end{equation} where \begin{align} \label{UVdef} U(\mu_L,\mu_H, \mu_T) &= \exp\biggl\{4 K_\Gamma(\mu_L,\mu_H) - 4 \eta_\Gamma(\mu_L,\mu_H) \ln\frac{Q}{\mu_L} - K_{\gamma_H}(\mu_L,\mu_H)- K_{\gamma_{C_t^2}}(\mu_L,\mu_T)\biggr\} \nn \\ V(\nu_L,\nu_H;\mu_L) &= \exp\biggl\{ \gamma_\nu^S (\mu_L) \ln\frac{\nu_H}{\nu_L}\biggr\} \,, \end{align} which are of course just $U_H(\mu_L,\mu_H)U_{C_t^2}(\mu_L,\mu_T)$ and $V_S(\nu_L,\nu_H;\mu_L)$ as given by \eqss{UHdef}{UTdef}{VS}. For brevity in the rest of the paper we will just use $U,V$ in \eq{UVdef}. Inside $V$ in \eq{UVdef}, we use the fixed-order expansion of $\gamma_\nu^S(\mu_L)$ given in \eq{gammanurun} using the expansions \eq{etaexp2} for $\eta_\Gamma$ and \eq{gammaRSexp} for $\gamma_{RS}$: \begin{align} \label{gammanuexp} \gamma_\nu^S(\mu_L) &= \quad \frac{\alpha_s(\mu_L)}{4\pi} \quad \biggl[\mathbb{Z}_S \Gamma_0 \ln \mu_L b_0 + \gamma_{RS}^0 \biggr] \\ &\quad + \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^2 \biggl[ \mathbb{Z}_S \Gamma_0\beta_0 \ln^2 \mu_L b_0 +( \mathbb{Z}_S \Gamma_1 + 2\gamma_{RS}^0 \beta_0 )\ln \mu_L b_0 + \gamma_{RS}^1\biggr] \nn \\ &\quad + \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^3 \biggl\{ \frac{4}{3}\mathbb{Z}_S \Gamma_0\beta_0^2 \ln^3 \mu_L b_0 + [ \mathbb{Z}_S (\Gamma_0 \beta_1 + 2\Gamma_1\beta_0) + 4\gamma_{RS}^0\beta_0^2]\ln^2 \mu_L b_0 \nn \\ &\qquad\qquad\qquad + [\mathbb{Z}_S \Gamma_2 + 2\gamma_{RS}^0 \beta_1 + 4\gamma_{RS}^1\beta_0]\ln \mu_L b_0 + \gamma_{RS}^2\biggr\} + \cdots \,. \nn \end{align} In practice we truncate this expansion at the appropriate order of logarithmic accuracy. We will always pick $\mu_L$ in such a way that none of these generate large logs (either $\mu_L\sim 1/b_0$ in $b$ space, or in momentum space in such a way that they remain small after inverse transformation---see \ssec{muLscale}), except the factor of $\ln(\nu_H/\nu_L)$ in \eq{UVdef}. This is an observation that will become key below, when we split $\gamma_\nu$ into two separate parts in \ssec{NNLLbeyond}. 
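The ingredients of \eqs{UVsplit}{UVdef} are easy to evaluate numerically from the defining integrals \eq{Ketadef}. A minimal sketch at leading-log level (one-loop running coupling, lowest-order cusp only, non-cusp $K_\gamma$ terms dropped; all numerical inputs are illustrative):
\begin{verbatim}
import numpy as np
from scipy import integrate

# Sketch: K_Gamma and eta_Gamma of eq. (Ketadef) with one-loop running
# alpha_s and Gamma_cusp ~ (alpha_s/4pi) Gamma_0 (Gamma_0 = 4 C_A here).
CA, nf = 3.0, 5.0
beta0, Gamma0 = 11.0*CA/3.0 - 2.0*nf/3.0, 4.0*CA
alpha_mZ, mZ = 0.118, 91.19

def alpha_s(mu):  # one-loop: d alpha / d ln mu = -beta0 alpha^2 / (2 pi)
    return alpha_mZ/(1 + alpha_mZ*beta0/(2*np.pi)*np.log(mu/mZ))

Gamma = lambda mu: Gamma0*alpha_s(mu)/(4*np.pi)

def K_Gamma(mu0, mu):
    f = lambda t: Gamma(np.exp(t))*(t - np.log(mu0))
    return integrate.quad(f, np.log(mu0), np.log(mu))[0]

def eta_Gamma(mu0, mu):
    return integrate.quad(lambda t: Gamma(np.exp(t)),
                          np.log(mu0), np.log(mu))[0]

muL, muH, Q = 5.0, 125.0, 125.0
# LL part of U in eq. (UVdef), with the non-cusp K_gamma terms dropped:
U_LL = np.exp(4*K_Gamma(muL, muH) - 4*eta_Gamma(muL, muH)*np.log(Q/muL))
print(K_Gamma(muL, muH), eta_Gamma(muL, muH), U_LL)
\end{verbatim}
The strong Sudakov suppression of $U$ between widely separated $\mu_L$ and $\mu_H$ is visible already in this crude leading-log evaluation.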
\subsubsection{How to choose the scales?} To evaluate the cross section \eq{sigrun} (and its inverse Fourier transform back to momentum space \eq{xsec}) explicitly, we need to choose the scales $\mu_{H,L}$ and $\nu_{L,H}$ between which to run in \eq{Utot}. Choosing these near the scales at which the logs in each individual function are minimized in principle achieves resummation of all large logarithms. However, these natural choices are different in impact parameter and momentum space. There are various possible ways in which this resummation can be handled. In this paper, we envision, in \eqs{UVsplit}{UVdef} and \fig{modes}, running the hard function in $\mu$ to the natural low scale of the soft and collinear functions, and the soft function in $\nu$ to the natural rapidity scale of the TMDPDFs. The high scales $\mu_H$ and $\nu_H$ for the running of the hard and soft functions are unambiguously best chosen near the invariant mass $\sim Q$ of the gauge boson. The choices of the low scales $\mu_L$ and $\nu_L$ are less clear-cut, since those scales can be chosen either in $b$ space or in momentum space. The $\mu$ scales, as in a usual EFT, are a measure of the virtuality of the modes that contribute to each function. For the hard function, this virtuality scale, not surprisingly, is the hard scale $Q$, which also happens to be the scale choice for which the logarithms in the hard function are minimized. The virtuality for soft and beam functions is of the order of the transverse momentum that the function contributes to the total transverse momentum. This can be seen in momentum space where the product of these functions in impact parameter space turns into a convolution over transverse momentum, \eq{txsec}. Since the total transverse momentum is a sum over the transverse momenta contributed by each function, for a given total $q_T$, the contribution of any one of these functions traverses a range of scales. While this situation is not unique to this observable, what is different is the dependence of the TMDPDF on the hard scale $Q$. As we will see, due to this $Q$ dependence, the conjugate natural scale to $b_0$ in the resummed result is no longer $q_T$ but is shifted away from $q_T$ towards $Q$. However, the final aim of any resummation is to have a well-behaved perturbative series. Whenever the fixed order logs become too large, the expansion in $\alpha_s$ does not converge, and it becomes necessary to reorganize the series in terms of resummed exponents. A successful resummation is then one in which the fixed order terms that are left behind form a rapidly converging series in $\alpha_s$. Since the large logarithms are, in fact, the terms that spoil the convergence of the fixed order perturbative series, the general strategy is then to minimize the effect of these logarithms in the residual fixed order series. Keeping these issues in mind, we explore two possible sets of scale choices for $\mu_L,\nu_L$ for resummation: the standard choices in impact parameter space in \ssec{CSS}, and a new proposed set of choices in \ssec{mom} allowing evaluation of the resummed cross section in momentum space. \subsection{Scale choice in impact parameter space} \label{ssec:CSS} To choose scales for resummation, we need some idea about the natural scales at which each of the three functions (hard, soft and TMDPDF) lives. This is easily seen by looking at their behavior up to one loop. 
From the results given in \appx{fixedorder}, we find that each of these functions is a function of the logs, \begin{equation} H = H\Bigl(\ln\frac\mu Q\Bigr)\,,\quad \widetilde S = \widetilde S\Bigl( \ln \mu b_0,\ln \frac{\nu}{\mu}\Bigr)\,,\quad \widetilde f_\perp = \widetilde f_\perp\Bigl( \ln \mu b_0,\ln \frac{\nu}{p^\pm}\Bigr)\,, \end{equation} given in impact parameter space for $\widetilde S,\widetilde f_\perp$. In this space, it is perfectly evident from the fixed-order calculation that the natural scale which minimizes all the logs in $\mu$ for the soft and beam functions is $\mu =\mu_L \sim 1/b_0$. Since the final cross section at a given $q_T$ involves an integral over a range of $b$, the scale choice is in fact spread over a range of scales. This is to be expected from the earlier discussion of there being no unique physical scale for the soft and beam functions. The natural scales for the various functions then are $\mu=\mu_H$ for the hard function, $(\mu,\nu) = (\mu_L,\nu_L)$ for the soft function, and $(\mu,\nu) = (\mu_L,\nu_H)$ for the beam functions, where $\mu_L, \nu_L \sim 1/b_0$ and $\nu_H , \mu_H \sim Q$ (recall $p^+p^- = Q^2$). All the logs can then be resummed by running the hard function from the scale $Q$ to $1/b_0$ and the soft function in $\nu$ from $Q$ to $1/b_0$. This will produce the result (for the central values, not counting scale variations) of the CSS formalism. Therefore, this scheme resums logarithms of the form $\ln( Q b_0)$. The power counting adopted for this resummation is then straightforward, since there is only one type of log. It is usually chosen as $\alpha_s \ln(Q b_0) \sim 1$. Leading log (LL) accuracy then resums $\alpha_s^n \ln^{n+1}(Q b_0)$, with NLL and NNLL down by one and two powers of the logarithm, respectively. Since the lower scales $\mu_L$ are chosen in $b$ space, the cross section involves an inverse Fourier transform over arbitrarily large values of $b$, so eventually we hit the nonperturbative scale, which manifests itself in the form of the Landau pole in $\alpha_s(1/b_0)$. This corresponds to the fact that the beam and soft functions can contribute arbitrarily small values of transverse momentum even when the total transverse momentum is perturbative. This is usually handled by putting a sharp or smooth cutoff in $b$ space, which provides a way to model nonperturbative physics \cite{Qiu:2000hf,Sun:2013dya,DAlesio:2014mrz,Collins:2014jpa,Scimemi:2017etj}. The impact of these nonperturbative effects will be discussed in \sec{non_pert}. The obvious advantage of this scheme is that the power counting is unambiguous, and we can guarantee that, with the central values of the scale choices in $b$ space, all the logs in the residual fixed order series are set exactly to zero. As far as the choice of central values is concerned, the terms that are resummed are exactly those of the CSS resummation formalism \cite{Collins:1984kg}. However, due to the introduction of the new rapidity renormalization scale $\nu$, there is much better control over which terms can be included in the exponent and which terms remain in the fixed order \cite{Chiu:2012ir,Chiu:2011qc}. This directly translates into much better estimates of the error due to missing higher order terms.
\begin{figure} \centerline{\scalebox{.55}{\includegraphics{bspace.pdf}}} \vskip-0.2cm \caption[1]{Result of $b$-space resummation for Higgs production and DY, setting the central values for the low scales for resummation at $\mu_L,\nu_L = 1/b_0$ and varying around these by factors of 2 to obtain the uncertainty bands. The plots in momentum space are obtained by an inverse Fourier transform, which hits a Landau pole at large $b$ that must be cut off, see \fig{bspace}. The uncertainty for the DY curve is thus actually underestimated, due to an ambiguity in this cutoff.} \label{fig:bspaceresum} \end{figure} Another advantage of having control over what exactly goes into the exponent arises when we match the resummed cross section to the fixed order cross section at large $q_T$. To maintain accuracy over the full (perturbative) range of $q_T$ ($Q \geq q_T \gg \Lambda_{QCD}$), we need to turn off resummation at the value of $q_T$ where the nonsingular contribution is of the same order as the singular one. Due to the two independent scales $\mu,\nu$ available, this can be done very easily by using profiles in these scales to smoothly turn off the resummation and simultaneously match onto the full (including non-singular pieces) fixed order cross section. This technique was implemented for the Higgs transverse spectrum in \cite{Neill:2015roa} to obtain the cross section to NNLL+NNLO accuracy. In this paper, for the purposes of comparison with other resummation schemes, we present the results for the cross section at NNLL accuracy for both the Higgs and DY using this scheme (\fig{bspaceresum}). After having decided on the central values, we next need to estimate the size of the higher-order perturbative corrections we have missed, using scale variations. This is accomplished by varying the two renormalization scales $\mu $ and $\nu$ independently, as detailed in \cite{Neill:2015roa}. Since we resummed and chose scales in $b$ space, our final result involves an inverse Fourier transform over the resummed $b$ space result: \begin{align} \label{inverseFT} \frac{d\sigma}{dq_T^2 dy} &= \frac{\sigma_0}{2} \int db\, bJ_0(bq_T) U\Bigl(\mu_H \!\sim\! i Q, \mu_T \!\sim\! M_t, \mu_L \!\sim\! \frac{1}{b_0}\Bigr) V\Bigl(\nu_L \!\sim \!\frac{1}{b_0},\nu_H \!\sim\! Q;\mu_L\! \sim\! \frac{1}{b_0}\Bigr)\nn\\ &\qquad \times f(x_1,\mu_L\sim1/b_0)f(x_2,\mu_L\sim 1/b_0) \equiv \frac{\sigma_0}{2} \int db\, K(b) \end{align} where $K(b)$ is the complete $b$-space integrand. It turns out that the cusp anomalous dimension for DY ($\Gamma_0 = 4C_F$) is much lower in magnitude than that for Higgs ($\Gamma_0 = 4C_A$). This results in a much weaker damping effect at large values of $b$, see \fig{bspace}. The plot shows the $b$ space integrand $K(b)$ for $q_T= 5$ GeV. \begin{figure} \centerline{\scalebox{.55}{\includegraphics{dybspace.pdf}}} \vskip-0.4cm \caption[1]{$b$-space integrand $K(b)$ given in \eq{inverseFT} for $b$-space resummation. On the left is the result of the central value $\mu_L=1/b_0$ for the low virtuality scale, for which a somewhat stable plateau exists before hitting the Landau pole above $b\sim 3\text{ GeV}^{-1}$, allowing imposition of a cutoff to which the result is insensitive. On the right is shown the result of varying this scale down by a factor of 2, in which case the pole moves to lower $b$, and no stable region for a reasonable cutoff exists.} \label{fig:bspace} \end{figure} The divergence beyond $b= 3\text{ GeV}^{-1}$ in the left-hand plot in \fig{bspace} is the Landau pole.
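The cutoff procedure itself is straightforward to illustrate numerically. The following minimal Python sketch performs the inverse transform \eq{inverseFT} with a sharp cutoff $b_{\max}$, using a simple stand-in for $K(b)$ (the Bessel weight times a Gaussian in $\ln b$, with assumed parameter values); the true $K(b)$ also contains the PDFs and fixed-order factors and, unlike this stand-in, actually diverges at the Landau pole. Scanning $b_{\max}$ mimics the test of whether a stable plateau exists:
\begin{verbatim}
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

qT = 5.0                # GeV
A, Omega = 0.3, 20.0    # assumed stand-in parameters

def K(b):
    # stand-in b-space integrand: Bessel weight times a
    # Gaussian in ln(b) mimicking the resummed kernel
    return b * j0(b * qT) * np.exp(-A * np.log(Omega * b)**2)

# scan the sharp cutoff b_max (in GeV^-1) to test for a plateau
for bmax in (2.0, 3.0, 4.0, 5.0):
    val, err = quad(K, 1e-8, bmax, limit=400)
    print(f"b_max = {bmax:3.1f} GeV^-1 -> integral = {val:+.6f}")
\end{verbatim}
For the DY kernel at NNLL, no such plateau survives once $\mu_L$ is varied downward, which is precisely the problem described next.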
It is clear that there is only a narrow window of stability before we hit the Landau pole. The situation worsens when we try to do a scale variation about the central choice $ \mu_L \sim 1/b_0 $, specifically $\mu \sim \mu_L/2$, as shown in the right-hand plot of \fig{bspace}. What this does is bring the Landau pole closer by a factor of 2, in which case there is no clear separation between the perturbative and nonperturbative regimes. It is then unclear how a hard cutoff, or even a smooth one, would give an accurate estimate of the perturbative uncertainty band. The error bands in \fig{bspaceresum} for the case of DY at NNLL therefore cannot be generated meaningfully via this scale variation. For the purposes of error estimation, then, we were forced to make an educated guess for the upper boundary of the error band at NNLL. Clearly, we would like to find a procedure that does better. \subsection{Resummation in momentum space} \label{ssec:mom} The Landau pole in the $b$-space resummation scheme above comes about due to the running of the strong coupling $\alpha_s$ all the way down to the scale $1/b_0$, which can reach $\Lambda_{\text{QCD}}$. A natural question to ask, then, is whether we can avoid the Landau pole by choosing the $\mu$ scale in momentum space. It is clear from the above discussion that we cannot choose a single scale in momentum space which will set all the logs to zero. However, what we can ask is whether it is possible to make an appropriate choice for $\mu,\nu$ directly in momentum space such that the resummed exponent, on average, minimizes the contribution from the residual fixed order logs. These small logs, though nonzero, can then be included order by order, ensuring that they only contribute at the same order as the error band. \subsubsection{Leading logs} As before, let us assume the power counting $ \alpha_s \ln( \mu_H/\mu_L)$, $\alpha_s \ln( \nu_H/\nu_L) \sim 1$, where $\mu_L,\nu_L $ will be our choices of the renormalization scales in momentum space. According to this power counting, at leading log (LL) we wish to resum terms of the form $\alpha_s^n \ln^{n+1}(\frac{\mu_H}{\mu_L})$ in the exponent of the cross section. We also assume that the residual logs of the form $\ln( \mu_L b_0)$ are small (of $\mathcal{O}(1)$) and need not be resummed. The resummation then involves running just the hard function from $\mu_H \sim Q$ to $\mu_L $. All the residual fixed order logs, as well as the logs in $\nu$, are then subleading at this order. The cross section we get is \begin{eqnarray} \frac{d \sigma}{d q_T^2 dy} = \frac{(2\pi)\sigma_0}{2} U^{\rm LL}(\mu_H, \mu_T, \mu_L) \delta^2(\vec{q_T})f(x_1,q_T)f(x_2,q_T) \label{LLc} \end{eqnarray} where we can obtain $U^{\rm LL}$ from \eq{UHdef}. This is highly singular and gives a trivial result at nonzero $q_T$. This suggests that the supposedly higher order pieces that we are ignoring are not unimportant. Let's look at the fixed order pieces left over at one loop: \begin{equation} \label{ffS} (2\pi)^3 \tilde f^{\perp}_1 \tilde f^{\perp}_2 \tilde S = 1+ \mathbb{Z}_S \Gamma_0\frac{\alpha_s(\mu_L)}{8\pi}\Bigl( 2 \ln\frac{Q}{\mu_L}\ln(\mu_L b_0) + \ln^2(\mu_L b_0) \Bigr) + O(\alpha_s \ln(\mu_L b_0), \alpha_s^2) \end{equation} The biggest term here appears to be $\ln(Q/\mu_L)\ln(\mu_L b_0)$. According to our naive power counting, this term is subleading at LL and hence should be ignored.
Going to momentum space, this term gives us $\ln(Q/\mu_L)/q_T^2$, which is in fact the leading logarithmic term in the fixed order cross section at nonzero $q_T$. So then it appears that we haven't resummed any of the logs which contribute at nonzero $q_T$, which makes sense, since we get a trivial result at nonzero $q_T$. We can then conclude that the LL cross section in Eq.~(\ref{LLc}) with this power counting only gives us the result at zero $q_T$. We will have to go to NLL to have any handles to ``fix'' it. \subsubsection{Next-to-leading logs} \label{ssec:NLL} The next logical step is to go to NLL. We first update the hard function to resum all logs of the form $\alpha_s^n \ln^n(Q/\mu_L)$. However, this by itself is not useful, since it will lead to the same problem as in the LL case, in that it will only contribute at zero $q_T$. But power counting demands that we also run the soft function in $\nu$ from the scale $\nu_H \sim Q$ to $\nu_L \sim \mu_L$. Using \eq{gammanurun}, \begin{eqnarray} \label{gamS} \gamma_{\nu}^S (\mu_L)= -4\Gamma_0\frac{\alpha_s(\mu_L)}{4\pi}\ln(\mu_L b_0) \left[1+ \frac{\alpha_s(\mu_L)}{4\pi} \beta_0 \ln(\mu_L b_0) + O\left(\alpha_s^2\ln^2(\mu_L b_0)\right) \right ] \end{eqnarray} We can then use the leading term of this anomalous dimension to resum the soft function: \begin{eqnarray} \label{softNLL} V^{\rm NLL}(\nu_H,\nu_L;\mu_L) &&= \exp\left[ -\Gamma_0 \frac{\alpha_s(\mu_L)}{\pi}\ln\left(\frac{\nu_H}{\nu_L}\right)\ln(\mu_L b_0) \right] \\ \frac{d \sigma}{d q_T^2 dy} &&=\frac{\sigma_0}{2} U^{\rm NLL}(\mu_H, \mu_T, \mu_L)\int db\, b J_0(b q_T) \,V^{\rm NLL}(\nu_H,\nu_L;\mu_L)f(x_1,\mu_L)f(x_2,\mu_L) \nn\\ &&= \sigma_0 U^{\rm NLL}(\mu_H, \mu_T, \mu_L) e^{-2 \omega_s \gamma_E} \frac{ \Gamma[1- \omega_s]}{\Gamma[\omega_s]} \frac{1}{\mu_L^2} \left(\frac{\mu_L^2}{q_T^2}\right)^{1-\omega_s}f(x_1,\mu_L)f(x_2,\mu_L) \nn \end{eqnarray} where $\omega_s= \Gamma_0\frac{\alpha_s}{2\pi}\ln\left(\frac{\nu_H}{\nu_L}\right)$. This result works for $q_T >0 $. Clearly, then, we still have a singularity in the cross section at $\omega_s = 1$. As was noted in earlier papers \cite{Chiu:2012ir}, this is a divergent series. The reason for this is that logarithms in $b$ space of the form $\ln^n( \mu b_0)$ do not translate directly to logarithms $\ln^n(\mu/q_T)$ in momentum space. The simplest example of this is the inverse Fourier transform of $\ln(\mu b_0)$. This gives us a term proportional to the plus distribution function $ \mathcal{L}_0 =\tfrac{1}{\mu^2}\left[\mu ^2/q_T^2\right]_+$ as defined in \cite{Chiu:2012ir}. For nonzero values of $q_T$, this is simply $1/q_T^2$. The same thing happens for higher powers of logarithms. For example, $\ln^2(\mu b_0)$ also gives a term proportional to $\mathcal{L}_0$, along with terms which go like $\ln(\mu /q_T)$. So an all order summation of the logarithms in $b$ space eventually adds up all of these pieces (whose coefficient is controlled by $\omega_s$), which leads to a divergence for sufficiently large $\omega_s$. So this step softens the singularity somewhat, moving it away from $q_T =0$, so that we at least have a nonzero cross section for nonzero $q_T$. However, the result is still singular. The singularity results from the single logarithm $\ln(\mu_L b_0)$ in the exponent, which diverges at very small values of $b$. We would have expected the integrand to receive its dominant contribution from the region $b \sim 1/q_T$. Instead, as was noted in \cite{Chiu:2012ir}, it is the UV region $b\sim 1/Q$ which appears to dominate the integral.
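The approach to this singularity is easy to exhibit numerically. A minimal Python sketch of the $q_T$-dependent factor in the last line of \eq{softNLL} (with illustrative values of $\mu_L$ and $q_T$; the PDFs and $U^{\rm NLL}$ are omitted) shows the blow-up as $\omega_s \to 1$:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

gammaE = np.euler_gamma

def nll_factor(qT, muL, omega_s):
    # q_T-dependent factor of the NLL momentum-space result
    return (np.exp(-2 * omega_s * gammaE) * gamma(1 - omega_s)
            / gamma(omega_s) / muL**2
            * (muL**2 / qT**2)**(1 - omega_s))

qT = muL = 5.0  # GeV, illustrative
for ws in (0.2, 0.5, 0.9, 0.99):
    print(f"omega_s = {ws:4.2f} -> factor = {nll_factor(qT, muL, ws):.3e}")
\end{verbatim}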
Since our resummation kernel in $b$ space is unstable, we cannot yet assign a power counting to the residual logs. This is because the fixed order logs (of the form $\alpha_s^n \ln^m(\mu_L b_0)$) are weighted by this exponent in the inverse Fourier transform. So before we can talk of power counting for these logs, we must stabilize this kernel such that the region $b \sim 1/q_T$ dominates. Clearly, to cure this singularity (which is a UV singularity) we need an even-powered logarithm $\ln^{2m}(\mu b_0)$ with a negative coefficient in the soft resummation exponent. So we move ahead and attempt to see whether we can resum the next biggest term (which would technically be subleading at this order), $\sim\alpha_s \ln^2(\mu_L b_0)$. There are two places we can find such a double logarithm: one from the second term in \eq{gamS}, which is part of the 2-loop rapidity anomalous dimension, and another from the fixed-order soft function at 1-loop, see \eq{Sn}, or \eq{ffS}, which includes the beam function contribution. Since the term in \eq{gamS} is already part of the RRG exponent, including it just means tacking on a subleading term in the exponent, which we are free to do at NLL order. As for the fixed-order term in \eq{Sn}, while standard resummation schemes tell us to truncate the logs in the soft function $\widetilde S(b,\mu_L,\nu_L)$ at fixed order, these logs in the soft function, though not large, do actually exponentiate, see \eq{Sevolved2}. The higher order logs are subleading at NLL, but again, we are free to include them, either in fixed-order or exponentiated form. Specifically, together with the standard soft exponent at NLL, the terms we want to put in the total soft exponent are: \begin{eqnarray} &&\ln V^\text{NLL}(\nu_H,\nu_L; \mu_L) + \ln^2(\mu_L b_0)\left[ -\Gamma_0\frac{\alpha_s}{2\pi}-2\Gamma_0\frac{\alpha_s^2}{2\pi} \frac{\beta_0}{4\pi} \ln\left(\frac{\nu_H}{\nu_L}\right)\right] \nn \\ &&=-2\Gamma_0\frac{\alpha_s}{2\pi} \left[\ln\left(\frac{\nu_H}{\nu_L}\right)\ln(\mu_L b_0)+\ln^2(\mu_L b_0)\left(\frac{1}{2}+ \alpha_s \frac{\beta_0}{4\pi} \ln\left(\frac{\nu_H}{\nu_L}\right)\right) \right]\,. \label{Ustable} \end{eqnarray} The typical default choice for the low rapidity scale is $\nu_L = \mu_L$. We imagine making a choice near this scale, but we will rescale it so that the extra $\ln^2(\mu_L b_0)$ terms in \eq{Ustable} get automatically included in the standard soft NLL exponent. We now attempt to reproduce the above terms in the pure NLL soft exponent, with a modified scale choice $\nu_L^*$: \begin{equation} \label{VSNLLstar} V_S^{\text{NLL}}(\nu_H,\nu_L^*;\mu_L) =\exp\biggl\{ - \frac{\alpha_s(\mu_L)}{4\pi}4\Gamma_0 \ln(\mu_L b_0) \ln\frac{\nu_H}{\nu_L^*}\biggr\}\,.
\end{equation} This can be done with the scale choice: \begin{equation}\label{nuLstar} \nu_L^* = \nu_L (\mu_L b_0)^{-1+p} \end{equation} The value of $p$ that allows us to obtain the double log terms in \eq{Ustable} is determined by comparing them with the exponent in \eq{VSNLLstar}: \begin{eqnarray} \label{VSNLL} \ln V_S^\text{NLL}(\nu_H,\nu_L^*;\mu_L) &= & \gamma_\nu^{S (0)} \ln\left(\frac{\nu_H}{\nu_L^*}\right) \nn\\ &=& -4\Gamma_0 \frac{\alpha_s(\mu_L)}{4\pi}\ln(\mu_L b_0) \left[ \ln\left(\frac{\nu_H}{\nu_L}\right)+(1-p)\ln(\mu_L b_0) \right] \end{eqnarray} By comparing the above equation to \eq{Ustable}, we find \begin{eqnarray} p&=& \frac{1}{2}\left[1- \frac{\alpha_s(\mu_L) \beta_0}{2\pi}\ln\left(\frac{\nu_H}{\nu_L}\right)\right] \label{eq:n} \end{eqnarray} This ensures that we have now resummed all the terms of the form $\alpha_s \ln^2(\mu_L b_0)$. (We assume one did not choose the default $\nu_L$ scale on the right-hand side of \eq{nuLstar} with nontrivial $b$ dependence.) Since $\Gamma_0>0$, the double log now provides the necessary stability to the exponential kernel in $b$ space (at both large and small values of $b$). With this term in place, we can now talk of a systematic power counting for the fixed order logs, which, hitherto, was not meaningful. So we now adopt the usual power counting that $\ln(\mu_L b_0) \sim 1$, i.e., this log is small. We still need to confirm numerically that the fixed order logs that remain ($\mathcal{O}(\alpha_s\ln(\mu_L b_0))$, $\mathcal{O}(\alpha_s^2 \ln^3(\mu_L b_0))$, $\mathcal{O}(\alpha_s^2 \ln^2(\mu_L b_0))$, etc.), when integrated against our exponent in $b$ space, are actually small, so that our series is well behaved perturbatively. With this power counting in place, our result for the resummation at NLL then looks like \begin{eqnarray} \label{nustarcs} \frac{d \sigma}{d q_T^2 dy} =\frac{\sigma_0}{2}\, U^{\text{NLL}}(\mu_H, \mu_T, \mu_L) \int db \, b J_0(b q_T)\, V^{\text{NLL}}(\nu_H,\nu_L^*;\mu_L)f(x_1,\mu_L)f(x_2,\mu_L) \end{eqnarray} At NLL, all fixed order logs are subleading and hence not included. We will find below that the quadratic $\ln^2\mu_L b_0$ term in the exponent of the $b$ integrand in \eq{nustarcs}, introduced by the scale choice \eq{nuLstar}, not only makes the integral converge for both small and large $b$, it actually makes it integrable analytically (after a very good numerical approximation for the Bessel function). We will describe this in detail in \sec{analytic}. First, we explore how to generalize the above-described procedure beyond NLL. \subsubsection{NNLL and beyond} \label{ssec:NNLLbeyond} At NNLL and higher orders, we have some freedom in choosing how to update the accuracy of the rapidity evolution kernel in \eq{UVdef}. In this subsection, we will use this freedom to look for a way to preserve both the stable power counting of fixed-order logs of $\mu_L b_0$ after integrating over $b$ and our ability, in \sec{analytic}, to evaluate that $b$ integral (semi-)analytically. A simple, standard choice would be to include the next order terms in the rapidity anomalous dimension \eq{gammanuexp}, \emph{e.g.}~ the $\mathcal{O}(\alpha_s^2)$ terms at NNLL, while keeping the scale choice $\nu_L^*$ in \eq{nuLstar} for the soft rapidity scale. There are two somewhat undesirable consequences of this choice, however.
First, as we noted at the end of \ssec{NLL}, an exponent in the $b$ integrand that is quadratic in $\ln\mu_L b_0$ will make the integral analytically computable, to a very good approximation to be described in \sec{analytic}---but not higher powers of $\ln\mu_L b_0$, which will begin to enter starting at N$^3$LL order in \eq{gammanuexp}. Second, starting at NNLL order, maintaining the scale choice $\nu_L^*$ would put some terms into the exponent twice, namely, the $\mathbb{Z}_S \Gamma_0\beta_0$ term at $\mathcal{O}(\alpha_s^2)$: once from the shift from $\nu_L$ to $\nu_L^*$ \eq{VSNLL} in the $\mathcal{O}(\alpha_s)$ term of the exponent, and once from updating the anomalous dimension appearing in \eq{UVdef} with the two-loop value in \eq{gammanuexp}. Of course, this is compensated by the shift from $\nu_L$ to $\nu_L^*$ in the fixed-order soft function. From \eq{Sn}: \begin{align} \label{Snustar} \widetilde S^{*(1)}(\mu_L,\nu_L^*) &= \mathbb{Z}_S \frac{\Gamma_0}{2}\Bigl(\ln^2 \mu_L b_0 + 2\ln \mu_L b_0 \ln\frac{\nu_L^*}{\mu_L}\Bigr) + c_{\widetilde S}^1 \\ &= \mathbb{Z}_S \frac{\Gamma_0}{2}\Bigl[\ln^2 \mu_L b_0 + 2\ln \mu_L b_0 \Bigl( \ln\!\frac{\nu_L}{\mu_L} - \frac{1}{2} \ln\mu_L b_0 - \frac{\alpha_s(\mu_L)\beta_0}{4\pi} \ln\frac{\nu_H}{\nu_L}\ln \mu_L b_0\Bigr)\!\Bigr] + c_{\widetilde S}^1 \nn \\ &= \mathbb{Z}_S \frac{\Gamma_0}{2} \Bigl( 2\ln \mu_L b_0\ln\frac{\nu_L}{\mu_L} - \frac{\alpha_s(\mu_L)\beta_0}{2\pi}\ln\! \frac{\nu_H}{\nu_L} \ln^2\mu_L b_0\Bigr) + c_{\widetilde S}^1\,,\nn \end{align} where we see that the one-loop double log of $\mu_L b_0$ has canceled---the scale choice $\nu_L^*$ has promoted this term to the exponent in \eqs{UVdef}{gammanuexp}. The added $\mathcal{O}(\alpha_s^2)$ term subtracts off (at fixed order) the corresponding $\Gamma_0\beta_0$ term that was double-added to the exponent at NNLL. While this is acceptable as far as power counting goes, it seems awkward to have this term double-counted in the exponent by itself. Similar observations apply to additional terms at higher orders. There are a number of ways to avoid these problems while maintaining the desirable quadratic $\ln^2\mu_Lb_0$ terms in the exponent of the $b$ integrand of the cross section. One possibility would be just to revert to the ordinary scale choice $\nu_L$ from $\nu_L^*$ beyond NLL, but for meaningful comparisons between orders of accuracy, we should maintain the same scale choice as we increase the accuracy. Moreover, this solution would not prevent higher-than-quadratic powers of $\ln\mu_L b_0$ from entering the exponent, which would spoil our analytic integration below. Since the choice $\nu_L^*$ is needed at NLL to stabilize the $b$ integral in \eq{nustarcs} and restore a meaningful resummed power counting, we will go ahead and keep it beyond NLL as well. We will then need a prescription for how to update the rapidity evolution kernel in \eq{VSNLL} that respects the stabilization of the $b$ integral at NLL, without introducing unwanted double counting of terms or higher powers $\ln^{n>2}\mu_L b_0$ as discussed in the previous paragraph.
Another possibility, then, is to just keep the NLL part of the exponent in the rapidity evolution kernel \eq{VSNLLstar}, and expand out the NNLL and higher-order parts in $\alpha_s$, \emph{i.e.}~ \begin{eqnarray} V_S(\nu_H,\nu_L^*;\mu_L)&=&e^{\gamma_\nu^{S(0)} \ln \frac{\nu_H}{\nu_L^*}}e^{(\gamma_\nu^{S(1)}+\gamma_\nu^{S(2)}+\cdots) \ln \frac{\nu_H}{\nu_L^*}} \nn \\ &=&e^{\gamma_\nu^{S(0)} \ln \frac{\nu_H}{\nu_L^*}} \left[1+ \gamma_\nu^{S(1)} \ln \frac{\nu_H}{\nu_L^*} +\cdots \right] \label{VStruncated} \end{eqnarray} At NNLL we would keep the second term, of order $\alpha_s$, in the bracket; at N$^3$LL two more terms of order $\alpha_s^2$ would be included, etc. This is not insensible, as the rapidity kernel in \eq{UVdef} contains only a single large log ($\ln \nu_H/\nu_L$) multiplying the whole $\nu$-anomalous dimension. Thus, while the NLL terms are all of order $\alpha_s \times 1/\alpha_s\sim 1$ in log counting and should be exponentiated, the NNLL $\mathcal{O}(\alpha_s^2)$ part of the $\nu$-anomalous dimension and higher-order terms are all truly suppressed by additional powers of $\alpha_s$. This is in contrast to the $\mu$ evolution kernels, \emph{e.g.}~ \eqs{KGammaexp}{etaexp}, where there are infinite towers of terms of the same order in log counting, because terms at higher powers of $\alpha_s$ are multiplied by large logs of $\mu/\mu_0$. This is not the case in \eq{gammanuexp}, since higher order terms in $\alpha_s$ generated by $\mu$ running do not come with large logs---we are doing the rapidity evolution at a scale $\mu_L\sim 1/b_0$, generating only small logs of $\mu_L b_0$. All the effects of $\mu$ running between widely separated scales and their associated large logs are contained in $U$ in \eq{UVdef}. However, we do not need to be so draconian in truncating the terms we could resum using the RRG kernel in \eq{UVdef}. The terms in \eq{VStruncated} that we either exponentiate or truncate at fixed order are basically of the same order as the terms in the fixed-order expansion of the soft function $\widetilde S(b,\mu_L,\nu_L^*)$ given by \eqs{Sfixedorder}{Sn} that are kept at each order of logarithmic accuracy, so there is a freedom in choosing which terms in the rapidity anomalous dimension \eq{gammanuexp} and the soft function \eq{Sfixedorder} we will exponentiate or leave in a fixed-order expansion, so as to obtain desirable behavior of the $b$-space integrand in \eq{xsec}. Let us then use this freedom to divide the terms in the rapidity anomalous dimension \eq{gammanuexp} into two parts: those that we will exponentiate and those that we will expand out in fixed order. Namely, we will exponentiate all the pure $\Gamma_n$ and $\gamma_{RS}^n$ anomalous dimension terms, which are at most single logarithmic in $\mu_L b_0$ (the shift from $\nu_L$ to $\nu_L^*$ introduces the double logs $\frac{1}{2}\mathbb{Z}_S\Gamma_n \ln^2\mu_L b_0$ in the fixed-order soft function $\widetilde S$ associated with $\Gamma_n$, see \eq{Sn}), as well as (part of) the $\sim \mathbb{Z}_S\Gamma_n \beta_0\ln^2\mu_L b_0$ term in the rapidity anomalous dimension \eq{gammanuexp}, which stabilizes the $b$-space integrand. The remaining terms will be expanded out in $\alpha_s$; these are all associated with the beta function terms coming from the $\alpha_s$ running of the ``pure'' $\Gamma_n$ and $\gamma_{RS}^n$ terms.
Concretely, we split the rapidity evolution kernel in \eq{UVdef} into: \begin{equation} \label{Vsplit} V(\nu_L,\nu_H;\mu_L) = V_\Gamma(\nu_L,\nu_H;\mu_L) V_\beta(\nu_L,\nu_H;\mu_L)\,, \end{equation} where \begin{equation} \label{VGamma} V_\Gamma (\nu_L,\nu_H;\mu_L) = \exp\biggl\{ \ln\frac{\nu_H}{\nu_L} \sum_{n=0}^\infty \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^{n+1}(\mathbb{Z}_S \Gamma_n \ln\mu_L b_0 + \gamma_{RS}^n)\biggr\}\,, \end{equation} which remains exponentiated and contains all the ``pure'' anomalous dimension terms, and \begin{align} \label{Vbeta} V_\beta (\nu_L,\nu_H;\mu_L) &= 1 + \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^2 \Bigl( \mathbb{Z}_S \Gamma_0\beta_0 \ln^2\mu_L b_0 + 2\gamma_{RS}^0\beta_0 \ln \mu_L b_0\Bigr) \ln\frac{\nu_H}{\nu_L} + \cdots\,, \end{align} which is the fixed-order expansion of the part of the rapidity evolution kernel \eq{UVdef} coming from all the remaining ($\beta_n$) terms in \eq{gammanuexp} that are not included in \eq{VGamma}. This division \eq{Vsplit} ensures that the exponentiated part of the rapidity evolution kernel contains at most double logs of $\mu_L b_0$ after the shift from $\nu_L$ to $\nu_L^*$, and also avoids the double counting of the $\beta_0$-induced terms in \eq{gammanuexp} in the exponent described above. We can give a more formal definition of the two factors $V_\Gamma$ and $V_\beta$ in \eqs{VGamma}{Vbeta}. Since they are built out of pieces of the anomalous dimension $\gamma_\nu^S(\mu_L)$ in \eq{gammanuexp}, we go back to its all-orders expression given by \eq{gammanurun}: \begin{align} \gamma_\nu^{S}(\mu_L) = \mathbb{Z}_S \eta_\Gamma(1/b_0,\mu_L) + \gamma_{RS}[\alpha_s(1/b_0)]\,, \end{align} and divide up each term into pieces containing just the ``pure'' anomalous dimension coefficients and those generated by beta function terms. For $\gamma_{RS}$ this is easy: \begin{align} \label{gammaRSsplit} \gamma_{RS}[\alpha_s(1/b_0)] &= \sum_{n=0}^\infty \Bigl(\frac{\alpha_s(1/b_0)}{4\pi}\Bigr)^{n+1} \gamma_{RS}^n \\ &= \sum_{n=0}^\infty \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^{n+1} \gamma_{RS}^n + \sum_{n=0}^\infty \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^{n+1}\biggl(\frac{1}{r^{n+1}} - 1\biggr) \gamma_{RS}^n \nn \\ &\equiv \gamma_{RS}^{\text{conf.}}(\mu_L) + \Delta\gamma_{RS}(\mu_L)\,, \nn \end{align} where \begin{equation} r\equiv \frac{\alpha_s(\mu_L)}{\alpha_s(1/b_0)}\,. \end{equation} The first piece in the last line of \eq{gammaRSsplit} contains those terms in the anomalous dimension that would survive in the conformal limit of QCD, where $\alpha_s$ does not run. The second set of terms, in $\Delta\gamma_{RS}$, contains all the beta-function-induced terms. For example, the ratio $1/r$ has the fixed order expansion up to $\mathcal{O}(\alpha_s^2)$: \begin{equation} \frac{1}{r} = 1 + \frac{\alpha_s(\mu_L)}{2\pi}\beta_0 \ln \mu_L b_0 + \frac{\alpha_s(\mu_L)^2}{8\pi^2} \Bigl( \beta_1 \ln\mu_L b_0 + 2\beta_0^2 \ln^2\mu_L b_0\Bigr) + \cdots\,, \end{equation} and the ``1'' term is subtracted off in \eq{gammaRSsplit}, leaving just the $\beta_i$ terms. The same thing happens for $1/r^{n+1}$. We can similarly split up $\eta_\Gamma$ into two pieces. To all orders in $\alpha_s$, $\eta_\Gamma(1/b_0,\mu_L)$ is given by \eq{Keta}, and is expanded out in fixed orders in \eq{etaexp2}. We want to split up $\eta_\Gamma$ into the ``pure'' $\Gamma_n \ln\mu_L b_0$ pieces along the diagonal of \eq{etaexp2}, and the remaining $\beta_i$-induced terms.
We do this by very straightforwardly defining: \begin{equation} \label{etasplit} \eta_\Gamma(1/b_0,\mu_L) = \eta_\Gamma^{\text{conf.}}(1/b_0,\mu_L) + \Delta\eta_\Gamma(1/b_0,\mu_L)\,, \end{equation} where \begin{equation} \eta_\Gamma^{\text{conf.}}(1/b_0,\mu_L) \equiv \int_{1/b_0}^{\mu_L} \frac{d\mu}{\mu} \sum_{n=0}^\infty \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^{n+1}\Gamma_n = \sum_{n=0}^\infty \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^{n+1}\Gamma_n \ln\mu_L b_0\, \end{equation} contains the ``pure'' anomalous dimension terms in $\eta_\Gamma$ (the ones which would survive in the conformal limit), and $\Delta\eta_\Gamma$ is given simply by \begin{equation} \Delta\eta_\Gamma(1/b_0,\mu_L) = \eta_\Gamma(1/b_0,\mu_L) - \eta_\Gamma^{\text{conf.}}(1/b_0,\mu_L)\,. \end{equation} We will keep $\eta_\Gamma^{\text{conf.}}$ in the exponentiated part of \eq{Vsplit}, while $\Delta\eta_\Gamma$ will go into the part expanded in fixed orders in $\alpha_s$. The explicit expansion for $\Delta\eta_\Gamma$ is of course given by \eq{etaexp2} with the diagonal terms deleted, or can be worked out to all orders in $\alpha_s$ (up to NNLL accuracy) from the expression in \eq{Keta}. The corresponding expression for $\Delta\eta_\Gamma$ up to terms of NNLL accuracy is then: \begin{align} \label{Deltaetaexp} \Delta\eta_\Gamma(1/b_0, \mu_L) &= - \frac{\Gamma_0}{2\beta_0}\, \biggl[ \ln r + \frac{\alpha_s(\mu_L)}{2\pi} \beta_0 \ln \mu_L b_0 \\ &\qquad\qquad + \frac{\alpha_s(\mu_L)}{4\pi}\, \biggl(\frac{\Gamma_1 }{\Gamma_0 } \!-\! \frac{\beta_1}{\beta_0}\biggr)\Bigl(1- \frac{1}{r}\Bigr) + \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^2 \frac{\Gamma_1}{\Gamma_0}2\beta_0\ln \mu_L b_0 \nn \\ &\qquad \qquad + \frac{\alpha_s^2(\mu_L)}{16\pi^2} \biggl( \frac{\Gamma_2 }{\Gamma_0 } \!-\! \frac{\beta_1\Gamma_1 }{\beta_0 \Gamma_0 } + \frac{\beta_1^2}{\beta_0^2} -\frac{\beta_2}{\beta_0} \biggr) \frac{1 - \frac{1}{r^2}}{2} + \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^3 \frac{\Gamma_2}{\Gamma_0}2\beta_0\ln \mu_L b_0 \biggr] \,, \nn \end{align} where we notice that the subtraction terms at the end of each line just modify the pure anomalous dimension $\Gamma_i/\Gamma_0$ pieces, as designed. Note the following properties of the expanded functions of $r = \alpha_s(\mu_L)/\alpha_s(1/b_0)$ that appear in each line of \eq{Deltaetaexp}: \begin{subequations} \begin{align} \ln r + \frac{\alpha_s(\mu_L)}{2\pi} \beta_0 \ln \mu_L b_0 = - \frac{\alpha_s(\mu_L)^2}{8\pi^2} (\beta_1 \ln \mu_L b_0 + \beta_0^2 \ln^2\mu_L b_0) + \cdots \\ 1-\frac{1}{r} + \frac{\alpha_s(\mu_L)}{2\pi} \beta_0 \ln \mu_L b_0 = - \frac{\alpha_s(\mu_L)^2}{8\pi^2} (\beta_1 \ln \mu_L b_0 + 2 \beta_0^2 \ln^2\mu_L b_0) + \cdots \\ \frac{1}{2}\Bigl(1-\frac{1}{r^2}\Bigr) + \frac{\alpha_s(\mu_L)}{2\pi} \beta_0 \ln \mu_L b_0 = - \frac{\alpha_s(\mu_L)^2}{8\pi^2} (\beta_1 \ln \mu_L b_0 + 3 \beta_0^2 \ln^2\mu_L b_0) + \cdots \end{align} \end{subequations} etc. So the remaining terms in the expansion of $\Delta\eta_\Gamma$ in \eq{Deltaetaexp} contain only the $\beta_i$-induced terms that we want to put into $V_\beta$ in \eq{Vsplit}, as designed.
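As a quick numerical cross-check of this split, the following Python sketch (assuming simple one-loop running with $n_f = 5$, purely for illustration) verifies that the first of these combinations, $\ln r + \frac{\alpha_s(\mu_L)}{2\pi}\beta_0\ln\mu_L b_0$, indeed starts at $\mathcal{O}(\alpha_s^2)$:
\begin{verbatim}
import math

beta0 = 11 - 2 * 5 / 3.0   # one-loop beta coefficient, nf = 5 (assumed)

def residual(alphas_muL, Lb):
    """Lb = ln(mu_L b_0); alpha_s(1/b_0) from exact one-loop running."""
    alphas_b0 = alphas_muL / (1 - alphas_muL * beta0 / (2 * math.pi) * Lb)
    r = alphas_muL / alphas_b0
    return math.log(r) + alphas_muL * beta0 / (2 * math.pi) * Lb

for a in (0.1, 0.05, 0.025):
    print(f"alpha_s = {a:5.3f} -> residual = {residual(a, 0.5):+.3e}")
# halving alpha_s reduces the residual by ~4: it is O(alpha_s^2)
\end{verbatim}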
With the splitting up of terms in \eqs{gammaRSsplit}{etasplit}, we can formally define the two pieces $V_\Gamma,V_\beta$ into which we have split the rapidity evolution kernel in \eq{Vsplit}: \begin{subequations} \label{VGammaVbeta} \begin{align} V_\Gamma(\nu_L,\nu_H;\mu_L) &= \exp\biggl\{ \ln\frac{\nu_H}{\nu_L} \bigl[ \mathbb{Z}_S \eta_\Gamma^{\text{conf.}}(1/b_0,\mu_L) + \gamma_{RS}^{\text{conf.}}(\mu_L)\bigr]\biggr\} \\ V_\beta(\nu_L,\nu_H;\mu_L) &= \exp\biggl\{ \ln\frac{\nu_H}{\nu_L} \bigl[ \mathbb{Z}_S \Delta\eta_\Gamma(1/b_0,\mu_L) + \Delta\gamma_{RS}(\mu_L)\bigr]\biggr\}\biggr\rvert_{\text{F.O.}}\,, \end{align} \end{subequations} indicating that $V_\beta$ is to be truncated to fixed order in $\alpha_s$ according to \tab{NkLL}. This defines the ``$V_\beta$-expansion'' we first mentioned in \ssec{intro3}. In this paper we shall not need it beyond the $\mathcal{O}(\alpha_s^2)$ terms given in \eq{Vbeta}. Our master expression for the resummed cross section, using the scale choices and prescriptions we have explained above, is then: \begin{align} \label{finalresummedcs} \frac{ d \sigma} {dq_T^2 dy} &= \sigma_0 \pi (2\pi)^2 C_t^2(M_t^2, \mu_T) H (Q^2,\mu_H ) U(\mu_L,\mu_H, \mu_T) \int db\, b J_0( b q_T) V_\Gamma(\nu_L^*,\nu_H;\mu_L) \nn\\ &\quad\times V_\beta(\nu_L^*,\nu_H;\mu_L) \widetilde S( b ;\mu_L,\nu_L^*) \widetilde f_1^{\perp}( b, x_1,p^-; \mu_L ,\nu_H) \widetilde f_2^{\perp}( b, x_2, p^+; \mu_L ,\nu_H) \,, \end{align} where $U$ in \eq{UVdef} and $V_\Gamma$ given by \eq{VGamma} or \eq{VGammaVbeta} are exponentiated: \begin{align} U(\mu_L,\mu_H, \mu_T) &= \exp\biggl\{4 K_\Gamma(\mu_L,\mu_H) - 4 \eta_\Gamma(\mu_L,\mu_H) \ln\frac{Q}{\mu_L} - K_{\gamma_H}(\mu_L,\mu_H)- K_{\gamma_{C_t^2}}(\mu_L,\mu_T)\biggr\}\nn \\ \label{Vstar} V_\Gamma (\nu_L^*,\nu_H;\mu_L) &= \exp\biggl\{ \ln\frac{\nu_H}{\nu_L^*} \sum_{n=0}^\infty \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^{n+1}(\mathbb{Z}_S \Gamma_n \ln\mu_L b_0 + \gamma_{RS}^n)\biggr\}\,, \end{align} and the other objects are truncated at fixed order in $\alpha_s$, with $H$ given by \eqs{Hexp}{Cn}, $\widetilde S$ by \eqs{Sfixedorder}{Sn}, and $\widetilde f^\perp$ by \eqss{beammatching}{Iijexpansion}{Iijcoefficients}. The $\beta$-function dependent part of the RRG kernel, $V_\beta$, is also truncated at fixed order in our scheme; it is defined in \eq{VGammaVbeta} and given by \eq{Vbeta} up to $\mathcal{O}(\alpha_s^2)$, the highest order we shall need in this paper. In order for \eq{finalresummedcs} to successfully resum large logs of scale ratios, we recall that the scales $\mu_H,\mu_L$ and $\nu_H,\nu_L$ should be chosen near the natural scales of the respective hard, soft, and beam functions. $\mu_H$ and $\nu_H$ should be chosen $\sim Q$. We expect $\nu_L$ to be chosen $\sim \mu_L$, and then shifted to $\nu_L^*$ according to \eq{nuLstar} to introduce the quadratic damping factor in the $b$ exponent. As for $\mu_L$ itself, it should be chosen $\sim q_T$ in momentum space, although we will explore in \ssec{muLscale} a modified choice for this scale that better preserves stable power counting. For now, it remains a free scale. \subsubsection{Truncation and resummed accuracy} Here we summarize the rules for how to truncate the various objects in the full resummed cross section \eq{finalresummedcs}. These rules are mostly standard and well known, but with our introduction of the division of the RRG kernel into exponentiated and fixed order pieces $V_\Gamma \times V_\beta$, it seems prudent to restate them here.
These are given in \tab{NkLL}. \begin{table}[t] \begin{center} $ \begin{array}{ | c | c | c | c | c |} \hline \text{accuracy} & \Gamma_n,\beta_n & \gamma_{H,S,f}^n ,\, \gamma_{RS}^n & V_\beta & H,\widetilde S,\widetilde f \\ \hline \text{LL} & \alpha_s & 1 & 1 & 1 \\ \hline \text{NLL} & \alpha_s^2 & \alpha_s & \alpha_s & 1 \\ \hline \text{NNLL} & \alpha_s^3 & \alpha_s^2 & \alpha_s^2 & \alpha_s \\ \hline \text{N$^3$LL} & \alpha_s^4 & \alpha_s^3 & \alpha_s^3 & \alpha_s^2 \\ \hline \end{array} $ \end{center} \vspace{-1em} \caption{Order of anomalous dimensions, beta function, and fixed-order functions (hard, soft, TMDPDF, and $V_\beta$ in \eq{VGammaVbeta}) required to achieve N$^k$LL\ and N$^k$LL$'$\ accuracy in the resummed cross section \eq{finalresummedcs}.} \label{tab:NkLL} \end{table} We should note here that our choice of terms to group into the exponentiated part $V_\Gamma$ and those expanded in fixed order in $V_\beta$ in \eq{finalresummedcs} is not unique. Terms in $V_\Gamma$ and $V_\beta$ in \eqs{VGamma}{Vbeta} can be shifted back and forth by a different choice of prescription, and by different choices of scales (such as $\nu_L^*$ in \eq{nuLstar}). Indeed, with our choice of the split between $V_\Gamma$ and $V_\beta$ and the scale $\nu_L^*$, the rapidity evolution exponent contains only a subset of the terms in the full rapidity anomalous dimension \eq{gammanuexp}---but the subset that allows an analytic evaluation of the $b$ integral, as we will show in \sec{analytic}. One may very well make a different set of choices that puts a different set of terms in the exponent, based on a different set of desired criteria. This freedom is allowed by the presence of only a single large log in the RRG kernel in \eq{UVdef} at the virtuality scale $\mu_L$. We present our particular choice as just one such example. We advocate that readers make use of the freedom to choose the scales $\nu_{L,H}$ and $\mu_{L,H}$ in \eq{finalresummedcs}, either before \emph{or after} $b$ integration, along with the organization of terms in the RRG kernel \eq{Vsplit} into exponentiated and fixed-order parts (beyond NLL), to achieve their desired properties and results for the resummed momentum-space cross section.\footnote{We explore one such scheme in \appx{Cparameter} which allows us to also include single logarithmic terms of the form $\ln(\mu_L b_0)$ at each order of resummation using a simple modification of the choice for $\nu_L^*$.} The way we have organized the resummed cross section \eq{finalresummedcs}, all logs of $\mu_L b_0$ are contained in the exponent of $V_\Gamma$ and in the fixed-order terms in $V_\beta,\widetilde S,\widetilde f_i^\perp$. Furthermore, the power of $\ln\mu_L b_0$ in the exponent \eq{Vstar} with the scale choice $\nu_L^*$ is at most quadratic to all orders in $\alpha_s$. Our choices to arrange this property are motivated, as we will see later, by the fact that it enables us, using some approximations, to obtain an analytical expression for the $b$ space integral. This is only possible as long as the quadratic nature of the exponent holds. At N$^3$LL (or NNLL$'$) and higher orders, the fixed-order coefficients $V_\beta,\widetilde S,\widetilde f_i^\perp$ contain higher powers of $\ln\mu_Lb_0$, but the contributions of these fixed-order logs can be dealt with analytically as well, as long as the exponent is no more than quadratic in $\ln\mu_L b_0$.
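For implementation purposes, the content of \tab{NkLL} is just a small lookup table. A sketch of how it might be encoded in code (the key and field names are our own, purely illustrative):
\begin{verbatim}
# Loop order (power of alpha_s) at which each ingredient is kept,
# keyed by resummed accuracy, following the table above.
NKLL_ORDERS = {
    "LL":   {"cusp_beta": 1, "noncusp": 0, "Vbeta": 0, "fixed": 0},
    "NLL":  {"cusp_beta": 2, "noncusp": 1, "Vbeta": 1, "fixed": 0},
    "NNLL": {"cusp_beta": 3, "noncusp": 2, "Vbeta": 2, "fixed": 1},
    "N3LL": {"cusp_beta": 4, "noncusp": 3, "Vbeta": 3, "fixed": 2},
}
\end{verbatim}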
\subsubsection{Stable power counting in momentum space and the scale $\mu_L$} \label{ssec:muLscale} We have not yet specified exactly what we will choose for the scale $\mu_L$. All of the above arguments are contingent upon the fact that our power counting $\ln(\mu_L b_0) \sim 1$ holds. What this means in momentum space is that, after performing the $b$ integral in \eq{finalresummedcs}, fixed-order logs in the integrand of the form $\ln^n \mu_L b_0$ do not generate parametrically large terms after integration. This, it turns out, depends on what we pick for the scale $\mu_L$. Let us see what $\mu_L$ we should choose for this power counting to remain true. It turns out that the ``obvious'' choice in momentum space, $\mu_L =q_T$, is not always a very good choice for minimizing the contributions of the fixed order logs. This is not too surprising, since the kernel against which they are integrated in transforming back to momentum space is no longer a simple inverse Fourier transform with a single scale $q_T$. Instead, it involves an exponent which is also a function of the high scale. It is then natural that the scale at which the logarithms are minimized (if not set exactly to zero) is shifted towards the high scale. This effect is particularly pronounced at low values of $q_T$. \begin{figure} \centerline{\scalebox{.55}{\includegraphics{fixed.pdf}}} \vskip-0.5cm \caption[1]{Percentage contribution of fixed order logs to the $b$ integral in \eq{finalresummedcs} as a function of the scale choice $\mu$, assuming a coefficient of $1$ in front of each plotted log, as an estimate of the scale $\mu$ where the fixed-order logs in $b$ space translate to small logs in momentum space.} \label{fig:fixed} \end{figure} \begin{figure} \centerline{\scalebox{.55}{\includegraphics{tscale2.pdf}}} \vskip-0.5cm \caption[1]{Optimal $\mu$ scale choice in momentum space, corresponding to the solution of the condition \eq{scale} for the peak of the resummation exponent in the $b$ integral, which also turns out to correspond roughly to the location where all the logs in \fig{fixed} make the numerically smallest contributions.} \label{fig:scaleL} \end{figure} For the rest of the fixed order logarithmic pieces, we then check what value of the scale will keep their contribution to the cross section small ($\lesssim 10 \% $). Here, the nature of the resummed kernel can help us. The soft exponent in $b$-space provides damping at both small and large values of $b$, which, combined with the Bessel function, gives an integrand which has the form of damped oscillations. It is then reasonable to assume that most of the contribution to the integral comes from around the region of the first peak. A ballpark choice for the scale $\mu$ then is $1/b^*$, where $b^*$ is the scale at which the resummation kernel has its first peak. Since the hard kernel is independent of $b$, we only need to consider the soft resummation. A straightforward analysis of the $b$-space integrand then gives the following condition for the peak \begin{eqnarray} q_T b^* \, J_1(q_T b^*) &=& J_0( q_T b^*)\left \{ 1- 2\Gamma_0 \frac{\alpha_s}{2\pi} \Bigg[\ln \frac{\nu_H}{\mu_L} +2(1-p)\ln(\mu_L b_0^*) \Bigg]\right \} \end{eqnarray} $J_{0,1}(x)$ are the zeroth and first order Bessel functions of the first kind, respectively. We now set $b^*=1/\mu_L$, and we can further simplify the expression above by keeping the dominant terms.
\begin{eqnarray} \frac{q_T}{\mu_L}J_1\left(\frac{q_T}{\mu_L}\right) = J_0\left(\frac{q_T}{\mu_L}\right)\left \{ 1-2\Gamma_0 \frac{\alpha_s}{2\pi} \ln \frac{\nu_H}{\mu_L} \right \} \label{scale} \end{eqnarray} This can be solved numerically to obtain the scale $\mu_L$; a minimal sketch of this root-finding is given below. We can also confirm this choice by checking the contribution of the leading term $\alpha_s \ln(\mu_L b_0)$ at $O(\alpha_s)$ and cross-checking it against the next biggest pieces $ \alpha_s^2 \ln^2(\mu b_0)$, $\alpha_s^2 \ln^3(\mu b_0)$ at $O(\alpha_s^2)$. While we are currently keeping only one-loop fixed order pieces in our NNLL cross section, we can always include the two-loop pieces $\alpha_s^2 \ln^n(\mu b_0)$ which are known to us at NNLL (this would be part of the NNLL$'$ cross section). \fig{fixed} looks at the percentage contribution of the fixed order logs when integrated against the NNLL exponent in $b$ space. This particular plot is for the Higgs $q_T$ distribution at $q_T = 10 \text{ GeV}$. It is pretty clear that $\mu =q_T$ is a poor choice for $\mu$, and that a good choice would be somewhere around $\mu \sim 20 \text{ GeV}$, for which the contribution from the fixed order logs is $\sim 2 \%$. In comparison, the value predicted by \eq{scale} is $22 \text{ GeV}$. \fig{scaleL} gives the scale choice for Higgs and DY (choosing a common value of $Q=125 \text{ GeV}$ and $\sqrt{s}= 13\text{ TeV}$). For low values of $q_T$, the scale is shifted away from $q_T$ toward $Q$, as expected. The shift is far more pronounced for Higgs than for DY, since the cusp anomalous dimension for Higgs is much larger, so that the soft exponent has far more impact on the shape of the $b$-space integrand. At large values of $q_T$, the scale is more or less $q_T$. The key point to notice here is that, apart from the single log $\alpha_s \ln(\mu_L b_0)$, which we will include in the fixed order cross section at NNLL, the contribution from the higher order logs shows a flat behavior at values of $\mu \geq 15\text{ GeV}$ for $q_T= 10\text{ GeV}$ in \fig{fixed}. So the result is insensitive to the choice of $\mu_L$ as long as it is chosen greater than this threshold value. Using these scale choices, we now obtain the transverse spectrum, again at NNLL (\fig{NNLL}). The uncertainties are obtained by the scale variations described in \eq{scalevariations}, obtained reliably by varying both $\mu$ and $\nu$ scales, without cutoffs in the $b$ integral. \begin{figure} \centerline{\scalebox{.55}{\includegraphics{pspace.pdf}}} \vskip-0.2cm \caption[1]{Result of the resummed cross section \eq{finalresummedcs} in momentum space, with the rapidity and virtuality scales $\nu_L^*$ and $\mu_L$ chosen as in \eq{nuLstar} and \fig{scaleL}. No cutoff of the $b$ integral is required, and the $\mu_L$ scale can be varied in its full range from $\mu_L/2$ to $2\mu_L$ to obtain the uncertainty, without hitting a pole as in \fig{bspace}. These plots are obtained by performing the $b$ integral numerically. We stop plotting at low $q_T$ where true nonperturbative effects will enter.} \label{fig:NNLL} \end{figure} \section{Explicit formula for resummed transverse momentum spectrum} \label{sec:analytic} In this section we will provide an expression for the transverse momentum spectra of gauge bosons that is analytic in terms of its dependence on all the kinematic variables. It requires a numerical but efficient approximation for the Bessel function in the $b$ integral in \eq{xsec} or \eq{finalresummedcs}.
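Before setting up the analytic machinery, we record the promised minimal Python sketch of the root-finding for the scale condition \eq{scale} of the previous subsection. The inputs (Higgs-like $\Gamma_0 = 4C_A$, $\nu_H = Q = 125$ GeV, and a frozen value of $\alpha_s$) are illustrative assumptions only:
\begin{verbatim}
import numpy as np
from scipy.special import j0, j1
from scipy.optimize import brentq

CA, Q = 3.0, 125.0
Gamma0 = 4 * CA          # Higgs-like cusp coefficient
alphas = 0.14            # assumed, frozen alpha_s(mu_L)

def peak_condition(mu, qT):
    z = qT / mu
    rhs = j0(z) * (1 - 2 * Gamma0 * alphas / (2 * np.pi) * np.log(Q / mu))
    return z * j1(z) - rhs

qT = 10.0
muL = brentq(peak_condition, qT, Q, args=(qT,))   # bracket [q_T, Q]
print(f"q_T = {qT} GeV  ->  mu_L ~ {muL:.0f} GeV")
\end{verbatim}
With these inputs the root lands in the ballpark of the $\sim 22\text{ GeV}$ value quoted above; the output tracks the curves in \fig{scaleL} only to the extent that the assumed frozen $\alpha_s$ matches the actual running coupling.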
We can write the resummed cross section \eq{finalresummedcs} in the form \begin{equation} \label{resummedIb} \frac{d\sigma}{dq_T^2 dy} = \frac{1}{2}\sigma_0 C_t^2(M_t^2, \mu_T) H(Q^2,\mu_H) U(\mu_L,\mu_H, \mu_T) I_b(q_T,x_{1,2},Q;\mu_L,\nu_L^*,\nu_H)\,, \end{equation} where we have isolated the terms in the $b$ integral, \begin{equation} \label{Ibdef} I_b(q_T,x_{1,2},Q;\mu_L,\nu_L^*,\nu_H) \equiv \int_0^\infty db\,b J_0(bq_T) \widetilde F(b,x_{1,2},Q;\mu_L,\nu_L^*,\nu_H) V_\Gamma(\nu_L^*,\nu_H;\mu_L)\,, \end{equation} grouping together the terms in the integrand that are to be expanded in fixed order in $\alpha_s(\mu_L)$, \begin{align} \label{fixedorderfactor} \widetilde F(b,x_{1,2},Q;\mu_L,\nu_L^*,\nu_H) & \equiv (2\pi)^3\widetilde S( b ,\mu_L,\nu_L^*) \widetilde f_1^{\perp}( b, x_1, p^-,\mu_L ,\nu_H) \widetilde f_2^{\perp}( b, x_2,p^+, \mu_L ,\nu_H) \nn \\ &\qquad \times V_\beta(\nu_L^*,\nu_H;\mu_L) \,, \end{align} separating them from the exponentiated $V_\Gamma$ factor. The $V_\Gamma$ factor is given explicitly to all orders in $\alpha_s$ by plugging \eq{nuLstar} for $\nu_L^*$ into \eq{Vstar}, \begin{align} V_\Gamma(\nu_L^*,\nu_H;\mu_L) &= \exp\biggl\{\! \biggl[ \ln \! \frac{\nu_H}{\nu_L} + \Bigl( \frac{1}{2} \!+\! \frac{\alpha_s(\mu_L)\beta_0}{4\pi} \ln\frac{\nu_H}{\nu_L}\Bigr)\! \ln\mu_L b_0\biggr] \\ &\qquad\qquad\times \Bigl( \mathbb{Z}_S \Gamma[\alpha_s(\mu_L)] \! \ln\mu_L b_0 \!+\! \gamma_{RS}[\alpha_s(\mu_L)]\Bigr) \! \biggr\} \, , \nn \end{align} which can always be written in the form \begin{eqnarray} \label{VGaussian} V_\Gamma = C e^{ -A \ln^2( \Omega b)} \end{eqnarray} where $C$, $A$, and $\Omega$ are independent of $b$ and are thus constants as far as the integral $I_b$ in \eq{Ibdef} is concerned. Explicitly, \begin{eqnarray} \label{ACUeta} A(\mu_L,\nu_L,\nu_H) &=& -\mathbb{Z}_S\Gamma[\alpha_s(\mu_L)] (1-p)=-\mathbb{Z}_S \Gamma[\alpha_s(\mu_L)] \left(\frac12+\frac{\alpha_s(\mu_L)\beta_0}{4\pi}\ln \frac{\nu_H}{\nu_L} \right) \\ \Omega&\equiv& \frac{\mu_L e^{\gamma_E}}{2}\chi\,,\quad \chi(\mu_L,\nu_L,\nu_H) = \exp\left\{ \frac{ \ln \nu_H / \nu_L }{1+\frac{\alpha_s(\mu_L)\beta_0}{2\pi}\ln \nu_H / \nu_L }+\frac{\gamma_{RS}[\alpha_s(\mu_L)]}{2\mathbb{Z}_S\Gamma[\alpha_s(\mu_L)]}\right\} \nn \\ C(\mu_L,\nu_L,\nu_H) &=& \exp\left\{ A \ln^2\chi + \gamma_{RS} [\alpha_s(\mu_L)] \ln \frac{\nu_H}{\nu_L}\right\} \nn \end{eqnarray} These show all the dependence on the scales and on the anomalous dimensions contained in $V_\Gamma$. They are to be truncated to the order of resummed accuracy at which we intend to work (shown at NLL and NNLL in \eqs{ACUetaNLL}{ACUetaNNLL} of \appx{fixedorder}). Thus we now just have to figure out how to integrate the Gaussian in $\ln b$, $V_\Gamma$ in \eq{VGaussian}, against the Bessel function in the integral $I_b$ in \eq{Ibdef}. We will encounter integrals of the form \begin{equation} \label{Ibk} I_b^k\equiv \int_0^\infty db\, b J_0(b q_T) \ln^k(\mu_L b_0) e^{-A \ln^2\Omega b}\,, \end{equation} where the factors $\ln^k(\mu_L b_0)$ come from the fixed-order factor $\widetilde F$ in \eq{fixedorderfactor}. We will first focus on the case where $\widetilde F = 1$ (\emph{e.g.}~ at NLL) and compute $I_b^0$, and then discuss below how to generate the terms $I_b^{k>0}$ resulting from integrating the fixed-order terms containing logs of $\mu_L b_0$ against the rest of the integrand.
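As a concrete illustration of \eq{ACUeta}, and as a brute-force baseline against which the semi-analytic evaluation developed below can be checked, consider the following minimal Python sketch. It evaluates $A$, $\Omega$, and $C$ at one loop, dropping the $\gamma_{RS}$ pieces (cf.\ \eq{gamS}, which contains no constant term at this order), and computes $I_b^0$ by direct quadrature; all input values are illustrative assumptions:
\begin{verbatim}
import numpy as np
from scipy.special import j0
from scipy.integrate import quad

gammaE = np.euler_gamma

def gaussian_params(alphas, muL, nuL, nuH, Gamma0, beta0):
    """One-loop A, Omega, C of V_Gamma (gamma_RS terms dropped)."""
    Gam = alphas / (4 * np.pi) * Gamma0   # Gamma[alpha_s] at one loop
    lognu = np.log(nuH / nuL)
    A = 4 * Gam * (0.5 + alphas * beta0 / (4 * np.pi) * lognu)  # Z_S = -4
    chi = np.exp(lognu / (1 + alphas * beta0 / (2 * np.pi) * lognu))
    Omega = muL * np.exp(gammaE) / 2 * chi
    C = np.exp(A * np.log(chi)**2)
    return A, Omega, C

def Ib0_numeric(qT, A, Omega):
    """Direct numerical evaluation of the b integral with F = 1."""
    integrand = lambda b: b * j0(b * qT) * np.exp(-A * np.log(Omega * b)**2)
    val, _ = quad(integrand, 1e-8, 100.0, limit=400)
    return val

# Higgs-like inputs: Gamma_0 = 4 C_A = 12, beta_0 with nf = 5
A, Omega, C = gaussian_params(0.11, 10.0, 10.0, 125.0, 12.0, 23 / 3)
print(f"A = {A:.3f}, Omega = {Omega:.1f} GeV, C = {C:.3f}")
print(f"I_b^0(q_T = 5 GeV) = {Ib0_numeric(5.0, A, Omega):.5f}")
\end{verbatim}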
In the next two subsections we shall develop a method to evaluate $I_b^0$ semi-analytically---with a numerical series expansion for the Bessel function, but deriving the analytic dependence of \eq{Ibk} on all relevant physical parameters, including $q_T$. Then in \ssec{fixedorderlimit} we shall show how to obtain arbitrary $I_b^k$ from derivatives of $I_b^0$. \subsection{Representing the Bessel Function} The main issue in doing the integrals in \eq{Ibk} analytically is the presence of the Bessel function inside the integrand. The exponent, in our scheme, has terms only up to quadratic powers of $\ln(\mu_L b_0)$. Analytic integration is then possible if we can approximate the Bessel function by a series of simpler functions, such as polynomials, which can be easily integrated given the quadratic nature of our exponent. However, a simple power series expansion fails to reproduce the Bessel function in the region where it contributes to the integrand, which is basically the region up to $b \sim 2 \text{ GeV}^{-1}$. This is because the argument of the Bessel function is $b q_T$, and at larger values of $q_T$ ($\geq$ 10 GeV) the power series expansion is no longer useful. We could possibly switch to the large-$b\, q_T$ asymptotic form in terms of the cosine function, but then analytic integration is again not possible. An alternative, then, is to rewrite the Bessel function in terms of an integral representation, so that the $b$ space integral can then be done exactly. This automatically rules out any representations in terms of trigonometric functions, and, given the discussion above, the most expeditious choice is one that admits a polynomial expansion. One choice that we will find amenable to a polynomial expansion is the Mellin-Barnes type representation of the Bessel function, \begin{eqnarray} \label{Mellin-Barnes} J_0(z) = \frac{1}{2 \pi i}\int_{c-i \infty}^{c+i \infty} dt \frac{\Gamma[-t]}{\Gamma[1+t]} \left(\frac{1}{2} z \right)^{2t} \,,\end{eqnarray} where the contour is to the left of all the non-negative poles of the Gamma function, i.e., $c<0$. We give a proof of this identity in \appx{proof}. Going back to our all-orders $b$ space integral $I_b^0$ given by \eq{Ibk}, it now looks like \begin{eqnarray} \label{Ib} I_b^0= \int_0^\infty db\, b J_0(b q_T)\, e^{-A\ln^2(\Omega b)} = \frac{1}{2 \pi i} \int_{c-i \infty}^{c+i \infty} dt \, \frac{\Gamma[-t]}{\Gamma[1+t]} \int_0^\infty db\, b \left(\frac{b q_T}{2} \right)^{2t} e^{ -A \ln^2( \Omega b)} \end{eqnarray} Since we do not have a Landau pole, we can extend the limit of integration in $b$ space all the way to infinity, in which case the $b$ integral takes the form of a simple Gaussian integral and admits the analytic result: \begin{eqnarray} I_b^0 &=& \frac{1}{2\pi i} \int_{c -i \infty}^{c+i \infty} dt\, \frac{\Gamma[-t]}{\Gamma[1+t]} \left(\frac{q_T}{2}\right)^{2t}\sqrt{\frac{\pi}{A}} \,e^{ (1+t)^2/A- 2(1+t) \ln\Omega} \nn\\ &=& \frac{1}{i \Omega^2} \frac{1}{\sqrt{4 \pi A}} \int_{c-i \infty}^{c+i \infty} dt\, \frac{\Gamma[-t]}{\Gamma[1+t]}e^{ (1+t)^2/A- 2t L} \label{Ib-L}\end{eqnarray} where we have defined \begin{equation}\label{L} L= \ln \frac{2 \Omega}{q_T}\,. \end{equation} Let's examine the Gaussian exponent in this integrand.
A simple rearrangement gives us \begin{eqnarray}\label{Ib-t} I_b^0 = \frac{ 2 }{i q_T^2} \frac{e^{-AL^2 }}{\sqrt{\pi A}} \int_{c-i \infty}^{c+i \infty} dt\, \frac{\Gamma[-t]}{\Gamma[1+t]}e^{ \frac{1}{A}(t-t_0)^2 } \end{eqnarray} where $t_0 = -1+ AL$, which is a saddle point for this Gaussian and lies on the real line. In some sense, the $t$ space integral is a dual of the $b$ space integral, since the degree of suppression inverts itself from one space to the other. So in a region of large $A$ we should stick to the $b$ space integral and fit a polynomial to the Bessel function (which will now work, since the integrand is highly suppressed). On the other hand, for small $A$ (which is relevant for most perturbative series), we should go to $t$ space. If we parametrize the contour as $t= c+ix$, we have \begin{eqnarray} I_b^0 = \frac{ 2 }{q_T^2} \frac{e^{-AL^2 }}{\sqrt{\pi A}} \int_{-\infty}^{\infty} dx \, \frac{\Gamma[-c-ix]}{\Gamma[1+c+ix]} \,e^{ -\frac{1}{A}[x-i(c-t_0)]^2} \,.\label{saddle} \end{eqnarray} It is clear that the path of steepest descent passes through this saddle point ($c=t_0$) and is parallel to the imaginary axis. One can then consider doing a Taylor series expansion of the rest of the integrand \begin{equation}\label{ft} f(t)\equiv \frac{\Gamma[-t]}{\Gamma[1+t]} \end{equation} along this path around the saddle point, truncating the series after a finite number of terms. The primary difficulty with a polynomial expansion is that, to attain the percent level accuracy we desire for this integral, we need a good description of $f(t)$ out to $x_l \sim \sqrt{2 A\ln 10}$, at which value the exponent drops to 1\% of its peak value. The factor of $A$ is essentially the cusp anomalous dimension (e.g., it is $ 2 \alpha_s C_A /\pi $ for the Higgs), which is a small number. Considering the worst case scenario $ A \sim 0.5$, we would need $x_l \sim 1.5$, which clearly cannot be accomplished using a Taylor series expansion, because the radius of convergence is around 1. We will, instead, find an expansion for $f(t)$ in the next subsection in terms of (Gaussian-weighted) Hermite polynomials that performs quite well with just a few terms. There are a few customizations of this expansion that we will make to optimize convergence. Our particular procedure presented here is by no means unique, and we give a couple of alternative methods for expanding and approximating the integrand of $I_b^0$ in \appx{alternatives}. There are, undoubtedly, others that would work as well. \subsection{Expansion in Hermite polynomials} \label{HStrategy} One of the difficulties with finding a series expansion of $f(t)$ in \eq{ft} is that it grows exponentially for large $\abs{x}$, with $t=c+ix$. We can factor out this exponential growth by using Euler's reflection formula: \begin{equation} \label{Euler} \Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin(\pi z)}\,. \end{equation} Then \begin{equation} \label{ftfactored} f(t) = -\frac{\sin(\pi t)}{\pi}\Gamma(-t)^2\,. \end{equation} The exponential behavior for large $\abs{x}$ is now factored out in the sine function in front, and we can focus on finding a good expansion for $\Gamma(-t)^2$. The integral \eq{saddle} is then: \begin{equation} I_b^0 = -\frac{2}{q_T^2}\frac{e^{-AL^2}}{\sqrt{\pi A}} \int_{-\infty}^\infty dx\,\Gamma(-c-ix)^2 \frac{\sin[\pi(c+i x)]}{\pi} e^{-\frac{1}{A}[x - i(c-t_0)]^2}\,. \end{equation} The sine can be written in terms of exponentials, which shift the linear and constant terms in the Gaussian exponential.
It is straightforward to work out that the result is \begin{align} I_b^0 = \frac{1}{i\pi q_T^2\sqrt{\pi A}} \int_{-\infty}^\infty dx\, \Gamma(-c-ix)^2 &\biggl\{e^{-A(L - i\pi/2)^2} e^{-\frac{1}{A}\bigl[x + \frac{A\pi}{2} - i(c-t_0)\bigr]^2} \\ & -e^{-A(L + i\pi/2)^2} e^{-\frac{1}{A}\bigl[x - \frac{A\pi}{2} - i(c-t_0)\bigr]^2} \biggr\} \,. \nn \end{align} By changing variables in the second line from $x\to -x$, we can write the result compactly as \begin{equation} \label{Ibimag} I_b^0 = \frac{2}{\pi q_T^2\sqrt{\pi A}} \Imag\biggl\{ e^{-A(L-i\pi/2)^2} \int_{-\infty}^\infty dx\, \Gamma(-c-ix)^2 e^{-\frac{1}{A}\bigl[ x + \frac{A\pi}{2} - i(c-t_0)\bigr]^2}\biggr\}\,. \end{equation} The remaining function $\Gamma(-t)^2$ is exponentially damped for large $x$. Indeed, Stirling's formula tells us that \begin{equation} \label{stirling} \abs{\Gamma(-c-ix)}^2 \to 2\pi e^{-\pi\abs{x}} \abs{x}^{-1-2c} \end{equation} as $\abs{x}\to \infty$. The contribution of this exponential tail is further suppressed by the Gaussian factor it multiplies in \eq{Ibimag}. On the other hand, near $x=0$, the function $\Gamma(-t)^2$ itself closely resembles a Gaussian, times polynomials. To determine the curvature of the Gaussian, we look at the Taylor series expansion of $\Gamma(-t)^2$ near $x=0$: \begin{eqnarray} \label{gexpansion} \Gamma(-c-ix)^2 &=&\Gamma(-c)^2 \left[(1 - a_0\, x^2)-2i \psi^{(0)}(-c)\, x(1-b_0\, x^2) \right]+\cdots \,,\\ \nn a_0&=&2 \psi^{(0)}( -c)^2 + \psi^{(1)}( -c) \,,\\ \nn b_0 &=& \tfrac{2}{3} \psi^{(0)}( -c)^2 + \psi^{(1)}( -c) +\tfrac{1}{6} \psi^{(2)}( -c)/\psi^{(0)}( -c) \,.\end{eqnarray} It now remains to find a good series expansion for $\Gamma(-c-ix)^2$ that enables us to perform the integral in \eq{Ibimag} analytically and accurately. As noted in \eq{stirling}, $\Gamma(-c-ix)^2$ dies off exponentially for large $x$, and the remaining Gaussian in \eq{Ibimag} dies off even faster. For $c=-1$, both $\Gamma(-c-ix)^2$ and the Gaussian are significantly nonzero only up to about $\abs{x}\sim 2$. For the practical purpose of the series expansion, we set $c=-1$, which makes the gamma function less oscillatory than values $|c|<1$ would. This is the saddle point in the limit $ A \rightarrow 0$, i.e., when we are in the fixed order regime with the resummation turned off. This still induces some imaginary part, and hence oscillations, in the exponent away from $A=0$, but with a good expansion this is not an issue. Thus one can try a series expansion of $\Gamma(1-ix)^2$ in terms of Hermite polynomials $H_n(x)$, which form a complete orthogonal basis. They are well known, of course, but we nevertheless remind ourselves of the first several $H_{n}$: \begin{align} H_0(x) &= 1 & H_1(x) &= 2x \\ H_2(x) &= 4x^2 - 2 & H_3(x) &= 8x^3 - 12x \nn \\ H_4(x) &= 16 x^4 - 48 x^2 + 12 & H_5(x) &= 32 x^5 - 160 x^3 + 120 x \nn \\ H_6(x) &= 64x^6 - 480 x^4 + 720 x^2 - 120 & H_7(x) &= 128 x^7 - 1344 x^5 + 3360 x^3 - 1680 x\,, \nn \end{align} etc. They satisfy the orthogonality relation \begin{equation} \label{orthogonality} \int_{-\infty}^\infty dx\,e^{-\alpha^2 x^2} H_m(\alpha x) H_n(\alpha x) = \alpha^{-1}\sqrt{\pi}\, 2^n n! \delta_{nm}\,.
\end{equation} We expand $\Gamma(1-ix)^2$ in terms of these polynomials in the following way: \begin{equation} \label{Hermiteexpansion} \Gamma(1-ix)^2 = e^{-a_0 x^2} \sum_{n=0}^\infty c_{2n} H_{2n}(\alpha x) + \frac{i \gamma_E}{\beta} e^{-b_0 x^2} \sum_{n=0}^\infty c_{2n+1} H_{2n+1}(\beta x) \,. \end{equation} We have introduced weighting factors $e^{-a_0 x^2}$ and $\frac{i \gamma_E}{\beta} e^{-b_0 x^2}$ for the real and imaginary parts to help with faster convergence of the series, as they capture the behavior of $\Gamma(1-ix)^2$ near $x=0$, using the values of $a_0$ and $b_0$ obtained from the Taylor series expansion in \eq{gexpansion} at $c=-1$: \begin{equation}\label{a0b0} a_0 =2\gamma_E^2+\frac{\pi^2}{6}\approx 2.31129\, , \quad b_0=\frac{2}{3}\gamma_E^2+\frac{\zeta_3}{3\gamma_E} +\frac{\pi^2}{6}\approx 2.56122 \,. \end{equation} Note that the real and imaginary parts of the LHS of \eq{Hermiteexpansion} are respectively even and odd functions of $x$. Hence on the RHS, we need even polynomials for the real part and odd polynomials for the imaginary part. Although the relation \eq{orthogonality} would make it seem natural to pick $\alpha^2 = a_0$ and $\beta^2 =b_0$ in the arguments of $H_n$ in the expansion \eq{Hermiteexpansion}, we choose $\alpha,\beta$ to be floating, and will determine their optimal values to ensure the fastest convergence for this expansion. Using \eq{orthogonality}, the coefficients in \eq{Hermiteexpansion} are given by \begin{eqnarray} \label{HermiteCoefficients} c_{2n} &=& \frac{\alpha}{\sqrt{\pi}\, 2^{2n} (2n)!} \int_{-\infty}^\infty dx \, \text{Re}\{\Gamma(1-ix)^2\} H_{2n}(\alpha x) e^{-(\alpha^2-a_0) x^2}\,, \\ c_{2n+1} &=& \frac{\beta^2}{\gamma_E \sqrt{\pi}\, 2^{2n+1} (2n+1)!} \int_{-\infty}^\infty dx \, \text{Im}\{\Gamma(1-ix)^2\} H_{2n+1}(\beta x) e^{-(\beta^2-b_0) x^2}\,.\nn \end{eqnarray} These integrals still have to be computed numerically, as far as we know, but note they are purely mathematical, having no dependence on any of our physical parameters, and need only be computed once. Thanks to the damped behavior of the integrand and the normalization factors in front, the expansion coefficients fall off fairly rapidly with $n$. To make the series expansion well behaved, the width parameters of the Gaussian functions must be positive: $\alpha^2- a_0>0$ and $\beta^2- b_0>0$. These widths define the region of $x$ where the function $\Gamma(1-ix)^2$ is expanded in terms of the Hermite basis. For a narrow width, the series converges swiftly with $n$ but is valid only in a narrow region around $x=0$, while for a broader width, the convergence is slower but the expansion is valid in a wider region of $x$. The Gaussian function in our integration in \eq{Ibimag} resolves the region $|x+\tfrac{A\pi}{2}| \sim \sqrt{A}$, and for the maximal value we encounter, $A\sim 0.5$, the broadest region is up to $|x| \sim 1.5$. The parameters $\alpha,\beta$ should be chosen so that the Gaussians in \eq{HermiteCoefficients} roughly match the width of this region and resemble $\Gamma(1-ix)^2$ itself as closely as possible. We explored various values of these parameters such that the exponents $\alpha^2 -a_0$ and $\beta^2-b_0$ were between 1 and 10 and found that for small values $\sim 1$ the convergence is slow and hence more terms are needed for an accurate description.
For large values $\sim 10$ the accuracy of the integration in \eq{Ibimag} is very good with a few basis terms but is not further improved by including higher order terms, because the series expansion is resolving only a narrow $x$ region compared to the one dictated by \eq{Ibimag}. Empirical tests show that the series converges rapidly for $\alpha^2-a_0$ and $\beta^2-b_0$ in the range 3--5, while maintaining the required accuracy over the desired range of $x$ (from 0 to $\pm 1.5$). \fig{Gamma2} shows the agreement between the exact result and the series expansion with 3 or 4 basis terms for the real (even) and imaginary (odd) parts, for: \begin{equation}\label{alphabeta} \alpha^2-a_0=4\quad \text{and}\quad \beta^2-b_0=4\,. \end{equation} The coefficients $c_{n}$ for these choices of $\alpha,\beta$ are given by \begin{align} \label{cn} c_0& =1.02248\,,\qquad c_2 = 0.02254\,,\qquad c_4= 0.00206\,, \qquad c_6=3.42\times 10^{-5} \\ c_1&=1.06808\,,\qquad c_3=0.02173\,,\qquad c_5=0.00103\,, \qquad c_7=3.21 \times10^{-6}\, \nn \end{align} which indeed show rapid convergence. In practice we include up to $c_6$ in our numerical results; from $c_7$ onwards the impact is negligible. \begin{figure} \centerline{ \includegraphics[width=.5\linewidth]{ReGamSqr.pdf} \hskip 0.2cm \includegraphics[width=.5\linewidth]{ImGamSqr.pdf} } \vskip-0.2cm \caption[1]{Real and imaginary parts of $\Gamma(1-ix)^2$ compared to series expansion in terms of Hermite polynomials up to 6th (5th) order for the real (imaginary) parts, \emph{i.e.}~ four terms for the real part and three for the imaginary part.} \label{fig:Gamma2} \end{figure} From staring at \fig{Gamma2}, one notices a residual deviation in the real part above $x\sim 1$, which thus appears to be the potentially largest source of error from our method. However, the region of larger $x$ in \fig{Gamma2} is suppressed by the remaining Gaussian in \eq{Ibimag}. The remaining deviation can easily be further suppressed if desired by including higher-order polynomials. In practice, at NNLL accuracy the cross section has $\sim 10$\% uncertainties, and the error due to our series truncation at $n=5$ or $6$ is significantly smaller than this perturbative uncertainty. This is clearly seen in Fig. \ref{fig:Hcompare}, which shows the effect of increasing the total number of terms in the Hermite expansion from 6 to 7. Given this expansion, the integration in \eq{Ibimag} can be rewritten in terms of the following basis of integrals \begin{eqnarray} \label{cH2n} \mathcal{H}_{n}(\alpha,a_0) &=& \frac{1}{\sqrt{\pi A}} e^{-A(L-i\pi/2)^2}\int_{-\infty}^\infty dx\, H_{n}(\alpha x) e^{-a_0 x^2-\frac{1}{A}( x + z_0)^2}\,, \end{eqnarray} where $z_0 = A\pi/2 - i(c-t_0)$. For $c=-1$, $z_0 = A(\pi/2 + iL)$. The integrals for odd $n$ arising from the expansion in \eq{Hermiteexpansion} are obtained from \eq{cH2n} with the substitutions $\alpha\to \beta$, $a_0\to b_0$. The prefactors in front have been included in the definition of $\mathcal{H}_n$ for later convenience. Now we go about evaluating analytically the integrals in \eq{cH2n}. There are a number of ways to do this; we choose one that seems particularly elegant. \subsubsection{Generating function method to integrate against Hermite polynomials} Using the generating function for Hermite polynomials, \begin{equation} \label{generating} e^{2xt -t^2} = \sum_{n=0}^\infty \frac{t^n}{n!} H_n(x)\,, \end{equation} we can efficiently evaluate all the integrals $\mathcal{H}_{n}$ in \eq{cH2n} at the same time.
By forming the infinite series, \begin{equation} \label{Hnseries} \mathcal{H} \equiv \sum_{n=0}^\infty \frac{t^n}{n!} \mathcal{H}_n(\alpha,a_0) = \frac{e^{-A(L-i\pi/2)^2}}{\sqrt{\pi A}} \int_{-\infty}^\infty dx\,e^{2\alpha xt - t^2} e^{-a_0 x^2 - \frac{1}{A}( x+ z_0)^2}\,, \end{equation} we are able to use the generating function relation \eq{generating} to obtain a Gaussian integral on the RHS. By evaluating the integral on the RHS and expanding the result back out in powers of $t^n$, we will be able to obtain expressions for the individual $\mathcal{H}_n$. Rescaling the integration variable and completing the square in the exponent on the RHS of \eq{Hnseries}, \begin{equation} \label{HGaussian} \mathcal{H} = e^{-\frac{a_0 z_0^2}{1+a_0 A}} \frac{e^{-A(L-i\pi/2)^2}}{\sqrt{\pi(1+a_0A)}}\int_{-\infty}^\infty dx\, e^{-\bigl[ x + \sqrt{\frac{A}{1+a_0 A}}\,\bigl( \frac{z_0}{A} - \alpha t\bigr )\bigr]^2} e^{\frac{t}{1+a_0 A}\bigl[t(A\alpha^2 - 1 - a_0 A) - 2 \alpha z_0 \bigr] } \end{equation} The exponent of the $x$-dependent Gaussian is complex, but the result of integrating it is just $\sqrt{\pi}$ (see \eq{contour}). Thus, \begin{equation} \label{Ht} \mathcal{H} = e^{\frac{-A (L-i\pi/2)^2}{1+a_0 A}} \frac{1}{\sqrt{1+a_0 A}} \sum_{m=0}^\infty \frac{t^m}{m!}\frac{1}{(1+a_0 A)^m}\bigl[ t (A\alpha^2 - 1 - a_0 A) - 2\alpha z_0\bigr]^m\,, \end{equation} where we have expanded the exponential in $t$ in \eq{HGaussian} in a Taylor series. We cannot directly read off the coefficients of powers of $t$ in \eq{Ht} to obtain $\mathcal{H}_n$ in \eq{Hnseries} due to the powers of the binomial in $t$, but using the binomial expansion and some reindexing, we can do so. The proof is given in \appx{integral}, with the result: \begin{equation} \label{Hnresult} \mathcal{H}_n(\alpha,a_0) = \mathcal{H}_0(\alpha,a_0) \frac{(-1)^n n!}{(1+a_0 A)^n}\sum_{m=0}^{\floor*{n/2}}\frac{1}{m!}\frac{1}{(n-2m)!} \Bigl\{ [ A(\alpha^2 - a_0)-1] (1+a_0 A)\Bigr\}^m (2\alpha z_0)^{n-2m} \,, \end{equation} where $\floor*{\frac{n}{2}}$ is the floor operator, i.e. the integer part of $\frac{n}{2}$, and \begin{equation} \mathcal{H}_0(\alpha,a_0) = e^{\frac{-A(L-i\pi/2)^2}{1+a_0 A}} \frac{1}{\sqrt{1+a_0 A}} \,. \end{equation} If one desires, \eq{Hnresult} can be written separately for even and odd $n$, \begin{align} \mathcal{H}_{2n} &= \mathcal{H}_0 \frac{(2n)!}{(1\!+\! a_0 A)^{2n}} \sum_{m=0}^n \frac{1}{m!}\frac{1}{(2n\!-\! 2m)!} \Bigl\{ [ A(\alpha^2 \!-\! a_0)-1] (1\!+\! a_0 A)\Bigr\}^m (2\alpha z_0)^{2n-2m} \\ \mathcal{H}_{2n+1} &= \mathcal{H}_1 \frac{(2n+1)!}{(1+a_0 A)^{2n}} \sum_{m=0}^n \frac{1}{m!}\frac{1}{(2n-2m+1)!} \Bigl\{ [ A(\alpha^2 - a_0)-1] (1+a_0 A)\Bigr\}^m (2\alpha z_0)^{2n-2m}\,, \nn \end{align} where \begin{equation} \mathcal{H}_1 = - \mathcal{H}_0 \frac{2\alpha z_0}{1+a_0 A}\,. \end{equation} \subsubsection{Explicit result of integration} Explicitly, the first several $\mathcal{H}_{n}$ given by \eq{Hnresult}, including $\mathcal{H}_{0,1}$ given above, are given in \eq{Hnexplicit}. 
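Since \eq{Hnresult} is the workhorse of our analytic evaluation, it is straightforward to cross-check it numerically against the defining integral \eq{cH2n}. The following minimal Python sketch does this for the first few $n$ at $c=-1$; the values of $A$ and $L$ are illustrative placeholders (not outputs of our scale choices), and the quadrature is a simple Riemann sum on a wide grid, which suffices because the integrand dies off rapidly.
\begin{verbatim}
# Cross-check of the closed form for H_n, eq. (Hnresult), against direct
# numerical integration of its definition, eq. (cH2n), for c = -1.
# A and L below are illustrative placeholder values.
import numpy as np
from math import factorial
from scipy.special import eval_hermite   # physicists' Hermite H_n

A, L = 0.3, 1.2                          # illustrative inputs
a0 = 2*np.euler_gamma**2 + np.pi**2/6    # eq. (a0b0)
alpha = np.sqrt(a0 + 4.0)                # alpha^2 - a0 = 4, eq. (alphabeta)
z0 = A*(np.pi/2 + 1j*L)                  # z0 = A(pi/2 + i L) for c = -1
pref = np.exp(-A*(L - 1j*np.pi/2)**2)/np.sqrt(np.pi*A)

def H_quad(n):
    """Riemann-sum quadrature of eq. (cH2n) on a wide real grid."""
    x = np.linspace(-20.0, 20.0, 400001)
    y = eval_hermite(n, alpha*x)*np.exp(-a0*x**2 - (x + z0)**2/A)
    return pref*np.sum(y)*(x[1] - x[0])

def H_closed(n):
    """Closed-form result, eq. (Hnresult)."""
    H0 = np.exp(-A*(L - 1j*np.pi/2)**2/(1 + a0*A))/np.sqrt(1 + a0*A)
    s = sum(((A*(alpha**2 - a0) - 1)*(1 + a0*A))**m
            * (2*alpha*z0)**(n - 2*m)/(factorial(m)*factorial(n - 2*m))
            for m in range(n//2 + 1))
    return H0*(-1)**n*factorial(n)/(1 + a0*A)**n*s

for n in range(5):
    print(n, H_quad(n), H_closed(n))     # the two columns should agree
\end{verbatim}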
The integral $I_b^0$ in \eq{Ibimag} that we sought to evaluate in the first place is then given explicitly by, for $c=-1$, \begin{align} \label{IbHermite} I_b^0 &= \frac{2}{\pi q_T^2} \sum_{n=0}^{\infty} \Imag \biggl\{ c_{2n} \mathcal{H}_{2n}(\alpha,a_0) + \frac{i\gamma_E}{\beta} c_{2n+1} \mathcal{H}_{2n+1}(\beta,b_0) \biggr\} \\ &= \frac{2}{\pi q_T^2} \Imag \sum_{n=0}^\infty \sum_{m=0}^n \Biggl( c_{2n} \mathcal{H}_0(\alpha,a_0) \frac{(2n)!}{(1+a_0 A)^{2n}} \frac{1}{m!(2n-2m)!} \nn \\ &\qquad\qquad \qquad \qquad\qquad \times \Bigl\{ [ A(\alpha^2 - a_0)-1] (1+a_0 A)\Bigr\}^m (2\alpha z_0)^{2n-2m} \nn \\ &\qquad \qquad + \frac{i\gamma_E}{\beta} c_{2n+1} \mathcal{H}_1(\beta,b_0) \frac{(2n+1)!}{(1+b_0A)^{2n}} \frac{1}{m!(2n-2m+1)!} \nn \\ &\qquad\qquad \qquad \qquad\qquad \times \Bigl\{ [ A(\beta^2 - b_0)-1] (1+b_0 A)\Bigr\}^m (2\beta z_0)^{2n-2m}\biggr)\nn \,,\end{align} where the coefficients $c_{n}$ are given by \eq{HermiteCoefficients}. As many terms in the sum over $n$ may be included as are needed to achieve the desired numerical accuracy. In practice, we include the first few terms in the sum over $n$ (three or four even $c_{2n}$ and three odd $c_{2n+1}$ coefficients), which gives us percent-level accuracy for the cross section in the perturbative resummation region. Although the coefficients $c_{n}$ still need to be evaluated numerically, we note that they depend only on properties of the pure function $\Gamma(1-ix)^2$ and need be determined only once (\eq{cn}). Our Hermite expansion is applied only to this function, which arises solely from the Bessel function $J_0(z)$ that appears in the factorization convolution in \eq{xsec}. The dependence on all the physical variables, such as $q_T,Q$, and the scales $\mu_{L,H},\nu_{L,H}$, enters through the evolution exponent in \eqs{VGaussian}{ACUeta} and is captured analytically by \eq{IbHermite}. For other processes or observables, the same basis and coefficients we used, and the resultant analytic integration \eq{IbHermite}, should apply, though the number of terms needed to get good convergence (determined by the width of the evolution exponent \eq{VGaussian}) may vary. \subsection{Fixed order terms} \label{ssec:fixedorderlimit} \subsubsection{Fixed-order prefactors in resummed expression} At NNLL (or NLL$'$) and higher orders, fixed order logarithmic terms of the form $\ln^k(\mu_L b_0)$ appear in the prefactor $\widetilde F$ multiplying the resummed exponent in the integrand of \eq{Ibdef}. Explicitly, $\widetilde F$ can be expanded: \begin{equation} \label{Fexpansion} \widetilde F(b,x_1,x_2,Q;\mu_L,\nu_L^*,\nu_H) = \sum_{n=0}^\infty \sum_{k=0}^{2n} \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^n \widetilde F^{(n)}_k \ln^k\mu_L b_0\,, \end{equation} so that we can isolate integrals of each term in the form \eq{Ibk}. We can build each coefficient $\widetilde F^{(n)}_k$ out of the coefficients of the soft function and TMDPDFs in their expansions \eqs{Sn}{Iijexpansion}, reading off the coefficients of each power of $\ln\mu_L b_0$. In doing so we must take into account extra powers of $\ln\mu_L b_0$ in the soft function due to the shifted scale $\nu_L^* = \nu_L(\mu_L b_0)^{-1+p}$ in \eq{nuLstar} that we use.
Then we have for the terms we need up to NNLL accuracy, at tree level: \begin{align} \label{Ftree} \widetilde F^{(0)}_0 = f_i(x_1,\mu_L) f_{\bar i}(x_2,\mu_L)\,, \end{align} where $i,\bar i = g$ for Higgs production and $i=q$ for DY, and at $\mathcal{O}(\alpha_s)$: \begin{align} \label{Foneloop} \widetilde F^{(1)}_2 &= f_i(x_1,\mu_L) f_{\bar i}(x_2,\mu_L) \mathbb{Z}_S \frac{\Gamma_0}{2} (1-1) = 0 \\ \widetilde F^{(1)}_1 &= f_i(x_1,\mu_L) f_{\bar i}(x_2,\mu_L) \biggl( \mathbb{Z}_S \Gamma_0 \ln\frac{\nu_L}{\mu_L} + \mathbb{Z}_f \Gamma_0 \ln\frac{\nu_H^2}{Q^2} + 2\gamma_f^0\biggr) \nn \\ &\quad - [2P_{ij}^{(0)}\otimes f_j(x_1,\mu_L)] f_{\bar i}(x_2,\mu_L) - f_i(x_1,\mu_L) [2P_{\bar i j} \otimes f_j(x_2,\mu_L)] \nn \\ \widetilde F^{(1)}_0 &= f_i(x_1,\mu_L) f_{\bar i}(x_2,\mu_L) c_{\widetilde S}^1 + [I_{ij}^{(1)}\otimes f_j(x_1,\mu_L)] f_{\bar i}(x_2,\mu_L) + f_i(x_1,\mu_L)[ I_{\bar i j}^{(1)}\otimes f_j(x_2,\mu_L)]\,. \nn \end{align} Note that $\widetilde F_2^{(1)}$ vanishes due to the extra term from the shift $\nu_L\to \nu_L^*$ since the double log term was promoted from the fixed-order soft function to the exponent in \eq{VSNLL}, as designed (though this cancellation will no longer be exact once we implement profile scales in the next subsection). Up to NNLL accuracy, the only $\mathcal{O}(\alpha_s^2)$ terms we need come from the $\alpha_s^2$ piece of the soft function induced by $\nu_L\to\nu_L^*$, and the $\mathcal{O}(\alpha_s^2)$ terms in the $V_\beta$ piece of $\widetilde F$ in \eq{fixedorderfactor}, which from \eq{Vbeta} we see has the expansion \begin{equation} V_\beta(\nu_L^*,\nu_H;\mu_L) = 1+ \sum_{n=2}^\infty \sum_{k=1}^{n+1} \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^n V^{(n)}_k \ln^k \mu_L b_0 \,, \end{equation} where at $\mathcal{O}(\alpha_s^2)$, \begin{align} V^{(2)}_3 &= \mathbb{Z} _S\Gamma_0 \beta_0 \Bigl(1-\frac{1}{2}\Bigr) = \frac{1}{2} \mathbb{Z} _S\Gamma_0 \beta_0 \\ V^{(2)}_2 &= \mathbb{Z}_S \Gamma_0 \beta_0 \ln\frac{\nu_H}{\nu_L} \nn \\ V^{(2)}_1 &= 0\,. \nn \end{align} The $V_3^{(2)}$ coefficient multiplies only small logs, $\ln^3\mu_L b_0$, so strictly at NNLL we can drop it. Then the only relevant piece of $\widetilde F^{(2)}$ we would need at NNLL accuracy is \begin{align} \label{Ftwoloop} \widetilde F^{(2)}_2 &= f_i(x_1,\mu_L) f_{\bar i}(x_2,\mu_L) \Bigl(V_{2}^{(2)} - \mathbb{Z}_S\Gamma_0\beta_0 \ln\frac{\nu_H}{\nu_L}\Bigr) = 0\,, \end{align} where the second term in $\widetilde F_2^{(2)}$ came from the shift $\nu_L\to \nu_L^*$ in the one-loop soft function. So at NNLL, all pieces of $\widetilde F^{(2)}$ (\emph{i.e.}~ the fixed-order $\mathcal{O}(\alpha_s^2)$ terms) vanish or can be dropped. The result of integrating each of the powers of $\ln\mu_L b_0$ in \eq{Fexpansion} inside of the integrals $I_b^k$ in \eq{Ibk} can be readily obtained from the analytic resummed result for $I_b^0$ \eq{IbHermite}, by taking derivatives, using: \begin{align} \label{logderivatives} \ln^k(\mu_L b_0) e^{-A \ln^2( \Omega b)} &= \ln^k (\mu_L b_0) e^{-A \ln^2( \mu_L b_0 \chi)} = \left[ \hat \partial_{\chi} \right]^{k} e^{-A \ln^2( \mu_L b_0 \chi)}\,, \end{align} where we used \eq{ACUeta} and we have defined \begin{eqnarray} \hat \partial_{\chi} = -\frac{1}{2A} \frac{\partial}{\partial \ln \chi}- \ln \chi\,.
\end{eqnarray} Using the final expression \eq{IbHermite} for $I_b^0$ we can now write \begin{eqnarray}\label{Ibk-cH} I_b^k &=& \left[ \hat \partial_{\chi} \right]^k I_b^0 \nn\\ &=& \frac{2}{\pi q_T^2}\sum_{n=0}^{\infty} \text{Im}\Big \{c_{2n} \, \left[ \hat \partial_{\chi} \right]^k\mathcal{H}_{2n}(\alpha,a_0)+ \frac{i\gamma_E}{\beta}c_{2n+1} \, \left[ \hat \partial_{\chi} \right]^k\mathcal{H}_{2n+1} (\beta,b_0) \Big\}\,. \end{eqnarray} In the expression \eq{Hnresult} for $\mathcal{H}_{n}$, the variable $\chi$ only appears inside of \begin{equation} \label{Lchi} L = \ln \left(\frac{2\Omega}{q_T}\right) = \ln\frac{\mu_L e^{\gamma_E}}{q_T} + \ln \chi\,, \end{equation} which appears inside $z_0$ and $\mathcal{H}_0$ in \eq{Hnresult}. So we can write \begin{eqnarray} \label{partialchiL} \hat \partial_{\chi} = -\frac{1}{2A} \partial_{L} - \ln\chi\,. \end{eqnarray} To construct fixed order terms up to order $k$, we need up to $k^{\rm th}$ derivatives of the $L$ dependent pieces that make up $I_b^k$. There is a very simple recursion relation satisfied by the derivatives of $\mathcal{H}_n$: \begin{equation} \label{recursion} -\frac{1}{2A}\partial_L \mathcal{H}_{n}(\alpha,a_0) = -\frac{i z_0}{A(1+a_0 A)} \mathcal{H}_{n}(\alpha,a_0) + i\frac{n \alpha}{1+a_0 A} \mathcal{H}_{n-1}(\alpha,a_0)\,, \end{equation} which can be derived either from the original definition \eq{cH2n} of $\mathcal{H}_n$, or its explicit all-orders result in \eq{Hnresult}. These proofs are given in \appx{Hderivative}. Derivatives to arbitrary order $(\hat\partial_\chi)^k\mathcal{H}_n$ can then be obtained by repeatedly applying this relation \eq{recursion}: \begin{eqnarray} \label{dcH} \hat \partial_{\chi} \mathcal{H}_n &=& L_1\, \mathcal{H}_{n}+ \frac{i \alpha}{1+a_0 A} \,n\mathcal{H}_{n-1} \,,\\ \left[ \hat \partial_{\chi} \right]^2 \mathcal{H}_n &=& \left[L_1^2 +\frac{a_0}{2(1+a_0 A)} \right] \mathcal{H}_n +2L_1\frac{i \alpha}{1+a_0 A}n \mathcal{H}_{n-1} +\left[ \frac{i \alpha}{1+a_0 A} \right]^2 n(n-1) \mathcal{H}_{n-2} \,,\nn\\ \left[ \hat \partial_{\chi} \right]^3 \mathcal{H}_n &=&L_1 \left[L_1^2 +\frac{3 a_0}{2(1+a_0 A)} \right]\, \mathcal{H}_n + 3\left[L_1^2 +\frac{ a_0}{2(1+a_0 A)} \right] \frac{i \alpha}{1+a_0 A}n\mathcal{H}_{n-1} \nn\\ && +3 L_1 \left[ \frac{i \alpha}{1+a_0 A} \right]^2 n(n-1) \mathcal{H}_{n-2} +\left[ \frac{i \alpha}{1+a_0 A} \right]^3 n(n-1)(n-2) \mathcal{H}_{n-3} \,,\nn \end{eqnarray} etc., where \begin{equation} L_1 \equiv -\frac{i z_0}{A(1+a_0 A)} - \ln\chi =\frac{1}{1+a_0 A}\left[ \ln\frac{\mu_L e^{\gamma_E}}{q_T}-\frac{i\pi}{2}-a_0 A \ln\chi \right] \,. \end{equation} With these expressions, we can now write the resummed cross section \eq{resummedIb} in $q_T$ space in terms of the above results of integrating $I_b$ and $I_b^k$ in \eqs{Ibdef}{Ibk}, \begin{align} \label{thefinalresult} \frac{d\sigma}{dq_T^2 dy} &= \frac{1}{2}\sigma_0 C_t^2(M_t^2,\mu_T)H(Q^2,\mu_H) U(\mu_L,\mu_H, \mu_T) C(\mu_L,\nu_L,\nu_H) \\ &\quad\times \sum_{n=0}^\infty \sum_{k=0}^{2n} \Bigl(\frac{\alpha_s(\mu_L)}{4\pi}\Bigr)^n \widetilde F^{(n)}_k(x_1,x_2,Q;\mu_L,\nu_L,\nu_H)I_b^k(q_T;\mu_L,\nu_L,\nu_H;\alpha,a_0;\beta,b_0)\,, \nn \end{align} where the integrals $I_b^k$ are given in final evaluated form in \eq{Ibk-cH} with the first few derivatives $(\partial_\chi)^k \mathcal{H}_n$ given by \eq{dcH}. The calculation of these integrals has formed the bulk of this \sec{analytic} and constitutes one of the main results of this paper.
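As a quick numerical check of the recursion \eq{recursion} on which these derivative formulas rest, one can compare its right-hand side against a central finite difference in $L$ of the closed form \eq{Hnresult}. A minimal self-contained Python sketch follows; the values of $A$ and $L$ are again illustrative placeholders, not part of our actual implementation.
\begin{verbatim}
# Finite-difference check of eq. (recursion) for -(1/2A) d/dL acting on
# H_n, using the closed form of eq. (Hnresult) at c = -1.
import numpy as np
from math import factorial

A, L = 0.3, 1.2                          # illustrative inputs
a0 = 2*np.euler_gamma**2 + np.pi**2/6
alpha = np.sqrt(a0 + 4.0)

def Hn(n, Lval):
    z0 = A*(np.pi/2 + 1j*Lval)
    H0 = np.exp(-A*(Lval - 1j*np.pi/2)**2/(1 + a0*A))/np.sqrt(1 + a0*A)
    s = sum(((A*(alpha**2 - a0) - 1)*(1 + a0*A))**m
            * (2*alpha*z0)**(n - 2*m)/(factorial(m)*factorial(n - 2*m))
            for m in range(n//2 + 1))
    return H0*(-1)**n*factorial(n)/(1 + a0*A)**n*s

z0, eps = A*(np.pi/2 + 1j*L), 1e-6
for n in range(1, 5):
    lhs = -(Hn(n, L + eps) - Hn(n, L - eps))/(2*eps)/(2*A)
    rhs = (-1j*z0/(A*(1 + a0*A)))*Hn(n, L) \
          + 1j*n*alpha/(1 + a0*A)*Hn(n - 1, L)
    print(n, abs(lhs - rhs))             # should be numerically tiny
\end{verbatim}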
To turn \eq{thefinalresult} into a final prediction for the perturbative $q_T$ cross section, we still need to match it onto the fixed-order prediction of full QCD for the large $q_T$ tail. We turn our attention now to this task. \subsubsection{The large $q_T$ limit} \label{ssec:largeqT} In the large $q_T\sim Q$ limit the resummation is turned off, and this corresponds to the $A\to 0$ limit in the soft evolution of \eq{Ib-L}. In this limit the resummed part of the cross section \eq{thefinalresult} should reduce to a sum of the fixed-order singular terms in the cross section. Since we evaluate the $I_b^k$'s in \eq{thefinalresult} using an expansion in Hermite polynomials giving \eqs{IbHermite}{Ibk-cH}, it is not obvious whether these expansions, when truncated to our desired number of terms, maintain their accuracy in the fixed order limit. We check the accuracy here. In the $A\to 0$ limit, the integrals of the even and odd Hermite polynomials reduce to \begin{eqnarray}\label{Ibk-limit} I_b^0 &\to& 0 \,,\nn\\ I_b^1 &\to& -\frac{1}{q_T^2} f_1 \,,\nn\\ I_b^2 &\to& -\frac{2}{q_T^2} \left[ f_1 \, \ln\frac{\mu_L e^{\gamma_E}}{q_T} -f_2 \gamma_E \right] \,,\nn\\ I_b^3 &\to&-\frac{3}{q_T^2} \left[f_1 \ln^2\frac{\mu_L e^{\gamma_E}}{q_T} -2 f_2 \gamma_E \ln\frac{\mu_L e^{\gamma_E}}{q_T} + f_3 \right] \,, \end{eqnarray} where $f_k$ is a linear combination of the coefficients $c_n$ in the series expansion: \begin{eqnarray} \label{fn} f_1 &=& \sum_{n=0}^{\infty} \frac{(-1)^n (2n)!}{n!}c_{2n} \approx c_0-2c_2+12 c_4 -120 c_6\approx 0.998051 \,,\\ f_2 &=& \sum_{n=0}^{\infty} \frac{(-1)^n (2n+1)!}{n!}c_{2n+1}\approx c_1-6 c_3 +60 c_5 \approx 0.99950 \,,\nn\\ f_3 &=& \sum_{n=0}^{\infty} \frac{(-1)^n (2n)!}{n!}\left(\gamma_E^2+n \alpha^2\right) c_{2n} \approx \gamma_E^2 (c_0-2 c_2+12 c_4-120 c_6)-2\alpha^2( c_2-12 c_4+180 c_6) \nn \\ & & \qquad\qquad\qquad\qquad\qquad\qquad\quad\ \ \approx 0.2828 \nn\,, \end{eqnarray} where we used the values in \eqs{alphabeta}{cn}. We can compare to the result of taking the $A\to 0 $ limit of the exact expression for $I_b^k$ in \eq{Ibk} before expanding the Bessel function in a series representation. Using \eqss{Ibk}{Ib-L}{logderivatives}, we obtain \begin{equation} \label{Ibkexact} I_{b,\text{exact}}^k = \frac{2}{i q_T^2} \frac{1}{\sqrt{\pi A}} \int dt\, f(t) \left[ \hat \partial_\chi \right]^k e^{ \frac{(1+t)^2}{A}-2L (1+t) } \end{equation} Using \eq{partialchiL}, we can compute \begin{equation} \label{chiderivative} \hat\partial_\chi e^{ \frac{(1+t)^2}{A}-2L (1+t) } = \Bigl( - \frac{1}{2A} \partial_L - \ln\chi\Bigr) e^{ \frac{(1+t)^2}{A}-2L (1+t) } = \Bigl(\frac{1+t}{A} -\ln\chi\Bigr)e^{ \frac{(1+t)^2}{A}-2L (1+t) } \,, \end{equation} which we can re-express as a derivative with respect to $t$, \begin{equation} \label{tderivative} \Bigl(\frac{1+t}{A} -\ln\chi\Bigr)e^{ \frac{(1+t)^2}{A}-2L (1+t) } = \Bigl(\frac{1}{2}\frac{d}{dt} + \ln\frac{\mu_L e^{\gamma_E}}{q_T}\Bigr)e^{ \frac{(1+t)^2}{A}-2L (1+t) } \,, \end{equation} where we also used \eq{Lchi} in the last equality.
\eqs{chiderivative}{tderivative} then allow us to make the replacement in \eq{Ibkexact}: \begin{eqnarray} \left[ \hat \partial_\chi \right]^k &\to& \left[ \frac{1}{2}\frac{d }{d t} +\ln\frac{\mu_L e^{\gamma_E} }{q_T}\right]^k = \sum_{\ell=0}^{k} \binom{k}{\ell} \left[\ln\frac{\mu_L e^{\gamma_E} }{q_T}\right]^{k-\ell} \left[ \frac{1}{2}\frac{d }{d t}\right]^\ell \,. \end{eqnarray} Then, integrating repeatedly by parts in \eq{Ibkexact}, we obtain \begin{equation} I_{b,\text{exact}}^k = \frac{2}{iq_T^2} \frac{1}{\sqrt{\pi A}} \sum_{\ell=0}^k {{k}\choose{\ell}} \frac{(-1)^\ell}{2^\ell} \int dt \biggl[\frac{d^\ell}{dt^\ell} f(t)\biggr]\left[\ln\frac{\mu_L e^{\gamma_E}}{q_T} \right]^{k-\ell}e^{ \frac{(1+t)^2}{A}-2L (1+t) } \,.\label{Ibkexact-parts} \end{equation} We can now take the $A\to 0$ limit in \eq{Ibkexact-parts}, and in so doing turn the Gaussian into a Dirac delta function: \begin{equation}\label{basicI} \lim_{A\to 0} \frac{1}{\sqrt{\pi A}} \int dt\, e^{ \frac{(1+t)^2}{A}-2L (1+t) } \left[ \frac{d }{d t}\right]^\ell f(t) = i \int dx\, \delta (x-i-ic) \left[ \frac{d }{id x}\right]^\ell f(c+ix) = i \frac{d ^\ell }{d t^\ell} f (-1)\,, \end{equation} recalling we pick $c=-1$ in $t=c+ix$, and where we used the Dirac delta identity: \begin{equation} \lim_{A\to 0} \exp[-x^2/A]= \sqrt{\pi A} \,\delta (x)\,. \end{equation} Thus, in the $A\to 0$ limit, \begin{equation} I_{b,\text{exact}}^k \to \frac{2}{q_T^2} \sum_{\ell=0}^k {{k}\choose{\ell}} \frac{(-1)^\ell}{2^\ell} \biggl[\frac{d^\ell}{dt^\ell} f(t)\biggr]_{t=-1}\left[\ln\frac{\mu_L e^{\gamma_E}}{q_T} \right]^{k-\ell} \,.\label{Ibkexact-limit} \end{equation} We compute the exact derivatives: \begin{equation} \label{dfdt} \frac{d ^\ell }{d t^\ell} f(-1) =\left. \frac{d ^\ell }{d t^\ell} \frac{\Gamma(-t) }{\Gamma(1+t)}\right|_{t=-1} = \begin{cases} 0 & \text{for } \ell=0 \\ 1 & \text{for } \ell=1 \\ 4\gamma_E & \text{for } \ell=2 \\ 12\gamma_E ^2 & \text{for } \ell=3 \end{cases} \end{equation} Inserting \eq{dfdt} into \eq{Ibkexact-limit}, we have \begin{eqnarray} \label{Ibkexact-final} I_{b,\text{exact}}^0 &\to& 0 \,,\nn\\ I_{b,\text{exact}}^1 &\to& -\frac{1}{q_T^2} \,,\nn\\ I_{b,\text{exact}}^2 &\to& -\frac{2}{q_T^2} \left[ \ln\frac{\mu_L e^{\gamma_E}}{q_T} - \gamma_E \right] \,,\nn\\ I_{b,\text{exact}}^3 &\to&-\frac{3}{q_T^2} \left[\ln^2\frac{\mu_L e^{\gamma_E}}{q_T} -2\gamma_E \ln\frac{\mu_L e^{\gamma_E}}{q_T} + \gamma_E^2 \right] \,. \end{eqnarray} Comparing \eq{Ibkexact-final} to \eq{Ibk-limit}, we find that the exact values of $f_k$ are $1$, $1$, and $\gamma_E^2\approx 0.33317$ for $k=1,2,3$; the approximate values in \eq{fn} agree to better than 1\% for $k=1,2$ and to within about 15\% for $k=3$. Note that the term $I_b^k$ at $k=3$ is the $O(\alpha_s^2)$ contribution induced by our prescription in \ssec{final}, as it multiplies the coefficient $\widetilde F^{(2)}_3$ in \eq{Ftwoloop}, which is suppressed by another order of $\alpha_s$ at the cross section level. Taking this into account, the error in $f_3$ itself induces an error in the cross section that is an order of magnitude smaller still, much smaller than the total theoretical error at NNLL accuracy, and so we need not be concerned about it. If desired, one can go beyond this accuracy by including higher-order Hermite polynomials in the computation of $I_b^k$, and thus of $f_k$.
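These consistency constants are cheap to check numerically. A short Python sketch, using the truncated coefficients quoted in \eq{cn} (so the output reproduces the approximate values above up to the rounding of the quoted $c_n$, not the exact ones):
\begin{verbatim}
# Check of the A -> 0 constants f_k in eq. (fn) from the truncated
# Hermite coefficients of eq. (cn); the exact values are 1, 1, gamma_E^2.
import numpy as np

gE = np.euler_gamma
alpha2 = 2*gE**2 + np.pi**2/6 + 4.0      # alpha^2 = a0 + 4, eq. (alphabeta)
c = {0: 1.02248, 2: 0.02254, 4: 0.00206, 6: 3.42e-5,
     1: 1.06808, 3: 0.02173, 5: 0.00103}

f1 = c[0] - 2*c[2] + 12*c[4] - 120*c[6]
f2 = c[1] - 6*c[3] + 60*c[5]
f3 = gE**2*f1 - 2*alpha2*(c[2] - 12*c[4] + 180*c[6])

print(f1, "vs exact 1")                  # ~ 0.9980
print(f2, "vs exact 1")                  # ~ 0.9995
print(f3, "vs exact", gE**2)             # ~ 0.2823 vs 0.33317
\end{verbatim}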
This delta-function limit means that the fixed order series probes very narrowly the behavior of $f(-1+ix)$ near $x=0$, and the more accurate our expansion is near $x=0$, the higher the order in $\alpha_s$ that we can reproduce accurately in the fixed order (high $q_T$) regime. Practically, we would like to reproduce the one-loop fixed order cross section, since we would be matching our NNLL predictions to that result in the high $q_T$ region, which means we need our expansion to match the exact result up to the second order in the Taylor expansion of $f$, \emph{i.e.}~ $\ell=0, 1, 2$. \subsubsection{Matching to the fixed order cross section} \label{ssec:matching} The resummed formula \eq{thefinalresult} accurately predicts the spectrum for relatively low (but not too low) values of $q_T$. At large $q_T\sim Q$, the nonsingular terms in $q_T$ become just as big as the logs of $q_T/Q$ themselves, and we should use the fixed-order perturbative expansion for the cross section there. It has been shown in the context of $B\rightarrow X_s +\gamma$ \cite{Ligeti:2008ac}, thrust \cite{Abbate:2010xh}, as well as the Higgs jet veto calculation \cite{Berger:2010xi}, that if one does not turn off the resummation, then one can overestimate the cross section, in the region where fixed order perturbation theory should suffice, by an amount which goes beyond the canonical error band in the fixed order result. This overshoot happens despite the fact that the resummed terms are formally sub-leading in the expansion. The reason for this overshoot has been shown \cite{Ligeti:2008ac, Berger:2010xi, Abbate:2010xh} to be that there are cancellations between the singular and nonsingular terms in the tail region, and that this cancellation occurs only if the proper scale is chosen in the logarithms. We can smoothly combine the resummed and fixed-order formulas by turning off resummation in \eq{thefinalresult} in the high $q_T$ region using profiles in $\mu $ and $\nu$ \cite{Abbate:2010xh} and match the result to the one-loop ($\mathcal{O}(\alpha_s)$) full theory cross section. Our resummation as given in \eq{resummedIb} (and evaluated in \eq{thefinalresult}) is neatly divided into two parts: the hard function, which runs in $\mu$, and the functions $C$ and $I_b$, which implement the $\nu$ running. To turn off resummation at large $q_T$ we implement the following profiles for both $\mu, \nu$ in \eq{resummedIb}: \begin{subequations} \begin{align} \label{run} \mu_L \to \mu_\run (q_T) &= \mu_L(q_T) ^{1-\zeta(q_T)}\, \abs{\mu_H}^{\zeta(q_T)} \,, \\ \nu_L^* \to \nu_\run (q_T) &= \nu_L^*(b_0;\mu_\run)^{1-\zeta(q_T)}\, \nu_H^{\zeta(q_T)} \\ &= [\nu_L(\mu_\run b_0)^{-1+p}]^{1-\zeta(q_T)} \nu_H^{\zeta(q_T)} \nn \end{align} \end{subequations} The function $\zeta(q_T)$ is chosen so that at low values of $q_T$, where resummation is important, its value is 0, while for values near $Q$ it approaches 1. $\mu_L(q_T)$ is given by \eq{scale} and illustrated in \fig{scaleL}, and we have defined $\nu_\run$ so that, inside \eq{nuLstar} for $\nu_L^*$, $\mu_L$ is also set to $\mu_\run$. This is designed so that, as $\zeta\to 1$ and we move out of the resummation region, not only is the resummation of logs of $\mu_L/\mu_H$ and $\nu_L/\nu_H$ turned off, but so is the shifting of logs of $\mu_L b_0$ into the soft rapidity exponent in \eq{Vstar}. The exponent $p$ defined in \eq{eq:n} is also now evaluated at $\mu_\run$.
The choice of $\zeta(q_T)$ that we make for this function is: \begin{eqnarray} \zeta(q_T) = \frac{1}{2} \left(1+\tanh \left[\rho\left(\frac{q_T}{q_0}-1\right)\right]\right)\,. \end{eqnarray} Here $q_0$ determines the central value for the transition and $\rho$ determines its rate. In practice we use $\rho=3$. The value of $q_0$ is determined by the scale at which the nonsingular pieces become as important as the resummed singular cross section. The profiles in $\mu$ and $\nu$ for the case of the Higgs spectrum are shown in \fig{profile}. (Here $q_0$ has been chosen to be 40 GeV for the central profile.) We also probe the effect of varying the profiles by varying the value of $q_0$, in our case between 30 GeV and 50 GeV, as shown in Fig. \ref{profile_var}. For each of these profiles, we consider the scale variation by a factor of 2 for $\mu ,\nu$ (Fig. \ref{fig:profile}). For the case of DY, we use a similar value of $q_0$. \begin{figure} \centerline{\scalebox{.55}{\includegraphics{profile_var.pdf}}} \vskip-0.5cm \caption[1]{Effect of variation of the transition point $q_0$ in the profile functions for $\mu$, $\nu$ for the case of the Higgs $q_T$ spectrum. $\abs{\mu_H}$ here is $M_h$ = 125 GeV.} \label{profile_var} \end{figure} \begin{figure} \centerline{\scalebox{.55}{\includegraphics{profiles.pdf}}} \vskip-0.5cm \caption[1]{Central profile ($q_0$= 40 GeV) in $\mu$, $\nu$ along with the effect of the scale variations \eq{scalevariations} for the case of the Higgs $q_T$ spectrum. $\abs{\mu_H}$ here is $M_h$ = 125 GeV.} \label{fig:profile} \end{figure} This procedure is straightforward to implement for the hard function running; let us now see what effect it has on the $\nu$ running. Going back to the $\nu$ running, our exponent in \eq{UVdef} with the scale choice \eq{nuLstar} for $\nu_L$ looks like \begin{eqnarray} V = \exp\biggl[ \gamma_\nu^S(\mu_\run) \ln\left( \frac{\nu_H}{\nu_L^*} \right) \biggr] \end{eqnarray} Putting in our choice for $\nu_L$, we have \begin{eqnarray} V = \exp\biggl[ \gamma_{\nu}^S(\mu_\run) \ln\left( \frac{\nu_H}{(\nu_L^*)^{1-\zeta} \nu_H^\zeta} \right) \biggr] = \exp\biggl[(1-\zeta) \gamma_\nu^S(\mu_\run) \ln\left( \frac{\nu_H}{\nu_L^*} \right) \biggr] \end{eqnarray} So the effect is merely to multiply the argument of the exponent by a factor of $1-\zeta(q_T)$. Notice that we have also put in $\mu_\run$ in the $\nu$ anomalous dimension since it is a function of $\mu_L$. This is basically the same as $A \rightarrow (1-\zeta)A$ and $\gamma_{RS}\to (1-\zeta)\gamma_{RS}$ in the expressions \eqs{VGaussian}{ACUeta} for the rapidity evolution factor $V_\Gamma$ inside $I_b$ in \eq{Ibdef}. Thus, using these profile scales simply modifies the definitions in \eq{ACUeta} to: \begin{align} \label{ACprofile} A(\mu_L,\nu_L,\nu_H;\zeta) &= -(1-\zeta)\mathbb{Z}_S\Gamma[\alpha_s(\mu_\run)] (1-p) \\ &= -(1-\zeta)\mathbb{Z}_S \Gamma[\alpha_s(\mu_\run)] \frac{1}{2} \Bigl[ 1 + \frac{\alpha_s(\mu_\run)\beta_0}{2\pi} \ln\frac{\nu_H}{\nu_L}\Bigr] \nn \\ C(\mu_L,\nu_L,\nu_H;\zeta) &= \exp\left\{ A(\mu_L,\nu_L,\nu_H;\zeta) \ln^2\chi + (1-\zeta)\gamma_{RS} [\alpha_s(\mu_\run)] \ln \frac{\nu_H}{\nu_L}\right\} \,, \nn \end{align} and $\Omega,\chi$ in \eq{ACUeta} remain unchanged (with the exception that $\mu_L\to \mu_\run$). The final expressions for $I_b^0$ and $I_b^k$ given by \eqs{IbHermite}{Ibk-cH} are modified accordingly. So what we are doing is smoothly taking the limit $A \rightarrow 0$ (and $\gamma_{RS}\to0$) as we enter the high $q_T$ region.
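For concreteness, the profile interpolation in \eq{run} together with this $\zeta(q_T)$ is only a few lines of code. In the sketch below, the low-scale function standing in for the solution of \eq{scale} is a toy placeholder, not our actual $\mu_L(q_T)$; the numerical values are those quoted above for the Higgs case.
\begin{verbatim}
# Sketch of the profile scales, eq. (run), with the tanh transition
# zeta(q_T). muL_toy is a toy placeholder for the momentum-space scale
# of eq. (scale), not our actual choice.
import numpy as np

rho, q0 = 3.0, 40.0                      # transition rate and center [GeV]
muH_abs = 125.0                          # |mu_H| = M_h for the Higgs case

def zeta(qT):
    return 0.5*(1.0 + np.tanh(rho*(qT/q0 - 1.0)))

def mu_run(qT, muL):
    z = zeta(qT)
    return muL(qT)**(1.0 - z)*muH_abs**z

muL_toy = lambda qT: np.maximum(qT, 2.0)
for qT in (5.0, 20.0, 40.0, 80.0):
    print(qT, zeta(qT), mu_run(qT, muL_toy))  # zeta: ~0 low qT, ~1 near Q
\end{verbatim}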
We already know from the previous section (see \eq{Ibk-limit}) that in the $A\to 0$ limit $I_b^0$ goes to 0 smoothly, which means the resummation factor is 0, and each $I_b^k$ goes to the appropriate fixed order limit as well. The central profiles we choose for the scales are: \begin{equation}\label{central} \left\{\mu_H,\, \nu_H,\,\mu,\, \nu\right\}_\text{central}=\left\{iQ,\, Q,\, \mu_\run(q_T),\, \nu_\run(q_T) \right\} \end{equation} $\mu_H$ is chosen as $i Q$, which implements the resummation of enhanced $\pi^2$ terms at each order in $\alpha_s$, which improves the perturbative convergence \cite{Ahrens:2008qu,Ahrens:2008nc}. While implementing the hard function running, we take the absolute value of $U_H$. We make six scale variations for $\mu, \nu$ by a factor of 2 about their central values for each of the profiles shown in Fig. \ref{profile_var}: \begin{gather} \frac{\mu_L}{2} \leftarrow \mu_L \rightarrow 2\mu_L \nn \\ \frac{\nu_L}{2} \leftarrow \nu_L\rightarrow 2\nu_L \label{scalevariations} \\ \Bigl(\frac{\mu_L}{2}, \frac{\nu_L}{2}\Bigr)\leftarrow (\mu_L,\nu_L)\rightarrow (2\mu_L, 2\nu_L)\,, \nn \end{gather} to estimate higher order terms as shown in \fig{profile}. In the first two variations in \eq{scalevariations} $\mu_L$ and $\nu_L$ are each varied independently, while in the last one, they are varied simultaneously. The anticorrelated variation ($(\mu_L/2, 2\nu_L)\leftarrow (\mu_L,\nu_L)\rightarrow ( 2\mu_L, \nu_L/2)$) is not included to avoid double counting \cite{Neill:2015roa}. We take the largest envelope in $\mu,\nu$ and $q_0$ variations as our estimate of the error band. In principle, we can make variations at either end of the running (i.e. at the high scales $\mu_H$ and $\nu_H$, or the low scales $\mu_L$ and $\nu_L$). In practice, since the value of $\alpha_s$ is larger at lower scales, the scale variation at the low scale of running yields the largest error band. There is implicit $q_T$ dependence through $\zeta(q_T)$, $\mu_\run$, and $\nu_\run$. Now we are ready to give results for our final resummed cross section matched to the fixed-order QCD result. We give an explicit expression for the transverse spectrum, which can be written as a sum of resummed and nonsingular parts. We match the resummed and perturbative QCD results such that the final result is valid in both the small and large $q_T$ regions. This is done by adding to the resummed result the nonsingular terms, in which all singular logarithmic terms reproduced by the EFT result in \eq{sing} are subtracted from the perturbative result \begin{equation} \sigma = \sigma^\text{res} + \sigma^\text{ns}\,,\quad \sigma^\text{ns}(\mu) = \sigma^\text{pert} (\mu) -\sigma^\text{sing}(\mu) \,, \label{nsdef} \end{equation} where the differential in $q_T^2$ and rapidity $y$ is implied. The fixed-order $\sigma^{\text{ns}}$ is evaluated at a scale $\mu$, which we choose to be equal to $\abs{\mu_H}$, the high scale to which the profile \eq{run} goes for large $q_T$. In this section we give explicit expressions for $\sigma^\text{res}$ and $\sigma^\text{sing}$, and \appx{per} gives $ \sigma^\text{pert}$. The resummed cross section $\sigma^\text{res}$ is given by \eq{thefinalresult}, with the modifications from the scale profiles \eqss{run}{ACprofile}{scalevariations} implemented.
Meanwhile the expansion \eq{Fexpansion} of the fixed-order coefficient $\widetilde F$ in \eq{Ibdef} becomes an expansion in logs of $\mu_\run b_0$, since the $\hat\partial_\chi$ derivatives in \eq{logderivatives} bring down powers of $\ln \mu_\run b_0$ inside the $b$ integrand: \begin{equation} \label{Frunexpansion} \widetilde F(b,x_1,x_2,Q;\mu_\run,\nu_\run,\nu_H) = \sum_{n=0}^\infty \sum_{k=0}^{2n} \Bigl(\frac{\alpha_s(\mu_\run)}{4\pi}\Bigr)^n \widetilde F^{(n)}_k \ln^k\mu_\run b_0\,, \end{equation} where the coefficients $\widetilde F_k^{(n)}$, which were given to $\mathcal{O}(\alpha_s^2)$ at NNLL accuracy in \eqss{Ftree}{Foneloop}{Ftwoloop} for $\zeta = 0$ (the pure resummation region), now become: \begin{align} \label{Fprofile0} \widetilde F^{(0)}_0 &= f_i(x_1,\mu_\run)f_{\bar i}(x_2,\mu_\run) \\[12pt] \label{Fprofile1} \widetilde F^{(1)}_2 &= f_i(x_1,\mu_\run)f_{\bar i}(x_2,\mu_\run) \zeta\mathbb{Z}_S \frac{\Gamma_0}{2} \\ \widetilde F^{(1)}_1 &= f_i(x_1,\mu_\run)f_{\bar i}(x_2,\mu_\run) \biggl[ \mathbb{Z}_S \Gamma_0 \Bigl( (1\!-\!\zeta) \ln\frac{\nu_L}{\mu_L} + \zeta\ln \frac{\nu_H}{\abs{\mu_H}} \Bigr) + \mathbb{Z}_f \Gamma_0 \ln\frac{\nu_H^2}{Q^2} + 2\gamma_f^0\biggr] \nn \\ &\quad - [2P_{ij}^{(0)}\otimes f_j(x_1,\mu_\run)] f_{\bar i}(x_2,\mu_\run) - f_i(x_1,\mu_\run) [2P_{\bar i j} \otimes f_j(x_2,\mu_\run)] \nn \\ \widetilde F^{(1)}_0 &= f_i(x_1,\mu_\run) f_{\bar i}(x_2,\mu_\run) c_{\widetilde S}^1 \nn \\ &\quad + [I_{ij}^{(1)}\!\otimes\! f_j(x_1,\mu_\run)] f_{\bar i}(x_2,\mu_\run) + f_i(x_1,\mu_\run)[ I_{\bar i j}^{(1)}\!\otimes\! f_j(x_2,\mu_\run)]\,.\, \nn \end{align} while the two-loop coefficients $\widetilde F_k^{(2)}$ remain zero at NNLL accuracy. In the pure fixed order limit $\zeta=1$, the resummation is turned off as $\mu_\run=\nu_\run=|\mu_H|=\nu_H$, and we see the term containing explicit $\mu_L,\nu_L$ in \eq{Fprofile1} vanish. The integrals $I_b^k$ take the fixed order form in \eq{Ibkexact-final} with $\mu_L\to \mu_H$. In this limit, $\widetilde F_0^{(0,1)}$ actually does not contribute, due to the fact that $I_b^0\to 0$. Either by putting \eq{Fprofile1} and \eq{Ibkexact-final} together or by inverse transforming \eq{sing} to momentum space, we obtain the singular part of the fixed-order cross section at $\mathcal{O}(\alpha_s)$ as \begin{align}\label{sigsing} \frac{\sigma^\text{sing}}{\sigma_0}&= \biggl\{ \! \delta(q_T^2)+\frac{\alpha_s(\mu)}{4\pi}\biggl(\! (c^1_H \!+\! c^1_{\widetilde{S}} \!+\! C_i\pi^2) \delta(q_T^2)- \gamma_f^0 \biggl[\frac{1}{q_T^2}\biggr]_+ \! \! -\Gamma_0\biggl[\frac{\ln(q_T^2/Q^2)}{q_T^2}\biggr]_+ \biggr)\! \biggr\}f_i(x_1) f_{\bar i}(x_2) \nn\\ & \quad +\frac{\alpha_s(\mu)}{4\pi}\biggl\{ \! \delta(q_T^2)\,[ I^{(1)}_{ij}\! \otimes \! f_{j}](x_1)\, f_{\bar i}(x_2)+\biggl[\frac{1}{q_T^2}\biggr]_+ [P^{(0)}_{ij} \! \otimes\! f_{j}](x_1)\, f_{\bar i}(x_2)+ x_1\! \leftrightarrow \! x_2 \! \biggr\} . \end{align} The coefficients in the singular piece of the cross section can be found in \appx{fixedorder}. Together with $\sigma^\text{pert}$ given in \appx{per}, \eq{sigsing} and \eq{nsdef} give the nonsingular part of the cross section at $\mathcal{O}(\alpha_s)$, which we add to \eq{thefinalresult} to obtain the final resummed and matched cross section up to NNLL$+\mathcal{O}(\alpha_s)$.
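The bookkeeping of this matching and of the scale-variation envelope is simple enough to summarize in code. In the sketch below, \texttt{sigma\_res}, \texttt{sigma\_pert} and \texttt{sigma\_sing} are hypothetical stand-in callables for the corresponding pieces of \eq{nsdef} and \eq{thefinalresult}; only the variation and envelope logic is meant literally.
\begin{verbatim}
# Sketch of the matched cross section, eq. (nsdef), and the uncertainty
# envelope from the six scale variations of eq. (scalevariations) for
# each q0 profile. sigma_res/sigma_pert/sigma_sing are stand-ins.
def sigma_matched(sigma_res, sigma_pert, sigma_sing,
                  muL, nuL, q0, muH_abs):
    # the nonsingular piece is evaluated at the fixed-order scale |mu_H|
    return (sigma_res(muL, nuL, q0)
            + sigma_pert(muH_abs) - sigma_sing(muH_abs))

def envelope(sigma, muL, nuL, q0_list=(30.0, 40.0, 50.0)):
    # independent, then simultaneous, factor-of-2 variations; the
    # anticorrelated combinations (muL/2, 2 nuL), (2 muL, nuL/2) excluded
    factors = [(0.5, 1), (2, 1), (1, 0.5), (1, 2), (0.5, 0.5), (2, 2)]
    values = [sigma(f_mu*muL, f_nu*nuL, q0)
              for q0 in q0_list for (f_mu, f_nu) in factors]
    return min(values), max(values)
\end{verbatim}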
\subsection{Final resummed cross section in momentum space} \label{ssec:final} Then, the resummed cross section in momentum space \eq{resummedIb}, using \eqss{VGaussian}{Ibk}{Ibk-cH} to express the integral $I_b$, implementing the profile scales \eq{run}, and matching onto the fixed-order QCD cross section for large $q_T$, is given by the expression: \begin{equation} \label{thefinalfinalresult} \boxed{ \begin{split} \frac{d\sigma}{dq_T^2 dy} &= \frac{1}{2}\sigma_0 C_t^2(M_t^2,\mu_T)H(Q^2,\mu_H) U(\mu_\run,\mu_H, \mu_T) C(\mu_\run,\nu_\run,\nu_H) \\ &\quad\times \sum_{n=0}^\infty \sum_{k=0}^{2n} \Bigl(\frac{\alpha_s(\mu_\run)}{4\pi}\Bigr)^n \widetilde F^{(n)}_k(x_1,x_2,Q;\mu_\run,\nu_\run,\nu_H)I_b^k(q_T;\mu_\run,\nu_\run,\nu_H;\alpha,a_0;\beta,b_0) \\ &\quad + \frac{d\sigma_{\text{ns}}(\abs{\mu_H})}{dq_T^2 dy}\,. \end{split} } \end{equation} This fairly compact expression still has many pieces to it. We provide a roadmap to where to find them all: The basic cross sections $\sigma_0$ for Higgs production ($gg\to H$) and for DY ($q\bar q\to \gamma^*$) are given by: \begin{equation} \sigma_0^{\text{Higgs}} =\frac{M_h ^2}{576 \pi v^2 s}\,,\quad \sigma_0^{\text{DY}} =\frac{4 \pi (\alpha_{\text{em}}(Q))^2e_i^2}{3 N_c Q^2 s}\,, \end{equation} where the vacuum expectation value $v^2=1/(\sqrt{2} G_F)$ is determined by the Fermi coupling $G_F\approx 1.1664\times 10^{-5}\, \text{ GeV}^{-2}$ and $e_i$ is the electric charge of quark flavor $i$. The hard function $H$ is given by \eqss{Hexp}{Cexp}{Cn}, with the necessary one-loop non-cusp anomalous dimensions for DY and Higgs given by \eqs{gaHexp}{gaHexp-g} respectively and the one-loop constants given by \eq{c1}. The top matching coefficient $C_t^2$ for Higgs production is given by \eq{Ct} while it is set to 1 for DY. The $\mu$ (virtuality RG) evolution kernel $U(\mu_L,\mu_H,\mu_T)$ is given in \eq{UVdef}, with the parts of the exponent $K_{\Gamma,\gamma},\eta_\Gamma$ defined and expanded up to NNLL accuracy in \appx{def-K}. These all contain dependence on the cusp anomalous dimension, whose coefficients $\Gamma_n$ for DY and Higgs are given by \eq{Gacuspexp}. The factor $C(\mu_L,\nu_L,\nu_H)$, which came from expressing the exponentiated ``conformal'' part of the rapidity evolution kernel $V_\Gamma$ in \eq{Vstar} as a Gaussian in $\ln b$, is defined in \eq{ACUeta}, and expanded to NLL and NNLL accuracy in \appx{Gaussian}. The coefficients $\widetilde F^{(n)}_k$ in the expansion \eq{Frunexpansion} of the $\widetilde F$ defined in \eq{fixedorderfactor} that contains the fixed-order terms in the soft function, TMDPDFs, and ``non-conformal'' part of the rapidity evolution kernel $V_\beta$ defined in \eq{VGammaVbeta} have been given above in \eqs{Fprofile0}{Fprofile1} up to the order to which we shall need them for NNLL accuracy. \begin{figure} \centerline{\scalebox{.55}{\includegraphics{Hcomparison.pdf}}} \vskip-0.5cm \caption[1]{Systematic improvement in the accuracy of Higgs (left) and DY (right) cross sections with increasing number of terms, differential in $y$ (at $y=0$) and $q_T$ (and $Q^2$ for DY, shown at $Q = 125\text{ GeV}$ for comparison). Exact (red) gives resummed cross section without Hermite expansion (\emph{i.e.}~ numerical $b$ integration). $N=6$ (blue) is the result with six terms in the Hermite expansion, three each for real and imaginary terms. $N=7$ (black) is the result with seven terms, four for real and three for imaginary. Here we plot only the purely resummed result, i.e. 
with no matching to the fixed order cross section.} \label{fig:Hcompare} \end{figure} The integrals $I_b^k$ which we defined in \eq{Ibk} are given in final evaluated form in \eq{Ibk-cH}, the calculation of which formed the bulk of this \sec{analytic} and is one of the main results of this paper. That result is in terms of derivatives \eq{dcH} of the integrals $\mathcal{H}_n$ of Hermite polynomials against the Gaussian in \eq{cH2n} appearing in $I_b^0$ in \eq{IbHermite}, the explicit results for which are given by \eq{Hnresult}. Those integrals depend on the parameters $\alpha,a_0$ and $\beta,b_0$ that we used in the expansion of the function $\Gamma(1-ix)^2$ in terms of Gaussian-weighted Hermite polynomials in \eq{Hermiteexpansion}. For the numerical results in this paper, we used the values \begin{equation} a_0 =2\gamma_E^2+\frac{\pi^2}{6}\approx 2.31129\quad \text{and}\quad b_0=\frac{2}{3}\gamma_E^2+\frac{\zeta_3}{3\gamma_E} +\frac{\pi^2}{6}\approx 2.56122 \,, \end{equation} which were given in \eq{a0b0}, and values of $\alpha,\beta$ given by $\alpha^2-a_0=4$ and $\beta^2-b_0=4$, which were given in \eq{alphabeta}. These choices are by no means unique; other values can also be chosen, which would then give different coefficients in the Hermite expansion. Our choices allowed sufficiently accurate representation of the exact function $\Gamma(1-ix)^2$ with an economical basis of a few terms each for the real and imaginary parts, the coefficients of which we gave in \eq{cn}. \fig{Hcompare} illustrates the agreement of our resummed cross section \eq{thefinalfinalresult} to NNLL accuracy with a total of six or seven (four real, three imaginary) terms in the Hermite expansion vs. the result of numerically integrating the exact $b$ integrand in \eq{resummedIb}. The truncation error lies well within the perturbative NNLL uncertainty band. \tab{NkLL} gives the orders to which the anomalous dimensions and fixed-order pieces of \eq{thefinalresult} are to be truncated to achieve NLL, NNLL, \emph{etc.} perturbative accuracy in the resummed cross section. The central values of the resummation scales we choose in \eq{thefinalfinalresult} are given in \eq{central}, wherein the central values for $\mu_L=\nu_L$ are given by the solution of \eq{scale} and illustrated in \fig{scaleL}. The perturbative uncertainties are then estimated by performing the variations \eq{scalevariations}. The final piece $d\sigma_{\text{ns}}/dq_T^2 dy$ is defined by \eq{nsdef} and can be obtained from the fixed-order results in \appx{per}, subtracting off the singular terms \eq{sigsing}. It is good to remind ourselves at this point that the final formula \eq{thefinalfinalresult} is a threefold expansion as described in \ssec{intro3}: a perturbative expansion in $\alpha_s$ and resummed logs, an additional fixed-order expansion of the non-conformal piece $V_\beta$ of the rapidity evolution inside $\widetilde F$, and an expansion in Hermite polynomials in evaluating each $I_b^k$. It is thus quintessentially a ``formula'' according to a delightful definition we recently encountered: \emph{an expression given by an $=$ sign with a controlled error of known parametric form}.\footnote{A. Manohar, ``The Photon PDF,'' talk at \emph{Lattice QCD} workshop, Santa Fe, NM, Aug. 
28--Sep 1, 2017, describing work in \cite{Manohar:2016nzj,Manohar:2017eqh}.} \section{Remarks on nonperturbative region of $q_T$} \label{sec:non_pert} The final result \eq{thefinalresult} for the resummed cross section is obtained by doing an integral \eq{Ibdef} over a wide range of $b$. It is a reasonable assumption that the perturbative resummation will hold as long as $1/b > \Lambda_{\text{QCD}} \sim 500\text{ MeV}$. This is roughly the value at which we begin to see the Landau pole in the $b$ space resummation scheme (see \fig{nonpert}). Beyond this value of $b$, we expect that nonperturbative effects will play a nontrivial role. Although it is not the goal of this paper to include or advance any new method to account for these nonperturbative effects, we will freely make some loose observations here, mainly delineating where we do and do not need to worry about them. While they are certainly important, we will have to leave their incorporation into our results to future work. \subsection{Remarks on nonperturbative effects} As we approach such low values of $1/b$, it is no longer possible to match perturbatively the beam functions onto PDFs as in \eq{beammatching}. Instead we need to retain the complete transverse momentum dependent beam function, which is now a fully nonperturbative function that needs to be fit from data. In the $b$ space resummation scheme, where $\mu \sim 1/b$, giving the result \eq{inverseFT}, the running using the perturbative anomalous dimension will work only for $1/b \gg \Lambda_{\text{QCD}}$. A sensible thing to do in this scheme is to freeze the resummation at $ 1/b \sim 2 \Lambda_{\text{QCD}}$ and evaluate the beam function at this frozen low scale, which is still perturbative. Interestingly, for the momentum resummation scheme leading to \eq{thefinalresult}, while we still need to replace the PDFs with the full nonperturbative TMDPDFs at low $q_T$, the resummation is always in the perturbative regime since $\mu \gg \Lambda_{\text{QCD}}$ for any value of $q_T$. \begin{figure} \centerline{\scalebox{.52}{\includegraphics{nonpert.pdf}}} \vskip-0.5cm \caption[1]{$b$ integrand for different invariant masses, in $b$ space resummation scheme \eq{inverseFT} versus in $p$ space resummation scheme \eq{Ibdef}. The Landau pole in the $b$ space integrand signals the onset of nonperturbative physics. The $p$ space integrand in our scheme vanishes smoothly for large $b$, although this does not mean nonperturbative effects are not important for small $q_T$.} \label{fig:nonpert} \end{figure} For the case of the soft function, since both resummation schemes use a $b$ dependent value for $\nu$, we need to freeze out the resummation at a perturbative value of $\nu_L \sim 1/b^* > \Lambda_{\text{QCD}}$ and fit the soft function at this scale from data. All this seems rather complicated, and it might appear that without complete information about the nonperturbative functions, it is not possible to give a prediction for the cross section. However, once again the nature of the resummed perturbative exponent comes to the rescue. Even before we reach a nonperturbative value, the double logarithmic term in $b$ in the exponent completely damps out the integrand. Then the nonperturbative corrections become irrelevant, since the region of $b$ space in which they start contributing is heavily suppressed. If we consider $b$ space resummation, the damping provided by the resummed exponent depends mainly on the hard scale $Q$ and the cusp anomalous dimension.
For the Higgs, $Q$ is fixed and the cusp anomalous dimension is large, so the damping is always large. Increasing the center-of-mass energy only changes the $x$ value where the PDFs are evaluated, but there is only a mild dependence on this factor. So, we can safely neglect any nonperturbative effects and rely completely on the perturbatively resummed cross section. For the case of DY, the cusp anomalous dimension is much smaller and the value of $Q$ is variable. So if we go to low $Q$, nonperturbative effects become important; see \fig{nonpert}. As can be seen from this figure, $b \sim 3\text{ GeV}^{-1}$ is a rough estimate of the value beyond which nonperturbative effects become important; this is where we begin to observe the divergence due to the Landau pole in the $b$ space resummation scheme. For low values of $Q$, several different ways of incorporating nonperturbative effects using model functions (which effectively cut off the Landau pole) in $b$ space resummation have been proposed \cite{Collins:2014jpa,Sun:2013dya,DAlesio:2014mrz,Scimemi:2017etj,Qiu:2000hf}. In this paper, we stick to showing results for larger values of $Q$ where these effects are not as important. A detailed discussion of how to handle nonperturbative effects in our hybrid impact parameter-momentum space resummation scheme will be given in future work. We suspect\footnote{Thanks to D. Neill}, among other things, that the nature of our asymptotic $V_\beta$ expansion in \eqs{Vbeta}{VGammaVbeta} will in fact give clues about the best way to include nonperturbative effects together with our perturbative resummation scheme. This follows from the fact that in the $b$ space resummation scheme, the Landau pole, related to the running of $\alpha_s$, was the indicator of the onset of nonperturbative effects. In our scheme, we have expanded out the running of $\alpha_s$ in the form of the $V_{\beta}$ function, and it is natural that the breakdown of this expansion will dictate the onset of nonperturbative effects. (See \cite{Scimemi:2016ffw} for a recent study of nonperturbative power corrections based on renormalon divergences of perturbative expansions of TMD functions.) \subsection{Remarks on perturbative low $q_T$ limit} \label{ssec:lowqT} The $q_T\to 0$ behavior of the perturbative resummed $q_T$ distribution has of course been extensively discussed, and is known to be affected by configurations of multiple large momentum emissions $\vect{k}_T^i$ that cancel vectorially, $\sum_i\vect{k}_T^i = \vect{q}_T\sim 0$, leading $d\sigma/dq_T^2$ to go to a constant nonzero value; see \emph{e.g.}~ \cite{Parisi:1979se,Becher:2011xn,Ebert:2016gcn,Bizon:2017rah}. We will not add anything to these discussions here, postponing consideration of this regime until we also include nonperturbative effects as remarked above. We observe, however, that our scheme in \sec{analytic} applies practically only to the larger $q_T$ regime. Specifically, we are using an approximation for the ratio of gamma functions $f(t) = \Gamma[-t]/\Gamma[1+t]$, defined in \eq{ft}, appearing in \eq{Mellin-Barnes} in $t$ space. While the approximation we are currently using is good enough in the perturbative regime ($q_T \geq 2\text{ GeV}$), below this $q_T$ scale a comparison with CSS resummation shows that it has exponential suppression as opposed to a constant behavior.
However, the $q_T$ scale at which this deviation from the exact resummed cross section happens could be systematically lowered by improving our expansion of $f(t)$ (\emph{i.e.}~ adding more terms to our final formula \eq{IbHermite}). This is evident from the behavior of the integrand in $t$ space \eq{saddle}. As $q_T$ is lowered, both $A$ and $t_0$ become large. The Gaussian suppression of the $t$ space integrand, which is what allows us to approximate the ratio of $\Gamma$ functions by a short series, weakens as $A$ grows; so as $q_T$ is lowered, we are forced to include more and more terms in our expansion in order to maintain the same accuracy. While we can get good accuracy in the perturbative regime $q_T \sim 2\text{ GeV}$ with a few terms, below this scale it is impractical to continue using this series approximation. So in principle, to recover the behavior as $q_T\to 0$, we can no longer do an expansion in the Hermite polynomials. We observe, however, that as $A$ increases at lower $q_T$, the integrand in $b$ space (\eq{Ibk}) is now highly suppressed at large $b$. This means that it would be more practical to now do an expansion of the Bessel function itself (which was not possible at larger $q_T$), rather than go to $t$ space. For $q_T\lesssim 2\text{ GeV}$ we would need a series expansion that approximates $J_0(bq_T)$ well just out to its first peak or so. It should be possible to find such an expansion, similar to what we did for larger $q_T$ above, that still allows us to do the $b$ space integrals analytically and accurately, and which by construction would display the constant behavior at low $q_T$. We defer a presentation of the details of this procedure to future work. So the Mellin-Barnes representation is in some sense the dual of the Bessel function, in that at perturbative $q_T$, $t$ space is more amenable to an expansion and hence an analytical result with a few terms, but it fails as we move into the low-$q_T$ region where nonperturbative effects kick in, in which expanding directly in $b$ space makes more sense. \section{Comparison with previous formalisms} \label{sec:compare} There have been numerous other techniques developed for implementing the resummation for transverse momentum spectra of gauge bosons. We will briefly comment on how ours compares to some of them, though we will not undertake any sort of in-depth comparison here and will not really do justice to any of these other methods. A more detailed discussion of them was given in \cite{Neill:2015roa} as well as \cite{Ebert:2016gcn}. The earliest one was the CSS formalism \cite{Collins:1984kg,Collins:2011zzd}, which was applied in, \emph{e.g.}~ \cite{Bozzi:2010xn,deFlorian:2011xf}, for computing DY and Higgs transverse momentum cross sections. The value of $\mu_L$ is implicitly chosen to be $1/b_0$. There is no explicit independent scale $\nu$; however, a comparison of the resummed exponents reveals that the implicit choice for $\nu_L$ is also $1/b_0$. So the central values agree with the $b$ space resummation implemented in \eq{inverseFT} and an earlier paper \cite{Neill:2015roa}. The difference between the two approaches is twofold. First, the error analysis using scale variation is different in the absence of an independent scale $\nu$; varying only the $\mu$ scales is likely to underestimate the uncertainty. Second, in the high $q_T$ region, the matching procedure is different.
In the CSS formalism, there is no systematic way of turning off resummation while matching to the fixed order cross section, so that the predicted results differ in the high $q_T$ region. The Landau pole is handled by implementing a smooth cut-off in $b$ space. However, as we have seen, as long as we are at high $Q$, this does not affect the prediction. An explicit comparison between the two schemes was given in \cite{Neill:2015roa}. \begin{figure} \centerline{\scalebox{.55}{\includegraphics{NNLL+NLO.pdf}}} \vskip-0.2cm \caption[1]{Comparison of NNLL+NLO cross section (resummed cross section matched to O($\alpha_s$) fixed order cross section using profiles) in two schemes, $b$-space resummation \eq{inverseFT} and $p$-space resummation \eq{thefinalresult}. The overlap is a good cross-check of the accuracy of our method, and the improvement in the reliable estimation of uncertainties and computation time in our resummation scheme has been described in the text. The Higgs cross section is differential only in $q_T$.} \label{fig:nnll} \end{figure} In \fig{nnll}, we compare the $b$ space resummation scheme as implemented in \cite{Neill:2015roa} at NNLL+NLO accuracy for the Higgs and DY transverse spectra with the hybrid $b$-space/momentum-space resummation scheme developed in this paper. This will serve transitively as a comparison also with other $b$ space resummation schemes. We can deduce the following: \begin{itemize} \item The width of the error bands is comparable in the entire region of $q_T$, which is not too surprising since the error analysis in both \cite{Neill:2015roa} and the present paper was based on the same variations \eq{scalevariations} around the respective central values. \item In the low $q_T$ region, the central value in our hybrid scheme is lower than in the pure $b$ space scheme, even though it is within the error band. This is to be expected since the two schemes differ in subleading terms at a given resummation accuracy. \item In the high $q_T$ regime, the results agree exactly since in this range the resummation has been turned off and the cross section is just the one-loop fixed order cross section in both schemes. \end{itemize} Another technique was implemented in \cite{Becher:2011xn,Becher:2012yn}, which again follows the CSS formalism with the implicit $\nu$ choice $1/b$, but the $\mu$ choice is made in momentum space choosing $\mu \sim q_T+q_T^*$, where $q_T^*$ is chosen as 2 GeV for DY and 8 GeV for the Higgs. For the kinematics we chose to illustrate in \ssec{muLscale}, we actually found very similar shifts at low $q_T$ based on our analysis of the scales which minimize the contributions of the residual fixed order logs, which in itself parallels the logic in \cite{Becher:2011xn,Becher:2012yn}, though we do not necessarily adopt the same physical interpretations. The matching procedure is again similar to the CSS case and hence differs from the scheme in \cite{Neill:2015roa} in the high $q_T$ regime. Again, a detailed discussion of the differences was provided in \cite{Neill:2015roa}. There also have been methods proposed for setting \emph{all} renormalization scales in momentum space. The most recent technique \cite{Ebert:2016gcn} has been to solve the RG equations directly in momentum space. In momentum space the beam and soft functions are written in terms of plus distributions of the form $\left [ 1/q_T^2 \ln^n( q_T^2 /Q^2) \right]_+ $ and these terms are resummed directly in momentum space using a technique of distributional scale setting.
This involves setting the scale under an integral of the plus distribution. The integral turns the plus distribution into ordinary logarithms, which can be minimized by choosing a specific momentum scale. However, they also observe that for transverse momentum spectra of gauge bosons, a direct scale choice of $\mu ,\nu \sim q_T$ does not work, since this scale choice gives a spurious contribution from highly energetic emissions ($ k_T\gg q_T$) in the phase space; hence a scale that varies with the energy of the emissions has to be used, so that the region of phase space of energetic emissions is suppressed. This is reflective of the divergence observed in the soft resummation \eq{softNLL} at low values of $b$, which in our method we chose to cure by adding subleading terms in the cross section through \eq{nuLstar}. Mathematically, the solution proposed in \cite{Ebert:2016gcn} is quite elegant. It will be interesting to see its implementation numerically and to compare the results at NNLL. Another method of obtaining the transverse momentum spectra has been proposed in \cite{Monni:2016ktx,Bizon:2017rah}, which uses the coherent branching formalism. The cross section is given in terms of a convolution over independent emissions off the initial gluons (or quarks for DY). It then singles out the hardest emission, which also sets the scale for $\nu$; this again suppresses the energetic emissions since all other emissions are by construction of lower energy. This differs from \cite{Ebert:2016gcn} mainly in this scale choice, as in \cite{Ebert:2016gcn} $\nu \sim k_i$ follows the energy of each emission instead of just the hardest one. Amusingly, every proposal we know of so far (\emph{e.g.}~ \cite{Ebert:2016gcn,Monni:2016ktx,Bizon:2017rah} and this paper) to implement TMD resummation in momentum space yields a result formally correct at a given order of logarithmic accuracy, but in terms of either an infinite sum or an infinite nest of expressions (beyond the perturbative expansion itself) that must be truncated to yield a result that can be evaluated numerically. Refs.~\cite{Ebert:2016gcn,Monni:2016ktx,Bizon:2017rah} obtain their final resummed results in terms of infinite sums over gluon emissions, which \cite{Monni:2016ktx,Bizon:2017rah} implemented in a Monte Carlo routine. Our final result, on the other hand, contains the infinite sum in \eq{IbHermite}, over Hermite polynomials in the basis expansion of the function $\Gamma(t)^2$ arising from the representation \eq{Mellin-Barnes} we used for the Bessel function in the inverse Fourier transform from impact parameter to momentum space. This is of course quite different from sums over gluon emissions. Truncating our formula corresponds to the level of numerical accuracy one attains for the Bessel function and the resultant integral, rather than the number of gluons one includes in the emission amplitudes. All the methods have their pros and cons in terms of the perturbative series obtained, error analysis, and how rigorously or easily nonperturbative effects can be included (see, \emph{e.g.}~ \cite{Collins:2014jpa}). Again, we leave it to future work to show how we do the latter in our method. \section{Conclusions} \label{sec:conclusions} We took a fresh look at resummation for transverse momentum spectra of gauge bosons in momentum space.
In contrast to the classic procedure, which chooses both the virtuality and rapidity scales $\mu,\nu$ for resummation in impact parameter space, we proposed a hybrid prescription for resummation, choosing the rapidity renormalization scale $\nu$ with impact parameter dependence and the virtuality scale $\mu$ in momentum space. We made the choice $\nu_L^* \sim \nu_L(\mu_L b_0)^{-1+p}$ for the low (soft) rapidity scale, and observed that with this choice, the integral over the $b$ space rapidity resummation exponent is convergent. We stress that a well-defined power counting for $\ln(\mu_Lb_0)$ is not possible before we have a stable soft exponent, and that only once this exponent is in place can we treat $\ln(\mu_L b_0) $ as a small log with an appropriate choice of $\mu_L$. We also give a prescription for obtaining the $\mu_L$ scale in momentum space, using the analysis of the $b$ space integrand to justify our power counting $\ln(\mu_L b_0) \sim 1 $, which shifts the scale up from the na\"{i}ve momentum-space choice $\mu_L\sim q_T$. We then use the idea that restricting the soft exponent in $b$ space to be at most quadratic and thus Gaussian in $\ln (\mu_L b_0)$ allows us to obtain a semi-analytic formula for the cross section. Using the Mellin-Barnes representation of the Bessel function and the absence of the Landau pole in our resummation formalism, we are able, with certain approximations for the Bessel function appearing in the inverse Fourier transform that are independent of the details of the observable or kinematics, to give a closed-form analytic expression for the cross section at any order of resummation accuracy. In brief, the main ideas and results of our paper are: \begin{itemize} \item Exponentiation of quadratic fixed-order \emph{small} logs of $\mu_L b_0$ from the soft function and rapidity evolution that are formally subleading at a given order of resummation accuracy, but automatically make the $b$ integral in going to momentum space convergent without an additional regulator or cutoff. This is formally achieved by the shifted scale choice $\nu_L \to \nu_L^*$ in \eq{nuLstar}. \item Division of the rapidity exponent $V$ into an exponentiated part $V_\Gamma(\nu_L^*,\nu_H;\mu_L)$ in \eq{VGaussian} that is quadratic and thus Gaussian in $\ln(\mu_L b_0)$, and a part $V_\beta$ in \eq{Vbeta} expanded in an asymptotic series, making the $b$ integral \eq{Ibdef} doable. \item Use of the Mellin-Barnes representation \eq{Mellin-Barnes} of the Bessel function, transformation to the form \eq{Ibimag}, and expansion of the pure function $\Gamma(-c-ix)^2$ appearing therein in a series of Hermite polynomials times Gaussians in \eq{Hermiteexpansion}, which for NNLL accuracy in the cross section can be safely truncated to a few terms each in the real (even) and imaginary (odd) parts, each term of which gives rise to an integral over $b$ (or $x$) which can be performed analytically, giving the result \eq{Hnresult}. \item The above steps give rise to the final resummed cross section in momentum space, \eq{thefinalfinalresult}, in which we can implement scale variations and profiles to reliably estimate theoretical uncertainty and match smoothly onto fixed-order predictions for large $q_T$.
\end{itemize} The final result \eq{thefinalfinalresult} represents a threefold expansion: the \emph{perturbative expansion} in $\alpha_s$ and resummed logs in the RG and RRG evolution kernels and fixed-order hard, soft, and collinear functions; the \emph{$V_\beta$ expansion}, a fixed-order asymptotic expansion of the non-conformal part of the RRG evolution kernel to ensure a Gaussian rapidity kernel; and the \emph{Hermite expansion}, in the number of terms kept in the basis expansion of the pure function arising from the Bessel function in the inverse Fourier transform between $b$ and $q_T$ space. Each of these is systematically improvable. We do not even claim that the particular methods, expansions, and strategies we implemented are the fastest or most accurate amongst all similar strategies. But it is fast, and it is accurate. Keeping just a few terms in the Hermite expansion we obtain an error in the cross section at the percent level, much better than the NNLL perturbative accuracy to which we work in this paper, while obtaining the result with a $\sim$five-fold improvement in computation time in our tests. We hope our presentation provides a blueprint and an example for obtaining faster, more accurate predictions for many TMD observables in momentum space; it is certainly open to further development and improvement. We applied our results to obtain the transverse spectrum of the Higgs as well as the DY $q_T$ spectrum at NNLL, matched to fixed-order $\mathcal{O}(\alpha_s)$ results at large $q_T$. We gave a comparison with results obtained using the CSS formalism and observe very good agreement where they should agree, consistent within subleading terms, as seen from the overlap of the error bands at both NLL and NNLL. We also gave cursory discussions of the relevance of nonperturbative effects in different kinematical regimes, and of how our method compares with some recently proposed methods of resummation directly in momentum space for all renormalization scales. The techniques we have proposed should be applicable to other observables that depend on a transverse momentum or are sensitive to ``soft recoil'' (\emph{e.g.}~ \cite{Chiu:2011qc,Chiu:2012ir}), and admit a factorization of the form \eq{txsec} with a convolution between soft and collinear functions in (2-D) transverse momentum $\vec{q}_T$ describing modes separated in rapidity as in \fig{modes}. When a (semi-)analytic formula can be obtained as we have done, it should drastically cut down computation time and improve our understanding of the physical behavior of the cross section and its computational uncertainties as a function of the scales it depends on. We will perform a more detailed phenomenological study using our expressions with comparisons to data in the near future, and then also apply our techniques to other TMD observables. \begin{acknowledgments} We are grateful to D. Neill for many helpful conversations, especially for suggesting the use of an integral representation of the Bessel function to simplify the analytical computation of cross sections, and for a detailed review of a preliminary draft which (we believe) improved its presentation considerably, and to S. Fleming and O.~Z. Labun for collaboration on early stages of this work and on related work. This work was supported by the U.S. Department of Energy through the Office of Science, Office of Nuclear Physics under Contract DE-AC52-06NA25396 and by an Early Career Research Award, through the LANL/LDRD Program, and within the framework of the TMD Topical Collaboration.
\end{acknowledgments}
\section*{Abstract} Macroscopic behavior of scientific and societal systems results from the aggregation of microscopic behaviors of their constituent elements, but connecting the macroscopic with the microscopic in human behavior has traditionally been difficult. Manifestations of homophily, the notion that individuals tend to interact with others who resemble them, have been observed in many small and intermediate size settings. However, whether this behavior translates to truly macroscopic levels, and what its consequences may be, remains unknown. Here, we use call detail records (CDRs) to examine the population dynamics and manifestations of social and spatial homophily at a macroscopic level among the residents of 23 states of India at the Kumbh Mela, a 3-month-long Hindu festival. We estimate that the festival was attended by 61 million people, making it the largest gathering in the history of humanity. While we find strong overall evidence for both types of homophily for residents of different states, participants from low-representation states show considerably stronger propensity for both social and spatial homophily than those from high-representation states. These manifestations of homophily are amplified on crowded days, such as the peak day of the festival, which we estimate was attended by 25 million people. Our findings confirm that homophily, which here likely arises from social influence, permeates all scales of human behavior. \section*{Introduction} When the behavior of each individual in a group depends on their interactions with others around them, the collective behavior of the group as a whole can be surprisingly different from what would be expected by simply extrapolating from that of the individual \cite{schelling2006micromotives,strandburg2015shared,helbing2000simulating}. In particular, people think and behave differently in crowds than in small-scale settings\cite{le1897crowd,park1972crowd,blumer1946elementary}, and this crowd behavior can occasionally lead to tragic events and even human stampedes\cite{ngai2009human,greenough2013kumbh,maclean2003power}. Individuals tend to form groups spontaneously and engage in collective decision-making outside of such dramatic events as well, but the nature of this type of herding--and the extent to which it happens--depends on how outnumbered the group is compared to the reference population. For example, friendship networks of adolescents demonstrate greater social homophily if they are in the minority \cite{gonzalez2007community}, whereas majority members do not share this preference\cite{vermeij2009ethnic}. This phenomenon is in line with the description by Simmel, who argued that individuals ``resist being leveled'' in a crowd\cite{simmel1903metropolis}. If, however, the minority group is too small to form an independent community, it is possible for the minority to show heterophily rather than homophily\cite{currarini2009economic}. This finding highlights the importance of the surrounding social context, in particular the relative size of the group. Social homophily can also lead to spatial homophily and thereby give rise to segregation\cite{schelling1971dynamic, hatna2014combining}. While the term homophily is used to mean different things, we use it here to refer to the tendency for people who are similar to be associated with one another, regardless of the mechanism that causes this association.
This use of the term is distinct from quantifying homophily by the frequency of associations among similar people, since people in the majority will have a greater frequency of associations with others in the majority simply due to having more opportunities for forming them\cite{mcpherson2001birds,hallinan1985effects}. While several studies have investigated homophily of racial groups on smaller scales, we explore how such homophilous tendencies might persist on a much larger, macroscopic scale. The behavior of individuals in a classroom cannot simply be extrapolated to the behavior of those packed into a crowd of millions. The Kumbh Mela is a religious Hindu festival that has been celebrated for hundreds of years\cite{mehrotra2015kumbh}, and the 2013 Kumbh Mela, organized in Allahabad, stands out from all others today and throughout history due to its magnitude. As it is infeasible to collect demographic data from millions of participants, we turned to call detail records (CDRs), which have been used to investigate social networks, mobility patterns, and other massive events \cite{blondel2015survey,onnela2007structure,onnela2007analysis,gonzalez2008understanding,wesolowski2012quantifying,aleissawired}. Cell phone operators routinely maintain records of communication events, mainly phone calls and text messages, for billing and research purposes. These communication metadata, at minimum, keep track of who contacts whom, when, and for how long (voice calls only). Using these records, we first estimate the attendance of each of 23 states of India at the event before investigating the relationship between a state's attendance and the degree of both social and spatial homophily amongst its attendees. \section*{Methods} \subsection*{Data description} We had access to CDRs\footnote{Only summary statistics from the CDRs were provided to us: social network information and daily customer counts at various cell towers located at the Kumbh. Caller IDs were anonymized, and no individual-level characteristics were provided to us aside from billing area codes and whether or not a prepaid or postpaid plan was used.} for one Indian operator for the period from January 1 to March 31, 2013. This dataset contains records of 146 million (145,736,764) texts and 245 million (245,252,102) calls, for a total of 391 million (390,988,866) communication events. Given the logistical impossibility of collecting demographic, linguistic, or cultural attributes of Kumbh participants at scale, we based our investigation of homophily on a marker that acts as a proxy for these covariates, namely, cell phone area codes. The area codes correspond to different states\footnote{Though officially India has more than 23 states, we adhere instead to the 23 functional state divisions used by the service provider.} of India, and as a result of India's States Reorganization Act of 1956, these divisions summarize demographic variability along linguistic origin, ethnic agglomeration, and preexisting social bonds and boundaries. While CDRs readily lend themselves to studying social networks and social homophily, to investigate spatial homophily we additionally acquired access to the cell tower IDs at the Kumbh venue.
Combined with the latitude and longitude of each of the 207 towers at the site\footnote{In anticipation of the large influx of people at the Kumbh, temporary infrastructure was brought into the venue prior to the start of the festival so as to provide sufficient coverage for the large number of expected cell phone users.}, we were able to infer the caller's location (at the time of phone-based communication) with relatively high spatial resolution. The grid that divides the Kumbh site into regions around each cell tower, called the Voronoi tessellation, assigns to each cell tower all points on the map that are closest to it. The bird's-eye view of Allahabad in Fig. 1 shows the estimated attendance on one of the busiest and most favorable days for ritual bathing in the Ganges river. \vspace{1pc}\noindent\textbf{Figure 1. Cell phone usage around the cell towers at the Kumbh during its busiest day.} The heat map polygons represent the Voronoi tessellation around the cell towers that occupied the site of the Kumbh Mela event in Allahabad, India. Cell towers with no activity are removed from the analysis and their Voronoi cells are merged into neighboring active cell towers. Map data used to produce the river traces: Google, DigitalGlobe. \vspace{1pc} \subsection*{Attendance Estimation} Extrapolating population measures from CDRs has become feasible in recent years due to the rapid increase in the prevalence of cell phones. While CDRs provide raw counts of cell phone users, to estimate attendance these numbers need to be adjusted by (i) the overall prevalence of cell phones in India, (ii) the state-specific market shares of our provider, (iii) the probability of daily use for a person known to be present at the venue, and (iv) the probability of phone non-use during a person's entire stay at the venue. First, regarding overall phone prevalence, $71.3\%$ of people in India had a wireless subscription in 2013 \cite{TRAIreport}. Second, regarding market share, the number of unique handsets is counted on a daily basis for each of 23 distinct states of India (Table S1), as defined by the service provider, and each count is extrapolated using the service provider's market share in the given state. The service provider's market share varies widely state by state (range $13.7\%$--$42.6\%$). It is important to use state-specific market shares, because if the average market share is used instead, the state-specific attendance counts can be off by more than a factor of $2$. These handset counts are added together for each day before extrapolating to the general population. Third, regarding daily use, it is likely that many Kumbh attendees who use their phone at least once do not use their phone every day while at the festival. If not addressed, this would bias our population estimate downwards. By tracking phone activity, we can estimate the length of stay based on the time period a person's phone is active while at the Kumbh. Based on this, we estimate the percentage of customers who use their phone on any given day during their stay, conditional on using their phone at least once during their stay, to be $40.4\%$. (Note that this quantity applies to daily estimates, not to cumulative estimates. See \nameref{S1_text}.)
Fourth, regarding non-use, the probability of a person not using his or her phone during the entire stay at the venue is difficult to account for; these individuals are not visible in the observed data, and yet the proportion of non-users could potentially be substantial given that many visitors from outside regions would have to pay roaming fees, which likely leads them to minimize their phone use. To overcome this difficulty, we first examine four available daily population projections\cite{projections}, each for a different day, and calibrate the proportion of non-users such that our resulting daily estimates for those same days are most consistent with the four projections. We obtain an estimate of $40.6\%$ for non-use (coincidentally similar to the $40.4\%$ obtained above for daily use) and we use this estimate to adjust both cumulative and daily attendance.
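To make the chain of adjustments (i)--(iv) concrete, the following Python sketch scales raw daily handset counts up to a daily attendance estimate. The handset counts and market shares used here are hypothetical placeholders, and the multiplicative form of the correction is our illustrative reading of the procedure described above.

\begin{verbatim}
PHONE_PREVALENCE = 0.713  # (i) wireless subscription rate, India 2013
DAILY_USE = 0.404         # (iii) P(phone used on a given day | used once)
NON_USE = 0.406           # (iv) estimated fraction never using their phone

def daily_attendance(handsets_by_state, market_share_by_state):
    """Scale raw daily handset counts up to an attendance estimate."""
    total = 0.0
    for state, handsets in handsets_by_state.items():
        subscribers = handsets / market_share_by_state[state]  # (ii)
        people = subscribers / PHONE_PREVALENCE                # (i)
        people /= DAILY_USE                                    # (iii)
        people /= (1.0 - NON_USE)                              # (iv)
        total += people
    return total

# Hypothetical counts and market shares for two states on one day:
print(daily_attendance({"UP-East": 1.2e6, "Bihar": 4.0e5},
                       {"UP-East": 0.30, "Bihar": 0.25}))
\end{verbatim}

For cumulative estimates the daily-use correction (iii) is dropped, consistent with the description of Fig. 3 below.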
\subsection*{Social Homophily} A social network is constructed among customers who used their phone at the Kumbh. A network edge is assumed between any two people who communicated with one another at any point over the course of the Kumbh. To study how a state's extent of social homophily is related to its level of representation, defined as the number of people present from the state divided by the total Kumbh attendance, we select a measure that results in consistent estimates of homophily regardless of state representation. The measure of social homophily considered in refs.~\cite{coleman1958relational,currarini2009economic}, applied to our setting, would define homophily for any given state as the proportion of ties that involve two participants from that state, but because it measures absolute rather than relative differences, the homophily for states with small representation would be biased downwards by their small proportions. A standard stochastic block model (SBM) approach\cite{holland1983stochastic} applied to our setting would assume an equal likelihood of forming network ties between any two participants from the same state. However, if this model is misspecified and there exists additional social structure within each state (within each block), as is almost certainly the case, then this approach is likely biased in the opposite direction and overestimates the social homophily in states with lower representation.\footnote{To see the reason for this, consider the case where state A sends only a single group of friends to the Kumbh, whereas state B sends 100 different groups of friends. A random pair selected from state A will have a much higher likelihood of being friends than will a random pair from state B, even if social homophily is equally strong within the friendship groups of the two states.} The biases of both these methods are discussed in further detail in \nameref{S1_text}. To circumvent these problems, we shift our focus from dyads to same-state connected triples, sets of three nodes from the same state that are connected either by two edges, resulting in an open triple, or by three edges, resulting in a closed triple. The rationale behind this choice is that the three nodes in a connected triple can be assumed to belong to the same social group whether the triple is open or closed. By considering the propensity for same-state connected triples to be closed, we can gain insight into how densely connected the social groups are in which these triples are embedded. This approach is a way of sampling pairs of nodes from the same social group even when the social groups themselves are unobserved. The proportion of triples that are closed provides a natural measure of social homophily (see Fig. 2). This measure is commonly referred to as the global clustering coefficient or the transitivity index \cite{wasserman1994social}, calculated over each state-specific network. Ignoring residents from the local state, whose phone use is likely different from that of all other states\footnote{When studying social homophily we ignore the attendees from the local state where the Kumbh is held, eastern Uttar Pradesh, because the social behavior of the locals is likely not comparable to that of visitors from the other 22 states. While visitors from other states are all present for the same purpose of participation in the Kumbh, this is not true for the locals, many of whom were employed to help run the Kumbh in various roles. Outsider phone usage will likely be exclusively for coordinating purposes at the event, due to the cost of roaming calls. On the other hand, locals use their phones much more freely and for everyday purposes.}, there are 1,630,553 connected triples in the full Kumbh social network. \vspace{1pc}\noindent \textbf{Figure 2. Schematics of homophily measures (A) and call detail records (B).} For homophily measures (\textbf{A}), the three dotted lines represent spatial boundaries for the Voronoi tessellation around the cell towers, separating the shaded region into three Voronoi cells, in two examples (one of low and one of high homophily). The solid lines denote which nodes are in communication in the social network, either through voice call or text message. In the context of spatial homophily, two nodes are considered nearby if and only if they both are in the same spatial region (Voronoi cell) on the same day. The sizes of the Voronoi cells range from as small as $1/4\ \mbox{km}^2$ to as large as $20\ \mbox{km}^2$. For the call detail records (\textbf{B}), analysis of spatial homophily uses all pairwise communication events involving at least one customer of our operator who is present at the Kumbh, whereas analysis of social homophily only considers the ties between customers of our operator. \vspace{1pc} Let $C_{ijk}=1$ if the $(i,j,k)$ triple is closed and $C_{ijk}=0$ if it is open, let $R_{ijk}$ be the state of the three nodes in the triple, and let $W_{r}$ be the proportion of the total cumulative Kumbh population by March 31, 2013, that belongs to state $r=1,\dots,23$. Across the 22 non-local states, $W_{r}$ ranges from $0.018\%$ to $7.45\%$, thus varying over 2.5 orders of magnitude. We fit the following regression model over all connected triples: \begin{equation}\label{logisticmodel} \mbox{logit}(\mbox{pr}(C_{ijk}=1)) = \beta_0 + \beta_1 \log_{10}W_{R_{ijk}} \end{equation} The model requires independence between observations for accurate inference, and because the same individual can be involved in multiple triples, this independence does not hold. The estimate $\hat{\beta}_1$ from \eqref{logisticmodel} is still unbiased, but its standard error and the $P$-value for the two-sided test of the null hypothesis $\beta_1=0$ will not be correct if this dependence is ignored. Taking advantage of the large sample size, for accurate inference we select a random subset of triples in which the same individual is not allowed to appear in more than one triple.
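As an illustration of the mechanics of this analysis, the following Python sketch enumerates connected triples, labels each as open or closed, and fits the logistic model \eqref{logisticmodel}; the networks, representation levels $W_r$, and sample sizes below are toy stand-ins of ours, not the Kumbh data.

\begin{verbatim}
import networkx as nx
import numpy as np
import statsmodels.api as sm

def connected_triples(G):
    """Yield True/False for each connected triple (a node plus two of
    its neighbours); closed iff those two neighbours are adjacent."""
    for v in G:
        nbrs = list(G[v])
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                yield G.has_edge(nbrs[i], nbrs[j])

rows = []
for seed, W in enumerate([0.0005, 0.005, 0.05]):   # toy values of W_r
    G = nx.erdos_renyi_graph(200, 0.05, seed=seed) # toy state network
    rows += [(int(c), np.log10(W)) for c in connected_triples(G)]

y, x = np.array(rows).T
fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
print(fit.params)  # [beta_0, beta_1] of the logit model
\end{verbatim}

In this toy example the three networks have the same expected transitivity, so $\hat{\beta}_1$ is close to zero; on the Kumbh data, the fit reported in the Results section is strongly negative.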
\subsection*{Spatial Homophily} Let $n_{crd}$ be the number of customers near cell tower $c$ from state $r$ on day $d$ of the Kumbh, and let $N_{rd} = \sum_{c=1}^Cn_{crd}$ be the total number of customers from state $r$ at the Kumbh on day $d$, where the sum is taken over all $C$ cell towers. To avoid double-counting, if a person uses multiple cell towers on the same day, only the first cell tower is recorded. The probability that any two given individuals from state $r$ are nearby on day $d$ is: \begin{equation}\label{nearbyprob} p_{rd}=\frac{1}{N_{rd}}\sum_{c=1}^C n_{crd}\frac{n_{crd}-1}{N_{rd}-1} \end{equation} Here two people are defined to be ``nearby'' on a particular day when they are both located in the same Voronoi cell on that day, using the cell tower designation mentioned above. The intuition behind equation \ref{nearbyprob} is that, given the location of one person, the probability that a different randomly selected person from their state is in the same Voronoi cell is $(n_{crd}-1)/(N_{rd}-1)$. The probability in equation \ref{nearbyprob} has the desirable property of not scaling with state representation $W_r$ if spatial homophily is kept constant.\footnote{To see this, suppose that we hold constant how a particular state's attendees are spread out over the cell towers of the Kumbh, i.e., suppose we fix $n_{crd}/N_{rd}$. If we then increase the number of people present at the Kumbh from that state, $p_{rd}$ will stay essentially unchanged with a negligible increase, because $(a\cdot n_{crd}-1)/(a\cdot N_{rd}-1) > (n_{crd}-1)/(N_{rd}-1)$ for any $a>1$.} This property is essential if we wish to evaluate the relationship between spatial homophily and state attendance/representation. Finally, let $Q_{r}^{A} = \sum_{d=1}^{90}p_{rd}/90$ be the probability that any two given individuals from state $r$ are nearby, averaged over all 90 days. To evaluate busy, or high volume, days, we consider the three days with the highest attendance. We group each of these three days together with the two days that preceded and the two days that followed it, leading to a set of 15 days we label as high volume days. The remaining 75 days are grouped together to form the set of low volume days. We let $Q_r^{H}$ be the average of the $p_{rd}$ over the high volume days, $Q_r^{L}$ be the average of the $p_{rd}$ over the low volume days, and we define $Q_r^{D} = Q_r^{H}/Q_r^{L}$ to be the ratio of spatial homophily when comparing high volume days to low volume days.
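The quantity $p_{rd}$ in equation \ref{nearbyprob} is straightforward to compute from the tower counts. The following Python sketch (with made-up counts of ours) implements it and illustrates the scale-invariance property noted in the footnote:

\begin{verbatim}
import numpy as np

def nearby_prob(n_cr):
    """p_{rd} of the equation above: probability that two randomly
    chosen same-state individuals share a Voronoi cell, given the
    per-tower counts n_{crd} for that state and day."""
    n_cr = np.asarray(n_cr, dtype=float)
    N = n_cr.sum()
    return np.sum(n_cr * (n_cr - 1.0)) / (N * (N - 1.0))

counts = np.array([40, 10, 5, 5])   # made-up counts over four towers
print(nearby_prob(counts))          # p_{rd} for these counts
print(nearby_prob(10 * counts))     # ~unchanged when counts are scaled
\end{verbatim}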
\section*{Results} \subsection*{Attendance Estimation} Since the extent of homophily for any given group can depend on the relative size of that group compared to others, we first estimate daily and cumulative attendance for participants from each state, which can then simply be added up to obtain overall attendance estimates. Existing estimates of the Kumbh's attendance vary widely and most are obtained with heavy extrapolation based on rough head counts combined with the rate of flow at high traffic points leading to the Kumbh venue \cite{wsjarticle}. These estimates have the limitation that they only look at the primary entrances into the Kumbh and ignore traffic flow from secondary entrances. And while daily estimates can be inferred from traffic flow or satellite images, cumulative attendance is more difficult to obtain, because a satellite image cannot tell if the same people are present for many weeks, or if people stay only a short time before leaving to be replaced by newcomers. Our estimates for the total daily and cumulative attendance are shown in Fig. 3. They clearly show a spike of attendance on each of the Kumbh's three primary bathing days. These days hold special religious significance, and bathing on them is considered particularly auspicious. Based on the above numbers, we estimate the peak daily attendance of the 2013 Kumbh on February 10th to be $25$ million, and the total cumulative attendance from January 1 to March 31 to be $60.6$ million, which suggests that the event was the largest recorded gathering in humanity's history. A sensitivity analysis in Fig. 3 shows the cumulative attendance if the percentage of customers that are non-users is varied from the estimated $40.6\%$. For example, if the percentage of customers that are non-users is $45\%$, then the cumulative attendance falls to $54$ million, whereas if the percentage of customers that are non-users is $35\%$, then the cumulative attendance rises to $69$ million. \vspace{1pc}\noindent\textbf{Figure 3. Estimates for daily and cumulative attendance at the Kumbh.} The cumulative (\textbf{A}) and daily (\textbf{B}) attendance at the Kumbh is estimated from January 1st, 2013, to March 30th, 2013. Daily estimates are the number of unique handsets used, extrapolated by (i) the national prevalence of cell phones, (ii) the state-specific market share of the service provider, (iii) the likelihood of inactivity on a daily basis, and (iv) the proportion of individuals who never use their phone (non-users). Cumulative estimates are extrapolated only by (i), (ii), and (iv), which accounts for the apparent difference between daily and cumulative counts on January 1st. The sensitivity of total cumulative attendance to changes in (iv) shows the importance of accounting for this form of censoring in the data \textbf{(C)}. The curve plotted is $f(x)=c/x$, where $c=24467257$. \vspace{1pc} \subsection*{Social Homophily} We investigate social homophily among the residents of the 23 states, using state-specific attendance estimates, by constructing a social network of Kumbh attendees. The network nodes correspond to people and edges correspond to one or more pairwise communication events between people. Note that only communication events involving the service provider's customers present at the Kumbh venue are observed (see Fig. 2), and both parties must be customers of the provider to be included in the network so that their state of residence can be ascertained. The resulting network contains 2,130,463 nodes and 8,204,602 ties. The network is constructed over the full three-month period using both text and call information combined, because the network would become too sparse if segmented. When there is strong social homophily in a state, the connected triples in the social network among attendees from that state will have an increased likelihood of being closed. After fitting model \eqref{logisticmodel} we find that there is a strong negative association between social homophily and state representation. The model fit has an estimate of $\hat{\beta}_1=-0.208$, $95\%$ CI $(-0.259,-0.157)$, implying that a ten-fold increase in $W_r$ reduces the odds that a triple is closed to $e^{-0.208}\approx81\%$ of their value. The analysis restricted to a subset of independent triples yields a $P$-value less than $10^{-20}$, and this significance remains robust to the subset selected.
This analysis reduces the sample size and sacrifices some statistical power by looking only at a subset of independent triples in order to allow for accurate statistical inference. Even then, the $P$-value remains highly significant, providing strong evidence that minority states at the Kumbh tend to show significantly greater social homophily as compared to well represented states. \subsection*{Spatial Homophily} Does the finding that heavily outnumbered states are more tightly knit in their social networks apply to spatial homophily as well? We use our knowledge of which cell tower is used by a caller to approximate caller location. Let $Q_r^{A}$ be the probability that any two given individuals from state $r$ are physically nearby, averaged over all 90 days of the Kumbh. The $Q_r^{A}$ and their confidence intervals are illustrated in Fig. 4, with $Q_{r}^{A}$ ranging between $0.0025$ and $0.018$, reflecting over a 7-fold difference in the propensity for spatial homophily across states, with a mean value of $0.013$. States with low representation tend to be more spatially homophilous than states with high representation. In contrast, the local people from eastern Uttar Pradesh, where the Kumbh Mela takes place, alone make up a majority at the Kumbh, and they show significantly less spatial homophily. Overall, there is a strong negative correlation (Pearson's $\rho=-0.54$) between spatial homophily ($Q_r^{A}$) and average logarithmic daily representation at the Kumbh. \vspace{1pc}\noindent\textbf{Figure 4. The spatial homophily and representation of the 23 mainland states of India at the Kumbh.} The point estimates and $95\%$ confidence intervals for $Q_r^{A}$, the probability that any two given customers from state $r$ are physically close to one another (\textbf{A}), and $Q_r^{D}$, the relative increase of state $r$'s spatial homophily on busy days compared to normal days (\textbf{B}), both demonstrate an inverse relationship with state representation. The states have been ranked first by representation at the Kumbh (\textbf{C}) and then by degree of spatial homophily (\textbf{D}) (see \nameref{S1_text} for the list of state names). The heat map colors correspond to the rankings. The yellow star is the city of Allahabad, the location of the 2013 Kumbh Mela. The near inversion of colors when comparing the two panels demonstrates a clear negative association between state representation and spatial homophily. \vspace{1pc} The average spatial homophily $Q_r^{A}$ above was computed over the full three-month period, but it is conceivable that spatial homophily is a dynamic characteristic that varies from day to day, reflecting the changing compositions of different social groups. We conjectured that the extent of spatial homophily might be different on the three primary bathing days of February 10, February 15, and March 10 as compared to the other, less crowded days. To test this, we define $Q_r^{D}$ to be the ratio of spatial homophily on crowded, high volume days relative to spatial homophily on lower attendance days for state $r$. Fig. 4 shows that states with low representation tend to have a greater increase in spatial homophily on the high volume days. Participants from these underrepresented states appear particularly sensitive to increases in crowding, and they seem to group together more closely as the crowds build up. Some of the states with high representation are more robust to changes in the crowd size.
In fact, there were seven states that showed the opposite effect (though these effects were quite mild in comparison). There is a gap between the top four most represented states at the Kumbh (Uttar Pradesh East, Madhya Pradesh, Bihar, and Delhi) and the remaining states. These four well-represented states all showed less spatial homophily on the busier days. Overall there is a moderate negative correlation (Pearson's $\rho=-0.27$) between $Q_{r}^{D}$ and average logarithmic daily representation at the Kumbh. \section*{Discussion} We used CDRs to estimate daily and cumulative attendance at the 2013 Kumbh Mela, which, according to our analyses, represents the largest gathering of people in recorded history. While participants from all states demonstrated social and spatial homophily, these phenomena were stronger for the states with low representation at the event and were further amplified on especially crowded days. Given that a person may not use their phone immediately upon arriving at or before leaving the Kumbh, it is likely that the duration of stay as estimated by their phone usage is truncated. To account for this censoring, a model for daily phone usage is required that can estimate the amount of censoring. We chose the simple model that assumes that each person has some independent probability of using their phone on each day. While this model is intuitive and provides suitable estimates for the amount of censoring, it may be the case that phone usage is captured better by a more involved model. Though we consider the proportion of connected triples that are closed in the Kumbh social network as a way of measuring the homophilous tendencies of attendees from each state, we draw a distinction between this measure and what is more commonly known as triadic closure. In the social network context, triadic closure is the mechanism by which connections are formed through a mutual acquaintance. However, since we do not observe when the original network ties are formed, we cannot comment on triadic closure \cite{simmel1950sociology} as a causal mechanism for tie formation. Our observations avoid a causal connotation and focus instead on observed associative measures. Our finding on spatial homophily is compatible with the phenomenon of ``associative homophily,'' which states that at a social gathering a person is more likely to join or continue engagement with a group as long as that group contains at least one other person who is similar to her \cite{ingram2007people}. Because every group is likely to have at least one person from the majority, associative homophily plays a relatively weak role for someone in the majority, as she will be comfortable in almost every group. On the other hand, a person in the minority may have to actively find a group that contains another person similar to him, inflating the minority group's apparent homophily. This framework offers one possible explanation for the tighter cohesion of the states at the Kumbh with low representation. In conclusion, whether at the individual, group, or state level, it appears that no one likes to be outnumbered. We all seek safety in numbers. \section*{Supporting Information} \subsection*{S1 Text} \label{S1_text} {\bf Supplementary Text.} Extended discussion of how some measures of homophily can be susceptible to confounding with the size of the subgroup. The names and corresponding market shares of the 23 mainland states of India are listed. Some intuition for how censoring takes effect is also included.
\vspace{1pc}\noindent\textbf{Figure S1. Stochastic block model edge probabilities by state.} The $p_{kk}$ represent the probability that any two random nodes in state $k$ share an edge, assuming this probability is the same for all pairs of nodes in state $k$. The strong association between this probability and state representation is heavily biased under model misspecification, as is likely the case here, exaggerating the result. The baseline probability is calculated assuming no block structure, i.e., all nodes have the same probability of being connected to one another regardless of state membership. \vspace{1pc} \vspace{1pc}\noindent\textbf{Figure S2. Simple illustration of the bias produced by the stochastic block model under model misspecification.} Social groups are displayed in blue, and are assumed to all be of equal size. The probability that two people in the same social group share an edge is $0.20$. The probability that two people in different social groups share an edge is $0.04$. States A and B are constructed to have identical homophily, i.e., the probability of an edge between two people in the same social group is the same for both states. The average edge probability displayed takes the average over all possible pairs of nodes in the state. \vspace{1pc} \vspace{1pc}\noindent\textbf{Figure S3. Schematic for estimation of the probability of phone usage on any given day.} Each square represents a different day, and it is assumed that a person arrives at and departs from the Kumbh only once. The estimated proportion of days a phone is used is calculated as the total number of days a phone is used, summed across all customers, divided by the length of stay summed across all customers. \vspace{1pc} \vspace{1pc}\noindent\textbf{Table S1. State acronyms and operator market share.} The acronyms for the twenty-three telecommunications states in India used by the operator are listed, along with the market share of the operator, measured as the percentage of the total number of people in the state with some form of subscription to a phone plan, taken from the month of January 2013. \vspace{1pc} \subsection*{S2 Information} \label{S2_information} {\bf Supplementary Information.} Network data taken over the full duration of the Kumbh Mela. Daily handset count data, stratified by state. \section*{Acknowledgments} IB and JPO are supported by a Harvard T.H. Chan School of Public Health Career Incubator Award to JPO. TK is supported by the HBS Division of Research. The authors declare no conflict of interests. The authors would like to thank Gautam Ahuja, Clare Evans, Gokul Madhavan, Daniel Malter, and Peter Sloot for contributing their helpful comments, suggestions, critiques, and discussion. A special thanks to the operator for both providing us access to their data and accommodating us on their campus grounds as we worked on the analysis. In particular, we wish to express our thanks to employees Rohit Dev and Vikas Singhal for their assistance.
\section{Introduction and main results} \subsection{Background and outline of this paper} Let $G=(V,E)$ be a connected graph of bounded degree, and let $\lambda \in (0,\infty)$. The contact process $(\eta_t)$ on $G$ with parameter $\lambda$ is the continuous-time interacting particle system on $\set{0,1}^{V}$ with local transition rates given by \[ \eta \rightarrow \eta_x \text{ at rate } \left\{ \begin{array}{ll} 1, & \text{if }\eta(x) =1; \\ \lambda \sum_{\{y \colon \{x,y\} \in E\}} \eta(y), & \text{if } \eta(x)=0, \end{array} \right.\] for $x\in V$, and where $\eta_x$ is defined by $\eta_x(y) := \eta(y)$ for $y \neq x$, and $\eta_x(x) := 1 - \eta(x)$. \medskip Equivalently (see also Section \ref{sec contact}), one can imagine that for each site $x$ and each neighbour $y$ of $x$, there are ``clocks'', denoted by $I(y,x)$ and $H(x)$, which ring after independent, exponentially distributed times with mean $1/ \lambda$ and $1$ respectively (independently of the other clocks). At each ring of the clock $I(y,x)$ the following happens: if $y$ has value $1$ and $x$ has value $0$, the value of $x$ immediately changes to $1$. At each ring of the clock $H(x)$ the following happens: if $x$ has value $1$, the value of $x$ immediately changes to $0$.
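\medskip This clock description translates directly into a simulation. The following Python sketch (an illustration of ours, not part of the results below) runs a Gillespie-type simulation of the contact process on a finite graph, where each infected site recovers at rate $1$ and attempts to infect each neighbour at rate $\lambda$:

\begin{verbatim}
import random

def contact_process(neighbours, infected, lam, t_max, seed=0):
    """Gillespie simulation: each infected site x recovers at rate 1
    (clock H(x)) and tries to infect each neighbour at rate lam
    (clocks I(x,y)); attempts on infected neighbours have no effect."""
    rng = random.Random(seed)
    infected = set(infected)
    t = 0.0
    while infected and t < t_max:
        sites = list(infected)
        rates = [1.0 + lam * len(neighbours[x]) for x in sites]
        t += rng.expovariate(sum(rates))
        x = rng.choices(sites, weights=rates)[0]
        if rng.random() < 1.0 / (1.0 + lam * len(neighbours[x])):
            infected.remove(x)                       # recovery
        else:
            infected.add(rng.choice(neighbours[x]))  # infection attempt
    return infected

# Contact process on a path of 101 sites, one initial infection:
path = {x: [y for y in (x - 1, x + 1) if -50 <= y <= 50]
        for x in range(-50, 51)}
print(len(contact_process(path, {0}, lam=2.0, t_max=10.0)))
\end{verbatim}

Since the graph here is finite, the infection almost surely dies out eventually; the parameter $t_{\max}$ simply caps the simulated time window.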
\medskip The contact process was introduced by \citet{HarrisCP1974} in 1974 as a toy model for the spread of an infection in a population. With this interpretation in mind, $\lambda$ is often referred to as the ``infection'' parameter and a site $x \in V$ is said to be \emph{infected} at time $t$ if $\eta_t(x)=1$; otherwise it is said to be \emph{healthy}. A central question is whether the infections ``survive'' with positive probability or eventually die out, i.e.\ all sites become healthy. As general references on contact processes we mention the books \cite{LiggettIPS1985} and \cite{LiggettSIS1999} by Liggett, from which we next recall some well known properties. \medskip The ``healthy'' configuration where all sites are equal to $0$, denoted by $\bar{0}$, is clearly an absorbing state for the contact process. On the other hand, starting from the full configuration where all sites are initially infected, the contact process evolves towards an invariant measure $\bar{\nu}_{\lambda}$. This state is often called the \emph{upper invariant measure}. Throughout this text we write \emph{upper stationary contact process} to denote the contact process whose law at an arbitrary time equals $\bar{\nu}_{\lambda}$. \medskip A well-known property of the contact process is that, if $G$ is countably infinite, it undergoes a phase transition: there is a critical threshold $\lambda_c \in [0,\infty)$, depending on $G$, such that, for all $\lambda <\lambda_c$, $\bar{\nu}_{\lambda}= \delta_{\bar{0}}$, and for all $\lambda>\lambda_c$, $\bar{\nu}_{\lambda}\neq \delta_{\bar{0}}$. Here, $\delta_{\bar{0}}$ denotes the measure that concentrates on $\bar{0}$. In this paper we focus on the supercritical phase, i.e., the case $\lambda>\lambda_c$. \medskip Van den Berg, H\"aggstr\"om and Kahn \cite{BergHaggstromKahn2006} proved that the upper invariant measure satisfies the following property (called downward FKG in \cite{LiggettSteifSD2006}): for any finite $\Delta \subset V$, the conditional measure $\bar{\nu}_{\lambda}(\cdot \mid \eta \equiv 0 \text{ on } \Delta)$ is positively associated. (See \citet{LiggettCASS2006} for a slightly stronger property.) \citet{LiggettSteifSD2006} used this result to show that, for the supercritical contact process on $\mathbb{Z}^d$, $d\geq1$, the upper invariant measure stochastically dominates a non-trivial Bernoulli product measure (see Corollary 4.1 therein). For the $d$-ary homogeneous tree $T_d$, where each site has $d+1$ neighbouring sites, they showed such a domination result for $\lambda>4$. \medskip In this paper, we investigate whether analogs of the mentioned domination results by Liggett and Steif hold for the contact process observed in space \emph{and time}. \medskip One of our main results concerns the upper stationary contact process on $T_d$ with $d\geq2$. We show that, for $\lambda > \lambda_c(\integers)$, there exists a subset $V$ of the vertices of $T_d$, containing a ``positive fraction'' of all the vertices of $T_d$, such that the following holds: the contact process on $T_d$ observed on $V$ stochastically dominates a non-trivial independent spin-flip process (see Section \ref{sec sd and Bpm} for a definition of such processes). This is the content of Theorem \ref{thm spin-flip tree} below. Interestingly, this cannot happen for the upper stationary contact process on graphs having subexponential growth (such as $\ensuremath{\mathbb{Z}}^d$), as shown in Proposition \ref{prop amenable}. \medskip We furthermore prove that the upper stationary contact process on $\mathbb{Z}^d$ with $d\geq1$ and $\lambda>\lambda_c$, observed on certain (discrete-time) $d$-dimensional space-time slabs, stochastically dominates a non-trivial Bernoulli product measure. This is the content of Theorem \ref{thm spin-flip 2} below. Using this, we conclude in Theorem \ref{prop cone-mixing} that the contact process projected onto a thin space-time slab satisfies a strong mixing property known as cone-mixing. \medskip The projection of the contact process onto a sub-lattice can be interpreted as a hidden Markov model and is motivated, for instance, by the study of phase transition phenomena in nonlinear filtering (see \citet{RebeschiniHandelPTNF2015}) as well as the study of a random walk in a dynamic random environment (see \citet{BethuelsenVolleringRWDRE2016}). \medskip A key observation for our arguments is that the results in \cite{BergHaggstromKahn2006} imply that the above-mentioned downward FKG property extends to the contact process observed in space-time (see Lemma \ref{lem DFKG for CP}). Our proofs are based on this observation, together with specific properties of the contact process and results and techniques from \cite{LiggettSteifSD2006}. \subsubsection*{Outline of this paper} In the next subsection we recall some basic definitions before we present our main results for the contact process in Subsection \ref{sec results}. In Subsection \ref{sec mixing} we discuss certain mixing properties which follow from our main results. Section \ref{sec preliminaries} is devoted to some preliminary results. Proofs of our main results are provided in Section \ref{sec proofs}. In Section \ref{sec questions} we present some open questions. \subsection{Stochastic domination and Bernoulli product measures}\label{sec sd and Bpm} Besides the contact process, there are two key concepts in the presentation of our main results, namely stochastic domination and Bernoulli product measures.
For the convenience of the reader, we briefly recall their definitions.\medskip Given a countable set $V$, we are interested in probability measures on $\Omega := \{0,1\}^V$ and $D_{\Omega}[0,\infty)$, the set of c\`adl\`ag functions on $[0,\infty)$ taking values in $\Omega$. For this, denote by $\mathcal{F}$ the product $\sigma$-algebra corresponding to $\Omega$ and let $\mathcal{M}_1(\Omega)$ be the set of probability measures on $(\Omega,\mathcal{F})$, and similarly, let $ \mathcal{M}_1(D_{\Omega}[0,\infty))$ be the set of probability measures on $D_{\Omega}[0,\infty)$. \medskip For $\rho\in [0,1]$, we denote by $\mu_{\rho} \in \mathcal{M}_1(\Omega)$ the \emph{Bernoulli product measure} with density $\rho$. That is, for any finite $\Delta,\Lambda \subset V$ such that $\Delta \cap \Lambda = \emptyset$, the measure $\mu_{\rho}$ has cylinder probabilities given by \begin{align} \mu_{\rho} \left(\eta \in \Omega \colon \eta(x)=1 \: \forall x\in \Delta, \eta(x)=0 \: \forall x \in \Lambda \right) = \rho^{|\Delta|}(1-\rho)^{|\Lambda|}. \end{align} A related object is the following continuous-time process. Given $\alpha\geq0$, the \emph{independent spin-flip process} $(\xi_t)$ with parameter $\alpha$ is the continuous-time Markov process on $\{0,1\}^{V}$ with local transition rates given by \[ \eta \rightarrow \eta_x \text{ at rate } \left\{ \begin{array}{ll} 1, & \text{if }\eta(x) =1; \\ \alpha, & \text{if } \eta(x)=0, \end{array} \right.\] for $x\in V$. Note that $(\xi_t)$ is ergodic with unique invariant measure $\mu_{\rho}$, where $\rho = \rho(\alpha) = \alpha/(\alpha+1)$. \medskip We next introduce the concept of stochastic domination. For this, we associate to $\Omega$ the partial ordering such that $\xi \leq \eta$ if and only if $\xi(x) \leq \eta(x)$ for all $x \in V$. An event $B \in \mathcal{F}$ is said to be \emph{increasing} if $\xi\leq\eta$ implies $1_{B}(\xi)\leq 1_{B}(\eta)$. If $\xi\leq\eta$ implies $1_{B}(\xi)\geq 1_{B}(\eta)$, then $B$ is called \emph{decreasing}. For $\mu_1, \mu_2 \in \mathcal{M}_1(\Omega)$ we say that $\mu_1$ \emph{stochastically dominates} $\mu_2$ if $\mu_2(B) \leq \mu_1(B)$ for all increasing events $B \in \mathcal{F}$. Recall that, by Strassen's theorem (see \cite{LiggettIPS1985}, p.\ 72), the statement that $\mu_1$ stochastically dominates $\mu_2$ is equivalent to the existence of a coupling $(\eta,\xi)$ such that $\eta$ has distribution $\mu_2$, $\xi$ has distribution $\mu_1$, and $\eta \leq \xi$ a.s. The definition of stochastic domination readily translates to measures on $D_{\Omega}[0,\infty)$, by extending the partial ordering for elements in $\Omega$ to $\Omega^{[0,\infty)}$, requiring that $\xi_t(x) \leq \eta_t(x)$ for all $(x,t) \in V \times [0,\infty)$. \medskip Another key concept used in the proof of the following theorems is that of \emph{downward FKG}, to which we return in Section \ref{sec preliminaries}. \subsection{Main results}\label{sec results} As shown in \citet{LiggettSteifSD2006}, Corollary 4.1, the upper stationary contact process on $\mathbb{Z}^d$, $d \geq 1$, with $\lambda>\lambda_c$ stochastically dominates a non-trivial Bernoulli product measure when observed at a fixed time $t$. That is, $\bar{\nu}_{\lambda}$ stochastically dominates $\mu_{\rho}$ for some $\rho\in(0,1)$. \medskip On the other hand, as also shown in \cite{LiggettSteifSD2006}, stochastic domination of a non-trivial Bernoulli product measure does not hold in general for the entire space-time evolution.
This can be extended to the contact process on graphs having subexponential growth.\medskip Let $G=(V,E)$ be a connected graph of bounded degree and denote by $d \colon V\times V \rightarrow \integers_{\geq 0}$ the graph distance on $G$. Following \cite[p.\ 181]{LyonsPeresTrees2016}, the graph $G$ is said to have \emph{subexponential growth} (of balls) if \begin{align}\label{eq subexponential growth} \liminf_{n \rightarrow \infty} \left| \{ x \in V \colon d(o,x) \leq n \} \right|^{1/n} =1, \quad \text{ for some }o \in V, \end{align} where $|\cdot|$ denotes the cardinality. Otherwise $G$ is said to have \emph{exponential growth}. Further, we say that $\Delta \subset V$ has \emph{positive density} if \begin{align}\label{eq density of set} \liminf_{n \rightarrow \infty} \frac{ \left|\{ x \in \Delta \colon d(o,x) \leq n \}\right|}{\left|\{ y \in V \colon d(o,y) \leq n\} \right|}> 0, \quad \text{ for some }o \in V.\end{align} \begin{rem} Since we assume that $G$ is connected and has bounded degree, we may in \eqref{eq subexponential growth} and \eqref{eq density of set} replace ``for some $o \in V$'' by ``for all $o \in V$''. \end{rem} \begin{prop}\label{prop amenable} Let $(\eta_t)$ be the upper stationary contact process on a connected graph $G=(V,E)$ having subexponential growth and bounded degree with $\lambda>0$. Consider $\Delta \subset V$ having positive density. Then, for \textbf{no} parameter value except $\alpha=0$ can $(\eta_t)$ and $(\xi_t)$ be coupled so that, when initialised from $\bar{\nu}_{\lambda}$ and $\mu_{\rho(\alpha)}$ respectively, it holds that \begin{align} \widehat{\bP} \left(\eta_t(x) \geq \xi_t(x) \text{ for all } (x,t) \in \Delta \times[0,\infty)\right)=1.\end{align} \end{prop} The proof of Proposition \ref{prop amenable} follows by an almost direct extension of the proof of \cite[Proposition 1.1]{LiggettSteifSD2006}, and is given in Section \ref{sec proofs 1}. In fact, the proof also works if Condition \eqref{eq subexponential growth} is replaced by the following condition, \begin{align}\label{eq extend subexp} \liminf_{n\rightarrow \infty} \frac{|\{ (x,y) \in E \colon d(o,x)=n =d(o,y)-1\}| }{ |\{ x \in V \colon d(o,x)\leq n\}|}=0,\: \text{ for some }o\in V, \end{align} which is easily seen to be weaker than \eqref{eq subexponential growth}. A natural question is whether Proposition \ref{prop amenable} also holds if \eqref{eq extend subexp} does not hold. Theorem \ref{thm spin-flip tree} below states that this is not the case for homogeneous trees. See also Question $2$ in Section \ref{sec questions}. \medskip \begin{thm}\label{thm spin-flip tree} Let $(\eta_t)$ be the upper stationary contact process on $T_d$, $d\geq2$, with $\lambda> \lambda_c(\integers)$. Let $V$ be the set of vertices of $T_d$. Then there is a $\Delta \subset V$ having positive density together with an $\alpha= \alpha(\lambda)>0$ and a coupling $\widehat{\bP}$ of $(\eta_t)$ and $(\xi_t)$, initialised from $\bar{\nu}_{\lambda}$ and $\mu_{\rho(\alpha)}$ respectively, such that \begin{align}\label{eq projected onto tree}\widehat{\bP} \left(\eta_t(x) \geq \xi_t(x) \text{ for all } (x,t) \in \Delta \times[0,\infty) \right)=1.\end{align} \end{thm} Thus, \eqref{eq projected onto tree} in Theorem \ref{thm spin-flip tree} concerns the contact process on $T_d$ \emph{projected onto} a subset of $V \times [0,\infty)$ (a terminology we often refer to later).
Theorem \ref{thm spin-flip tree} says that the contact process projected onto $\Delta \times [0,\infty)$ stochastically dominates an independent spin-flip process. \medskip To prove Theorem \ref{thm spin-flip tree}, we first show that the contact process on $\{0,1,\dots \}$ observed at the vertex $0$ stochastically dominates an independent spin-flip process (in fact, we show a generalisation of this). Once this is obtained, Theorem \ref{thm spin-flip tree} follows by a monotonicity argument. From the precise argument, given in Section \ref{sec proof of spt}, it moreover follows that the set $\Delta$ in Theorem \ref{thm spin-flip tree} can be chosen such that the l.h.s.\ of \eqref{eq density of set} equals $\frac{d-1}d$. \medskip Denote by \begin{align}\label{eq results survival} \tau^x:= \inf \{ t\geq 0 \colon \eta_t^x \equiv \bar{0} \}, \quad x \in V, \end{align} the extinction time for the contact process $(\eta_t^x)$ started with only $x$ initially infected; here $\bar{0}$ denotes the all-zero configuration. \begin{thm}\label{thm spin flip 3} Let $(\eta_t)$ be the upper stationary contact process on a connected graph $G=(V,E)$ having bounded degree with $\lambda>0$. Let $x \in V$ be such that there exist $C,c>0$ with \begin{align}\label{eq survival2} &\ensuremath{\mathbb{P}} ( \tau^x=\infty) >0; \\&\label{eq survival} \ensuremath{\mathbb{P}} ( s<\tau^x<\infty) \leq Ce^{-cs}, \quad \text{ for all } s\geq0. \end{align} Then there exist $\alpha= \alpha(\lambda)>0$ and a coupling $\widehat{\bP}$ of $(\eta_t)$ and $(\xi_t)$ initialised from $\bar{\nu}_{\lambda}$ and $\mu_{\rho(\alpha)}$ respectively, such that \begin{align}\widehat{\bP} \left(\eta_t(x) \geq \xi_t(x) \text{ for all } t\in [0,\infty) \right)=1.\end{align} \end{thm} \medskip Note that \eqref{eq survival2} and \eqref{eq survival} are known to hold for all vertices throughout the supercritical phase for the contact process on $\mathbb{Z}^d$, $d\geq1$ (see \cite[Theorem 1.2.30]{LiggettSIS1999}), and on $\{0,1,\dots\}$ (see \cite{DurrettGriffeathCPhighDim1982}, p.\ 546 and \cite{DurretGriffeathCP1983}).\medskip For the proof of Theorem \ref{thm spin flip 3} (in Section \ref{sec proofs 3}) we use that the contact process satisfies the downward FKG property in space-time (see Section \ref{sec preliminaries} for a precise definition). Combining this with large deviation estimates for the probability that there are no infections at the site $x$ in the time interval $[0,t]$ and a general theorem in \cite{LiggettSteifSD2006} (which we state in Lemma \ref{lem DFKG domi1}) yields the statement of Theorem \ref{thm spin flip 3}. \medskip It seems natural that Theorem \ref{thm spin flip 3} can be extended to the case where, instead of observing the contact process at a single site, we observe it on a finite subset $\Delta \subset V$. Apart from some special cases, we are not able to show this in general. On the other hand, interestingly, we are able to extend Theorem \ref{thm spin flip 3} when restricting to observations at discrete times. For this, set \begin{align}\label{eq notation Z} \integers_{T}:= \{0,\pm T, \pm 2T,\dots \}, \quad T\in(0,\infty). \end{align} \begin{thm}\label{thm spin flip 4} Let $(\eta_t)$, $\lambda$ and $G$ be as in Theorem \ref{thm spin flip 3}. Let $\Delta\subset V$ be finite and let $x \in \Delta$ be such that \eqref{eq survival2} and \eqref{eq survival} hold.
Then, for each $T \in(0,\infty)$, there exists $\rho= \rho(\lambda,T,\Delta)>0$ such that $(\eta_t)$ projected onto $\Delta \times \integers_T$ stochastically dominates a Bernoulli product measure with parameter $\rho$. \end{thm} We end this subsection with a result, Theorem \ref{thm spin-flip 2} below, for the supercritical contact process on $\mathbb{Z}^d$. As seen in Proposition \ref{prop amenable}, this process cannot stochastically dominate a non-trivial independent spin-flip process, not even when projected onto a subset $\Delta$ of positive density. This naturally leads to the question of what happens for subsets $\Delta \subset \mathbb{Z}^d$ for which the l.h.s.\ of \eqref{eq density of set} equals $0$. Theorem \ref{thm spin-flip 2} concerns one such case, namely, the contact process projected onto certain (discrete-time) space-time slabs.\medskip For $m\in \nat$, let \begin{align} \integers^d_{d-1}(m):= \left\{ (x_1,\dots, x_d) \in \mathbb{Z}^d \colon x_d\in \{0,\dots,m-1\} \right\} \end{align} be the $(d-1)$-dimensional sublattice of $\mathbb{Z}^d$ of width $m$. When $m=1$ we simply write $\integers^d_{d-1}$. \begin{thm}\label{thm spin-flip 2} Let $(\eta_t)$ be the upper stationary contact process on $\mathbb{Z}^d$, $d\geq 1$, with $\lambda> \lambda_c$. Let $T \in (0,\infty)$ and $m\in \nat$. Then there exists $\rho=\rho(\lambda,T,m) >0$ such that $(\eta_t)$ projected onto $\integers^d_{d-1}(m) \times \integers_T$ stochastically dominates a Bernoulli product measure with parameter $\rho$. \end{thm} \subsection{Mixing properties}\label{sec mixing} The purpose of this subsection is to show that the domination results obtained so far are useful for deducing mixing properties of the contact process, in particular when it is observed in a subspace.\medskip We first note that, from the statement of Theorem \ref{thm spin-flip 2} with $m=1$, we obtain a stronger notion of domination, which we present next. For $t\in [0,\infty)$ and $T \in (0,\infty)$, let $\integers_{T}(t) := \{s \in \integers_{T} \colon s < tT \}$ and denote by $\mathcal{P}_{\lambda}^{\text{slab}} \left( \cdot \right)$ the law of the projection of $(\eta_t)$ onto $\mathbb{Z}^d_{d-1} \times \integers_{T}.$ \begin{cor}\label{cor spin-flip 2} Let $(\eta_t)$ be the upper stationary contact process on $\mathbb{Z}^d$, $d\geq 1$, with $\lambda> \lambda_c$. Let $T \in (0,\infty)$. Then, with $\rho=\rho(\lambda,T,1)$ as in Theorem \ref{thm spin-flip 2}, for every finite $\Delta \subset \mathbb{Z}^d_{d-1} \times \integers_{T}(0)$, the measure $\mathcal{P}_{\lambda}^{\text{slab}} \left( \cdot \mid \eta \equiv 0 \text{ on } \Delta \right)$ stochastically dominates a Bernoulli product measure with density $\rho$ on $\mathbb{Z}^d_{d-1} \times \left(\integers_{T} \setminus \integers_{T}(0) \right)$. \end{cor} Corollary \ref{cor spin-flip 2} implies that the contact process projected onto $\mathbb{Z}^d_{d-1} \times \integers_{T}$ has strong mixing properties; we next make precise what we mean by this.\medskip Fix $ T \in (0,\infty)$ and let, for $\theta \in (0,\frac{1}{2} \pi)$ and $t \geq 0$, \begin{align} C_t^{\theta} := \left\{ (x,s) \in \mathbb{Z}^d_{d-1} \times \integers_{T} \colon \norm{x} \leq (s-t) \tan \theta \right\} \end{align} be the cone whose tip is at $(o,t)$ and whose wedge opens up with angle $\theta$, where $o\in \mathbb{Z}^d$ denotes the origin.
A process $(\xi_t)_{t\in \integers_T}$ on $\{0,1\}^{\mathbb{Z}^d_{d-1}}$ is said to be \emph{cone-mixing} if, for all $\theta \in (0,\frac{1}{2}\pi)$, \begin{align}\label{eq cone-mixing} \lim_{t \rightarrow \infty} \sup_{\substack{A \in \mathcal{F}_{<0}, B \in \mathcal{F}_t^{\theta} \\ \ensuremath{\mathbb{P}}(A)>0}} \left| \ensuremath{\mathbb{P}}(B\mid A) - \ensuremath{\mathbb{P}}(B) \right| = 0, \end{align} where $\mathcal{F}_{<0}$ is the $\sigma$-algebra generated by the lower half-space $ \{ \xi_s(x)\colon (x,s) \in \mathbb{Z}^d_{d-1}\times \integers_{T}(0) \}$ and $\mathcal{F}_t^{\theta}$ is the $\sigma$-algebra generated by $ \{ \xi_s(x) \colon (x,s) \in C_t^{\theta} \}$. \begin{thm}\label{prop cone-mixing} Let $T \in (0,\infty)$. The upper stationary contact process on $\mathbb{Z}^d$, $d\geq 1$, with $\lambda>\lambda_c$, projected onto $\mathbb{Z}^d_{d-1}\times \integers_{T}$, is cone-mixing. \end{thm} Cone-mixing was introduced in \citet{CometsZeitouniLLNforRWME2004} and used there to prove limiting properties for certain random walks in mixing random environments. More recently, the cone-mixing condition has been adapted to random walks in dynamically evolving random environments; see Avena, den Hollander and Redig \cite{AvenaHollanderRedigRWDRELLN2011}. For such models, a standing challenge is to prove limit properties for the random walk when the dynamic environment does not converge, uniformly with respect to the initial state, towards a unique stationary distribution. \medskip Theorem \ref{prop cone-mixing} gives one way to overcome this challenge for the particular case where the random environment is the contact process and the random walk stays inside $\mathbb{Z}^d_{d-1}$. Our result has recently been applied in \citet{BethuelsenVolleringRWDRE2016} (see Theorem 2.6 therein) to prove (among other things) a law of large numbers for such random walks. \section{Preliminaries}\label{sec preliminaries} In this section we provide some preliminary results which are important for the proofs of our theorems. \subsection{Downward FKG and related properties}\label{sec DFKG} As already mentioned, the concept of downward FKG (from now on abbreviated by dFKG) plays a key role in the proof of our main theorems. We next provide a definition of this and some related properties. \begin{defn}\label{def FKG} Let $\mu\in \mathcal{M}_1(\Omega)$. We say that $\mu$ is \begin{description} \item[a)] positively associated if $\mu(B_1 \cap B_2) \geq \mu(B_1) \mu(B_2)$ for any two increasing events $B_1,B_2 \in \mathcal{F}$. \item[b)] dFKG if for every finite $\Lambda\subset V$, the measure $\mu(\cdot \mid \eta \equiv 0 \text{ on } \Lambda)$ is positively associated. \item[c)] FKG if for every finite $\Lambda \subset V$ and $\sigma \in \Omega$, the measure $\mu(\cdot \mid \eta \equiv \sigma \text{ on } \Lambda )$ is positively associated. \end{description} \end{defn} It is immediate that FKG implies dFKG, which in turn implies positive association. The Bernoulli product measures $\mu_{\rho}$, $\rho \in [0,1]$, are examples of measures which clearly satisfy the FKG property. In \cite{LiggettIPS1994} it was shown that the upper invariant measure is not always FKG, whereas \cite{BergHaggstromKahn2006} proved that it satisfies the dFKG property (see Theorem 3.3 and Equation (20) in that paper). With the same arguments as in \cite{BergHaggstromKahn2006}, the latter property can be extended to the following lemma.
\begin{lem}\label{lem DFKG for CP} Consider the upper stationary contact process $(\eta_t)$ on $G=(V,E)$ with $\lambda>0$. For any $t_1<t_2< \dots < t_n$ the joint distribution of $(\eta_{t_1},\dots,\eta_{t_n})$, which is a probability measure on $\Omega^n$, satisfies the dFKG property. \end{lem} \begin{proof} The proof is exactly the same as the proof of Theorem 3.3 in \cite{BergHaggstromKahn2006}. \end{proof} The following lemma gives a useful property, used in the proof of Theorem \ref{thm spin flip 4}. \begin{lem}\label{lem dfkg max} Let $V$ be countable and assume that the random variables $(X_i)_{i \in V}$ are dFKG. Let $P=(P_j)_{j \geq 1}$ be a partition of $V$ into disjoint subsets. Then the random variables $(Y_j)_{j\geq1}$ where $Y_j = \max \{ X_i, \: i \in P_j \}$ are dFKG. \end{lem} \begin{proof} This follows easily from the dFKG property of $(X_i)$. (Use that the $Y_j$'s are increasing functions of $(X_i)$ and that $\{ Y_j =0\} = \{ X_i=0 \text{ for all } i \in P_j\}$.) \end{proof} The dFKG property was used in \cite{LiggettSteifSD2006} to give a necessary and sufficient condition for a translation invariant measure $\mu$ on $\{0,1\}^{\integers}$ to dominate a Bernoulli product measure with density $\rho\in [0,1]$. Since their result plays an important role in our proofs, we recall the precise statement. \begin{lem}[Theorem 1.2 in \cite{LiggettSteifSD2006}]\label{lem DFKG domi1} Let $V= \integers$ and let $\mu \in \mathcal{M}_1(\Omega)$ be a translation invariant measure on $\{0,1\}^{\integers}$ which is dFKG. Then the following are equivalent. \begin{enumerate} \item $\mu$ stochastically dominates $\mu_{\rho}$. \item $\mu( \eta \equiv 0 \text{ on } \{1,2,\dots, n\} ) \leq (1-\rho)^n$ for all $n$. \item For all disjoint, finite subsets $\Lambda$ and $\Delta$ of $\{1,2,3,\dots\}$, we have \begin{align} \mu \left( \eta(0)=1 \mid \eta \equiv 0 \text{ on } \Lambda, \eta \equiv 1 \text{ on } \Delta \right) \geq \rho. \end{align} \end{enumerate} \end{lem} A generalisation of Lemma \ref{lem DFKG domi1} to measures on $\{0,1\}^{\mathbb{Z}^d}$ with $d\geq2$ is also presented in \cite{LiggettSteifSD2006}. Though most of our arguments only use Lemma \ref{lem DFKG domi1}, for the proof of Corollary \ref{cor spin-flip 2} we need the higher-dimensional version, which we state below. We use the notation \[ \mathcal{D} :=\left\{ (x_1,\dots, x_d) \in \mathbb{Z}^d \colon \exists m \text{ such that } x_i = 0\: \forall i<m \text{ and } x_m <0\right\}.\] \begin{lem}[Theorem 4.1 in \cite{LiggettSteifSD2006}]\label{lem DFKG domi} Let $V= \mathbb{Z}^d$ with $d \geq 2$ and let $\mu \in \mathcal{M}_1(\Omega)$ be a translation invariant measure on $\{0,1\}^{\mathbb{Z}^d}$ which is dFKG. Then the following are equivalent. \begin{enumerate} \item $\mu$ stochastically dominates $\mu_{\rho}$. \item $\mu( \eta \equiv 0 \text{ on } [1,n]^d ) \leq (1-\rho)^{n^d}$ for all $n$. \item For all disjoint, finite subsets $\Lambda$ and $\Delta$ of $\mathcal{D}$, we have \begin{align} \mu \left( \eta(o)=1 \mid \eta \equiv 0 \text{ on } \Lambda, \eta \equiv 1 \text{ on } \Delta \right) \geq \rho. \end{align} \end{enumerate} \end{lem} \begin{rem} Lemma \ref{lem DFKG domi} was stated (and proven) in \cite{LiggettSteifSD2006} for $d=2$. However, the extension of their argument to general dimensions is immediate and yields Lemma \ref{lem DFKG domi} (as also commented directly before the proof in \cite{LiggettSteifSD2006}, see p.\ 232 therein).
\end{rem} \subsection{The contact process}\label{sec contact} We next give a brief and somewhat informal construction of the contact process via the so-called graphical representation. For a more thorough description we refer to \cite{LiggettSIS1999}, p.\ 32--34. \medskip Let $G=(V,E)$ be a connected graph having bounded degree and fix $\lambda \in (0,\infty)$. Let $H:= (H(x))_{x \in V}$ and $I:=(I(x,y))_{\{x,y\}\in E}$ be two independent collections of (doubly-infinite) i.i.d.\ Poisson processes with rates $1$ and $\lambda$, respectively. On $V \times \reals$, draw the events of $H(x)$ as \emph{crosses} over $x$ and the events of $I(x,y)$ as \emph{arrows} from $x$ to $y$. \medskip For $x,y \in V$ and $ s \leq t$, we say that $(y,t)$ is connected to $(x,s)$ by a backwards path, written $(x,s) \leftarrow (y,t)$, if and only if there exists a directed path in $V \times \reals$ starting at $(y,t)$, ending at $(x,s)$ and going either backwards in time without hitting crosses, or ``sideways'' along arrows, traversed in the direction opposite to their orientation. Otherwise we write $(x,s) \nleftarrow (y,t)$. In general, for $\Lambda, \Delta \subset V \times \reals$, we write $\Delta \leftarrow \Lambda$ ($\Delta \nleftarrow \Lambda$) if there is a (there is no) backwards path from $\Lambda$ to $\Delta$. Next, define the process $(\tilde{\eta}_t)$ on $\Omega$ by \begin{align} \tilde{\eta}_t(x) := \left\{\begin{array}{cc}1, & \text{ if }V \times \{-\infty \} \leftarrow (x,t); \\0, & \text{otherwise},\end{array}\right. \end{align} where $V \times \{-\infty \} \leftarrow (x,t)$ denotes the event that there exists a backwards path from $(x,t)$ to $V \times \{s\}$ for all $s\leq t$. It is well known that $(\tilde{\eta}_t)$ has the same distribution as the upper stationary contact process $(\eta_t)$ with infection parameter $\lambda>0$. In the following we use the notation $(\eta_t)$ for either representation of the contact process and denote by $\mathcal{P}_{\lambda}$ the corresponding path measure. \medskip We next state a lemma which is useful for most of our proofs. The statement and its proof are inspired by \cite[Lemma 2.11]{BirknerCernyDepperschmidtRWDRE2015}. For its statement, recall \eqref{eq results survival} and note that, as follows from the graphical representation, \begin{align} \ensuremath{\mathcal{P}}_{\lambda}\left( s<\tau^x <\infty\right) =\ensuremath{\mathcal{P}}_{\lambda} \left( V \times \{-s\} \leftarrow (x,0) \text{ but } V \times \{-\infty \} \nleftarrow (x,0) \right). \end{align} \begin{lem}\label{lem spin-flip 1} Consider the upper stationary contact process on a connected graph $G=(V,E)$ of bounded degree with $\lambda>0$. Let $\Delta \subset V$ and assume that there exist $\epsilon, C,c>0$ such that for all $x \in \Delta$, \begin{align}\label{eq lem spin-flip 1 assump2} &\mathcal{P}_{\lambda} \left( \tau^x=\infty \right) >\epsilon; \\ &\label{eq lem spin-flip 1 assump} \mathcal{P}_{\lambda} \left( s<\tau^x <\infty \right) \leq C e^{-cs}, \quad s \geq 0. \end{align} Then, for any $T \in (0,\infty)$, there exists $\rho=\rho(T)>0$ such that, for all $n$ and all $x_1,\dots, x_n \in \Delta$, \begin{align}\label{eq lem spin-flip 1} \mathcal{P}_{\lambda} \left( \eta_{Ti}(x_i)=0, i = 1,2,\dots, n\right) \leq (1-\rho)^n. \end{align} \end{lem} \begin{proof} Fix $T \in (0,\infty)$ and let $\textbf{x}=(x_i)_{i \in \integers}$ be an infinite sequence of elements $x_i \in \Delta$.
For $i \in \ensuremath{\mathbb{Z}}$, let \begin{align}D_i := \inf \{ l \in \nat \colon V \times \{T(i-l)\} \nleftarrow (x_i,T i) \},\end{align} and note that $D_i T$ yields an approximation (up to an error of at most $T$) of how far backwards in time $(x_i,T i)$ is connected to another space-time point. In particular, $\eta_{T i}(x_i) = 0$ if and only if $D_i<\infty$. Define $\mathcal{T}_0=0$ and, iteratively, \begin{align} \ensuremath{\mathcal{T}}_{i+1} := \ensuremath{\mathcal{T}}_i + D_{n-\ensuremath{\mathcal{T}}_i}, \quad i \geq 0. \end{align} Let $K:= \sup \{i \colon \ensuremath{\mathcal{T}}_i<\infty \}$. We have the following easily checked relation between events: \begin{align} \{ \eta_{iT}(x_i) =0 \text{ for } i=1,\dots,n \} &= \{ D_1,\dots,D_n <\infty \} \\ &\subset \{\mathcal{T}_K \geq n\}. \end{align} Finally, $\mathcal{P}_{\lambda} (\mathcal{T}_K\geq n)$ is exponentially small in $n$. This follows by standard arguments from the following consequences of \eqref{eq lem spin-flip 1 assump2} and \eqref{eq lem spin-flip 1 assump} (using the independence properties of the graphical representation): for all $i$, all positive integers $t_i>t_{i-1}> \dots >t_1 \geq 1$, and all $s \geq 1$, we have \begin{align} &\mathcal{P}_{\lambda} \left( D_{n-\mathcal{T}_i} = \infty \mid \ensuremath{\mathcal{T}}_1=t_1, \dots, \ensuremath{\mathcal{T}}_i=t_i \right) >\epsilon; \\ &\mathcal{P}_{\lambda} \left( s \leq D_{n-\mathcal{T}_i} < \infty \mid \ensuremath{\mathcal{T}}_1=t_1, \dots, \ensuremath{\mathcal{T}}_i=t_i \right) \leq Ce^{-c(s-1)}. \end{align} \end{proof} \section{Proofs}\label{sec proofs} \subsection{Proof of Proposition \ref{prop amenable}}\label{sec proofs 1} For the proof of Proposition \ref{prop amenable} we follow that of \cite[Proposition 1.1]{LiggettSteifSD2006}, which we extend to graphs having subexponential growth. \begin{proof}[Proof of Proposition \ref{prop amenable}] Let $G=(V,E)$ be a graph as in the statement of the proposition and let $\lambda \in (0, \infty)$. Fix $o \in V$, and consider $\Delta \subset V$ having positive density. Hence, there is a $\gamma>0$ and an $N\in \nat$ such that, for all $n \geq N$, we have that $|\Delta \cap B(n)|>\gamma |B(n)|$, where $B(n) := \{ x\in V \colon d(o,x) \leq n\}$. Next, assume that the contact process on $G$ with infection parameter $\lambda>0$, projected onto $\Delta$, stochastically dominates a non-trivial independent spin-flip process with parameter $\alpha>0$. Consequently, for every $T>0$ and $n\geq N$, we have that \begin{align}\label{eq prop amenable bound} &\mathcal{P}_{\lambda} \left( \eta_t(x) =0 \text{ for all } (x,t) \in B(n)\times[0,T] \right) \\ \leq & \mathcal{P}_{\lambda} \left( \eta_t(x) =0 \text{ for all } (x,t) \in (\Delta \cap B(n)) \times[0,T] \right) \\ \leq &e^{-\gamma \alpha|B(n)| T} = e^{-c_1|B(n)| T}, \quad \text{ where }c_1 = \gamma \alpha. \end{align} Thus, the probability in \eqref{eq prop amenable bound} decays exponentially at a rate proportional to the volume of $B(n)\times[0,T]$. To conclude the statement of Proposition \ref{prop amenable} for $\lambda>0$, we argue by contradiction and show that this estimate cannot hold. In doing so, we make use of the graphical representation of the contact process. Let $A_{n,T}$ denote the event that there are no arrows in the graphical representation from sites outside $B(n)$ to any site in $B(n)$ during the time period $[0,T]$.
Note that the l.h.s.\ of \eqref{eq prop amenable bound} is bounded below by \begin{align} \mathcal{P}_{\lambda} \left( \{ \eta_0(x) = 0 \text{ for } x \in B(n) \} \cap A_{n,T} \right). \end{align} Moreover, this is again bounded below by \begin{align} &\mathcal{P}_{\lambda} \left( \{ \eta_0(x) = 0 \text{ for } x \in B(n) \} \right) e^{-\lambda d | B(n+1) \setminus B(n)| T} \\ \label{eq prop am bound 2}\geq &\left[ \prod_{x \in B({n})}\bar{\nu}_{\lambda}(\eta_0(x) = 0 ) \right]e^{-\lambda d | B(n+1) \setminus B(n)| T}, \end{align} where $d$ denotes the maximum degree of $G$, and where we used that the contact process is positively associated. Next, since $G$ has subexponential growth (and hence satisfies \eqref{eq extend subexp}), we can find a large $n$ such that $\lambda d |B(n+1)\setminus B(n)| < c_1 | B(n)|$. For such $n$, by taking $T$ sufficiently large, the expression \eqref{eq prop am bound 2} is larger than the r.h.s.\ of \eqref{eq prop amenable bound}: a contradiction. \end{proof} \subsection{Proof of Theorem \ref{thm spin flip 3}}\label{sec proofs 3} \begin{proof}[Proof of Theorem \ref{thm spin flip 3}] Consider the upper stationary contact process $(\eta_t)$ on a connected graph $G=(V,E)$ having bounded degree and with $\lambda>0$. Fix $x\in V$ such that \eqref{eq survival2} and \eqref{eq survival} hold and define, for $t,s\in \reals$ with $t<s$, the event $A_{t,s}:= \{ \eta_u(x)=0 \text{ for all } u\in[t,s)\}$. Further, let $f\colon [0,\infty)\times[0,\infty)\rightarrow [0,1]$ denote the function \begin{align} f(t,u) = \ensuremath{\mathcal{P}}_{\lambda} \left( A_{0,t} \mid A_{-u,0} \right). \end{align} Clearly, $f(t,u)$ is non-increasing in $t$. By Lemma \ref{lem DFKG for CP} we have that, for each $n$, the collection of random variables $\left(\eta_t(y), y \in V, t\in \integers_{1/n} \right)$ is dFKG (recall from \eqref{eq notation Z} that $\integers_{1/n}$ denotes $\{ k/n \colon k \in \integers \}$). Further, it is standard (and easy to see) that, for $t<s$, \begin{align} \ensuremath{\mathcal{P}}_{\lambda} (A_{t,s}) = \lim_{n \rightarrow \infty} \ensuremath{\mathcal{P}}_{\lambda} \left( \eta_u(x)=0 \text{ for all } u \in [t,s) \cap \integers_{1/n} \right). \end{align} Using this approximation, the above-mentioned dFKG property, and general results for measures satisfying dFKG (see Section \ref{sec DFKG}), it follows that \begin{align}\label{eq one star} f(t,u) \text{ is non-decreasing in } u, \end{align} so $f(t):=\lim_{u \rightarrow \infty} f(t,u)$ exists (and is $>0$) and $\ensuremath{\mathcal{P}}_{\lambda}(A_{0,t}\mid B) \leq f(t)$ for all events $B$ that are measurable with respect to $(\eta_s(x), s \leq 0)$. Further, since \begin{align} f(t+s,u) &= \ensuremath{\mathcal{P}}_{\lambda} (A_{0,t+s} \mid A_{-u,0}) \\ &= \ensuremath{\mathcal{P}}_{\lambda}(A_{0,t} \mid A_{-u,0}) \ensuremath{\mathcal{P}}_{\lambda}(A_{t,t+s} \mid A_{-u,t}) \\ &= f(t,u) f(s,t+u), \end{align} we get, by letting $u\rightarrow \infty$, $f(t+s)=f(t)f(s)$, from which we obtain that there is a $c\geq0$ such that \begin{align}\label{eq two stars} f(t) = e^{-ct}, \quad \text{for all } t \geq 0. \end{align} By Lemma \ref{lem spin-flip 1} (with $T=1$), there is an $\alpha>0$ such that \begin{align}\label{eq six stars} \ensuremath{\mathcal{P}}_{\lambda}(A_{0,t}) \leq e^{-\alpha t}, \quad t \geq 1. \end{align} We claim that $c \geq \alpha$ (and hence $c>0$). The proof of this claim uses some of the arguments in the proof of Lemma \ref{lem DFKG domi1} in \cite{LiggettSteifSD2006}. For completeness, we include it here.
Suppose $c<\alpha$. Let $\alpha' \in (c,\alpha)$. Fix $t>1$ and take an integer $l$ so large that $f(t,lt)$ is `very close' to $f(t)$ (and hence, by \eqref{eq two stars}, to $e^{-ct}$). More precisely, we take $l$ sufficiently large so that \begin{align}\label{eq three stars} f(t,lt)> e^{-\alpha' t}. \end{align} For all integers $k \geq 1$ we have that, on the one hand (by \eqref{eq six stars}), \begin{align}\label{eq four stars} \ensuremath{\mathcal{P}}_{\lambda}(A_{0,klt}) \leq e^{-\alpha klt}, \end{align} while on the other hand \begin{align} \ensuremath{\mathcal{P}}_{\lambda}(A_{0,klt}) &= \ensuremath{\mathcal{P}}_{\lambda}(A_{0,lt}) \prod_{i=l}^{kl-1} \ensuremath{\mathcal{P}}_{\lambda} \left( A_{it,(i+1)t} \mid A_{0,it}\right) \\&\geq \ensuremath{\mathcal{P}}_{\lambda}(A_{0,lt}) (f(t,lt))^{kl} \\&> \ensuremath{\mathcal{P}}_{\lambda}(A_{0,lt}) e^{-\alpha'tkl}, \end{align} where the first inequality uses \eqref{eq one star} and stationarity, and the second inequality comes from \eqref{eq three stars}. Since $\alpha'<\alpha$ (and $\ensuremath{\mathcal{P}}_{\lambda}(A_{0,lt})>0$) this violates \eqref{eq four stars} if $k$ is sufficiently large, and yields a contradiction. This proves the claim. By the claim, and the inequality one line below \eqref{eq one star}, we have that $\ensuremath{\mathcal{P}}_{\lambda}(A_{0,t} \mid B) \leq e^{-\alpha t}$ for all events $B$ that are measurable with respect to $(\eta_s(x), s\leq 0)$. Finally, we also clearly have (by the contact process dynamics) that the conditional probability of the event $\{\eta_s(x)=1 \text{ for all } s \in (0,t)\}$, given that $\eta_0(x)=1$ and any additional information about the process before time $0$, is exactly $e^{-t}$. We conclude that the process $(\eta_s(x))$ dominates a spin-flip process which goes from state $0$ to $1$ at rate $\alpha$ and from $1$ to $0$ at rate $1$. \end{proof} \subsection{Proof of Theorem \ref{thm spin flip 4}} \begin{proof}[Proof of Theorem \ref{thm spin flip 4}] Fix $T \in (0,\infty)$ and let $\Delta \subset V$ be finite with $x \in \Delta$ such that \eqref{eq survival2} and \eqref{eq survival} hold. Furthermore, consider the doubly infinite sequence $(Y_i)_{i \in \integers}$, where $Y_i$ is given by \begin{align}\label{eq proof thm sf 4 def} Y_i := \max \{ \eta_{T i}(y) \colon y \in \Delta \}. \end{align} By Lemmas \ref{lem DFKG for CP} and \ref{lem dfkg max}, $(Y_i)$ is dFKG, and, since the upper stationary contact process is invariant under temporal shift, the sequence is also translation invariant. By Lemma \ref{lem spin-flip 1}, there is a $\rho>0$ such that \begin{align} \ensuremath{\mathcal{P}}_{\lambda} \left( Y_j=0, j=1,\dots, n \right) \leq (1-\rho)^n. \end{align} Hence, by Lemma \ref{lem DFKG domi1}, we get \begin{align}\label{eq proof thm sf 4} &\mathcal{P}_{\lambda} \left( Y_1=1 \mid Y_{-j}=0, j=0,\dots,n\right)\geq \rho. \end{align} It is not difficult to see that \eqref{eq proof thm sf 4} yields the following: for some $0< \tilde{\rho} \leq \rho$, \begin{align}\label{eq strong domi} \mathcal{P}_{\lambda} \left(\eta_{T}(x) = 1 \text{ for all }x \in \Delta \mid Y_{-j}=0, j =0,\dots, n \right) \geq \tilde{\rho}, \end{align} for all $n \in \nat$. Indeed, since the contact process evolves in continuous time and the graph is connected, infections can spread with positive probability from any point in $\Delta$ to all other points in $\Delta$ in a small time interval.
To make this more formal, one can first consider a sequence defined similarly to $(Y_i)$, only replacing $T$ by $T/2$ in \eqref{eq proof thm sf 4 def}. By the same argument as above, using again the dFKG property, we have that for some $\delta>0$, \begin{align} \mathcal{P}_{\lambda}\left( \max \{ \eta_{T/2}(y)\colon y \in \Delta \} =1 \mid Y_{-j}=0, j=0, \dots, n\right) >\delta. \end{align} Furthermore, since $\Delta$ is finite, and $G$ is connected, there is an $\epsilon>0$ such that (with the notation introduced below \eqref{eq results survival}) \begin{align} \inf_{z \in \Delta} \mathcal{P}_{\lambda} \left( \eta_{\frac{T}{2}}^z(y) = 1 \text{ for all } y \in \Delta \right) >\epsilon. \end{align} Thus, using the fact that the contact process is a Markov process, we conclude \eqref{eq strong domi} with $\tilde{\rho}\geq \epsilon \delta >0$. Finally, using again the dFKG property of the collection $( \eta_{Ti}(y), y \in \Delta, i \in \integers)$, we obtain that \eqref{eq strong domi} still holds if the conditioning $\{Y_{-j}=0, j =0,\dots, n\}$ is replaced by any event measurable with respect to $( \eta_{-Ti}(y), y \in \Delta, i \geq 0 )$. This concludes the proof of the theorem. \end{proof} \subsection{Proof of Theorem \ref{thm spin-flip tree}}\label{sec proof of spt} To prove Theorem \ref{thm spin-flip tree} we first prove that the contact process on $\{0,1,\dots\}$ observed at the vertex $0$ stochastically dominates an independent spin-flip process. Indeed, the required estimates \eqref{eq survival2} and \eqref{eq survival} for this context are provided by the following result from \cite{DurrettGriffeathCPhighDim1982} (see Equation (21) on p.\ 546 therein). \begin{lem}[\cite{DurrettGriffeathCPhighDim1982}, Equation (21), and \cite{DurretGriffeathCP1983}] \label{lem contact N} Consider the contact process on $V= \{0,1,\dots \}$ with $\lambda>\lambda_c$. Then there exist constants $\epsilon, C,c>0$ such that \eqref{eq survival2} and \eqref{eq survival} hold. \end{lem} \begin{proof}[Proof of Theorem \ref{thm spin-flip tree}] Firstly, combining Lemma \ref{lem contact N} with Theorem \ref{thm spin flip 3}, we have that the contact process on $\{0,1,2,\dots\}$ with $\lambda>\lambda_c$, observed at the vertex $0$, stochastically dominates an independent spin-flip process with parameter $\alpha>0$. From the above observation, the statement of Theorem \ref{thm spin-flip tree} follows by a monotonicity argument using again the graphical construction of the contact process. To make this last argument precise, fix an arbitrary point $o\in T_d$ and call it the root. Denote by $u(o)=0$ its label. Furthermore, label the remaining sites, uniquely, according to their distance from $o$. That is, each $x\in T_d$ with $d(o,x)=1$ has a label $u(x)=(0,i)$ for some $i\in\{1,\dots,d+1\}$, and for $y \in T_d$ with $d(o,y)=n\geq 2$, writing $z$ for the unique neighbour of $y$ with $d(o,z)=n-1$, we set $u(y)=(u(z),i)$, $i \in \{1,\dots,d\}$. Thus, for each $x\in T_d\setminus \{o\}$, we have that \[u(x)\in \bigcup_{n \geq 0}\left[\{0\} \times \{1,\dots,d+1\} \times \{1,\dots, d\}^n\right]. \] Denote by $\Delta\subset T_d$ the set of vertices whose label has last entry different from $1$.
Using the graphical representation of the contact process, consider the process $(\xi_t)$ on $T_d$ where for each $(x,t) \in \Delta \times \reals$ we set $\xi_t(x)=1$ if and only if there is an infinite backwards path from $(x,t)$ constrained to infection arrows between the sites with labels in $\{u(x),(u(x),1),(u(x),1,1),\dots \}$. Moreover, for $x\in \Delta^c$, let $\xi_t(x)=0$ for all $t \in \reals$. By construction, the evolution of $(\xi_t)$ on $T_d$ is dominated by that of the contact process. Furthermore, the evolution at site $x\in \Delta$ is in one-to-one correspondence with the contact process on $\{0,1,2,\dots\}$ observed at the vertex $0$, and the evolutions at different sites $x,y\in \Delta$ are independent. Thus, on the set $\Delta$ the process $(\xi_t)$ stochastically dominates a non-trivial independent spin-flip process, and consequently so does the contact process. Lastly, we note that, from the above construction, it holds that $\Delta$ has positive density and that the l.h.s.\ of \eqref{eq density of set} equals $\gamma= \frac{d-1}{d}>0$. This concludes the proof. \end{proof} \subsection{Proof of Theorem \ref{thm spin-flip 2}}\label{sec proof spin-flip 2} In order to prove Theorem \ref{thm spin-flip 2}, we make use of the well-known fact that the supercritical contact process on $\mathbb{Z}^d$ with $d\geq 2$ survives in $2$-dimensional space-time slabs (see \cite{BezuidenhoutGrimmettCP1990}). More precisely, let, for $k \in \nat$, \begin{align} S_{k} := \left\{ x \in \mathbb{Z}^d \colon x_i \in \{0,1,\dots,k-1\}, i=1,\dots, d-1 \right\}, \end{align} and denote by $(_k\eta_t)$ the contact process on $S_k$ and by $\mathcal{P}_{\lambda,k}$ its path measure. This process on $S_k$ with $\lambda>\lambda_c(\mathbb{Z}^d)$ survives with positive probability if the width $k$ is large enough. The proof of this proceeds via a block argument and comparison with a certain $2$-dimensional (dependent) directed percolation model. This argument also gives a form of exponential decay; more precisely, the following lemma holds. \begin{lem}\label{lem slabs} Let $\lambda >\lambda_c(\mathbb{Z}^d)$ and $d\geq2$. Then there exist $k\in \nat$ and $\epsilon, C,c \in (0,\infty)$ such that for all $x \in S_k$, \begin{align} \label{eq slab2} &\ensuremath{\mathcal{P}}_{\lambda,k} \left( \tau^x=\infty \right)>\epsilon; \\ &\label{eq slab} \ensuremath{\mathcal{P}}_{\lambda,k} \left( s<\tau^x<\infty \right) \leq Ce^{-cs}, \quad \text{ for all } s>0. \end{align} \end{lem} \begin{proof} This follows again by comparison with a $2$-dimensional directed percolation model and a renormalization argument. For a proof we refer the reader to the proof of Theorem 1.2.30a) in \cite{LiggettSIS1999}, where such an argument is explained in detail. Though proved there for the unrestricted contact process $(\eta_t)$, the argument works, mutatis mutandis, for $(_k\eta_t)$ as soon as $k$ is taken sufficiently large. \end{proof} \begin{proof}[Proof of Theorem \ref{thm spin-flip 2}] Fix $T \in (0,\infty)$ and note that the case $d=1$ is an immediate consequence of Theorem \ref{thm spin flip 4}. Indeed, the estimates \eqref{eq survival2} and \eqref{eq survival} for that case are known to hold due to \cite[Theorem 5]{DurretGriffeathCP1983}. For the case $d\geq 2$ we use a slightly more involved argument, by partitioning $\mathbb{Z}^d \times \reals$ into slabs. Fix $k$ such that \eqref{eq slab2} and \eqref{eq slab} hold.
For $\textbf{i}=(i_1,\dots,i_{d-1})\in \integers^{d-1}$, let $P_{\textbf{i}}= \left( S_{k} + k \cdot (i_1,\dots,i_{d-1},0)\right) \times \reals$. Note that $P_{\textbf{i}} \cap P_{\textbf{j}}=\emptyset$ whenever $\textbf{i}\neq \textbf{j}$ and that $\bigcup_{\textbf{i} \in \integers^{d-1}} P_{\textbf{i}} = \mathbb{Z}^d \times \ensuremath{\mathbb{R}}$. Next, consider the process $(\zeta_t)$ which is obtained from the graphical representation of the contact process on $\mathbb{Z}^d$ by suppressing all infection arrows between the slabs $P_{\textbf{j}}$. Trivially, the evolution of $(\zeta_t)$ is dominated by that of $(\eta_t)$. Moreover, the evolution of $(\zeta_t)$ in each slab is independent of the others and has the same law as $(_k\eta_t)$. Let $\textbf{i} \in \integers^{d-1}$. By applying Theorem \ref{thm spin flip 4} with $\Delta = \mathbb{Z}^d_{d-1}(m) \cap (S_k +k\cdot (\textbf{i},0))$, it follows that the process $(\zeta_t)$ observed on the vertices $\Delta$ at times that are multiples of $T$ stochastically dominates a non-trivial Bernoulli product measure with density $\rho>0$. By the above-mentioned independence, this implies the statement of Theorem \ref{thm spin-flip 2} for $(\zeta_t)$. Since $(\zeta_t)$ is stochastically dominated by $(\eta_t)$, we conclude the proof. \end{proof} \subsection{Proof of Corollary \ref{cor spin-flip 2}} \begin{proof}[Proof of Corollary \ref{cor spin-flip 2}] Fix $T \in (0,\infty)$ and recall the definition of $\ensuremath{\mathcal{P}}_{\lambda}^{\text{slab}}$ in Section \ref{sec mixing}. Note that $\ensuremath{\mathcal{P}}_{\lambda}^{\text{slab}}$ is translation invariant and that, due to Lemma \ref{lem DFKG for CP}, it is also dFKG. In particular, we may apply Lemma \ref{lem DFKG domi} to $\ensuremath{\mathcal{P}}_{\lambda}^{\text{slab}}$. A direct consequence of Theorem \ref{thm spin-flip 2} with $m=1$ is that whenever $\lambda>\lambda_c$, there is a $\rho>0$ such that \begin{align} \ensuremath{\mathcal{P}}_{\lambda}^{\text{slab}} \left( \eta_s(x) = 0 \text{ for all } (x,s) \in \left([1,n]^{d-1}\times\{0\}\right) \times \{T,2T,\dots,nT\} \right) \leq (1-\rho)^{n^d}. \end{align} Under the natural identification of $\mathbb{Z}^d_{d-1}\times \integers_{T}$ with $\mathbb{Z}^d$, this is precisely the statement that $\ensuremath{\mathcal{P}}_{\lambda}^{\text{slab}}$ assigns probability at most $(1-\rho)^{n^d}$ to the event that $\eta$ vanishes on a box of $n^d$ sites. Hence, the measure $\ensuremath{\mathcal{P}}_{\lambda}^{\text{slab}}$ satisfies Property $2$ in Lemma \ref{lem DFKG domi}. Consequently, $\ensuremath{\mathcal{P}}_{\lambda}^{\text{slab}}$ also satisfies Property $3$ in Lemma \ref{lem DFKG domi}, from which the statement of Corollary \ref{cor spin-flip 2} follows. \end{proof} \subsection{Proof of Theorem \ref{prop cone-mixing}} Theorem \ref{prop cone-mixing} follows from Corollary \ref{cor spin-flip 2} and a standard coupling argument, together with classical properties of the contact process. \begin{proof}[Proof of Theorem \ref{prop cone-mixing}] Fix $ T \in (0,\infty)$ and let $\rho>0$ be such that the statement of Corollary \ref{cor spin-flip 2} holds. Next, denote by $\mu \in \mathcal{M}_1(\Omega)$ the probability measure under which all vertices outside $\mathbb{Z}^d_{d-1}$ have value $0$ a.s., and those in $\mathbb{Z}^d_{d-1}$ are independent Bernoulli random variables with parameter $\rho$. Further, for $\eta \in \Omega$, denote by $\delta_{\eta} \in \mathcal{M}_1(\Omega)$ the probability measure which concentrates on $\eta$, and write $\bar{1}\in \Omega$ for the configuration where all sites are equal to $1$.
Then, by Corollary \ref{cor spin-flip 2}, and since $\bar{\nu}_{\lambda} \leq \delta_{\bar{1}}$, we have, for $\theta \in (0,\pi/2)$, $t >0$ and $B \in \mathcal{F}_t^{\theta}$ increasing, and for any $A \in \mathcal{F}_{<0}$ with $\mathcal{P}_{\lambda}^{\text{slab}}(A)>0$, that \begin{align}\label{eq thm 1.7 hippo} \left| \mathcal{P}_{\lambda}^{\text{slab}}(B\mid A) - \mathcal{P}_{\lambda}^{\text{slab}}(B) \right| &\leq \widehat{\mathcal{P}}_{\mu,\delta_{\bar{1}}} \left( \eta^1 \neq \eta^2 \text{ on } C_t^{\theta} \right), \end{align} where $ \widehat{\mathcal{P}}_{\mu,\delta_{\bar{1}}}$ is the standard graphical construction coupling of the contact processes on $\mathbb{Z}^d$ started at time $0$ from configurations drawn according to $\mu$ and $\delta_{\bar{1}}$, respectively. Furthermore, we have that \begin{align}\label{eq thm 1.7 last one} \begin{split} \widehat{\mathcal{P}}_{\mu,\delta_{\bar{1}}} \left( \eta^1 \neq \eta^2 \text{ on } C_t^{\theta} \right) &\leq \sum_{ (x,s) \in C_t^{\theta}} \widehat{\mathcal{P}}_{\mu,\delta_{\bar{1}}} \left( \eta^1_s(x) \neq \eta^2_s(x) \right) \\ &= \sum_{ (x,s) \in C_t^{\theta}} \widehat{\mathcal{P}}_{\mu,\delta_{\bar{1}}} \left( \eta^1_s(o) \neq \eta^2_s(o) \right), \end{split} \end{align} where the last equality holds by translation invariance in the first $(d-1)$ spatial directions. Since the set of increasing events in $\mathcal{F}_t^{\theta}$ generates $ \mathcal{F}_t^{\theta}$, in order to conclude the argument, it is sufficient to show that, for some $C,c\in(0,\infty)$, we have \begin{align}\label{eq add name}\widehat{\mathcal{P}}_{\mu,\delta_{\bar{1}}} \left( \eta^1_s(o) \neq \eta^2_s(o) \right) \leq Ce^{-cs}.\end{align} This can be shown using known estimates for the supercritical contact process on $\mathbb{Z}^d$. For completeness we present the details. Let $N:= \inf \{ \norm{x} \colon \eta_0^1(x) = 1 \text{ and } \tau^x=\infty \}$. Then, for any $a>0$, we have that \begin{align}\label{eq thm 1.7 very last} \begin{split} \widehat{\mathcal{P}}_{\mu,\delta_{\bar{1}}} \left( \eta^1_s(o) \neq \eta^2_s(o) \right) &\leq \widehat{\mathcal{P}}_{\mu,\delta_{\bar{1}}} \left( \{ N > as \} \right) \\&+ \widehat{\mathcal{P}}_{\mu,\delta_{\bar{1}}} \left( \{\eta^1_s(o) \neq \eta^2_s(o) \} \cap \{N\leq as\}\right). \end{split} \end{align} That the first term on the right-hand side decays exponentially (in $as$) follows from \cite[Theorem 1.2.30]{LiggettSIS1999}. For the other term, we have that \begin{align} &\widehat{\mathcal{P}}_{\mu,\delta_{\bar{1}}} \left( \{\eta^1_s(o) \neq \eta^2_s(o) \} \cap \{ N\leq as\} \right) \\ \leq &\sum_{y\in [-as,as]^d} \widehat{\mathcal{P}}_{\delta_{{\bar{0}_y}},\delta_{\bar{1}}} \left( \{\eta^1_s(o) \neq \eta^2_s(o) \} \cap \{\tau^y=\infty \} \right) \\ \label{eq thm 1.7 latter term} \leq &\sum_{y\in [-as,as]^d} \widehat{\mathcal{P}}_{\delta_{{\bar{0}_y}},\delta_{\bar{1}}} \left( \eta^1_s(o) \neq \eta^2_s(o) \mid \tau^y=\infty \right), \end{align} where $\bar{0}_{y}$ is the configuration given by $\bar{0}_y(x)=\bar{0}(x)=0$ for all $x\neq y$ and $\bar{0}_{y}(y) = 1-\bar{0}(y)=1$. From the large deviation estimates obtained in \cite[Theorem 1.4]{GaretMarchandLDPCPRE2013}, by choosing $a>0$ in \eqref{eq thm 1.7 latter term} sufficiently small, the term inside the sum of \eqref{eq thm 1.7 latter term} decays exponentially (in $s$), uniformly for $y \in [-as,as]^{d}$.
Hence, since the sum only contains polynomially many terms, we have that the r.h.s.\ of \eqref{eq thm 1.7 very last} decays exponentially with respect to $s$. Thus there exist $C,c>0$ such that \eqref{eq add name} holds, which, together with \eqref{eq thm 1.7 hippo} and \eqref{eq thm 1.7 last one}, concludes the proof. \end{proof} \section{Open questions}\label{sec questions} We expect that the statement of Theorem \ref{thm spin-flip tree} can be improved. \begin{q} Can the condition $\lambda>\lambda_c(\integers)$ in Theorem \ref{thm spin-flip tree} be replaced by $\lambda>\lambda_c(T_d)$? \end{q} \begin{q} Does Theorem \ref{thm spin-flip tree} hold with $\Delta = T_d$? \end{q} Motivated by Theorem \ref{thm spin-flip 2} and Lemma \ref{lem spin-flip 1}, the following questions seem natural. \begin{q}\label{q path} Consider the upper stationary contact process on $\mathbb{Z}^d$, $d\geq1$, with $\lambda>\lambda_c$, and let $\textbf{x} = (x_i)_{i \in \integers}$ be an infinite sequence of elements in $\mathbb{Z}^d$. Does the contact process projected onto $\{ (x_i,i) \colon i \in \integers \}$ stochastically dominate a non-trivial Bernoulli product measure? \end{q} \begin{q}\label{q walk} Consider the upper stationary contact process $(\eta_t)$ on $\mathbb{Z}^d$, $d\geq1$, with $\lambda>\lambda_c$, and let $\textbf{X}=(X_i)_{i \geq 0}$ be a simple random walk on $\mathbb{Z}^d$ started at $X_0=o$. Does the sequence $(\eta_i(X_i))_{i \geq 0}$ dominate a non-trivial Bernoulli sequence? \end{q} \begin{rem} A positive answer to Question \ref{q path} with a uniform bound on the density $\rho>0$ would imply a positive answer to Question \ref{q walk}. \end{rem} Lastly, motivated by Proposition \ref{prop amenable} and Theorem \ref{thm spin-flip 2}, we state the following question. \begin{q}\label{q domination} Consider the upper stationary contact process $(\eta_t)$ on $\mathbb{Z}^d$, $d\geq1$, with $\lambda>\lambda_c$. For which $\Delta \subset \mathbb{Z}^d$ having ``zero density'' (that is, the l.h.s.\ of \eqref{eq density of set} equals $0$) does $(\eta_t)$ projected onto $\Delta\times [0,\infty)$ dominate a non-trivial independent spin-flip process? \end{q} \subsection*{Acknowledgment} The authors thank Markus Heydenreich and Matthias Birkner for discussions and comments. S.A. Bethuelsen thanks LMU Munich for hospitality during the writing of the paper. S.A. Bethuelsen was supported by the Netherlands Organization for Scientific Research (NWO).
\section{Introduction}\label{sec:intro} Despite the importance of vector bundles in geometry and topology, there are few explicit methods to produce them. On complex projective spaces, the simplest complex bundles to write down are sums of line bundles. Indecomposable bundles are difficult to describe explicitly, but there are some famous examples: one is the Horrocks--Mumford bundle of rank $2$ on $\CP^4$ \cite{HorMum}; others are the Horrocks bundles of rank $3$ on $\CP^5$ \cite{Hor2}. In \cite{Hor}, Horrocks takes another approach and constructs new algebraic vector bundles from given ones using a modified extension group procedure. Horrocks' construction takes as input two complex rank $2$ algebraic bundles $V$ and $W$ on $\CP^3$, which must have the same first Chern class and satisfy certain technical hypotheses, and outputs another complex rank $2$ algebraic vector bundle with the same first Chern class. We will write $V+_H W$ for the output bundle, although the construction does not define a group structure. Atiyah and Rees show that Horrocks' construction essentially produces all topological equivalence classes of complex rank $2$ bundles on $\CP^3$ from the simplest ones. \begin{thm}[{\cite[Theorem 1.1]{AR}}]\label{thm:horrocks_generates} Any complex rank $2$ topological vector bundle on $\CP^3$ can be obtained from a sum of line bundles by iteratively applying the following operations: \begin{itemize} \item tensoring by a line bundle; and \item applying Horrocks' construction. \end{itemize} \end{thm} \begin{rmk} This result shows that every topological equivalence class of complex rank $2$ vector bundles on $\CP^3$ admits an algebraic representative. \end{rmk} The algebra involved in Horrocks' construction turns out to be quite specialized. It does not directly generalize to produce algebraic vector bundles of other ranks or on other spaces. However, we will explore Horrocks' construction from a homotopical vantage point and show that, from this perspective, it does generalize. Our main result for rank $2$ bundles on $\CP^3$ is as follows: \begin{prop}[Topological Horrocks addition]\label{prop:main_construction} Fix an integer $a_1\in \mathbb Z$. Let $\G_{a_1}$ denote the set of topological equivalence classes of complex rank $2$ topological bundles on $\CP^3$ with first Chern class equal to $a_1$. \begin{enumerate} \item[(i)] $\G_{a_1}$ carries an abelian group structure $+_{a_1}$, via an explicit construction on classifying spaces; \item[(ii)] The identity is $L\oplus \underline{\C}$, where $L$ is the complex line bundle determined by $c_1(L)=a_1$ and $\underline \C$ is the trivial complex line bundle; and \item[(iii)] The second Chern class defines a homomorphism $c_2\: \G_{a_1} \to H^4(\CP^3;\Z).$ \end{enumerate} \end{prop} We show that this addition agrees with Horrocks' construction as far as possible. \begin{thm}\label{cor:alg_top_agree_somewhat}Suppose that $V$ and $W$ are two rank $2$ algebraic vector bundles such that $c_1(V)=c_1(W)=a_1$ and such that the Horrocks sum $V+_HW$ is defined. Then $$V+_H W \simeq V +_{a_1} W$$ as topological vector bundles. \end{thm} Our strategy to prove Theorem~\ref{cor:alg_top_agree_somewhat} is conceptually simple. In \cite{AR}, Atiyah and Rees show that any topological rank $2$ bundle on $\CP^3$ is determined by its first and second Chern classes together with an additional $\Z/2$-valued invariant $\alpha$, which they define.
Thus, to prove that $V+_H W$ and $V+_{a_1} W$ are topologically equivalent, it suffices to show their Chern classes and $\alpha$-invariants agree. Our methods for constructing the groups of Proposition~\ref{prop:main_construction} extend as follows. \begin{prop}[Topological Horrocks addition for rank $3$ bundles]\label{prop:main_construction2} Fix a complex rank $2$ topological bundle $V_0$ on $\CP^5$. Let $\G_{V_0}$ denote the set of topological equivalence classes of complex rank $3$ topological bundles on $\CP^5$ with the same first and second Chern class as $V_0$. \begin{enumerate} \item[(i)] $\G_{V_0}$ carries an abelian group structure, via an explicit construction on classifying spaces; \item[(ii)] The identity is $V_0 \oplus \underline{\C};$ and \item[(iii)] The third Chern class defines a homomorphism $c_3\: {\G}_{V_0} \to H^6(\CP^5;\Z).$ \end{enumerate} \end{prop} \begin{rmk} The underlying set of $\G_{V_0}$ depends only on a choice of first and second Chern class. However, we cannot define a group relative to an arbitrary choice of integers $c_1,c_2$ without a rank $2$ bundle with these Chern classes. \end{rmk} \subsecl{Tools and methods}{subsec:tools} Propositions~\ref{prop:main_construction} and \ref{prop:main_construction2} follow from a more general study of relative infinite loop space structures on truncated diagrams. In the homotopy category of spaces, the key result can be stated as follows. \begin{prop}\label{prop:maps_group} Let $f\: X \to Y$ be a map of pointed, simply connected spaces, with section $s\: Y \to X$. Suppose that $f$ is $n$-connective and let $m:=2n-2.$ Let $\taum$ denote the $m$-truncation functor. Then $$\begin{tikzcd} \tau_m X \arrow[r," \tau_m f"] & \tau_m Y\arrow[l,dashed, bend left, " \tau_m s"] \end{tikzcd}$$ is a group object in the homotopy category of $m$-truncated spaces over $\taum Y,$ with binary operation given by truncating the diagram \begin{center}\begin{tikzcd}[column sep=5.5em] X \times_Y X \arrow[r, dashed, bend right] & X \cup_Y X \arrow[l, "(1\cup sf) \times (sf \cup 1)"] \arrow[r, "\operatorname{fold}"] & X \etikz where the dotted arrow exists only after truncating and $\operatorname{fold}$ is induced from the pushout by the identity on both summands. \end{prop} Using the skeleton-truncation adjunction on the homotopy category of spaces, we obtain: \begin{cor}\label{cor:add_pi_zero} Let $f\: X \to Y$, $s\: Y \to X$, $n$, and $m$ be as in Proposition~\ref{prop:maps_group}. Let $C$ be an $m$-skeletal space and fix $c\: C \to Y.$ Let $[C,X]_c$ denote homotopy classes of maps $g\:C\to X$ over $Y$. \begin{center} \begin{tikzcd} C \ar[r, dashed, "g"]\ar[dr,"c" below] & X\ar[d,"f" left]\\ & Y \end{tikzcd} \end{center} Then $[C,X]_c$ has a natural group structure with identity $s\circ c$. \end{cor} \begin{rmk} The previous corollary can be viewed as a relative version of Borsuk's classical construction, as given in \cite{Borsuk_Spanier}. More recently, such ideas have been used in \cite{AF18}, for example. \end{rmk} It is rather tedious to prove Proposition~\ref{prop:maps_group} in a point-set way. Instead, we prove a refinement. The intuition is that, as a consequence of $n$-connectivity, the fibers of $\tau_m f\:\tau_m X \to \tau_m Y$ are infinite loop spaces and the automorphisms of the fibers induced by paths in the base are infinite loop maps; this data should assemble to make $\tau_m X \to \tau_mY$ an infinite loop space over $\tau_m Y$.
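\begin{rmk} The numerology $m = 2n-2$ is that of the stable range: a pointed space which is $n$-connective and $(2n-2)$-truncated deloops in an essentially unique way, by the Freudenthal suspension theorem. This is encoded by the equivalence $\Omega^\infty\: \op{Sp}_{[n,2n-2]} \to \s_{*,[n,2n-2]}$ of \cite[Theorem 5.1.2]{MS}, which we recall in the proof of Proposition~\ref{prop:general_add} below. \end{rmk}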
The machinery of an $\infone$-categorical Grothendieck construction, for example straightening and unstraightening as in \cite{HTT}, allows us to make this precise. \begin{prop}\label{prop:general_add}Let $f\: X \to Y$ be a map of pointed simply connected spaces with section $s\: Y \to X$. Suppose $f$ has homotopy fiber with first nonzero homotopy group in degree $n$ and let $m:=2n-2.$ Then the functor of infinity categories $$\Groth_*(\taum f)\: \taum Y \to \s_*$$ given by applying $m$-truncation $\tau_m$ followed by pointed straightening $\Groth_*$ factors as \begin{center} \begin{tikzcd} \taum Y \arrow[rr,"{\Groth_*(\taum f)}"]\arrow[d, dashed, "\exists" left]& & \mathcal{S}_* & \\ \operatorname{Spectra} \arrow[urr, "\,\,\,\, \Omega^\infty" below] &&& . \end{tikzcd} \end{center} In particular, $\Groth_*(\taum f)$ takes values in infinite loop spaces and infinite loop maps. \end{prop} By applying the inverse of $\Groth_*$ to $\Groth_*(\tau_m f)$, we recover the binary operation proposed in Proposition~\ref{prop:maps_group} on the homotopy category. Moreover, Proposition~\ref{prop:general_add} shows that the group object $$ \begin{tikzcd} \tau_m X \ar[r,"\tau_m f"] &\tau_m Y\ar[l,dashed,bend left,"\tau_m s"]\end{tikzcd}$$ as in Proposition~\ref{prop:maps_group} is a grouplike $\mathbb E_\infty$-space over $\tau_m Y$, rather than just an $H$-group object. \subsecl{Paper outline}{subsec:outline} Section~\ref{sec:group_obs} serves as a set-up for the rest of the paper, including both technical results and key examples. In Subsection~\ref{subsecl:proof_general_add}, we give proofs of Corollary~\ref{cor:add_pi_zero}, Proposition~\ref{prop:maps_group}, and Proposition~\ref{prop:general_add}. Subsection~\ref{subsecl:proof_general_add} relies on infinity-categorical machinery, so we summarize the key facts before our proofs. If the reader prefers to take Corollary~\ref{cor:add_pi_zero} and Proposition~\ref{prop:maps_group} for granted, the rest of the paper does not rely heavily on the details of the proofs, although some functoriality results are necessary and can be referenced as needed. Subsection~\ref{subsec:examples} includes the key examples of group structures on sets of vector bundles, which are the focus of the rest of the paper. In addition to the existence of these group structures, the proof of Theorem~\ref{cor:alg_top_agree_somewhat} uses some elementary comparisons between different groups, which we include in Subsection~\ref{subsec:const}. In Section~\ref{sec:rk2p3}, we focus on rank $2$ bundles on $\CP^3$. Subsection~\ref{subsec:classical_horrocks} summarizes Horrocks' construction and reviews the basic properties of the Atiyah--Rees alpha invariant. In Subsection~\ref{subsec:proof_main_contruction}, we prove Proposition~\ref{prop:main_construction}. Subsection~\ref{subsec:proof_rk2_thm} includes the proof of Theorem~\ref{cor:alg_top_agree_somewhat}. The final section, Section~\ref{sec:generalization}, focuses on rank $3$ bundles on $\CP^5$. In Subsection~\ref{subsec:proofrk3p5} we prove Proposition~\ref{prop:main_construction2}. We study these group structures in more detail in Subsection~\ref{subsecl:interesting}. In particular, we show that there exists a rank $2$ bundle $V_0=L_0\oplus L_1$ such that $\G_{V_0}$ contains an element $W=L_2\oplus L_3 \oplus L_4$, where $L_i$ are line bundles, so that the subgroup generated by $W$ is infinite and contains bundles which are not themselves sums of line bundles.
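(Granting Proposition~\ref{prop:main_construction2}(iii), the infinitude of such a subgroup is already visible at the level of Chern classes: since $c_3\: \G_{V_0} \to H^6(\CP^5;\Z)$ is a homomorphism, the $n$-fold sum of $W$ with itself has third Chern class $n\, c_3(W)$, which is nonzero for every $n\neq 0$ whenever $c_3(W)\neq 0$. The substance of Subsection~\ref{subsecl:interesting} is to produce such a $W$ of the stated form and to verify the finer claim about bundles which are not sums of line bundles.)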
We also include Appendix~\ref{app:grothendieck} which supplies details of the pointed Grothendieck construction for spaces. This material is used in Section~\ref{sec:group_obs}, although that section can be read without the appendix. The material is not new, but our exposition consolidates the necessary background and provides references to more comprehensive literature. \subsecl{Acknowledgements}{subsec:acknowledgements} First and foremost I want to thank my Ph.D. advisor, Mike Hopkins, for suggesting this project and for his wisdom and support throughout my Ph.D. program. I am also grateful to Haynes Miller for his mentorship during my time in graduate school; and to both Haynes and Elden Elmanto for serving on my dissertation committee and offering feedback on my thesis write-up. After my move to UCLA, Mike Hill's guidance and encouragement -- mathematical and practical -- were invaluable for me while improving and revising my thesis. This work benefited greatly from my conversations with Aravind Asok, Lukas Brantner, Hood Chatham, Jeremy Hahn, Yang Hu, Brian Shin, Alexander Smith, and Dylan Wilson. While working on this project, the author was supported by the National Science Foundation under Award No.~2202914. \subsecl{Conventions}{subsec:conventions} \begin{itemize} \item ``Vector bundle'' will refer to a complex, topological vector bundle unless otherwise stated. ``Rank'' will always refer to complex rank. \item We write $\s$ for the category of spaces and $\op{Sp}$ for the category of spectra. \item Limits and colimits of spaces are implicitly homotopy limits and colimits. \item We write $\mathcal O(k)$ for the topological line bundle on $\CP^n$ or $\CP^\infty$ with first Chern class equal to $k$. When we consider algebraic bundles, we will also use this notation for the algebraic bundle with the same property. \item Given a classifying space $\BU(n)$, we write $\gamma_n$ for the universal bundle on it. \item We write $\underline{\C}$ for the trivial rank $1$ topological vector bundle on any space. \item Given spaces $X$ and $Y$, we write $[X,Y]$ for homotopy classes of maps with domain $X$ and codomain $Y$. \item Given a space or spectrum $X$, we write $\tau_nX$ for its $n$-th Postnikov section and $\tau_n\: X \to \tau_n X$ for the unit of the reflection onto $n$-truncated objects, evaluated at $X$. \end{itemize} \section{Preliminaries and examples}\label{sec:group_obs} The main objective of this section is to prove Proposition~\ref{prop:maps_group}, Corollary~\ref{cor:add_pi_zero}, and Proposition~\ref{prop:general_add} (Subsection~\ref{subsecl:proof_general_add}), and to apply these to produce group structures on collections of vector bundles in examples of interest (Subsection~\ref{subsec:examples}). In Subsection~\ref{subsecl:proof_general_add}, we prove Proposition~\ref{prop:general_add} first and then deduce Proposition~\ref{prop:maps_group} and Corollary~\ref{cor:add_pi_zero}. Much of this relies on aspects of the Grothendieck construction, as described in detail by Lurie \cite[2.2 and 3.2]{HTT} and summarized in Appendix~\ref{app:grothendieck}. Subsection~\ref{subsec:examples} applies Corollary~\ref{cor:add_pi_zero} to maps from $\CP^3$ and $\CP^5$ into diagrams involving classifying spaces of rank $2$ and $3$ bundles, respectively, giving rise to group structures on certain collections of rank $2$ bundles on $\CP^3$ and rank $3$ bundles on $\CP^5$. We also show that our methods produce multiple distinct group structures on the same set of bundles.
The study of these examples comprises the later sections of the paper. Finally, in Subsection~\ref{subsec:const}, we investigate the relationship between two of the examples from Subsection~\ref{subsec:examples}. The result in this subsection is used in the proof of Theorem~\ref{cor:alg_top_agree_somewhat}. \subsecl{Technical results}{subsecl:proof_general_add} In this subsection we prove Proposition~\ref{prop:general_add}, Proposition~\ref{prop:maps_group}, and Corollary~\ref{cor:add_pi_zero}. We also record several other facts about the additive structures in question, which we will use later to study specific examples. \begin{notation}\label{note:keyproperties} Let $\s$ denote the $\infone$-category of spaces and $\s_*$ that of pointed spaces. For $Y$ a space, let $(\s_{/Y})_*$ denote the $\infone$-category of pointed objects in the $\infone$-category of spaces over $Y$, i.e., spaces over $Y$ together with a choice of section. Let $(\s_*)^Y$ denote the $\infone$-category of functors from the Poincar\'e infinity groupoid of $Y$ to $\s_*$. The following facts are justified in Appendix~\ref{app:grothendieck}. \begin{enumerate} \item Given a map of spaces $f\: X \to Y$ with section $s\: Y \to X$, we can naturally associate a functor $\Groth_*(f)\: Y \to \s_*,$ the {\em pointed straightening of $f$}. Heuristically, $\Groth_*(f)$ takes a point $p$ in $Y$ to the homotopy fiber $f^{-1}(p)$ pointed by $s(p)$. \item The pointed straightening functor $\Groth_*$ participates in an equivalence of $\infone$-categories \begin{center} \begin{tikzcd} ({\s_{/Y}})_* \arrow[r,bend left, "\Groth_*"] & ({\s_*})^Y.\arrow[l,bend left, "\Un_*"] \end{tikzcd} \end{center} (See Corollary~\ref{cor:compatible}.) \item There is a functor $\op{forget}\:\left(\s_{/Y}\right)_* \to \s_{/Y}$ given on objects by $\left(Y \to x \right) \mapsto x$ and a free basepoint functor $+\: \s_{/Y} \to \left(\s_{/Y}\right)_*$ given on objects by $x \mapsto ( Y \to Y \sqcup x),$ participating in an adjunction of $\infone$-categories: \begin{center} \begin{tikzcd} \s_{/Y} \ar[r, bend right, "+" below] \ar[r, phantom, "\dashv" {labl, near end}] & \left(\s_{/Y}\right)_*\,\,. \ar[l, bend right, "\forget " above] \end{tikzcd} \end{center} (See Lemma~\ref{lem:pointed_adjunction}.) \end{enumerate} \end{notation} \begin{proof}[Proof of Proposition~\ref{prop:general_add}] For non-negative integers $a\leq b$, let $\op{Sp}_{[a,b]}$ denote the full subcategory of spectra with homotopy groups zero outside the range $[a,b]$. Similarly, let $\s_{*,[a,b]}$ denote the full subcategory of pointed spaces with homotopy groups zero outside the range $[a,b]$ (i.e., $a$-connective, $b$-truncated spaces). Observe that $\Groth_*(\taum f)$ takes values in $\s_{*,[n,2n-2]},$ the subcategory of pointed spaces which are $n$-connective and $(2n-2)$-truncated. By \cite[Theorem 5.1.2]{MS}, $$\Omega^\infty\: \Sp_{[n,2n-2]} \to \s_{*,[n,2n-2]}$$ is an equivalence of $\infone$-categories. So, the factorization is automatic and there exists some functor of $\infone$-categories $A\: \taum Y \to \operatorname{Sp}$ such that there is an equivalence \begin{equation}\label{eq:deloop_functor}\Omega^\infty A \simeq \Groth_*(\taum f)\end{equation} in the $\infone$-category of such functors. Moreover, this factorization is functorial since we can obtain $A$ by applying an inverse to $\Omega^\infty\: \op{Sp}_{[n,2n-2]} \to \s_{*,[n,2n-2]}$ to $\Groth_*(\taum f)$.
\end{proof} \begin{cor}\label{cor:unstraightened_group_ob} The diagram $$\begin{tikzcd} \tau_m X \arrow[r," \tau_m f"] & \tau_m Y\arrow[l,dashed, bend left, " \tau_m s"] \end{tikzcd}$$ is a group object in the category of $m$-truncated spaces over $\taum Y$, with binary operation given by applying $\Un_*$ to the group object $\Groth_*(\taum f)$. \end{cor} We can now deduce another result stated in the introduction. \begin{proof}[Proof of Corollary~\ref{cor:add_pi_zero}] Let $c\: C\to Y$ be a map from an $m$-skeletal space $C$ to our base space $Y$, where $m:=2n-2$. Let $c_+$ denote the object\begin{center} \begin{tikzcd} C \sqcup \taum Y \arrow[r,"c" below] & \taum Y \arrow[l, dashed, bend right] \end{tikzcd} \end{center} given by applying $+$ as in Notation~\ref{note:keyproperties}(3) to the object $\taum c\: C \to \taum Y $. Proposition~\ref{prop:general_add} implies that the space $$\operatorname{Maps}_{{(\s_*)}^{\taum Y} }\left(\Groth_*(c_+),\Groth_* \left( \taum f\right)\right)$$ possesses the structure of a group-like $\mathbb E_\infty$-space, i.e., an infinite loop space. By Notation~\ref{note:keyproperties}(2) we have natural equivalences of grouplike $\mathbb E_\infty$-spaces \begin{align*}\operatorname{Maps}_{{(\s_*)}^{\taum Y} }\Big(\Groth_*(c_+),\Groth_* \big( \taum f\big)\Big) &\simeq \operatorname{Maps}_{ (\mathcal{S}_{/\taum Y})_*}\big(\Un_* \Groth_*(c_+), \Un_* \Groth_*(\taum f)\big) \\ &\simeq \operatorname{Maps}_{ (\mathcal{S}_{/\taum Y})_*} \big(C \sqcup \taum Y, \taum X\big)\\&\simeq \operatorname{Maps}_{ \mathcal{S}_{/\taum Y}} \big(C, \taum X\big),\end{align*} where the third step is the definition of $c_+$ and Notation~\ref{note:keyproperties}(3). This implies \begin{align*}\pi_0\operatorname{Maps}_{ \mathcal{S}_{/\taum Y}} \big(C, \taum X\big)& \simeq \pi_0\operatorname{Maps}_{ \s_{/Y}} \big( \sk^m C , X\big) \\ &\simeq \pi_0\operatorname{Maps}_{\sy} \big( C , X\big)\\ &\simeq [C,X]_c,\end{align*} giving the group structure on $[C,X]_c$. Note that the identity of $\pi_0\operatorname{Maps}_{ \mathcal{S}_{/\taum Y}} \big(C, \taum X\big)$ is obtained by post-composing $c\: C \to \tau_m Y$ with $\taum s\: \tau_m Y \to \tau_m X$. Tracing through the isomorphisms above, this shows that $s \circ c\: C \to X$ is the identity in $[C,X]_c$. \end{proof} We record one additional result that will be useful later. \begin{lem}\label{lem:loop_map} Suppose that we have a commutative solid diagram \begin{center} \begin{tikzcd} X\ar[dr,"\,\,\,f" right]\ar[rr,"h"] && X',\ar[dl,"f'" left]\\ & Y\ar[ul,bend left,dashed, "s"]\ar[ur, bend right,dashed, "s'" below] & \end{tikzcd} \end{center} where $f$ and $ f'$ are both $n$-connective, and $s$ and $s'$ are sections of $f$ and $f'$, respectively. Then $\Groth_*(\taum h)\: \Groth_*(\taum f) \to \Groth_*(\taum f')$ is a natural transformation whose components are infinite loop maps. \end{lem} \begin{proof} Note that, by Proposition~\ref{prop:general_add}, the functor $\Groth_*(\taum -) \: (\s_{/Y})_* \to (\s_*)^{\tau_m Y}$, when restricted to objects whose structure map is $n$-connective, factors through $\op{Sp}^{\tau_m Y}$. \end{proof} This immediately implies: \begin{cor}\label{cor:functoriality_fixed_base} With set-up as in Lemma~\ref{lem:loop_map}, let $m:=2n-2$. Fix an $m$-skeletal space $C$ and let $c\: C\to Y$ be given. Then the map $h\circ -\: [C,X]_c \to [C,X']_c$ is a group homomorphism. \end{cor} We now explicitly describe the group operation of Corollary~\ref{cor:unstraightened_group_ob} in the homotopy category of spaces by unraveling the binary operation on $\Un_*\Groth_*(\tau_mf)$.
\begin{proof}[Proof of Proposition~\ref{prop:maps_group}] For a spectrum or space $b$, let $\op{F}\: b \sqcup b \to b$ denote the fold map. In the case of a spectrum, identifying $b\times b \simeq b \sqcup b$ and applying $\op{F}$ gives $b$ a group object structure. Recall from the proof of Proposition~\ref{prop:general_add} that Equation~\eqref{eq:deloop_functor} supplied a functor $A\: \taum Y \to \op{Sp}$ such that $\Groth_*(\taum f) \simeq \Omega^\infty A$. The group object structure on $A$ is given by $$A \times A \simeq A \sqcup A \xrightarrow{\op{F}}A.$$ Applying $\Un_*$, we have a homotopy commutative diagram \begin{equation}\label{eq:adds_agree} \begin{tikzcd}[row sep =.3in, column sep = .1in] \taum X \times_{\taum Y} \taum X\arrow[d,"\simeq"]\arrow[r,"\simeq"]& \Un_*\Omega^\infty (A\times A) & \Un_* \Omega^\infty (A \sqcup A)\arrow[dr, phantom,"\star"]\arrow[l,"\simeq"]\arrow[r,"\Un_*\Omega^\infty(\op{F})"]\arrow[d,"\simeq"] & \Un_* \Omega^\infty A\arrow[d,"\simeq"]\\ \taum (X \times_Y X) & &\taum(X) \cup_{\taum(Y)} \taum(X)\arrow[r,"\op{F}"]\arrow[d,"\dagger"]\arrow[dr,phantom,"\star\star"] & \taum X\arrow[d,"\dagger"]\\ \taum(X \cup_Y X)\arrow[u,"\simeq", "\tau_m(sf\times 1 \cup 1 \times sf)" right]\arrow[rr,"*"] & &\taum (\taum(X) \cup_{\taum(Y)} \taum(X))\arrow[r,"\taum\op{F}" ] & \taum X, \end{tikzcd} \end{equation} where the maps labeled $\dagger$ above are the natural maps from a space into its truncation and the map labeled $*$ is the $\taum$-truncation of the map on pushouts induced by the map of spans \begin{equation}\label{eq:star} \begin{tikzcd} X\arrow[d] & Y\arrow[r,"s" above]\arrow[l,"s"]\arrow[d] & X\arrow[d]\\ \taum X & \taum Y\arrow[r,"\taum s" above]\arrow[l,"\taum s"] & \taum X, \end{tikzcd} \end{equation}where the vertical arrows are again the natural maps from a space to its truncation. In Diagram~\eqref{eq:adds_agree}, the arrow $\tau_m(sf\times 1 \cup 1 \times sf)$ is an equivalence by the hypothesis that $f$ is $n$-connective. Inspecting the diagram, it is clear that the two right rectangles labeled $\star$ and $\star\star$ commute up to homotopy, by naturality. One then checks that starting at the bottom left corner, going up and around the large left rectangle produces the map $*$ induced by \eqref{eq:star}. The composite obtained by starting at the upper left-hand corner of the outer rectangle, going down by two, and then right by two computes the proposed operation on $\taum X$ over $\taum Y$. On the other hand, starting in the upper left-hand corner but going first right by two and then down by two computes the group object structure obtained in Corollary~\ref{cor:unstraightened_group_ob}. \end{proof} From this description of the group structure, we obtain group homomorphisms from homotopy classes of maps over one base to homotopy classes of maps over another base. First, note the following elementary lemma.
\begin{lem} A diagram \begin{equation}\label{eq:functoriality_2} \begin{tikzcd} X\ar[d,"f"]\ar[r,"h"] & X'\ar[d,"f'" left] \\ Y\ar[r,"j"] \ar[u, bend left, "s"]& Y'\ar[u, bend right, "s'" right] \end{tikzcd} \end{equation} where $hs \simeq s'j$, $jf \simeq f'h$, and $s$ and $s'$ are sections of $f$ and $f'$, respectively, gives rise to a homotopy commutative diagram \begin{center} \begin{tikzcd} X \times_Y X \ar[d]& X \cup_Y X \ar[l]\ar[r]\ar[d] & X\ar[d] \\ X' \times_{Y'} X' & X' \cup_{Y'} X'\ar[l]\ar[r] & X' \end{tikzcd} \end{center} \end{lem} \begin{cor}\label{cor:group_hom_2} Let $h\: X \to X'$ fit into a diagram as in Equation~\eqref{eq:functoriality_2}. Suppose also that $f$ and $f'$ are $n$-connective and let $m=2n-2$. Let $C$ be $m$-skeletal and let $c\: C \to Y$ be given. Then postcomposition with $h$ induces a group homomorphism $h\circ -\: [C,X]_c \to [C,X']_{j\circ c}.$ \end{cor} \subsecl{Examples of group structures on vector bundles}{subsec:examples} We now discuss the examples that will be the focus of the paper. The first example that we give is the one that corresponds to Horrocks' construction. \begin{example}\label{ex:sum_trivial} The determinant map $\det(\gamma_2)\: \BU(2)\to \BU(1)$ is $4$-connective and admits a section represented by the bundle $\gamma_1\oplus \underline \C$ on $\BU(1)$. If we fix $\mathcal O(a_1)\: \CP^3 \to \BU(1)$ for $a_1\in \mathbb Z$, this gives a group structure on $$\{V\: \CP^3 \to \BU(2) \, | \, c_1(V)=a_1\},$$ with identity $\mathcal O(a_1)\oplus \underline{\C}.$ \end{example} We can also produce a different group structure on the same set of vector bundles. \begin{example}\label{ex:sum_nontrivial} The map $\det(\gamma_2)\: \BU(2) \to \BU(1)$ admits a section represented by $\left(\gamma_1\otimes \mathcal O(\smallminus b)\right) \oplus \mathcal O(b)$, where $b\in \Z$ is arbitrary. If we fix $a_1 \in \mathbb Z$, this gives a group structure on $$\{V\: \CP^3 \to \BU(2) \, | \, c_1(V)=a_1\},$$ with identity $\mathcal O(a_1\smallminus b)\oplus \mathcal O(b).$ \end{example} We will explore the previous two examples and how they are related in Subsection~\ref{subsec:const}, so we establish notation for each. \begin{defn}\label{not:particular_sums} Let $+_{a_1}$ denote the binary operation on rank $2$ bundles on $\CP^3$ with first Chern class $a_1$ given in Example~\ref{ex:sum_trivial}. We write $\G_{a_1}$ for this group. Let $+_{a_1,b}$ denote the binary operation on rank $2$ bundles on $\CP^3$ with first Chern class $a_1$ given in Example~\ref{ex:sum_nontrivial}. We write $\G_{a_1,b}$ for this group. \end{defn} Next, we discuss a group structure for rank $3$ bundles on $\CP^5$, which will be our subject in Section~\ref{sec:generalization}. \begin{example}\label{ex:rk3p5} Consider the following diagram \begin{equation}\label{eq:tilde_diagram1} \begin{tikzcd}[column sep=3em] \tBU(3)\arrow[dr, phantom, near start, "\scalebox{1}{\color{black}$\lrcorner$}"]\arrow[d,"v"]\arrow[r]& \BU(3)\arrow[d,"c_1\times c_2"]\\ \BU(2)\arrow[r,"c_1\times c_2"]\arrow[u,bend left,dashed, "s"] & K(\Z,2) \times K(\Z,4), \end{tikzcd} \end{equation} where: \begin{itemize} \item $\tBU(3)$ is defined as the homotopy pullback of $c_1\times c_2\: \BU(3)\to K(\Z,2) \times K(\Z,4)$ along $c_1 \times c_2\: \BU(2) \to K(\Z,2) \times K(\Z,4)$, and \item the dashed arrow $s$ is induced by the identity map on $\BU(2)$ and a map given representably by $\gamma_2 \oplus \underline{\C}$, where $\gamma_2$ is the universal bundle on $\BU(2)$.
\end{itemize} The map $v$ has the same connectivity as $c_1 \times c_2\: \BU(3) \to K(\Z,2) \times K(\Z,4)$. A Postnikov tower analysis shows that the latter map is $6$-connective. By Corollary~\ref{cor:add_pi_zero}, for any fixed $V_0 \: \CP^5 \to \BU(2)$, the set of homotopy classes of lifts $[\CP^5,\tBU(3)]_{V_0}$ carries a natural group structure induced by an infinite loop structure on $\tau_{10} \tBU(3)$ as a pointed object over $\tau_{10} \BU(2)$. A priori, a map into $\tBU(3)$ consists of a map into $\BU(3)$ and a homotopy witnessing the fact that $c_i(V)=c_i(V_0)$ for $i=1,2$. However, the choices of such a homotopy form a torsor for $H^1(\CP^5;\Z) \times H^3(\CP^5;\Z) \simeq 0$. \end{example} \begin{rmk} Up to homotopy, a choice of line bundle on $\CP^3$ is equivalent to a first Chern class. So, to generalize topological Horrocks' addition to rank $3$ bundles, we could try to work with the set of rank $3$ vector bundles on $\CP^5$ with fixed $c_1$ and $c_2$. This cannot work without a choice of $V_0$. To apply our setup, we would need a section \begin{center} \begin{tikzcd} \BU(3) \arrow[r, "c_1 \times c_2" above] & K(\Z,2) \times K(\Z,4) \arrow[l, dashed, bend left, "\exists ?"], \end{tikzcd} \end{center} but there is a cohomological obstruction: consider the mod $3$ power operations on $H^*(-;\Z/3\Z)$ of the source and target. Recall that, in $H^*(\BU(3);\Z/3\Z)$, $ P^1(c_2) = c_1^2c_2 + c_2^2 -c_1c_3.$ In $H^*(K(\Z,2) \times K(\Z,4);\Z/3\Z)$, $P^1(\iota_4)=0.$ Since $(c_1\times c_2)^*(\iota_{2i}) = c_i$ for $i=1,2$, the existence of a space-level section $\sigma$ for $c_1 \times c_2$ would force $\sigma^*(P^1(c_2))=P^1(\iota_4),$ and we would get that $\iota_2^2\iota_4 + \iota_4^2=\iota_2\,\sigma^*(c_3),$ which is a contradiction, since $\iota_4^2$ does not lie in the ideal generated by $\iota_2$.\end{rmk} Example~\ref{ex:rk3p5} above generalizes as follows. \begin{example}\label{ex:general_example} Let $g\: X \to Y$ be a map of connected spaces. Consider the diagram \begin{center} \begin{tikzcd} X \times_Z Y\ar[r]\ar[d, "p_1"] & Y\ar[d, "\tau_k"] \\ X\ar[r, "\tau_k \circ g"]\ar[u,dashed, bend left, "s" left] & \tau_k Y , \end{tikzcd} \end{center} where $Z:= \tau_k Y$, $p_1$ is the natural projection, and the section $s$ from $X$ to the pullback is induced by the identity on $X$ and $g\:X \to Y$. The map $p_1$ is $(k+1)$-connective since $\tau_k$ is. By Corollary~\ref{cor:unstraightened_group_ob}, $\tau_{2k}p_1\:\tau_{2k}\left(X \times_Z Y\right) \to \tau_{2k} X$ is a group object in $2k$-truncated spaces over $\tau_{2k}X$. So, for any $2k$-skeletal space $C$ with $c\: C\to X$, the set $[C,X\times_Z Y]_c$ inherits a group structure. \end{example} \subsecl{Comparing group structures on rank $2$ bundles: properties of $+_{a_1}$ and $+_{a_1,b}$}{subsec:const} Recall from Definition~\ref{not:particular_sums} that $+_{a_1}$ and $+_{a_1,b}$ are two binary operations (with different identities) on the set of rank $2$ vector bundles over $\CP^3$ with first Chern class equal to $a_1$. We investigate the relationship between these two operations. \begin{lem} Let $V,W,Z$ be rank $2$ vector bundles on $\CP^3$ with first Chern class $a_1$.
Then $$(V+_{a_1}W)+_{a_1,b}Z = V+_{a_1}(W+_{a_1,b} Z).$$ \end{lem} \begin{proof} Consider the diagram: \begin{equation}\label{eq:zig_zag_pushout} \begin{tikzcd}[column sep=2em, row sep=3em] & \BU(1)\arrow[dl,"\,\,\,\,\,\,\,\,\,\,\,\,\, \gamma_1\oplus \underline \C" {below}] \arrow[dr," \gamma_1\oplus \underline \C \,\,\,\,\,\,\,\,\,\,\,\,\," {below}] & & \BU(1)\arrow[dr,"\gamma_1\otimes \mathcal O(\smallminus b) \oplus \mathcal O(b)"]\arrow[dl,"\!\!\!\!\!\!\gamma_1\otimes \mathcal O(\smallminus b) \oplus \mathcal O(b)"]\\ \BU(2)\ar[dr,"\!\!\det \gamma_2"] && \BU(2)\ar[dl,"\det\gamma_2"]\ar[dr,"\det\gamma_2 "] && \BU(2)\ar[dl,"\!\!\!\det \gamma_2 "]\\ & \BU(1) & & \BU(1) \end{tikzcd} \end{equation} The different orders of taking pullbacks (resp. pushouts) of terms in the second row along terms in the last row (resp. first row) are explicitly homotopy equivalent. To simplify notation, let \begin{align*} g&:=\gamma_1\oplus \underline \C & g'&:= \gamma_1 \otimes \mathcal O(\smallminus b) \oplus \mathcal O(b) & d &:=\det \gamma_2\\ B_2&:=\BU(2) &B_1&:=\BU(1). & &\end{align*} We have a homotopy commutative diagram \begin{center} \begin{tikzcd}[column sep=1em] &B_2 \\ B_2\cup_{B_1,g'} B_2 \arrow[ur,"\op{fold}"] & & B_2 \cup_{B_1,g} B_2\arrow[ul, "\op{fold}"]\\ \left(B_2{{\cup}}_{B_1,g} B_2\right)\cup_{B_1,g'} B_2\ar[d, "\simeq_{\leq 6}"]\ar[u,"\op{fold}\cup \op{id}"] & B_2 \cup_g B_2 \cup_{g'} B_2\ar[r,"\simeq"]\ar[dd]\ar[l,"\simeq"] & B_2{{\cup}}_{B_1,g}\left( B_2\cup_{B_1,g'} B_2 \right)\ar[d, "\simeq_{\leq 6}"]\ar[u,"\op{id}\cup \op{fold}"]\\ \left(B_2{\times}_{B_1,d} B_2\right)\times_{B_1,d} B_2 & & B_2{{\times}}_{B_1,d} \left( B_2\times_{B_1,d} B_2\right) \\ &B_2\times_{B_1,d} B_2\times_{B_1,d} B_2 \ar[ur,"\simeq"]\ar[ul,"\simeq"] \end{tikzcd} \end{center} After applying $\pi_0\op{Maps}(\CP^3,-)$ we invert the maps marked $\simeq_{\leq 6}$ and, by Proposition~\ref{prop:maps_group}, equality of the two long composites proves the lemma. \end{proof} This implies: \begin{cor}\label{cor:additons_related_formula} Let $V,W$ be rank $2$ vector bundles on $\CP^3$ with $c_1(V)=c_1(W)=a_1$. Then $$V+_{a_1,b} W= V+_{a_1} W -_{a_1} \left(\mathcal O(a_1-b)\oplus \mathcal O(b)\right).$$ \end{cor} \begin{proof} This is a special case of a general fact. Suppose that $+,*$ are two abelian group structures on a set $S$ such that for all $x,y,z\in S$, $$(x+y)* z = x+(y * z).$$ Let $e_{+}$, $e_*$ be the respective identities. Then \begin{align*} (x+e_{*})*(y+e_{*}) &=(x+e_{*})*(e_*+y)\\ &=x+(e_{*} * e_{*})+y\\ &=x+y+e_*.\end{align*} Replacing $x$ by $x+(-e_{*})$ and $y$ by $y+(-e_{*})$ in the above formula, where $-e_*$ is the additive inverse of $e_*$ with respect to $+$, yields: \begin{align*} x*y &=x-e_*+y-e_*+e_*\\ &=x+y-e_*. \end{align*} We get the result by applying this to the set of vector bundles with first Chern class $a_1$, letting $+=+_{a_1}$ and $*=+_{a_1,b}$, and noting that $e_*=\mathcal O(a_1-b)\oplus \mathcal O(b)$. \end{proof} \section{A topological addition for rank $2$ bundles on $\CP^3$ with fixed $c_1$}\label{sec:rk2p3} In the previous section, we defined the group structures $\G_{a_1}$ and $\G_{a_1,b}$ on the set of rank $2$ bundles on $\CP^3$ with first Chern class equal to $a_1$ (see Example~\ref{ex:sum_trivial}, Example~\ref{ex:sum_nontrivial}, and Definition~\ref{not:particular_sums}).
Our project in this section is to relate these group structures to Horrocks' construction and prove Theorem~\ref{cor:alg_top_agree_somewhat}, which asserts that $+_{a_1}$ produces the same topological bundle as Horrocks' construction when both are defined. We begin in Subsection~\ref{subsec:classical_horrocks} with a review of Horrocks' construction. We also recall the key properties of the Atiyah--Rees $\alpha$-invariant. In Subsection~\ref{subsec:proof_main_contruction}, we prove that $c_2\: \G_{a_1} \to H^4(\CP^3;\Z)$ is a group homomorphism, completing the proof of Proposition~\ref{prop:main_construction}. Verifying additivity of $c_2$ involves only the definition of the group structure and functoriality results about the types of group structures of interest, as given in Corollary~\ref{cor:functoriality_fixed_base}. We also show that $\alpha\: \G_0 \to \Z/2$ is a group homomorphism. In Subsection~\ref{subsec:proof_rk2_thm}, we finally show that topological Horrocks addition and Horrocks' construction agree when both are defined (Theorem~\ref{cor:alg_top_agree_somewhat}). We do this by showing that both constructions have the same effect on complete invariants of rank $2$ bundles on $\CP^3$. In \cite{AR}, Atiyah and Rees study complex rank $2$ topological vector bundles on $\CP^3$. They define a $\Z/2$-valued invariant $\alpha$ of such bundles when $c_1\equiv 0 \pmod 2$ and prove the following result. \begin{thm}[{\cite[Theorems 1.1, 2.8, 3.3]{AR}}]\label{thm:AR_classification} Given $a_1,a_2\in \Z$ with $a_1a_2\equiv 0 \pmod 2$, the number of rank $2$ bundles on $\CP^3$ with $i$-th Chern class $a_i$ is: \begin{itemize} \item equal to $2$ if $a_1\equiv 0 \pmod 2$; and \item equal to $1$ otherwise. \end{itemize} In the first case, a rank $2$ vector bundle on $\CP^3$ is determined by $c_1,c_2,$ and $\alpha$. \end{thm} \begin{rmk} The condition $a_1a_2\equiv 0 \pmod 2$ is necessary and sufficient for two integers to be the Chern classes of a complex rank $2$ topological bundle on $\CP^3$.\end{rmk} So, to show that $V+_{a_1} W \simeq V+_H W$ it suffices to check that the two bundles have the same Chern classes and $\alpha$-invariant. Both $+_{a_1}$ and $+_H$ fix the first Chern class, so it suffices to show that: \begin{itemize} \item[(i)] $c_2(V+_{a_1}W)=c_2(V+_HW)$, and \item[(ii)] $\alpha(V+_{a_1} W)= \alpha(V+_H W)$. \end{itemize} Horrocks and Atiyah--Rees compute the effect of $+_H$ on $c_2$ and $\alpha$ (see Theorem~\ref{cor:alpha_almost_add_alg} below). Item (i) follows from the additivity of $c_2$ for both operations. Checking (ii) is more complicated because the $\alpha$-invariant does not play well with our definition of $+_{a_1}$ when $a_1 \neq 0$. We bootstrap from the case $a_1=0$ to obtain a formula for $\alpha(V+_{a_1} W)$ in terms of $\alpha(V)$ and $\alpha(W)$. This involves the study of the groups $\G_{0,b}$ for $b$ nonzero. \subsecl{Horrocks' construction and the classification of rank $2$ bundles on $\CP^3$}{subsec:classical_horrocks} We first review Horrocks' construction for rank $2$ algebraic vector bundles. \begin{const}[Horrocks' construction \cite{Hor}]\label{alg_horrocks} Let $V_1$ and $V_2$ be rank $2$ locally free sheaves on $\CP^3$.
Suppose that: \begin{itemize} \item we have isomorphisms $\wedge^2 V_1 \simeq \wedge^2 V_2\simeq \mathcal O(n)$; \item we have regular sections\footnote{A section $s$ of $V$ is regular if its vanishing locus is of codimension equal to the dimension of $V$.} $s_i\: \mathcal O \to V_i^*$; and \item the sheaves $\mathcal R_i := \op{coker}(s_i^*\: V_i \to \mc O)$ have disjoint supports. \end{itemize} For each $i=1,2$, the Koszul complex relative to $s_i$ has the form $ 0 \to \mathcal O(n) \to V_i\to \mathcal{O}$ and is exact since $s_i$ is regular. By definition, we have exact sequences $$0\to \mathcal O(n) \to V_i \to \mathcal O \to \mathcal R_i\to 0.$$ The sheaf $V_1+_H V_2$ is defined by the diagram \begin{equation}\label{horrocks_add_defn} \begin{tikzcd} 0 \arrow[r] & \mathcal O(n) \oplus \mathcal O(n) \arrow[r] & V_1 \oplus V_2 \arrow[r]\arrow[from=d] & \mathcal O \oplus \mathcal O \arrow[r]\arrow[from=d,"\Delta"] \arrow[from=dl, phantom, "\scalebox{1}{\color{black}$\llcorner$}" {rotate=180, near start}, color=black] &\mc {R}_1 \oplus\mc{ R}_2 \arrow[r] \arrow[from=d,"\simeq"'] & 0\\ 0 \arrow[r] & \mathcal O(n) \oplus \mathcal O(n) \arrow[r]\arrow[u,"\simeq"]\arrow[d,"\nabla"] &\mc W \arrow[r]\arrow[d] & \mathcal O\arrow[r] & \mc {R}_1 \oplus\mc{ R}_2 \arrow[r] & 0 \\ 0 \arrow[r] & \mathcal O(n) \arrow[r] & V_1 +_{H} V_2 \arrow[r] \arrow[from=ul, phantom, "\scalebox{1}{\color{black}$\lrcorner$}" {rotate=180, near end}, color=black] &\mathcal O\arrow[from=u,"\simeq"]\arrow[r] &\mc {R}_1 \oplus\mc{ R}_2 \arrow[r]\arrow[from=u, "\simeq"] & 0 \\ \end{tikzcd} \end{equation} where $\mathcal W$ is the indicated pullback along the diagonal $\Delta$ and $V_1+_HV_2$ is the pushout along the fold map $\nabla$. The middle and bottom rows in Diagram~\eqref{horrocks_add_defn} are both exact and $V_1+_H V_2$ is locally free of rank $2$ \cite[Theorem 1]{Hor}. Note that, from the bottom exact sequence of Diagram~\eqref{horrocks_add_defn}, we see that $$c_1(V_1 \horro V_2) = c_1 (V_1) = c_1(V_2) =n.$$ \end{const} Now we turn our attention to the $\alpha$-invariant in the statement of Theorem~\ref{thm:AR_classification}. \begin{const}[The $\alpha$-invariant \cite{AR}]\label{def:alpha}Let $V$ be a rank $2$ bundle on $\CP^3$ with $c_1=0$. Such bundles are classified by $\BSU(2)$. The accidental isomorphism $\BSU(2)\simeq BSp(1)$ composed with stabilization gives $$\alpha\:\BSU(2) \to BSp \simeq \Omega^\infty \Sigma^4 KO,$$ where $KO$ denotes the real $K$-theory spectrum. Thus we have a class $\alpha \in KO^4(\BSU(2))$ and we define $$\alpha(V):=p_*V^*(\alpha),$$ where $p_*$ is the $KO$-theory pushforward for the spin manifold $\CP^3$. \end{const} Atiyah and Rees extend $\alpha$ to all bundles with $c_1(V)\equiv 0 \pmod 2$ by letting \begin{equation}\label{eq:alpha_general}\alpha(V):=\alpha\left(V\otimes \mathcal O\left(\frac{c_1(V)}{2}\right)\right).\end{equation} \begin{thm}[{\cite[Theorem 1]{Hor}, \cite[Corollary 5.7]{AR}}]\label{cor:alpha_almost_add_alg} Let $V_1,V_2$ be rank $2$ algebraic bundles on $\CP^3$ with $c_1(V_1)=c_1(V_2)=-m$, with $m\geq 0$.\footnote{This is necessary for $V_i$ to admit regular sections.} Then $$c_2(V_1+_H V_2)=c_2(V_1)+c_2(V_2).$$ Furthermore, suppose that $m=2n$ with $n\geq 0$.
\begin{itemize} \item If $n$ is odd or $n\equiv 0\pmod 4$, then $\alpha(V_1 \horro V_2) = \alpha(V_1) + \alpha(V_2) \in \Z/2\Z.$ \item If $n\equiv 2\pmod 4$, then $\alpha(V_1 \horro V_2) = \alpha(V_1) + \alpha(V_2)+1.$ \end{itemize} \end{thm} \subsecl{Proof of Proposition~\ref{prop:main_construction}}{subsec:proof_main_contruction} In the previous section, we defined the group structure $\G_{a_1}$ on the set of rank $2$ bundles on $\CP^3$ with first Chern class equal to $a_1$ (see Example~\ref{ex:sum_trivial} and Definition~\ref{not:particular_sums}). This gives the group structure required by part (i) of Proposition~\ref{prop:main_construction}. For part (ii), note that for any $f\: X\to Y$ with section $s\: Y \to X$ satisfying the hypotheses of Proposition~\ref{prop:general_add}, and any $c\: C\to Y$ with $C$ an $m$-skeletal space, the identity element is given by $s\circ c$. In this case, $s \circ c \simeq \mathcal O(a_1) \oplus \underline{\C}$. Part (iii) is a consequence of the first part of the following result. \begin{prop}\label{prop:horrocks_add_alpha} Let $V,W\: \CP^3 \to \BU(2)$ be two bundles with $\det V \simeq \det W \simeq \mathcal O(a_1)$. Let $+_{a_1}$ denote topological Horrocks addition as in Definition~\ref{not:particular_sums}. Then: \begin{itemize} \item for any $a_1\in \mathbb Z$, $c_2$ is a homomorphism for $+_{a_1}$; and \item if $a_1= 0$, then $\alpha$ is a homomorphism for $+_0$. \end{itemize} \end{prop} \begin{proof} Let $H\Z$ denote the integral Eilenberg--Mac Lane spectrum. Note that the diagram \begin{center} \begin{tikzcd} \BU(2) \arrow[r, "c_2"]\arrow[d,"\det"]& \Omega^\infty \Sigma^4 H\Z\ar[d] \\ \BU(1) \arrow[r,"0"] \arrow[u,bend left, dashed, "s"] & * \arrow[u,bend right, dashed] \end{tikzcd} \end{center} homotopy commutes, since $s$ represents a bundle with $c_2=0$. By Corollary~\ref{cor:group_hom_2}, we get a group homomorphism $c_2\:[\CP^3,\BU(2)]_{c} \to H^4(\CP^3;\Z).$ We now prove the statement about $\alpha$, using Construction~\ref{def:alpha}. We have a homotopy commutative diagram \begin{equation}\label{eq:alpha_compare} \begin{tikzcd} \BU(2)\arrow[d,"c_1"] & \BSU(2) \arrow[l,"\iota"]\arrow[r,"\alpha"]\arrow[d] & \Omega^\infty \Sigma^4 KO\arrow[d]\\ K(\Z,2)\arrow[u,dashed, bend left]& *\arrow[l]\arrow[u,dashed,bend left]\arrow[r] & *\arrow[u,dashed,bend left]. & \end{tikzcd} \end{equation} All vertical homotopy fibers are $4$-connective, so we can apply Corollary~\ref{cor:add_pi_zero} to all vertical diagrams to obtain group structures on homotopy classes of maps from $\CP^3$ into each diagram. By Corollary~\ref{cor:group_hom_2}, we get group homomorphisms \begin{equation} \begin{tikzcd} {{[\CP^3,\BU(2)]}_{c_1=0} } & {[\CP^3,\BSU(2)]} \arrow[l," \iota\circ (-)"]\arrow[r,"\alpha\circ (-)"] &{[\CP^3,\Omega^\infty\Sigma^4KO]} \simeq KO^4(\CP^3). \end{tikzcd} \end{equation} The map $\iota\circ -$ induces an isomorphism on $\pi_0$ and, for $V\: \CP^3\to \BU(2)$ with $c_1(V)=0$, $$p_*\Big(\alpha \circ \big((\iota\circ(-))^{-1}(V)\big)\Big)=\alpha(V).$$ Since the push-forward $p_*$ is a group homomorphism, we conclude that $\alpha$ is additive for $+_0$, as was to be shown. \end{proof} \subsecl{Proof of Theorem~\ref{cor:alg_top_agree_somewhat}}{subsec:proof_rk2_thm} Let $a_1 \in \Z$ and suppose that $V,W$ are rank $2$ bundles on $\CP^3$ with $c_1(V)=c_1(W)=a_1$. By Proposition~\ref{prop:main_construction}(iii), $c_2(V+_{a_1} W)=c_2(V)+c_2(W)=c_2(V+_H W)$.
In the case $a_1\equiv 1 \pmod 2$, Theorem~\ref{thm:AR_classification} implies $V+_{a_1}W\simeq V+_HW$ and we are done. If $a_1\equiv 0\pmod 2$, we compare $\alpha$-invariants. Let $b=-\frac{a_1}{2}.$ Consider the commutative diagram \begin{equation}\label{eq:two_adds_compare} \begin{tikzcd}[column sep=6em] \BU(2)\arrow[d,"\det(-)" right]\ar[r,"(-)\otimes \mathcal O(\smallminus b)"] & \BU(2) \arrow[d,"\det(-)" left]\\ \BU(1)\arrow[u,dashed,bend left, "(-) \oplus \underline \C"]\ar[r,"\operatorname{id}" below] & \BU(1) \ar[u, dashed, bend right, "\left(-\otimes \mathcal O(\smallminus b)\right)\oplus \mathcal O(b)" right] \end{tikzcd} \end{equation} where we denote each map by the corresponding operation on vector bundles. Diagram~\eqref{eq:two_adds_compare} induces a group isomorphism $\phi\: \G_{a_1} \to \G_{0,b}$ given on bundles by $$V\mapsto V \otimes \mathcal O(\smallminus b).$$ Using the definition of $\alpha$ in Equation~\eqref{eq:alpha_general}, Proposition~\ref{prop:horrocks_add_alpha}, and Corollary~\ref{cor:additons_related_formula}, we see that if $V,W \in \G_{a_1}$ then: \begin{align*}\alpha\left(V+_{a_1} W\right)&=\alpha\left(\phi(V+_{a_1}W)\right) \\ &=\alpha\left(\phi(V)+_{0,b}\phi(W)\right) \\ &=\alpha\left(\phi(V) +_0 \phi(W)-_0 \mathcal O(\smallminus b)\oplus \mathcal O(b)\right) \\ &=\alpha(V)+\alpha(W)-\alpha\left(\mathcal O(\smallminus b)\oplus \mathcal O(b)\right). \end{align*} To compute $\alpha(\mathcal O(b)\oplus \mathcal O(\smallminus b))$, we need an auxiliary definition. For $V$ a rank $2$ bundle on $\CP^3$ with $c_1(V)\equiv 0 \pmod 2$, let $$\Delta(V):= \frac{c_1^2-4c_2}{4}.$$ By \cite[Theorem 7.2]{AR}, if $V$ extends to $\CP^4$ then $\alpha(V)=\left(\Delta(\Delta-1)\right)/12\pmod 2.$ Since $\mathcal O(\smallminus b)\oplus \mathcal O(b)$ extends and $\Delta\left(\mathcal O(b)\oplus \mathcal O(\smallminus b)\right)=b^2,$ we see that \begin{align*}\alpha\left(\mathcal O(\smallminus b)\oplus \mathcal O(b)\right) &=\frac{b^2(b^2-1)}{12}\pmod 2\\ &=\frac{b^2(b+1)(b-1)}{12} \pmod 2\\ &=\frac{b^2(b+1)(b-1)}{4} \pmod 2, \end{align*} where the last equality holds because $b^2(b^2-1)$ is divisible by $12$ for every $b$, so that $\frac{b^2(b^2-1)}{4}-\frac{b^2(b^2-1)}{12}=\frac{b^2(b^2-1)}{6}$ is even. So we must determine when $b^2(b+1)(b-1)$ is divisible by $8$. \begin{itemize} \item If $b\equiv 0 \pmod 4$, then $16$ divides $b^2(b+1)(b-1)$, so $\alpha\left(\mathcal O(b)\oplus \mathcal O(\smallminus b)\right)=0.$ \item If $b$ is odd, then both $b+1$ and $b-1$ are even, and one of $b+1$ and $b-1$ is divisible by $4$; therefore $\alpha\left(\mathcal O(b)\oplus \mathcal O(\smallminus b)\right)=0$. \item If $b\equiv 2\pmod 4$, then $b+1$ and $b-1$ are odd, and $b^2$ is divisible by $4$ but not $8$. So $\alpha\left(\mathcal O(b)\oplus \mathcal O(\smallminus b)\right)=1.$ \end{itemize} Thus we see that $$\alpha\left(V+_{a_1}W\right)=\alpha(V)+\alpha(W)+\epsilon(a_1),$$ where $$\epsilon(a_1):=\begin{cases} 0 & \text{if $a_1 \not \equiv 4\pmod 8$,} \\ 1 & \text{if $a_1\equiv 4\pmod 8$.}\end{cases} $$ Comparing this with Theorem~\ref{cor:alpha_almost_add_alg}, we see that $\alpha\left(V+_{a_1}W\right)=\alpha\left(V+_HW\right)$.\qed \section{Generalization to rank $3$ bundles on $\CP^5$}\label{sec:generalization} We now explore additive structures on rank $3$ bundles on $\CP^5$. Recall that, given a fixed rank $2$ bundle $V_0$ on $\CP^5$, Example~\ref{ex:rk3p5} gives a group $\G_{V_0}$ with underlying set $$\{ V\: \CP^5 \to \BU(3) \, | \, c_i(V)=c_i(V_0), \, i=1,2\}$$ and identity element $V_0 \oplus \underline \C$. Our primary interest here is to better understand these group structures and their properties.
We complete the proof of Proposition~\ref{prop:main_construction2} in Subsection~\ref{subsec:proofrk3p5} by showing that $c_3$ is a group homomorphism. In Subsection~\ref{subsecl:interesting}, we explore the structure of $\G_{V_0}$ for $V_0 = \mathcal O(a) \oplus \mathcal O(b)$. One might hope that $+_{V_0}$ allows for the construction of interesting bundles from simple ones, since this was the case for Horrocks' construction on rank $2$ bundles on $\CP^3$ (see Theorem~\ref{thm:horrocks_generates}). We prove a result along these lines: \begin{prop}\label{prop:not_sum_lb} For infinitely many rank two bundles $V_0 := \mathcal O(a)\oplus \mathcal O(b)$, there exists a bundle $W=\mathcal O(x)\oplus \mathcal O(y) \oplus \mathcal O(z)$ with $W\in \G_{V_0}$ such that the subgroup generated by $W$ contains a bundle which is not itself a sum of line bundles and such that the subgroup generated by $W$ has finite index. \end{prop} The proof of this result is elementary, involving only the study of possible Chern classes of sums of line bundles and the additivity of $c_3$ as proved in Subsection~\ref{subsec:proofrk3p5}. \subsecl{The third Chern class and the structure of $\G_{V_0}$ for rank $3$ bundles}{subsec:proofrk3p5} We first prove the additivity of $c_3$ for $+_{V_0}$ and complete the proof of a result stated in the introduction. \begin{proof}[Proof of Proposition~\ref{prop:main_construction2}] We have already established the group structure in Example~\ref{ex:rk3p5}. The identity element is $V_0 \oplus \underline{\C}$ by Corollary~\ref{cor:add_pi_zero}. To show that $c_3$ is a homomorphism, let $H\Z$ denote the integral Eilenberg--Mac Lane spectrum. Consider the diagram \begin{center} \begin{tikzcd}[row sep=1em] \tBU(3) \arrow[r, "c_3"]\arrow[d]& \Omega^\infty \Sigma^6 H\Z\ar[d] \\ \BU(2) \arrow[r,"0"] \arrow[u,bend left, dashed, "s"] & * \arrow[u,bend right, dashed] \end{tikzcd} \end{center} This diagram is homotopy commutative since $s$ represents a bundle with $c_3=0$. By Corollary~\ref{cor:group_hom_2}, we get a group homomorphism $c_3\:[\CP^5,\tBU(3)]_{c} \to H^6(\CP^5;\Z).$ \end{proof} In \cite{MO}, we prove that rank $3$ bundles on $\CP^5$ are determined by their Chern classes except if $c_1(V)\equiv c_2(V) \equiv 0 \pmod 3$, in which case there are three topologically distinct bundles with the same Chern classes as $V$. This proves the following result. \begin{cor}\label{cor:kernel} The kernel of $c_3\: \G_{V_0} \to \Z$ is: \begin{enumerate} \item trivial if $c_1(V_0)\not\equiv 0 \pmod 3$ or $c_2(V_0)\not\equiv 0 \pmod 3$; and \item isomorphic to $\Z/3$ if $c_1(V_0)\equiv c_2(V_0) \equiv 0 \pmod 3$. \end{enumerate} \end{cor} \begin{rmk}\label{rmk:c3_splits} Note that the image of $c_3$ is an infinite subgroup of $\Z$, since the conditions for three integers to be the Chern classes of a rank $3$ bundle on $\CP^5$ are a finite number of congruences (see \cite[Lemma 2.16]{MO}). In the first case in Corollary~\ref{cor:kernel}, $\G_{V_0}$ is abstractly isomorphic to $\Z$; in the second, $\G_{V_0} \simeq \Z \oplus \Z/3$.\end{rmk} In \cite{MO}, we define a $\Z/3$-valued invariant $\rho$ for rank $3$ bundles $V$ on $\CP^5$ such that $c_1(V) \equiv c_2(V)\equiv 0 \pmod 3$. The invariant $\rho$ can be viewed as a generalization of $\alpha$ for rank $2$ bundles on $\CP^3$: we prove that $c_1,c_2,c_3$, and $\rho$ are complete invariants of complex rank $3$ topological bundles on $\CP^5$.
Additivity of $c_3$ naturally leads to the following: \begin{q} For general $V_0$ with first and second Chern class divisible by $3$, what is a formula for $\rho(V+_{V_0}W)$ in terms of $\rho(V)$, $\rho(W)$, and $\rho(V_0)$? \end{q} Note that a necessary condition for $\rho$ to be a group homomorphism is that $\rho(V_0\oplus \underline{\C})=0$. \subsecl{Constructing rank $3$ bundles on $\CP^5$ from sums of line bundles}{subsecl:interesting} We begin this subsection by showing that there exist infinitely many choices $V_0=\mathcal O(a)\oplus \mathcal O(b)$ on $\CP^5$ such that $\G_{V_0}$ contains a non-identity sum of line bundles. This is a preliminary step toward proving Proposition~\ref{prop:not_sum_lb}. First, we simplify notation. \begin{defn} Given $n_1,\dots, n_k\in \Z$, let $\mathcal O_{n_1,\dots,n_k}:=\mathcal O({n_1})\oplus \dots \oplus \mathcal O({n_k}).$ \end{defn} The condition that $\mathcal O_{x,y,z}\in\G_{\mathcal O_{a,b}}$ is purely algebraic. Since \begin{align*} c_1(\mathcal O_{a,b}) &= a+b,\\ c_1(\mathcal O_{x,y,z}) &= x+y+z, \\ c_2(\mathcal O_{a,b})&=ab, \,\,\,\,\text{ and } \\ c_2(\mathcal O_{x,y,z})&=xy+yz+zx,\end{align*} an element $\mathcal O_{x,y,z} \in \G_{\mathcal O_{a,b}}$ corresponds to an integer solution of the equations \begin{align*}x+y+z&=a+b \\ xy+yz+zx &= ab.\end{align*} Note that, since the identity of $\G_{\mathcal O_{a,b}}$ is $\mathcal O_{a,b} \oplus \underline{\C}$, a sum of line bundles $\mathcal O_{x,y,z}\in \G_{\mathcal O_{a,b}}$ is a nonidentity element if $c_3(\mathcal O_{x,y,z})=xyz$ is nonzero. Define new variables $c:=a-x$ and $d:=b-y$. The first equation is equivalent to $z = c+d$. So we must solve $$(a-c)(b-d) + (b-d)(c+d)+(a-c)(c+d)=ab.$$ Expanding out this equation, we get the equivalent equation $$Q:=c^2+d^2-bd-ac+cd =0.$$ This is the equation of a quadric hypersurface in $\mathbb{P}^3_{a,b,c,d}$, whose rational points can be described easily. First, we parameterize rational lines passing through a fixed rational point $q$ on the variety $V(Q)$ cut out by the equation $Q$ in $\mathbb{P}^3$. Then we solve for the second point of intersection of each line through $q$ with $V(Q)$. Take $q$ to be the point with homogeneous coordinates $[a:b:c:d]= [1:0:0:0]$. We consider the linear span of $q$ and points of the form $[0:l:u:v]$ for $l,u,v$ integers not all zero. Such a line is parameterized by $$[s:t] \mapsto [t:ls: us:vs].$$ Substituting these coordinates in $Q=0$ gives the equation \begin{equation}\label{eq:parameterized_sols}(u^2+v^2-lv+uv)s^2 -uts=0.\end{equation} If this equation is not identically zero, we get solutions as follows. \begin{enumerate} \item For any integers $u,v,l$ we get a solution $$[a:b:c:d]=\big[ u^2+v^2+uv-lv:ul : u^2: uv \big].$$ \end{enumerate} If Equation~\eqref{eq:parameterized_sols} is identically zero, the entire line is contained in $V(Q)$ and we have infinitely many rational solutions. For example: \begin{enumerate} \item[(2)] For $u,v,l$ with $u=0$ and $v(l-v)=0$ satisfied (taking $v=l$), the entire line spanned by $[1:0:0:0]$ and $[0:l:0:v]$ is contained in $V(Q)$, and we have infinitely many rational solutions $[a:b:c:d]=[t:l:0:l]$ for $t,l \in \Z$ both nonzero. \end{enumerate} Returning to the original problem, we solve for $x,y,z,a,b$. In passing to and from projective geometry, we adjust by an overall integer scaling factor $w$.
\begin{enumerate} \item[(1*)] For any $u,l,v,w \in \Z$ we get solutions to our original equations: \begin{align*}x &= w(v^2+uv-lv),\\ y &= wu(l-v),\\ z &= wu(u+v),\\ a&=w(u^2+v^2+uv-lv),\\ b& = wul.\end{align*} \item[(2*)] For any choice of $t,l \in \Z$ we get two solutions: $$x=t, \,\,y = l, \,\, z=0,\,\,a=l,\,\,b=t;\,\, $$ $$x=t,\,\,y=0,\,\,z=l,\,\,a=t,\,\,b=l.$$ These are trivial solutions corresponding to the identity $\mathcal O_{t,l,0} \in \G_{\mathcal O_{t,l}}.$ \end{enumerate} \begin{rmk}\label{rmk:inf_many}The previous computation shows that there are infinitely many $\G_{V_0}$, where $V_0=\mathcal O_{a,b}$, such that $W=\mathcal O_{x,y,z}$ is a non-identity element of $\G_{V_0}$. Explicitly, taking $a,b,x,y,z$ as functions of $l,w,u,v\in \Z$ as indicated in (1*) above, with $x,y,z\neq 0$, provides infinitely many such examples.\end{rmk} \begin{example}\label{example:explicit_small_index} Consider $w=u=v=1$ and $l=0$ in (1*) above. This gives $W=\mathcal O_{2 , \smallminus 1, 2}$ and $V_0= \mathcal O_{3,0}$. By construction, $W \in \G_{V_0}$. Since $c_1c_2$ is even and $c_1, c_2$ are both divisible by $3$, the Schwarzenberger conditions as in \cite[Lemma 2.16]{MO} imply that there exists a bundle $W' \in \mathcal G_{V_0}$ with $c_3(W')=k$ if and only if $k$ is even. By Remark~\ref{rmk:c3_splits}, $\G_{V_0} \simeq \Z\oplus \Z/3$ and, under this identification, $c_3$ is projection onto the first factor. Since $c_3(W)=-4$, this implies that the subgroup generated by $W$ has index $6$. \end{example} \begin{proof}[Proof of Proposition~\ref{prop:not_sum_lb}] Let $V_0={\mathcal O_{a,b}}$ and $W=\mathcal O_{x,y,z}$ be as in Remark~\ref{rmk:inf_many} above with $c_3(W)\neq 0$. By construction, $W\in \G_{V_0}$. Inductively, let $$+_{V_0}^n W:= W+_{V_0}\left(+_{V_0}^{n-1}W\right).$$ Assume for a contradiction that $+_{V_0}^nW$ is a sum of line bundles for all positive integers $n$. Let $a_1,a_2,a_3 \in \Z$ denote the Chern classes of $W$. By the fact that $+_{V_0}$ preserves $c_1$ and $c_2$ and that $c_3\: \G_{V_0} \to H^6(\CP^5;\Z)$ is a group homomorphism, we see that \begin{align*} c_1(+_{V_0}^n W) & = a_1, & c_2(+_{V_0}^nW) &=a_2, \text{ and}& c_3(+_{V_0}^nW) &= na_3. \end{align*} Thus, our assumption implies that the equations \begin{align*} X+Y+Z & = a_1\\ XY+YZ+ZX &= a_2\\ XYZ &= n a_3 \end{align*} have a solution for all $n\in \mathbb{N}$. Choose a prime number $p$ so that $p>3|a_i|$ for $i=1,3$. Take $n=p$ in the above equations. Suppose that $(x_0, y_0, z_0)$ is a solution. Since $c_3(W)=a_3\neq 0$, the third equation implies that $p$ divides one of $x_0$, $y_0$, or $z_0$. Without loss of generality, $p$ divides $x_0$. But note that, since $|x_0|\geq p$, $$|z_0 y_0| \leq |a_3|< \frac{p}{3}$$ and so \begin{align*}|z_0|& < \frac{p}{3}, &|y_0| & < \frac{p}{3}. \end{align*} Since $|x_0|$ is divisible by $p$, it is larger than $|y_0|$, $|z_0|$, and $|y_0+z_0|$. Therefore \begin{align*} |x_0 + y_0 + z_0| & \geq |x_0| - |y_0|-|z_0|\\ & \geq p-\frac{p}{3}-\frac{p}{3}\\ & \geq \frac{p}{3}. \end{align*} On the other hand, $|x_0+y_0+z_0| = |a_1|< \frac{p}{3},$ a contradiction. Now consider the subgroup $H\subset \G_{V_0}$ generated by $W$. Note that, since $c_3(W)\neq 0$, $\ker(c_3) \cap H$ is trivial and we have an exact sequence of groups $$\ker(c_3) \to \G_{V_0}/H \to \Z/c_3(W).$$ Since $\ker(c_3)$ is finite by Remark~\ref{rmk:c3_splits} and $\Z/c_3(W)$ is finite, $\G_{V_0}/H$ is also finite.\end{proof} \begin{qs} This section gives insight into the groups $\G_{V_0}$, but many questions remain.
\begin{enumerate} \item For groups $\G_{V_0} \simeq \Z \oplus \Z/3$, what are generators for the $3$-torsion? \item Are the Horrocks bundles of rank $3$ on $\CP^5$ elements of some $\G_{V_0}$? If so, what subgroups do they generate? \item What can be said about the structure of $\G_{V_0}$ for $V_0$ not a sum of line bundles? \end{enumerate} \end{qs} \begin{appendices} \section{The pointed Grothendieck construction for spaces}\label{app:grothendieck} By a Grothendieck construction, we mean a correspondence between objects over a base, on the one hand, and functors from the base into a category that contains the fibers, on the other. To illustrate this idea, consider the following basic example. To give a map from a set $A$ to a set $B$ is the same as giving a preimage in $A$ for each point in $B$. Given two maps of sets $a\:A\to B$ and $a'\:A'\to B$, a function $f\:A\to A'$ making the diagram $$ \begin{tikzcd} A\ar[rr,"f"]\ar[dr,"a"] && A'\ar[dl,"a'"] \\ &B \end{tikzcd} $$ commute is equivalent to a system of maps between the preimages of points in $B$. So, if $\Set$ denotes a category of small sets, we have an equivalence of categories $$\Set_{/B} \simeq \Fun_{\operatorname{LSet}}(B,\Set),$$ where $\operatorname{LSet}$ denotes a category of large sets. This equivalence is given left-to-right by $$\left(f\: A \to B\right) \mapsto \left( x \in B \mapsto \{x \} \times_B A\right),$$ and right-to-left by $$\left(F\: B \to \Set\right) \mapsto \sqcup_{x\in B} F(x),$$ noting that the latter carries a natural map to $B$. If we attempt the above procedure with sets replaced by spaces, we run into problems. We can take a map of spaces $f\: X \to Y$ and associate the function sending $p\in Y$ to $f^{-1}(p)$. However, extending this to a functor from the groupoid of $Y$ to spaces requires care. There is no canonical way to lift a path in $Y$ to a map of fibers. These issues are resolved by retaining more coherence information, i.e., working with model categories or $\infone$-categories. We will phrase the Grothendieck construction in terms of straightening and unstraightening of $\infone$-categories, in the sense of \cite{HTT}. Recall that our use of this machinery is to make precise the statement that a diagram of spaces \begin{equation}\label{eq:groupobdiag}\begin{tikzcd} X \arrow[r," f"] & Y\arrow[l,dashed, bend left, " s"] \end{tikzcd}\end{equation} such that $f$ is $n$-connective makes $X$ into an infinite loop space over $Y$ after $(2n-2)$-truncation. We wish to convert the study of Diagram~\eqref{eq:groupobdiag} in spaces to the study of its (pointed) homotopy fibers, in a coherent manner. Hence, this appendix focuses on explaining the pointed version of straightening and unstraightening for functors to spaces. \begin{notation}\label{not:groth} We establish terminology and conventions and provide references. \begin{enumerate} \item Let $\sset$ denote the category of simplicial sets and $\operatorname{Cat}_{\Delta}$ the category of simplicially enriched categories. By $\infone$-category, we will mean a simplicial set that satisfies the inner horn condition (a quasi-category, \cite[1.1.2.4]{HTT}).\footnote{There are many other approaches to infinity categories, but this one is sufficient for our purposes.} \item We let $\mathfrak C$ denote the left adjoint to the homotopy coherent nerve functor $N\: \op{Cat}_{\Delta} \to \operatorname{sSet}$ (see \cite[1.1.5]{HTT}).
These functors participate in a Quillen equivalence of simplicial model categories \begin{center} \begin{tikzcd} \operatorname{Cat}_{\Delta} \ar[r, bend right, "N" below] & \sset\ar[l, bend right, "\mathfrak C " above] \end{tikzcd} \end{center} between simplicial sets with the Joyal model structure and simplicially enriched categories with the Bergner model structure (see \cite[Section 1.1.5]{HTT} for discussion; see \cite{Berg2}, \cite{Berg1}, and \cite{Joyal1} for a proof). This sets up an equivalence between two theories of infinity categories -- that of quasi-categories and that of Kan complex enriched categories. \item We let ${\sset}_{/S}$ denote the simplicial model category of simplicial sets over $S$, endowed with the contravariant model structure. We have an associated simplicial category $\operatorname{ RFib}(S)$ obtained by taking fibrant-cofibrant objects. \item Let $\Kan$ denote the $\infone$-category of Kan complexes obtained by taking fibrant-cofibrant objects in the simplicial model category of simplicial sets with the Kan model structure and applying $N$. Given a Kan complex $S$, let $\Kan_{/S}$ denote the $\infone$-overcategory of Kan complexes over $S$. \item Given an $\infone$-category $C$ and a simplicial set $S$, we write $\Fun(S,C)$ for the simplicial set of maps from $S$ to $C$. This is an $\infone$-category since $C$ is, and models functors from $S$ to $C$ (see \cite[1.2.7.2]{HTT}). \item We ignore set-theoretic issues and refer the concerned reader to \cite[1.2.15]{HTT}. \end{enumerate} \end{notation} The fundamental result that we need is the following. \begin{thm}[Lurie, \cite{HTT}]\label{thm:lurie_groth} Let $S$ be a Kan complex. There is an equivalence of $\infone$-categories \begin{equation}\label{diag:Kan} \begin{tikzcd} \Kan_{/S} \arrow[r, bend right, "\St"] & \Fun(S,\Kan) \arrow[l, bend right, "\Un"]. \end{tikzcd} \end{equation} \end{thm} \begin{proof} By \cite[2.2.3.11]{HTT}, for any simplicial set $S$ we have an equivalence of simplicial categories \begin{equation}\label{eq:frak} \begin{tikzcd} \operatorname{RFib}(S) \arrow[r, bend right, "\St"] & ({\op{sSet}}^{ {\mathfrak{C}[S]}^{\op{op}} })^\circ, \arrow[l, bend right, "\Un"] \end{tikzcd} \end{equation} where $\op{sSet}$ has the Kan model structure, ${\op{sSet}}^{ {\mathfrak{C}[S]}^{\op{op}} }$ has the projective model structure, and $(-)^\circ$ indicates that we take fibrant-cofibrant objects. Since $S$ is a Kan complex, \cite[3.1.5.1(A3)]{HTT} implies $\Kan_{/S}\simeq N(\RFib(S))$ as $\infone$-categories. By \cite[4.2.4.4]{HTT}, applying the homotopy coherent nerve to the right-hand side of Equation~\eqref{eq:frak} recovers $\op{Fun}(S^{\op{op}}, \op{Kan}).$ By \cite[Section 57]{Rezkqcat} combined with \cite[Proposition 14.14]{Rezkqcat}, every Kan complex is equivalent to its opposite as an $\infone$-category, so we may replace $S^{\op{op}}$ by $S$; combining these equivalences yields the statement. \end{proof} To obtain a version for spaces over a base with a section, we take pointed objects on either side of Theorem~\ref{thm:lurie_groth}. \begin{defn} Given an $\infone$-category $C$ with final object $t$ \cite[1.2.12]{HTT}, let $C_*$ denote the full $\infone$-subcategory of $\Fun(\Delta^1,C)$ spanned by functors that restrict to $t$ on $\{0\}$.
Explicitly, if $\op{fib}_t$ denotes the (1-categorical) fiber in $\sset$ at $t$, we have that $$C_* := \operatorname{fib}_{t}\big(\Fun(\Delta^1,C) \to \Fun(\Delta^0, C)\big),$$ where the map is given by pre-composition with $\{0\} \simeq \Delta^0 \hookrightarrow \Delta^1.$ \end{defn} \begin{rmk}Given an $\infone$-category $C$ with final object $t$, the $\infone$-category $C_*$ is also modeled by the $\infone$-undercategory $C_{t/}$. See \cite[1.2.9]{HTT} for a discussion of $\infone$-undercategories and \cite[4.2.1.5 and 7.2.2.8]{HTT} for the equivalence.\end{rmk} Since equivalent $\infone$-categories support the same category theory, the equivalences $\St$ and $\Un$ induce equivalences on pointed objects. \begin{lem}\label{lem:equiv_pointed} An equivalence of $\infone$-categories induces an equivalence upon taking pointed objects. \end{lem} \begin{proof} This is the dual statement of \cite[1.2.9.3]{HTT} (dualized, for example, following the method of \cite[1.2.9.5]{HTT}), specialized to the case of a final object. \end{proof} We can apply Lemma~\ref{lem:equiv_pointed} directly to the equivalences of Theorem~\ref{thm:lurie_groth}, but to understand the result we first describe pointed objects on the right-hand side of Diagram~\eqref{diag:Kan}. \begin{lem} For any $\infone$-category $C$ with a final object $t$, the $\infone$-category $\Fun(S,C)$ has as a final object the constant functor at $t$. There is an equivalence of $\infone$-categories $\Fun(S,C)_*\simeq \Fun(S,C_*)$. \end{lem} \begin{proof} The first statement follows from the dual of \cite[5.1.2.3]{HTT}. For the second, recall that $\Fun(S,C)_*$ is modeled by the fiber at the final object of the map $$ \Fun\left(\Delta^1,\Fun(S,C)\right) \to \Fun\left(\Delta^0, \Fun(S,C)\right)$$ given by restricting to $\{0\} \hookrightarrow \Delta^1$. Moreover, there is an adjunction $ S \times -\dashv \Fun(S,-)$ as enriched functors $\sset \to \sset$, since $\Fun(S,-)$ is an internal hom in the monoidal category $(\sset, \times, \Delta^0)$. So we get isomorphisms of simplicial sets: \begin{align*}&\operatorname{fib}\left( \Fun\left(\Delta^1,\Fun(S,C)\right) \to \Fun\left(\Delta^0, \Fun(S,C)\right)\right) \\ &\simeq \operatorname{fib}\left(\Fun\left(\Delta^1\times S,C\right) \to \Fun \left(\Delta^0\times S,C \right)\right) \\ &\simeq \operatorname{fib}\left(\Fun\left(S, \Fun(\Delta^1, C)\right) \to \Fun\left( S,\Fun(\Delta^0,C)\right) \right).\end{align*} Since right adjoints preserve limits we have that: \begin{align*}\operatorname{fib}\big(\Fun\left(S, \Fun(\Delta^1, C)\right) \!\to\! \Fun\left( S,\Fun(\Delta^0,C)\right)\big) \simeq \Fun\left(S, \op{fib} \left(\Fun(\Delta^1,C)\! \to\! \Fun(\Delta^0, C)\right) \right).\end{align*} Tracing through our equivalences, we have shown that $\Fun(S,C)_* \simeq \Fun(S, C_*)$. \end{proof} \begin{cor}\label{cor:compatible} Straightening and unstraightening induce mutually inverse equivalences of $\infone$-categories $\St_*\: {(\Kan_{/S})}_* \to \Fun(S,\Kan_*)$ and $\Un_*\:\Fun(S,\Kan_*) \to {(\Kan_{/S})}_* .$ \end{cor} We record one more technical result about pointed objects in $\infone$-categories. \begin{lem}\label{lem:pointed_adjunction} Let $C$ be an $\infone$-category with a final object $t$ and with coproducts.
Then there is an adjunction of $\infone$-categories \begin{center} \begin{tikzcd} C \ar[r, bend right, "+" below] \ar[r, phantom, "\dashv" {labl, near end}] & C_* \ar[l, bend right, "\forget " above] \end{tikzcd} \end{center} \end{lem} \begin{proof} The forgetful functor $\op{F}\:\big(t \to y\big) \mapsto y$ defines a map of $\infone$-categories. We have an $\infone$-categorical coproduct $\sqcup$ in $C$ \cite[4.4.1]{HTT}; for an object $c \in C$, we can consider the coproduct $c \sqcup t,$ which carries a natural map $t \to c \sqcup t$. To show $+$ is left-adjoint to $\op{F}$, we use \cite[5.2.2.8]{HTT}: it suffices to provide a unit transformation $$u\:\operatorname{Id}_C \to \op{F} \circ +$$ in the $\infone$-category $\Fun(C,C)$ such that a certain composite, as given in \cite[5.2.2.7]{HTT}, is an isomorphism in the homotopy category. We have a candidate given by $u_c\: c \to c \sqcup t$, the structure map associated to the coproduct. We must show the composite \begin{center} \begin{tikzcd} \Maps_{C_*}(t \to c \sqcup t, t \to y) \arrow[r," \op{apply \,\, F}"] & \Maps_{C}\big(\op{F}(t \to c \sqcup t), \op{F}(t \to y)\big) \arrow[r, "-\circ u_c"] &\Maps_C(c,y)\end{tikzcd}\end{center} is an isomorphism in the homotopy category of spaces for any choice of $c$ and $y$. Combining \cite[5.5.5.12]{HTT} with \cite[Lemma 7.2.2.8; Proposition 4.2.1.5]{HTT}, we have an equivalence of spaces: $$\Maps_{C_*}(t \to c \sqcup t, t \to y) \simeq \operatorname{hofib}\Big(\Maps_C(c \sqcup t, y) \to \Maps_C(t,y) \Big),$$ where the arrow $\Maps_C(c \sqcup t, y) \to \Maps_C(t,y) $ is induced by precomposing with the given map $t \to c \sqcup t,$ and the fiber is taken over the given $t \to y$. We have a homotopy commutative diagram of spaces: \begin{center} \begin{tikzcd} \Maps_C(c \sqcup t, y) \arrow[r]\arrow[d,"\simeq"]& \Maps_C(t,y)\arrow[d,"="] \\ \Maps_C(c,y) \times \Maps_C(t,y) \arrow[r, "\pi_2"] &\Maps_C(t,y). \end{tikzcd} \end{center} So \begin{align*}\Maps_{C_*}\big(t \to c \sqcup t, t \to y\big) & \simeq \operatorname{hofib}\big(\pi_2\: \Maps_C(c,y) \times \Maps_C(t,y) \to \Maps_C(t,y)\big)\\ & \simeq \Maps_C(c,y).\end{align*} Under this identification, we have a homotopy commutative diagram \begin{center} \begin{tikzcd}[column sep = .2in] \Maps_{C_*}(t \to c \sqcup t, t \to y)\arrow[d,"\operatorname{apply \,\,\, F}" left]\arrow[r,"\simeq"]& \op{fib}\left(\left(\Maps_C(c,y) \times \Maps_C(t,y)\right) \xrightarrow{\pi_2} \Maps_C(t,y)\right) \arrow[d]\arrow[r,"\simeq"] & \Maps_C(c,y) \arrow[dl,"1\times (t \to y)"] \arrow[ddl,bend left,"1"below]\\ \Maps_{C}( c \sqcup t, y)\arrow[r,"\simeq"] \arrow[d,"-\circ u_c" left] &\Maps_C(c,y) \times \Maps_C(t,y) \arrow[d, "\pi_1" left] \\ \Maps_C(c,y)\arrow[r,"="] & \Maps_C(c,y) \end{tikzcd} \end{center} Since the rightmost map is a weak homotopy equivalence, as are all the horizontal arrows, so is the left vertical composite. \end{proof} \end{appendices} \bibliographystyle{abbrv}
\section{Introduction} The notion of shadowing plays an important role in the general qualitative theory of dynamical systems. In a dynamical system with the shadowing property, we can find an orbit that remains close to a numerical solution for a long time. So dynamical systems with the shadowing property behave nicely under computer simulations, in the sense that pseudo-orbits generated by the computer can be regarded as true orbits, provided that the error of each step, measured by a distance (a metric) on the space of calculation, is sufficiently small. Das et al.~\cite{DAS2013149} gave a purely topological definition for this notion, which is equivalent to the metric one in the case that the phase space is a compact metric space. It is well known that many dynamical properties of dynamical systems with the shadowing property are equivalent. On the other hand, a dynamical system with the shadowing property and its phase space each impose properties on the other; for example, the identity map on a compact space has the shadowing property if and only if the phase space is totally disconnected. A really convincing theory of topological dynamics exists only with the assumption that the phase space $X$, in addition to being metric, is also compact. Most general results concerning chaotic properties, like positive entropy, are obtained under this hypothesis \cite{MR3539720,MOOTHATHU20112232,Wu2016,Wu2018TheCP,wu2019ijbc}. Sometimes one prefers to consider a dynamical system on a non-metrizable topological space. On the other hand, many dynamical concepts in the topological theory of dynamical systems are defined using the `distance' between sets and points. However, for general topological spaces such distance- or size-related concepts cannot be defined unless we have somewhat more structure than what the topology itself provides. This issue is resolved if we consider completely regular, not necessarily metrizable, topological spaces equipped with a structure, called a uniformity, enabling us to control the distance between points in these spaces. Das et al.~\cite{DAS2013149} generalized the usual definitions in metric spaces of expansivity, shadowing, and chain recurrence for homeomorphisms to topological spaces. We~\cite{WU2019145} introduced the topological concepts of weak uniformity, uniform rigidity, and multi-sensitivity and obtained some equivalent characterizations of uniform rigidity. Then, we \cite{wu-ijbc2} proved that a point transitive dynamical system is either sensitive or almost equicontinuous. Recently, we \cite{ahmadi3,ahmadi2} generalized the concepts of entropy points, expansivity, and the shadowing property for dynamical systems to uniform spaces and obtained a relation between the topological shadowing property and positive uniform entropy. Shah et al. \cite{Shah2016} showed that a dynamical system on a totally bounded uniform space which is topologically shadowing, mixing, and topologically expansive has the topological specification property. Then, we \cite{ahmadi} proved that a dynamical system with ergodic shadowing is topologically chain transitive. Good and Mac\'{\i}as \cite{good} obtained some equivalent characterizations and iteration invariance of various definitions of shadowing in the compact uniform setting, generalizing the compact metric setting. For more results on shadowing, transitivity, and chain properties, one is referred to \cite{ahmadi4,ahmadi3,MOOTHATHU20112232,RICHESON2008251,WWL2018} and references therein.
This paper studies topological definitions of chain transitivity, total chain transitivity, and chain mixing for dynamical systems with the topological shadowing property. It shows that topological chain recurrence, topological chain transitivity, and topological chain mixing are equivalent on a connected compact uniform space, and it uses the topological shadowing property to characterize totally disconnected uniform spaces. \section{Basic definitions and preliminaries}\label{2} A \textit{uniform space} is a set with a uniform structure defined on it. A {\it uniform structure} $\mathscr{U}$ on a space $X$ is defined by the specification of a system of subsets of the product $X\times X$ satisfying the following axioms: \begin{itemize} \item[U1)] for any $E_1,E_2\in\mathscr{U}$, the intersection $E_1\cap E_2$ is also contained in $\mathscr{U}$, and if $E_1\subset E_2$ and $E_1\in\mathscr{U}$, then $E_2\in\mathscr{U}$; \item[U2)] $\bigcap \mathscr{U} \supset \Delta_X = \{(x,x)~|~ x\in X\}$, i.e., every set $E\in\mathscr{U}$ contains the diagonal $\Delta_X$; \item[U3)] if $E\in\mathscr{U}$, then $E^{\mathrm{T}} = \left\{(y,x)~|~(x,y)\in E\right\} \in\mathscr{U}$; \item[U4)] for any $E\in\mathscr{U}$, there exists $\hat{E}\in\mathscr{U}$ such that $\hat{E}\circ \hat{E} \subset E$, where $$ \hat{E}\circ \hat{E} =\left\{(x,y)~|~ \text{ there exists a } z\in X \text{ such that }(x,z)\in \hat{E} \text{ and } (z,y)\in \hat{E}\right\}. $$ \end{itemize} The elements of $\mathscr{U}$ are called \textit{entourages} of the uniformity. A \textit{uniformity base} or a \textit{basis of entourages} on a set $X$ is a family of subsets of $X\times X$ such that the same three conditions U2), U3) and U4) are satisfied, and which satisfies the conditions for a filter base (i.e. the intersection of every pair of members of the family contains a member of the family). Given a uniformity $\mathscr{U}$ on $X$ we can, of course, take the whole of $\mathscr{U}$ as a base, in this sense. More usefully, we can take the symmetric entourages of $\mathscr{U}$ as a base, i.e., the entourages $E$ such that $E=E^\mathrm{T}$. If $(X,\mathscr{U})$ is a uniform space, then the {\it uniform topology} on $X$ is the topology in which a neighborhood base at a point $x\in X$ is formed by the family of sets $E[x]$, where $E$ runs through the entourages of $X$, and $E[x]=\{y\in X~|~(x,y)\in E\}$ is called the {\it cross section} of $E$ at $x$. By saying that a topological space is {\it uniformizable} we mean, of course, that there exists a uniformity such that the associated uniform topology is the given topology. It can be shown that a topological space is uniformizable if and only if it is completely regular. A mapping $f:X\longrightarrow Y$ from a uniform space $X$ into a uniform space $Y$ is {\it uniformly continuous} if the inverse image $(f\times f)^{-1}(E)$ is an entourage of $X$ for each entourage $E$ of $Y$. Throughout this paper, assume that all uniform spaces are Hausdorff, i.e., $\bigcap \mathscr{U}=\Delta_{X}$, and let $\mathbb{N}=\{1, 2, 3, \ldots\}$, $\mathbb{N}_0=\{0, 1, 2, \ldots\}$, and $$ E^n:=E\circ E\circ\dots \circ E~~(n \textrm{ times}), $$ for any $E\subset X\times X$. A {\it dynamical system} (briefly, {\it system}) is a pair $(X, f)$, where $X$ is a uniform space and $f:X\longrightarrow X$ is a uniformly continuous map. For $x\in X$ and $U, V\subset X$, let $$ N_f(x,U)=\{n\in\mathbb{N}_0~|~ f^n(x)\in U\} \mbox{ and } N_f(U,V)=\{n\in\mathbb{N}_0~|~U\cap f^{-n}(V)\neq\emptyset\}.
$$ An infinite subset $A\subset\mathbb{N}_0$ is {\it relatively dense} (or {\it syndetic}) if there exists $k>0$ such that $\{n, n + 1, \dots, n + k\}\cap A\neq\emptyset$ for all $n\in \mathbb{N}_0$ (this means that the gaps are bounded); and is {\it thick} if $A$ intersects every syndetic subset of $\mathbb{N}_0$. Let $(X, f)$ be a dynamical system. A point $x\in X$ is \begin{itemize} \item[(1)] {\it recurrent} if $N_f(x, U)\setminus\{0\}\neq\emptyset$ for every neighborhood $U$ of $x$; \item[(2)] {\it minimal} (or {\it almost periodic}) if $N_f(x, U)$ is syndetic for every neighborhood $U$ of $x$; \item[(3)] {\it non-wandering} if $N_f(U, U)\setminus\{0\}\neq\emptyset$ for any neighborhood $U$ of $x$. \end{itemize} Denote the sets of all minimal, all recurrent, and all non-wandering points of $f$ by $M(f)$, $R(f)$, and $\Omega(f)$, respectively. It is easy to see that $\Omega(f)$ is a closed invariant subset of $X$, and $M(f)\subset R(f)\subset \Omega(f)$. A dynamical system $(X, f)$ is \begin{itemize} \item[(1)] {\it non-wandering} if ${N_f(U, U)\setminus\{0\}}\neq\emptyset$ for every nonempty open subset $U$ of $X$; \item[(2)] {\it transitive} if $N_f(U, V)\neq\emptyset$ for any pair of nonempty open subsets $U$, $V$ of $X$; \item[(3)] {\it totally transitive} if $f^n$ is transitive for any $n\in\mathbb{N}$; \item[(4)] {\it weakly mixing} if $f\times f$ is transitive. \end{itemize} Clearly, a dynamical system $(X, f)$ is non-wandering if and only if $\Omega(f)=X$. Let $(X,f)$ be a dynamical system and let $D$ and $E$ be entourages of $X$. A {\it $(D,f)$-chain} of length $n$ is a sequence $\Gamma=\{x_i\}_{i=0}^{n}$ such that $(f(x_i),x_{i+1})\in D$ for $i=0, \ldots, n-1$. An infinite $(D, f)$-chain is called a {\it $(D, f)$-pseudo-orbit}. If there is no danger of confusion, we write $D$-chain and $D$-pseudo-orbit instead of $(D, f)$-chain and $(D, f)$-pseudo-orbit, respectively. A $D$-pseudo-orbit $\Gamma=\{x_i\}$ is {\it $E$-shadowed} by a point $y\in X$ if $(f^n(y),x_n)\in E$ for all $n \in \mathbb{N}_0$. For $E\in\mathscr{U}$, use the symbol $\mathcal{O}_{E}(f,x,y)$ for the set of all $E$-chains $\{x_i\}_{i=0}^{n}$ of $f$ with $x_0=x$ and $x_n=y$ for some $n\in \mathbb{N}$. For any $x,y\in X$, we write $x\rightsquigarrow_E y$ if $\mathcal{O}_{E}(f,x,y)\neq\emptyset$ and write $x\rightsquigarrow y$ if $\mathcal{O}_{E}(f,x,y)\neq\emptyset$ for every $E\in\mathscr{U}$. We write $x\leftrightsquigarrow y$ if $x\rightsquigarrow y$ and $y\rightsquigarrow x$. The set $\{x\in X~|~ x\leftrightsquigarrow x\}$ is called the \textit{chain recurrent set} of $f$ and is denoted by $CR(f)$. A dynamical system $(X,f)$ has the {\it topological shadowing property} \cite{DAS2013149} if for every entourage $E$ of $X$, there exists an entourage $D$ such that every $D$-pseudo-orbit is $E$-shadowed by some point $y$ in $X$. It is observed that for general topological spaces, metric shadowing and topological shadowing are independent concepts \cite{DAS2013149}. However, for compact metric spaces, metric shadowing and topological shadowing are equivalent.
A dynamical system $(X,f)$ is \begin{itemize} \item[(1)] {\it topologically chain transitive} if, for any entourage $E$ of $X$ and any two points $x, y\in X$, $\mathcal{O}_{E}(f,x,y)\neq\emptyset$, or equivalently $x\rightsquigarrow_E y$; \item[(2)] {\it topologically chain recurrent} if $CR(f)=X$; \item[(3)] {\it totally topological chain transitive} if $f^n$ is topologically chain transitive for any $n\in\mathbb{N}$; \item[(4)] {\it topologically chain mixing} if, for any two points $x,y\in X$ and any entourage $D$ of $X$, there exists $N\in \mathbb{N}$ such that for any $n\geq N$, there exists a $D$-chain from $x$ to $y$ of length $n$. \end{itemize} A point $x\in X$ is an {\it equicontinuous point} for $f$, and we say that $f$ is {\it equicontinuous} at $x$, if for every entourage $E$ of $X$, there exists an entourage $D$ so that $(x,y)\in D$ implies $(f^n(x),f^n(y))\in E$ for all $n\in\mathbb{N}_0$. A dynamical system $(X, f)$ is {\it equicontinuous} if, for every entourage $E$ of $X$, there exists an entourage $D$ such that $(f\times f)^n(D)\subset E$ for all $n\in\mathbb{N}_0$. Note that in the case of compact metric spaces, for every entourage $E$ of $X$, there exists $\varepsilon>0$ such that $d^{-1}([0,\varepsilon])\subset E$. Therefore, the above definitions coincide with the usual ones in compact metric spaces. However, this does not hold if $X$ is not compact. For example, consider the entourage $E=\{(x,y)\in\mathbb{R}^2~|~|x-y|<e^{-x^2}\}$ of $\mathbb{R}$. There exists no $\varepsilon>0$ such that $d^{-1}([0,\varepsilon])\subset E$ (see \cite[Example~12]{DAS2013149}). \section{Topological chain transitivity} This section is devoted to characterizing some topological chain properties for dynamical systems on compact uniform spaces. In particular, it is proved that a compact dynamical system is topologically chain mixing if and only if it is totally topological chain transitive. \begin{lem}\label{Iterated-Lemma} Let $(X, f)$ be a dynamical system on a uniform space $(X, \mathscr{U})$. Then, $(X, f)$ has the topological shadowing property if and only if $(X, f^{n})$ has the topological shadowing property for all $n\in \mathbb{N}$. \end{lem} \noindent\textbf{Proof.}\quad From the definition of topological shadowing, this holds trivially. \hspace{\stretch{1}}$\blacksquare$ \begin{lem}\label{L-E-Lemma} Let $(X, f)$ be a topologically chain transitive system on a uniform space $(X, \mathscr{U})$ and $E\in \mathscr{U}$. Then, there exists $\iota_{E}\in \mathbb{N}$ such that, for any $y\in X$, $\iota_{E}$ is the greatest common divisor of the lengths of all $E$-chains from $y$ to itself. \end{lem} \noindent\textbf{Proof.}\quad Fix $x \in X$ and let $\iota_{E}(x)$ be the greatest common divisor of the lengths of $E$-chains from $x$ to itself. For any $y \in X$ and any $E$-chain $\{y_{0} = y, y_{1}, \dots, y_{n} =y\}$ from $y$ to itself, since $(X, f)$ is topologically chain transitive, there exists an $E$-chain $\{x_{0} = x, x_{1}, \dots, x_k=y, x_{k+1},\dots, x_{j}=x\}$ from $x$ to $y$ and back to $x$. Clearly, $j=m\iota_{E}(x)$ for some $m\in \mathbb{N}$. Meanwhile, noting that the $E$-chain $\{x_{0} = x, x_{1}, \ldots, x_{k}=y, y_{1}, \ldots, y_{n} = y, x_{k+1}, \ldots, x_{j}=x\}$ from $x$ to itself has length $m\iota_{E}(x)+n$, one has that $m\iota_{E}(x)+n$ is necessarily a multiple of $\iota_{E}(x)$. This implies that $\iota_{E}(x)$ divides $n$. Hence $\iota_{E}(x)$ divides the greatest common divisor $\iota_{E}(y)$ of the lengths of $E$-chains from $y$ to itself; exchanging the roles of $x$ and $y$ gives the reverse divisibility, so $\iota_{E}(y)=\iota_{E}(x)$ and this common value $\iota_{E}$ is independent of the chosen point. \hspace{\stretch{1}}$\blacksquare$ \begin{lem}\cite[Theorem 1.0.1]{Ramirez2005}\label{Prime} Let $a_1, \ldots, a_n\in \mathbb{N}$.
If $(a_1,\ldots, a_n)=1$, then there exists $N\in \mathbb{N}$ such that any integer $s\geq N$ is representable as a non-negative integer combination of $a_1, \ldots, a_n$. \end{lem} Let $f: (X, \mathscr{U})\longrightarrow (X, \mathscr{U})$ be a topologically chain transitive map and $E\in \mathscr{U}$. Define a relation `$\sim_{E}$' on $X$ by setting $x\sim_{E} y$ if and only if there exists an $E$-chain from $x$ to $y$ of length a multiple of $\iota_{E}$. \begin{itemize} \item Reflexive property: For any $x\in X$, it is clear that $x\sim_{E} x$; \item Transitive property: If $x\sim_{E} y$ and $y\sim_{E} z$, then, by concatenating chains, $x\sim_{E} z$; \item Symmetry property: If $x\sim_{E} y$, then there exists an $E$-chain $\Gamma_1$ from $x$ to $y$ of length a multiple of $\iota_{E}$. Since $f$ is topologically chain transitive, there exists an $E$-chain $\Gamma_2$ from $y$ to $x$ of length $m$. By concatenating $\Gamma_1$ and $\Gamma_2$, we obtain an $E$-chain from $x$ to $x$; from Lemma~\ref{L-E-Lemma}, its length, and hence $m$, is a multiple of $\iota_{E}$. That is, $y\sim_{E} x$. \end{itemize} Therefore, $\sim_{E}$ is an equivalence relation on $X$. If $x\sim_{E} y$, from the symmetry property of $\sim_{E}$, it follows that there exists an $E$-chain $\Gamma'$ from $y$ to $x$ having length a multiple of $\iota_{E}$. Then, for any $E$-chain $\Gamma$ from $x$ to $y$ of length $m$, concatenating $\Gamma$ and $\Gamma'$ yields an $E$-chain from $x$ to $x$; from Lemma~\ref{L-E-Lemma}, it follows that $m$ is a multiple of $\iota_{E}$. This means that if $x\sim_{E} y$, then every $E$-chain from $x$ to $y$ must have length a multiple of $\iota_{E}$. For any $x\in X$, let $[x]_{E}$ be the equivalence class of $x$ for $\sim_{E}$. Since $(X, f)$ is topologically chain transitive, there exists an $E$-chain $\Gamma$ from $f^{\iota_{E}}(x)$ to $x$ of length $m$. Clearly, $\Gamma_1:=\{x, f(x), \ldots, f^{\iota_{E}}(x)\}$ is an $E$-chain from $x$ to $f^{\iota_{E}}(x)$. Concatenating $\Gamma_1$ and $\Gamma$ yields an $E$-chain from $x$ to $x$ of length $m+\iota_{E}$; since $m+\iota_{E}$ is a multiple of $\iota_{E}$, $m$ is as well. Thus, $f^{\iota_{E}}(x)\in [x]_{E}$, i.e., $f^{\iota_{E}}([x]_{E})\subset [x]_{E}$. For any $0\leq i<j<\iota_{E}$, it is clear that $f^{i}(x)\nsim_{E}f^{j}(x)$ (if $f^{i}(x)\sim_{E} f^{j}(x)$, then the length $j-i$ of the $E$-chain $\{f^{i}(x), f^{i+1}(x), \ldots, f^{j}(x)\}$ must be a multiple of $\iota_{E}$, which is impossible). Therefore, there exist $\iota_E$ equivalence classes for $\sim_{E}$, $f$ cycles among the classes periodically, and every class is invariant under $f^{\iota_E}$. In fact, $X/\sim_{E}=\{[x]_{E}, [f(x)]_{E}, \ldots, [f^{\iota_{E}-1}(x)]_{E}\}$ for any $x\in X$. Similarly, we can define another equivalence relation `$\sim$' on $X$ by saying that $x \sim y$ if and only if $x \sim_{E} y$ for all $E\in\mathscr{U}$. For any $x\in X$, let $[x]$ denote the equivalence class of $x$ for $\sim$. \begin{prop}\label{Open-Closed} Let $(X, f)$ be a topologically chain transitive system on a uniform space $(X, \mathscr{U})$. Then, for any $E \in \mathscr{U}$ and any $x\in X$, $[x]_{E}$ is both open and closed, and $[x]$ is closed. \end{prop} \noindent\textbf{Proof.}\quad Choose $\hat{E}\in\mathscr{U}$ such that $\hat{E}^2\subset E$. For any $y\in [x]_{E}$, since $f$ is topologically chain transitive, there exists an $\hat{E}$-chain $\Gamma=\{x_0,\dots, x_{n}\}$ from $x$ to $y$. Noting that $\Gamma$ is also an $E$-chain, one has that $n$ is a multiple of $\iota_E$.
For any $z\in\hat{E}[y]$, from $(f(x_{n-1}), y)\in \hat{E}$, it follows that $(f(x_{n-1}), z)\in \hat{E}^{2}\subset E$. This means that $\{x_0, x_1, \ldots, x_{n-1}, z\}$ is an $E$-chain from $x$ to $z$ having length a multiple of $\iota_{E}$, i.e., $\hat{E}[y]\subset [x]_{E}$. Thus, $[x]_{E}$ is open. For any $y\in \overline{[x]_{E}}$, there exists $\hat{y}\in \hat{E}[y]\cap [x]_{E}$. Since $f$ is topologically chain transitive, there exists an $\hat{E}$-chain $\hat{\Gamma}=\{\hat{x}_0,\dots, \hat{x}_{m}\}$ from $x$ to $\hat{y}$. Noting that $\hat{y}\in [x]_{E}$ and that $\hat{\Gamma}$ is also an $E$-chain, one has that $m$ is a multiple of $\iota_E$. From $(\hat{y}, y)\in \hat{E}$, it follows that $(f(\hat{x}_{m-1}), y)\in \hat{E}^{2}\subset E$. This means that $\{\hat{x}_0, \hat{x}_1, \ldots, \hat{x}_{m-1}, y\}$ is an $E$-chain from $x$ to $y$ having length a multiple of $\iota_{E}$, i.e., $y\in [x]_{E}$. Thus, $\overline{[x]_{E}}=[x]_{E}$, i.e., $[x]_{E}$ is closed. Finally, $[x]$ is also closed, since $[x]=\bigcap_{E\in \mathscr{U}}[x]_{E}$. \hspace{\stretch{1}}$\blacksquare$ \begin{lem}\label{l3}\cite[Proposition 8.16]{MR1687407} Let $\mathscr{A}$ be an open covering of a compact uniform space $X$. Then there exists an entourage $D$ such that $\mathfrak{C}(D)=\{D[x]~|~x\in X\}$ refines $\mathscr{A}$, i.e., each of the uniform neighbourhoods $D[x]$ is contained in some member of $\mathscr{A}$. \end{lem} \begin{prop}\label{Le=1} Let $(X, f)$ be a topologically chain transitive system on a compact uniform space $(X, \mathscr{U})$ and $E\in\mathscr{U}$. If $f^{\iota_E}$ is topologically chain transitive, then $\iota_E=1$. \end{prop} \noindent\textbf{Proof.}\quad Suppose on the contrary that $\iota_{E}> 1$ and let $X=\bigcup_{i=1}^{\iota_{E}} A_{i}$, where the $A_{i}$'s are the equivalence classes for $\sim_{E}$. By Proposition~\ref{Open-Closed}, the family $\mathscr{A}=\{A_1,\dots, A_{\iota_{E}}\}$ is an open cover of $X$. This, together with Lemma~\ref{l3}, implies that there exists an entourage $V\subset E$ such that $\mathfrak{C}(V)=\{V[x]~|~x\in X\}$ refines $\mathscr{A}$. Fix $x\in A_i$ and $y\in A_j$ for $i\neq j$. Since $f^{\iota_{E}}$ is topologically chain transitive, there exists a $(V, f^{\iota_{E}})$-chain $\{x_{0} = x, x_{1}, \dots , x_{n} = y\}$ from $x$ to $y$. Since $(f^{\iota_{E}}(x_0),x_1)\in V$ and $f^{\iota_{E}}(x_0)\in A_i$, by the construction of $V$ we have $x_1\in A_i$. Therefore, by induction we conclude that $y=x_n\in A_i$, which is a contradiction. \hspace{\stretch{1}}$\blacksquare$ \begin{lem}\label{Prime-PO} Let $(X, f)$ be a topologically chain transitive system on a uniform space $(X, \mathscr{U})$ and $E\in \mathscr{U}$. If $\iota_{E}=1$, then for any $x\in X$, there exist two $E$-chains from $x$ to itself with relatively prime lengths. \end{lem} \noindent\textbf{Proof.}\quad Let $L_{E}:=\{n\in \mathbb{N}~|~ \text{there exists an } E\text{-chain from } x \text{ to } x \text{ of length } n\} =\{n_1, n_2, \ldots, n_{k}, \ldots\}$ $(n_1<n_2<\cdots <n_k<\cdots)$. For any $k\in \mathbb{N}$, let $\Gamma_k$ be an $E$-chain from $x$ to itself of length $n_{k}$. From the proof of Lemma~\ref{L-E-Lemma}, it follows that the greatest common divisor of $\{n_1, n_2, \ldots, n_{k}, \ldots\}$ is equal to $\min\{(n_1, n_2), (n_1, n_2, n_3), \ldots\}=1$. This implies that there exist $n_1, n_2, \ldots, n_k\in L_{E}$ such that $(n_1, n_2, \ldots, n_k)=1$. Let $p=(n_1, n_2, \ldots, n_{k-1})$ and $p_{i}=\frac{n_i}{p}$ $(1\leq i\leq k-1)$. Then, $(p_1, p_2, \ldots, p_{k-1})=1$.
By Lemma~\ref{Prime}, there exist $\alpha_1, \alpha_2, \ldots, \alpha_{k-1}\in \mathbb{N}$ such that $(\alpha_1p_1+\alpha_2p_2+\cdots +\alpha_{k-1}p_{k-1}, n_{k})=1$. By concatenating $\alpha_1$ copies of $\Gamma_1$, $\alpha_2$ copies of $\Gamma_2$, \ldots, and $\alpha_{k-1}$ copies of $\Gamma_{k-1}$, we obtain an $E$-chain $\Gamma$ from $x$ to $x$, i.e., $\Gamma=\underbrace{\Gamma_1\cdots \Gamma_1}_{\alpha_1} \underbrace{\Gamma_2\cdots \Gamma_2}_{\alpha_2}\cdots \underbrace{\Gamma_{k-1}\cdots \Gamma_{k-1}}_{\alpha_{k-1}}$. Clearly, $\Gamma$ is an $E$-chain from $x$ to itself of length $q=(\alpha_1p_1+\alpha_2p_2+\cdots +\alpha_{k-1}p_{k-1})p$. Moreover, $(q, n_{k})=1$, as $(p, n_{k})=1$. This means that $\Gamma$ and $\Gamma_{k}$ are two $E$-chains from $x$ to itself with relatively prime lengths. \hspace{\stretch{1}}$\blacksquare$ \begin{lem}\label{length<M} Let $(X, f)$ be a dynamical system on a compact uniform space $(X, \mathscr{U})$. If $(X, f)$ is topologically chain transitive, then for each $E\in\mathscr{U}$, there exists $M>0$ such that for each $x,y\in X$, there exists an $E$-chain from $x$ to $y$ of length less than or equal to $M$. \end{lem} \noindent\textbf{Proof.}\quad Fix $E\in\mathscr{U}$ and choose an entourage $\hat{E}\in\mathscr{U}$ such that $\hat{E}^2\subset E$. By compactness, there exist points $x_1,x_2,\dots,x_n\in X$ such that $X=\bigcup_{i=1}^{n}\hat{E}[x_i]$. Since $(X, f)$ is topologically chain transitive, for any $i,j\in\{1,2,\dots,n\}$, there exists an $\hat{E}$-chain from $x_i$ to $x_j$ of length $m_{i,j}$. Put $M=\max \{m_{i,j}+1~|~i,j\in\{1,2,\dots,n\}\}$. If $x,y\in X$, then there exist $k,l\in\{1,2,\dots,n\}$ such that $f(x)\in \hat{E}[x_k]$ and $y\in\hat{E}[x_l]$. Choose an $\hat{E}$-chain $\{x_k=z_0,z_1,\dots,z_{m_{k,l}}=x_l\}$ from $x_{k}$ to $x_{l}$ of length $m_{k, l}$. Then, we obtain an $E$-chain $\{x,z_0,z_1,\dots,z_{m_{k,l}-1},y\}$ from $x$ to $y$ of length less than or equal to $M$. \hspace{\stretch{1}}$\blacksquare$ \begin{thm}\label{1} A dynamical system $(X, f)$ on a compact uniform space $(X, \mathscr{U})$ is totally topological chain transitive if and only if it is topologically chain mixing. \end{thm} \noindent\textbf{Proof.}\quad By definition, if $f$ is topologically chain mixing, then it is totally topological chain transitive. Conversely, assume that $f$ is totally topological chain transitive, and fix any $E\in \mathscr{U}$ and $x, y\in X$. Applying Proposition~\ref{Le=1} and Lemma~\ref{Prime-PO} yields that there exist two $E$-chains $\Gamma_1$ and $\Gamma_2$ from $x$ to itself with relatively prime lengths. By concatenating copies of these loops and applying Lemma~\ref{Prime}, we can get an $E$-chain from $x$ to itself of length $s$ for any $s> N$, for some $N\in \mathbb{N}$. By Lemma~\ref{length<M}, there exists some $M\in \mathbb{N}$ such that between any two points in $X$, there exists an $E$-chain of length less than or equal to $M$. By adding a loop at $x$ to a chain from $x$ to $y$, we can get an $E$-chain from $x$ to $y$ of any length greater than $M + N$, implying that $f$ is topologically chain mixing. \hspace{\stretch{1}}$\blacksquare$ The proof of the following proposition is similar to Theorem \ref{1}. \begin{prop}\label{p1} Let $(X, f)$ be a dynamical system on a connected compact uniform space $(X, \mathscr{U})$. If $(X, f)$ is topologically chain transitive, then it is topologically chain mixing. \end{prop} \noindent\textbf{Proof.}\quad For any $E\in\mathscr{U}$, we show that $\iota_E=1$.
If $\iota_E >1$, then $X=\bigcup_{i=1}^{\iota_{E}} A_{i}$, where the $A_{i}$'s are the equivalence classes with respect to $\sim_E$; these are nonempty, open and closed subsets of $X$ by Proposition~\ref{Open-Closed}, which contradicts the connectedness of $X$. An argument similar to the proof of Theorem~\ref{1} shows that $f$ is topologically chain mixing. \hspace{\stretch{1}}$\blacksquare$ \begin{cor}\label{c1} Let $(X, f)$ be a dynamical system on a connected compact uniform space $(X, \mathscr{U})$. Then, the following statements are equivalent: \begin{enumerate}[(1)] \item $f$ is topologically chain recurrent; \item $f$ is topologically chain transitive; \item $f$ is totally topological chain transitive; \item $f$ is topologically chain mixing. \end{enumerate} \end{cor} \noindent\textbf{Proof.}\quad Clearly $(4) \Longrightarrow (3) \Longrightarrow (2) \Longrightarrow (1)$. By Proposition~\ref{p1}, $(2)\Longrightarrow (4)$. It suffices to prove that $(1)\Longrightarrow (2)$. Suppose that $f$ is topologically chain recurrent and $E\in\mathscr{U}$. We say that $x$ and $y$ are $E$-chain equivalent if $x\rightsquigarrow_E y$ and $y\rightsquigarrow_E x$. Since $f$ is topologically chain recurrent, this is an equivalence relation. Let $x$ and $y$ be $E$-chain equivalent. Choose an entourage $D\in \mathscr{U}$ such that $D^2\subset E$ and $(f\times f)(D)\subset E$. Now we show that $x$ is $E$-chain equivalent to every $z\in D[y]$. Let $\{x_0=x,x_1,\dots,x_n=y\}$ be an $E$-chain from $x$ to $y$ and $\{z_0=y,z_1,\dots ,z_m=y\}$ be a $D$-chain from $y$ to itself. Then $\{x_0=x,x_1,\dots,x_n=y=z_0,z_1,\dots,z_{m-1},z\}$ is an $E$-chain from $x$ to $z$. Similarly, let $\{y=y_0,y_1,\dots,y_p=x\}$ be an $E$-chain from $y$ to $x$. Then, $\{z,z_1,\dots, z_m=y=y_0,y_1,\dots,y_p=x\}$ is an $E$-chain from $z$ to $x$. Thus, $x$ is $E$-chain equivalent to $z$, implying that the equivalence class $[x]$ is open and therefore also closed. This, together with the connectedness of $X$, implies that $X=[x]$. Therefore, $f$ is topologically chain transitive. \hspace{\stretch{1}}$\blacksquare$ The next example shows that Corollary \ref{c1} does not hold for the non-connected case. \begin{ex}\label{ex1}\cite{ahmadi} Let $\mathbb{P}=\mathbb{R}\setminus \mathbb{Q}$. Suppose that $\textbf{a}=\{a_i\}_{i\in\mathbb{Z}}\subset\mathbb{P}$ is an increasing bi-sequence for which there exists a positive integer $k$ such that $a_i+1=a_{i+k}$ for all $i\in\mathbb{Z}$. Put $$ U_{\mathbf{a}}=\bigcup_{i\in\mathbb{Z}}[\{(a_i,a_i)\}\cup (a_i,a_{i+1})\times(a_i,a_{i+1})] $$ \begin{figure}[ht] \centerline{\includegraphics[scale=0.12]{unif.jpg}} \caption{Filter base for the Michael line uniformity} \end{figure} (see Figure~1). Then the family $\mathcal{B}=\{U_{\textbf{a}}\}$ is a filter base, and the uniformity generated by this filter base generates a topology in which any point of $\mathbb{P}$ is isolated. Let $\mathbb{S}^1$ be the unit circle. Consider the uniformity $\mathscr{U}$ on $\mathbb{S}^1$ -- just taking the projection modulo $1$ -- and define $f:\mathbb{S}^1\longrightarrow \mathbb{S}^1$ by $f(x)=x+\alpha$, $\alpha\in\mathbb{P}$. Then, it can be verified that $(\mathbb{S}^1,f)$ is totally topological chain transitive, topologically chain mixing and topologically chain recurrent, but does not have the topological shadowing property and is not mixing. \end{ex} \section{Topological shadowing property} This section obtains some basic properties of the non-wandering set $\Omega(f)$.
In particular, it is proved that a compact dynamical system $(X, f)$ shares the topological shadowing property with its restricted system $(\Omega(f), f|_{\Omega(f)})$. \begin{prop}\label{Infinite-Omega} Let $(X, f)$ be a dynamical system on a uniform space $(X, \mathscr{U})$ and $\Omega(f)\neq \emptyset$. Then, for any $x\in\Omega(f)$ and any entourage $U\in \mathscr{U}$, $N_{f}(U[x], U[x])$ is an infinite set. \end{prop} \noindent\textbf{Proof.}\quad If $x$ is a periodic point, then clearly $N_{f}(U[x], U[x])$ is infinite. Suppose $x$ is not a periodic point. Then, for any fixed $N\in \mathbb{N}$, $f^{i}(x)\neq x$ for all $1\leq i\leq N$. Since $(X, \mathscr{U})$ is Hausdorff, there exists an entourage $\hat{U}\subset U$ such that $\hat{U}[f^{i}(x)]\cap \hat{U}[x]=\emptyset$ for all $1\leq i\leq N$. This, together with the continuity of $f$, yields that there exists an entourage $D\subset \hat{U}$ such that, for any $1\leq i\leq N$, $$ f^{i}(D[x])\subset \hat{U}[f^{i}(x)], $$ implying that $$ f^{i}(D[x])\cap D[x]\subset \hat{U}[f^{i}(x)]\cap \hat{U}[x]=\emptyset. $$ Since $x\in \Omega(f)$, there exists $n>N$ such that $f^{n}(D[x])\cap D[x]\neq \emptyset$. As $D[x]\subset U[x]$ and $N$ was arbitrary, $N_{f}(U[x], U[x])$ is infinite. \hspace{\stretch{1}}$\blacksquare$ \begin{prop} Let $(X, f)$ be a dynamical system on a uniform space $(X, \mathscr{U})$ and $\Omega(f)\neq \emptyset$. If $(X, f)$ has the topological shadowing property and $x\in\Omega(f)$, then for any entourage $U\in \mathscr{U}$, there exists a natural number $k$ and a point $w\in U[x]$ such that $f^{nk}(w)\in U[x]$ for all $n\in\mathbb{N}$. \end{prop} \noindent\textbf{Proof.}\quad Choose an entourage $E\in\mathscr{U}$ such that $E^2\subset U$ and, by the topological shadowing property, take an entourage $D\subset E$ such that every $D$-pseudo-orbit can be $E$-shadowed by some point in $X$. Pick an entourage $\hat{D}\in \mathscr{U}$ with $\hat{D}^2\subset D$. Since $x\in \Omega(f)$, there exists $z\in \mathrm{Int}(\hat{D}[x])\cap f^{-k}(\mathrm{Int}(\hat{D}[x]))$ for some $k\in\mathbb{N}$. If we write $\Gamma=\{z,f(z),\dots, f^{k-1}(z)\}$, then $\Gamma\Gamma\Gamma\cdots$ is an infinite $D$-pseudo-orbit of $f$, as $(f^{k}(z),z)\in \hat{D}^2\subset D$. Let $w\in X$ be a point which $E$-shadows this pseudo-orbit. Then, $(z,f^{nk}(w))\in E$ for all $n\in \mathbb{N}_0$. This, together with $(x, z)\in \hat{D}$, implies that $(x,f^{nk}(w))\in \hat{D}\circ E\subset E^2\subset U$, i.e., $f^{nk}(w)\in U[x]$. \hspace{\stretch{1}}$\blacksquare$ \begin{cor}\label{c2} Let $(X, f)$ be a dynamical system on a uniform space $(X, \mathscr{U})$ and $\Omega(f)\neq \emptyset$. If $(X, f)$ has the topological shadowing property and $x\in\Omega(f)$, then for any open set $U$ containing $x$, $k\mathbb{N}_0\subset N_f(U,U)$ for some $k\in \mathbb{N}$. \end{cor} \begin{prop}\label{lem2} Let $(X, f)$ be a dynamical system on a uniform space $(X, \mathscr{U})$. If $(X, f)$ has the topological shadowing property and is totally transitive, then it is weakly mixing. \end{prop} \noindent\textbf{Proof.}\quad Since $f$ is totally transitive, it is clear that $\Omega(f)\neq \emptyset$. Given any two nonempty open subsets $U$ and $V$ of $X$, from Corollary \ref{c2}, it follows that there exists $k\in\mathbb{N}$ such that $k\mathbb{N}_0\subset N_f(U,U)$. Since $f^k$ is transitive, $k\mathbb{N}_0\cap N_f(U,V)\neq\emptyset$. Therefore, $N_f(U,U)\cap N_f(U,V)\neq\emptyset$, implying that $f$ is weakly mixing.
\hspace{\stretch{1}}$\blacksquare$ The {\it $\omega$-limit set} $\omega(x, f)$ of $x$ consists of all the limit points of $\{f^{n}(x)~|~n\in \mathbb{N}_0\}$, i.e., $$ \omega(x, f)=\{y\in X~|~\exists n_{k}\nearrow +\infty \text{ with } f^{n_k}(x)\rightarrow y\}. $$ Let $\omega(f)=\bigcup_{x\in X}\omega(x, f)$. Clearly, $\omega(f)\subset \Omega(f)$ and $\omega(f)\neq\emptyset$ for every compact dynamical system $(X, f)$. \begin{prop}\label{Minimal} Let $(X, f)$ be a dynamical system on a compact uniform space $(X, \mathscr{U})$. If $(X, f)$ has the topological shadowing property and $x\in\Omega(f)$, then for any entourage $U\in \mathscr{U}$, there exists a natural number $k$ and a point $y\in U[x]\cap M(f)$ such that $f^{nk}(y)\in U[x]$ for all $n\in\mathbb{N}$. \end{prop} \noindent\textbf{Proof.}\quad Choose an entourage $E\in\mathscr{U}$ such that $E^3\subset U$ and take an entourage $D\subset E$ such that every $D$-pseudo-orbit can be $E$-shadowed by some point in $X$. Pick an entourage $\hat{D}\in \mathscr{U}$ with $\hat{D}^2\subset D$. Since $x\in \Omega(f)$, there exists $z\in \mathrm{Int}(\hat{D}[x])\cap f^{-k}(\mathrm{Int}(\hat{D}[x]))$ for some $k\in\mathbb{N}$. If we write $\Gamma=\{z,f(z),\dots, f^{k-1}(z)\}$, then $\Gamma\Gamma\Gamma\cdots$ is an infinite $D$-pseudo-orbit of $f$, as $(f^{k}(z),z)\in \hat{D}^2\subset D$. Let $w\in X$ be a point which $E$-shadows this pseudo-orbit. Then, $(z,f^{nk}(w))\in E$ for all $n\in \mathbb{N}_0$. This, together with $(x, z)\in \hat{D}$, implies that $(x,f^{nk}(w))\in \hat{D}\circ E\subset E^2$ for all $n\in \mathbb{N}_0$. Let $y\in\overline{\{f^{nk}(w)~|~n\in \mathbb{N}_0\}}$ be a minimal point for $f^{k}$ (existence is by Zorn's lemma and the compactness of $X$). Clearly, $y\in M(f)$. For any $m\in \mathbb{N}_0$, there exists $n_{m}\in \mathbb{N}_0$ such that $(f^{n_mk}(w), f^{mk}(y))\in E$, implying that $(x, f^{mk}(y))\in E^2 \circ E=E^3\subset U$, i.e., $f^{mk}(y)\in U[x]$. \hspace{\stretch{1}}$\blacksquare$ The following is an immediate corollary of Proposition~\ref{Minimal} and \cite[Lemma 2.8]{akin2001}, which states that if the minimal points for $f$ and $g$ are dense in $X$ and $Y$ respectively, then the minimal points of $f\times g$ are dense in $X\times Y$. \begin{cor}\label{Omega=M} Let $(X, f)$ be a dynamical system on a compact uniform space $(X, \mathscr{U})$. If $(X, f)$ has the topological shadowing property, then $\Omega(f)^n=\overline{M(f^{(n)})}$ for any $n\in \mathbb{N}$. \end{cor} A uniform space $(X, \mathscr{U})$ is \begin{enumerate}[(1)] \item {\it connected} if $X$ contains no subset that is both open and closed, other than the empty set and the full set; \item {\it totally disconnected} if all the connected subspaces of $X$ are one-point sets. \end{enumerate} We can define an equivalence relation `$\sim$' on $X$ by setting $x\sim y$ if there exists a connected subspace of $X$ containing both $x$ and $y$. The equivalence classes are called the {\it connected components} of $X$, or simply {\it components}. Clearly, $X$ is totally disconnected if and only if each connected component of $X$ is a singleton. Recall that a locally compact Hausdorff topological space $X$ is {\it totally disconnected} if and only if it has a basis of its topology consisting of compact open sets. Suppose that $\mathscr{B}$ is a basis of entourages for a uniform space $X$. An injection $f$ of $X$ into itself is an {\it isobasism with respect to $\mathscr{B}$} if, for every entourage $V$ of $\mathscr{B}$, $(x, y)\in V$ is equivalent to $(f(x), f(y)) \in V$.
When $f$ maps $X$ onto itself, this condition is equivalent to $(f\times f) (V)= V$. \begin{lem}\label{isom}\cite[Theorem~8]{rhodes_1956} If $G$ is a uniformly equicontinuous group of maps of a uniform space $X$ onto itself, then there exists a basis of entourages $\mathscr{B}$ on $X$ such that every map $f$ of $G$ is a $\mathscr{B}$-isobasism. \end{lem} \begin{thm} Let $(X,f)$ be a surjective equicontinuous dynamical system on a compact uniform space $(X, \mathscr{U})$. Then, $(X, f)$ has the topological shadowing property if and only if $X$ is totally disconnected. \end{thm} \noindent\textbf{Proof.}\quad By Lemma \ref{isom}, there exists a basis of entourages $\mathscr{B}$ on $X$ such that $f$ is a $\mathscr{B}$-isobasism. First assume that $f$ has the topological shadowing property. Suppose on the contrary that $X$ is not totally disconnected. Then, $X$ has a non-degenerate component $C$. Let $x$ and $y$ be two distinct points of $C$. Since $X$ is Hausdorff, there exists an entourage $E\in\mathscr{B}$ such that $(x,y)\notin E$. Choose $\hat{E}\in\mathscr{B}$ such that $\hat{E}^2\subset E$. By the topological shadowing property, there exists $D\in\mathscr{B}$ such that every $D$-pseudo-orbit can be $\hat{E}$-shadowed by some point in $X$. Since $C$ is connected, there exists a sequence $\{x=x_0,x_1,\dots,x_n=y\}$ in $C$ such that $(x_i,x_{i+1})\in D$ for all $0\leq i \leq n-1$. Let $y_i=f^i(x_i)$ ($0\leq i \leq n$). Then, for any $0\leq i \leq n-1$, we have $(f^{i+1}(x_{i+1}),f^{i+1}(x_i))\in D$ by applying Lemma~\ref{isom}, implying that $(y_{i+1},f(y_i))\in D$. This means that $\{y_0, y_1, \dots, y_n\}$ is a $D$-chain, which extends to a $D$-pseudo-orbit by following the forward orbit of $y_n$; hence there exists a point $z\in X$ such that $(y_i,f^i(z))\in\hat{E}$ for all $0\leq i \leq n$. Since $f$ is a $\mathscr{B}$-isobasism and $(f^n(z),y_n)=(f^n(z),f^n(x_n))\in\hat{E}$ with $\hat{E}\in \mathscr{B}$, we get $(z, x_{n})\in \hat{E}$. This, together with $(z,x_0)=(z,y_0)\in \hat{E}$, implies that $(x,y)=(x_0,x_n)\in\hat{E}\circ\hat{E}\subset E$, which is a contradiction. Conversely, assume that $X$ is totally disconnected. Given any entourage $E\in\mathscr{U}$, choose $\hat{E}\in\mathscr{U}$ such that $\hat{E}^2\subset E$. Since $X$ is totally disconnected, for any $z\in X$, there exists an open and compact subset $W_z$ of $X$ such that $z\in W_z\subset \mathrm{Int}(\hat{E}[z])\subset \hat{E}[z]$. By the compactness of $X$, there exist $z_1, \dots, z_m\in X$ such that $X=\bigcup_{j=1}^{m}W_{z_j}$. Applying Lemma \ref{l3} yields that there exists an entourage $D\in\mathscr{B}$ with $D\subset E$ which refines the open cover $\{W_{z_1}, W_{z_2}, \ldots, W_{z_m}\}$. Let $\{x_0,x_1,\dots\}$ be a $D$-pseudo-orbit of $f$. For any fixed $n\in \mathbb{N}_0$, from $(f(x_n),x_{n+1})\in D$, it follows that there exists $1\leq i\leq m$ such that $\{f(x_n),x_{n+1}\}\subset W_{z_i}$. This, together with $(f^2(x_{n-1}),f(x_n))\in D$ (by the definition of isobasism), implies that $f^2(x_{n-1})\in W_{z_i}$. Going backward inductively, we have $f^{n+1}(x_0)\in W_{z_i}$. Thus, $\{f^{n+1}(x_0),x_{n+1}\}\subset W_{z_i}\subset\hat{E}[z_i]$, implying that $(f^{n+1}(x_0),x_{n+1})\in E$. This means that every $D$-pseudo-orbit of $f$ can be $E$-shadowed by its starting point. Therefore, $(X, f)$ has the topological shadowing property. \hspace{\stretch{1}}$\blacksquare$ Let $(X,f)$ be a compact dynamical system on a uniform space $(X, \mathscr{U})$ and $D\in \mathscr{U}$.
Define a relation $\backsimeq_{D}$ on $\Omega(f)$ by $x\backsimeq_D y$ if and only if there exist $D$-chains from $x$ to $y$ and from $y$ to $x$ in $\Omega(f)$. From the definition of non-wandering points, it can be verified that for any $x\in \Omega(f)$ and any $E\in \mathscr{U}$, there exists an $E$-chain from $x$ to itself. This implies that $\backsimeq_{D}$ is an equivalence relation on $\Omega(f)$. For any $x\in \Omega(f)$, let $[x]_{D}$ be the equivalence class of $x$ for $\backsimeq_{D}$ in $\Omega(f)$. For any $y\in [x]_{D}$, there exist $D$-chains $\Gamma_1$ and $\Gamma_2$ in $\Omega(f)$ such that $\Gamma_1=\{x_0=x, x_1, \ldots, x_{n-1}, x_{n}=y\}$ and $\Gamma_2=\{y_0=y, y_1, \ldots, y_{m-1}, y_{m}=x\}$. From $y\in D[f(x_{n-1})]\cap f^{-1}(D[y_1])$, it follows that there exists an entourage $\hat{D}\subset D$ such that $\hat{D}[y]\subset D[f(x_{n-1})]\cap f^{-1}(D[y_1])$. Clearly, for any $z\in \hat{D}[y]\cap \Omega(f)$, $\hat{\Gamma}_1=\{x, x_1, \ldots, x_{n-1}, z\}$ and $\hat{\Gamma}_2=\{z, y_1, \ldots, y_{m-1}, x\}$ are $D$-chains in $\Omega(f)$, implying that $\hat{D}[y]\cap \Omega(f)\subset [x]_{D}$. Thus, $[x]_{D}$ is open in $\Omega(f)$. Now take an entourage $\hat{D}\subset D$ such that $(f\times f)^{2}(\hat{D})\subset D$. From Proposition~\ref{Infinite-Omega}, it follows that there exists $n>2$ such that $f^{n}(\hat{D}[x])\cap \hat{D}[x]\neq \emptyset$. Then, there exists $z\in \hat{D}[x]$ such that $f^{n}(z)\in \hat{D}[x]$. Clearly, $(f^{2}(z), f^{2}(x))\in D$ by the choice of $\hat{D}$. This implies that $\{f(x), f^{2}(z), \ldots, f^{n-1}(z), x\}$ is a $D$-chain from $f(x)$ to $x$. Clearly, $\{x, f(x)\}$ is a $D$-chain from $x$ to $f(x)$. Thus, $x\backsimeq_{D}f(x)$, i.e., $f(x)\in [x]_{D}$. Similarly, it can be verified that $f^{n}(x)\in [x]_{D}$ for all $n\geq 2$. This implies that, for any $y\in [x]_{D}$, $f(y)\in [y]_{D}=[x]_{D}$, i.e., $f([x]_{D})\subset [x]_{D}$. \begin{thm} Let $(X,f)$ be a dynamical system on a compact uniform space $(X, \mathscr{U})$. If $(X, f)$ has the topological shadowing property, then $(\Omega(f), f|_{\Omega(f)})$ has the topological shadowing property. \end{thm} \noindent\textbf{Proof.}\quad For any fixed $E\in \mathscr{U}$, choose $\hat{E}\in\mathscr{U}$ such that $\hat{E}^3\subset E$. From the topological shadowing property of $(X, f)$, it follows that there exists an entourage $D\subset \hat{E}$ such that every $D$-pseudo-orbit can be $\hat{E}$-shadowed by some point in $X$. From the above discussion and the compactness of $\Omega(f)$, there exist $x_1, x_2, \ldots, x_{m}\in \Omega(f)$ such that $\Omega(f)=\bigcup_{i=1}^{m}[x_i]_{D}$ and $[x_1]_{D}, \ldots, [x_m]_{D}$ are mutually disjoint. Let $\hat{D}\subset D$ be the entourage from Lemma~\ref{l3} which refines the open cover $\{[x_1]_{D}, \dots, [x_m]_{D}\}$. {\bf Claim.} Every $\hat{D}$-pseudo-orbit in $\Omega(f)$ can be $E$-shadowed by some point in $\Omega(f)$. Let $\{z_{s}\}_{s=0}^{+\infty}$ be a $\hat{D}$-pseudo-orbit in $\Omega(f)$. Then, there exists $1\leq i\leq m$ such that $z_{0}\in [x_{i}]_{D}$. This, together with $f(z_{0})\in [x_i]_{D}$ and $(f(z_0), z_1)\in \hat{D}$, implies that $z_{1}\in [x_{i}]_{D}$, as $\hat{D}$ refines the open cover $\{[x_1]_{D}, \dots, [x_m]_{D}\}$. Similarly, it can be verified that $z_{n}\in [x_{i}]_{D}$ for all $n\geq 2$. Fix $s\in \mathbb{N}$. Clearly, $\{z_0, \dots, z_s\}$ is a $\hat{D}$-chain in $\Omega(f)\cap [x_i]_{D}$. Since $[x_i]_{D}$ is an equivalence class, there exists a $D$-chain $\{y_0, \dots, y_t\}$ from $z_s$ to $z_0$.
Let $\Gamma=\{z_0, \dots, z_{s-1}, y_0, \dots, y_{t-1}\}$. Then, $\hat{\Gamma}=\Gamma\Gamma\Gamma\cdots$ is a $D$-pseudo-orbit in $\Omega(f)$. Thus, there exists ${\bm z}_{s}\in X$ which $\hat{E}$-shadows $\hat{\Gamma}$. Put $k=s+t$; then $(f^{kn+j}({\bm z}_{s}),z_j)\in \hat{E}$ for all $n\geq 0$ and $0\leq j\leq s$. Choose $\hat{{\bm z}}_{s}\in \omega({\bm z}_{s}, f^{k})\subset \Omega (f)$ ($\omega({\bm z}_{s}, f^{k})\neq \emptyset$ by the compactness of $X$). Since $f$ is uniformly continuous, there exists an entourage $U\subset \hat{E}$ such that $(f\times f)^{j}(U)\subset \hat{E}$ holds for all $0\leq j\leq k$. Take $n_{0}\in \mathbb{N}$ such that $(f^{kn_0}({\bm z}_{s}), \hat{{\bm z}}_{s})\in U$. Then, for any $0\leq j\leq s$, $(f^{kn_0+j}({\bm z}_{s}), f^{j}(\hat{{\bm z}}_{s}))\in \hat{E}$ and $(f^{kn_0+j}({\bm z}_{s}), z_{j})\in \hat{E}$, implying that \begin{equation}\label{e-1} (f^{j}(\hat{{\bm z}}_{s}), z_j)\in \hat{E}^{2}. \end{equation} Choose a limit point $\hat{{\bm z}}$ of the sequence $\{\hat{{\bm z}}_{1}, \hat{{\bm z}}_{2}, \ldots, \hat{{\bm z}}_{s}, \ldots\}$. Clearly, $\hat{{\bm z}}\in \Omega(f)$. Similarly, by the uniform continuity of $f$ and (\ref{e-1}), it can be verified that, for any $s\in \mathbb{N}_0$, $(f^{s}(\hat{{\bm z}}), z_{s})\in E$. \hspace{\stretch{1}}$\blacksquare$ \section*{References}
\section{Introduction} \label{sec:introduction} The European Union (EU) energy policy has considered energy efficiency as one of its main targets \cite{dupont_defusing_2020}. Directive 2012/27/EU of 25 October 2012 set the overarching goal of accomplishing the 2020 targets by the member states \cite{langsdorf2011eu}. Directive 2012/27/EU was revised to boost the energy efficiency of existing buildings and those in the construction phase, and to re-emphasize the energy performance of new upcoming buildings \cite{himeur2020efficient}. From the technology aspect, the weight of factors accounting for the global market has been varying. Currently in 2020, control systems represent the largest technological portion, contributing 21\% of the global market. On the other hand, communication networks contribute with a share of 18\% after representing the largest portion of 20\% in 2012 \cite{Himeur2020IJIS-NILM}. Other technology aspects, including field equipment, sensors, software, and hardware, currently account for 44\% of the market, slightly dropping from 46\% in 2012 \cite{HIMEUR2020115872}. Several areas in Information and Communications Technology (ICT) were investigated by Heras and Zarli \cite{de2008smart} to unlock the potential for the improvement of energy efficiency \cite{Varlamis2020CCIS}. These ICT areas include interoperability, building automation, and tools design and simulation. However, the areas of smart metering, user awareness, and decision support have been largely considered in recent research \cite{moranreducing, hannus2010ict, de2008smart,ye2008ict, horner2016known}, which emphasizes the significance of ICT in these areas. Smart metering has proven promising and technically practical throughout a variety of projects concluded across Europe, the USA, and some other countries \cite{himeur2020improving}. By the means of information, rewards, and automation, Information Technology (IT) services can be integrated with metering infrastructure to enhance energy efficiency. Nevertheless, consumption awareness is held to be even more interesting for technology development. Smart metering data are made available through internet monitors, web applications, and/or mobile devices to provide energy information and feedback tools \cite{hannus2010ict,Sardianos2020IJIS-ERS}. From the data analysis aspect, more suitable behavioral interventions can be achieved by carefully examining the profile of a given consumer, in full detail, to infer better conclusions \cite{Sardianos2020GreenCom}. Therefore, this work proposes the micro-moment concept as a novel scheme to analyze the daily segments of energy consumption with time-based and contextual snapshots \cite{alsalemi2019ieeesystems, ALSALEMI2019classifier}. Given a specific point in time, the power consumption of an appliance, combined with other added information such as user preferences, constitutes an energy micro-moment. Fig. \ref{fig:mm} illustrates an example of an energy micro-moment. \begin{figure}[!ht] \centering \includegraphics[trim={0.8in 0.8in 0.9in 1in},clip,width=1\linewidth]{mm.pdf} \caption{An overview of a micro-moment example.} \label{fig:mm} \end{figure} In the field of health monitoring applications, Patel et al. \cite{patel2016wearable} have addressed developing cloud-based ML models through a wearable computing platform. The ML pipeline is deployed to continuously evaluate the model’s performance, such that a degradation in performance can be detected.
The model’s performance is evaluated with a recall and F1 score higher than 96\%, an overall recognition accuracy of 99.44\%, and a resting state model accuracy of 99.24\%. However, the accuracy is subject to limitations based on the constrained settings of the collected data. Bihis and Roychowdhury have adopted Microsoft Azure ML Studio as the cloud-based computing platform to implement a new generalized flow \cite{bihis2015generalized}. Through this generalized flow, the overall classification accuracy is maximized due to its ability to fulfill multi-class and binary classification functions. The work also proposes a customized generalized flow of unique modular representations. The proposed approach is tested on three public datasets in contrast with existing cutting-edge methods, and the results showed a classification accuracy of 78-97.5\%. Chourasiya et al. have also adopted the Microsoft Azure Machine Learning cloud, but for the classification of cyber-attacks \cite{roychowdhury2016ag}. The framework adopts a simple ML model with slight alteration, and by adjusting the multiclass decision forest model, the results show an accuracy of 96.33\%. In this paper, we focus on the data processing aspect of micro-moments, particularly when cloud platforms are utilized as the computation engine. In the literature and commercial market, there is a wide pool of cloud ML services. While their features vary, many cloud solutions include a free plan to allow researchers to get a taste of the power of cloud-based ML prior to committing any financial investments. The remainder of this paper is organized as follows. Section \ref{sec:data-collection-system} reviews the larger energy efficiency framework on which this work is based. Section \ref{sec:platforms} discusses the evaluated cloud platforms. Sections \ref{sec:datasets} and \ref{sec:algorthims} review the used datasets and the classification algorithms, respectively. Results are reported and discussed in Section \ref{sec:results}. The paper is concluded in Section \ref{sec:conclusion}. \section{Overview of the (EM)\textsuperscript{3} Framework} \label{sec:data-collection-system} The (EM)\textsuperscript{3} platform has been designed for two target user groups \cite{alsalemi_access_2020}: \begin{enumerate} \item Homeowners that wish to reduce their energy footprint by avoiding unnecessary energy consumption, and by taking advantage of better energy tariffs that promote off-peak hours appliance usage; and \item Office buildings that focus on the deactivation of unused appliances (e.g. monitors, lights, heating, and cooling devices, etc.) when weather conditions and room occupancy permit. \end{enumerate} The (EM)\textsuperscript{3} framework has been designed to support consumers' behavioral change via improving power consumption consciousness. It includes four main steps, defined as: collecting data (i.e. consumption footprints and ambient conditions) from different appliances in domestic buildings \cite{alsalemi_rtdpcc_2019, alsalemi2020micro}, processing consumption footprints in order to abstract energy micro-moments and detect abnormalities, deploying users' preference information to detect the similarity amongst them \cite{himeur2020novel, himeur2020applicability, himeur2020data, himeur2020robust}, and generating personalized recommendations to reduce energy wastage based on a rule-based recommender model \cite{sardianos_smartgreens_2019, sardianos2020rehab}.
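To make the last step concrete, the following is a minimal sketch of how a rule-based recommender of this kind could map a detected micro-moment to an action. The class codes follow the five micro-moment classes listed later in Section \ref{sec:datasets}, while the data structure, thresholds, availability gate, and message texts are illustrative assumptions rather than the deployed (EM)\textsuperscript{3} engine.

\begin{verbatim}
# A hedged sketch of a rule-based recommendation step; the MicroMoment
# fields and the availability gate are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

GOOD, SWITCH_ON, SWITCH_OFF, EXCESSIVE, OUTSIDE = range(5)

@dataclass
class MicroMoment:
    appliance: str   # e.g. "air conditioner"
    power_w: float   # instantaneous power draw in watts
    occupied: bool   # room occupancy flag
    label: int       # detected micro-moment class (0-4)

def recommend(mm: MicroMoment, user_available: bool) -> Optional[str]:
    """Return an action only when the user can act on it,
    to maximize the acceptance of the recommendation."""
    if not user_available:
        return None  # defer the recommendation
    if mm.label == EXCESSIVE:
        return f"Reduce usage of the {mm.appliance}: consumption is abnormal."
    if mm.label == OUTSIDE:
        return f"Switch off the {mm.appliance}: the room is unoccupied."
    return None      # the remaining classes require no action

print(recommend(MicroMoment("air conditioner", 1200.0, False, OUTSIDE), True))
\end{verbatim}

In this sketch, deferring recommendations until the user is available mirrors the design choice described above of weighing user availability to increase acceptance.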
Sensing devices play an essential role in capturing data and safely storing them in the platform database. To this end, in this article, we focus on investigating various architecture platforms attached to sensors \cite{sardianos2020model}. They are used for uploading wirelessly gathered data from different cubicles to the (EM)\textsuperscript{3} database server that is located at the Qatar University (QU) energy lab. A NoSQL CouchDB server database is deployed to store consumers' micro-moments and occupancy patterns, user preferences and properties, and energy efficiency recommendations and their rating scores \cite{alsalemi_rtdpcc_2019,sardianos2020data}. The NoSQL database type was chosen for its fast data retrieval and its flexibility in data structure when compared with traditional SQL-based databases. The recommendation engine is based on an algorithm that considers user preferences, energy goals, and availability in order to maximize the acceptance of a recommended action and increase the efficiency of the recommender system \cite{himeur2020appliance}. The algorithm is based on the extracted user habits that concern the repeated usage of devices at certain moments during the day \cite{Alsalemi2020sca}. These habits are extracted from the energy consumption data and the room occupancy information recorded in users’ (or office) recent history of activities \cite{AymanGPECOM2020}. Fig. \ref{fig:em3} portrays the overall architecture of the (EM)\textsuperscript{3} energy efficiency ecosystem. It is worth noting that the power consumption of the selected devices is considered small. \begin{figure}[!ht] \centering \includegraphics[trim={1.7in 1.7in 1.7in 1.7in},clip,width=\linewidth]{em3.pdf} \caption{Overview of the (EM)\textsuperscript{3} framework.} \label{fig:em3} \end{figure} The next section describes the selected cloud platforms used for the micro-moment classification phase. \section{Cloud Evaluation Platforms} \label{sec:platforms} In order to choose the most suitable platform for cloud classification, a number of criteria are set. First, the platform has to include an accessible interface that is familiar to data scientists, i.e. compatible with common ML programming languages, such as Python and R. Second, the platform shall have different computational power configurations to benchmark the best performance for the algorithm and dataset at hand. Third, from an economical point of view, the platform has to allow researchers to use its functionalities for free to some extent. Based on these criteria, we have selected the following four cloud artificial intelligence platforms: \begin{itemize} \item Amazon Web Services Sagemaker (AWSS)\footnote{https://aws.amazon.com/sagemaker} \item Google Colab (GCL)\footnote{https://Colab.research.google.com} \item Google Cloud Platform (GCP)\footnote{https://cloud.google.com/products\#ai-and-machine-learning} \item Microsoft Azure Machine Learning (MAML)\footnote{https://azure.microsoft.com/en-us/services/machine-learning} \end{itemize} The above platforms, AWSS, GCL, GCP, and MAML, share a common feature set, which includes Python (or Jupyter notebooks) support, a free plan with limited computational resources, the ability to visualize some of the outcomes of the code run, and the privilege of selecting from numerous computational configurations. Some of the platforms, namely GCP, accept exported TensorFlow models for algorithm execution. Google services, in particular, have grown substantially in both their scale and the number of services they provide.
One of these services is Compute Engine. GCP and GCL offer this service to allow customers to create Virtual Machines (VMs) via “Instances” on Google infrastructure to compute any amount of data. They promise the ability to run thousands of virtual Central Processing Units (vCPUs) quickly with a consistent performance \cite{noauthor_compute_nodate}. Moreover, they provide different machine types with various amounts of vCPUs and memory per vCPU to serve certain purposes \cite{noauthor_machine_nodate}. Not only that, but they also show the specifications of the utilized vCPUs, and in which machine types they exist \cite{noauthor_cpu_nodate}. Lastly, different NVIDIA GPUs are also highlighted, where they can be added to the created Instances along with the utilized vCPUs \cite{noauthor_gpus_nodate}. Naturally, the variety of options and the amount of heavy computational power they provide do not come for free. In other words, the more computational power (number of vCPUs, GPUs, and memory) is harnessed, the more it will cost the customer. Luckily, Google created Instances in a fashion where the customer can start and stop created Instances; hence, computational charges only accrue while the Instances are running. In addition to that, Google provides auto-scalability, where more Instances are utilized only when the traffic is high, and some Instances are released when the traffic is low. This feature can be harnessed when the customer creates a Managed Instance Group (another feature) for a certain application, where once the traffic is high, more Instances get utilized \cite{noauthor_using_nodate}. Similar to GCP and AWSS, MAML allows for Instances to be created to harness the VMs it provides. Moreover, autoscaling is also a feature that is available to increase the number of Instances when the demand is high, and reduce it when it is low to save customers from paying extra money \cite{rboucher_autoscale_nodate}. This requires the structuring of extra rules for the service to know when to incorporate extra Instances. It is worth mentioning that although the platforms were tested using Jupyter notebooks, they also provide support for Python through an existing Software Development Kit (SDK) for MAML, and APIs and libraries for GCP. In fact, MAML set October 9th, 2020 as the day they will retire Azure Notebooks and support plugins to be used with Jupyter notebooks \cite{noauthor_microsoft_nodate}. It is worth noting that, from the user-experience point of view, it was slightly easier to get the first model to run on GCP compared with its peer MAML. Moreover, for free-tier users, it is easier to create and delete Instances on GCP. Although both allow for a free trial phase for a whole year, GCP grants users 300 USD to be used in this year, while MAML grants free-tier users 200 USD to be used within the first 30 days of this trial, which is also a plus point for GCP when the customer is an individual, a small business, or even a start-up company. Both cloud platforms require a billing card to be registered to ensure that the customer is an authentic user, and to avoid abuses from any potential customers \cite{noauthor_pricing_nodate}. This similarly applies to GCL. The discussed aspects of the chosen platforms are summarized in Table \ref{tab:scalability}.
\begin{table}[!ht] \centering \caption{Cloud platforms scalability comparison} \label{tab:scalability} \begin{tabular}{|>{\centering\arraybackslash}m{1.2cm}|>{\centering\arraybackslash}m{1.4cm}|>{\centering\arraybackslash}m{1.3cm}|>{\centering\arraybackslash}m{1.35cm}|>{\centering\arraybackslash}m{1.45cm}|} \hline Platform & Supports Big Data & CPU Addition & GPU Addition & Scalability \\ \hline GCP & Yes & Yes & Yes & Yes \\ \hline MAML & Yes & Yes & Yes & Yes \\ \hline AWSS & Yes & Yes & Yes & Yes \\ \hline GCL & Yes & Yes & Yes & No \\ \hline \end{tabular} \end{table} In the next section, a description of the utilized datasets for micro-moment classification is provided; these datasets are evaluated within the selected cloud platforms. \section{Datasets Overview} \label{sec:datasets} In order to execute a number of classification algorithms to identify micro-moments, relevant datasets are required. They must include appliance-level data points in a household environment. In this work, we have selected the following datasets for cloud classification purposes: \begin{itemize} \item \textbf{SimDataset}: The virtual energy dataset (SimDataset) is generated by our computer simulator, which produces appliance-related datasets based on real data recordings \cite{ramadan_simulator_2019, ALSALEMI2019classifier}. By combining real smart meter data and periodic energy consumption patterns, we simulated sensible domestic electricity consumption scenarios with the aid of k-means clustering, the a-priori extraction algorithm, and the innovative use of micro-moments. \item \textbf{DRED}: The Dutch Residential Energy Dataset (DRED) \cite{uttama2015loced} contains electricity use measurements, occupancy trends, and ambient information of one household in the Netherlands. Sensor systems have been installed to measure the aggregated energy usage and the power consumption of appliances. In addition, 12 separate domestic appliances were sub-metered at sampling intervals of 1 min, while a 1 Hz sampling rate was used to capture aggregated consumption. \item \textbf{QUD}: A specific anomaly detection dataset with its ground-truth labels was created on the basis of an experimental setup undertaken at the QU Lab, and is named the Qatar University Dataset (QUD) \cite{alsalemi_access_2020, alsalemi2020micro}. A real-time micro-moment facility has been set up to gather reliable data on energy use. The QUD is a collection of readings from different mounted devices (e.g. light lamp, air conditioning, refrigerator, and computer) coupled with quantitative details, such as temperature, humidity, ambient light intensity, and space occupation \cite{himeur2020anomaly}. To the best of the researchers' understanding, QUD is the first dataset in the Middle East in which a normal 240V voltage is used, with variable recording durations ranging from 3 seconds to 3 hours \cite{himeur2020building}. \end{itemize} With the aforementioned datasets, varying from simulated, small-scale, and large-scale, cloud artificial intelligence platforms will be utilized to classify those datasets into the following micro-moment classes \cite{ALSALEMI2019classifier} (a labeling sketch follows the list): \begin{itemize} \item 0: good consumption \item 1: switch the appliance on \item 2: switch the appliance off \item 3: excessive power consumption \item 4: consumption of power while outside room \end{itemize}
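As an illustration of how such class labels can arise from raw readings, the following is a minimal sketch of a threshold-based labeling rule. The on/excess thresholds and the two-reading input format are assumptions made for illustration; they are not the annotation procedures of the datasets above.

\begin{verbatim}
# A hedged sketch of threshold-based micro-moment labeling; the
# thresholds (in watts) are illustrative assumptions.
def micro_moment(prev_power, power, occupied,
                 on_threshold=5.0, excess_threshold=1000.0):
    """Label one snapshot from consecutive power readings and occupancy."""
    if prev_power <= on_threshold < power:
        return 1  # the appliance was switched on
    if power <= on_threshold < prev_power:
        return 2  # the appliance was switched off
    if power > on_threshold and not occupied:
        return 4  # consumption of power while outside the room
    if power > excess_threshold:
        return 3  # excessive power consumption
    return 0      # good consumption

readings = [(0.0, 60.0, True), (60.0, 55.0, False), (55.0, 0.0, True)]
print([micro_moment(p0, p1, occ) for (p0, p1, occ) in readings])  # [1, 4, 2]
\end{verbatim}

Next, we discuss the employed ML algorithms to further enhance the understanding of the obtained results.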
\section{Implemented Algorithms} \label{sec:algorthims} In this work, with the selected datasets and cloud platforms, a set of common yet powerful classification algorithms are employed, namely Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and Deep Neural Networks (DNN). The SVM classification model is based on the principle of structural risk minimization. It seeks to obtain an optimal separating hyperplane, which keeps the features of the same set of appliances close together. When the feature trends cannot be segregated linearly in the initial space, the data can be converted into a new space with higher dimensions by utilizing kernel functions. In addition, the KNN algorithm is used to distinguish device function characteristics; this algorithm measures the distance of a candidate vector element to identify the $K$ nearest neighbors. Their labels are analyzed and used to influence the class label of the candidate feature vector based on a majority vote, thus assigning a class label to the respective appliance. Additionally, a DNN algorithm is used for the classification task. Generally speaking, deep learning is a sub-discipline of ML focused on the concept of learning various degrees of representation by the creation of a hierarchy of characteristics extracted by stacked layers. Keeping this in mind, the DNN system is based on the extension of conventional neural networks by adding additional hidden layers into the network layout between the input and output layers. This is done in order to provide a strong capacity to work with dynamic and non-linear grouping issues. As a consequence, DNN has attracted the interest of scientists over the last few years on the ground that it can provide better efficiency than many other current approaches, in particular for regression, grouping, simulation, and forecasting goals. Under this framework, since non-linearly separable data are being handled, deep learning is highly recommended for this problem. Furthermore, the efficiency of a deep learning algorithm is typically improved by growing the volume of data used for training. The above algorithms are easily exploited on the selected cloud platforms, as Python supports various ML algorithms and these platforms employ Python-based scripts. The yielded results, using the selected datasets, are reported and discussed next. The algorithms are implemented using Python with the help of both SciKit Learn and TensorFlow. \section{Results and Discussion} \label{sec:results} This section elaborates on the results of the cloud classification benchmark study. We highlight the performance of each evaluated cloud platform with respect to both the used algorithm and the utilized dataset. Following that, light is shed on the limitations and future prospects of cloud artificial intelligence.
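Before turning to the measurements, the following is a minimal sketch of the kind of timing harness such a benchmark relies on. The synthetic feature matrix, the model hyperparameters, and the split ratio are assumptions standing in for the actual datasets and configurations, not the exact benchmark code.

\begin{verbatim}
# A hedged sketch of timing training and testing phases with
# scikit-learn; the random data stands in for a loaded dataset.
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X = np.random.rand(10000, 4)             # e.g. power, occupancy, time features
y = np.random.randint(0, 5, size=10000)  # five micro-moment classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [("SVM", SVC()), ("KNN", KNeighborsClassifier())]:
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)                # training phase
    t1 = time.perf_counter()
    model.predict(X_te)                  # testing phase
    t2 = time.perf_counter()
    print(f"{name}: train {t1 - t0:.4f}s, test {t2 - t1:.4f}s")
\end{verbatim}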
\begin{table*}[ht] \centering \caption{Cloud classification performance} \label{tab:results} \begin{tabular} {|m{4cm} |>{\centering\arraybackslash}m{1.8cm} |>{\centering\arraybackslash}m{1.8cm} |>{\centering\arraybackslash}m{1.8cm} |>{\centering\arraybackslash}m{1.8cm} |>{\centering\arraybackslash}m{1.8cm} |>{\centering\arraybackslash}m{1.8cm}|} \hline\hline \multicolumn{7}{c}{\textbf{DRED}} \\ \hline\hline ML Algorithm & \multicolumn{2}{c|}{SVM} & \multicolumn{2}{c|}{KNN} & \multicolumn{2}{c|}{DNN}\\ \hline \backslashbox{Platform}{Performance} & Training Time (s) & Testing Time (s) & Training Time (s) & Testing Time (s) & Training Time (s) & Testing Time (s) \\ \hline MAML & 149.2108 & 12.7986 & 30.6957 & 3.3313 & 515.0723 & 0.2458 \\ \hline GCP & 134.7939 & 10.3981 & 30.0155 & 3.2382 & 1003.7682 & 0.6592 \\ \hline GCL & 84.5881 & 9.2814 & 28.4508 & 3.4481 & 418.3415 & 0.5036 \\ \hline AWSS & 98.8008 & 8.7878 & 31.4675 & 3.2993 & 1102.1146 & 0.7518 \\ \hline\hline \multicolumn{7}{c}{\textbf{QUD}} \\ \hline\hline ML Algorithm & \multicolumn{2}{c|}{SVM} & \multicolumn{2}{c|}{KNN} & \multicolumn{2}{c|}{DNN} \\ \hline \backslashbox{Platform}{Performance} & Training Time (s) & Testing Time (s) & Training Time (s) & Testing Time (s) & Training Time (s) & Testing Time (s) \\ \hline MAML & 43.2996 & 4.1559 & 2.1384 & 0.4313 & 142.6328 & 0.0736 \\ \hline GCP & 38.5514 & 3.7396 & 2.2601 & 0.4481 & 291.2316 & 0.2148 \\ \hline GCL & 36.2452 & 3.1886 & 2.7495 & 0.4975 & 122.2153 & 0.1658 \\ \hline AWSS & 34.0591 & 2.9403 & 2.1803 & 0.4050 & 313.4656 & 0.2229 \\ \hline\hline \multicolumn{7}{c}{\textbf{SimDataset}} \\ \hline\hline ML Algorithm & \multicolumn{2}{c|}{SVM} & \multicolumn{2}{c|}{KNN} & \multicolumn{2}{c|}{DNN}\\ \hline \backslashbox{Platform}{Performance} & Training Time (s) & Testing Time (s) & Training Time (s) & Testing Time (s) & Training Time (s) & Testing Time (s) \\ \hline MAML & 606.0671 & 28.3848 & 18.6040 & 2.0032 & 318.0821 & 0.1608 \\ \hline GCP & 535.6282 & 25.7717 & 17.7830 & 1.8951 & 643.8005 & 0.4350 \\ \hline GCL & 591.2706 & 23.4997 & 15.5310 & 1.9957 & 281.6197 & 0.3288 \\ \hline AWSS & 572.9653 & 21.4601 & 18.7907 & 2.0166 & 705.4971 & 0.4254 \\ \hline \end{tabular} \end{table*} Table \ref{tab:results} summarizes the classification performance according to the used platform, employed algorithm, and utilized dataset. It is evident that the ML algorithms exhibit varying performance. However, classification on the cloud provides high performance without burdening local hardware. The reported results are averages over three computation trials. Also, for each algorithm, accuracy and F-score values were similar across platforms and are excluded to keep the focus on timing performance. The used cloud configurations are depicted in Table \ref{tab:conf}. \begin{table} \centering \caption{Used cloud platform configurations} \label{tab:conf} \begin{tabular}{|l|l|} \hline Platform & Configuration \\ \hline MAML & Azure-Standard-D12-v2-28GB \\ \hline GCP & GCP-n1-highcpu-4-3.60GB \\ \hline GCL & GCL-2-core Xeon-2.2GHz-13GB \\ \hline AWSS & AWSS-ml.t3.medium-2vCPU-2GB \\ \hline \end{tabular} \end{table} It is worth mentioning that the evaluated platforms exhibited comparable performance under the free plan option. Both MAML and GCP provided excellent performance, especially for testing. On the other hand, DNN required considerably longer training times.
This can be explained by the nature of the neural network, whose long training time is compensated by fast testing and deployment phases. Overall, cloud classification presents a promising prospect for ML, especially when local hardware is insufficient. Embedded systems and Internet of Things (IoT) devices can be considered major users of such platforms. Also, when highly intensive computations are needed, cloud platforms offer a convenient and economical solution. \section{Conclusions} \label{sec:conclusion} In this paper, common cloud artificial intelligence platforms are benchmarked and compared for micro-moment energy data classification. The AWSS, GCP, GCL, and MAML platforms are tested on the DRED, SimDataset, and QUD datasets. The KNN, DNN, and SVM classifiers have been employed. Strong and relatively close performance has been observed across the cloud platforms. Yet, the nature of some algorithms, such as DNN, limits the training performance. Future work includes evaluating more platforms and integrating with the energy efficiency (EM)\textsuperscript{3} framework. \section*{Acknowledgment} \label{acknowledgements} This paper was made possible by National Priorities Research Program (NPRP) grant No. 10-0130-170288 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
\section{Introduction} Quasiclassical methods, developed several years ago and used successfully to describe non-equilibrium states in superconductors and superfluids (see, e.g., \cite{schmid1975,larkin1976,eckern1981,eckern1981a}), have recently been extended to meso- and nanoscopic systems. This became possible after the formulation of boundary conditions, including hybrid structures and interface roughness \cite{shelankov2000,luck2001,luck2004}. In recent developments we focussed on spin-effects and spatial confinement, e.g., concerning the spin-Hall effect in a 2D electron gas \cite{raimondi2006}, and spin relaxation in narrow wires in the presence of spin-orbit coupling \cite{schwab2006}. Within the quasiclassical approach, it is also possible to study the influence of disorder and Coulomb interaction on the same footing \cite{kamenev1999,schwab2003}, which is manageable since the method works on an intermediate level: microscopic details of the wave-functions, on the scale of the interatomic distance, are integrated out, leaving equations of motion which can be solved with sufficient accuracy. As an illustrative example, we apply in this paper the quasiclassical method to determine the equilibrium Green's function, and hence the density of states (DoS), of the (spinless) 1D model known as Luttinger liquid \cite{tomonaga1950,luttinger1963, haldane1981}. This model contains two fermionic branches, linearized near the Fermi points, with $v_F$ and $N_0 = (\pi v_F)^{-1}$ the bare Fermi velocity and DoS. Only interaction processes with small ($\ll k_F$) momentum transfer are considered: standard parameters are $g_4$ and $g_2$, for scattering processes involving only one or both branches, respectively. The dimensionless quantities $\gamma_4 = g_4/2\pi v_F$ and $\gamma_2 = g_2/2\pi v_F$ are useful. The spectrum of charge fluctuations is linear, $\omega (q) = v |q|$; the renormalized velocity, $v$, and the parameter $K$, given by \begin{equation} \label{vK} v = v_F \left[ (1 + \gamma_4)^2 - \gamma_2^2 \right]^{1/2} \; , \;\; K = \left[ \frac{1+\gamma_4-\gamma_2}{1+\gamma_4+\gamma_2} \right]^{1/2} \; , \end{equation} are often called Luttinger liquid parameters. In the following Sect.\ 2 we discuss the solution of the equation of motion for the quasiclassical Green's function in the presence of a fluctuating potential, the latter representing the fermion-fermion interaction. The Keldysh technique is employed throughout. Considering the average -- with respect to the fluctuating field -- of the Green's function and an analogy to the $P(E)$ theory, the field fluctuations are related to an effective impedance of the Luttinger liquid, and finally to the DoS (Sect.\ 3). In Sect.\ 4 we consider an insulating boundary, i.e.\ a reflecting wall, and determine the DoS boundary exponent. A concluding discussion is given in Sect.\ 5. \section{Quasiclassical equation of motion and its solution} In essence, the Keldysh technique \cite{keldysh1965,rammer1986} differs from the zero-temperature and the Matsubara approach by employing time-ordering along a contour which runs from $-\infty$ to $+\infty$ and back to $-\infty$. Rewriting the theory in terms of standard ($-\infty ... 
+\infty$) integrations, a 2$\times$2 matrix structure results; e.g., the Green's function can be cast into the form \begin{equation} \label{G} \check{G} = \left\{ \begin{array}{cc} G^R & G^K \\ 0 & G^A \end{array} \right\} \; . \end{equation} Accordingly, considering some general 2-particle interaction $V_0$ and decoupling this interaction using the Hubbard-Stratonovich transformation within the path-integral formalism, two auxiliary fields are necessary. As a result, the Green's function can be expressed as an average of the free-particle Green's function in the presence of fluctuating potentials as follows (see, e.g., Ref.\ \cite{kamenev1999}): \begin{equation} \label{av} \check{G} = \langle \check{G}_0 [\Phi] \, {\rm e}^{{\rm Tr}\ln (\check{1} + \check{G}_0 \check{\Phi} )} \rangle_{0, \Phi } = \langle \check{G}_0 [\Phi] \rangle_\Phi \end{equation} Here $\check{G}_0$ is the free-particle Green's function, and $\check{G}_0 ^{-1}[\Phi] = \check{G}_0^{-1} + \check{\Phi}$, where $\check{\Phi} = \phi_1 \check{\sigma}_0 + \phi_2 \check{\sigma}_1$; $\check{\sigma}_0$ and $\check{\sigma}_1$ denote the unit and the first Pauli matrix, and $\phi_1$ and $\phi_2$ correspond to the two potentials mentioned above. Furthermore, $\langle ...\rangle_{0,\Phi}$ denotes an average which is Gaussian by construction, and defined such that \begin{equation} \label{phi} \langle \phi_i (xt) \phi_j (x^\prime t^\prime) \rangle_{0, \Phi} = \frac{i}{2} \left\{ \begin{array}{cc} V_0^K & V_0^R \\ V_0^A & 0 \end{array} \right\}_{ij} (xt,x^\prime t^\prime) \end{equation} where for clarity space ($x$) and time ($t$) arguments are included. The average $\langle... \rangle_\Phi$, defined through Eqs.~(\ref{av}) and (\ref{phi}), can be non-Gaussian. (The spin will be suppressed throughout this paper.) Note that a ``real'' electrical potential can be included by the replacement $\phi_1 \to \phi_1 + e\phi_{\rm ext}$ in $\check{G}_0 [\Phi]$ of Eq.\ (\ref{av}); the particle's charge is $-e$. In the next step, we consider the quasiclassical approximation, i.e.\ we evaluate the difference $\check{G}_0^{-1} [\Phi] \check{G}_0 [\Phi] - \check{G}_0 [\Phi] \check{G}_0^{-1} [\Phi] = 0$, keeping only the leading terms in the gradients with respect to the (spatial) center-of-mass coordinate (which we again denote by $x$). The equation is then integrated with respect to the magnitude of the momentum. The result is \begin{equation} \label{equ} \left[ \frac{\partial}{\partial t} + \frac{\partial}{\partial t^\prime} + v_F \hat{p}\cdot\nabla_x \right] \check{g}_{tt^\prime}(\hat{p}x) = i [\check{\Phi}, \check{g}] \end{equation} where $\hat{p}$ is the direction of the center-of-mass momentum, and \begin{equation} \label{comm} [\check{\Phi}, \check{g}] \equiv \check{\Phi}(xt)\check{g}_{tt^\prime}(\hat{p}x) - \check{g}_{tt^\prime}(\hat{p}x) \check{\Phi}(xt^\prime) \; . \end{equation} The quantity $\check{g}_{tt^\prime}(\hat{p}x)$ is the so-called quasiclassical Green's function, \begin{equation} \check g_{tt'}(\hat p x) = \frac{i}{\pi} \int d \xi \check G_{tt'}({\bf p}, x), \quad \xi = p^2/2m - \mu ;\end{equation} a major advantage compared to the full Green's function is the normalization, $\check{g}\check{g} = \check{1}$. It should be noted that in the quasiclassical approximation, only long-wavelength contributions of the fluctuating fields are taken into account -- which is adequate for the problem addressed below. In the 1D case, $\nabla_x \to \partial_x$, and $\hat{p} = \pm$.
For the homogeneous equilibrium case, the solution is \begin{equation} \label{gnull} \check{g}_0 (\epsilon) = \left\{ \begin{array}{cc} 1 & 2 F(\epsilon) \\ 0 & -1 \end{array} \right\} \end{equation} where the Fourier transformed ($t - t^\prime \to \epsilon$) quantity is displayed, and $F(\epsilon) = \tanh (\beta\epsilon /2)$, $\beta = 1/k_B T$. A formal solution of Eq.\ (\ref{equ}) is found with the ansatz \begin{equation} \label{ansatz} \check{g}_{tt^\prime}(\hat{p}x) = {\rm e}^{i\check{\varphi}(xt)} \, \check{g}_{0,t-t^\prime} \, {\rm e}^{-i\check{\varphi}(xt^\prime)} \end{equation} provided \begin{equation} \label{solution} (\partial_t + v_F \hat{p} \partial_x ) \check{\varphi}(xt) \equiv D_{xt} \check{\varphi}(xt) = \check{\Phi} (xt) \; . \end{equation} Evidently, $\check{\varphi}$ has the same matrix structure as $\check{\Phi}$, namely $\check{\varphi} = \varphi_1 \check{\sigma}_0 + \varphi_2 \check{\sigma}_1$. For example, we obtain explicitly from (\ref{ansatz}): \begin{equation} g^R (tt^\prime) = \delta (t-t^\prime) -2i F (tt^\prime) \cos\varphi_2 \sin\varphi_2^\prime \, {\rm e}^{i(\varphi_1 -\varphi_1^\prime)} \end{equation} \begin{equation} g^K (tt^\prime) = 2 F (tt^\prime) \cos\varphi_2 \cos\varphi_2^\prime \, {\rm e}^{i(\varphi_1 -\varphi_1^\prime)} \end{equation} where $\varphi_j \equiv \varphi_j (xt)$ and $\varphi_j^\prime \equiv \varphi_j (xt^\prime)$ for brevity. We will discuss below that, for the Luttinger model, it is sufficient to assume that the phases are Gaussian distributed. It is then straightforward to determine $\langle \check{g} \rangle_\Phi$; for example, we obtain\footnote{The quantities discussed from here on depend on the time difference, $t-t^\prime$, but we nevertheless use the notation $tt^\prime$ for brevity.} \begin{eqnarray} \langle \check{g}^R \rangle_\Phi (tt^\prime) & = & \delta (t-t^\prime) + F (tt^\prime) \, {\rm e}^{- J_0 (tt^\prime)} \, [ \sinh (J_1(tt^\prime)) - \sinh (J_2(tt^\prime)) ] \\ & = & - \langle \check{g}^A \rangle_\Phi (t^\prime t) \end{eqnarray} where \begin{eqnarray} J_0 (tt^\prime) = & \langle (\varphi_1 - \varphi_1^\prime)^2 \rangle_\Phi /2 & = J^K (0) - J^K (tt^\prime) \; , \\ J_1 (tt^\prime) = & \langle (\varphi_1 - \varphi_1^\prime) (\varphi_2 + \varphi_2^\prime) \rangle_\Phi & = J^R (tt^\prime) - J^A (tt^\prime) \; ,\\ J_2 (tt^\prime) = & \langle (\varphi_1 - \varphi_1^\prime) (\varphi_2 - \varphi_2^\prime) \rangle_\Phi & = - J^R (tt^\prime) - J^A (tt^\prime) \; . \end{eqnarray} Note that $J_0$ and $J_2$ are even, and $F$ and $J_1$ are odd under time reversal, $t-t^\prime \to t^\prime -t$. Then we define $J=-J_0+J_1$, such that \begin{equation} \label{J} J(t) = \int \frac{d\omega}{2\pi} \left[ J^R(\omega) - J^A (\omega) \right] \left[ B(\omega) + 1 \right] \left( {\rm e}^{-i\omega t} -1 \right) \end{equation} where $B(\omega) = \coth (\beta\omega/2)$, and we used $J^K (\omega) = [ J^R(\omega) - J^A (\omega) ] B (\omega)$. From the relation $J^R (tt^\prime) = J^A (t^\prime t)$ we obtain $J^A (\omega) = J^R (-\omega)$, implying that $J^R(\omega)-J^A(\omega)$ is odd in frequency.
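The Gaussian-average step used here rests on the identity $\langle {\rm e}^{i(\varphi - \varphi^\prime)} \rangle = {\rm e}^{-\langle (\varphi - \varphi^\prime)^2 \rangle /2}$ for zero-mean Gaussian phases, which generates the $J$-functions above. A minimal Monte Carlo sketch (the covariance values are arbitrary test inputs, not model output):
\begin{verbatim}
# Monte Carlo check of <exp[i(phi - phi')]> = exp[-<(phi - phi')^2>/2]
import numpy as np

rng = np.random.default_rng(1)
cov = [[1.0, 0.6],   # <phi phi>,  <phi phi'>
       [0.6, 1.0]]   # <phi' phi>, <phi' phi'>
phi, phip = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000).T

mc = np.mean(np.exp(1j * (phi - phip)))
exact = np.exp(-0.5 * (cov[0][0] + cov[1][1] - 2 * cov[0][1]))
print(mc.real, exact)   # agree to ~1e-3; imaginary part ~0 by symmetry
\end{verbatim}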
Combining the above relations, the (normalized) DoS is given by \begin{eqnarray} \label{DoS} N(\epsilon)\, = \, {\rm Re} g^R(\epsilon) & = & 1 + \pi \int dt \, {\rm e}^{i\epsilon t} \, \{ F(tt^\prime) [P(tt^\prime) - P(t^\prime t)]\}_{t^\prime=0} \nonumber \\ & = & 1 + \frac{1}{2} \int d\omega \, F(\epsilon - \omega) [P(\omega) - P(-\omega)] \end{eqnarray} with \begin{equation} \label{P} P(\omega) = \frac{1}{2\pi} \int dt \, {\rm e}^{J(t) +i\omega t} .\end{equation} Note that $P(t=0) = 1/2\pi$, i.e.\ $\int d\omega P(\omega) = 1$. In order to complete the argument, the phase fluctuations are easily related to the potential fluctuations, according to the relation (compare Eq.\ (\ref{solution})) \begin{equation} J^R (t) = \langle \varphi_1 (xt) \varphi_2 (x0) \rangle_\Phi = [ D_{xt}^{-1} D_{x^\prime t^\prime}^{-1} \langle \phi_1 (xt) \phi_2 (x^\prime 0) \rangle_\Phi ]_{x=x^\prime,t^\prime =0} \; , \end{equation} and the potential fluctuation can be related to the interaction. This last step, however, requires some discussion. First, note that expanding the exponent in Eq.\ (\ref{av}) up to second order, neglecting higher order terms, corresponds to the random phase approximation (RPA). Given this (Gaussian) approximation, the procedure for performing the phase average, outlined above, is permitted. As an important point, however, {\em the RPA is known to be exact for calculating the density response of the Luttinger model}, and hence the effective interaction, which concludes the argument: the potential fluctuations are given by \begin{equation} \langle \phi_1 (xt) \phi_2 (x^\prime t^\prime) \rangle_\Phi = \frac{i}{2} V^R (xt,x^\prime t^\prime) \end{equation} where $V^R (xt,x^\prime t^\prime)$ is the screened (RPA) interaction. Thus $J^R = (i/2) D^{-1} D^{-1} V^R$ in an obvious short-hand notation. \section{Effective impedance and bulk DoS} The approach described in the preceding section is very similar to the theory of charge tunneling in ultrasmall junctions, as reviewed, e.g., in \cite{ingold1992}, which is also known as $P(E)$ theory. (See also \cite{sassetti1994} for related articles.) In this context, the quantity $P(E)$ characterizes the influence of the environment on the tunneling between the two electrodes. For example, the forward tunneling rate is found to be given by \begin{equation} \label{forward} {\vec \Gamma} (V) = (e^2 R_T)^{-1} \int dE dE^\prime \, f(E) [ 1 - f(E^\prime + eV) ] \, P(E - E^\prime ) \end{equation} where $V$ is the voltage, $R_T$ the tunneling resistance, and $f(E)$ the Fermi function. Note the detailed balance condition $P(-E) = {\rm e}^{-\beta E} \, P(E)$. Physically, $P(E)$ describes the probability of exchanging the energy $E$ with the environment \cite{ingold1992}. On the other hand, in the present context, consider the tunneling rate from the interacting one-dimensional wire into a non-interacting lead, which clearly is given by \begin{equation} \label{forwardb} {\vec \Gamma} (V) = (e^2 R_T)^{-1} \int d\epsilon f(\epsilon)N(\epsilon)[ 1 - f(\epsilon + eV) ] \end{equation} with $N(\epsilon)$ the normalized DoS as given in Eq.\ (\ref{DoS}). Using this equation as well as the properties of $P(\omega)$ as described in the previous section and the detailed balance condition, it is straightforward to confirm that Eqs.\ ({\ref{forward}}) and (\ref{forwardb}) coincide, provided $P(E)$ from the tunneling theory is identified with $P(\omega)$. 
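This coincidence can also be confirmed numerically. The sketch below assumes a toy $P(E)$ with the required detailed balance and normalization, builds $N(\epsilon)$ from Eq.\ (\ref{DoS}), and compares the two rates; energies are in units of the temperature, and the prefactor $1/e^2 R_T$ is dropped. The envelope shape of the toy $P(E)$ is an arbitrary choice.
\begin{verbatim}
# Numerical check that Eqs. (forward) and (forwardb) give the same rate.
import numpy as np

beta, eV = 1.0, 0.7
E = np.linspace(-30.0, 30.0, 3001)
dE = E[1] - E[0]
integ = lambda y: y.sum() * dE                       # simple grid quadrature

f = lambda x: 0.5 * (1.0 - np.tanh(0.5 * beta * x))  # Fermi function
F = lambda x: np.tanh(0.5 * beta * x)

# toy P(E): symmetric envelope times exp(beta E/2) enforces detailed balance
P = np.exp(-np.abs(E) + 0.5 * beta * E)
P /= integ(P)                                        # normalization, int P = 1

# Eq. (DoS): N(eps) = 1 + (1/2) int dw F(eps - w) [P(w) - P(-w)]
N = 1.0 + 0.5 * np.array([integ(F(e - E) * (P - P[::-1])) for e in E])

# Eq. (forward): double integral over E, E'
PofE = lambda x: np.interp(x, E, P, left=0.0, right=0.0)
rate_PE = integ(np.array([(1.0 - f(Ep + eV)) * integ(f(E) * PofE(E - Ep))
                          for Ep in E]))

# Eq. (forwardb): single integral with the DoS
rate_dos = integ(f(E) * N * (1.0 - f(E + eV)))

print(rate_PE, rate_dos)   # the two rates agree at grid accuracy
\end{verbatim}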
The reason for the complete correspondence between these two quantities is apparent on physical grounds, since the quantity $P(\omega)$ characterizing the Luttinger liquid arises from the auxiliary potential fluctuations due to the fermion-fermion interaction. In the zero-temperature limit, one easily finds \begin{equation} \label{T=0} N(\epsilon) = \int_0^{|\epsilon|} d\omega P(\omega) \quad\quad (T=0) \; . \end{equation} Exploiting the analogy further, we identify \begin{equation} \label{Z} J^R(\omega)-J^A(\omega) \equiv \frac{2\pi}{\omega} \frac{{\rm Re}Z(\omega)}{R_K} \end{equation} where $R_K = h/e^2 = 2\pi\hbar /e^2$ is the Klitzing constant, and we may call $Z(\omega)$ the effective impedance of the Luttinger liquid. For the simple example of an ohmic impedance with a high-frequency cut-off \begin{equation} \label{ohmic} \frac{{\rm Re}Z(\omega)}{R_K} = \frac{1}{g} \frac{1}{1 + (\omega/\omega_R)^2} \end{equation} the result is \cite{ingold1992} \begin{equation} P(\omega) = \frac{{\rm e}^{-2\gamma /g}}{\Gamma (2/g)} \frac{1}{\omega} \left[ \frac{\omega}{\omega_R} \right]^{2/g} \quad\quad (T=0, \, 0<\omega < \omega_R) \end{equation} where $\gamma$ is Euler's constant. Thus the DoS is found to vanish at the Fermi energy, $N(\epsilon) \sim |\epsilon|^{2/g}$, where the exponent is given by \begin{equation} \label{2overg} \frac{2}{g} = 2 \cdot \left\{ \frac{\omega}{2\pi} [J^R (\omega)-J^A (\omega)]\right\}_{\omega\to 0} = -i \left\{ \frac{\omega}{\pi} \int \frac{dq}{2\pi} \frac{V^R(q\omega)}{(-i\omega +0 + i q v_F)^2} \right\}_{\omega\to 0} \end{equation} where $V^R(q\omega)$ is the screened retarded interaction. In order to determine this quantity, consider, in the 2$\times$2 matrix structure of right- and left-moving particles, denoted by ``$+$'' and ``$-$'', the bare interaction and the non-interacting response function: \begin{equation} \label{V0} \hat{V}_0 = \left\{ \begin{array}{cc} g_4 & g_2 \\ g_2 & g_4 \end{array} \right\} \; , \;\; \hat{\chi}_0 = \frac{1}{2\pi v_F} \left\{ \begin{array}{cc} \frac{q v_F}{-\omega - i0 + q v_F} & 0 \\ 0 & \frac{q v_F}{+\omega + i0 + q v_F} \end{array} \right\} \; , \end{equation} as well as the RPA equation $\hat{V}^R = (\hat{1} + \hat{V}_0 \hat{\chi}_0 )^{-1} \hat{V}_0$. This matrix structure implies that we have to introduce a branch index for the fluctuating fields, which we suppressed up to now: $\check \varphi, \check \Phi \to \check \varphi_\pm, \check \Phi_\pm $. The density of states of the right-moving particles, for example, is determined from the $\check\varphi_+$-$\check\varphi_+$ correlation functions. The relevant quantity, $\hat{V}^R_{++}$, is straightforwardly determined and inserted into Eq.\ (\ref{2overg}); the $q$-integral can be done by contour integration, and we obtain the standard \cite{larkin1974} result:\footnote{See \cite{oreg1996} for a recent summary. Note that often the parameter $K$ is denoted by $g$, which we avoid here since we prefer to use the latter symbol for the dimensionless conductance, see Eq.\ (\ref{ohmic}), in accordance with the $P(E)$ theory \cite{ingold1992}.} \begin{equation} \label{exponent1} 2/g = (K + K^{-1} -2)/2 \; . \end{equation} \section{DoS boundary exponent} Near a boundary or an interface the quasiclassical approximation is insufficient, and the quasiclassical propagators pointing into or out of the boundary (or interface) have to be connected by boundary conditions \cite{zaitsev1984,shelankov2000}.
The boundary condition is particularly simple for spinless fermions in one dimension at an impenetrable wall: obviously $\check g_{tt'}(\hat p x) = \check g_{tt'}(-\hat p x )$, to ensure that there is no current through the wall. In the following we assume that the wall is located at $x=0$, and that the particles move in the half-space $x<0$. Instead of considering two branches of fermions in this half-space, we find it more convenient to mirror the left movers at the boundary and to consider only right movers in the full space, i.e.\ \begin{eqnarray} \check g_{tt'}(+,x)& = &\left\{ \begin{array}{ll} \check g_{tt'}(+,x) & \text{ for } x< 0 \\ \check g_{tt'}(-,-x) & \text{ for } x> 0 \end{array} \right. \\ \check \Phi_+(x) & = & \left\{ \begin{array}{ll} \check \Phi_+( x ) & \text{ for } x< 0 \\ \check \Phi_-( x ) & \text{ for } x> 0 \end{array} \right. \end{eqnarray} This Green's function solves Eq.~(\ref{equ}) both for $x \geq 0$ and $x \leq 0$. The bare interaction is now given in real space by \begin{equation} V_0(x,x') = g_4 \delta(x-x') + g_2 \delta(x+x') \; .\end{equation} The Fourier transform of $V_0$ is thus off-diagonal in momentum space, since the $g_2$-term couples $q$ with $-q$. Considering now the 2$\times$2 matrix structure of particles with momentum $q$ and $-q$, the bare interaction is \begin{equation} \left\{ \begin{array}{cc} V_{0,q,q} & V_{0,q,-q} \\ V_{0,-q,q} & V_{0,-q,-q} \end{array} \right\} = \left\{ \begin{array}{cc} g_4 & g_2 \\ g_2 & g_4 \end{array} \right\} \; ,\end{equation} i.e.\ identical to what was given in Eq.~(\ref{V0}) above, with just a different meaning of the matrix index. Also the non-interacting response function and the RPA equation for the screened interaction are the same as before. Due to the absence of translational symmetry the effective impedance depends on the distance from the boundary, and is given by \begin{eqnarray} \frac{{\rm Re} Z(x, \omega) }{R_K } & = &\frac{\omega}{2 \pi } {\rm Im} \left[ \int \frac{dq}{2\pi} \frac{V^R_{q,q}(\omega)}{(-i\omega + i q v_F)^2} +e^{2 i q x} \frac{V^R_{q,-q}(\omega)} {(-i \omega + i q v_F)(-i\omega -iqv_F)} \right] \\ &=& \frac{1}{4}\left[ K + K^{-1}-2 + \cos( 2 \omega x/ v ) (K^{-1}-K) \right] \; .\end{eqnarray} Hence the density of states at the boundary is found to vanish as \begin{equation} N(\epsilon) \sim |\epsilon|^{(1-K)/K} \end{equation} in agreement with the boundary exponent obtained in \cite{fabrizio95}, and the impurity exponent given in \cite{fabrizio1997}. In fact this result has been considerably debated \cite{oreg1996,fabrizio1997,oreg1997}; compare also \cite{furusaki1997}, as well as the detailed presentation by von Delft and Schoeller \cite{delft1998}.\footnote{These authors emphasize the importance of a proper handling of Klein factors, i.e.\ fermionic anticommutation relations. For more recent studies of this aspect, see, for example, Ref.\ \cite{mocanu2004}.} \section{Summary} Quasiclassical theory is known to be a useful tool in the theoretical description of superconductors and superfluids, including non-equilibrium states and interfaces. For normal-conducting electrons the theory mainly serves as the microscopic foundation of a Boltzmann-like transport theory. However, when taking into account the Coulomb interaction in terms of a fluctuating Hubbard-Stratonovich field, the theory also captures important interaction effects, which are beyond the reach of the Boltzmann equation.
In this article we illustrated this by considering the one-dimensional version of the theory, and demonstrated that it is possible to describe the non-Fermi liquid physics of Luttinger liquids. The mathematics involved is very similar to the $P(E)$ theory of tunneling, which we made use of in this article, and also to the functional bosonization approach to Luttinger liquids \cite{fogedby1976} (which we did not exploit here). We concentrated on the spinless case and on thermal equilibrium; the spin is straightforwardly included in the theory, and clearly -- since we use the Keldysh formalism -- non-equilibrium situations can be covered as well. We are confident that in the future the quasiclassical theory will become a useful, complementary approach to transport in interacting one-dimensional systems. \begin{acknowledgement} This work was supported by the Deutsche Forschungsgemeinschaft (SFB 484). \end{acknowledgement}
\section{Introduction} In this work we study the fluctuations of the order parameter in an $SU(N_c)$ gauge theory within an effective model. Unlike the order parameter, these observables are finite and temperature dependent even in the confined phase, thus providing important diagnostic information about the mechanism of the deconfinement phase transition, and the properties of gluons (and ghosts) in relation to the structure of the $Z(N_c)$ vacuum. Even when powerful numerical methods such as Lattice QCD (LQCD) are available to perform ab-initio calculations of the full theory~\cite{Boyd:1996bx,Borsanyi:2012ve,Kaczmarek:2002mc}, it is instructive, and sometimes essential, to work with an effective model description of a dynamical system. First of all, it provides clear links between the observables and the underlying symmetry. Second, it enables straightforward application of the model to other extreme conditions~\cite{Fukushima:2013rx,Fukushima:2017csk}, or as a building block to study further coupling to other dynamical fields~\cite{Andersen:2014xxa,Bruckmann:2013oba,Fraga:2013ova,Pagura:2016pwr,Lo:2018wdo,Lo:2020ptj}. A common strategy for constructing an effective potential is via a polynomial of the order parameter field~\cite{Ratti:2005jh,mat_model_1,Lo:2013hla}, i.e. the Ginzburg-Landau theory. Symmetry restricts the kind of terms that can appear in the potential. The coefficients are generally smooth functions of temperature (and other external fields), which need to be separately determined, e.g. by fitting observables to LQCD results. While a polynomial-type potential is convenient to work with, the relation between model parameters and the properties of the underlying gluons (and ghosts) is not transparent. In this study, we employ an effective potential built from one-loop expressions of the field determinants of gluon and ghost described in Ref.~\cite{Reinosa:2014ooa}. (See also Ref.~\cite{Braun:2007bx}.) The model naturally describes both the confined and the deconfined phases, as related to the spontaneous breaking of $Z(N_c)$ symmetry. In particular, the ghost term gives a confining, i.e. $Z(N_c)$ restoring, potential. The effective model, as a tool, allows us to gain insights into the interplay between vacuum structure and dynamics. The thermal properties of a pure gauge system have been analyzed previously by effective models~\cite{Meisinger:2001cq,Meisinger:2003id,mat_model_1,gen_1,Bannur:2006js,Braun:2007bx,sasaki_pot,Fukushima:2012qa,Lo:2013hla}. However, features of gluons in the confined phase are usually not examined, and the importance of fluctuation observables~\cite{Lo:2013etb} has not been fully realized. We therefore focus on these observables in this work and study how features of deconfinement manifest through them. We also use this opportunity to clarify the connection of these observables to eigenvalues of the Polyakov loop operator in a matrix model~\cite{mat_model_1,Meisinger:2001cq,Meisinger:2003id}. We show that the curvature masses associated with the Cartan angles, which serve as a proxy to study the $A_0$-gluon screening mass, show a characteristic trend of a rapid drop in the vicinity of the transition temperature $T_d$. Such a behavior is traceable to the competing effects of $Z(3)$ restoring (confining) and $Z(3)$ breaking (deconfining) forces. The strength of the masses is sensitive to the assumptions made on the dynamical properties of the gluon and ghost propagators.
Finally we present a possible relation between the glueball mass and $T_d$ suggested by the model. \section{group structure of $SU(N_c)$} The Polyakov loop operator in the fundamental representation, after a diagonalizing unitary transformation, can be expressed by the $N_c$ eigenphases $\vec{q}$: \begin{align} \hat{\ell}_F = {\rm diag} \, \left( e^{i q_1}, e^{i q_2}, \ldots, e^{i q_{N_c}} \right). \end{align} The first $N_c-1$ phases may be taken as independent, and unitarity is enforced by requiring \begin{align} q_{N_c} = - \sum_{j=1}^{N_c-1} \, q_j. \end{align} Alternatively, the angles can be expressed in terms of the $N_c-1$ group angles of the maximal $Z(N_c)$ Cartan subgroup, ($\gamma_n$'s): \begin{align} \vec{q} = \sum_{j=1}^{N_c-1} \, \gamma_j \, \vec{v}_j, \end{align} where $\{ \vec{v}_j \} $ is a set of basis vectors, each being an $N_c$ dimensional vector whose elements sum to zero. The order parameter field is obtained from a trace of $\hat{\ell}_F$: \begin{align} \ell = \frac{1}{N_c} \, {\rm Tr} \, \hat{\ell}_F. \end{align} For $N_c \geq 3$, the order parameter is complex, and one can explore its real and imaginary parts: \begin{align} \label{eq:xy} \begin{split} \ell &= X + i \, Y \\ X &= \frac{1}{N_c} \, \sum_{j=1}^{N_c} \, \cos q_j \\ Y &= \frac{1}{N_c} \, \sum_{j=1}^{N_c} \, \sin q_j. \end{split} \end{align} Note that $X, Y$ are regarded as scalar functions of the $N_c-1$ Cartan angles $\vec{\gamma}$. To study the fluctuations of the order parameter in an effective model, we need to perform $X, Y$-field derivatives of a potential. Eq.~\eqref{eq:xy} provides a connection between these derivatives and those acting on the $\gamma_n$'s: \begin{align} \label{eq:dxy_op} \begin{split} \frac{d}{d X} &= \sum_{j=1}^{N_c-1} \, C_{1j}(\vec{\gamma}) \, \frac{d}{d \gamma_j} \\ \frac{d}{d Y} &= \sum_{j=1}^{N_c-1} \, C_{2j}(\vec{\gamma}) \, \frac{d}{d \gamma_j}, \end{split} \end{align} where the $2 \times (N_c-1)$ matrix $C$ is obtained by (left-) inverting the transpose of the Jacobian $J$: \begin{align} \begin{split} J &= \frac{\partial \, \{X, Y\}}{\partial \, \{\gamma_1, \gamma_2, \ldots \gamma_{N_c-1} \}} \\ C &= [J^t]^{-1}. \end{split} \end{align} Finally, starting with a potential expressed in terms of the Cartan angles, $U(\vec{\gamma})$, the susceptibilities can be computed by forming the curvature matrix $\bar{U}^{(2)}$~\cite{Sasaki:2006ww,Lo:2014vba,Lo:2018wdo} \begin{align} \label{eq:xycurva} \bar{U}^{(2)} = \frac{1}{T^4} \, \begin{pmatrix} \frac{\partial^2 \, U}{\partial X \, \partial X} & \frac{\partial^2 \, U}{\partial X \, \partial Y} \\ \frac{\partial^2 \, U}{\partial Y \, \partial X} & \frac{\partial^2 \, U}{\partial Y \, \partial Y} \end{pmatrix}. \end{align} The various $(X, Y)$-field derivatives are calculated according to Eq.~\eqref{eq:dxy_op}. Inverting the curvature matrix gives \begin{align} T^3 \, \tilde{\chi} = \left( \bar{U}^{(2)} \right)^{-1}, \end{align} with \begin{align} \begin{split} T^3 \, \chi_L &= T^3 \, \tilde{\chi}_{11} \\ T^3 \, \chi_T &= T^3 \, \tilde{\chi}_{22}. \end{split} \end{align} Note that the identification of the longitudinal and transverse directions~\cite{Lo:2013etb} with the real and imaginary components holds along the real line; this is not so for other $Z(N_c)$ vacua.
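The chain rule of Eq.~\eqref{eq:dxy_op} is straightforward to implement numerically. The sketch below constructs $C = [J^t]^{-1}$ for $SU(3)$ by finite differences and validates the resulting $(X,Y)$-derivatives on a test potential whose curvature matrix is known by construction; the test couplings and the evaluation point are arbitrary choices, not model output.
\begin{verbatim}
# Sketch of Eq. (dxy_op) for SU(3): d/dX, d/dY from d/dgamma via C = (J^t)^{-1}.
import numpy as np

V1, V2 = np.array([1.0, 0.0, -1.0]), np.array([0.5, -1.0, 0.5])

def XY(g):
    q = g[0] * V1 + g[1] * V2          # eigenphases from the Cartan angles
    return np.array([np.cos(q).sum(), np.sin(q).sum()]) / 3.0

def dX_dY(func, g, h=1e-4):
    """(d func/dX, d func/dY) at g, from C = (J^t)^{-1} acting on d/dgamma."""
    e = np.eye(2) * h
    dfdg = np.array([(func(g + e[j]) - func(g - e[j])) / (2 * h) for j in range(2)])
    J = np.array([(XY(g + e[j]) - XY(g - e[j])) / (2 * h) for j in range(2)]).T
    return np.linalg.solve(J.T, dfdg)

uX, uY = 0.7, 0.3
U = lambda g: uX * XY(g)[0] ** 2 + uY * XY(g)[1] ** 2  # curvature = diag(2uX, 2uY)

g0 = np.array([2.0, 0.1])                              # generic test point
curv = np.array([[dX_dY(lambda g: dX_dY(U, g)[i], g0)[j] for j in range(2)]
                 for i in range(2)])
print(curv)                     # ~ [[1.4, 0], [0, 0.6]], as built into U
chi = np.linalg.inv(curv)       # susceptibility analogue of (U^(2))^{-1}
print(chi[0, 0], chi[1, 1])     # chi_L, chi_T counterparts
\end{verbatim}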
To illustrate the computation of fluctuations, we consider a schematic effective potential (model A) of the form: \begin{align} \label{eq:potA} U = U_{\rm conf.} + U_{\rm glue} \end{align} where the confining part is modeled by the group invariant measure $H$~\cite{Fukushima:2003fw} \begin{align} \label{eq:pot1} U_{\rm conf.} = -\frac{b}{2} \, T \, \ln H. \end{align} This potential is confining in the sense that it tends to drive the system towards the $Z(N_c)$ symmetric vacuum ($\ell = 0$). The deconfining part, which prefers the spontaneously broken $Z(N_c)$ vacuum, is modeled as \begin{align} \label{eq:pot2} \begin{split} U_{\rm glue} &= n_{\rm glue} \, T \, \int \frac{d^3 k}{(2 \pi)^3} \, \times \\ &{\rm Tr}_A \, \ln ( I - \hat{\ell}_A \, e^{-\beta E_A(k)} ), \end{split} \end{align} with $E_A(k) = \sqrt{k^2 + m_A^2}$. In section~\ref{sec4}, we shall investigate some alternative forms of the potential and discuss issues of gauge dependence and the inclusion of wavefunction renormalizations. As we are mainly interested in studying the influence of the group structure, we shall keep the model parameters as simple as possible. In fact, we shall start with the parameterization: $ b = \left( 0.1745 \, {\rm GeV} \right)^{3} $, $n_{\rm glue}=2$, and $m_A \approx 0.756 $ GeV.~\footnote{Such a value of the gluon mass ($\approx 0.7$ GeV) is supported by calculations in different gauges. We have also checked that using the Gribov dispersion relation $E(k) = \sqrt{k^2 + m_A^4/k^2}$~\cite{Zwanziger:2004np} or imposing a UV cutoff via $m_A \rightarrow m_A \, e^{-k^2/\Lambda^2}$~\cite{eric_gb} does not lead to significant differences in the observables studied.} Two group structures appear in this schematic model: the adjoint operator $\hat{\ell}_A$ and the group invariant measure $H$. It is useful to express them in terms of the eigenphases. For the former, \begin{align} \label{eq:adj1} \hat{\ell}_A = {\rm diag} \, \left( e^{i Q_1}, e^{i Q_2}, \ldots, e^{i Q_{{N_c}^2-1}} \right), \end{align} with \begin{align} \label{eq:adj2} \vec{Q} = \left( 0, \ldots, 0; q_j-q_k, -(q_j-q_k) \right), \end{align} for $j < k$, $j, k = 1, 2, \ldots N_c$. The adjoint angles are constructed from the root system~\cite{Georgi:1982jb,mat_model_1}, classified into Cartan and non-Cartan parts: a) $N_c-1$ zeros, representing the identity matrix elements in $\hat{\ell}_A$; b) $N_c \, (N_c-1) / 2$ pairs of $q_i-q_j$'s for $i > j$ and terms with the opposite sign. An intuitive way to understand the form of the potential in Eq.~\eqref{eq:pot2} is to realize that the adjoint derivative operator for the gluon field, in the presence of a diagonal background field $\hat{q} = i \, \beta g \bar{A}_0 = \beta g \bar{A}_4$, reads \begin{align} \begin{split} \bar{D}^{\rm adj}_\mu \, \mathbf{M} &= \partial_\mu \, \mathbf{M} + \delta_{\mu 0} \, \frac{1}{\beta} \, [ \hat{q}, \mathbf{M} ]. \end{split} \end{align} The adjoint operator acts on an arbitrary $su(N_c)$ Lie-algebra matrix $\mathbf{M}$, which has $N_c^2-1$ independent components. As $\hat{q}$ is diagonal, the $ij$-th component of the commutator $[\hat{q}, \mathbf{M}]$ is given by~\cite{instanton,Dumitru:2013xna} \begin{align} (q_i - q_j) \, \mathbf{M}_{ij}. \end{align} For $i \neq j$, the multiplying factors are exactly the non-trivial entries of the adjoint angles $\vec{Q}$ in Eq.~\eqref{eq:adj2}. The remaining $N_c$ diagonal elements of $\mathbf{M}$, of which $N_c-1$ are independent, are multiplied by $0$, i.e., the Cartan part of $\vec{Q}$.
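The adjoint construction is easily checked numerically, e.g., against the identity ${\rm Tr} \, \hat{\ell}_A = \vert {\rm Tr} \, \hat{\ell}_F \vert^2 - 1$, which holds for arbitrary eigenphases and reduces to the real form used later along the real line. A minimal sketch:
\begin{verbatim}
# Build the N_c^2 - 1 adjoint angles of Eq. (eq:adj2) from random eigenphases
# and check Tr l_A = |Tr l_F|^2 - 1.
import numpy as np

def adjoint_angles(q):
    """Cartan zeros plus the pairs q_j - q_k (j < k) and their negatives."""
    n = len(q)
    diffs = [q[j] - q[k] for j in range(n) for k in range(j + 1, n)]
    return np.concatenate([np.zeros(n - 1), diffs, [-d for d in diffs]])

rng = np.random.default_rng(0)
for Nc in (2, 3, 4):
    q = rng.uniform(-np.pi, np.pi, Nc - 1)
    q = np.append(q, -q.sum())              # unitarity: phases sum to zero
    Q = adjoint_angles(q)
    tr_F = np.exp(1j * q).sum()
    tr_A = np.exp(1j * Q).sum().real
    print(Nc, tr_A, abs(tr_F) ** 2 - 1)     # the two columns agree
\end{verbatim}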
The effect of the background field is thus similar to introducing an imaginary chemical potential for the $N_c^2-1$ independent components. In particular, the gauge field determinant can be constructed as \begin{align} \label{eq:matsu} \begin{split} {\rm Tr} \, \ln \, \bar{D}^{2}_{\rm adj} &= \sum_a \, V \, \sumint \, \ln \, \left( (\omega_n + \frac{Q_a}{\beta})^2 + (\vec{k})^2 \right) \\ &= 2 \, V \, \int \frac{d^3 k}{(2 \pi)^3} \, {\rm Tr}_A \, \ln \, ( I - \hat{\ell}_A \, e^{-\beta k} ) \\ &+ (T=0), \end{split} \end{align} where $\sumint$ denotes a Matsubara sum over the bosonic frequencies and an integral over momenta. From now on, we shall retain only the finite temperature piece. Eq.~\eqref{eq:pot2} is its simple extension to a finite gluon mass. Another group structure of interest is the invariant measure. This can also be expressed in terms of the eigenphases $q_j$'s via \begin{align} \begin{split} H &= \prod_{j > k} \, \vert e^{i q_j}-e^{i q_k} \vert^2 \\ &= \prod_{j > k} \, 4 \, \sin^2 \left( \frac{q_j-q_k}{2} \right). \end{split} \end{align} Note that there are $ N_c \, (N_c-1) /2 $ pairs of $(j > k)$ in the product. A fact that would prove useful later is the construction of the (logarithm of the) invariant measure from \begin{align} \label{eq:haar} \ln H = {\rm Tr}_A^\prime \, \ln \left( I - \hat{\ell}_A \right), \end{align} where ${\rm Tr}^\prime$ denotes the partial trace over the non-Cartan roots, to avoid irrelevant divergences from the vanishing elements. Hence the effective potential can be expressed as \begin{align} U_{\rm conf.} = -\frac{b}{2} \, T \, {\rm Tr}_A^\prime \, \ln \, (1 - {\hat{\ell}_A}), \end{align} with $\hat{\ell}_A$ in Eqs.~\eqref{eq:adj1} and~\eqref{eq:adj2}. This establishes that an invariant measure term behaves like the glue potential~\eqref{eq:pot2} with $E_A(k) \rightarrow 0$, but of the opposite sign, and should be formally understood as a ghost contribution~\cite{Weiss:1980rj,Gocksch:1993iy}. Here we explicitly work out the cases $N_c = 2, 3, 4$. \subsection{$N_c=2$} In this case there is only a single independent eigenphase, $\vec{q} = (q_1, -q_1)$, for the Polyakov loop operator $\hat{\ell}_F$: \begin{align} \hat{\ell}_F = {\rm diag} \, \left( e^{i q_1}, e^{-i q_1} \right), \end{align} and the order parameter field is purely real \begin{align} \ell = \cos q_1. \end{align} The adjoint angles can be constructed as \begin{align} \vec{Q} = (0; 2 q_1, -2 q_1) \end{align} and from Eq.~\eqref{eq:haar} the invariant measure works out to be \begin{align} \label{eq:hsu2} \begin{split} H(q_1) &= 4 \, \sin^2 q_1 \\ &= 4 \, (1-\ell^2). \end{split} \end{align} The same result may be obtained from a slightly different starting point. Consider the parametrization of $SU(2)$ matrices $\{u\}$ by $(a_0, \vec{a})$ via \begin{align*} u = a_0 \, I + i \, \vec{a} \cdot \vec{\sigma} \end{align*} where $I, \vec{\sigma}$ are the $ 2 \times 2$ identity and Pauli matrices. The invariant measure is given by \begin{align*} \int d \mu_{\rm SU(2)} &= \int d^4 a \, \delta(a^2-1) \\ &= \int d a_0 \, d \vert \vec{a} \vert \, d^3 \hat{n} \, \vert \vec{a} \vert^2 \, \delta(a_0^2+\vec{a}^2-1) \\ &\propto \int d a_0 \, \sqrt{1-a_0^2}. \end{align*} The last line assumes a uniform distribution of $d^3 \hat{n}$. We see that $a_0$ plays the role of $\ell = \cos(q_1)$: the change of coordinates from $\ell$ to $q_1$ gives an extra factor of the square root term, leading to the same expression of the invariant measure as in Eq.~\eqref{eq:hsu2}.
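Equation~\eqref{eq:haar} can likewise be verified numerically: since the non-Cartan roots come in $\pm$ pairs, $(1-e^{iQ})(1-e^{-iQ}) = 4 \sin^2(Q/2)$, and the partial trace exponentiates to the product form of the measure. A minimal sketch for $SU(3)$ with random eigenphases:
\begin{verbatim}
# Check of Eq. (eq:haar): product form of H vs. exp[Tr'_A ln(1 - l_A)].
import numpy as np

rng = np.random.default_rng(2)
q = rng.uniform(-np.pi, np.pi, 2)
q = np.append(q, -q.sum())                 # SU(3) eigenphases

# product form: H = prod_{j>k} 4 sin^2((q_j - q_k)/2)
H_prod = np.prod([4 * np.sin((q[j] - q[k]) / 2) ** 2
                  for j in range(3) for k in range(j)])

# root form: non-Cartan adjoint angles q_j - q_k and their negatives
Q = np.array([q[j] - q[k] for j in range(3) for k in range(3) if j != k])
H_root = np.exp(np.sum(np.log(1 - np.exp(1j * Q))).real)

print(H_prod, H_root)                      # identical up to rounding
\end{verbatim}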
\subsection{$N_c=3$} For the SU(3) gauge group there are two independent eigenphases, $\vec{q} = (q_1, q_2, -q_1-q_2)$, hence \begin{align} \hat{\ell}_F = {\rm diag} \, \left(e^{i q_1}, e^{i q_2}, e^{-i (q_1+q_2)} \right), \end{align} and the order parameter field is \begin{align*} \ell &= X + i \, Y \\ X &= \frac{1}{3} \, (\cos q_1 + \cos q_2 + \cos (q_1+q_2)) \\ Y &= \frac{1}{3} \, (\sin q_1 + \sin q_2 - \sin (q_1+q_2)). \end{align*} \begin{table}[h!] \centering \begin{tabular}{ |c|c|c| } \hline $Q_1$ & $Q_2$ & \\ \hline $0$ & $0$ & \\ \hline \hline $Q_3 = -Q_6 $ & $Q_4 = -Q_7$ & $Q_5=-Q_8$ \\ \hline $q_1-q_2$ & $2q_1+q_2$ & $q_1+2q_2$ \\ \hline \end{tabular} \caption{Adjoint angles of the Polyakov loop operator for the $SU(3)$ group.} \label{tab:su3q} \end{table} The adjoint angles are shown in Table~\ref{tab:su3q}. Using these with Eq.~\eqref{eq:haar}, the invariant measure can be computed: \begin{align} \label{eq:hsu3} \begin{split} H(q_1, q_2) &= 64 \, \sin^2 \frac{(q_1-q_2)}{2} \\ &\times \sin^2 \frac{(2 q_1 + q_2)}{2} \, \sin^2 \frac{(q_1 + 2 q_2)}{2}. \end{split} \end{align} We can also express the result in terms of the Cartan parameters. The two independent directions can be chosen to be \begin{align} \begin{split} \vec{v}_1 &= (1, 0, -1) \\ \vec{v}_2 &= (1/2, -1, 1/2), \end{split} \end{align} and \begin{align} \label{eq:su3gamma} \begin{split} \gamma_1 &= q_1 + {q_2}/2 \\ \gamma_2 &= -q_2 \end{split} \end{align} can be taken as independent variables, and the invariant measure reads \begin{align} \begin{split} H(\gamma_1, \gamma_2) &\propto \sin^2 \frac{(\gamma_1 - 3/2 \, \gamma_2)}{2} \\ &\times \sin^2 \gamma_1 \, \sin^2 \frac{(\gamma_1 + 3/2 \, \gamma_2)}{2}. \end{split} \end{align} Specific to the SU(3) gauge group, the two independent degrees of freedom can be identified with the real and imaginary parts of the traced Polyakov loop operator in the fundamental representation, $\ell$; the invariant measure can then be expressed via $X, Y$: \begin{align} \begin{split} H(X, Y) &= 27 \times ( \, 1 - 6 ( X^2 + Y^2 ) + \\ & 8 (X^3 - 3 X Y^2) - 3 (X^2+Y^2)^2 \, ). \end{split} \end{align} For $N_c > 3$, the invariant measure generally depends on $N_c-1$ independent angles, and therefore is not expressible solely in terms of $(X, Y)$. \begin{figure*}[!ht] \centering \includegraphics[width=0.49\linewidth]{fig1a.pdf} \includegraphics[width=0.49\linewidth]{fig1b.pdf} \includegraphics[width=0.49\linewidth]{fig1c.pdf} \includegraphics[width=0.49\linewidth]{fig1d.pdf} \includegraphics[width=0.49\linewidth]{fig1e.pdf} \includegraphics[width=0.49\linewidth]{fig1f.pdf} \caption{The Polyakov loop potentials~\eqref{eq:potA} (left) and the derived observables: the Polyakov loop expectation values, and the longitudinal and transverse susceptibilities (right), for $N_c=2, 3, 4$.} \label{fig1} \end{figure*} \subsection{$N_c=4$} The analysis for $N_c=4$ and beyond proceeds in a similar fashion. For the SU(4) gauge group there are three independent eigenphases: \begin{align} \vec{q} &= (q_1, q_2, q_3, q_4) \end{align} with $ q_4 = -(q_1 + q_2 + q_3)$. The order parameter field is given by \begin{align} \begin{split} \ell &= X + i \, Y \\ X &= \frac{1}{4} \, (\cos q_1 + \cos q_2 + \cos q_3 + \cos (q_1+q_2+q_3)) \\ Y &= \frac{1}{4} \, (\sin q_1 + \sin q_2 + \sin q_3 - \sin (q_1+q_2+q_3)). \end{split} \end{align} The $15=3+6+6$ adjoint angles are composed of 3 zeros, plus 6 non-trivial angles and their negatives. \begin{table}[h!]
\centering \begin{tabular}{ |c|c|c| } \hline $Q_1$ & $Q_2$ & $Q_3$ \\ \hline $0$ & $0$ & $0$ \\ \hline \hline $Q_4=-Q_{10}$ & $Q_5=-Q_{11}$ & $Q_6=-Q_{12}$ \\ \hline $q_1-q_2$ & $q_1-q_3$ & $q_2-q_3$ \\ \hline \hline $Q_7=-Q_{13}$ & $Q_8=-Q_{14}$ & $Q_9=-Q_{15}$ \\ \hline $2q_1+q_2+q_3$ & $q_1+2q_2+q_3$ & $q_1+q_2+2q_3$ \\ \hline \end{tabular} \caption{Adjoint angles of the Polyakov loop operator for the $SU(4)$ group.} \label{tab:su4q} \end{table} The invariant measure can be constructed from the non-trivial adjoint angles: \begin{align} H(\vec{q}) \propto \prod_{j=4-9} \, \sin^2 (Q_j/2). \end{align} To translate this into the Cartan angles $\vec{\gamma}$, we can use the following basis vectors: \begin{align} \label{eq:su4} \begin{split} \vec{v}_1 &= (1, 1/3, -1/3, -1) \\ \vec{v}_2 &= (1, -1, -1, 1) \\ \vec{v}_3 &= (1/3, -1, 1, -1/3). \end{split} \end{align} In particular, going along $\vec{v}_1$, corresponding to the uniform eigenvalue ansatz~\cite{mat_model_1}, the order parameter field is purely real, and through $\gamma_1$ we can relate the invariant measure to the Polyakov loop: \begin{align} \begin{split} X &= \frac{1}{2} \, (\cos \gamma_1 + \cos (\gamma_1/3) ) \\ Y &= 0, \end{split} \end{align} and \begin{align} H(\vec{q} \rightarrow \gamma_1 \vec{v}_1) \propto \sin^6 (\gamma_1/3) \, \sin^4 (2 \gamma_1/3) \, \sin^2 (\gamma_1), \end{align} compared to a similar projection in the $SU(3)$ case: \begin{align} H(\vec{q} \rightarrow \gamma_1 \vec{v}_1) \propto \sin^2 \frac{\gamma_1}{2} \, \sin^2 \gamma_1 \, \sin^2 \frac{\gamma_1}{2}. \end{align} \section{Polyakov loop and the susceptibilities} \subsection{general results} Once an effective potential is specified, its minimization and the extraction of various observables are standard procedures~\cite{Lo:2013hla}. Here we simply display the results in Fig.~\ref{fig1}, and highlight some observations: \begin{itemize} \item First, the order of the phase transition naturally changes from second order for $N_c=2$ to first order for $N_c \geq 3$. Note that the same set of model parameters has been used in the calculations. \item Second, the two susceptibilities derived for $N_c \geq 3$ are equal in the confined phase, while the potential develops a narrow aspect ratio, i.e. $\chi_T \ll \chi_L$, in the deconfined phase. This is known for $N_c=3$~\cite{Lo:2013etb}. Eq.~\eqref{eq:dxy_op} makes it possible to study the fluctuations beyond $N_c=3$, and for the first time we can verify that a similar trend is observed in this class of models for $N_c \geq 4$ under the uniform eigenvalue ansatz~\cite{mat_model_1}. \item It is expected that the first order phase transition becomes stronger as $N_c$ increases. This is the case in this model: comparing the $N_c=4$ case with $N_c=3$, we observe that the Polyakov loop at $T_d$ increases, while the magnitudes of the susceptibilities decrease. The latter suggests larger curvatures of the potential around the minima, which sets the stage for a stronger phase transition. As $N_c$ increases further, we find that $\ell(T_d)$ tends to $\approx 0.5$, while the decreasing trend of the susceptibilities continues.~\footnote{The value becomes $\ell(T_d, N_c\rightarrow \infty) \approx 0.6$ for models B and C introduced later. See Eq.~\eqref{eq:modelB}.} \end{itemize} The Landau parameters can be directly extracted in this model. For the case of $SU(3)$ along the real line, we write \begin{align} \frac{U}{T^4} = \bar{u}_0 + \bar{u}_2 \, X^2 + \bar{u}_3 \, X^3 + \bar{u}_4 \, X^4 + \ldots.
\end{align} Expanding the potentials~\eqref{eq:pot1} and~\eqref{eq:pot2} in powers of $X$, we obtain (in the Boltzmann limit) \begin{align} \label{eq:landau} \begin{split} \bar{u}_0 &= \frac{1}{\pi^2} \, ( \frac{m_A}{T} )^2 \, K_2(\frac{m_A}{T}) \\ \bar{u}_2 &= \frac{3 b}{T^3} - \frac{9}{\pi^2} \, ( \frac{m_A}{T} )^2 \, K_2(\frac{m_A}{T}) \\ \bar{u}_3 &= - \frac{4 b}{T^3} + \frac{27}{\pi^2} \, (\frac{m_A}{T} )^2 \, K_2(\frac{2 m_A}{T}) \\ \bar{u}_4 &= \frac{21 b}{2 T^3} - \frac{81}{4 \pi^2} \, (\frac{m_A}{T} )^2 \, K_2(\frac{2 m_A}{T}), \end{split} \end{align} where $K_2$ is the modified Bessel function of the second kind (order $2$). These relations link the Landau parameters to properties of the underlying gluons. To derive these results we have used the fact that \begin{align} \begin{split} {\rm Tr} \, \hat{\ell}_A &= \left( {\rm Tr} \, \hat{\ell}_F \right)^2 - 1 \\ &\xrightarrow{SU(3)} 9 \, X^2 -1 \\ {\rm Tr} \, \hat{\ell}_A^2 &= \left( {\rm Tr} \, \hat{\ell}_F^2 \right)^2 - 1 \\ &\xrightarrow{SU(3)} 36 \, X^2 - 108 \, X^3 + 81 \, X^4 - 1 \end{split} \end{align} along the real line. The expansion works best in the confined phase, where $X \ll 1$. Note that the cubic term arises naturally from ${\rm Tr} \, \hat{\ell}_A^2$, and we can readily verify the standard scenario of a first order phase transition: $\bar{u}_3 < 0, \bar{u}_4 > 0$, and $\bar{u}_2$ changes sign (from positive to negative) close to $T_d$. See Fig.~\ref{fig2}. The susceptibilities can be simply constructed from \begin{align} (T^3 \, \chi_{L,T})^{-1} &\approx (2 \bar{u}_2). \end{align} Thus, the observables are driven by a competition between the confining and the deconfining potentials. This is a general observation for the class of models we study. Also, the condition \begin{align} \bar{u}_2(T) = 0 \end{align} is useful for a qualitative understanding of $T_d$, giving $T \approx 0.29 $ GeV instead of the true value $ T_d = 0.274 $ GeV, calculated numerically. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{fig2.pdf} \caption{Landau parameters (Eq.~\eqref{eq:landau}) derived from the effective potential~\eqref{eq:potA} as functions of temperature.} \label{fig2} \end{figure} \subsection{gluon density in the presence of Polyakov loop} \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{fig3.pdf} \caption{Thermal densities of quarks and gluons in the presence of a background Polyakov loop field for $N_c=2, 3, 4$. The results are evaluated at $T = 0.24$ GeV with an effective gluon mass $m_A = 0.756$ GeV and a quark mass $m_Q = 0.1$ GeV.} \label{fig3} \end{figure} A key feature of an effective Polyakov loop model is a description of the thermal densities of gluons and quarks in the presence of a Polyakov loop mean field. These can be conveniently expressed in terms of the eigenphases. Take, for example, the case of SU(3): along the real line, the densities depend only on a single angle variable $\gamma_1$ (i.e. $\gamma_2 = 0$), \begin{align} \hat{\ell}_F \rightarrow {\rm diag} \, \left( e^{i \gamma_1}, 1, e^{-i \gamma_1} \right) \end{align} and \begin{align} \begin{split} \hat{\ell}_A \rightarrow {\rm diag} \, &( 1, 1, e^{i \gamma_1}, e^{2 i \gamma_1}, e^{i \gamma_1}, \\ & e^{-i \gamma_1}, e^{-2 i \gamma_1}, e^{-i \gamma_1} ), \end{split} \end{align} such that \begin{align} \label{eq:glue_den} n_{\rm glue}(\gamma_1) = \int \frac{d^3 k}{(2 \pi)^3} \, \sum_{j=1}^{8} \frac{\hat{\ell}_A^{(j)}}{e^{\beta E_A(k)}-\hat{\ell}_A^{(j)}}.
\end{align} Note that as $\gamma_1 \rightarrow 0$, $\hat{\ell}_A$ becomes an identity in the $8 \times 8$ adjoint space, and Eq.~\eqref{eq:glue_den} recovers the free quantum Bose gas limit: \begin{align} n_{\rm glue}(\gamma_1 \rightarrow 0) = 8 \times \int \frac{d^3 k}{(2 \pi)^3} \, \frac{1}{e^{\beta E_A(k)}-1}. \end{align} An analogous expression can be derived for quarks, except that the trace is over the entries of the Polyakov loop operator in the fundamental representation $\hat{\ell}_F$. It can also be expressed as a function of $\gamma_1$: \begin{align} n_{\rm quarks}(\gamma_1) = \int \frac{d^3 k}{(2 \pi)^3} \, \sum_{j=1}^{3} \frac{\hat{\ell}_F^{(j)}}{e^{\beta E_Q(k)}+\hat{\ell}_F^{(j)}}, \end{align} and similarly the free quantum Fermi gas limit is recovered at $\gamma_1 \rightarrow 0$: \begin{align} n_{\rm quarks}(\gamma_1 \rightarrow 0) = 3 \times \int \frac{d^3 k}{(2 \pi)^3} \, \frac{1}{e^{\beta E_Q(k)}+1}. \end{align} A plot of these thermal densities is shown in Fig.~\ref{fig3}, illustrated for $N_c = 2, 3, 4$. Note that the x-axis shows the corresponding traced Polyakov loop, projected along the real line: \begin{align} \begin{split} \ell_{\rm SU(2)}(\gamma_1) &= \cos \gamma_1 \\ \ell_{\rm SU(3)}(\gamma_1) &= \frac{1}{3} \, \left( 1 + 2 \, \cos \gamma_1 \right) \\ \ell_{\rm SU(4)}(\gamma_1) &= \frac{1}{2} \, \left( \cos \gamma_1 + \cos (\gamma_1/3) \right). \end{split} \end{align} An important observation is that both densities are substantially suppressed at $\ell \rightarrow 0$ compared to the free gas limits at $\ell \rightarrow 1$. This is how confinement is represented in this class of models: a statistical confinement which relates the thermal abundances of quarks and gluons to the expectation value of the $Z(N_c)$ order parameter, i.e. the Polyakov loop. For gluons, the density in the $\ell \rightarrow 0$ limit turns mildly negative. This does not necessarily mean we have a negative pressure in the bulk, since other contributions, such as those coming from the confining ghosts, can reverse this negative value. The thermal distribution of confining gluons in the QCD medium is still an open issue, and a small negative value in some momentum range is not ruled out. Nevertheless, it is likely that beyond-one-loop, nonperturbative interactions will produce further corrections to this quantity.~\footnote{The standard prescription in an effective model is to simply subtract the potential at $\ell = 0$. This fixes the problem of negative partial pressure and density of gluons in the confined phase. Of course, results for the Polyakov loops and their fluctuations are not affected. However, one then needs to correct for the right number of gluonic degrees of freedom in the deconfined phase at high temperatures.} A similar plot from a nonperturbative study, such as Schwinger-Dyson equations in a given gauge, can help to clarify the issue~\cite{vonSmekal:1997ern,vonSmekal:1997ohs,Fischer:2008uz,Aguilar:2008xm,Maas:2011se}. In any case the suppression discussed here, linked to an order parameter for the spontaneous $Z(N_c)$ breaking, is an effective description. There are interesting differences from models which predict the suppression in the spectrum via infrared divergent mass (and wavefunction) renormalizations~\cite{Lo:2009ud}. The task remains to understand the connections between various models of confinement, and to further explore the dynamical aspects of confining quasi-gluons in the QCD medium, as practical input for building the phenomenology of a thermal system of glueballs and other objects.
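A minimal numerical sketch of Eq.~\eqref{eq:glue_den} for $SU(3)$ along the real line is given below; the temperature and gluon mass follow the text (cf. Fig.~\ref{fig3}), while the integration cutoff is a numerical choice. It reproduces the strong suppression of the gluon density (including its mildly negative value) at the confining point $\ell = 0$:
\begin{verbatim}
# Thermal gluon density in a background Polyakov loop field, Eq. (eq:glue_den).
import numpy as np
from scipy.integrate import quad

T, mA = 0.24, 0.756                                 # GeV, as in Fig. 3

def n_glue(gamma1):
    Q = np.array([0, 0, gamma1, 2 * gamma1, gamma1,
                  -gamma1, -2 * gamma1, -gamma1])   # adjoint phases
    phases = np.exp(1j * Q)
    def integrand(k):
        boltz = np.exp(-np.sqrt(k ** 2 + mA ** 2) / T)
        return k ** 2 / (2 * np.pi ** 2) * \
            np.sum(phases * boltz / (1 - phases * boltz)).real
    return quad(integrand, 0.0, 20.0 * T)[0]

ell = lambda g1: (1 + 2 * np.cos(g1)) / 3
for g1 in (0.0, np.pi / 3, 2 * np.pi / 3):          # deconfined -> confined
    print(f"ell = {ell(g1):+.2f}  n_glue = {n_glue(g1):.5f} GeV^3")
\end{verbatim}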
\section{curvature masses of Cartan angles} \label{sec4} \subsection{gauge dependence and effects of wavefunction renormalization} In Ref.~\cite{Reinosa:2014ooa} the phase transition of the pure Yang-Mills system is studied using the background field method in the Landau-DeWitt gauge. A confining potential is motivated from the ghost determinant: \begin{align} {\rm Tr} \, \ln \, \bar{D}^{2}_{\rm adj}. \end{align} This gives a potential of exactly the same form as $U_{\rm glue}$ in Eq.~\eqref{eq:pot2}, but with an opposite sign, and is hence $Z(N_c)$ restoring. Note that the correct way to account for a ghost contribution to the thermodynamic potential involves a bosonic Matsubara sum of the relevant operator (with a factor of $-1$), instead of a fermionic Matsubara sum~\cite{Bernard}. The total potential can be written as: \begin{align} U_{\rm tot} = 3 U_1(m_A) - U_1 (0). \end{align} This form makes it obvious that we are considering 3 gluons and 1 ghost. Both terms can be expressed in terms of $U_1$, which reads \begin{align} \begin{split} U_1(m_A) &= \frac{1}{2 \beta} \, \sum_a \, \sumint \, \ln \, \left( \tilde{k}_a^2 + m_A^2 \right) \\ &= T \, \int \frac{d^3 k}{(2 \pi)^3} \, {\rm Tr}_A \, \ln \, ( I - \hat{\ell}_A \, e^{-\beta E_A(k)} ), \end{split} \end{align} with \begin{align} \label{eq:ktilde} \tilde{k}_a^2 = (\omega_n + \frac{Q_a}{\beta})^2 + (\vec{k})^2. \end{align} Note that the invariant measure term~\eqref{eq:haar} can be regarded as a limiting case of $U_1$ with $E_A \rightarrow 0$.~\footnote{In the axial gauge~\cite{Weiss:1980rj}, it was argued that the invariant measure term is canceled by a similar term in the glue potential. The case of massive gluons remains to be explored.} One can speculate that the form of the potential in the 't Hooft-Feynman gauge reads \begin{align} \begin{split} U_{\rm tot} &= 2 U_1 (m_A) + \Delta U_\xi \\ \Delta U_\xi &= (1+\Delta n_\xi) \times ( U_1(m_A) - U_1 (0) ). \end{split} \end{align} The subscript $\xi$ signifies the possible gauge dependence, e.g. $\Delta n_\xi = 0 \, (1)$ in the Landau-DeWitt ('t Hooft-Feynman) gauge. Note that the gluon mass $m_A$ itself can depend on the gauge choice. Nevertheless, in all cases the physical limit of 2 gluonic degrees of freedom at high temperature (the Stefan-Boltzmann limit) is recovered when we set $m_A=0$, $\hat{\ell}_A = I$ in the perturbative vacuum. \begin{figure}[!ht] \includegraphics[width=\linewidth]{fig4.pdf} \caption{Fits of ($T=0$) LQCD results~\cite{Bogolubsky:2009dc} for the gluon propagator and the ghost wavefunction renormalization with the generalized Gribov-Stingl forms in Eqs.~\eqref{eq:gh1} and~\eqref{eq:glue1}.} \label{fig4} \end{figure} We next expand the model to include effects of wavefunction renormalizations of gluons and ghosts~\cite{Fukushima:2012qa}. Assuming the background field continues to enter as in Eq.~\eqref{eq:ktilde}, e.g., when the ghost propagator is modified by \begin{align} \frac{1}{\tilde{k}^2} \rightarrow \frac{Z_{\rm gh}(\tilde{k}^2)}{\tilde{k}^2}, \end{align} the corresponding change in the potential reads \begin{align} {\rm Tr} \, \ln \, \tilde{k}^2 \rightarrow {\rm Tr} \, \left( \ln \, \tilde{k}^2 - \ln \, Z_{\rm gh} (\tilde{k}^2) \right). \end{align} A further simplification is possible if we approximate $Z_{\rm gh}$ in the Gribov-Stingl form~\cite{Fukushima:2012qa}: \begin{align} \label{eq:gh1} Z_{\rm gh} \propto (\frac{\tilde{k}^2 + R_1^2}{\tilde{k}^2 + R_2^2}) \end{align} with some mass scales $R_{1,2}$. Note that $Z_{\rm gh}(\tilde{k}^2 \rightarrow \infty) \rightarrow 1$.
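All the variants above reduce to evaluating $U_1$ at different mass arguments. As a minimal numerical sketch, the snippet below evaluates the Landau-DeWitt combination $U_{\rm tot} = 3 U_1(m_A) - U_1(0)$ for $SU(3)$ along the real line and locates its minimum; the two temperatures and the scan grid are our own choices, intended only to bracket the transition of this reduced potential.
\begin{verbatim}
# Minimize U_tot = 3 U_1(m_A) - U_1(0) over gamma_1 (SU(3), real line).
import numpy as np
from scipy.integrate import quad

mA = 0.756                                          # GeV

def U1(m, gamma1, T):
    """T int d^3k/(2pi)^3 Tr_A ln(1 - l_A e^{-E/T}) along the real line."""
    Q = np.array([0, 0, gamma1, 2 * gamma1, gamma1,
                  -gamma1, -2 * gamma1, -gamma1])
    phases = np.exp(1j * Q)
    def integrand(k):
        boltz = np.exp(-np.sqrt(k ** 2 + m ** 2) / T)
        return k ** 2 / (2 * np.pi ** 2) * \
            np.sum(np.log(1 - phases * boltz)).real
    return T * quad(integrand, 0.0, 25.0 * T)[0]

for T in (0.15, 0.60):           # chosen well below / well above the transition
    grid = np.linspace(0.0, 2 * np.pi / 3, 200)
    Utot = [3 * U1(mA, g, T) - U1(0.0, g, T) for g in grid]
    g_min = grid[int(np.argmin(Utot))]
    # expect ell ~ 0 (confining point) at low T, ell > 0 at high T
    print(f"T = {T} GeV: ell_min = {(1 + 2 * np.cos(g_min)) / 3:+.3f}")
\end{verbatim}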
The corresponding modification in the effective Polyakov loop potential is given by \begin{align} \label{eq:gh2} U_1(0) \rightarrow U_1(0) - ( U_1(R_1) - U_1(R_2) ) \end{align} for each ghost field. For demonstration, we fit the lattice determination of the wavefunction renormalization of the ghost propagator~\cite{Bogolubsky:2009dc} (in the Landau gauge) with the parametrization~\eqref{eq:gh1}. A reasonable fit is obtained with the parameters $R_1 = 1.335$ GeV and $R_2=0.732$ GeV. A similar scheme can be applied to the gluons, with a slightly modified form: \begin{align} \label{eq:glue1} \begin{split} Z_{\rm glue} &= \left(\frac{\tilde{k}^2 + R_1^2}{\tilde{k}^2 + R_2^2}\right)^{g_1} \, \left(\frac{\tilde{k}^2 + R_3^2}{\tilde{k}^2 + R_4^2}\right)^{g_2} \\ D_{\rm glue} &= \frac{Z_{\rm glue}}{\tilde{k}^2 + m_A^2}. \end{split} \end{align} The parameters are $(g_1, R_1, R_2) = (4, 2.615, 1.660)$, $(g_2, R_3, R_4) = (1, 2.616, 6.794)$, and $m_A = 0.756$, all in appropriate units of GeV. The results are shown in Fig.~\ref{fig4}. The change in the effective potential can be intuitively understood as follows: the enhancement of $Z_{\rm gh}$ at low momenta dictates $R_1 > R_2$; the stronger Boltzmann suppression then gives $\vert U_1(R_1) \vert < \vert U_1(R_2) \vert$, which finally leads to a strengthening of the ghost potential (while preserving its sign), see Eq.~\eqref{eq:gh2}.~\footnote{It is also possible that the function drops rapidly to zero in the deep infrared, in which case the form~\eqref{eq:gh2} needs to be modified~\cite{Alkofer:2000wg,Fischer:2008uz,Iritani:2009mp}. The scenario in other gauges, and the efficacy of the commonly used approximation schemes, such as the static approximation or expansions around simple poles, should be investigated in the future.} A stronger potential is also found when implementing the lattice result for the gluon propagator with a wavefunction renormalization. On the other hand, the value of $T_d$ depends on the competition between the two, and an actual calculation is required to deduce the trend. We thus obtain a unified framework to discuss the modeling of an effective potential in different approximation schemes: \begin{align} U_{\rm tot} = 2 U_1 (m_A) + \Delta U_\xi \end{align} with \begin{align} \begin{split} \Delta U_\xi &= (1 + \Delta n_\xi) \, ( U_1(m_A) - U_1(0) ) \\ &+ \sum_j \, g_j \, ( U_1(R_1^{(j)}) - U_1(R_2^{(j)}) ). \end{split} \end{align} The key observation is that the same group structure appears in the various contributions to the potential, and the details of the gluon and ghost propagators are subsumed into the model parameters. The effective framework thus provides a transparent way to link the Polyakov loop observables with those of the gauge-fixed correlators~\cite{Fischer:2008uz,Maas:2011se}. In the following, we investigate how different model assumptions about the gauge-fixed correlators affect the fluctuations of the Polyakov loop. \subsection{Susceptibilities and masses of Cartan angles} We choose to focus on the physical case of $N_c=3$. This case is unique in the sense that the two Cartan angles $\gamma_{1,2}$ can be directly identified with the two degrees of freedom of the Polyakov loop, i.e. $X$ and $Y$.
The $(2 \times 2)$ Jacobian matrix allows the translation between $(\gamma_1, \gamma_2) \leftrightarrow (X, Y)$: \begin{align} \begin{split} J &= \frac{\partial \, \{X, Y\}}{\partial \, \{\gamma_1, \gamma_2 \}} \\ J_{11} &= \frac{1}{3} \, \left( -\sin \frac{2 \gamma_1 + \gamma_2}{2} - \sin \frac{2 \gamma_1 - \gamma_2}{2} \right) \\ J_{12} &= \frac{1}{3} \, \left( - \frac{1}{2} \, \sin \frac{2\gamma_1 + \gamma_2}{2} + \frac{1}{2} \, \sin \frac{2\gamma_1 - \gamma_2}{2} - \sin \gamma_2 \right) \\ J_{21} &= \frac{1}{3} \, \left( \cos \frac{2\gamma_1 + \gamma_2}{2} - \cos \frac{2\gamma_1 - \gamma_2}{2} \right) \\ J_{22} &= \frac{1}{3} \, \left( - \frac{1}{2} \, \cos \frac{2\gamma_1 + \gamma_2}{2} + \frac{1}{2} \, \cos \frac{2\gamma_1 - \gamma_2}{2} - \cos \gamma_2 \right). \end{split} \end{align} We stress that studying the order parameter and its fluctuations along two independent directions is mandated by the existence of two independent Cartan generators, both relevant to describing the gauge group $SU(3)$. Many existing works have neglected the imaginary direction in the potential, and hence the appropriate field derivatives cannot be performed. We define the (dimensionless) curvature mass tensor for the Cartan angles as~\cite{Weiss:1981ev} \begin{align} \label{eq:carmass} \bar{m}^2_{ij} = \frac{\partial^2 \, U(\gamma_1, \gamma_2)}{\partial \gamma_i \, \partial \gamma_j} \, \frac{1}{T^4}. \end{align} The tensor elements are to be evaluated at the values of $\gamma_1$ which minimize the potential, and at $\gamma_2 \rightarrow 0$. For the class of potentials considered, the off-diagonal terms vanish. The relation to the curvature masses associated with the $(X, Y)$ fields~\cite{rob_dof} is thus \begin{align} \begin{split} \bar{m}^2_{11} &= J_{11}^2 \, \bar{m}^2_{XX} + J_{21}^2 \, \bar{m}^2_{YY} \\ \bar{m}^2_{22} &= J_{12}^2 \, \bar{m}^2_{XX} + J_{22}^2 \, \bar{m}^2_{YY}, \end{split} \end{align} where \begin{align} \begin{split} \bar{m}^2_{XX} &= \frac{\partial^2 \, U}{\partial X \, \partial X} \, \frac{1}{T^4} \\ \bar{m}^2_{YY} &= \frac{\partial^2 \, U}{\partial Y \, \partial Y} \, \frac{1}{T^4}. \end{split} \end{align} A further simplification comes from the fact that the Jacobian matrix, evaluated along the real line (arbitrary $\gamma_1$, $\gamma_2=0$), is also diagonal: \begin{align} \begin{split} J_{11} &= -\frac{2}{3} \, \sin \gamma_1 = -\frac{1}{\sqrt{3}} \, \sqrt{(1- \ell )( 1+3 \ell )} \\ J_{22} &= -\frac{2}{3} \, ( \sin \frac{\gamma_1}{2} )^2 = -\frac{1}{2} \, (1- \ell) \\ J_{12} &= 0 \\ J_{21} &= 0, \end{split} \end{align} where $\ell = \frac{1}{3} \, ( 1 + 2 \, \cos \gamma_1 )$. Note that in the confined phase, $\gamma_1 \rightarrow {2 \pi}/{3}$ and \begin{align} \begin{split} J_{11} &\rightarrow -1/\sqrt{3} \\ J_{22} &\rightarrow -1/2, \end{split} \end{align} while in the deconfined phase, $\gamma_1 \rightarrow 0$, they vanish as \begin{align} \begin{split} J_{11} &\rightarrow -\frac{2}{\sqrt{3}} \, \sqrt{(1- \ell )} \\ J_{22} &\rightarrow -\frac{1}{2} \, (1- \ell), \end{split} \end{align} with $\ell \rightarrow 1$. Note that $J_{22} \ll J_{11}$ in this limit.
We thus obtain the following relation between the curvature masses of the Cartan angles and the Polyakov loop susceptibilities: \begin{align} \label{eq:msq2sus} \begin{split} \bar{m}^2_{11} &= \frac{4}{9} \, (\sin \gamma_1)^2 \, \bar{m}^2_{XX} \\ \bar{m}^2_{22} &= \frac{4}{9} \, (\sin \frac{\gamma_1}{2})^4 \, \bar{m}^2_{YY}, \end{split} \end{align} with the Polyakov loop susceptibilities identified as \begin{align} \label{eq:sus2msq} \begin{split} (T^3 \, \chi_L) &= \frac{1}{\bar{m}^2_{XX}} = \frac{ (1- \ell )( 1+3 \ell ) }{3 \, \bar{m}^2_{11}} \\ (T^3 \, \chi_T) &= \frac{1}{\bar{m}^2_{YY}} = \frac{(1- \ell)^2}{4 \, \bar{m}^2_{22}} . \end{split} \end{align} This is a useful relation linking the Polyakov loop observables to those based on the Cartan angles. The latter can eventually be linked to $A_0$ and the transverse gluons. Note that such a clean separation of the contributions to $T^3 \, \chi_{L, T}$ from the $\bar{m}^2_{ii}$ holds only for $N_c=3$. In general, each susceptibility receives contributions from all Cartan curvature masses $\bar{m}^2_{ii}$. We now derive an analytic expression for these Cartan curvature masses at ultra-high temperatures. The effective potential is expected to approach \begin{align} U(\gamma_1, \gamma_2) \approx 2 \, U_1(m_A=0). \end{align} Using the exact result of the integral \begin{align} \begin{split} U_a(Q_a) &= T \, \int \frac{d^3 k}{(2 \pi)^3} \, \ln \, ( 1 - e^{i \, Q_a} \, e^{-\beta k} ) \\ &= -\frac{T^4}{\pi^2} \, {\rm PolyLog}(4, e^{i \, Q_a}), \end{split} \end{align} and the polynomial expansion of the PolyLog function (valid in the restricted range $Q_a \in [0, \pi]$) \begin{align} \frac{U_a(Q_a) + U_a(-Q_a)}{T^4} &= -\frac{\pi^2}{45} + \frac{Q_a^2}{6} - \frac{Q_a^3}{6 \pi} + \frac{Q_a^4}{24 \pi^2}, \end{align} we obtain the potential (see Table~\ref{tab:su3q} and Eq.~\eqref{eq:su3gamma}) \begin{align} \begin{split} \frac{U(\gamma_1, \gamma_2)}{T^4} &\approx -\frac{16 \pi^2}{90} + 2 \, \gamma_1^2 + \frac{3}{2} \, \gamma_2^2 \\ & + \frac{3 \, (4 \gamma_1^2 + 3 \gamma_2^2)^2}{32 \pi^2} - \frac{20 \gamma_1^3 + 27 \gamma_1 \gamma_2^2}{6 \pi}. \end{split} \end{align} The curvature masses~\eqref{eq:carmass} can be readily deduced: \begin{align} \label{eq:htlim} \begin{split} \bar{m}^2_{11} &\rightarrow 4 \\ \bar{m}^2_{22} &\rightarrow 3. \end{split} \end{align} It follows from Eq.~\eqref{eq:sus2msq} that while both susceptibilities approach zero at high temperatures, with $\chi_T \ll \chi_L$, one can extract a finite limit for the curvature masses. Note that if we take \begin{align} \gamma_{1,2} \rightarrow r_{1,2} \, \beta g A_4, \end{align} these curvature masses are related to the dimensionful $m_{A_4}$ via \begin{align} \label{eq:matching} \begin{split} \bar{m}^2_{ii} &= \frac{m_{A_4}^2}{g^2 T^2 r_i^2} \\ {m}^2_{A_4} &= \frac{\partial^2 \, U}{\partial A_4 \, \partial A_4}, \end{split} \end{align} for $i = 1, 2$. The fact that $\bar{m}^2_{ii}$ has a finite limit forces \begin{align} m_{A_4} \propto g T, \end{align} as expected for a Debye screening mass. We stress that $m_{A_4}$ should be distinguished from the effective gluon mass $m_A$ introduced above. The latter captures the infrared enhancement originating from the nonperturbative vacuum and exists even at vanishing temperature. The behaviors of these curvature masses at low temperatures are less well known.
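Before turning to low temperatures, we note that the limits in Eq.~\eqref{eq:htlim} can be cross-checked directly from the polynomial potential above. A minimal symbolic sketch (assuming sympy; the variable names are ours) reads:
\begin{verbatim}
import sympy as sp

g1, g2 = sp.symbols('gamma1 gamma2', real=True)
# High-temperature potential quoted above, in units of T^4
U = (-16*sp.pi**2/90 + 2*g1**2 + sp.Rational(3, 2)*g2**2
     + 3*(4*g1**2 + 3*g2**2)**2/(32*sp.pi**2)
     - (20*g1**3 + 27*g1*g2**2)/(6*sp.pi))

# Curvature masses at the perturbative minimum (gamma_1, gamma_2) = (0, 0)
m11 = sp.diff(U, g1, 2).subs({g1: 0, g2: 0})  # -> 4
m22 = sp.diff(U, g2, 2).subs({g1: 0, g2: 0})  # -> 3
print(m11, m22)
\end{verbatim}
Evaluated at the perturbative minimum, this reproduces $\bar{m}^2_{11} = 4$ and $\bar{m}^2_{22} = 3$.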
In the confined phase, $Z(3)$ symmetry requires~\cite{Lo:2013etb}, in addition to $ \langle \hat{\ell}_F \rangle = 0 $, \begin{align} \begin{split} \langle {\hat{\ell}_F}^2 \rangle &= 0 \\ \implies \langle (X^2 - Y^2) \rangle &= 0. \end{split} \end{align} This means the two susceptibilities are equal in this phase. It follows from Eq.~\eqref{eq:sus2msq} that \begin{align} \label{eq:cons} \frac{\bar{m}^2_{22}}{\bar{m}^2_{11}} = \frac{3}{4} \end{align} in the confined phase. Note that the same ratio is observed in the high-temperature limit~\eqref{eq:htlim}. Other than the constraint~\eqref{eq:cons} on the ratio, there is no restriction from symmetry concerning their magnitudes. In the language of an effective model, they reflect a competition between the confining (ghost) and the deconfining (glue) parts of the potential; unlike $\langle \hat{\ell}_F \rangle$, they are finite and temperature dependent even in the confined phase. \begin{figure}[!ht] \centering \includegraphics[width=\linewidth]{fig5.pdf} \caption{Curvature masses associated with the Cartan angles calculated for different models (see text).} \label{fig5} \end{figure} To examine how the curvature masses associated with the Cartan angles depend on the assumed properties of the gluons and ghosts, we compute the observables in the following arrangements of the effective potential: \begin{itemize} \item model A: an invariant measure term~\eqref{eq:pot1} with 2 transverse gluons: \begin{align} \label{eq:modelA} U = -\frac{1}{2} \, b \, \ln H + 2 U_1(m_A) \end{align} \item model B: a ghost field term and 3 transverse gluons: \begin{align} \label{eq:modelB} U = -U_1(m_A=0) + 3 U_1(m_A) \end{align} \item model C: model B implemented with the wavefunction renormalization effects discussed above. \end{itemize} With no further tuning of model parameters, we obtain $T_d \approx (0.274, 0.274, 0.27)$ GeV for models A, B, and C, respectively. The resulting Cartan masses are shown in Fig.~\ref{fig5}. We first report that Eq.~\eqref{eq:sus2msq} works: the same results for the susceptibilities are obtained when the curvature tensor (Eq.~\eqref{eq:xycurva}) is constructed directly by taking the appropriate $(X, Y)$-field derivatives of the potential derived in Ref.~\cite{sasaki_pot}. This gives some confidence in the general applicability of Eq.~\eqref{eq:dxy_op} to the general $N_c$ problem. The most obvious feature of the curvature masses is the dip around $T_d$. Note that a very similar behavior is found for the $A_0$-gluon screening mass extracted from LQCD when studying the inverse of the longitudinal propagator~\cite{lqcd_1}. See also the discussion in Ref.~\cite{Maas:2011ez}. In the effective model, this follows from the relation to the Polyakov loop susceptibilities. While the gluon (and ghost) parameters employed are smooth, the discontinuity is inherited from minimizing the mean-field potential. Note that a strong temperature dependence of these observables arises naturally, without the need to introduce temperature-dependent model parameters. In fact, in an improved scheme, the model parameters, including any additional temperature dependence, should be determined self-consistently. The high-temperature limits~\eqref{eq:htlim} are approached very gradually: at $T \approx 30 \, T_d$, and from above (below) for models A, C (B). Note that model B has a known issue in the deconfined phase: the Polyakov loop reaches unity too rapidly, and the model may not be reliable beyond this point.
The secondary dip in the curvature masses in model B (and also in model C) appears to come with this problem. This does not happen in model A, where the invariant measure term prevents this problematic behavior. It has been suggested that a 2-loop calculation may remove this artifact~\cite{2loop}. It would be interesting to see the corresponding modification in the curvature masses. There is no strict theoretical constraint on the low-temperature behaviors of these curvature masses. The constraint~\eqref{eq:cons} on their ratio is verified in all cases. What is clear from the effective model study is that they depend strongly on the choice of the confining potential. This is particularly obvious in the $T \rightarrow 0$ limit: in model A, they diverge as (see Eqs.~\eqref{eq:landau} and~\eqref{eq:msq2sus}) \begin{align} \begin{split} \bar{m}^2_{11} &\rightarrow \frac{2 b}{T^3} \\ \bar{m}^2_{22} &\rightarrow \frac{3 b}{2 T^3}. \end{split} \end{align} In model B we instead get the finite results \begin{align} \begin{split} \bar{m}^2_{11} &\rightarrow \frac{2}{3} \\ \bar{m}^2_{22} &\rightarrow \frac{1}{2}. \end{split} \end{align} The effect of wavefunction renormalization (model C), with the parameters chosen, is found to be small at low temperatures, but becomes substantial close to and above $T_d$. If we insist on imposing the matching condition~\eqref{eq:matching} and identify the $A_0$-gluon screening mass with $m_A$, we would obtain a $\propto \frac{1}{T^2}$ behavior for these curvature masses. It would thus be interesting to examine these observables with other gauge choices~\cite{Langfeld:2004qs,Dudal:2007cw}: to see whether or not the differences are due to gauge artifacts, and to gain insight into reliably describing the low-temperature behavior of the Polyakov loop potential. \begin{figure*}[!ht] \centering \includegraphics[width=0.49\linewidth]{fig6a.pdf} \includegraphics[width=0.49\linewidth]{fig6b.pdf} \caption{The critical temperatures $T_d$ of model B~\eqref{eq:modelB} (left), using the input masses $m_A(N_c)$ indicated in the right panel. The latter are adjusted to fit the LQCD results on $T_d(N_c)$~\cite{lucini_td} and are compared to a fit to the LQCD result for (half) the $0^{++}$ glueball mass~\cite{lucini_gb}. The gray dashed line shows the result based on the Landau parameter analysis~\eqref{eq:tdnc} ($\bar{u}_2$).} \label{fig6} \end{figure*} \subsection{The appearance of glueballs} Finally, we speculate on how glueballs may enter the effective description. In the current model a phenomenological gluon mass $m_A$ is introduced, which serves to suppress the (deconfining) gluon contribution to the potential at low temperatures. This also sets the scale for the critical temperature $T_d$. In Refs.~\cite{eric_gb,eric_cgauge}, a robust theoretical framework for introducing quasi-gluonic excitations is proposed via a constituent Fock-space expansion. There, a BCS-like QCD vacuum is postulated, and with a Bogoliubov-Valatin transformation the (massive) quasi-gluons are derived from the effective one-body Hamiltonian. This mirrors the one-loop gluon potential considered here. In addition, glueball spectra can be derived with the same Hamiltonian using the two-gluon states built from these quasi-gluons. A key observation is that the lowest-lying states receive most of their masses from the quasi-gluons, i.e. \begin{align} \label{eq:gbmass} m_{\rm GB} \approx 2 \, m_A, \end{align} e.g.
$m_{\rm GB} \approx 1.7 \, (2.1)$ GeV for the lowest $0^{++}$ ($0^{+-}$) state, with $m_A = 0.8$ GeV in Ref.~\cite{eric_gb}. This naturally suggests a constituent model for the glueball states. Neglecting their interactions with the Polyakov loops, we may consider a free gas of glueballs as an approximation for the $2 \rightarrow 2$ contribution to the partition function.~\footnote{This is similar to the case where a $\sigma$-meson is generated in an NJL model and its partial thermal pressure is approximated by a free Bose gas of mass $m_\sigma \approx 2 M_Q$.} See Ref.~\cite{Lacroix2013} for an elaborate treatment of thermal glueballs. A non-trivial relation suggested by the effective model is a link between $T_d(N_c)$ and $m_A(N_c)$. Model B is ideal for this illustration, as $m_A$ is the only adjustable parameter of the model. Tuning $m_A$ to match the model $T_d$ with the LQCD results~\cite{lucini_td} for various $N_c$'s, we extract the expected $N_c$ dependence of $m_A$; see Fig.~\ref{fig6}. The $m_A(N_c)$'s show a trend similar to that exhibited by a fit to the LQCD $0^{++}$ glueball masses~\cite{lucini_gb}. The fit employs the functional form \begin{align} m_{\rm GB}^{\rm LQCD}(N_c)/\sqrt{\sigma} \approx m_\infty + c/N_c^2 \end{align} with (dimensionless) parameters $m_\infty = 3.307$ and $c = 2.187$, based on the LQCD calculation in Ref.~\cite{lucini_gb} and fixing $c$ with the $N_c=3$ result. We also take $\sigma = 0.18 \, {\rm GeV}^2$. The general trend can be easily understood by studying the second Landau parameter~\eqref{eq:landau}. For model B, it reads \begin{align} \label{eq:tdnc} \bar{u}_2 \approx \frac{N_c^2}{\pi^2} \, \left( 1 - \frac{3}{2} \, \left(\frac{m_A}{T}\right)^2 \, K_2(m_A/T) \right). \end{align} Solving for $m_A(N_c)$ from \begin{align} \label{eq:tdnc2} \bar{u}_2(m_A(N_c), T=T_d(N_c)) = 0, \end{align} we obtain the gray dashed line in Fig.~\ref{fig6} (right). Eq.~\eqref{eq:tdnc} suggests that the leading $N_c$ dependence comes not from the prefactor but from the $N_c$ dependence of $T_d$.~\footnote{Relation~\eqref{eq:tdnc} assumes the Boltzmann approximation. This may be justified for the massive gluons, but may not be the case for the ghost. The corrections, however, are $N_c$ dependent. For $N_c=3$, this amounts to replacing $1 \rightarrow \frac{\pi^2}{9} \approx 1.097$ in the bracket. The corresponding result for $N_c=2$ is $1 \rightarrow \frac{\pi^2}{12} \approx 0.822$.} While it is not surprising that the observables are related, the effective model offers a simple approximate connection such as Eq.~\eqref{eq:tdnc}. \section{Conclusion} We have examined the fluctuations of the order parameter, measured by the Polyakov loop susceptibilities, in the $SU(N_c)$ pure gauge theory, using an effective potential built from the one-loop expressions of the gluon and ghost field determinants. The connection of these observables with the Cartan angles and their curvature masses is derived. The latter can serve as a proxy for the $A_0$-gluon screening mass, and they are strongly influenced by the $Z(N_c)$ structure of the vacuum. The Cartan curvature masses thus provide useful diagnostic information concerning the competition of gluons and ghosts in the QCD medium. While we expect gauge invariance for all observables based on the Polyakov loops, it is unlikely that the model potential in its current state achieves this goal. For example, we see that the predictions for these curvature masses depend strongly on the assumptions about the gluon and ghost propagators, and on the choice of gauge.
Another essential limitation of the present model is that the propagators and wavefunction renormalizations we fitted are not LQCD computations in the background-field gauge. A more constructive way to proceed is to explore the potential, and more generally the problem of how confinement manifests itself, in various gauges, and to check whether there could be non-trivial relations among the model parameters such that the gauge dependence would be removed or reduced when computing physical observables. What is clear from the model study is that the longitudinal gluon propagator and the Cartan curvature masses are connected, and should be determined self-consistently in model calculations. This would make for a more meaningful comparison with the finite-temperature LQCD data~\cite{Aouane:2011fv,lqcd_1,Bornyakov:2010nc,Bicudo:2017uyy}. For the transverse gluons, we find no evidence for a substantial change in their masses across the phase transition, nor for the need for their value to approach infinity in the confined phase. In fact, they serve as constituents of the glueballs. A possible future application of the relations between the Polyakov loop observables and those from the gluon propagators could be in formulating a nonperturbative renormalization scheme for the former as composite operators. This is a necessary first step to properly compare effective model results with the LQCD data on the Polyakov loops and their susceptibilities. This will be explored in future work. \section{Acknowledgments} We acknowledge the support by the Polish National Science Center (NCN) under Opus grant no. 2018/31/B/ST2/01663. K.R. also acknowledges partial support of the Polish Ministry of Science and Higher Education.
\section*{Acknowledgements} We would like to thank Ben Sherman, Cambridge Yang, Eric Atkinson, Jesse Michel, Jonathan Frankle, Saman Amarasinghe, Ajay Brahmakshatriya, and the anonymous reviewers for their helpful comments and suggestions. This work was supported in part by the National Science Foundation (NSF CCF-1918839, CCF-1533753), the Defense Advanced Research Projects Agency (DARPA Awards \#HR001118C0059 and \#FA8750-17-2-0126), and a Google Faculty Research Award, with cloud computing resources provided by the MIT Quest for Intelligence and the MIT-IBM Watson AI Lab. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of~the~funding~agencies. \section{Analysis} \label{sec:analysis} In this section, we analyze the parameters learned by DiffTune\xspace{} on llvm-mca\xspace{}, answering the following research questions: \begin{itemize} \item How similar are the learned parameters to the default parameters in llvm-mca\xspace{}? (\Cref{sec:results-default-comparison}) \item How optimal are the learned parameters? (\Cref{sec:results-optimality}) \item How semantically meaningful are the learned parameters? (\Cref{sec:results-semantics}) \end{itemize} \subsection{Comparison of Learned Parameters to Defaults} \label{sec:results-default-comparison} This section compares the default parameters to the learned parameters (from a single run of DiffTune\xspace{}) in Haswell. \paragraph{Distributional similarities} To determine the distributional similarity of the learned parameters to the default parameters, \Cref{fig:distributions} shows histograms of the values of the default and learned per-instruction parameters (\texttt{NumMicroOps}\xspace, \texttt{WriteLatency}\xspace, \texttt{ReadAdvanceCycles}\xspace, \texttt{PortMap}\xspace). The primary distinctions between the distributions are in \texttt{WriteLatency}\xspace and \texttt{ReadAdvanceCycles}\xspace; the learned parameters otherwise follow similar distributions to the defaults. The distributions of default and learned \texttt{WriteLatency}\xspace values in \Cref{fig:latency-distr} primarily differ in that only 1 out of the 837 opcodes in the default Haswell parameters has \texttt{WriteLatency}\xspace 0 (\texttt{VZEROUPPER}), whereas 251 out of the 837 opcodes in the learned parameters have \texttt{WriteLatency}\xspace 0. As discussed in \Cref{tab:mca-parameters}, a \texttt{WriteLatency}\xspace value of 0 means that dependent instructions do not have to wait before being issued, and can be issued in the same cycle; instructions may still be bottlenecked elsewhere in the simulation pipeline (e.g., in the execute stage). The distributions of default and learned \texttt{ReadAdvanceCycles}\xspace are presented in \Cref{fig:readadvance-distr}. The default \texttt{ReadAdvanceCycles}\xspace are mostly 0, with a small population having values 5 and 7; in contrast, the learned \texttt{ReadAdvanceCycles}\xspace are fairly evenly distributed, with a plurality being 0. Given that a significant fraction of learned \texttt{WriteLatency}\xspace values are 0, it is likely that many \texttt{ReadAdvanceCycles}\xspace values have little to no effect.\footnote{As noted in \Cref{sec:background}, llvm-mca\xspace{} subtracts \texttt{ReadAdvanceCycles}\xspace from \texttt{WriteLatency}\xspace to compute a dependency chain's latency. 
The result of this subtraction is clipped to be no less than zero.} \begin{table} \caption{Default and learned global parameters.} \label{tab:global} \begin{tabular}{p{12em}cc} \toprule \textbf{Architecture} \hfill \textbf{Parameters} & \textbf{\texttt{DispatchWidth}\xspace} & \textbf{\texttt{ReorderBufferSize}\xspace} \\ % % % \midrule \textbf{Haswell} \hfill Default & 4 & 192 \\ \hfill Learned & 4 & 144 \\ % % % % % % \bottomrule \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/dispatch-width-sweep-err.pdf} \\ \includegraphics[width=\columnwidth]{figures/reorder-buffer-size-sweep-err.pdf} \caption{llvm-mca\xspace{}'s sensitivity to values of \texttt{DispatchWidth}\xspace (Top) and \texttt{ReorderBufferSize}\xspace (Bottom) within the default (Blue) and learned (Orange) parameters.} \label{fig:global-sensitivity} \end{figure} \paragraph{Global parameters} \Cref{tab:global} shows the default and learned global parameters (\texttt{DispatchWidth}\xspace and \texttt{ReorderBufferSize}\xspace). The learned \texttt{DispatchWidth}\xspace parameter is close to the default \texttt{DispatchWidth}\xspace parameter, while the learned \texttt{ReorderBufferSize}\xspace parameter differs significantly from the default. By analyzing llvm-mca\xspace{}'s sensitivity to values of \texttt{DispatchWidth}\xspace and \texttt{ReorderBufferSize}\xspace within the default and learned parameters in \Cref{fig:global-sensitivity}, we find that although the learned global parameters do not match the default values exactly, they approximately minimize llvm-mca\xspace{}'s error because there is a wide range of values that result in approximately the same error. While llvm-mca\xspace{} is sensitive to small perturbations in the value of the \texttt{DispatchWidth}\xspace parameter (with the default parameters, a \texttt{DispatchWidth}\xspace of $3$ has error $33.5\%$, $4$ has error $25.0\%$, and $5$ has error $26.8\%$), it is relatively insensitive to perturbations of the \texttt{ReorderBufferSize}\xspace (with the default parameters, all \texttt{ReorderBufferSize}\xspace values above $70$ have error $25.0\%$). This is primarily because one of llvm-mca\xspace{}'s core modeling assumptions, that memory accesses always resolve in the L1 cache, means that most instructions spend few cycles in the issue, execute, and retire phases; the \texttt{ReorderBufferSize}\xspace is therefore rarely a bottleneck in llvm-mca\xspace{}'s modeling of the~BHive~dataset. \subsection{Optimality} \label{sec:results-optimality} This \namecref{sec:results-optimality} shows that while the parameters learned by DiffTune\xspace{} match the error of the default parameters, the learned values are not optimal: by using DiffTune\xspace{} to optimize just a subset of llvm-mca\xspace{}'s parameters, and keeping the rest as their expert-tuned default values, we are able to find parameters with lower error than when learning the entire set of parameters. \paragraph{Experiment} We learn only each instruction's \texttt{WriteLatency}\xspace in llvm-mca\xspace{}. We keep all other parameters as their default values. The dataset and objective used in this task are otherwise the same as presented in \Cref{sec:methodology}. \paragraph{Methodology} Training hyperparameters are similar to those presented in \Cref{sec:methodology}, and are reiterated here with modifications made to learn just \texttt{WriteLatency}\xspace parameters. 
We train both the surrogate\xspace and the parameter table using Adam~\citep{kingma_adam_2014} with a batch size of 256. We use a learning rate of $0.001$ to train the surrogate\xspace and a learning rate of $0.05$ to train the parameter table. To train the surrogate\xspace, we generate a simulated dataset of $2{,}301{,}110$ blocks. For each basic block in the simulated dataset, we sample a random parameter table, with each \texttt{WriteLatency}\xspace a uniformly random integer between $0$ and $10$ (inclusive). We loop over this simulated dataset $3$ times when training the surrogate\xspace. To train the parameter table, we initialize it to a random sample from the parameter training distribution, then train it for $1$ epoch against the original training set. \paragraph{Results} On Haswell, this application of DiffTune\xspace{} results in an error of $16.2\%$ and a Kendall Tau correlation coefficient of $0.823$, compared to an error of $23.7\%$ and a correlation of $0.745$ when learning the full set of parameters with DiffTune\xspace{}. These results demonstrate that DiffTune\xspace{} does not find a globally optimal parameter set when learning llvm-mca\xspace{}'s full set of parameters. This suboptimality is due in part to the non-convex nature of the problem and the size of the~parameter~space. \subsection{Case Studies} \label{sec:results-semantics} This section presents case studies of basic blocks simulated with the default and with the learned parameters, showing where the learned parameters better reflect the ground truth data, and where the learned parameters reflect degenerate cases of the optimization algorithm. To simplify exposition, the results in this section use just the learned \texttt{WriteLatency}\xspace values from the experiment in \Cref{sec:results-optimality}. \paragraph{\texttt{PUSH64r}} The default \texttt{WriteLatency}\xspace with the Haswell parameters for the \texttt{PUSH64r} opcode (push a 64-bit register onto the stack, decrementing the stack pointer) is 2 cycles. In contrast, the \texttt{WriteLatency}\xspace learned by DiffTune\xspace{} is 0 cycles. This leads to significantly more accurate predictions for blocks that contain \texttt{PUSH64r} opcodes, such as the following (in which the default and learned latency for \instr{testl} are both 1 cycle): \begin{lstlisting} pushq testl \end{lstlisting} The true timing of this block as measured by \citet{chen_bhive_2019} is 1.01 cycles. On this block, llvm-mca\xspace{} with the default Haswell parameters predicts a timing of 2.03 cycles: the \texttt{PUSH64r} forms a dependency chain with itself, so with the default parameters each \texttt{PUSH64r} must wait 2 cycles (the \texttt{WriteLatency}\xspace) before the next can be dispatched. In contrast, llvm-mca\xspace{} with the learned Haswell values predicts that the timing is 1.03 cycles, because the learned \texttt{WriteLatency}\xspace is 0, meaning that there is no delay before the following \texttt{PUSH64r} can be issued, but the \texttt{PortMap}\xspace for \texttt{PUSH64r} still occupies \texttt{HWPort4} for a cycle before the instruction is retired; this 1-cycle dependency chain results in a more accurate prediction. In this case, DiffTune\xspace{} learns a \texttt{WriteLatency}\xspace that leads to better accuracy for the \texttt{PUSH64r}~opcode. \paragraph{\texttt{XOR32rr}} The default \texttt{WriteLatency}\xspace in Haswell for the \texttt{XOR32rr} opcode (xor two registers with each other) is 1 cycle. The \texttt{WriteLatency}\xspace learned by DiffTune\xspace{} is again 0 cycles.
This is not always correct -- however, a common use of \texttt{XOR32rr} is as a \emph{zero idiom}, an instruction that sets a register to zero. For example, an \lstinline{xorq} of a register with itself sets that register to zero. Intel processors have a fast path for zero idioms -- rather than actually computing the \instr{xor}, they simply set the value to zero. Most of the instances of \texttt{XOR32rr} in our dataset ($4047$ out of $4218$) are zero idioms. This leads to more accurate predictions in the general case, as can be seen in the following example: \begin{lstlisting} xorl \end{lstlisting} The true timing of this block is 0.31 cycles. With the default \texttt{WriteLatency}\xspace value of 1, the Intel x86 simulation model of llvm-mca\xspace{} does not recognize this as a zero idiom and predicts that this block has a timing of 1.03 cycles. With the learned \texttt{WriteLatency}\xspace value of 0, and because there are no bottlenecks specified by the \texttt{PortMap}\xspace, llvm-mca\xspace{} executes the \instr{xor}s as quickly as possible, bottlenecked only by the \texttt{NumMicroOps}\xspace of $1$ and the \texttt{DispatchWidth}\xspace of 4. With this change, llvm-mca\xspace{} predicts that this block has a timing of 0.27 cycles, again closer to the ground truth. \paragraph{\texttt{ADD32mr}} Unfortunately, it is not always possible to distinguish between semantically meaningful values that make the simulator more correct and degenerate values that improve the accuracy of the simulator without adding interpretability. For instance, consider \texttt{ADD32mr}, which adds a register to a value in memory and writes the result back to memory: \begin{lstlisting} addl \end{lstlisting} This block has a true timing of 5.97 cycles because it is essentially a chained load, add, then store---with the L1 cache latency being 4 cycles. However, llvm-mca\xspace{} does not recognize the dependency chain this instruction forms with itself, so even with the default Haswell \texttt{WriteLatency}\xspace of 7 cycles for \texttt{ADD32mr}, llvm-mca\xspace{} predicts that this block has an overall timing of 1.09 cycles. Our methodology recognizes the need to predict a higher timing, but it is fundamentally unable to change a parameter in llvm-mca\xspace{} that would enable llvm-mca\xspace{} to recognize the dependency chain (because no such parameter exists). Instead, our methodology learns a degenerately high \texttt{WriteLatency}\xspace of 62 for this instruction, allowing llvm-mca\xspace{} to predict an overall timing of 1.64 cycles, closer to the true value. This degenerate value increases the accuracy of llvm-mca\xspace{} without leading to semantically useful \texttt{WriteLatency}\xspace parameters. This case study shows that the interpretability of the learned parameters is only as good as the simulation fidelity; when the simulation is a poor approximation of the physical behavior of the CPU, the learned parameters do not correspond to semantically~meaningful~values. \section{Approach} \label{sec:approach} Tuning llvm-mca\xspace{}'s $\nparams$ parameters among \psize valid configurations\footref{foot:psize} by exhaustive search is impractical. Instead, we present DiffTune\xspace{}, an algorithm for learning ordinal parameters of arbitrary programs from labeled input and output examples. DiffTune\xspace{} leverages learned differentiable surrogates to make the optimization process more tractable.
\paragraph{Formal problem statement} Given a program $f : \Theta \to \mathcal{X} \to \mathcal{Y}$, parameterized by $\theta : \Theta$, that takes inputs $x : \mathcal{X}$ to outputs $y : \mathcal{Y}$, and given a dataset $\mathcal{D} : \mathcal{X} \times \mathcal{Y}$ of true input-output examples, find parameters $\theta \in \Theta$ that minimize a cost function (called the loss function, representing error) $\mathcal{L} : \left(\mathcal{Y} \times \mathcal{Y}\right) \to \mathbb{R}_{\geq0}$ of the program on the dataset: \begin{equation} \argmin_{\theta} \frac{1}{|\mathcal{D}|} \sum_{(x, y) \in \mathcal{D}} \mathcal{L}\left(f\left(\theta, x\right), y\right) \label{alg:problem} \end{equation} \vspace*{-0.5em} \paragraph{Algorithm} Figure~\ref{fig:algorithm-diagram} presents a diagram of our approach. We first collect a dataset of input-output examples from the program with varying values for $\theta$; that is, we sample $\theta$ from some distribution $D$, sample $x$ from the original dataset $\mathcal{D}$, then generate a new value $\hat{y} = f(\theta, x)$ by passing $\theta$ and $x$ into the original program and recording its output. We collect these examples $\left(\theta, x, \hat{y}\right)$ into a large simulated dataset: \[ \hat{\mathcal{D}} = \left\{\left(\theta, x, \hat{y}\right)\ |\ \theta \sim D, x \sim \mathcal{D}, \hat{y} = f(\theta, x)\right\} \] With this dataset, we optimize the surrogate\xspace $\hat{f} : \Theta \to \mathcal{X} \to \mathcal{Y}$ to mimic the original program, i.e., $\forall \theta,x. \hat{f}(\theta, x) \approx f(\theta, x)$. Specifically, we optimize the surrogate\xspace to minimize the average loss $\mathcal{L}$ over the simulated dataset $\hat{\mathcal{D}}$: \begin{equation} \argmin_{\hat{f}} \frac{1}{\left|\hat{\mathcal{D}}\right|} \sum_{(\theta, x, \hat{y}) \in \hat{\mathcal{D}}} \mathcal{L}\left(\hat{f}\left(\theta, x\right), \hat{y}\right) \label{alg:surrogate} \end{equation} We then optimize the parameters $\theta$ of the program against the true dataset $\mathcal{D}$. Specifically, we find: \begin{equation} \argmin_{\theta} \frac{1}{\left|\mathcal{D}\right|} \sum_{(x, y) \in \mathcal{D}} \mathcal{L}\left(\hat{f}\left(\theta, x\right), y\right) \label{alg:surrogate_problem} \end{equation} \vspace*{-0.5em} \paragraph{Discussion} Note the similarity between \Cref{alg:problem} and \Cref{alg:surrogate_problem}: the two equations differ only in the use of $f$ versus $\hat{f}$. The close correspondence between the forms makes clear that $\hat{f}$ stands in as a surrogate for the original program, $f$. This is a general algorithmic approach~\citep{queipo_surrogate_2005} that is desirable when it is possible to choose $\hat{f}$ such that it is easier or more efficient to optimize $\theta$ using $\hat{f}$ than using $f$. \paragraph{Optimization} In our approach, we choose $\hat{f}$ to be a neural network. Neural networks are typically built as compositions of differentiable architectural components, such as \emph{embedding lookup tables}, which map discrete input elements to real-valued vectors; \emph{LSTMs}~\citep{hochreiter_lstm_1997}, which map input sequences of vectors to a single output vector; and \emph{fully connected layers}, which are linear transformations on input vectors. Being composed of differentiable components, neural networks are end-to-end differentiable, and so can be trained using gradient-based optimization.
Specifically, neural networks are typically optimized with stochastic first-order optimization methods like stochastic gradient descent (SGD)~\citep{robbins_stochastic_1951}, which repeatedly calculate the network's error on a small sample of the training dataset and then update the network's parameters in the direction opposite to the gradient to minimize the error. By selecting a neural network as $\hat{f}$'s representation, we are able to leverage $\hat{f}$'s differentiable nature not only to train $\hat{f}$ (solving the optimization problem posed in \Cref{alg:surrogate}) but also to solve the optimization problem posed in \Cref{alg:surrogate_problem} with gradient-based optimization. This stands in contrast to $f$, which is generally non-differentiable and therefore does not permit the computation of gradients. \paragraph{Surrogate example} A visual example of this is presented in Figure~\ref{fig:diff-approx-plot}, which shows an example of the timing predicted by llvm-mca\xspace{} (blue) and a trained surrogate\xspace of llvm-mca\xspace{} (orange). The x-axis of Figure~\ref{fig:diff-approx-plot} is the value of the \texttt{DispatchWidth}\xspace parameter, and the y-axis is the predicted timing of llvm-mca\xspace{} with that \texttt{DispatchWidth}\xspace for a basic block consisting of a single \lstinline{shrq} instruction. The blue points represent the prediction of llvm-mca\xspace{} when instantiated with different values for \texttt{DispatchWidth}\xspace. The na\"{i}ve approach to optimizing llvm-mca\xspace{} would be combinatorial search: without a continuous and smooth surface to optimize over, standard first-order techniques are inapplicable. DiffTune\xspace{} instead first learns a surrogate\xspace{} of llvm-mca\xspace{}, represented by the orange line in Figure~\ref{fig:diff-approx-plot}. This surrogate\xspace{}, though not exactly the same as llvm-mca\xspace{}, is smooth, and therefore possible to optimize with first-order techniques like gradient descent. \begin{figure} \includegraphics[width=0.95\columnwidth]{figures/dispatch-width-sweep} \caption{Example of timing predicted by llvm-mca\xspace{} (blue) and a surrogate\xspace (orange), while varying \texttt{DispatchWidth}\xspace. By learning the surrogate\xspace, we are able to optimize the parameter value with gradient descent, rather than requiring combinatorial search.} \label{fig:diff-approx-plot} \end{figure} \section{Background: Simulators} \label{sec:background} Simulators comprise a large set of tools for modeling the execution behavior of computing systems, at all different levels of abstraction: from cycle-accurate simulators to high-level cost models. These simulators are used for a variety of applications: \begin{itemize} \item gem5~\citep{binkert_gem5_2011} is a detailed, extensible full system simulator that is frequently used for computer architecture research, to model the interaction of new or modified components with the rest of a CPU and memory system. \item IACA~\citep{iaca} is a static analysis tool released by Intel that models the behavior of modern Intel processors, including undocumented Intel CPU features, predicting code performance. IACA is used by performance engineers to diagnose and fix bottlenecks in hand-engineered code~snippets~\citep{laukemann-osaca}. \item LLVM~\citep{llvm} includes internal CPU simulators to predict the performance of generated code~\citep{pohl_vectorization_2020,mendis_goslp_2018}.
LLVM uses these CPU simulators to search through the code optimization space and generate more efficient code. \end{itemize} Though these simulators are all simplifications of the true execution behavior of physical systems, they are still highly complex pieces of software. \subsection{llvm-mca\xspace{}} For example, consider llvm-mca\xspace{}~\citep{llvm-mca}, an out-of-order superscalar CPU simulator included in the LLVM~\citep{llvm} compiler infrastructure. The main design goal of llvm-mca\xspace{} is to expose LLVM's instruction scheduling model for testing. llvm-mca\xspace{} takes \emph{basic blocks} as input: sequences of straight-line assembly instructions with no branches, jumps, or loops. For a given input basic block, llvm-mca\xspace{} predicts the timing of 100 repetitions of that block, measured in cycles. \paragraph{Design} llvm-mca\xspace{} is structured as a generic, target-independent simulator parameterized on LLVM's internal model of the target hardware. llvm-mca\xspace{} makes two core modeling assumptions. First, it assumes that the simulated program is not bottlenecked by the processor frontend; in fact, llvm-mca\xspace{} ignores instruction decoding entirely. Second, llvm-mca\xspace{} assumes that memory data is always in the L1 cache, and ignores the memory~hierarchy. llvm-mca\xspace{} simulates a processor in four main stages: \emph{dispatch}, \emph{issue}, \emph{execute}, and \emph{retire}. The dispatch stage reserves physical resources (e.g., slots in the reorder buffer) for each instruction, based on the number of micro-ops the instruction is composed of. Once dispatched, instructions wait in the issue stage until they are ready to be executed. The issue stage blocks an instruction until its input operands are ready and until all of its required execution ports are available. Once the instruction's operands and ports are available, the instruction enters the execute stage. The execute stage reserves the instruction's execution ports and holds them for the durations specified by the instruction's port map. Finally, once an instruction has executed for its duration, it enters the retire stage. In program order, the retire stage frees the physical resources that were acquired for each instruction. \input{approach-figures} \paragraph{Parameters} Each stage in llvm-mca\xspace{}'s model requires parameters. The \texttt{NumMicroOps}\xspace parameter for each instruction specifies how many micro-ops the instruction is composed of. The \texttt{DispatchWidth}\xspace parameter specifies how many micro-ops can enter and exit the dispatch stage in each cycle. The \texttt{ReorderBufferSize}\xspace parameter specifies how many micro-ops can reside in the issue and execute stages at the same time. The \texttt{PortMap}\xspace parameters for each instruction specify the number of cycles for which the instruction occupies each execution port. An additional \texttt{WriteLatency}\xspace parameter for each instruction specifies the number of cycles before destination operands of that instruction can be read from, while \texttt{ReadAdvanceCycles}\xspace parameters for each instruction specify the number of cycles by which to accelerate the \texttt{WriteLatency}\xspace of source operands (representing forwarding paths).
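To make these parameters concrete, the sketch below models a per-instruction parameter record together with the effective-latency rule noted earlier (\texttt{WriteLatency}\xspace minus \texttt{ReadAdvanceCycles}\xspace, clipped at zero). The Python container and names are our illustration, not LLVM's actual data structures:
\begin{verbatim}
from dataclasses import dataclass
from typing import Dict

# Illustrative container for llvm-mca-style per-instruction parameters.
# The field names mirror the paper's terminology; the container itself is
# our sketch, not LLVM's tablegen representation.
@dataclass
class InstrParams:
    num_micro_ops: int            # NumMicroOps: micro-ops the instruction decodes into
    write_latency: int            # WriteLatency: cycles until results are readable
    read_advance: Dict[int, int]  # ReadAdvanceCycles, per source-operand index
    port_map: Dict[str, int]      # PortMap: cycles each execution port is held

def dep_latency(producer: InstrParams, operand_index: int) -> int:
    """Latency seen by a dependent instruction reading the producer's result:
    WriteLatency minus ReadAdvanceCycles, clipped to be no less than zero."""
    advance = producer.read_advance.get(operand_index, 0)
    return max(0, producer.write_latency - advance)
\end{verbatim}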
In sum, the $837$ instructions in our dataset (\Cref{sec:methodology}) lead to $\nparams$ total parameters with \psize possible configurations in llvm-mca\xspace{}'s Haswell microarchitecture simulation.\footnote{\label{foot:psize}Based on llvm-mca\xspace{}'s default, expert-provided values for these parameters, the $\nparams$ parameters induce a parameter space of \psize valid configurations; the actual values are only bounded by integer representation sizes.} \subsection{Challenges} These parameter tables are currently manually written for each microarchitecture, based on processor vendor documentation and manual timing of instructions. Specifically, many of LLVM's \texttt{WriteLatency}\xspace and \texttt{PortMap}\xspace parameters are drawn from the Intel optimization manual~\citep{intel-documentation,mca-intel}, Agner Fog's instruction tables~\citep{agner,mca-agner}, and uops.info~\citep{abel_uops_2019,mca-uops}, all of which contain latencies and port mappings for assembly instructions across different architectures and microarchitectures. \paragraph{Measurability} However, these documented and measured values do not directly correspond to parameters in llvm-mca\xspace{}, because llvm-mca\xspace{}'s parameters, and abstract simulator parameters more broadly, are not defined such that they have a single measurable value. For instance, llvm-mca\xspace{} defines exactly one \texttt{WriteLatency}\xspace parameter per instruction. However, \citet{agner} and \citet{abel_uops_2019} find that for instructions that produce multiple results in different destinations, the results might be available at different cycles. Further, the latency for results to be available can depend on the actual value of the input operands. Thus, there is no single measurable value that corresponds to llvm-mca\xspace{}'s definition of \texttt{WriteLatency}\xspace. Different choices for how to map from measured latencies to \texttt{WriteLatency}\xspace values result in different overall errors (as defined in \Cref{sec:methodology}). For instance, when llvm-mca\xspace{} is instantiated with \citet{abel_uops_2019}'s maximum observed latency for each instruction, llvm-mca\xspace{} gets an error of $218\%$ when generating predictions for the Haswell microarchitecture; the median observed latency results in an error of $150\%$; and the minimum observed latency results in an error of $103\%$. \section{Conclusion} CPU simulators are complex software artifacts that require significant measurement and manual tuning to set their parameters. We present DiffTune\xspace{}, a generic algorithm for learning parameters within non-differentiable programs, using only end-to-end supervision. Our results demonstrate that DiffTune\xspace{} is able to learn the entire set of \nparams microarchitecture-specific parameters from scratch in llvm-mca\xspace{}. Looking beyond CPU simulation, DiffTune\xspace{}'s approach offers the promise of a generic, scalable methodology to learn the parameters of programs using only input-output examples, potentially reducing many programming tasks to simply that of gathering data. \section{llvm\_sim\xspace{}} \label[appendix]{sec:exegesis} To demonstrate that our implementation of DiffTune\xspace{} (\Cref{sec:implementation}) extends to simulators other than llvm-mca\xspace{}, we also evaluate it on llvm\_sim\xspace{}~\citep{exegesis}, learning all parameters that llvm\_sim\xspace{} reads from LLVM.
llvm\_sim\xspace{} is a simulator that uses many of the same parameters (from LLVM's backend) as llvm-mca\xspace{}, but it uses a different model of the CPU: it models the frontend, and it breaks instructions up into micro-ops and simulates the micro-ops individually, rather than simulating instructions as a whole as llvm-mca\xspace{} does. \paragraph{Behavior} llvm\_sim\xspace{}~\citep{exegesis} is also an out-of-order superscalar simulator exposing LLVM's instruction scheduling model. llvm\_sim\xspace{} is only implemented for the x86 Haswell microarchitecture. Similar to llvm-mca\xspace{}, llvm\_sim\xspace{} also predicts timings of basic blocks, assuming that all data is in the L1 cache. llvm\_sim\xspace{} primarily differs from llvm-mca\xspace{} in two aspects: it models the frontend, and it decodes instructions into micro-ops before dispatch and execution. llvm\_sim\xspace{} has the following pipeline: \begin{itemize} \item Instructions are fetched, parsed, and decoded into micro-ops (unlike llvm-mca\xspace{}, llvm\_sim\xspace{} does model the frontend) \item Registers are renamed, with an unlimited number of physical registers \item Micro-ops are dispatched out-of-order once dependencies are available \item Micro-ops are executed on execution ports \item Instructions are retired once all micro-ops in an instruction have~been~executed \end{itemize} \paragraph{Parameters} We learn the parameters specified in \Cref{tab:exegesis-parameters}. We again assume that there are 10 execution ports available for dispatch for all microarchitectures, and we do not learn dispatching to port groups. All other hyperparameters are identical to those described in \Cref{sec:methodology}. \paragraph{Results} Table~\ref{tab:results-exegesis-all} presents the average error and correlation of llvm\_sim\xspace{} with the default parameters, llvm\_sim\xspace{} with the learned parameters, Ithemal trained on the dataset as a lower bound, and the OpenTuner~\citep{ansel_opentuner_2014} baseline. By learning the parameters that llvm\_sim\xspace{} reads from LLVM, we reduce llvm\_sim\xspace{}'s error from $61.3\%$ to $44.1\%$. \section{Implementation} \label{sec:implementation} This section discusses our implementation of DiffTune\xspace{}, available online at \url{https://github.com/ithemal/DiffTune}. \paragraph{Parameters} We consider two types of parameters that we optimize with DiffTune\xspace{}: \emph{per-instruction parameters}, which are a uniform length vector of parameters associated with each individual instruction opcode (e.g.\ for llvm-mca\xspace{}, a vector containing \texttt{WriteLatency}\xspace, \texttt{NumMicroOps}\xspace, etc.); and \emph{global parameters}, which are a vector of parameters associated with the overall simulator behavior (e.g.\ for llvm-mca\xspace{}, a vector containing the \texttt{DispatchWidth}\xspace and \texttt{ReorderBufferSize}\xspace). We further support two types of constraints in our implementation: \emph{lower-bounded}, specifying that parameter values cannot be below a certain value (often $0$ or $1$), and \emph{integer-valued}, specifying that parameter values must be integers. During optimization, all parameters are represented as floating-point values. \paragraph{Surrogate\xspace design} \Cref{fig:ithemal-parametric} presents our surrogate\xspace design, which is capable of learning parameters for x86 basic block performance models such as llvm-mca\xspace{}.
We use a modified version of Ithemal~\citep{mendis_ithemal_2019}, a learned basic block performance model, as the surrogate\xspace. In the standard implementation of Ithemal (without our modifications), Ithemal first uses an embedding lookup table to map the opcode and operands of each instruction into vectors. Next, Ithemal processes the opcode and operand embeddings for each instruction with an LSTM, producing a vector representing each instruction. Then, Ithemal processes the sequence of instruction vectors with another LSTM, producing a vector representing the basic block. Finally, Ithemal uses a fully connected layer to turn the basic block vector into a single number representing Ithemal's prediction for the timing of that basic block. We modify Ithemal in two ways to act as the surrogate\xspace. First, we replace each individual LSTM with a set of 4 stacked LSTMs, a common technique to increase representational capacity~\citep{hermans_training_2013}, to give Ithemal the capacity to represent the dependency of the prediction on the input parameters as well as on the input basic block.\footnote{A stack of 4 LSTMs resulted in the best validation error for the surrogate.} Second, to provide the parameters as input, we concatenate the per-instruction parameters and the global parameters to each instruction vector before processing the instruction vectors with the instruction-level LSTM. \paragraph{Solving the optimization problems} Training the surrogate\xspace requires first defining sampling distributions for each parameter (e.g., a bounded uniform distribution on integers). We then generate a large simulated dataset by repeatedly sampling a basic block from the ground-truth dataset, sampling a parameter table from the defined sampling distributions, instantiating the simulator with the parameter table, and generating a prediction for the basic block. We train the surrogate\xspace using SGD against this simulated dataset. During surrogate\xspace training, for parameters constrained to be lower-bounded, we subtract the lower bound before passing them as input to the surrogate\xspace. To train the parameter table, we first initialize it to a random sample from the parameter sampling distribution. We generate predictions using the parameter table plugged into the trained surrogate\xspace, and train the parameter table using SGD against the ground-truth dataset. Importantly, when training the parameter table, the weights of the surrogate\xspace are not updated. During parameter table training, for parameters constrained to be lower-bounded, we take the absolute value of the parameters before passing them as input to the surrogate\xspace. \paragraph{Parameter extraction} Once we have trained the surrogate\xspace and the parameter table using the optimization process described in \Cref{sec:approach}, we extract the parameters from the parameter table and use them in the original simulator. For parameters with lower bounds, we take the absolute value of the parameter in the learned parameter table, then add the lower bound. For integer parameters, we round the learned parameter to the nearest integer. We do not use any special technique to handle unseen opcodes in the test set; we simply use the parameters for such an opcode from the randomly initialized parameter table.
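Concretely, the whole procedure can be condensed into a short sketch. The following is a minimal PyTorch illustration of the two optimization phases, with a small MLP standing in for the Ithemal-based surrogate\xspace{} and a synthetic black-box program standing in for the simulator; all names, sizes, and the toy program are our illustration, not the actual implementation:
\begin{verbatim}
import torch
import torch.nn as nn

def program(theta, x):
    """Stand-in for the non-differentiable simulator f(theta, x)."""
    return (x * theta.round().clamp(min=0)).sum(dim=-1, keepdim=True)

n_params, n_feats = 4, 4
surrogate = nn.Sequential(nn.Linear(n_params + n_feats, 64), nn.ReLU(),
                          nn.Linear(64, 1))

# Phase 1: train the surrogate on (theta, x, f(theta, x)) triples, with theta
# sampled from a bounded uniform distribution (here: integers in [0, 10]).
opt_s = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    theta = torch.randint(0, 11, (256, n_params)).float()
    x = torch.rand(256, n_feats)
    y_sim = program(theta, x)  # label produced by the simulator
    loss = (surrogate(torch.cat([theta, x], dim=-1)) - y_sim).pow(2).mean()
    opt_s.zero_grad(); loss.backward(); opt_s.step()

# Phase 2: freeze the surrogate and optimize the parameter table against
# ground-truth measurements by gradient descent through the surrogate.
theta_true = torch.tensor([[3.0, 1.0, 7.0, 2.0]])
x_data = torch.rand(1024, n_feats)
y_data = program(theta_true.expand(1024, -1), x_data)  # "measured" labels

theta = torch.randint(0, 11, (1, n_params)).float().requires_grad_()
opt_t = torch.optim.Adam([theta], lr=5e-2)
for _ in range(2000):
    inp = torch.cat([theta.abs().expand(1024, -1), x_data], dim=-1)
    loss = (surrogate(inp) - y_data).pow(2).mean()
    opt_t.zero_grad(); loss.backward(); opt_t.step()

# Extraction: absolute value enforces the lower bound, rounding recovers integers.
learned = theta.detach().abs().round()
\end{verbatim}
In the actual implementation the surrogate\xspace{} is the modified Ithemal model described above and the parameter table is indexed per opcode, but the two-phase structure is the same.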
\section{Introduction} \label{sec:introduction} Simulators are widely used for architecture research to model the interactions of architectural components of a system~\citep{binkert_gem5_2011,llvm-mca,iaca,ptlsim,marss,sanchez_zsim_2013}. For example, \emph{CPU simulators}, such as llvm-mca\xspace{}~\citep{llvm-mca} and llvm\_sim\xspace{}~\citep{exegesis}, model the execution of a processor at various levels of detail, potentially including abstract models of common processor design concepts such as dispatch, execute, and retire stages~\citep{patterson_hennessy}. CPU simulators can operate at different granularities, from analyzing just \emph{basic blocks}, straight-line sequences of assembly code instructions, to analyzing whole programs. Such simulators allow performance engineers to reason about the execution behavior and bottlenecks of programs run on a given processor. However, precisely simulating a modern CPU is challenging: not only are modern processors large and complex, but many of their implementation details are proprietary, undocumented, or only loosely specified, even given the thousands of pages of vendor-provided documentation that describe any given processor. As a result, CPU simulators are often composed of coarse abstract models of a subset of processor design concepts. Moreover, each constituent model typically relies on a number of approximate design parameters, such as the number of cycles it takes for an instruction to pass through the processor's execute stage. Choosing an appropriate level of model detail for simulation, as well as setting simulation parameters, requires significant expertise. In this paper, we consider the challenge of setting the parameters of a CPU simulator given a fixed level of model detail. \paragraph{Measurement} One methodology for setting the parameters of such a CPU simulator is to gather fine-grained measurements of each individual parameter's realization in the physical machine~\citep{agner,abel_uops_2019} and then set the parameters to their measured values~\citep{mca-agner,mca-uops}. When the semantics of the simulator and the semantics of the measurement methodology coincide, these measurements can serve as effective parameter values. However, if there is a mismatch between the simulator and the measurement methodology, then measurements may not provide effective parameter settings~\citep[Section~5.2]{ritter_pmevo_2020}. Moreover, some parameters may not correspond to measurable values. \paragraph{Optimizing simulator parameters} An alternative to developing detailed measurement methodologies for individual parameters is to infer the parameters from coarse-grained end-to-end measurements of the performance of the physical machine~\citep{ritter_pmevo_2020}. Specifically, given a dataset of benchmarks, each labeled with their true behavior on a given CPU (e.g., with their execution time or with microarchitectural events, such as cache misses), identify a set of parameters that minimize the error between the simulator's predictions and the machine's true behavior. This is generally a discrete, non-convex optimization problem for which classic strategies, such as random search~\citep{ansel_opentuner_2014}, are intractable because of the size of the parameter space (approximately \psize possible parameter settings in one simulator, llvm-mca\xspace{}). \paragraph{Our approach: DiffTune\xspace{}} In this paper, we present DiffTune\xspace{}, an optimization algorithm and implementation for learning the parameters of programs.
We use DiffTune\xspace{} to learn the parameters of x86 basic block CPU simulators. DiffTune\xspace's algorithm takes as input a program, a description of the program's parameters, and a dataset of input-output examples describing the program's desired output, then produces a setting of the program's parameters that minimizes the discrepancy between the program's actual and desired output. The learned parameters are then plugged back into the~original~program. The algorithm solves this optimization problem via a \emph{differentiable surrogate} for the program~\citep{queipo_surrogate_2005,shirobokov_differentiating_2020,louppe_adversarial_2017,grathwohl_backpropagation_2018,she_neuzz_2019}. A surrogate is an approximation of the function from the program's parameters to the program's output. By requiring the surrogate to be differentiable, it is then possible to compute the surrogate's gradient and apply gradient-based optimization~\citep{robbins_stochastic_1951,kingma_adam_2014} to identify a setting of the program's parameters that minimizes the error between the program's output (as predicted by the surrogate) and the desired output. To apply this to basic block CPU simulators, we instantiate DiffTune\xspace's surrogate with a neural network that can mimic the behavior of a simulator. This neural network takes the original simulator input (e.g., a sequence of assembly instructions) and a set of proposed simulator parameters (e.g., dispatch width or instruction latency) as input, and produces the output that the simulator would produce if it were instantiated with the given parameters. We derive the neural network architecture of our surrogate from that of Ithemal~\citep{mendis_ithemal_2019}, a basic block throughput estimation neural network. \paragraph{Results} Using DiffTune\xspace{}, we are able to learn the entire set of \nparams microarchitecture-specific parameters in the Intel x86 simulation model of llvm-mca\xspace{}~\citep{llvm-mca}. llvm-mca\xspace is a CPU simulator that predicts the execution time of basic blocks. llvm-mca\xspace{} models instruction dispatch, register renaming, out-of-order execution with a reorder buffer, instruction scheduling based on use-def latencies, execution by dispatching to ports, a load/store unit ensuring memory consistency, and a retire~control~unit.\footnote{We note that llvm-mca\xspace{} does not model the memory hierarchy.} We evaluate DiffTune\xspace{} on four different x86 microarchitectures, including both Intel and AMD chips. Using only end-to-end supervision of the execution time measured per-microarchitecture of a large dataset of basic blocks from \citet{chen_bhive_2019}, we are able to learn parameters from scratch that lead llvm-mca\xspace{} to have an average error of $24.6\%$, down from an average error of $30.0\%$ with llvm-mca\xspace's expert-provided parameters. In contrast, black-box global optimization with OpenTuner~\citep{ansel_opentuner_2014} is unable to identify parameters with less~than~$100\%$~error. \paragraph{Contributions} We present the following contributions: \begin{itemize} \item We present DiffTune\xspace, an algorithm for learning ordinal parameters of programs from input-output examples. \item We present an implementation of DiffTune\xspace for x86 basic block CPU simulators that uses a variant of the Ithemal model as a differentiable surrogate.
\item We evaluate DiffTune\xspace{} on llvm-mca\xspace{} and demonstrate that DiffTune\xspace{} can learn the entire set of microarchitectural parameters in llvm-mca\xspace{}'s Intel x86 simulation model. \item We present case studies of specific parameters learned by DiffTune\xspace{}. Our analysis demonstrates cases in which DiffTune\xspace learns semantically correct parameters that enable llvm-mca\xspace to make more accurate predictions. Our analysis also demonstrates cases in which DiffTune\xspace learns parameters that lead to higher accuracy but do not correspond to reasonable physical values on the CPU. \end{itemize} Our results show that DiffTune\xspace{} offers the promise of a generic, scalable methodology to learn detailed performance models with only end-to-end measurements, reducing performance optimization tasks to simply gathering data. \section{Future Work} \label{sec:limitations} DiffTune\xspace{} is an effective technique to learn simulator parameters, as we demonstrate with llvm-mca\xspace{} (\Cref{sec:results-main}) and llvm\_sim\xspace{} (\Cref{sec:exegesis}). However, there are several aspects of DiffTune\xspace{}'s approach that are designed around the fact that llvm-mca\xspace{} and llvm\_sim\xspace{} are basic block simulators that are primarily parameterized by ordinal parameters with few constraints between the values of individual parameters. We believe that DiffTune\xspace{}'s overall approach---differentiable surrogates---can be extended to whole program and full system simulators that have richer parameter spaces, such as gem5, by extending a subset of DiffTune\xspace{}'s individual~components. \paragraph{Whole program and full system simulation} DiffTune\xspace{} requires a differentiable surrogate that can learn the simulator's behavior to high accuracy. Ithemal~\citep{mendis_ithemal_2019}---the model we use for the {surrogate\xspace}---operates on basic blocks with the assumption that all data accesses resolve in the L1 cache, which is compatible with our evaluation of llvm-mca\xspace{} and llvm\_sim\xspace{} (which make the same assumptions). While Ithemal could potentially model whole programs (e.g., branching) and full systems (e.g., cache behavior) with limited modifications, it may require significant extensions to learn this more~complex~behavior~\citep{vila_cachequery_2020, hashemi_learning_2018}. In addition to the design of the surrogate\xspace, training the surrogate\xspace would require a new dataset that includes whole programs, along with any other behavior modeled by the simulator being optimized (e.g., memory). Acquiring such a dataset would require extending timing methodologies like BHive~\citep{chen_bhive_2019} to the full scope of target behavior. \paragraph{Non-ordinal parameters} DiffTune\xspace{} only supports ordinal parameters and does not support categorical or boolean parameters. DiffTune\xspace{} requires a relaxation of discrete parameters to continuous values to perform optimization, along with a method to extract the learned relaxation back into the discrete parameter type (e.g., DiffTune\xspace{} relaxes integers to real numbers, and extracts the learned parameters by rounding back to integers). Supporting categorical and boolean parameters would require designing and evaluating a scheme to represent and extract such parameters within DiffTune\xspace{}. One candidate representation is one-hot encoding, but it has not been evaluated in DiffTune\xspace{}; a sketch of this idea follows below.
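For illustration only, here is a hypothetical sketch (not implemented or evaluated in DiffTune\xspace{}) of such a one-hot relaxation: a categorical parameter is represented by trainable logits, optimized through the surrogate as a softmax, and extracted with a hard argmax.
\begin{verbatim}
# Hypothetical sketch (not part of DiffTune): relaxing a categorical
# parameter via softmax over logits, extracting via hard argmax.
import torch

CHOICES = ["in_order", "out_of_order"]       # made-up categorical parameter
logits = torch.nn.Parameter(torch.zeros(len(CHOICES)))

# During optimization the surrogate would consume the soft relaxation,
# a differentiable convex combination of the one-hot encodings:
soft = torch.softmax(logits, dim=0)          # e.g. tensor([0.5, 0.5])

# Extraction: collapse the relaxation back to a discrete choice.
extracted = CHOICES[int(torch.argmax(soft))]
print(soft, extracted)
\end{verbatim}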
\paragraph{Dependent parameters} All integers in the range $[1, \infty)$ are valid settings for llvm-mca\xspace{}'s parameters. However, other simulators, such as gem5, have stricter conditions---expressed as assertions in the simulator---on the relationship among different parameters.\footnote{For an example, see \url{https://github.com/gem5/gem5/blob/v20.0.0.0/src/cpu/o3/decode_impl.hh\#L423}, which is based on the interaction between different parameters, defined at \url{https://github.com/gem5/gem5/blob/v20.0.0.0/src/cpu/o3/decode_impl.hh\#L75}.} DiffTune\xspace{} also does not apply when there is a variable number of parameters: we are able to learn the port mappings in a fixed-size \texttt{PortMap}\xspace, but do not learn the number of ports in the \texttt{PortMap}\xspace, instead fixing it at 10 (the default value for the Haswell microarchitecture). Extending DiffTune\xspace{} to optimize simulators with rich, dynamic constrained relationships between parameters motivates new work in efficient techniques to sample such sets of parameters~\citep{dutra_efficient_2018}. \paragraph{Sampling distributions} Extending DiffTune\xspace{} to other simulators also requires defining appropriate sampling distributions for each parameter. While the sampling distributions do not have to directly lead to parameter settings that lead the simulator to have low error (e.g., the sampling distributions defined in \Cref{sec:methodology} lead to an average error of llvm-mca\xspace{} on Haswell of $171.4\% \pm 95.7\%$), they do need to contain values that the parameter table estimate may take on during the parameter table optimization phase (because neural networks like our modification of Ithemal are not guaranteed to be able to accurately extrapolate outside of their training distribution). Other approaches to optimizing with learned differentiable surrogates handle this by continuously re-optimizing the surrogate\xspace{} in a region around the current parameter estimate~\citep{shirobokov_differentiating_2020}, a promising direction that could alleviate the need to hand-specify proper sampling distributions. \section{Related Work} \label{sec:related} Simulators are widely used for architecture research to model the interactions of architectural components of a system~\citep{binkert_gem5_2011,llvm-mca,ptlsim,marss,sanchez_zsim_2013}. Configuring and validating CPU simulators to accurately model systems of interest is a challenging task~\citep{chen_bhive_2019,gutierrez_sources_2014,akram_validation_2019}. We review related techniques for setting CPU simulator parameters in \Cref{sec:related-simulate}, as well as related techniques to DiffTune\xspace{}~in~\Cref{sec:related-optimize}. \subsection{Setting CPU Simulator Parameters} \label{sec:related-simulate} In this section, we discuss related approaches for setting CPU simulator parameters. \paragraph{Measurement} One methodology for setting the parameters of an abstract model is to gather fine-grained measurements of each individual parameter's realization in the physical machine~\citep{agner,abel_uops_2019} and then set the parameters to their measured values~\citep{mca-agner,mca-uops}. When the semantics of the simulator and the semantics of the measurement methodology coincide, then these measurements can serve as effective parameter values. However, if there is a mismatch between the simulator and measurement methodology, then measurements may not provide effective parameter settings. 
All fine-grained measurement frameworks rely on accurate hardware performance counters to measure the parameters of interest. Such performance counters do not always exist, such as with per-port measurement performance counters on AMD Zen~\citep{ritter_pmevo_2020}. When such performance counters are present, they are not always reliable~\citep{weaver_hardware_2008}. \paragraph{Optimizing CPU simulators} Another methodology for setting parameters of an abstract model is to infer the parameters from end-to-end measurements of the performance of the physical machine. In the most related effort in this space, \citet{ritter_pmevo_2020} present a framework for inferring port usage of instructions based on optimizing against a CPU model that solves a linear program to predict the throughput of a basic block. Their approach is specifically designed to infer port mappings, and it is not clear how the approach could be extended to infer different parameters in a more complex simulator, such as extending their simulation model to include data dependencies, dispatch width, or reorder buffer size. To the best of our knowledge, DiffTune\xspace{} is the first approach designed to optimize an arbitrary simulator, provided that the simulator and its parameters match DiffTune\xspace{}'s scope of applicability~(\Cref{sec:limitations}). \subsection{Differentiable surrogates and approximations} \label{sec:related-optimize} In this section, we survey techniques related to DiffTune\xspace{} that facilitate optimization by using differentiable surrogates or approximations. \paragraph{Optimization with learned differentiable surrogates} Optimization of black-box and non-differentiable functions with learned differentiable surrogates is an emerging set of techniques, with applications in physical sciences~\citep{shirobokov_differentiating_2020,louppe_adversarial_2017}, reinforcement learning~\citep{grathwohl_backpropagation_2018}, and computer security~\citep{she_neuzz_2019}. \citet{shirobokov_differentiating_2020} use learned differentiable surrogates to optimize parameters for generative models of small physics simulators. This technique, concurrently released on arXiv, is similar to an iterative version of DiffTune\xspace{} that continuously re-optimizes the surrogate around the current parameter table estimate. \citet{louppe_adversarial_2017} propose optimizing non-differentiable physics simulators by formulating the joint optimization problem as adversarial variational optimization. \citeauthor{louppe_adversarial_2017}'s technique is applicable in principle, though it has only been evaluated in small settings with a single parameter to learn. \citet{grathwohl_backpropagation_2018} use learned differentiable surrogates to approximate the gradient of black-box or non-differentiable functions, in order to reduce the variance of gradient estimators of random variables. While similar, \citeauthor{grathwohl_backpropagation_2018}'s surrogate optimization has a different objective: reducing the variance of other gradient estimators~\citep{reinforce}, rather than necessarily mimicking the black-box function. \citet{she_neuzz_2019} use learned differentiable surrogates to approximate the branching behavior of real-world programs, then find program inputs that trigger bugs in the program. \citeauthor{she_neuzz_2019}'s surrogate does not learn the full input-output behavior of the program, only estimating which edges in the program graph are or are not taken.
\paragraph{CPU simulator surrogates} \citet{ipek_exploring_2006} use neural networks to learn to predict the IPC of a cycle-accurate simulator given a set of design space parameters, to enable efficient design space exploration. \citet{lee_illustrative_2007} use regression models to predict the performance and power usage of a CPU simulator, similarly enabling efficient design space exploration. Neither \citeauthor{ipek_exploring_2006} nor \citeauthor{lee_illustrative_2007} then uses the models to optimize the simulator to be more accurate; both also apply exhaustive or grid search to explore the parameter space, rather than using the gradient of the simulator~surrogate. \paragraph{Differentiating arbitrary programs} \citet{chaudhuri_smooth_2010} present a method to approximate numerical programs by executing programs probabilistically, similar to the idea of blurring an image. This approach lets \citeauthor{chaudhuri_smooth_2010} apply gradient descent to parameters of arbitrary numerical programs. However, the semantics presented by \citeauthor{chaudhuri_smooth_2010} only apply to a limited set of program constructs and do not easily extend to the set of program constructs exhibited by large-scale CPU simulators. \input{exegesis-figures} \section{Evaluation} \label{sec:results-main} In this section, we report and analyze the results of using DiffTune\xspace{} to learn the parameters of llvm-mca\xspace{} across different x86 microarchitectures. We first describe the methodological details of our evaluation in \Cref{sec:methodology}. We then analyze the error of llvm-mca\xspace{} instantiated with the learned parameters, finding the following: \begin{itemize} \item DiffTune\xspace{} is able to learn parameters that lead to lower error than the default expert-tuned parameters across all four tested microarchitectures. (\Cref{sec:results-error}) \item Black-box global optimization with OpenTuner~\citep{ansel_opentuner_2014} cannot find a full set of parameters for llvm-mca\xspace{}'s Intel x86 simulation model that match llvm-mca\xspace{}'s default error. (\Cref{sec:results-opentuner}) \end{itemize} To show that our implementation of DiffTune\xspace{} is extensible to CPU simulators other than llvm-mca\xspace{}, we evaluate DiffTune\xspace{} on llvm\_sim\xspace{} in \Cref{sec:exegesis}. \subsection{Methodology} \label{sec:methodology} Following \citet{chen_bhive_2019}, we use llvm-mca\xspace{} version 8.0.1 (commit hash \texttt{19a71f6}). We specifically focus on llvm-mca\xspace{}'s Intel x86 simulation model: llvm-mca\xspace{} supports behavior beyond that described in \Cref{sec:background} (e.g., optimizing zero idioms, constraining the number of physical registers available, etc.) but this behavior is disabled by default in the Intel microarchitectures evaluated in this paper. We do not enable or learn any behavior not present in llvm-mca\xspace{}'s default Intel x86 simulation model, including when evaluating on AMD. \paragraph{llvm-mca\xspace{} parameters} For each microarchitecture, we learn the parameters specified in \Cref{tab:mca-parameters}. Following the default value in llvm-mca\xspace{} for Haswell, we assume that there are 10 execution ports available for dispatch for all microarchitectures. llvm-mca\xspace{} supports simulation of instructions that can be dispatched to multiple different ports in the \texttt{PortMap}\xspace parameter.
However, the simulation of port group parameters in the \texttt{PortMap}\xspace does not correspond to standard definitions of port groups~\citep{andrea_email,agner,ritter_pmevo_2020}. We therefore set all port group parameters in the \texttt{PortMap}\xspace to zero, removing that component of the simulation. \paragraph{Dataset} We use the BHive dataset from \citet{chen_bhive_2019}, which contains basic blocks sampled from a diverse set of applications (e.g., OpenBLAS, Redis, LLVM, etc.) along with the measured execution times of these basic blocks unrolled in a loop. These measurements are designed to conform to the same modeling assumptions made by llvm-mca\xspace{}. \begin{table} \caption{Dataset summary statistics.} \label{tab:dataset-summary-statistics} \centering \begin{tabular}{lc} \toprule \textbf{Statistic} & \textbf{Value} \\ \midrule \# Blocks & \\ \multicolumn{1}{r}{Train} & $230111$ \\ \multicolumn{1}{r}{Validation} & $28764$ \\ \multicolumn{1}{r}{Test} & $28764$ \\ \multicolumn{1}{r}{\textbf{Total}} & \textbf{$287639$} \\ \midrule Block Length & \\ \multicolumn{1}{r}{Min} & $1$ \\ \multicolumn{1}{r}{Median} & $3$ \\ \multicolumn{1}{r}{Mean} & $4.93$ \\ \multicolumn{1}{r}{Max} & $256$ \\ \midrule Median Block Timing (cycles) & \\ \multicolumn{1}{r}{Ivy Bridge} & $132$ \\ \multicolumn{1}{r}{Haswell} & $123$ \\ \multicolumn{1}{r}{Skylake} & $120$ \\ \multicolumn{1}{r}{Zen 2} & $114$ \\ \midrule \# Unique Opcodes & \\ \multicolumn{1}{r}{Train} & $814$ \\ \multicolumn{1}{r}{Validation} & $610$ \\ \multicolumn{1}{r}{Test} & $580$ \\ \multicolumn{1}{r}{\textbf{Total}} & $837$ \\ \bottomrule \end{tabular} \end{table} We use the latest available version of the released timings on GitHub.\footnote{\url{https://github.com/ithemal/bhive/tree/5878a18/benchmark/throughput}} We evaluate on the datasets released with BHive for the Intel x86 microarchitectures Ivy Bridge, Haswell, and Skylake. We also evaluate on AMD Zen~2, which was not included in the BHive dataset. The Zen~2 measurements were gathered by running a version of BHive modified to time basic blocks using AMD performance counters on an AMD EPYC 7402P, using the same methodology as \citeauthor{chen_bhive_2019}. Following \citeauthor{chen_bhive_2019}, we remove all basic blocks potentially affected by virtual page aliasing. We randomly split off $80\%$ of the measurements into a training set, $10\%$ into a validation set for development, and $10\%$ into the test set reported in this paper. We use the same train, validation, and test set split for all microarchitectures. The training and test sets are block-wise disjoint: there are no identical basic blocks between the training and test set. Summary statistics of the dataset are presented in \Cref{tab:dataset-summary-statistics}. \paragraph{Objective} We use the same definition of timing as \citet{chen_bhive_2019}: the number of cycles it takes to execute 100 iterations of the given basic block, divided by 100. Following \citeauthor{chen_bhive_2019}'s definition of error, we optimize llvm-mca\xspace{} to minimize the mean absolute percentage error (MAPE) against a dataset: \[ \text{Error} \triangleq \frac{1}{|\mathcal{D}|}\sum_{(x, y) \in \mathcal{D}} \frac{|f(x) - y|}{y} \] We note that an error of above $100\%$ is possible when $f(x)$ is much larger than $y$. \paragraph{Training methodology} We use PyTorch-1.2.0 on an NVIDIA Tesla V100 to train the surrogate\xspace and parameters.
We train the surrogate\xspace and the parameter table using Adam~\citep{kingma_adam_2014}, a stochastic first-order optimization technique, with a batch size of 256. We use a learning rate of $0.001$ to train the surrogate\xspace and a learning rate of $0.05$ to train the parameter~table. To train the surrogate\xspace, we generate a simulated dataset of $2301110$ blocks ($10\times$ the length of the original training set). For each basic block in the simulated dataset, we sample a random parameter table, with each \texttt{WriteLatency}\xspace a uniformly random integer between $0$ and $5$ (inclusive), the \texttt{PortMap}\xspace for each instruction assigning a uniformly random value between $0$ and $2$ cycles to each of between $0$ and $2$ randomly selected ports, each \texttt{ReadAdvanceCycles}\xspace between $0$ and $5$, each \texttt{NumMicroOps}\xspace between $1$ and $10$, the \texttt{DispatchWidth}\xspace uniform between $1$ and $10$, and the \texttt{ReorderBufferSize}\xspace uniform between $50$~and~$250$. A random parameter table sampled from this distribution has error $171.4\% \pm 95.7\%$. See \Cref{sec:limitations} for more discussion of these sampling distributions. We loop over this simulated dataset $6$ times when training the surrogate\xspace, totaling an equivalent of $60$ epochs over the original training set. To train the parameter table, we initialize it to a random sample from the parameter training distribution, then train it for $1$ epoch against the original training set. \subsection{Error of Learned Parameters} \label{sec:results-error} \begin{table} \caption{Error of llvm-mca\xspace{} with the default and learned parameters, compared against baselines.} \label{tab:results-mca-all} \centering \begin{tabular}{p{12em}cc} \toprule \textbf{Architecture} \hfill \textbf{Predictor} & \textbf{Error} & \textbf{Kendall's Tau} \\ \midrule \textbf{Ivy Bridge} \hfill Default & $33.5\%$ & 0.788 \\ \hfill DiffTune\xspace{} & $25.4\% \pm 0.5\%$ & $0.735 \pm 0.012$ \\ \cmidrule{2-3} \hfill Ithemal & $9.4\%$ & 0.858 \\ \hfill IACA & $15.7\%$ & $0.810$ \\ \hfill OpenTuner & $102.0\%$ & 0.515 \\ \midrule \textbf{Haswell} \hfill Default & $25.0\%$ & 0.783 \\ \hfill DiffTune\xspace{} & $23.7\% \pm 1.5\%$ & $0.745 \pm 0.009$ \\ \cmidrule{2-3} \hfill Ithemal & $9.2\%$ & 0.854 \\ \hfill IACA & $17.1\%$ & $0.800$ \\ \hfill OpenTuner & $105.4\%$ & 0.522 \\ \midrule \textbf{Skylake} \hfill Default & $26.7\%$ & 0.776 \\ \hfill DiffTune\xspace{} & $23.0\% \pm 1.4\%$ & $0.748 \pm 0.008$ \\ \cmidrule{2-3} \hfill Ithemal & $9.3\%$ & 0.859 \\ \hfill IACA & $14.3\%$ & $0.811$ \\ \hfill OpenTuner & $113.0\%$ & 0.516 \\ \midrule \textbf{Zen 2} \hfill Default & $34.9\%$\footnotemark & 0.794 \\ \hfill DiffTune\xspace{} & $26.1\% \pm 1.0\%$ & $0.689 \pm 0.007$ \\ \cmidrule{2-3} \hfill Ithemal & $9.4\%$ & 0.873 \\ \hfill IACA & N/A & N/A \\ \hfill OpenTuner & $131.3\%$ & 0.494 \\ \bottomrule \end{tabular}% \end{table} \Cref{tab:results-mca-all} presents the average error and correlation of llvm-mca\xspace{} with the default parameters (labeled Default) and llvm-mca\xspace{} with the learned parameters (labeled DiffTune\xspace{}). As baselines, \Cref{tab:results-mca-all} also presents Ithemal's error, as the most accurate predictor evaluated by \citeauthor{chen_bhive_2019}, IACA's error, as the most accurate analytical model evaluated by \citeauthor{chen_bhive_2019}, and llvm-mca\xspace{} with parameters learned by OpenTuner (which we discuss further in \Cref{sec:results-opentuner}).
Because IACA is written by Intel to analyze Intel microarchitectures, it does not generate predictions for Zen~2. We report mean absolute percentage error, as defined in \Cref{sec:methodology}, and Kendall's Tau rank correlation coefficient, measuring the fraction of pairs of timing predictions in the test set that are ordered correctly. For the learned parameters, we report the mean and standard deviation of error and Kendall's Tau across three independent runs~of~DiffTune\xspace{}. Across all microarchitectures, the parameter set learned by DiffTune\xspace{} achieves equivalent or lower error than the default parameter set. These results demonstrate that DiffTune\xspace{} can learn all of llvm-mca\xspace{}'s microarchitecture-specific parameters, from scratch, to equivalent accuracy as the hand-written parameters. \begin{table} \caption{Error of llvm-mca\xspace{} with default and learned parameters on Haswell, grouped by BHive applications~and~categories.} \label{tab:bhive-category-errors} \centering \begin{tabular}{lccc} \toprule \multirow{2}{*}{\textbf{Block Type}} & \multirow{2}{*}{\textbf{\# Blocks}} & \textbf{Default} & \textbf{Learned} \\ &&\textbf{Error}&\textbf{Error}\\ \midrule OpenBLAS & 1478 & $28.8\%$ & $29.0\%$ \\ Redis & 839 & $41.2\%$ & $22.5\%$ \\ SQLite & 764 & $32.8\%$ & $21.6\%$ \\ GZip & 182 & $40.6\%$ & $20.6\%$ \\ TensorFlow & 6399 & $33.5\%$ & $22.1\%$ \\ Clang/LLVM & 18781 & $22.0\%$ & $21.0\%$ \\ Eigen & 387 & $44.3\%$ & $23.8\%$ \\ Embree & 1067 & $34.1\%$ & $21.3\%$ \\ FFmpeg & 1516 & $30.9\%$ & $21.2\%$ \\ \midrule Scalar (Scalar ALU operations) & 7952 & $17.2\%$ & $18.9\%$ \\ Vec (Purely vector instructions) & 104 & $35.3\%$ & $39.6\%$ \\ Scalar/Vec & \multirow{2}{*}{614} & \multirow{2}{*}{$53.6\%$} & \multirow{2}{*}{$37.5\%$} \\ \hfill (Scalar and vector arithmetic) \\ Ld (Mostly loads) & 10850 & $27.2\%$ & $24.4\%$ \\ St (Mostly stores) & 4490 & $24.7\%$ & $8.7\%$ \\ Ld/St (Mix of loads and stores) & 4754 & $27.9\%$ & $30.3\%$ \\ \bottomrule \end{tabular} \end{table} \footnotetext{llvm-8.0.1 does not support Zen~2. The default error we report for Zen~2 uses the znver1 target in llvm-8.0.1, targeting Zen~1. The Zen~2 target in llvm-10.0.1 has a higher error of $39.8\%$.} We also analyze the error of llvm-mca\xspace{} on the Haswell microarchitecture using the evaluation metrics from \citet{chen_bhive_2019}, designed to validate x86 basic block performance models. \citeauthor{chen_bhive_2019} present three forms of error analysis: overall error, per-application error, and per-category error. Overall error is the error reported in \Cref{tab:results-mca-all}. Per-application error is the average error of basic blocks grouped by the source application of the basic block (e.g., TensorFlow, SQLite, etc.; blocks can have multiple different source applications). Per-category error is the average error of basic blocks grouped into clusters based on the hardware resources used by each basic block. The per-application and per-category errors are presented in \Cref{tab:bhive-category-errors}. The learned parameters outperform the defaults across most source applications, with the exception of OpenBLAS where the learned parameters result in $0.2\%$ higher error. The learned parameters perform similarly to the default across most categories, with the primary exceptions of the Scalar/Vec category and the St category, in which the learned parameters perform significantly better than the default parameters.
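For reference, both metrics reported in these tables can be computed from paired arrays of predicted and measured block timings; a minimal helper (the sample values below are arbitrary):
\begin{verbatim}
# Helper computing the two reported metrics: MAPE (the error definition
# from the Methodology section) and Kendall's Tau rank correlation.
import numpy as np
from scipy.stats import kendalltau

def evaluate(pred, true):
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    mape = np.mean(np.abs(pred - true) / true)   # mean absolute pct. error
    tau, _ = kendalltau(pred, true)              # pairwise ordering agreement
    return mape, tau

print(evaluate([118.0, 250.0, 90.0], [120.0, 130.0, 100.0]))
\end{verbatim}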
\subsection{Black-box global optimization with OpenTuner} \label{sec:results-opentuner} In this section, we describe the methodology and performance of using black-box global optimization with OpenTuner~\citep{ansel_opentuner_2014} to find parameters for llvm-mca\xspace{}. We find that OpenTuner is not able to find parameters that lead to error equivalent to DiffTune\xspace{}'s in llvm-mca\xspace{}'s Intel x86 simulation~model. \paragraph{Background} We use OpenTuner as a representative example of a black-box global optimization technique. OpenTuner is primarily a system for tuning parameters of programs to decrease run-time (e.g., tuning compiler flags, etc.), but has also been validated on other optimization problems, such as finding the series of button presses in a video game simulator that makes the most progress in the game. OpenTuner is an iterative algorithm that uses a multi-armed bandit to pick the most promising search technique among an ensemble of search techniques that span both convex and non-convex optimization. On each iteration, the bandit evaluates the current set of parameters. Using the results of that evaluation, the bandit then selects a search technique, which proposes a new set of parameters. \paragraph{Methodology} For computational budget parity with DiffTune\xspace{}, we permit OpenTuner to evaluate the same number of basic blocks as used end-to-end in our learning approach. We initialize OpenTuner with a sample from DiffTune\xspace{}'s parameter table sampling distribution. We constrain OpenTuner to search per-instruction (\texttt{NumMicroOps}\xspace, \texttt{WriteLatency}\xspace, \texttt{ReadAdvanceCycles}\xspace, \texttt{PortMap}\xspace) parameter values between 0 and 5, \texttt{DispatchWidth}\xspace between 1 and 10, and \texttt{ReorderBufferSize}\xspace between 50 and 250; these ranges contain the majority of the parameter values observed in the default and learned parameter sets.\footnote{Widening the search space beyond this range resulted in a significantly higher error for OpenTuner.} We evaluate the accuracy of llvm-mca\xspace{} with the resulting set of parameters using the same methodology as in \Cref{sec:results-error}. \paragraph{Results} \Cref{tab:results-mca-all} presents the error of llvm-mca\xspace{} when parameterized with OpenTuner's discovered parameters. OpenTuner performs worse than DiffTune\xspace{}, resulting in error above $100\%$ across all microarchitectures. Thus, DiffTune\xspace{} requires substantially fewer examples to optimize llvm-mca\xspace than OpenTuner does.
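A minimal sketch of this setup, structured after OpenTuner's published tutorials. Here \texttt{run\_llvm\_mca\_error} and \texttt{N\_OPCODES} are hypothetical stand-ins for the harness that writes a proposed configuration into llvm-mca\xspace{}'s tables and reports the resulting error:
\begin{verbatim}
# Schematic OpenTuner harness in the style of the OpenTuner tutorials.
import opentuner
from opentuner import ConfigurationManipulator, IntegerParameter
from opentuner import MeasurementInterface, Result

N_OPCODES = 3  # stand-in; the real search covers hundreds of opcodes

def run_llvm_mca_error(cfg):
    # Hypothetical stub: write cfg into llvm-mca's tables, run it on a
    # batch of blocks, and return the mean absolute percentage error.
    return abs(cfg['DispatchWidth'] - 4) + abs(cfg['ReorderBufferSize'] - 192)

class McaTuner(MeasurementInterface):
    def manipulator(self):
        m = ConfigurationManipulator()
        m.add_parameter(IntegerParameter('DispatchWidth', 1, 10))
        m.add_parameter(IntegerParameter('ReorderBufferSize', 50, 250))
        for i in range(N_OPCODES):  # per-opcode parameters, e.g. latencies
            m.add_parameter(IntegerParameter('WriteLatency_%d' % i, 0, 5))
        return m

    def run(self, desired_result, input, limit):
        cfg = desired_result.configuration.data      # dict: name -> value
        return Result(time=run_llvm_mca_error(cfg))  # "time" is minimized

if __name__ == '__main__':
    McaTuner.main(opentuner.default_argparser().parse_args())
\end{verbatim}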
\subsection{An example of state attack against a power network} Consider the power network model analyzed in Example \ref{Example: power network structural analysis} and illustrated in Fig. \ref{power_network}, and let the variables $\theta_4$ and $\theta_5$ be affected, respectively, by the unknown and unmeasurable signals $u_1(t)$ and $u_2(t)$. Suppose that a monitoring unit is allowed to measure directly the state variables of the first generator, that is, $y_1(t)=\delta_1(t)$ and $y_2(t)=\omega_1(t)$. \begin{figure} \centering \includegraphics[width=.65\columnwidth]{./img/digraph_paths}\\ \caption{In the above network, there is no linking of size $2$ from the input to the output vertices. Indeed, the vertices $\theta_1$ and $\omega_1$ belong to every path from $\{u_1,u_2\}$ to $\{y_1,y_2\}$. Two input to output paths are depicted in red.}\label{digraph_paths} \end{figure} Notice from Fig. \ref{digraph_paths} that the maximum size of a linking from the failure to the output vertices is $1$, so that, by Theorem \ref{thm:vulnerability}, there exists a structural vulnerability. In other words, for every choice of the network matrices, there exist nonzero $u_1(t)$ and $u_2(t)$ that are not detectable through the measurements.\footnote{When these output-nulling inputs $u_1(t)$, $u_2(t)$ are regarded as additional loads, they are entirely sustained by the second and third generators.} We now consider a numerical realization of this system. Let the input matrices be $B=[e_8 \; e_9]$ and $D = [0 \; 0]^{\transpose}$, the measurement matrix be $C = [e_1 \; e_4]^\transpose$, and the system matrix $A$ be as in equation \eqref{eq: power network descriptor system model} with $M_{g} = \text{blkdiag}(.125,.034,.016)$, $D_{g} = \text{blkdiag}(.125,.068,.048)$, and \begin{align*} \mathcal L = \!\left[ \begin{smallmatrix} .058 & 0 & 0 & -.058 & 0 & 0 & 0 & 0 & 0\\ 0 & .063 & 0 & 0 & -.063 & 0 & 0 & 0 & 0\\ 0 & 0 & .059 & 0 & 0 & -.059 & 0 & 0 & 0\\ -.058 & 0 & 0 & .235 & 0 & 0 & -.085 & -.092 & 0\\ 0 & -.063 & 0 & 0 & .296 & 0 & -.161 & 0 & -.072\\ 0 & 0 & -.059 & 0 & 0 & .330 & 0 & -.170 & -.101\\ 0 & 0 & 0 & -.085 & -.161 & 0 & .246 & 0 & 0\\ 0 & 0 & 0 & -.092 & 0 & -.170 & 0 & .262 & 0\\ 0 & 0 & 0 & 0 & -.072 & -.101 & 0 & 0 & .173 \end{smallmatrix} \right]. \end{align*} Let $U_1(s)$ and $U_2(s)$ be the Laplace transform of the attack signals $u_1(t)$ and $u_2(t)$, and let \begin{align*} \begin{bmatrix} U_1(s) \\ U_2(s) \end{bmatrix} = \underbrace{ \begin{bmatrix} \frac{-1.024 s^4 - 5.121 s^3 - 10.34 s^2 - 9.584 s - 3.531}{s^4 + 5 s^3 + 9.865 s^2 + 9.173 s + 3.531}\\ 1 \end{bmatrix}}_{\mathcal{N}(s)} \bar U(s), \end{align*} for {\em some arbitrary} nonzero signal $\bar U(s)$. Then it can be verified that the failure cannot be detected through the measurements $y_1(t)$ and $y_2(t)$. In fact, $\mathcal{N}(s)$ coincides with the null space of the input/output transfer matrix. An example is in Fig. \ref{sim_unstable}, where the second and third generators are driven unstable by the attack, yet the first generator does not deviate from the nominal operating condition.
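The linking computation used in this example is easy to mechanize. The following sketch (with small stand-in structure matrices, not the 9-bus model) computes the maximum linking size by vertex splitting and max-flow, so that the flow value equals the maximum number of mutually vertex-disjoint input-to-output paths:
\begin{verbatim}
# Sketch: maximum linking size from input to output vertices via networkx.
# Splitting every vertex (v/in -> v/out, capacity 1) makes the max-flow
# value equal the maximum number of vertex-disjoint paths (Menger).
import networkx as nx
import numpy as np

def max_linking(E, A, B, C, D):
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    G = nx.DiGraph()
    names = ([f'x{i}' for i in range(n)] + [f'u{j}' for j in range(m)]
             + [f'y{k}' for k in range(p)])
    for v in names:
        G.add_edge(v + '/in', v + '/out', capacity=1)  # unit vertex capacity
    def arc(u, v):
        G.add_edge(u + '/out', v + '/in', capacity=1)
    for i in range(n):
        for j in range(n):
            if E[i, j] or A[i, j]:
                arc(f'x{j}', f'x{i}')   # edge x_j -> x_i iff [E]_ij or [A]_ij
        for j in range(m):
            if B[i, j]:
                arc(f'u{j}', f'x{i}')
    for k in range(p):
        for j in range(n):
            if C[k, j]:
                arc(f'x{j}', f'y{k}')
        for j in range(m):
            if D[k, j]:
                arc(f'u{j}', f'y{k}')
    for j in range(m):
        G.add_edge('S', f'u{j}/in', capacity=1)
    for k in range(p):
        G.add_edge(f'y{k}/out', 'T', capacity=1)
    return nx.maximum_flow_value(G, 'S', 'T')

# Both inputs are forced through the single state vertex x0, so the
# maximum linking has size 1 < |U| = 2: a structural vulnerability.
E = np.eye(2); A = np.array([[0, 0], [1, 0]])
B = np.array([[1, 1], [0, 0]])
C = np.array([[1, 0], [0, 1]]); D = np.zeros((2, 2))
print(max_linking(E, A, B, C, D))   # -> 1
\end{verbatim}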
\begin{figure} \centering \includegraphics[width=.65\columnwidth]{./img/sim_unstable}\\ \caption{The velocities $\omega_2$ and $\omega_3$ are driven unstable by the signals $u_1(t)$ and $u_2(t)$, which are undetectable from the measurements of $\omega_1$ and $\delta_1$.}\label{sim_unstable} \end{figure} Suppose now that the rotor angle of the first generator and the voltage angle at the $6$-th bus are measured, that is, $C=[e_1 \; e_{12}]^\transpose$. Then, there exists a linking of size $2$ from $\mathcal{U}$ to $\mathcal{Y}$, and the system $(E,A,B,C)$ is left-invertible. Following Theorem \ref{thm:inv_zero}, the invariant zeros of the power network can be computed by looking at its reduced system, and they are $-1.6864 \pm 1.8070i$ and $-0.8136 \pm 0.2258i$. Consequently, if the network state is unknown at the failure time, there exist vulnerabilities that an attacker may exploit to affect the network while remaining undetected. Finally, we remark that such state attacks are entirely realizable by cyber attacks \cite{AHMR-ALG:11}. \subsection{An example of output attack against a power network}\label{sec:example_2} \begin{figure} \centering \includegraphics[width=.9\columnwidth]{./img/IEEE14} \caption{For the here represented IEEE 14 bus system, if the voltage angle of one bus is measured exactly, then a cyber attack against the measurement data is always detectable by our dynamic detection procedure. In contrast, as shown in \cite{YL-MKR-PN:09}, a cyber attack may remain undetected by a static procedure if it compromises as few as four measurements.} \label{fig:ieee14} \end{figure} Let the IEEE 14 bus power network (Fig. \ref{fig:ieee14}) be modeled as a descriptor system as in Section \ref{example:power}. Following \cite{YL-MKR-PN:09}, let the measurement matrix $C$ consist of the real power injections at all buses, of the real power flows of all branches, and of one rotor angle (or one bus angle). We assume that an attacker can compromise all the measurements, independently of each other, except for one referring to the rotor angle. Let $k \in \mathbb{N}_0$ be the cardinality of the attack set. It is known that an attack undetectable to a static detector exists if $k \ge 4$ \cite{YL-MKR-PN:09}. In other words, due to the sparsity pattern of $C$, there exists a signal $u_K(t)$, with (the same) four nonzero entries at all times, such that $D u_K(t) \in \operatorname{Im} (C)$ at all times. By Theorem \ref{Theorem: Static detectability of cyber-physical attacks} the attack set $K$ remains undetected by a Static Detector through the attack mode $u_K (t)$. On the other hand, following Theorem \ref{Theorem: Dynamic detectability of cyber-physical attacks}, it can be verified that, for the same output matrix $C$, and independent of the value of $k$, there exists \emph{no} undetectable (output) attack for a dynamic monitor. It should be noticed that this result relies on the fact that the rotor angle measurement is known to be correct, because, for instance, it is protected using sophisticated and costly security methods \cite{ARM-RLE:10}. Since the state of the IEEE 14 bus system can be reconstructed by means of this measurement only (in a system theoretic sense, the system is observable by measuring one generator rotor angle), the output attack $D u(t)$ is easily identified as $D u(t) = y(t) - C \hat x(t)$, where $\hat x (t) = x(t)$ is the reconstructed system state at time $t$.
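The identification step just described can be sketched numerically: a Luenberger observer driven only by the protected measurement reconstructs the state, and the residual $y(t) - C \hat x(t)$ then exposes the output attack $D u(t)$. The system below is a small random, stable stand-in, not the IEEE 14 bus model:
\begin{verbatim}
# Sketch of dynamic attack identification: an observer driven only by the
# trusted measurement reconstructs x, and the residual y - C x_hat exposes
# the output attack. Toy 4-state system, not the 14-bus model.
import numpy as np
from scipy.signal import place_poles

rng = np.random.default_rng(0)
n = 4
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))     # stable toy dynamics
c_trust = np.array([[1.0, 0.0, 0.0, 0.0]])             # protected sensor row
L = place_poles(A.T, c_trust.T, [-2.0, -3.0, -4.0, -5.0]).gain_matrix.T

C = np.vstack([c_trust, rng.standard_normal((2, n))])  # full, attackable C
dt, x, x_hat = 1e-3, rng.standard_normal(n), np.zeros(n)
for k in range(20000):
    a = np.array([0.0, 0.5, 0.0]) if k >= 10000 else np.zeros(3)  # spoof #2
    y = C @ x + a
    # the observer uses only the trusted first measurement
    x_hat = x_hat + dt * (A @ x_hat + (L @ (y[:1] - c_trust @ x_hat)))
    x = x + dt * (A @ x)
print(np.round(y - C @ x_hat, 3))   # residual ~ [0, 0.5, 0]: attack found
\end{verbatim}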
\subsection{An example of state and output attack against a water supply network} \begin{figure} \centering \includegraphics[width=1\columnwidth]{./img/epanet3} \caption{This figure shows the structure of the EPANET water supply network model \# 3, which features $3$ tanks ($\text{T}_1$, $\text{T}_2$, $\text{T}_3$), $2$ reservoirs ($\text{R}_1$, $\text{R}_2$), $2$ pumps ($\text{P}_1$, $\text{P}_2$), $96$ junctions, and $119$ pipes. Seven pressure sensors ($\text{S}_1, \dots, \text{S}_7$) have been installed to monitor the network functionalities. A cyber-physical attack to steal water from the reservoir $\text{R}_2$ is reported. Notice that the cyber-physical attack features two state attacks ($u_1$, $u_2$) and one output attack ($u_3$).} \label{fig:epanet3} \end{figure} Consider the water supply network EPANET 3 linearized at a steady state with non-zero pressure drops \cite{EPANET2:00}. The water network model as well as a possible cyber-physical attack are illustrated in Fig. \ref{fig:epanet3}. The considered cyber-physical attack aims at stealing water from the reservoir $\text{R}_2$ while remaining undetected from the installed pressure sensors $\text{S}_1, \dots, \text{S}_7$. In order to achieve its goal, the attacker corrupts the measurements of sensor $\text{S}_1$ (output attack), steals water from the reservoir $\text{R}_2$ (state attack), and, finally, modifies the input of the control pump $\text{P}_2$ to restore the pressure drop due to the loss of water in $\text{R}_2$ (state attack). We now analyze this attack in more detail. Following the modeling in Section \ref{example:water}, an index-one descriptor model describing the evolution of the water network in Fig. \ref{fig:epanet3} is computed. For notational convenience, let $x_1 (t)$, $x_2 (t)$, $x_3 (t)$, and $x_4 (t)$ denote, respectively, the pressure at time $t$ at the reservoir $\text{R}_2$, at the reservoir $\text{R}_1$ and at the tanks $\text{T}_1$, $\text{T}_2$ and $\text{T}_3$, at the junction $\text{P}_2$, and at the remaining junctions. The index-one descriptor model reads as \begin{align*} \begin{bmatrix} \dot x_1(t) \\ M \dot x_2(t) \\ 0 \\ 0 \end{bmatrix} &= \begin{bmatrix} 0 & 0 & 0 & 0\\ 0 & A_{22} & 0 & A_{24}\\ A_{31} & 0 & A_{33} & A_{34}\\ 0 & A_{42} & A_{43} & A_{44} \end{bmatrix} \begin{bmatrix} x_1(t) \\ x_2(t) \\ x_3(t) \\ x_4(t) \end{bmatrix}, \end{align*} where the pattern of zeros is due to the network interconnection structure, and $M = \text{diag}(1,A_1,A_2,A_3)$ corresponds to the dynamics of the reservoir $\text{R}_1$ and the tanks $\text{T}_1$, $\text{T}_2$, and $\text{T}_3$. With the same partitioning, the attack signature reads as $B = [B_1 \; B_2 \; 0]$ and $D = [0 \; 0 \; D_1]$, where \begin{align*} B_1 &= \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}^\transpose,\; B_2 = \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix}^\transpose,\; \text{and } \\ D_1 &= \begin{bmatrix} 1 & 0 & \dots & 0 \end{bmatrix}^\transpose. \end{align*} Let the attack $u_2 (t)$ be chosen as $u_2(t) = - A_{31} x_1(t)$. Then, the state variables $x_2$, $x_3$, and $x_4$ are decoupled from $x_1$. Consequently, the attack mode $u_1$ does not affect the dynamics of $x_2$, $x_3$, and $x_4$. Let $u_1(t) = -1$, and notice that the pressure $x_1 (t)$ decreases with time (that is, water is being removed from $\text{R}_2$). Finally, for the attack to be undetectable, since the state variable $x_1$ is continuously monitored by $\text{S}_1$, let $u_3 (t) = - x_1(t)$.
It can be verified that the proposed attack strategy allows an attacker to steal water from the reservoir $\text{R}_2$ while remaining undetected from the sensors measurements. In other words, the attack $(Bu(t),Du(t))$, with $u(t) = [u_1^\transpose (t) \; u_2^\transpose (t) \; u_3^\transpose (t)]^\transpose$, excites only zero dynamics for the water network system in Fig. \ref{fig:epanet3}. We conclude this section with the following remarks. First, for the implementation of the proposed attack strategy, neither the network initial state nor the network structure, besides $A_{31}$, needs to be known to the attacker. Second, the effectiveness of the proposed attack strategy is independent of the sensors measuring the variables $x_3$ and $x_4$. On the other hand, if additional sensors are used to measure the flow between the reservoir $\text{R}_2$ and the pump $\text{P}_2$, then an attacker would need to corrupt these measurements as well to remain undetected. Third and finally, due to the reliance on networks to control actuators in cyber-physical systems, the attack $u_2 (t)$ on the pump $\text{P}_2$ could be generated by a cyber attack \cite{AHMR-ALG:11}. \subsection{Preliminary notions} We start by recalling some useful facts about structured systems and structural properties \cite{KJR:88,WMW:85}. Let a \emph{structure matrix} $[M]$ be a matrix in which each entry is either a fixed zero or an indeterminate parameter. The system \begin{align} \label{eq:structural_descriptor} \begin{split} [E] \dot x(t) &= [A] x(t) + [B] u(t),\\ y(t) &= [C] x(t) + [D] u(t). \end{split} \end{align} is called a \emph{structured system}, and it is sometimes referred to by the tuple $([E],[A],[B],[C],[D])$ of structure matrices. A system $(E,A,B,C,D)$ is an admissible realization of $([E],[A],[B],[C],[D])$ if it can be obtained from the latter by fixing the indeterminate entries at some particular value. Two systems are structurally equivalent if they are both admissible realizations of the same structured system. Let $d$ be the total number of indeterminate entries of a structured system. By collecting the indeterminate parameters into a vector, an admissible realization is mapped to a point in the Euclidean space $\mathbb{R}^d$. A property which can be asserted on a dynamical system is called \emph{structural} if, informally, it holds for \emph{almost all} admissible realizations. To be more precise, we say that a property is structural if and only if the set of admissible realizations satisfying such property forms a dense subset of the parameters space.\footnote{A subset $S \subseteq P \subseteq \mathbb{R}^d$ is dense in $P$ if, for each $r \in P$ and every $\varepsilon > 0$, there exists $s \in S$ such that the Euclidean distance $\|s - r\| \le \varepsilon$.} For instance, left-invertibility of a nonsingular system is a structural property with respect to $\mathbb{R}^d$ \cite{JMD-CC-JW:03}. Consider the structured cyber-physical system \eqref{eq:structural_descriptor}. It is often the case that, for the tuple $(E,A,B,C,D)$ to be an admissible realization of \eqref{eq:structural_descriptor}, the numerical entries need to satisfy certain algebraic relations. For instance, for $(E,A,B,C,D)$ to be an admissible power network realization, the matrices $E$ and $A$ need to be of the form \eqref{eq: power network descriptor system model}. Let $\mathbb S \subseteq \mathbb{R}^d$ be the admissible parameter space.
We make the following assumption: \begin{itemize} \item[(A4)] the admissible parameters space $\mathbb S$ is a polytope of $\mathbb R^d$, that is, $\mathbb S = \setdef{x \in \mathbb{R}^d}{M x \ge 0}$ for some matrix $M$. \end{itemize} It should be noticed that assumption (A4) is automatically verified for the case of power networks \cite[Lemma 3.1]{FP-AB-FB:10u}. Unfortunately, if the admissible parameters space is a proper subset of $\mathbb{R}^d$, then classical structural system-theoretic results are, in general, not valid \cite[Section 15]{KJR:88}. We now define a mapping between dynamical systems in descriptor form and digraphs. Let ($[E]$,$[A]$,$[B]$,$[C]$,$[D]$) be a structured cyber-physical system under attack. We associate a directed graph $G=(\mathcal{V},\mathcal{E})$ with the tuple ($[E]$,$[A]$,$[B]$,$[C]$,$[D]$). The vertex set is $\mathcal{V} = \mathcal{U} \cup \mathcal{X} \cup \mathcal{Y}$, where $\mathcal{U}=\{u_1,\dots,u_{m}\}$ is the set of input vertices, $\mathcal{X}=\{x_1,\dots,x_{n}\}$ is the set of state vertices, and $\mathcal{Y}=\{y_1,\dots,y_{p}\}$ is the set of output vertices. If $(i,j)$ denotes the edge from the vertex $i$ to the vertex $j$, then the edge set $\mathcal{E}$ is $\mathcal{E}_{[E]} \cup \mathcal{E}_{[A]} \cup \mathcal{E}_{[B]} \cup \mathcal{E}_{[C]} \cup \mathcal{E}_{[D]}$, with $\mathcal{E}_{[E]}=\{(x_j,x_i) : [E]_{ij}\neq 0\}$, $\mathcal{E}_{[A]}=\{(x_j,x_i) : [A]_{ij}\neq 0\}$, $\mathcal{E}_{[B]}=\{(u_j,x_i) : [B]_{ij}\neq 0\}$, $\mathcal{E}_{[C]}=\{(x_j,y_i) : [C]_{ij}\neq 0\}$, and $\mathcal{E}_{[D]}=\{(u_j,y_i) : [D]_{ij}\neq 0\}$. In the latter, for instance, the expression $[E]_{ij} \neq 0$ means that the $(i,j)$-th entry of $[E]$ is a free parameter. \begin{example}{\bf \emph{(Power network structural analysis)}}\label{Example: power network structural analysis} \begin{figure}[tb!] \centering \includegraphics[width=.7\columnwidth]{./img/network_9buses}\\ \caption{WSSC power system with $3$ generators and $6$ buses. The numerical value of the network parameters can be found in \cite{ES:04}.}\label{power_network} \end{figure} \begin{figure} \centering \includegraphics[width=.7\columnwidth]{./img/digraph}\\ \caption{The digraph associated with the network in Fig. \ref{power_network}. The self-loops of the vertices $\{\delta_1,\delta_2,\delta_3\}$, $\{\omega_1,\omega_2,\omega_3\}$, and $\{\theta_1,\dots,\theta_6\}$ are not drawn. The inputs $u_1$ and $u_2$ affect respectively the bus $b_4$ and the bus $b_5$. The measured variables are the rotor angle and frequency of the first generator.}\label{digraph} \end{figure} Consider the power network illustrated in Fig.
\ref{power_network}, where, with $e_i$ denoting the $i$-th canonical vector, we take $[E]=\text{blkdiag}(1,1,1,M_1,M_2,M_3,0,0,0,0,0,0)$, $[B]=[e_8 \; e_9]$, $[C]=[e_1 \; e_4]^\transpose$, $[D] = 0$, and $[A]$ equal to \begin{align*} \left[ \begin{smallmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ a_{4,1} & 0 & 0 & a_{4,4} & 0 & 0 & a_{4,7} & 0 & 0 & 0 & 0 & 0\\ 0 & a_{5,2} & 0 & 0 & a_{5,5} & 0 & 0 & a_{5,8} & 0 & 0 & 0 & 0\\ 0 & 0 & a_{6,3} & 0 & 0 & a_{6,6} & 0 & 0 & a_{6,9} & 0 & 0 & 0\\ a_{7,1} & 0 & 0 & 0 & 0 & 0 & a_{7,7} & 0 & 0 & a_{7,10} & a_{7,11} & 0\\ 0 & a_{8,2} & 0 & 0 & 0 & 0 & 0 & a_{8,8} & 0 & a_{8,10} & 0 & a_{8,12}\\ 0 & 0 & a_{9,3} & 0 & 0 & 0 & 0 & 0 & a_{9,9} & 0 & a_{9,11} & a_{9,12}\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{10,7} & a_{10,8} & 0 & a_{10,10} & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & a_{11,7} & 0 & a_{11,9} & 0 & a_{11,11} & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & a_{12,8} & a_{12,9} & 0 & 0 & a_{12,12} \end{smallmatrix} \right]. \end{align*} The digraph associated with the structure matrices $([E],[A],[B],[C],[D])$ is shown in Fig. \ref{digraph}. \oprocend \end{example} \subsection{Network vulnerability with known initial state}\label{sec:left_inv} We derive graph-theoretic detectability conditions for two different scenarios. Recall from Lemma \ref{undetectable_input} that an attack $u(t)$ is undetectable if $y(x_1,u,t) = y(x_2,0,t)$ for some initial states $x_1$ and $x_2$. In this section, we assume that the system state is known at the failure initial time,\footnote{The failure initial state can be estimated through a state observer \cite{ES:04}.} so that an attack $u(t)$ is undetectable if $y(x_0,u,t) = y(x_0,0,t)$ for some system initial state $x_0$. The complementary case of unknown initial state is studied in Section \ref{sec:inv_zeros}. Consider the cyber-physical system described by the matrices $(E,A,B,C,D)$, and notice that, if the initial state is known, then the attack undetectability condition $y(x_0,u,t) = y(x_0,0,t)$ coincides with the system being not left-invertible.\footnote{A regular descriptor system is left-invertible if and only if its transfer matrix $G(s)$ is of full column rank for almost all $s \in \mathbb{C}$, or if and only if $\left[\begin{smallmatrix} s E - A & -B \\ C & D \end{smallmatrix}\right]$ has full column rank for almost all $s \in \mathbb{C}$ \cite[Theorem 4.2]{TG:93}.} Recall that a subset $S \subseteq \mathbb{R}^d$ is an \emph{algebraic variety} if it coincides with the locus of common zeros of a finite number of polynomials \cite{WMW:85}. Consider the following observation. \begin{lemma}{\bf \emph{(Polytopes and algebraic varieties)}}\label{lemma:algebraic} Let $S \subseteq \mathbb{R}^d$ be a polytope, and let $T \subseteq \mathbb{R}^d$ be an algebraic variety. Then, either $S \subseteq T$, or $S \setminus (S \cap T)$ is dense in $S$. \end{lemma} \begin{proof} Let $T \subseteq \mathbb{R}^d$ be the algebraic variety described by the locus of common zeros of the polynomials $\{\phi_1(x),\dots,\phi_t(x)\}$, with $t \in \mathbb N$, $t < \infty$. Let $P \subseteq \mathbb{R}^d$ be the smallest vector subspace containing the polytope $S$. Then $P \subseteq T$ if and only if every polynomial $\phi_i$ vanishes identically on $P$. Suppose that the polynomial $\phi_i$ does not vanish identically on $P$.
Then, the set $T \cap P$ is contained in the algebraic variety $\{x \in P : \phi_i(x) = 0\}$, and, therefore \cite{WMW:85}, the complement $P \setminus (P \cap T)$ is dense in $P$. By definition of a dense set, the set $S \setminus (S \cap T)$ is also dense in $S$. \end{proof} In Lemma \ref{lemma:algebraic}, interpret the polytope $S$ as the admissible parameters space of a structured cyber-physical system. Then we have shown that left-invertibility of a cyber-physical system is a structural property even when the admissible parameters space is a polytope of the whole parameters space. Consequently, given a structured cyber-physical system, either every admissible realization admits an undetectable attack, or there is no undetectable attack in almost all admissible realizations. Moreover, in order to show that almost all realizations have no undetectable attacks, it is sufficient to prove that this is the case for some specific admissible realizations. Before presenting our main result, we recall the following notions and result. Let $\bar E$ and $\bar A$ be $N$-dimensional square matrices, and let $G(s\bar E-\bar A)$ be the graph associated with the matrix $s \bar E - \bar A$ that consists of $N$ vertices, and an edge from vertex $j$ to $i$ if $\bar A_{ij} \neq 0$ or $\bar E_{ij} \neq 0$. The matrix $s[\bar E] - [\bar A]$ is said to be structurally degenerate if, for any admissible realization $\bar E$ (respectively $\bar A$) of $[\bar E]$ (respectively $[\bar A]$), the determinant $|s\bar E - \bar A|$ vanishes for all $s \in \mathbb{C}$. Recall the following definitions from \cite{JMD-CC-JW:03}. For a given graph $G$, a path is a sequence of vertices where each vertex is connected to the following one in the sequence. A path is simple if every vertex on the path (except possibly the first and the last vertex) occurs only once. Two paths are disjoint if they consist of disjoint sets of vertices. A set of $l$ mutually disjoint and simple paths between two sets of vertices $S_1$ and $S_2$ is called a \emph{linking} of size $l$ from $S_1$ to $S_2$. A simple path in which the first and the last vertex coincide is called a cycle; a \emph{cycle family} of size $l$ is a set of $l$ mutually disjoint cycles. The length of a cycle family equals the total number of edges in the family. \begin{theorem}{\bf \emph{(Structural rank of a square matrix \cite{KJR:94})}}\label{structural_rank} The $N$-dimensional structure matrix $s[\bar E] - [\bar A]$ is structurally degenerate if and only if there exists no cycle family of length $N$ in $G(s[\bar E] - [\bar A])$. \end{theorem} We are now able to state our main result on structural detectability. \begin{theorem}{\bf \emph{(Structurally undetectable attack)}}\label{thm:vulnerability} Let the parameters space of the structured cyber-physical system $([E],[A],[B],[C],[D])$ define a polytope in $\mathbb{R}^d$ for some $d \in \mathbb{N}_0$. Assume that $s[E] - [A]$ is structurally non-degenerate. The system $([E],[A],[B],[C],[D])$ is structurally left-invertible if and only if there exists a linking of size $|\mathcal{U}|$ from $\mathcal{U}$ to $\mathcal{Y}$. \end{theorem} Theorem \ref{thm:vulnerability} can be interpreted in the context of cyber-physical systems.
Indeed, since $|sE-A| \neq 0$ by assumption (A1), and because of assumption (A4), Theorem \ref{thm:vulnerability} states that there exists a structurally undetectable attack if and only if there is no linking of size $|\mathcal{U}|$ from $\mathcal{U}$ to $\mathcal{Y}$, provided that the network state at the failure time is known. \begin{proof} Because of Lemma \ref{lemma:algebraic}, we need to show that, if there are $|\mathcal{U}|$ disjoint paths from $\mathcal{U}$ to $\mathcal{Y}$, then there exist admissible left-invertible realizations. Conversely, if there are at most $|\mathcal{U}|-1$ disjoint paths from $\mathcal{U}$ to $\mathcal{Y}$, then every admissible realization is not left-invertible. \emph{(If)} Let $(E,A,B,C,D)$, with $|sE-A| \neq 0$, be an admissible realization, and suppose there exists a linking of size $|\mathcal{U}|$ from $\mathcal{U}$ to $\mathcal{Y}$. Without loss of generality, assume $|\mathcal{Y}| = |\mathcal{U}|$. For the left-invertibility property we need \begin{align*} \left| \begin{bmatrix} s E - A & -B \\ C & D \end{bmatrix} \right| = \left|sE-A\right| \left|D + C (sE-A)^{-1} B\right| \neq 0, \end{align*} and hence we need $\left|D + C (sE-A)^{-1} B\right| \neq 0$. Notice that $D + C (sE-A)^{-1} B$ corresponds to the transfer matrix of the cyber-physical system. Since there are $|\mathcal{U}|$ independent paths from $\mathcal{U}$ to $\mathcal{Y}$, the matrix $D + C (sE-A)^{-1} B$ can be made nonsingular and diagonal by removing some connection lines from the network. In particular, for a given linking of size $|\mathcal{U}|$ from $\mathcal{U}$ to $\mathcal{Y}$, a nonsingular and diagonal transfer matrix is obtained by setting to zero the entries of $E$ and $A$ corresponding to the edges not in the linking. Then there exist admissible left-invertible realizations, and thus the system $([E],[A],[B],[C],[D])$ is structurally left-invertible. \emph{(Only if)} Take any subset of $|\mathcal{U}|$ output vertices, and let $|\mathcal{U}|-1$ be the maximum size of a linking from $\mathcal{U}$ to $\mathcal{Y}$. Let $[\bar E]$ and $[\bar A]$ be such that $s[\bar E] - [\bar A] = \left[ \begin{smallmatrix} s[E] -[A] & [B]\\ [C] & [D] \end{smallmatrix} \right]$. Consider the previously defined graph $G(s [\bar E]- [ \bar A])$, and notice that a path from $\mathcal{U}$ to $\mathcal{Y}$ in the digraph associated with the structured system corresponds, possibly after relabeling the output variables, to a cycle involving input/output vertices in $G(s [\bar E]- [ \bar A])$. Observe that there are at most $|\mathcal{U}|-1$ such (disjoint) cycles. Hence, there is no cycle family of length $N$, where $N$ is the size of $[\bar A]$, and the statement follows from Theorem \ref{structural_rank}. \end{proof} To conclude this section, note that Theorem \ref{thm:vulnerability} extends \cite{JWW:91} to regular descriptor systems with constraints on parameters. \subsection{Network vulnerability with unknown initial state}\label{sec:inv_zeros} If the failure initial state is unknown, then a vulnerability is identified by the existence of a pair of initial conditions $x_1$ and $x_2$, and an attack $u(t)$ such that $y(x_1,0,t) = y(x_2,u,t)$, or, equivalently, by the existence of invariant zeros for the given cyber-physical system. We will now show that, provided that a cyber-physical system is left-invertible, its invariant zeros can be computed by simply looking at an associated nonsingular state space system.
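Numerically, invariant zeros of either representation can be obtained as the finite generalized eigenvalues of the associated Rosenbrock pencil; a sketch for the square case $|\mathcal{U}| = |\mathcal{Y}|$, with a random realization standing in for the network models considered here:
\begin{verbatim}
# Sketch: invariant zeros as the finite generalized eigenvalues of the
# Rosenbrock pencil [sE - A, -B; C, D] (square case, |U| = |Y|).
import numpy as np
from scipy.linalg import eig

def invariant_zeros(E, A, B, C, D):
    n, m = B.shape
    p = C.shape[0]
    M = np.block([[E, np.zeros((n, m))], [np.zeros((p, n + m))]])
    N = np.block([[A, B], [-C, -D]])
    w = eig(N, M, right=False)    # det(N - s*M) = 0 <=> pencil loses rank
    return w[np.isfinite(w)]      # infinite eigenvalues: M is singular

rng = np.random.default_rng(1)
n, m = 5, 2
E, A = np.eye(n), rng.standard_normal((n, n))
B, C = rng.standard_normal((n, m)), rng.standard_normal((m, n))
D = np.zeros((m, m))
print(np.round(invariant_zeros(E, A, B, C, D), 3))
\end{verbatim}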
Let the state vector $x$ of the descriptor system \eqref{eq: cyber_physical_fault} be partitioned as $[x_1^\transpose \; x_2^\transpose]^\transpose$, where $x_1$ corresponds to the dynamic variables. Let the network matrices $E$, $A$, $B$, $C$, and $D$ be partitioned accordingly, and assume, without loss of generality, that $E$ is given as $E = \textup{blkdiag}(E_{11},0)$, where $E_{11}$ is nonsingular. In this case, the descriptor model \eqref{eq: cyber_physical_fault} reads\,as \begin{align} \label{eq:partitioned_descriptor_system} \begin{split} E_{11} \dot x_1 (t)&= A_{11} x_1(t) + B_1 u(t) + A_{12} x_2(t)\,,\\ 0 &= A_{21} x_{1}(t) + A_{22} x_{2}(t) + B_{2} u(t)\,, \\ y(t) &= C_{1}x_{1}(t) + C_{2} x_{2}(t) + Du(t) \,. \end{split} \end{align} Consider now the associated nonsingular state space system which is obtained by regarding $x_{2}(t)$ as an external input to the descriptor system \eqref{eq:partitioned_descriptor_system} and the algebraic constraint as output: \begin{align} \label{eq:associated_nonsingular_system} \begin{split} \dot x_1 (t)&= E_{11}^{-1}A_{11} x_1(t) + E_{11}^{-1} B_1 u(t) +E_{11}^{-1} A_{12} x_2(t),\\ \tilde y(t) & = \begin{bmatrix} A_{21}\\ C_1 \end{bmatrix} x_1(t) + \begin{bmatrix} A_{22} & B_2\\ C_2 & D \end{bmatrix} \begin{bmatrix} x_2(t)\\ u(t) \end{bmatrix} \,. \end{split} \end{align} \begin{theorem}{\bf \emph{(Equivalence of invariant zeros)}}\label{thm:inv_zero} Consider the descriptor system \eqref{eq: cyber_physical_fault} partitioned as in \eqref{eq:partitioned_descriptor_system}. Assume that, for the corresponding structured system $([E],[A],[B],[C],[D])$, there exists a linking of size $|\mathcal{U}|$ from $\mathcal{U}$ to $\mathcal{Y}$. Then, in almost all admissible realizations, the invariant zeros of the descriptor system \eqref{eq:partitioned_descriptor_system} coincide with those of the associated nonsingular system \eqref{eq:associated_nonsingular_system}. \end{theorem} \begin{proof} From Theorem \ref{thm:vulnerability}, the structured descriptor system $([E],[A],[B],[C],[D])$ is structurally left-invertible. Let $(E,A,B,C,D)$ be a left-invertible realization. The proof now follows a procedure similar to \cite[Proposition 8.4]{JT:06}. Let $s \in \mathbb C$ be an invariant zero for the nonsingular system \eqref{eq:associated_nonsingular_system} with state-zero direction $x_1 \neq 0$ and input-zero direction $u$, that is \begin{equation*} \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} = \underbrace{ \left[\begin{array}{c|cc} sI - E_{11}^{-1}A_{11} & -E_{11}^{-1}A_{12} & - E_{11}^{-1} B_{1} \\ \hline A_{21} & A_{22} & B_{2}\\ C_{1} & C_{2} & D \end{array}\right] }_{\subscr{P}{nonsingular}(s)} \begin{bmatrix} x_{1} \\ x_{2} \\ u \end{bmatrix} \,. \end{equation*} A multiplication of the above equation by $\textup{blkdiag}(E_{11},-I,I)$ and a re-partitioning of the resulting matrix yields \begin{equation} \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} = \underbrace{ \left[\begin{array}{cc|c} sE_{11} - A_{11} & -A_{12} & - B_{1} \\ -A_{21} & -A_{22} & -B_{2}\\\hline C_{1} & C_{2} & D \end{array}\right] }_{\subscr{P}{singular}(s)} \begin{bmatrix} x_{1} \\ x_{2} \\ u \end{bmatrix} \label{eq:partitioned_descriptor_system_pencil} \,. \end{equation} Since $x_1 \neq 0$, we also have $x= [x_{1}^{\transpose} \; x_{2}^{\transpose}]^{\transpose} \neq 0$.
Then, equation \eqref{eq:partitioned_descriptor_system_pencil} implies that $s \in \mathbb C$ is an invariant zero of the descriptor system \eqref{eq:partitioned_descriptor_system} with state-zero direction $x \neq 0$ and input-zero direction $u$. We conclude that the invariant zeros of the nonsingular system \eqref{eq:associated_nonsingular_system} are a subset of those of the descriptor system \eqref{eq:partitioned_descriptor_system}. In order to continue, suppose that there is $s \in \mathbb C$ which is an invariant zero of the descriptor system \eqref{eq:partitioned_descriptor_system} but not of the nonsingular system \eqref{eq:associated_nonsingular_system}. Let $x = [x_{1}^{\transpose} \; x_{2}^{\transpose}]^{\transpose} \neq 0$ and $u$ be the associated state-zero and input-zero directions, respectively. Since $\operatorname{Ker}(\subscr{P}{singular}(s)) = \operatorname{Ker}(\subscr{P}{nonsingular}(s))$ and $s$ is not a zero of the nonsingular system \eqref{eq:associated_nonsingular_system}, it follows that $x_{1} = 0$ and $x_{2} \neq 0$. Accordingly, we have that \begin{equation*} \operatorname{Ker}\left( \begin{bmatrix} -A_{12} & -B_{1} \\ -A_{22} & -B_{2} \\ C_{2} & D \end{bmatrix} \right) \neq \{0\} \,. \end{equation*} It follows that the vector $[0^{\transpose} \; x_{2}^{\transpose} \; u^{\transpose}]^{\transpose}$ lies in the nullspace of $\subscr{P}{singular}(s)$ for each $s \in \mathbb C$, and thus the descriptor system \eqref{eq:partitioned_descriptor_system} is not left-invertible. In conclusion, if the descriptor system \eqref{eq:partitioned_descriptor_system} is left-invertible, then its invariant zeros coincide with those of the nonsingular system \eqref{eq:associated_nonsingular_system}. \end{proof} It should be noticed that, because of Theorem \ref{thm:inv_zero}, under the assumption of left-invertibility, classical linear systems results can be used to investigate the presence of structurally undetectable attacks in a cyber-physical system; see \cite{JMD-CC-JW:03} for a survey of results on generic properties of linear systems. \subsection{Detectability and identifiability notions for static, dynamic, and active monitors} Observe that a cyber-physical attack is undetectable if there exists a normal operating condition of the system under which the output would be the same as under the perturbation due to the attacker. Let $y(x_0,u,t)$ be the output sequence generated from the initial state $x_0$ under the attack signal $u(t)$. \begin{lemma}{\bf \emph{(Undetectable attack)}}\label{undetectable_input} For the linear descriptor system \eqref{eq: cyber_physical_fault}, the attack $(B_K u_K,D_K u_K)$ is undetectable by a static monitor if and only if $y(x_1,u_K,t) = y(x_2,0,t)$ for some initial conditions $x_1, x_2 \in \mathbb{R}^n$ and for $t \in \mathbb{N}_0$. If the same holds for $t \in \mathbb{R}_{\ge 0}$, then the attack is also undetectable by a dynamic monitor. \end{lemma} Lemma \ref{undetectable_input} follows from the fact that our monitors are deterministic, so that $y(x_1,u_K,t)$ and $y(x_2,0,t)$ lead to the same output $\psi_1$. A more general concern than detectability is the identifiability of attackers, that is, the possibility of distinguishing, from measurements, between the actions of two distinct attacks. We quantify the strength of an attack through the cardinality of the attack set. Since an attacker can independently compromise any state variable or measurement, every subset of the states and measurements of fixed cardinality is a possible attack set.
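Since the detectability and identifiability tests developed below ultimately reduce to checking invariant zeros of a matrix pencil (cf. Theorem \ref{thm:inv_zero}), a direct numerical check is useful in practice. The following is a minimal sketch of ours, not part of the original development, with \texttt{numpy} and \texttt{scipy} assumed to be available: it approximates the invariant zeros of $(E,A,B_K,C,D_K)$ as the finite generalized eigenvalues of the square system pencil, and it applies only when the pencil is square and regular.
\begin{verbatim}
# Sketch: invariant zeros of (E, A, B_K, C, D_K), all numpy arrays,
# as the finite generalized eigenvalues of the pencil
#   P(s) = s * blkdiag(E, 0) - [[A, B_K], [-C, -D_K]].
import numpy as np
from scipy.linalg import block_diag, eig

def invariant_zeros(E, A, BK, C, DK):
    p, k = C.shape[0], BK.shape[1]
    assert p == k, "this sketch assumes a square pencil (as many outputs as attackers)"
    M = np.block([[A, BK], [-C, -DK]])    # constant part of the pencil
    N = block_diag(E, np.zeros((p, k)))   # coefficient of s (singular in general)
    w = eig(M, N, right=False)            # generalized eigenvalues of (M, N)
    return w[np.isfinite(w)]              # keep only the finite zeros
\end{verbatim}
An empty return value then indicates, for the chosen attack signature, that no undetectable attack of the type characterized below exists.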
\begin{lemma}{\bf \emph{(Unidentifiable attack)}}\label{unidentifiable_input} For the linear descriptor system \eqref{eq: cyber_physical_fault}, the attack $(B_K u_K,D_K u_K)$ is unidentifiable by a static monitor if and only if $y(x_1,u_K,t) = y(x_2,u_R,t)$ for some initial conditions $x_1, x_2 \in \mathbb{R}^n$, some attack $(B_R u_R,D_R u_R)$ with $|R|\le|K|$ and $R\neq{K}$, and for $t \in \mathbb{N}_0$. If the same holds for $t \in \mathbb{R}_{\ge 0}$, then the attack is also unidentifiable by a dynamic monitor. \end{lemma} Lemma \ref{unidentifiable_input} follows analogously to Lemma \ref{undetectable_input}. We now elaborate on the above lemmas to derive fundamental detection and identification limitations for the considered monitors. \subsection{Fundamental limitations of static monitors} Following Lemma \ref{undetectable_input}, an attack is undetectable by a static monitor if and only if, for all $t \in \mathbb{N}_0$, there exists a vector $\xi(t)$ such that $y(t) = C \xi(t)$. Notice that this condition is compatible with \cite{YL-MKR-PN:09}, where an attack is detected if and only if the residual $r(t) = y(t) - C \hat x(t)$ is nonzero for some $t \in \mathbb N_0$, where $\hat x(t) = C^{\dag} y(t)$. In the following, let $\|v\|_0$ denote the number of nonzero components of the vector $v$. \begin{theorem}{\bf \emph{(Static detectability of cyber-physical attacks)}} \label{Theorem: Static detectability of cyber-physical attacks} For the cyber-physical descriptor system \eqref{eq: cyber_physical_fault} and an attack set $K$, the following statements are equivalent: \begin{enumerate} \item the attack set $K$ is undetectable by a static monitor; \item there exists an attack mode $u_K (t)$ satisfying, for some $x (t)$ and at every $t \in \mathbb N_0$, \begin{align}\label{eq:cond2_static} C x(t) + D_K u_K(t) = 0. \end{align} \end{enumerate} Moreover, there exists an attack set $K$, with $|K| = k \in \mathbb{N}_0$, undetectable by a static monitor if and only if there exists $x \in \mathbb{R}^n$ such that $\| C x \|_0 =k$. \end{theorem} \smallskip Before presenting a proof of the above theorem, we highlight that a necessary and sufficient condition for equation \eqref{eq:cond2_static} to be satisfied is that $D_K u_{K}(t) = u_{y,K}(t) \in \operatorname{Im} (C)$ at all times $t \in \mathbb N_0$, where $u_{y,K}(t)$ is the vector of the last $p$ components of $u_K(t)$. Hence, statement (ii) in Theorem \ref{Theorem: Static detectability of cyber-physical attacks} implies that {\em no} state attack can be detected by a static detection procedure, and that an undetectable output attack exists if and only if $\operatorname{Im} (D_{K}) \cap \operatorname{Im} (C) \neq \{ 0 \}$. \begin{pfof}{Theorem \ref{Theorem: Static detectability of cyber-physical attacks}} As previously discussed, the attack $K$ is undetectable by a static monitor if and only if for each $t \in \mathbb N_0$ there exist $x(t)$ and $u_{K}(t)$ such that \begin{align*} r(t) = y(t) - CC^{\dagger} y(t) = (I- C C^{\dagger}) \left( C x(t) + D_K u_K(t) \right) \end{align*} vanishes. Consequently, $r(t) = (I - C C^{\dagger}) D_K u_K(t)$, and the attack set $K$ is undetectable if and only if $D_K u_{K}(t) \in \operatorname{Im}(C)$, which is equivalent to statement (ii). The last necessary and sufficient condition in the theorem follows from (ii) and the fact that each output variable can be attacked independently of the others, since $D = \begin{bmatrix}0 , I\end{bmatrix}$.
\end{pfof} We now focus on the static identification problem. Following Lemma \ref{unidentifiable_input}, the following result can be asserted. \begin{theorem}{\bf \emph{(Static identification of cyber-physical attacks)}} \label{Theorem: Static identifiability of cyber-physical attacks} For the cyber-physical descriptor system \eqref{eq: cyber_physical_fault} and an attack set $K$, the following statements are equivalent: \begin{enumerate} \item the attack set $K$ is unidentifiable by a static monitor; \item there exists an attack set $R$, with $|R|\le|K|$ and $R\neq{K}$, and attack modes $u_K (t)$, $u_R (t)$ satisfying, for some $x (t)$ and at every $t \in \mathbb N_0$, \begin{align*} C x(t) + D_K u_K(t) + D_R u_R(t) = 0. \end{align*} \end{enumerate} Moreover, there exists an attack set $K$, with $|K| = k \in \mathbb{N}_0$, unidentifiable by a static monitor if and only if there exists an attack set $\bar K$, with $|\bar K| \le 2k$, which is undetectable by a static monitor. \end{theorem} \smallskip Similar to the fundamental limitations of static detectability in Theorem \ref{Theorem: Static detectability of cyber-physical attacks}, Theorem \ref{Theorem: Static identifiability of cyber-physical attacks} implies that, for instance, state attacks cannot be identified, and that an unidentifiable output attack of cardinality $k$ exists if and only if $\operatorname{Im} (D_{\bar K}) \cap \operatorname{Im} (C) \neq \{ 0 \}$ for some attack set $\bar K$ with $|\bar K| \le 2k$. \begin{pfof}{Theorem \ref{Theorem: Static identifiability of cyber-physical attacks}} Due to the linearity of the system \eqref{eq: cyber_physical_fault}, the unidentifiability condition in Lemma \ref{unidentifiable_input} is equivalent to $y(x_K-x_R,u_{K}-u_{R},t) = 0$ for some initial conditions $x_K$, $x_R$ and attack modes $u_K (t)$, $u_R(t)$. The equivalence between statements (i) and (ii) follows. The last statement follows from Theorem \ref{Theorem: Static detectability of cyber-physical attacks}. \end{pfof} \subsection{Fundamental limitations of dynamic monitors}\label{sec:limitations_dynamic} As opposed to a static monitor, a dynamic monitor checks for the presence of attacks at every time $t \in \mathbb{R}_{\geq 0}$. Intuitively, a dynamic monitor is harder to mislead than a static monitor. The following theorem formalizes this expected result. \begin{theorem}{\bf \emph{(Dynamic detectability of cyber-physical attacks)}}\label{Theorem: Dynamic detectability of cyber-physical attacks} For the cyber-physical descriptor system \eqref{eq: cyber_physical_fault} and an attack set $K$, the following statements are equivalent: \begin{enumerate} \item the attack set $K$ is undetectable by a dynamic monitor; \item there exists an attack mode $u_K (t)$ satisfying, for some $x (0)$ and for every $t \in \mathbb{R}_{\ge 0}$, \begin{align*} E \dot x (t) &= Ax(t) + B_K u_K(t) \,,\\ 0 &= C x(t) + D_K u_K (t) \,; \end{align*} \item there exist $s \in \mathbb{C}$, $g \in \mathbb{R}^{|K|}$, and $x \in \mathbb{R}^{n}$, with $x \neq 0$, such that $(s E - A)x - B_K g = 0$ and $C x + D_K g = 0$. \end{enumerate} \noindent Moreover, there exists an attack set $K$, with $|K| = k$, undetectable by a dynamic monitor if and only if there exist $s \in \mathbb{C}$ and $x \in \mathbb{R}^{n}$ such that $\|(sE - A) x \|_0 + \|C x\|_0 = k$. \end{theorem} \smallskip Before proving Theorem \ref{Theorem: Dynamic detectability of cyber-physical attacks}, some comments are in order.
First, differently from the static case, state attacks {\em can be detected} in the dynamic case. Second, in order to mislead a dynamic monitor, an attacker needs to inject a signal that is consistent with the system dynamics at every instant of time. Hence, as opposed to the static case, the condition $D_K u_{K}(t) = u_{y,K}(t) \in \operatorname{Im} (C)$ needs to be satisfied for every $t \in \mathbb{R}_{\ge 0}$, and it is only necessary for the undetectability of an output attack. Indeed, for instance, state attacks can be detected even though they automatically satisfy the condition $D_K u_{K}(t) = 0 \in \operatorname{Im} (C)$. Third and finally, according to the last statement of Theorem \ref{Theorem: Dynamic detectability of cyber-physical attacks}, the existence of invariant zeros\footnote{For the system $(E,A,B_K,C,D_K)$, the value $s \in \mathbb{C}$ is an invariant zero if there exist $x \in \mathbb{R}^{n}$, with $x \neq 0$, and $g \in \mathbb{R}^{|K|}$ such that $(sE- A)x - B_K g = 0$ and $C x + D_K g = 0$.} for the system $(E,A,B_K, C, D_K)$ is equivalent to the existence of undetectable attacks. As a consequence, a dynamic monitor performs better than a static monitor, while possibly requiring fewer measurements. We refer to Section \ref{sec:example_2} for an illustrative example of this last statement. \begin{pfof}{Theorem \ref{Theorem: Dynamic detectability of cyber-physical attacks}} By Lemma \ref{undetectable_input} and the linearity of the system \eqref{eq: cyber_physical_fault}, the attack mode $u_K(t)$ is undetectable by a dynamic monitor if and only if there exists $x_0$ such that $y(x_0,u_K,t) = 0$ for all $t \in \mathbb R_{\geq 0}$, that is, if and only if the system \eqref{eq: cyber_physical_fault} features zero dynamics. Hence, statements (i) and (ii) are equivalent. For a linear descriptor system with smooth input and consistent initial condition, the existence of zero dynamics is equivalent to the existence of invariant zeros \cite[Theorem 3.2 and Proposition 3.4]{TG:93}. The equivalence of statements (ii) and (iii) follows. The last statement follows from (iii), and the fact that $B = \begin{bmatrix}I , 0\end{bmatrix}$ and $D = \begin{bmatrix}0 , I\end{bmatrix}$. \end{pfof} We now consider the identification problem. \begin{theorem} {\bf \emph{(Dynamic identifiability of cyber-physical attacks)}} \label{Theorem: Dynamic identifiability of cyber-physical attacks} For the cyber-physical descriptor system \eqref{eq: cyber_physical_fault} and an attack set $K$, the following statements are equivalent: \begin{enumerate} \item the attack set $K$ is unidentifiable by a dynamic monitor; \item there exists an attack set $R$, with $|R| \le |K|$ and $R\neq{K}$, and attack modes $u_K(t)$, $u_R(t)$ satisfying, for some $x (0)$ and for every $t \in \mathbb{R}_{\ge 0}$, \begin{align*} E \dot x (t) &= Ax(t) + B_K u_K(t) + B_R u_R(t) \,,\\ 0 &= C x(t) + D_K u_K (t) + D_R u_R (t) \,; \end{align*} \item there exist an attack set $R$, with $|R| \le |K|$ and $R\neq{K}$, $s \in \mathbb{C}$, $g_K \in \mathbb{R}^{|K|}$, $g_R \in \mathbb{R}^{|R|}$, and $x \in \mathbb{R}^{n}$, with $x \neq 0$, such that $(sE - A) x - B_K g_K - B_R g_R = 0$ and $C x + D_K g_K + D_R g_R = 0$. \end{enumerate} Moreover, there exists an attack set $K$, with $|K| = k \in \mathbb{N}_0$, unidentifiable by a dynamic monitor if and only if there exists an attack set $\bar K$, with $|\bar K| \le 2 k$, which is undetectable by a dynamic monitor.
\end{theorem} \begin{pf} Notice that, because of the linearity of the system \eqref{eq: cyber_physical_fault}, the unidentifiability condition in Lemma \ref{unidentifiable_input} is equivalent to the condition $y(x_K-x_R,u_{K}-u_{R},t) = 0$ for some initial conditions $x_K$, $x_R$ and attack modes $u_K (t)$, $u_R(t)$. The equivalence between statements (i) and (ii) follows. Finally, the last two statements follow from Theorem \ref{Theorem: Dynamic detectability of cyber-physical attacks}, and the fact that $B = \begin{bmatrix}I , 0\end{bmatrix}$ and $D = \begin{bmatrix}0 , I\end{bmatrix}$. \end{pf} In other words, the existence of an unidentifiable attack set $K$ of cardinality $k$ is equivalent to the existence of invariant zeros for the system $(E,A,B_{\bar K},C,D_{\bar K})$, for some attack set $\bar K$ with $|\bar K| \le 2k$. We conclude this section with the following remarks. The existence condition in Theorem \ref{Theorem: Dynamic identifiability of cyber-physical attacks} is hard to verify because of its combinatorial complexity: in order to check whether there exists an unidentifiable attack set $K$, with $|K| = k$, one needs to certify the absence of invariant zeros for all possible $2k$-dimensional attack sets. Thus, a conservative verification scheme requires $\binom{n + p }{2k}$ tests. In Section \ref{sec:graph_conditions} we present intuitive graph-theoretic conditions for the existence of undetectable and unidentifiable attack sets for a given sparsity pattern of the system matrices and generic system parameters. Finally, Theorem \ref{Theorem: Dynamic identifiability of cyber-physical attacks} includes as a special case Proposition 4 in \cite{FH-PT-SD:11}, which considers exclusively output attacks. \subsection{Fundamental limitations of active monitors}\label{sec:active} An active monitor uses a control signal (unknown to the attacker) to reveal the presence of attacks; see \cite{YM-BS:10a} for the case of replay attacks. In the presence of an active monitor with input signal $w(t) = [w_x^\transpose(t) \; w_y^\transpose(t)]^\transpose$, the system \eqref{eq: cyber_physical_fault} reads as \begin{align*} \begin{split} E \dot x(t) &= Ax(t) + B_K u_K(t) + w_x(t),\\ y(t) &= C x(t) + D_Ku_K(t) + w_y(t). \end{split} \end{align*} Although the attacker is unaware of the signal $w(t)$, active and dynamic monitors share the same limitations. \begin{theorem}{\bf \emph{(Limitations of active monitors)}}\label{thm:active_det} For the cyber-physical descriptor system \eqref{eq: cyber_physical_fault}, let $w(t)$ be an additive signal injected by an active monitor. The existence of undetectable (respectively unidentifiable) attacks does not depend upon the signal $w(t)$. Moreover, undetectable (respectively unidentifiable) attacks can be designed independently of $w(t)$. \end{theorem} \begin{proof} For the system \eqref{eq: cyber_physical_fault}, let $u(t)$ be the attack mode, and let $w(t)$ be the monitoring input. Let $y(x,u,w,t)$ denote the output generated by the inputs $u(t)$ and $w(t)$ with initial condition $x = x_1 + x_2$. Observe that, because of the linearity of \eqref{eq: cyber_physical_fault}, we have $y(x,u,w,t) = y(x_1,u,0,t) + y(x_2,0,w,t)$, with consistent initial conditions $x_1$ and $x_2$. Then, an attack $u(t)$ is undetectable if and only if $y(x,u,w,t) = y(\bar x,0,w,t)$, or equivalently $y(x_1,u,0,t) + y(x_2,0,w,t) = y(\bar x_1,0,0,t) + y(x_2,0,w,t)$, for some initial conditions $x$ and $\bar x = \bar x_1 + x_2$. The statement follows, since, by the equality above, the detectability of $u(t)$ does not depend upon $w(t)$.
\end{proof} As a consequence of Theorem \ref{thm:active_det}, the existence of undetectable attacks is independent of the presence of known control signals. Therefore, in a worst-case scenario, active monitors are as powerful as dynamic monitors. Since replay attacks are detectable by an active monitor \cite{YM-BS:10a}, Theorem \ref{thm:active_det} shows that replay attacks are not worst-case attacks. \begin{remark}{\bf \emph{(Undetectable attacks in the presence of state and measurement noise)}} The input $w(t)$ in Theorem \ref{thm:active_det} may also represent sensor and actuator noise. In this case, Theorem \ref{thm:active_det} states that the existence of undetectable attacks for a noise-free system implies the existence of undetectable attacks for the same system driven by noise. The converse does not hold, since attackers may remain undetected by injecting a signal compatible with the noise statistics. \oprocend \end{remark} \subsection{Specific results for index-one singular systems} For many interesting real-world descriptor systems, including the examples in Sections \ref{example:power} and \ref{example:water}, the algebraic system equations can be solved explicitly, and the descriptor system \eqref{eq: cyber_physical_fault} can be reduced to a nonsingular state space system. For this reason, this section presents specific results for the case of \emph{index-one} systems \cite{FLL:86}. In this case, without loss of generality, we assume the system \eqref{eq: cyber_physical_fault} to be written in the canonical form \begin{align} \label{eq:weierstrass} \begin{split} \begin{bmatrix} E_{11} & 0\\ 0 & 0 \end{bmatrix} \begin{bmatrix} \dot x_1 \\ \dot x_2 \end{bmatrix} &= \begin{bmatrix} A_{11} & A_{12}\\ A_{21} & A_{22} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 \end{bmatrix} u_K(t),\\ y(t) &= \begin{bmatrix} C_1 & C_2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + D_K u_K(t), \end{split} \end{align} where both $E_{11}$ and $A_{22}$ are nonsingular. The states $x_1$ and $x_2$ are referred to as the \emph{dynamic state} and the \emph{algebraic state}, respectively. The algebraic state can be expressed via the dynamic state and the attack mode as \begin{align}\label{eq: elimination of bus voltages} x_2(t) = -A_{22}^{-1} A_{21} x_1 (t) - A_{22}^{-1} B_2 u_K(t). \end{align} The elimination of the algebraic state $x_2$ in the descriptor system \eqref{eq:weierstrass} leads to the nonsingular state space system \begin{align} \dot x_1 =&\; \underbrace{E_{11}^{-1}\left( A_{11} - A_{12}A_{22}^{-1}A_{21} \right)}_{\tilde A} x_1(t) \nonumber\\ &\;+ \underbrace{ E_{11}^{-1}\left( B_1 - A_{12} A_{22}^{-1} B_2 \right)}_{\tilde B_K} u_K(t), \label{eq: Kron-reduced model}\\ y(t) =&\; \underbrace{\left( C_1 - C_2 A_{22}^{-1} A_{21}\right)}_{\tilde C} x_1(t) + \underbrace{\left( D_K - C_2 A_{22}^{-1} B_2 \right)}_{\tilde D_K} u_K(t). \nonumber \end{align} This elimination of the algebraic states is known as Kron reduction in the literature on power networks and circuit theory \cite{FD-FB:11d}. Hence, we refer to \eqref{eq: Kron-reduced model} as the {\em Kron-reduced system}.
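In computational terms, the Kron reduction \eqref{eq: Kron-reduced model} is a Schur complement with respect to the nonsingular block $A_{22}$. A minimal sketch of ours, with \texttt{numpy} assumed, of the reduction step is:
\begin{verbatim}
import numpy as np

def kron_reduce(E11, A11, A12, A21, A22, B1, B2, C1, C2, DK):
    """Eliminate the algebraic state x2 of an index-one descriptor
    system (A22 nonsingular); returns the Kron-reduced matrices."""
    S = np.linalg.solve(A22, A21)       # A22^{-1} A21
    T = np.linalg.solve(A22, B2)        # A22^{-1} B2
    E11inv = np.linalg.inv(E11)
    A_t = E11inv @ (A11 - A12 @ S)      # tilde A
    B_t = E11inv @ (B1 - A12 @ T)       # tilde B_K
    C_t = C1 - C2 @ S                   # tilde C
    D_t = DK - C2 @ T                   # tilde D_K
    return A_t, B_t, C_t, D_t
\end{verbatim}
Solving with $A_{22}$ rather than forming $A_{22}^{-1}$ explicitly is numerically preferable, although, as noted below, the reduced matrices themselves are in general fully populated.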
Clearly, for any state trajectory $x_{1}(t)$ of the Kron-reduced system \eqref{eq: Kron-reduced model}, the corresponding state trajectory $[x_{1}^\transpose(t) \;x_{2}^\transpose(t)]^\transpose$ of the (non-reduced) cyber-physical descriptor system \eqref{eq: cyber_physical_fault} can be recovered via identity \eqref{eq: elimination of bus voltages}, given knowledge of the input $u_K(t)$. The following subtle issues are easily visible in the Kron-reduced system \eqref{eq: Kron-reduced model}. First, a state attack directly affects the output $y(t)$, provided that $C_2 A_{22}^{-1}B_2 u_K(t) \neq 0$. Second, since the matrix $A_{22}^{-1}$ is generally fully populated, an attack on a single algebraic component can affect not only the locally attacked state and its vicinity but also larger parts of the system. According to the transformations in \eqref{eq: Kron-reduced model}, for each attack set $K$, the attack signature $(B_K,D_K)$ is mapped to the corresponding signature $(\tilde B_K, \tilde D_K)$ in the Kron-reduced system. As an apparent disadvantage, the sparsity pattern of the original (non-reduced) cyber-physical descriptor system \eqref{eq: cyber_physical_fault} is lost in the Kron-reduced representation \eqref{eq: Kron-reduced model}, and so is, possibly, the physical interpretation of the state and the direct representation of system components. However, as we show in the following lemma, the notions of detectability and identifiability of an attack set $K$ defined for the original descriptor system \eqref{eq: cyber_physical_fault} and for the Kron-reduced system \eqref{eq: Kron-reduced model} are equivalent. This property renders the low-dimensional and nonsingular Kron-reduced system \eqref{eq: Kron-reduced model} attractive from a computational point of view for designing attack detection and identification monitors; see \cite{FP-FD-FB:10yb}. \begin{lemma}{\bf \emph{(Equivalence of detectability and identifiability under Kron reduction)}} \label{lemma:equivalence} For the cyber-physical descriptor system \eqref{eq: cyber_physical_fault}, the attack set $K$ is detectable (respectively identifiable) if and only if it is detectable (respectively identifiable) for the associated Kron-reduced system \eqref{eq: Kron-reduced model}. \end{lemma} \begin{proof} The lemma follows from the fact that the input and initial condition to output map of the system \eqref{eq: cyber_physical_fault} coincides with the corresponding map of the Kron-reduced system \eqref{eq: Kron-reduced model} together with equation \eqref{eq: elimination of bus voltages}. Indeed, according to Theorem \ref{Theorem: Dynamic detectability of cyber-physical attacks}, the attack set $K$ is undetectable if and only if there exist $s \in \mathbb{C}$, $g \in \mathbb{R}^{|K|}$, and $x = [x_{1}^{\transpose} \; x_{2}^{\transpose}]^{\transpose} \in \mathbb{R}^{n}$, with $x \neq 0$, such that \begin{equation*} (s E - A)x - B_K g = 0 \mbox{ and } C x + D_K g = 0 \,. \end{equation*} Equivalently, by eliminating the algebraic constraints as in \eqref{eq: elimination of bus voltages}, the attack set $K$ is undetectable if and only if the conditions \begin{equation*} (s I - \tilde A)x_{1} - \tilde B_K g = 0 \mbox{ and } \tilde C x_{1} + \tilde D_K g = 0 \end{equation*} are satisfied together with $ x_2 = -A_{22}^{-1} A_{21} x_1 - A_{22}^{-1} B_2 g$. Notice that the latter equation is always satisfied due to the consistency assumption (A2), and the equivalence of detectability of the attack set $K$ follows.
The equivalence of attack identifiability follows by analogous arguments. \end{proof} \section{Introduction} \input{introduction} \section{Examples of cyber-physical systems}\label{sec:example_systems} We now motivate our study by introducing important cyber-physical systems requiring advanced security mechanisms. \subsection{Power networks}\label{example:power} Future power grids will combine physical dynamics with a sophisticated coordination infrastructure. The cyber-physical security of the grid has been identified as an issue of primary concern \cite{ARM-RLE:10,SS-AH-MG:12}, which has recently attracted the interest of the control and power systems communities; see \cite{AT-AS-HS-KHJ-SSS:10,DG-HS:10,ES:04,FP-AB-FB:10u,FP-FD-FB:11i,AHMR-ALG:11}. We adopt the small-signal version of the classical structure-preserving power network model; see \cite{ES:04,FP-AB-FB:10u} for a detailed derivation from the full nonlinear structure-preserving power network model. Consider a connected power network consisting of $n$ generators $\{g_{1},\dots,g_{n}\}$ and $m$ load buses $\{b_{n+1},\dots,b_{n+m}\}$. The interconnection structure of the power network is encoded by a connected susceptance-weighted graph. The generators $g_{i}$ and buses $b_{i}$ form the vertex set of this graph, and the edges are the transmission lines $\{b_{i},b_{j}\}$ weighted by the susceptance between buses $b_{i}$ and $b_{j}$, as well as the connections $\{g_{i},b_{i}\}$ weighted by the transient susceptance between generator $g_{i}$ and its adjacent bus $b_{i}$. The Laplacian associated with the susceptance-weighted graph is the symmetric susceptance matrix $\mathcal L = \left[ \begin{smallmatrix} \subscr{\mathcal L}{gg} & \subscr{\mathcal L}{gl}\\ \subscr{\mathcal L}{lg} & \subscr{\mathcal L}{ll} \end{smallmatrix} \right] \in \mathbb R^{(n+m) \times (n+m)}$, where the first $n$ rows are associated with the generators and the last $m$ rows correspond to the buses. The dynamic model of the power network is \begin{align} \label{eq: power network descriptor system model} \begin{bmatrix} I\!\! & 0\!\! & 0\\ 0\!\! & \subscr{M}{g}\!\! & 0\\ 0\!\! & 0\!\! & 0 \end{bmatrix} \begin{bmatrix} \dot \delta(t) \\ \dot \omega(t) \\ \dot\theta(t) \end{bmatrix} \!=\! - \begin{bmatrix} 0\!\! & -I\!\! & 0\\ \subscr{\mathcal L}{gg}\!\! & \subscr{D}{g}\!\! & \subscr{\mathcal L}{gl}\\ \subscr{\mathcal L}{lg}\!\! & 0\!\! & \subscr{\mathcal L}{ll} \end{bmatrix} \!\! \begin{bmatrix} \delta(t) \\ \omega(t) \\ \theta(t) \end{bmatrix} \!+\! \begin{bmatrix} 0 \\ P_{\omega}(t) \\ P_{\theta}(t) \end{bmatrix} \!, \end{align} where $\delta(t) \in \mathbb R^{n}$ and $\omega(t) \in \mathbb R^{n}$ denote the generator rotor angles and frequencies, and $\theta(t) \in \mathbb R^{m}$ are the voltage angles at the buses. The terms $\subscr{M}{g}$ and $\subscr{D}{g}$ are the diagonal matrices of the generator inertia and damping coefficients, and the inputs $P_{\omega}(t)$ and $P_{\theta}(t)$ are due to {\em known} changes in the mechanical input power to the generators or in the real power demand at the loads. \subsection{Mass transport networks}\label{example:water} Mass transport networks are prototypical examples of cyber-physical systems modeled by differential-algebraic equations, such as gas transmission and distribution networks \cite{AO:87}, large-scale process engineering plants \cite{AK-PD:99}, and water networks.
Examples of water networks include open channel flows \cite{XL-VF:09} for irrigation purposes and municipal water networks \cite{JB-BG-MCS:09,PFB-KEL-BWK:06}. The vulnerability of open channel networks to cyber-physical attacks has been studied in \cite{SA-XL-SS-AMB:10,RS:11}, and municipal water networks are also known to be susceptible to attacks on the hydraulics \cite{JS-MM:07} and to biochemical contamination threats \cite{DGE-MMP:10}. We focus on the hydraulics of a municipal water distribution network, as modeled in \cite{JB-BG-MCS:09,PFB-KEL-BWK:06}. The water network can be modeled as a directed graph with node set consisting of reservoirs, junctions, and storage tanks, and with edge set given by the pipes, pumps, and valves that are used to convey water from source points to consumers. The key variables are the pressure head $h_{i}$ at each node $i$ in the network as well as the flows $Q_{ij}$ from node $i$ to node $j$. The hydraulic model governing the network dynamics includes constant reservoir heads, flow balance equations at junctions and tanks, and pressure difference equations along all edges: \begin{align} \begin{split} \mbox{reservoir } i:&\quad h_{i} = \supscr{h_{i}}{reservoir} = \text{constant}\,, \\ \mbox{junction } i:&\quad d_{i} = \sum\nolimits_{j \to i} \!Q_{ji} - \sum\nolimits_{i \to k} \!Q_{ik} \,, \\ \mbox{tank } i:&\quad A_{i} \dot h_{i} = \sum\nolimits_{j \to i} \!Q_{ji} - \sum\nolimits_{i \to k} \!Q_{ik}\,, \\ \mbox{pipe } (i,j):&\quad Q_{ij} = Q_{ij}(h_{i} - h_{j}) \,, \\ \mbox{pump } (i,j):&\quad h_{j} - h_{i} = + \supscr{\Delta h_{ij}}{pump} = \text{constant}\,, \\ \mbox{valve } (i,j):&\quad h_{j} - h_{i} = - \supscr{\Delta h_{ij}}{valve} = \text{constant}\,. \end{split} \label{eq:water_network_model} \end{align} Here $d_{i}$ is the demand at junction $i$, $A_{i}$ is the (constant) cross-sectional area of storage tank $i$, and the notation ``$j \to i$'' denotes the set of nodes $j$ connected to node $i$. The flow $Q_{ij}$ depends on the pressure drop $h_{i} - h_{j}$ along a pipe according to the Hazen-Williams equation $Q_{ij}(h_{i} - h_{j}) = g_{ij} |h_{i} - h_{j}|^{(1/1.85)-1} \cdot (h_{i} - h_{j})$, where $g_{ij}>0$ is the pipe conductance. Other interesting examples of cyber-physical systems captured by our modeling framework are sensor networks, dynamic Leontief models of multi-sector economies, mixed gas-power energy networks, and large-scale control systems. \section{Mathematical modeling of cyber-physical systems, monitors, and attacks}\label{sec:setup} \input{setup_new} \section{Limitations of static, dynamic and active monitors for detection and identification}\label{sec:static_dynamic} \input{limitation} \input{smoothness_issues} \section{Graph theoretic detectability conditions}\label{sec:graph_conditions} \input{graph_theoretic_analysis} \section{Illustrative examples}\label{sec:example} \input{example} \section{Conclusion}\label{sec:conclusion} \input{conclusion} \bibliographystyle{IEEEtran} \smallskip \noindent \textbf{Model of cyber-physical systems under attack.} We consider the linear time-invariant descriptor system% \footnote{The results stated in this paper for continuous-time descriptor systems hold also for discrete-time descriptor systems and nonsingular systems.
Moreover, we neglect the presence of known inputs, since, due to the linearity of system \eqref{eq: cyber_physical_fault}, they do not affect our results on the detectability and identifiability of unknown input attacks.} \begin{align}\label{eq: cyber_physical_fault} \begin{split} E \dot x(t) &= Ax(t) + Bu(t),\\ y(t) &= C x(t) + Du(t), \end{split} \end{align} where $x(t) \in \mathbb{R}^n$, $y(t) \in \mathbb{R}^p$, $E \in \mathbb{R}^{n \times n}$, $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, and $D \in \mathbb{R}^{p \times m}$. Here the matrix $E$ is possibly singular, and the input terms $Bu(t)$ and $Du(t)$ are unknown signals describing disturbances affecting the plant. Besides reflecting the genuine failure of system components, these disturbances model the effect of an attack against the cyber-physical system (see below for our attack model). For notational convenience and without affecting generality, we assume that each state and output variable can be independently compromised by an attacker. Thus, we let $B = \begin{bmatrix}I , 0\end{bmatrix}$ and $D = \begin{bmatrix}0 , I\end{bmatrix}$ be partitioned into identity and zero matrices of appropriate dimensions, and, accordingly, $u(t) = \begin{bmatrix} u_{x}(t)^{\transpose},u_{y}(t)^{\transpose} \end{bmatrix}^{\transpose}$. Hence, the \emph{attack} $(Bu(t),Du(t)) = (u_{x}(t),u_{y}(t))$ consists of a \emph{state attack} $u_x(t)$, affecting the system dynamics, and an \emph{output attack} $u_y(t)$, directly corrupting the measurement vector. The attack signal $t \mapsto u(t) \in \mathbb R^{n+p}$ depends upon the specific attack strategy. In the presence of $k \in \mathbb{N}_0$, $k \le n+p$, attackers indexed by the {\em attack set} $K \subseteq \until{n+p}$, precisely the entries of $u(t)$ indexed by $K$ are nonzero over time. To underline this sparsity relation, we sometimes use $u_K(t)$ to denote the {\em attack mode}, that is, the subvector of $u(t)$ indexed by $K$. Accordingly, we use the pair $(B_{K},D_{K})$, where $B_K$ and $D_K$ are the submatrices of $B$ and $D$ with columns indexed by $K$, to denote the {\em attack signature}. Hence, $Bu(t) = B_K u_K (t)$ and $Du(t) = D_K u_K (t)$. Since the matrix $E$ may be singular, we make the following assumptions on system \eqref{eq: cyber_physical_fault}: \begin{enumerate} \item[(A1)] the pair $(E,A)$ is regular, that is, $\textup{det}(sE - A)$ does not vanish identically; \item[(A2)] the initial condition $x(0) \in \mathbb R^{n}$ is consistent, that is, $(Ax(0) + B u(0)) \perp \operatorname{Ker}(E^{\transpose})$; and \item[(A3)] the input signal $u(t)$ is smooth. \end{enumerate} The regularity assumption (A1) assures the existence of a unique solution $x(t)$ to \eqref{eq: cyber_physical_fault}. Assumptions (A2) and (A3) simplify the technical presentation in this paper, since they guarantee smoothness of the state trajectory $x(t)$ and of the measurements $y(t)$; see \cite[Lemma 2.5]{TG:93} for further details. The degree of smoothness required in assumption (A3) depends on the index of $(E,A)$, see \cite[Theorem 2.42]{PK-VLM:06}, and continuity of $u(t)$ is sufficient for the index-one examples presented in Section \ref{sec:example_systems}. In Section \ref{Subsection: smoothness issues} we discuss the results of this paper when assumptions (A2) and (A3) are dropped.
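Returning to the attack signature notation above: with $B = [I \; 0]$ and $D = [0 \; I]$, building $(B_K, D_K)$ is simply a column selection. A minimal sketch of ours, with \texttt{numpy} assumed, is:
\begin{verbatim}
import numpy as np

def attack_signature(n, p, K):
    """Build (B_K, D_K): 1-based indices in K with index <= n select
    state attacks; indices > n select output (sensor) attacks."""
    B = np.hstack([np.eye(n), np.zeros((n, p))])   # B = [I, 0]
    D = np.hstack([np.zeros((p, n)), np.eye(p)])   # D = [0, I]
    cols = [k - 1 for k in K]                      # 1-based -> 0-based columns
    return B[:, cols], D[:, cols]

# Example: attack on state variable 2 and on sensor 1 (n = 3, p = 2).
BK, DK = attack_signature(3, 2, K=[2, 4])
\end{verbatim}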
\smallskip \noindent \textbf{Model of static, dynamic, and active monitors.} A \emph{monitor} is a pair $(\Phi, \gamma(t))$, where $\Phi \, :\, \Lambda \,\rightarrow \, \Psi$ is an algorithm and $\gamma \,:\, \mathbb{R}_{\ge 0} \rightarrow \mathbb R^{n+p}$ is a signal. In particular, $\Lambda$ is the algorithm input, to be specified later; $\Psi = \{\psi_1,\psi_2\}$, with $\psi_1 \in \{\textup{True},\textup{False}\}$ and $\psi_2 \subseteq \until{n+p}$, is the algorithm output; and $(B \gamma (t),D \gamma (t))$ is an auxiliary input injected by the monitor into the system \eqref{eq: cyber_physical_fault}. In this work we consider the following classes of monitors for the system \eqref{eq: cyber_physical_fault}. \begin{definition}{\bf \emph{(Static monitor)}}\label{static_monitor} A \emph{static monitor} is a monitor with $\gamma (t) = 0 \; \forall t \in \mathbb{R}_{\ge 0}$, and $\Lambda = \{C,y(t) \; \forall t \in \mathbb{N}_0\}$. \end{definition} Note that static monitors do not exploit relations among measurements taken at different time instants. An example of a static monitor is the \emph{bad data detector} \cite{AA-AGS:04}. \begin{definition}{\bf \emph{(Dynamic monitor)}}\label{dynamic_monitor} A \emph{dynamic monitor} is a monitor with $\gamma (t) = 0 \; \forall t \in \mathbb{R}_{\ge 0}$, and $\Lambda = \{E,A,C,y(t) \;\forall t \in \mathbb{R}_{\ge 0}\}$. \end{definition} Differently from static monitors, dynamic monitors have knowledge of the system dynamics generating $y(t)$ and may exploit temporal relations among different measurements. The filters defined in \cite{FP-FD-FB:11i} are examples of dynamic monitors. \begin{definition}{\bf \emph{(Active monitor)}}\label{active_monitor} An \emph{active monitor} is a monitor with $\gamma (t) \neq 0$ for some $t \in \mathbb{R}_{\ge 0}$, and $\Lambda = \{E,A,C,y(t) \;\forall t \in \mathbb{R}_{\ge 0}\}$. \end{definition} Active monitors are dynamic monitors with the ability to modify the system dynamics through an input. An example of an active monitor is presented in \cite{YM-BS:10a} to detect replay attacks. The objective of a monitor is twofold: \begin{definition}{\bf \emph{(Attack detection)}}\label{attack_detection} A nonzero attack $(B_K u_K (t), D_K u_K(t))$ is detected by a monitor if $\psi_1 = \textup{True}$. \end{definition} \begin{definition}{\bf \emph{(Attack identification)}}\label{attack_identification} A nonzero attack $(B_K u_K (t), D_K u_K(t))$ is identified by a monitor if $\psi_2 = K$. \end{definition} An attack is called \emph{undetectable} (respectively \emph{unidentifiable}) if it fails to be detected (respectively identified) by every monitor in the same class. Of course, an undetectable attack is also unidentifiable, since it cannot be distinguished from the zero attack. By extension, an attack set $K$ is undetectable (respectively unidentifiable) if there exists an undetectable (respectively unidentifiable) attack $(B_K u_K,D_K u_K)$. \smallskip \noindent \textbf{Model of attacks.} In this work we consider colluding omniscient attackers with the ability to alter the cyber-physical dynamics through exogenous inputs. In particular, we let the attack $(B u(t),Du(t))$ in \eqref{eq: cyber_physical_fault} be designed based on knowledge of the system structure and parameters $E, A, C$, and of the full state $x(t)$ at all times. Additionally, attackers have unlimited computation capabilities, and their objective is to disrupt the physical state or the measurements while avoiding detection.
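For concreteness, a bad-data-type static monitor can be sketched in a few lines. The following is our illustration of the definition above, not the estimator of \cite{AA-AGS:04}: it sets $\psi_1 = \textup{True}$ as soon as some measurement $y(t)$ is inconsistent with the measurement equation, that is, as soon as the residual $(I - CC^{\dagger})\,y(t)$ used in Theorem \ref{Theorem: Static detectability of cyber-physical attacks} is nonzero.
\begin{verbatim}
import numpy as np

def static_monitor(C, ys, tol=1e-9):
    """Static monitor: psi_1 = True iff some sample y(t) has a nonzero
    residual (I - C C^+) y(t), i.e., y(t) does not lie in Im(C)."""
    P = np.eye(C.shape[0]) - C @ np.linalg.pinv(C)  # projector onto Im(C)^perp
    return any(np.linalg.norm(P @ y) > tol for y in ys)
\end{verbatim}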
\smallskip \begin{remark}{\bf \emph{(Existing attack strategies as subcases)}} \begin{figure}[tb] \centering \subfigure[Static stealth attack]{ \includegraphics[width=.485\columnwidth]{./img/stealth_attack} \label{stealth_attack} } \!\!\!\!\!\!\subfigure[Replay attack]{ \includegraphics[width=.485\columnwidth]{./img/replay_attack} \label{replay_attack} }\\ \subfigure[Covert attack]{ \includegraphics[width=.485\columnwidth]{./img/covert_attack} \label{covert_attack} } \!\!\!\!\!\!\subfigure[Dynamic false data injection]{ \includegraphics[width=.485\columnwidth]{./img/dynamic_attack} \label{dynamic_attack} } \caption{A block diagram illustration of prototypical attacks. In Fig. \ref{stealth_attack} the attacker corrupts the measurements $y(t)$ with a signal $D_K u_K (t) \in \operatorname{Im} (C)$; notice that this attack does not take the system dynamics into account. In Fig. \ref{replay_attack} the attacker affects the output so that $y(t) = y(x(0),[\bar u_K^\transpose \; u_K^\transpose]^\transpose,t) = y(\tilde x(0),0,t)$. The covert attack in Fig. \ref{covert_attack} is a feedback version of the replay attack, and it can be explained analogously. In Fig. \ref{dynamic_attack} the attack is such that the unstable pole $p$ is made unobservable.} \label{fig:prototipical_attacks} \end{figure} The following prototypical attacks can be modeled and analyzed through our theoretical framework: \begin{enumerate} \item \emph{stealth attacks} defined in \cite{DG-HS:10} correspond to output attacks compatible with the measurements equation; \item \emph{replay attacks} defined in \cite{YM-BS:10a} are state and output attacks which affect the system dynamics and reset the measurements; \item \emph{covert attacks} defined in \cite{RS:11} are closed-loop replay attacks, where the output attack is chosen to cancel out the effect of the state attack on the measurements; and \item \emph{(dynamic) false-data injection attacks} defined in \cite{YM-BS:10b} are output attacks rendering an unstable mode (if any) of the system unobservable. \end{enumerate} A possible implementation of the above attacks in our model is illustrated in Fig. \ref{fig:prototipical_attacks}. \oprocend \end{remark} To conclude this section, we remark that the examples presented in Section \ref{sec:example_systems} are captured in our framework. In particular, classical power network failures modeled by additive inputs include sudden changes in the mechanical power input to generators, line outages, and sensor failures; see \cite{FP-FD-FB:11i} for a detailed discussion. Analogously, for a water network, faults modeled by additive inputs include leakages, variations in demand, and failures of pumps and sensors. Possible cyber-physical attacks in both power and water networks include compromising measurements \cite{SA-XL-SS-AMB:10,AT-AS-HS-KHJ-SSS:10,YL-MKR-PN:09} and attacks on the control architecture or the physical state itself \cite{SS-AH-MG:12,CLD-JVS-FA:96,AHMR-ALG:11,JS-MM:07}. \subsection{Attack detection and identification in the presence of inconsistent initial conditions and impulsive attack signals}\label{Subsection: smoothness issues} We now discuss the case of non-smooth attack signals and inconsistent initial conditions. If the consistency assumption (A2) is dropped, then discontinuities in the state $x(t \downarrow 0)$ may affect the measurements $y(t \downarrow 0)$.
For instance, for index-one systems, an inconsistent initial condition leads to an initial jump in the algebraic variable $x_{2}(t \downarrow 0)$, which must obey equation \eqref{eq: elimination of bus voltages}. Consequently, the inconsistent initial value $[0^{\transpose} \; x_{2}(0)^{\transpose}]^{\transpose} \in \operatorname{Ker}(E)$ cannot be recovered through measurements. Assumption (A3) requires the attack signal to be sufficiently smooth so that $x(t)$ and $y(t)$ are at least continuous. Suppose that assumption (A3) is dropped and the input $u(t)$ belongs to the class of impulsive smooth distributions $\mathcal C_{\textup{imp}} = \mathcal C_{\textup{smooth}} \cup \mathcal C_{\textup{p-imp}}$, that is, loosely speaking, the class of functions given by the linear combination of a smooth function on $\mathbb R_{\geq 0}$ (denoted by $\mathcal C_{\textup{smooth}}$) and Dirac impulses and their derivatives at $t=0$ (denoted by $\mathcal C_{\textup{p-imp}}$); see \cite{TG:93} and \cite[Section 2.4]{PK-VLM:06}. In this case, an attacker commanding an impulsive input $u \in \mathcal C_{\textup{imp}}$ can reset the initial state $x(0)$ and, possibly, evade detection. The discussion in the previous two paragraphs can be formalized as follows. Let $\mathcal V_{c}$ be the subspace of points $x_{0} \in \mathbb R^{n}$ of consistent initial conditions for which there exist an input $u \in \mathcal C^{n+p}_{\textup{smooth}}$ and a state trajectory $x \in \mathcal C^{n}_{\textup{smooth}}$ of the descriptor system \eqref{eq: cyber_physical_fault} such that $y(t) = 0$ for all $t \in \mathbb{R}_{\geq 0}$. Let $\mathcal V_{d}$ (respectively $\mathcal W$) be the subspace of points $x_{0} \in \mathbb R^{n}$ for which there exist an input $u \in \mathcal C^{n+p}_{\textup{imp}}$ (respectively $u \in \mathcal C^{n+p}_{\textup{p-imp}}$) and a state trajectory $x \in \mathcal C^{n}_{\textup{imp}}$ (respectively $x \in \mathcal C^{n}_{\textup{p-imp}}$) of the descriptor system \eqref{eq: cyber_physical_fault} such that $y(t) = 0$ for all $t \in \mathbb{R}_{\geq 0}$. The output-nulling subspace $\mathcal V_{d}$ can be decomposed as follows. \begin{lemma} \label{Lemma: Decomposition of output-nulling subspace} {\bf \emph{(Decomposition of the output-nulling space \cite[Theorem 3.2 and Proposition 3.4]{TG:93})}} $\mathcal V_{d} = \mathcal V_{c} + \mathcal W + \operatorname{Ker}(E)$. \end{lemma} In words, from an initial condition $x(0) \in \mathcal V_{d}$ the output can be nullified by a smooth input or by an impulsive input (with consistent or inconsistent initial conditions in $\operatorname{Ker}(E)$). In this work we focus on the smooth output-nulling subspace $\mathcal V_{c}$, which is exactly the space of zero dynamics identified in Theorems \ref{Theorem: Dynamic detectability of cyber-physical attacks} and \ref{Theorem: Dynamic identifiability of cyber-physical attacks}. Hence, by Lemma \ref{Lemma: Decomposition of output-nulling subspace}, for inconsistent initial conditions, the results presented in this section are valid only for strictly positive times $t>0$. On the other hand, if an attacker is capable of injecting impulsive signals, then it can avoid detection for initial conditions $x(0) \in \mathcal W$.
\section*{Introduction} \label{sec:int} Chaotic and stochastic systems have been extensively studied and the fundamental difference between them is well known: in a chaotic system an initial condition always leads to the same final state, following a fixed rule, while in a stochastic system an initial condition leads to a variety of possible final states, drawn from a probability distribution \cite{ott2002chaos}. However, the signals generated by chaotic and stochastic systems are not always easy to distinguish, and many methods have been proposed to differentiate chaotic and stochastic time series \cite{ikeguchi1997difference,rosso2007distinguishing,lacasa_2010,zunino2012distinguishing, ravetti2014distinguishing,kulp2014discriminating, quintero_njp_2015,toker2020simple,lopes2020parameter}. A related important problem is how to appropriately quantify the strength and length of the temporal correlations present in a time series \cite{simonsen1998determination,weron2002estimating,carbone2007algorithm,witt_2013}. The performance of these methods varies with the characteristics of the time series. As far as we know, no method works well with all data types, because the methods have different limitations in terms of the length of the time series, the level of noise, the stationarity or seasonality of the underlying process, the presence of linear or nonlinear correlations, etc. Moreover, any time series analysis method returns at least one number. Therefore, to obtain interpretable results, the values obtained from the analysis of the time series of interest need to be compared with those obtained from ``reference'' time series, for which we have prior knowledge of the underlying system that generates the data. Here we use as ``reference'' model a well-known stochastic process: flicker noise (FN). A FN time series is characterized by a power spectrum $P(f) \propto 1/f^\alpha$, with $\alpha$ being a quantifier of the correlations present in the signal \cite{beran2016long}. Flicker noise has been extensively studied in diverse areas such as electronics \cite{voss1976flicker,hooge1981experimental}, biology \cite{peng1992long,peng1993long}, physics \cite{bak1987self,silva2019correlated}, economy \cite{granger1996varieties,mandelbrot1997variation}, meteorology \cite{koscielny1998indication}, astrophysics \cite{press1978flicker}, etc. Furthermore, many methods described in the literature can estimate the correlation quantifier $\alpha$ or related measures, such as the Hurst exponent $\mathcal{H}$ \cite{ikeguchi1997difference,simonsen1998determination,carbone2007algorithm,beran2016long,olivares2016quantifying,weron2002estimating,silva2019correlated}. In this paper we propose a new methodology that simultaneously allows us to distinguish chaotic from stochastic time series and to quantify the strength of the correlations. Our algorithm, based on an Artificial Neural Network (ANN)~\cite{koza1996automated}, is easy to run and freely available \cite{ann_repository}. We first train the ANN with flicker noise to predict the value of the $\alpha$ parameter that was used to generate the noise. The input features to the ANN are probabilities extracted from the FN time series using ordinal analysis \cite{bandt2002permutation}, a symbolic method widely used to identify patterns and nonlinear correlations in complex time series \cite{rosso2009detecting,rosso2009detecting2,parlitz2012classifying}.
Each sequence of $D$ data points (consecutive, or with a certain lag between them) is converted into a sequence of $D$ relative values (smallest to largest), which defines an ordinal pattern. Then, the frequencies of occurrence of the different patterns in the time series define the set of ordinal probabilities, which in turn allow the calculation of information-theoretic measures such as the permutation entropy (PE, described in {\it Methods}). The PE has been extensively used in the literature because it is straightforward to calculate and robust to observational noise. Interdisciplinary applications have been discussed in Ref.~\cite{zanin2012permutation} and, more recently, in a Special Issue~\cite{focusissue}. After training the ANN with different FN time series, $x_s(\alpha)$, generated with different values of $\alpha$, we input to the ANN the ordinal probabilities extracted from the time series of interest, $x$, and analyze the output of the ANN, $\alpha_{\mathrm{e}}$. We find that $\alpha_{\mathrm{e}}$ is informative of the temporal correlations present in the time series $x$. Moreover, by comparing the PE values of $x$ and of $x_s(\alpha_{\mathrm{e}})$ (a FN time series generated with the value of $\alpha$ returned by the ANN), we can differentiate between chaotic and stochastic signals: the PE values of $x$ and $x_s$ are similar when $x$ is mainly stochastic, but they differ when $x$ is mainly deterministic. Therefore, the difference of the two PE values serves as a quantifier to distinguish between chaotic and stochastic signals. We use several datasets to validate this approach. We also analyze its robustness with respect to the length of the time series and to noise contamination. This paper is organized as follows. In the main text we present the results of the analysis of synthetic and empirical time series, which are described in section {\it Data sets}. Typical examples of the time series analyzed are presented in Fig.~\ref{fig:datasets}. In section {\it Methods} we describe the ordinal method and the implementation of the algorithm, schematically represented in Fig.~\ref{fig:method}. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{Figs/Fig_dataset.png} \caption{\textbf{Examples of time series analyzed, their probability density functions (PDFs) and power spectral densities (PSDs).} (a),(b) Time series generated by iteration of the $\beta x$ map [Eq. (\ref{eq:betax}) with $\beta=2$] and its PDF. (c),(d) Uniformly distributed white noise and its PDF. We see that the PDF of the deterministic map is identical to the PDF of the noise. (e),(f) PSD of the Schuster map, Eq. (\ref{eq:schuster}) with parameter $z=1.5$, and of a flicker noise with $\alpha=1$. We note that the PSD of the Schuster map has a power-law decay that is very similar to the $1/f^\alpha$ decay of the noise. (g) PDF of $m$ summed logistic maps (Eq. \ref{eq:logistic} with $r=4$), which approaches a Gaussian as $m$ increases. (h)-(k) Examples of empirical time series analyzed: (h) stride-to-stride intervals of an adult walking at slow speed, interpreted as a stochastic process; (i) daily number of sunspots as a function of time (in years), whose fluctuations are interpreted as stochastic; (k) voltage across the capacitor of an inductor-less Chua electronic circuit, whose oscillations are known to be chaotic.} \label{fig:datasets} \end{figure} \begin{figure}[tb!]
\centering \definecolor{blue(pigment)}{rgb}{0.51, 0.56, 0.59} \begin{tikzpicture}[scale=.9] \draw [darkgray,->,very thick] (-4.0,0) .. controls (-4.0,-1) .. (-3.0,-1); \draw [darkgray,->,very thick] (-1.6,-1) .. controls (-1.6,-2) .. (-0.6,-2); \draw [darkgray,->,very thick] (0,-2) .. controls (0,-3) .. (1,-3); \draw [darkgray,->,very thick] (3,-3) .. controls (3,-4) .. (4,-4); \node[fill={blue(pigment)},text=white,align=left, anchor=base ,rounded corners,blur shadow={shadow blur steps=5}] at (-1,0) {1. Extract the ordinal probabilities of the time series.} ; \node[fill={blue(pigment)},text=white,align=left, anchor=base ,rounded corners,blur shadow={shadow blur steps=5}] at (2.1,-1) {2. Input them to the ANN; obtain the correlation coefficient $\alpha_\mathrm e$.} ; \node[fill={blue(pigment)},text=white,align=left, anchor=base ,rounded corners,blur shadow={shadow blur steps=5}] at (3.8,-2) {3. Generate a time series of flicker noise with $\alpha=\alpha_\mathrm e$.}; \node[fill={blue(pigment)},text=white,align=left, anchor=base ,rounded corners,blur shadow={shadow blur steps=5}] at (6.8,-3) {4. Compare the permutation entropies of the time series and of the noise.}; \node[fill={blue(pigment)},text=white,align=left, anchor=base ,rounded corners,blur shadow={shadow blur steps=5}] at (8.1,-4) {5. Classify the time series as chaotic or stochastic.}; \end{tikzpicture} \caption{\textbf{Schematic representation of the methodology.} We compute the probabilities of the ordinal patterns and then use them as input features to the ANN. The ANN returns the temporal correlation coefficient $\alpha_\mathrm e$. Then we compare the permutation entropy of the analyzed time series with the permutation entropy of a FN time series generated with the same $\alpha_\mathrm e$ value. Based on this comparison, we use Eq. (\ref{delta_h}) to classify the time series as chaotic or stochastic.} \label{fig:method} \end{figure} \section*{Results}\label{sec:results} \subsection*{Analysis of synthetic datasets} The main result is depicted in Fig. \ref{fig:det_1}. Panel (a) shows the normalized permutation entropy $\bar S(\alpha_\mathrm{e})$ (Eq. (\ref{eq:normalized_permutation_entropy})) vs. the time-correlation coefficient $\alpha_{\mathrm{e}}$. The filled (empty) symbols correspond to different types of stochastic (chaotic) time series, and the solid black line corresponds to FN time series generated with $\alpha \in [-1,3]$, whose exponent is accurately recovered by the ANN. For $\alpha_\mathrm{e} = 0$, FN has a uniform power spectrum, characteristic of an uncorrelated signal (white noise), with equal ordinal probabilities $\mathcal{P}(i) \approx 1/D!$ and, hence, $\bar S = 1$. For $\alpha \ne 0$, in contrast, some ordinal patterns occur in the time series more often than others, the ordinal probabilities are not all equal, and the permutation entropy decreases. These results are consistent with those that have been obtained using different methodologies \cite{kulp2014discriminating, olivares2016quantifying, lopes2020parameter}. In Fig.~\ref{fig:det_1} we note that fBm signals have a stronger time correlation ($\alpha_\mathrm e$ closer to $2$, a classic Brownian motion) than the other stochastic processes ($\alpha_\mathrm e \approx 0$). However, their permutation entropies are very close to those of the FN signals. The key observation is that the stochastic time series all fall close to the FN curve, while the chaotic ones (the $\beta x$ map, the Lorenz system, the logistic map, the skew tent map, and the Schuster map) do not.
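For reference, the ordinal probabilities and the normalized permutation entropy $\bar S$ used throughout this section can be computed in a few lines. The following is a minimal sketch of ours (\texttt{numpy} assumed) following the standard Bandt--Pompe recipe with embedding dimension $D$ and unit lag; it illustrates Eq. (\ref{eq:normalized_permutation_entropy}), but it is not necessarily the exact implementation of \cite{ann_repository}.
\begin{verbatim}
import numpy as np
from collections import Counter
from math import factorial

def permutation_entropy(x, D=6):
    """Normalized permutation entropy of x (Bandt-Pompe, lag 1)."""
    x = np.asarray(x)
    patterns = [tuple(np.argsort(x[i:i + D]))
                for i in range(len(x) - D + 1)]
    counts = np.array(list(Counter(patterns).values()), float)
    probs = counts / counts.sum()        # ordinal probabilities P(i)
    S = -np.sum(probs * np.log(probs))   # Shannon entropy of the patterns
    return S / np.log(factorial(D))      # normalize by ln(D!)
\end{verbatim}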
The distance to the FN curve thus serves to distinguish stochastic from chaotic time series. This is quantified by \begin{equation} \Omega(\alpha_\mathrm e) = \frac{|\bar S_\mathrm{fn}(\alpha_{\mathrm{e}}) - \bar S|}{\bar S_{\mathrm{fn}}(\alpha_{\mathrm{e}})}, \label{delta_h} \end{equation} where $\bar S$ is the permutation entropy of the analyzed time series and $\bar S_{\mathrm{fn}}(\alpha_{\mathrm{e}})$ is the PE of a flicker noise time series generated with the value of $\alpha$ returned by the ANN, $\alpha_{\mathrm{e}}$. The results are presented in Fig.~\ref{fig:det_1}, panel (b), where we see that stochastic signals have $\Omega \approx 0$, while deterministic signals have $\Omega > 0$. To summarize this finding, Table \ref{tab:table_1} reports $\alpha_\mathrm e$ and $\Omega$ for ten representative systems. \begin{figure}[tb!] \centering \includegraphics[width=.9\columnwidth]{Figs/Fig_det.png} \caption{\textbf{Temporal correlation and distinction of chaotic and stochastic synthetic signals.} Panel (a) shows the normalized permutation entropy $\bar S$ as a function of $\alpha_{\mathrm{e}}$, evaluated through the ANN, for different time series (stochastic and chaotic). The black solid line represents FN signals, which are used for training and testing the ANN. Filled symbols represent different stochastic signals, which are very close to the FN curve. Empty symbols represent chaotic signals, which lie far from it. The distance to the FN curve is measured by $\Omega$ [Eq.~(\ref{delta_h})] and is shown in panel (b): higher values of $\Omega$ indicate chaotic time series, while vanishing values indicate stochastic ones.} \label{fig:det_1} \end{figure} \begin{table}[tb] \setlength{\tabcolsep}{7pt} \centering \begin{tabular}{l c c } \hline \hline Stochastic process $\;\;\;\;\;\;\;\;$ & $\alpha_\mathrm e$ & $\Omega (\alpha_\mathrm e)$ \\ \hline FN ($\alpha = 0$) & $-0.008 \pm 0.013$ & $0.00006 \pm 0.00005$ \\ fBm ($\mathcal H = 0.5$) & $\;\;\;1.74 \pm 0.01$ & $0.005 \pm 0.001$ \\ fGn ($\mathcal H = 0.5$) & $-0.009 \pm 0.014$ & $0.00006 \pm 0.00004$ \\ Cauchy & $-0.008 \pm 0.004$ & $0.00007 \pm 0.00005$ \\ Uniform & $-0.008 \pm 0.003$ & $0.00007 \pm 0.00006$ \\ \hline \hline Chaotic systems & $\alpha_\mathrm e$ & $\Omega (\alpha_\mathrm e)$ \\ \hline $\beta x$ map & $\;\;\;1.277 \pm 0.002$ & $0.2612 \pm 0.0002$ \\ Lorenz system ($\mathrm{max}(x)$) & $\;\;\;0.79 \pm 0.04$ & $0.33 \pm 0.004$ \\ Logistic map & $\;\;\;0.823 \pm 0.003$ & $0.3585 \pm 0.0001$ \\ Schuster map & $\;\;\;1.364 \pm 0.002$ & $0.3855 \pm 0.0004$ \\ Skew Tent map & $-0.142 \pm 0.002$ & $0.5256 \pm 0.0004$ \\ \hline \hline \end{tabular} \caption{Results obtained from stochastic and deterministic time series: mean and standard deviation of the $\alpha_\mathrm e$ parameter and of $\Omega(\alpha_\mathrm e)$ (Eq.~\ref{delta_h}), calculated from $1000$ time series generated with different initial conditions and noise seeds.} \label{tab:table_1} \end{table} Next, we study the applicability of our methodology to noise-contaminated signals. We analyze the signal \begin{equation} Z_t = (1-\eta)X_t + \eta Y_t, \;\;\; t = 1,\cdots,N, \label{eq: noise} \end{equation} where $X_t$ is a deterministic (chaotic) signal ``contaminated'' by a uniform white noise $Y_{t}$, and $\eta \in [0,1]$ controls the stochastic component of $Z_t$. For $\eta = 0 \, (1)$ the signal is fully deterministic (fully stochastic). Figure~\ref{fig:det_2}(a) shows $\Omega$ as a function of $\eta$ for different chaotic signals.
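The classification step of Fig.~\ref{fig:method} can then be sketched as follows (our sketch): given the exponent $\alpha_\mathrm{e}$ returned by the trained ANN (here simply an input argument; the trained network itself is available at \cite{ann_repository}), a flicker-noise surrogate is synthesized by one standard spectral-shaping recipe (the paper's exact generator is described in {\it Methods}), and Eq.~(\ref{delta_h}) is evaluated with the \texttt{permutation\_entropy} helper sketched above. In practice, $\bar S_{\mathrm{fn}}$ would be averaged over several noise realizations.
\begin{verbatim}
import numpy as np

def flicker_noise(alpha, N, rng=None):
    """Surrogate 1/f^alpha noise via spectral shaping of white noise."""
    if rng is None:
        rng = np.random.default_rng()
    f = np.fft.rfftfreq(N)
    f[0] = f[1]                          # avoid division by zero at DC
    X = np.fft.rfft(rng.standard_normal(N)) / f ** (alpha / 2.0)
    return np.fft.irfft(X, n=N)

def omega(x, alpha_e, D=6):
    """Quantifier Omega: relative PE distance between the series x and
    a flicker-noise surrogate with the ANN-estimated exponent alpha_e;
    uses permutation_entropy as sketched above."""
    S = permutation_entropy(x, D)
    S_fn = permutation_entropy(flicker_noise(alpha_e, len(x)), D)
    return abs(S_fn - S) / S_fn
\end{verbatim}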
As expected, for $\eta=0$, $\Omega$ is high, but as $\eta$ grows, the level of stochasticity increases and $\Omega$ decreases. For $\eta > 0.5$, the signal is strongly stochastic, as reflected by $\Omega \approx 0$. For comparison, in Fig. \ref{fig:det_2}(a) we also present results obtained by shuffling a chaotic time series. As expected, $\Omega \approx 0$ for all $\eta$ because temporal correlations are destroyed by the shuffling process. \begin{figure}[tb!] \centering \includegraphics[width=\columnwidth]{Figs/Fig_det_2.png} \caption{\textbf{Limits for identifying determinism.} Panel (a) shows the influence of noise contamination: the quantifier $\Omega$, Eq. (\ref{delta_h}), is plotted as a function of the level of noise, $\eta$, in Eq. (\ref{eq: noise}). We see that as $\eta$ increases, $\Omega$ decreases, but it remains, for high values of $\eta$, different from the value obtained from shuffled data (black pentagons). Therefore, small values of $\Omega$ can still reveal determinism in noise-contaminated signals. Panel (b) shows the effect of adding several independent chaotic signals: $\Omega$ is plotted as a function of the number $m$ of signals added. We see that as $m$ increases, $\Omega$ decreases, indicating that the deterministic nature of the signal can no longer be detected. } \label{fig:det_2} \end{figure} We expect that the addition of a sufficiently large number of independent chaotic signals gives a signal that is indistinguishable from a fully stochastic one. This is verified in Fig. \ref{fig:det_2}(b), where the horizontal axis represents the number, $m$, of independent chaotic signals added. Here a high value of $\Omega$ is observed for $m = 1$ (a single chaotic signal), but as $m$ increases $\Omega \rightarrow 0$ since the chaotic nature of the added signals is no longer captured (examples of the PDFs of the time series obtained from the addition of $m$ logistic maps were presented in Fig. \ref{fig:datasets}(g), where we can observe a clear evolution towards a Gaussian shape). To further explore the robustness of our methodology, we investigate the role of the length $N$ of the analyzed time series in the evaluation of the $\Omega$ quantifier (Eq. (\ref{delta_h})). Figure \ref{fig:fig_det_3} shows $\Omega$ as a function of $N$, where panel (a) depicts stochastic signals, and (b) chaotic ones. We see that even for $N<10^{2}$, for all stochastic signals in panel (a) $\Omega < 0.1$, which indicates that we can identify the stochastic nature of short signals. For the chaotic signals in panel (b), for $N>10^2$, $\Omega > 0.1$ (except for the $\beta x$ map), and for $N \geq 10^{3}$, $\Omega > 0.2$ for all signals, which demonstrates that our method can also detect determinism in short signals. \begin{figure}[tb!] \centering \includegraphics[width=0.95\columnwidth]{Figs/Fig_det_3.png} \caption{\textbf{Robustness with respect to the time series length.} We evaluate $\Omega$ as a function of the time series length $N$ for stochastic signals (panel (a)) and chaotic ones (panel (b)). In (a) we observe that even for $N < 100$, $\Omega < 0.1$, which confirms the stochastic nature of the signal. In (b), even for $N < 100$, $\Omega > 0.1$ (except for the $\beta x$ map). Also, $\Omega > 0.2$ for all cases for $N \geq 1000$, which classifies the signals as chaotic even with only 1000 datapoints. The error bars are the standard deviation over $1000$ samples.
(c) Mean absolute error, $\mathcal{E}$, between the output of the ANN ($\alpha_{\mathrm{e}}$) and the parameter $\alpha$ used to generate the time series of flicker noise (depicted in color-code) as a function of the length of the time series, $N$.} \label{fig:fig_det_3} \end{figure} As discussed in Sec.~{\it Methods}, the ANN was trained with flicker noise signals with $2^{20}$ data points. However, it is interesting to analyze how much data the trained ANN needs in order to correctly predict the $\alpha$ value of a flicker noise time series. To address this point, we generate $L=1\,000$ FN time series and analyze the error of the ANN output, $\alpha_{\mathrm{e}}$, as a function of the length of the time series, $N$, and of the value of $\alpha$ used to generate the time series. The results are presented in panel (c) of Fig.~\ref{fig:fig_det_3}, which displays the mean absolute error, $\mathcal{E} =\frac{1}{L} \sum_{\ell=1}^{L}|\alpha_{\mathrm e} - \alpha|$. We see that as $N$ increases, $\mathcal{E}$ decreases. The error depends on both $\alpha$ and $N$ and tends to be larger for high $\alpha$ due to non-stationarity and finite time sampling \cite{lopes2020parameter}. For FN time series longer than $10^{4}$ datapoints, the ANN returns a very accurate value of $\alpha$. \subsection*{Analysis of empirical time series} Here we present the analysis of time series recorded under very different experimental conditions, as described in section {\it Datasets}. Figure \ref{fig:experimental} displays the results in the plane ($\alpha_{\mathrm{e}}$, $\Omega$). The $\Omega$ values obtained for the Chua circuit data and for the laser data confirm their chaotic nature \cite{chua1994chua,gershenfeld1993future} ($\Omega \approx 0.55$ and $\Omega \approx 0.20$, respectively). For the star light intensity, $\Omega \approx 0$, confirming the stochastic nature of the signal~\cite{weigend2018time}. For the number of sunspots, which is a well-known long-memory noisy time series, $\Omega \approx 0$. In this case the value of $\alpha$ obtained ($\alpha \approx 2$) confirms the results of Singh \textit{et al.} \cite{singh2017early}, where a Hurst exponent close to 1 was found. Regarding the five time series of RR-intervals of healthy subjects, our algorithm identifies stochasticity ($\Omega \approx 0$) in all of them, which is consistent with the findings of Ref.~\cite{toker2020simple}. The last empirical set analyzed reveals the nature of the dynamics of human gait: regardless of the age of the subjects, $\Omega \approx 0$, confirming the stochastic behavior discussed in \cite{hausdorff1996fractal}. In the inset we show that the $\alpha_\mathrm e$ value returned by the ANN decreases with age, which is also in line with the results presented in \cite{hausdorff1999maturation}, obtained with Detrended Fluctuation Analysis (see Fig. 6 of Ref.~\cite{hausdorff1999maturation}). The authors interpret this variation as due to an age-related change in stride-to-stride dynamics, where the gait dynamics of young adults (healthy) appears to fluctuate randomly, but with less time-correlation in comparison to young children \cite{hausdorff1999maturation}. \begin{figure}[tb!] \centering \includegraphics[width=0.95\columnwidth]{Figs/Fig_exp.png} \caption{\textbf{Analysis of empirical time series.} Results obtained from each time series are presented in the plane ($\alpha_{\mathrm{e}}$, $\Omega$). Deterministic signals are the Chua circuit data (brown triangle) and the laser data (orange `X' marker) that have $\Omega > 0$.
The other signals [the light intensity of a star (yellow dot), the number of sunspots (cyan diamond), the heart variability of healthy subjects (magenta thin diamond), and the different groups of human gait dynamics (green, blue, red, and black squares)] are stochastic and have $\Omega \approx 0$. For the human gait, the inset depicts $\alpha_\mathrm e$ as a function of the age of each subject. Consistent with \cite{hausdorff1999maturation}, the correlation coefficient $\alpha_\mathrm e$ decreases with age. } \label{fig:experimental} \end{figure} \section*{Discussion}\label{sec:discussions} We have proposed a new time series analysis technique that allows us to differentiate stochastic from chaotic signals and to quantify temporal correlations. We have demonstrated the methodology by using synthetic and empirical signals. Our method is based on locating a time series in a two-dimensional plane determined by the permutation entropy and the value of a temporal correlation coefficient, $\alpha_e$, returned by a machine learning algorithm. In this plane, stochastic signals are very close to a curve defined by Flicker noise, while chaotic signals are located far from this curve. We have used this fact to define a quantifier, $\Omega$, that is the distance to the FN curve. $\Omega$ serves to distinguish stochastic and chaotic time series, and it can be used to analyze time series even when they are very short (for time series of $100$ datapoints we found $\Omega<0.1$ for stochastic signals and $\Omega>0.1$ for chaotic ones, Fig.~\ref{fig:fig_det_3}). We also found that small values of $\Omega$ can be used to identify underlying determinism in noise-contaminated signals, and in signals that result from the addition of a number of independent chaotic signals (Fig.~\ref{fig:det_2}). We have also used our algorithm to analyze six empirical datasets, and obtained results that are consistent with prior knowledge of the data (Fig.~\ref{fig:experimental}). {\textcolor{black}{Taken together, these results show that the proposed methodology allows answering the questions of how to quantify stochasticity and temporal correlations in a time series.}} Our algorithm is fast, easy-to-use, and freely available~\cite{ann_repository}. Thus, we believe that it will be a valuable tool for the scientific community working on time series analysis. {\textcolor{black}{Existing methods have limitations in terms of the characteristics of the data (length of the time series, level of noise, etc.)}}. A limitation of our algorithm lies in the analysis of noise-contaminated periodic signals, because their temporal structure may not be distinguishable from the temporal structure of a fully stochastic signal with a large $\alpha$ value. Future work will be directed at trying to overcome this limitation. {\textcolor{black}{Here, for a ``proof-of-concept'' demonstration, we have used a well-known machine learning algorithm (a feed-forward ANN), a rather simple training procedure, and a popular entropy measure (the permutation entropy). We have not tried to optimize the performance of the algorithm. We expect that different machine learning algorithms, training procedures, and entropy measures can give different performances, depending on the characteristics of the data analyzed.
Therefore, the methodology proposed here has a high degree of flexibility, which allows optimizing the performance for the analysis of particular types of data.}} \section*{Methods} \subsection*{Ordinal analysis and permutation entropy} Ordinal analysis allows the identification of patterns and nonlinear correlations in complex time series \cite{bandt2002permutation}. For each sequence of $D$ data points in the time-series (consecutive, or with a certain lag between them), their values are replaced by their relative amplitudes, ordered from $0$ to $D-1$. {\textcolor{black}{For instance, a sequence $\{0,5,10,13\}$ in the time series transforms into the ordinal pattern ``0123'', while $\{0,13,5,10\}$ transforms into ``0312''. As an example, Fig.~\ref{fig:ordinal_patterns} shows the ordinal patterns formed with $D=4$ consecutive values.}} We evaluate the frequency of occurrence of each pattern, defined as the ordinal probability $\mathcal{P}(i)$ with $\sum_{i=1}^{D!}\mathcal{P}(i)=1$, where $i$ represents each possible pattern. Then, we evaluate the Shannon entropy, known as permutation entropy \cite{bandt2002permutation}: \begin{equation} S(D) = - {\sum_{i=1}^{D!}\mathcal{P}(i)\ln{(\mathcal{P}(i)})}. \label{eq:permuation_entropy} \end{equation} The permutation entropy varies from $S(D)=0$ if the $j$-th state $\mathcal P(j)=1$ (while $\mathcal P(i)=0$ $\forall\; i\ne j$) to $S(D)=\ln({D!})$ if $\mathcal{P}(i)=1/D!$ $\forall\; i$. The normalized permutation entropy used in this work is given by: \begin{equation} \bar S(D) = \frac{S(D)}{\ln D!}. \label{eq:normalized_permutation_entropy} \end{equation} \textcolor{black}{In this work, to calculate the ordinal patterns, we have used the algorithm proposed by Parlitz and coworkers~\cite{parlitz2012classifying}. We have used $D = 6$ and no lag, i.e., $D-1$ values overlap in the definition of two consecutive ordinal patterns. Therefore, we use as features to the ANN (see below) the $D!=720$ probabilities of the ordinal patterns. For a robust estimation of these probabilities, a time series of length $N \gg D!$ is needed. However, as we show in Fig.~\ref{fig:fig_det_3}, the algorithm returns meaningful values even for time series that are much shorter.} \begin{figure} \centering \includegraphics[width=.9\columnwidth]{Figs/tikz.pdf} \caption{\textcolor{black}{Schematic illustration of the $24$ ordinal patterns that can be defined from $D=4$ consecutive data values in a time series.}} \label{fig:ordinal_patterns} \end{figure} \subsection*{Artificial Neural Network} \textcolor{black}{Deep learning is part of a broader family of machine learning methods based on artificial neural networks (ANNs) \cite{wiki}. In this work, we use the deep learning framework \textit{Keras} \cite{kerasio} to compile and train an ANN. Since we want to regress the information of the features into a real value (a classical scalar regression problem \cite{chollet2017deep}), an appropriate option is a feed-forward ANN. The ANN is trained to evaluate the time-correlation coefficient considering as features the $720$ probabilities of the ordinal patterns. We connect the input layer to a single dense layer with $64$ units, which is in turn connected to a final one-unit layer that regresses all the information of the inputs into a real number. Other combinations were tested with different numbers of units ($16,\,512,\,1024$) and layers. We found that a single layer with 64 units was sufficient to accurately predict the $\alpha$ value. The ANN parameters and the compilation setup are given in Table \ref{table_feed_forward}.}
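As an illustration of this architecture, the following is a minimal training sketch (our code, not the authors'; it assumes the \texttt{colorednoise} package for the flicker-noise generator and the \texttt{ordinal\_probabilities} helper sketched in the Results section, and it uses reduced sample sizes with respect to the actual training setup described below):
\begin{verbatim}
# Minimal sketch of the feed-forward ANN described above: 720 ordinal
# probabilities -> Dense(64, relu) -> Dense(1); Adam, MSE loss, MAE metric.
import numpy as np
import colorednoise as cn
from tensorflow import keras

def make_sample(n_points=2**16):          # paper: 2**20 points
    alpha = np.random.uniform(-1.0, 3.0)  # target correlation coefficient
    series = cn.powerlaw_psd_gaussian(alpha, n_points)
    return ordinal_probabilities(series, D=6), alpha

model = keras.Sequential([
    keras.Input(shape=(720,)),
    keras.layers.Dense(64, activation="relu"),  # 720*64 + 64 = 46144 params
    keras.layers.Dense(1, activation=None),     # 64 + 1 = 65 params
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

X, y = zip(*(make_sample() for _ in range(1000)))  # paper: 50 000 series
model.fit(np.array(X), np.array(y), validation_split=0.2, epochs=20)
\end{verbatim}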
\textcolor{black}{As explained in the discussion, we have used the feed-forward ANN as a simple option for a ``proof-of-concept'' demonstration. Other deep learning/machine learning methods or a different compilation setup may give better results depending on the type of data that is analyzed.} \begin{table}[htb!] \centering \caption{\textcolor{black}{Compilation setup and parameters of the feed-forward ANN.}} \vspace*{0.0cm} \begin{tabular}{l l l l} \hline \hline Compilation setup &\hspace{3.8cm} & & \\ \hline Optimizer & & & Adam \\ Loss function & & & Mean square error \\ Metrics & & & Mean absolute error \end{tabular} \begin{tabular}{l l r l} \hline \hline Trainable Parameters & & & \\ \hline Layer (type) & Output Shape & Param \# & Activation\\ \hline Dense \# 1 & $64$ & 46144 & `relu' \\ Dense \# 2 & $1$ & 65 & None\\ \hline \hline & & Total params: 46209 \end{tabular} \label{table_feed_forward} \end{table} \textcolor{black}{The training stage of the ANN is performed using a dataset of $50\,000$ flicker noise time series with $N=2^{20}$ points, where the parameter $\alpha$ of each time series is randomly chosen in $-1\leq \alpha \leq 3$ (see Section {\it Datasets} for details). We separate the dataset into two sets: the training dataset ($L_\mathrm{train}=40\,000$), and the test dataset ($L_\mathrm{test}=10\,000$). To quantify the error between the output of the ANN and the target, $\alpha$, we use the \textit{mean absolute error}: \begin{equation} \mathcal{E} =\frac{1}{L} \sum_{\ell=1}^{L}|\alpha_{\mathrm e,\ell} - \alpha_{\ell}|, \label{eq:mae} \end{equation} where $L$ is the number of samples. Once the training stage is concluded, the parameters of the ANN are fixed. After that, we apply the ANN to the test dataset, and the error in the evaluation of $\alpha_{\mathrm{e}}$ with respect to $\alpha$ is $\mathcal{E}(L_\mathrm{test}) \approx 0.01$.} \section*{Datasets} \label{sec:datasets} \subsection*{Stochastic systems}\label{sec:stochastic_data} In this paper we use three types of stochastic signals: flicker noise (FN), fractional Brownian motion (fBm), and fractional Gaussian noise (fGn). Flicker noise (FN) or colored noise time series are used for the training of the Artificial Neural Network. They are generated with the open Python library \textit{colorednoise.py} \cite{colorednoisepy,timmer1995generating}. fBm and fGn time series are generated with the Python library \textit{fbm.py} \cite{fbmpy}. Both types of time series depend on the Hurst index $\mathcal H$ \cite{zunino2007characterization}. For the fBm $\mathcal H = 0.5$ corresponds to the classical Brownian motion. If $\mathcal H > 0.5$ ($\mathcal H < 0.5$) the time-series is positively (negatively) correlated. For fGn $\mathcal H = 0.5$ characterizes a white noise \cite{zunino2007characterization}; if $\mathcal H>0.5$ ($\mathcal H<0.5$) the fGn time series exhibits long-memory (short-memory). The Hurst index is related to the $\alpha$ of flicker noise: for a fBm stochastic process, $\alpha = 2\mathcal H+1$ and $1<\alpha<3$; for fGn, $\alpha = 2\mathcal H-1$ and $-1<\alpha<1$ \cite{zunino2007characterization}. \subsection*{Chaotic systems} In this paper, we analyze time series generated by five chaotic systems: 1) The generalized Bernoulli chaotic map, also known as the $\beta x$ map, described by: \begin{equation}\label{eq:betax} x_{t+1} = \beta x_t \; (\mathrm{mod}\ 1), \end{equation} where $\beta$ controls the dynamical characteristics of the map. Throughout this paper, we use $\beta = 2$, which leads to a chaotic signal \cite{ott2002chaos}.
2) The well-known logistic map \cite{ott2002chaos}: \begin{equation}\label{eq:logistic} x_{t+1} = r x_t(1-x_t), \end{equation} where we use $r = 4$ to obtain a chaotic signal \cite{ott2002chaos}. 3) The Schuster map \cite{schuster2006deterministic}, which exhibits intermittent signals with a power spectrum $P(f)\sim 1/f^{z}$. It is defined as: \begin{equation}\label{eq:schuster} x_{t+1} = x_t + x^z_t \; (\mathrm{mod}\ 1), \end{equation} where we use $z = 0.5$. 4) The skew tent map, which is defined as \begin{equation}\label{eq:tent} x_{t+1} = \begin{cases} x_t/\omega &\mbox{if } x_t \in [0,\omega], \\ (1-x_t)/(1-\omega) & \mbox{if } x_t \in (\omega,1]. \end{cases} \end{equation} Here, we use $\omega = 0.1847$ in order to obtain a chaotic signal \cite{rosso2007distinguishing}. 5) The well-known Lorenz system, defined as: \begin{eqnarray}\label{eq:lorenz} \frac{dx(t)}{dt} & = & \sigma (y-x),\\ \frac{dy(t)}{dt} & = & x(R-z) -y,\\ \frac{dz(t)}{dt} & = & xy-bz, \end{eqnarray} with parameters $\sigma = 16$, $R = 45.92$, and $b = 4$, which lead to a chaotic motion \cite{wolf1985determining}. We analyze the time series of consecutive maxima of the $x$ variable. \subsection*{Empirical datasets} We test our methodology with empirical datasets recorded from diverse chaotic or stochastic systems. Additional information on the datasets can be found in Table \ref{tab:table_2}. These are: \textbf{Dataset E-I:} Data recorded from an inductorless Chua's circuit constructed as in Ref.~\cite{torres2000inductorless}. The circuit was set up and the data was kindly sent to us by Vandertone Santos Machado \cite{exp_data_I}. The voltages across the capacitors exhibit chaotic oscillations. To detect this chaoticity, we compute the maxima of the voltage across the first capacitor, $C_1$. \textbf{Dataset E-II:} Fluctuations of a chaotic laser, approximately described by three coupled nonlinear differential equations \cite{gershenfeld1993future}. To detect the chaoticity of the laser, we compute the maxima of the time series. The data is available in \cite{exp_data_II}. \textbf{Dataset E-III:} Light intensity of a variable dwarf star (PG1159-035) \cite{gershenfeld1993future} with $17$ time-series (segments). These variations may be caused by an intrinsic change in emitted light (superposition of multiple independent spherical harmonics \cite{gershenfeld1993future}), or by an object partially blocking the light as seen from Earth. The fluctuations in the intensity of the star have been observed to result in a noisy signal \cite{gershenfeld1993future}. The data is open and freely available in \cite{exp_data_III}. \textbf{Dataset E-IV:} Three time-series of sunspot numbers for the period 1976--2013 \cite{singh2017early}; the daily sunspot numbers exhibit a noisy ``pseudo-sinusoidal'' behavior. It is accepted that magnetic cycles in the Sun are generated by a solar dynamo produced through nonlinear interactions between solar plasmas and magnetic fields \cite{allen2010derivation,choudhuri2000current}. However, the fluctuations in the period of the cycles are still difficult to understand \cite{passos2008low}. This type of data has been analyzed in \cite{singh2017early}, where its stochastic fluctuations display a Hurst exponent $\mathcal H \approx 1$, meaning that the data carries memory. The data can be found at \cite{exp_data_IV_1,exp_data_IV_2,exp_data_IV_3}. \textbf{Dataset E-V:} Five RR-interval time-series from healthy subjects.
Each time series has $\sim 100\,000$ RR intervals (the signals were recorded using continuous ambulatory electrocardiograms during 24 hours). It is still debated whether heart rate variability is chaotic or stochastic \cite{toker2020simple}: while some studies suggest that it is a stochastic process \cite{toker2020simple,baillie2009normal,zhang2009stochastic}, several chaos-detection analyses have identified it as a chaotic signal \cite{toker2020simple,glass2009introduction}. The dataset is open and freely available in \cite{exp_data_V}. \textbf{Dataset E-VI:} Fractal dynamics of the human gait as well as the maturation of the gait dynamics. The stride interval variability can exhibit random fluctuations with long-range power-law correlations, as observed in \cite{hausdorff1996fractal}. Moreover, this time-correlation tends to decrease in older children \cite{hausdorff1996fractal,hausdorff1999maturation}. The analyzed dataset is then separated into $4$ groups, according to the subjects' ages. Group No.~$1$ has data for $n = 14$ subjects aged 3 to 5 years; Group No.~$2$ has data for $n = 21$ subjects aged 6 to 8 years; Group No.~$3$ has data for $n = 15$ subjects aged 10 to 13 years; Group No.~$4$ has data for $n = 10$ subjects aged 18 to 29 years \cite{hausdorff1996fractal}. The data is open and freely available in \cite{goldberger2000PhysioBank,exp_data_VI_1,exp_data_VI_2}. \begin{table}[htb] \setlength{\tabcolsep}{7pt} \centering \begin{tabular}{l c c c} \hline \hline Dataset & Number of samples & Mean length $\langle N\rangle$ & Availability \\ \hline {\bf E-I} & $1$ & $6\,000$ & \cite{exp_data_I} \\ {\bf E-II} & $1$ & $750$ & \cite{exp_data_II} \\ {\bf E-III} & $17$ & $1\,600$ & \cite{exp_data_III} \\ {\bf E-IV} & $3$ & $15\,000$ & \cite{exp_data_IV_1,exp_data_IV_2,exp_data_IV_3} \\ {\bf E-V} & $5$ & $100\,000$ & \cite{exp_data_V} \\ {\bf E-VI} & $60$ & $800$ & \cite{exp_data_VI_1,exp_data_VI_2} \\ \hline \hline \end{tabular} \caption{Characteristics of the empirical datasets.} \label{tab:table_2} \end{table} \section*{Acknowledgments} The authors thank Vandertone Santos Machado for providing the deterministic data of a chaotic Chua circuit. B.R.R.B., R.C.B., K.L.R., T.L.P. and S.R.L. acknowledge partial support of Conselho Nacional de Desenvolvimento Científico e Tecnológico, CNPq, Brazil, Grants No. 302785/2017-5 and 308621/2019-0, and the Coordenação de Aper\-fei\-çoamento de Pessoal de Nível Superior, Brasil (CAPES), Finance Code 001. C.M. acknowledges funding by the Spanish Ministerio de Ciencia, Innovacion y Universidades (PGC2018-099443-B-I00) and the ICREA ACADEMIA program of Generalitat de Catalunya.
\section{Introduction} High-voltage nanosecond pulses are widely used in modern low-temperature plasma physics research and technology. At high pressures (of the order of $10^4-10^5$~Pa) they, e.g., provide ionization for fast plasma switches and pumping for powerful pulsed gas lasers \cite{KorMes}, or generate plasmas for biomedical applications \cite{BioMed}. In the pressure range of $10^2-10^4$~Pa such pulses are able to launch the so-called fast ionization waves propagating at a speed comparable to the speed of light \cite{FIW}, which makes it possible to use them for the fast ignition of chemically reactive gas mixtures \cite{Ignition}. High-voltage pulses can also stabilize discharges in powerful CO$_2$ lasers \cite{Laser}. At the same time, there is a growing interest in applying the high-voltage nanosecond pulses under low-pressure conditions, i.e. in the range of $10^{-1}-10^2$~Pa. Amirov \textit{et al.} \cite{Amirov1, Amirov2}, who were the first to combine a classical dc glow discharge in a glass tube with a short high-voltage pulse, reported on the so-called ``glow pause'' (or ``dark phase'', as it was termed later in Refs. \onlinecite{DP1, DP2})~\textendash~a period of time after the pulse when the discharge becomes practically dark. Later, several experiments were reported, in which high-voltage nanosecond pulses were used to manipulate the dust particles levitating in weak low-pressure discharges [\onlinecite{NanoVas1}-\onlinecite{NanoPust2}]. Another idea was to combine capacitively-coupled radiofrequency (RF) and high-voltage nanopulse discharges, in order to enhance the production of H$^-$ ions in low-pressure hydrogen plasmas. Particle-in-cell (PIC) simulations of such sources were recently published in Refs.~\onlinecite{Hmin1, Hmin2}. Thus, the evolution of self-sustained low-pressure discharges disturbed by a high-voltage nanosecond pulse needs to be investigated at the very basic level. In this work we performed a comprehensive study of single nanosecond pulses applied to a steady-state low-pressure capacitively coupled RF plasma. We approached the problem experimentally, by combining time-resolved imaging of the discharge and microwave interferometry measurements. Furthermore, we supplemented our measurements with PIC simulations of our plasma. By comparing the multi-timescale evolution of the plasma in the simulations and experiments, we investigated the physical mechanisms underlying different discharge regimes. \section{Experimental setup} \begin{figure}[t!] \centering \includegraphics [width=3.1in]{Setup.pdf} \caption{Experimental setup for investigations of the nanopulse discharge. High-voltage nanosecond pulses are applied to the discharge gap, in which a steady-state capacitively-coupled plasma is sustained. The $50~\Omega$ load reduces the reflection of the pulse. The ICCD camera is synchronized with the pulse generator. A $30$~m long $50~\Omega$ coaxial line delays the pulse, allowing us to observe the initial stage of the discharge development.} \label{Fi:STP} \end{figure} The experiments were performed in a parallel-plate reactor with aluminium electrodes of $150$~mm diameter separated by $54$~mm (Fig.~\ref{Fi:STP}). The reactor was filled with argon at a pressure $p$ of $0.1-10$~Pa. The lower electrode was connected to the RF generator via a blocking capacitor. The upper electrode was connected to the pulse generator in parallel with the $50~\Omega$ load.
The RF generator continuously supplied sinusoidal voltage at the frequency $\omega/2\pi=13.56$~MHz and peak-to-peak voltage $U_{pp}=40-100$~V to the lower electrode, producing the steady-state capacitively-coupled plasma between the electrodes. The reaction of this plasma to a high-voltage nanosecond pulse is the subject of our investigation. Pulses from the pulse generator (FID Technologies, FPG 20-M), with the fixed duration of $20$~ns, risetime of about $2$~ns, and variable amplitude $U_A=3-17$~kV were applied to the upper electrode at the repetition frequency of $20$~Hz. The electrode design aimed to minimize the stray capacitance, which was as low as $\approx 20$~pF, with the corresponding electrode charging time of $\approx1$~ns. This ensured that the electrode followed the waveform of the pulse. The discharge gap was imaged with an Andor DH-740 ICCD camera. Opening of the image intensifier gate of the camera was synchronized with the high-voltage pulse and could be precisely positioned in time with respect to it. This allowed us to record the evolution of our discharge in a sequence of video frames using repetitive pulses. For each position of the camera gate the image was integrated on the camera CCD over $1$~s. The gate width, being the effective exposure time of each image, and step, being the effective interframe interval, were set according to the particular type of measurement. A $30$~m long $50~\Omega$ coaxial cable, through which the pulse was supplied to the chamber, served as a delay line, enabling us to observe the plasma before the pulse arrived at the electrode. An interference filter with the central wavelength of $750$~nm and $10$~nm width, selecting two atomic transitions of argon, $2p_1\to 1s_2$ and $2p_5\to 1s_4$ with the lifetimes of $22.5$ and $24.9$~ns \cite{ArOpt}, respectively, was placed in front of the camera lens. To measure the evolution of the plasma density at a late stage of the pulsed discharge we used a microwave interferometer (Miwitron, MWI 2650) with the frequency of $26.5$~GHz \cite{MWI}. The emitter sent a probing electromagnetic wave to the plasma through a lateral glass window. The receiver was aligned with respect to the emitter in front of the opposite glass window, so that a horizontal line of sight was formed. The time-resolved phase shift $\phi$ of the probing electromagnetic wave, proportional to the line-of-sight averaged electron density $n_e$, was monitored by an oscilloscope. The time resolution of this measurement was $10$~$\mu$s. Simultaneously, we measured the intensity of integral plasma emission $I_{int}$, collecting the light from the plasma by a small collimating lens and guiding it via a $600$~$\mu$m diameter optic fiber cable to a photomultiplier module (Hamamatsu, H7827-012) with the $200$~kHz bandwidth. The resulting curves, averaged over $32$ pulses, were recorded both for the interferometric phase shift and the integral emission intensity. \section{PIC simulations}\label{Sc:PIC} We employed a 1D3V PIC code with Monte-Carlo collisions (MCC) \cite{Birdsall, Verboncoeur} to simulate a discharge with two parallel-plate electrodes separated by a gap of $L=50$~mm and filled with pure argon. The MCC part of the code was based on a standard approach for argon \cite{MCCArgon}. An important modification of the standard model, caused by the need to monitor the transient processes on ns timescale, was the treatment of the argon excited states, whose lifetimes are of the order of the pulse duration.
For instance, the $1s_4$ state, which was considered in the present simulation, has a lifetime of $8.6$~ns \cite{ArRes} and a transition energy to the ground state of $11.6$~eV. The resulting vacuum ultraviolet (VUV) photons are able to produce photoemission from the electrodes with a yield $\gamma\sim 0.1$ \cite{Raizer}. Therefore, in our simulations we counted the number of $1s_4$ excited states (created by electron impact excitation of ground-state atoms and decaying according to their natural lifetime). Since the plasma between the electrodes was considered to be optically thin for these VUV photons, each act of decay led, with probability $\gamma$, to the immediate creation of a photoelectron at one of the electrodes. Given the short lifetime of the excited states, this allowed us to discard their spatial distribution and account only for their total number. Similar to the experiment, in our simulations we first generated a steady-state discharge. We set appropriate boundary conditions on the electrodes, i.e. a sinusoidal voltage of $13.56$~MHz frequency was applied to one electrode and the other electrode was grounded. After the RF discharge reached equilibrium, a high voltage was applied to the previously grounded electrode for a period of $\tau=20$~ns, after which grounding was restored. The subsequent relaxation of the discharge was monitored. In the experiment, the ICCD camera registered the light emission intensity $I_{exp}$, whereas the simulation dealt with the plasma kinetics and therefore allowed us to access the excitation rate. Evolution of the emission intensity is determined by the convolution of the excitation rate $\Gamma_{exc}(t)$ and the exponential decay function $\exp{(-t/T)}$, where $T$ is the lifetime of the upper level of the transition. In order to compare the simulation and experimental results, we therefore recalculated the simulated excitation rate into the emission intensity $I_{sim}$, using $T=24.9$~ns for the lifetime of the $2p_5\to 1s_4$ transition. \section{Experimental results} The plasma relaxation after the high-voltage pulse turned out to be quite a complicated multi-timescale process with two characteristic regimes: a bright flash at the initial stage of the discharge with the characteristic width of the order of $100$~ns (when the emission intensity increases by $2-3$ orders of magnitude above the steady-state level), and the so-called dark phase lasting from several hundreds of $\mu s$ to several ms (when the emission intensity drops by $1-2$ orders of magnitude below the steady-state level). The latter regime appears to be similar to that reported in Refs. [\onlinecite{Amirov1}-\onlinecite{DP2}]. Typical space-time diagrams for both regimes are presented in Fig.~\ref{Fi:XTs}a and~\ref{Fi:XTs}b. \begin{figure*}[t!] \centering \includegraphics [width=6in]{XTs.pdf} \caption{(a) Space-time diagram of the flash (ICCD gate width $10$~ns, gate step $2$~ns). The black line shows the evolution of the voltage on the upper electrode. An afterpulse (at $\simeq200$~ns) and a re-reflected pulse (at $\simeq350$~ns) give rise to additional emission peaks. (b) Space-time diagram of the dark phase (ICCD gate width $2~\mu$s, gate step $2$~$\mu$s). The shown results are for $p=3$~Pa, $U_{pp}=100$~V and $U_A=8$~kV. For each value of $z$ the intensity is averaged over approximately $4$~cm horizontally.
Note that intensities in the two panels cannot be directly compared due to different gate widths.} \label{Fi:XTs} \end{figure*} We note that the flash does not occur during the pulse, as it does in pulsed discharges at atmospheric pressure \cite{KorMes}. Significant growth of emission intensity starts when the high voltage is removed from the electrode. The light emission at the flash stage is characterized by a complicated dependence on $U_A$. We compared these dependencies measured for three different values of $U_{pp}$. For the smallest $U_{pp}=40$~V (Fig.~\ref{Fi:IUppUA}a) the intensity primarily decreases with $U_A$, for $U_{pp}=56$~V (Fig.~\ref{Fi:IUppUA}b) it first increases and then decreases, and for $U_{pp}=100$~V (Fig.~\ref{Fi:IUppUA}c) it primarily increases. This is another distinct feature of our discharge. Usually, in high-pressure pulsed discharges \cite{KorMes} the flash intensity monotonically increases with the pulse amplitude. \begin{figure}[t!] \centering \includegraphics [width=3.1in]{ImaxvsUppUA.pdf} \caption{Temporal evolution of the light emission at the initial (flash) stage, for $p=1.5$~Pa and (a) $U_{pp}=40$~V, (b) $U_{pp}=56$~V, (c) $U_{pp}=100$~V. Growth of the emission intensity after the peak is associated with the afterpulse (Fig.~\ref{Fi:XTs}a). The dashed area indicates the high-voltage pulse.} \label{Fi:IUppUA} \end{figure} We note here that in our experiments the emission intensity exhibits a series of flashes (rather than a single initial flash), as can be seen in Fig.~\ref{Fi:XTs}a. The same is evident in Fig.~\ref{Fi:IUppUA}, where the intensity starts growing again after the first flash. These ``follow-up flashes'' occur due to the afterpulse (with amplitude $\simeq20\%$ of $U_A$) produced by the pulse generator, as well as due to the re-reflection of the main pulse. Nevertheless, in all cases the initial flash is well separated and its intensity can be easily determined. Unfortunately, the presence of follow-up flashes did not allow us to perform careful studies of the effect of $U_A$ on the dark phase. \begin{figure}[t!] \centering \includegraphics [width=3.1in]{DPExp.pdf} \caption{Temporal evolution of the light emission during the dark phase, obtained for $U_A=8$~kV. The figure shows (a) curves for different $p$ at fixed $U_{pp}=100$~V and (b) curves for different $U_{pp}$ at fixed $p=1.5$~Pa. All curves are normalized to the steady-state emission intensity.} \label{Fi:DPExp} \end{figure} In our experiments the dark phase was observed for practically all studied plasma conditions, as illustrated in Fig.~\ref{Fi:DPExp}. With the increase of pressure an overshoot of emission intensity (also obvious in the space-time diagram in Fig.~\ref{Fi:XTs}b) starts to develop at the end of the dark phase. The results of the microwave interferometry measurements for $p=10$~Pa are shown in Fig.~\ref{Fi:MWI}. They indicate that the plasma density, tremendously increased during the flash, remains very high (compared to its steady-state value) also during the dark phase. Measurements at smaller pressures exhibit similar dynamics of plasma density in the dark phase, whereas the steady-state values are too small to be measured reliably. \begin{figure}[t!] \centering \includegraphics [width=3.1in]{MWI.pdf} \caption{Temporal evolution (a) of the integral emission intensity and (b) of the phase shift in the microwave interferometry for $p=10$~Pa and $U_A=8$~kV.
The dark phase is accompanied by a dramatic increase of the plasma density.} \label{Fi:MWI} \end{figure} The importance of ``high-pressure'' measurements shown in Fig.~\ref{Fi:MWI} is that they allow us to track the variation of plasma density also in the overshoot. They clearly demonstrate that during the overshoot $n_e$ drops below the steady-state value. \section{Discussion} In order to identify the physical mechanisms underlying the observed behavior of the plasma, we compare our experimental results with the results of PIC simulations. We do it separately for the flash and the dark phase regimes. \subsection{Flash}\label{Sc:IEP} Although the profile of the high-voltage pulse is not free from some spurious features seen in Fig.~\ref{Fi:XTs}a, the resulting flash, as mentioned above, is always easy to identify. Therefore, below we discuss the flash assuming that it was created by a single high-voltage pulse of $20$~ns duration and given amplitude. For our PIC simulations (Sec.~\ref{Sc:PIC}) we used pulses of this idealized shape. \subsubsection{Mechanism of flash generation} \label{Sc:QA} \textit{Qualitative analysis.} Before starting a detailed comparison between our experiments and simulations, we shall demonstrate that even a simple approach based on elementary estimates and scalings can explain the main characteristics of the flash regime. Let us consider a quasineutral plasma slab with the density $n$ between two infinite plane electrodes, separated by a gap $L$. At a certain moment the voltage $U_A$ is applied to one of the electrodes for the period $\tau$. In this consideration we completely neglect the RF electric field (which sustained the steady-state discharge), since it is supposed to be much smaller than the pulse field $E_A=U_A/L$. After the pulse field is applied, electrons in the plasma start moving. For $p\sim1$~Pa, $L=50$~mm and $U_A\sim1$~kV the electron-neutral collisions can be neglected and the electron motion can be considered as ballistic. The time required for an electron to cross the gap is then \begin{equation} \tau_e=L\sqrt{\frac{2m_e}{eU_A}}, \end{equation} where $m_e$ and $e$ are the electron mass and charge, respectively. For the range of $U_A$ employed in our experiments $\tau_e$ lies between $1$ and $3$~ns and therefore is much smaller than the pulse duration $\tau$. Hence, electrons are able to leave the gap during the pulse. As electrons are leaving the plasma, the bulk positive charge due to immobile ions (their characteristic time of flight $\tau_i=\tau_e\sqrt{m_i/m_e}$ is in the sub-$\mu$s range) starts building up. Therefore, after removal of the pulse, the \textit{``residual''} electric field $E_{res}$ is generated: It is determined by $\partial E_{res}/\partial z\sim e(n-n_{res})/\epsilon_0$, where $z$ is the discharge axis and $n_{res}$ is the \textit{residual} electron density (remaining in the discharge gap after the removal of the pulse field). Hence we get the following estimate for the ``residual'' field: \begin{equation} E_{res}\sim\frac{e(n-n_{res})L}{\epsilon_0}. \label{Eq:ResE} \end{equation} We conclude that the ``residual'' electric field can vary in the range $0<E_{res}\lesssim E_i$, where \begin{equation} E_i=\left. E_{res}\right|_{n_{res}=0}\sim\frac{enL}{\epsilon_0} \end{equation} is the electric field of immobile bulk ions. For a typical plasma density $n=2\times10^{14}$~m$^{-3}$ we get $E_i\sim2\times10^5$~V/m.
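These estimates are straightforward to verify numerically; the following short sketch (our illustration, in SI units) reproduces the quoted numbers:
\begin{verbatim}
# Back-of-envelope check of the estimates above (our sketch, SI units).
import numpy as np

e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
L = 50e-3                       # gap used in the simulations, m

# Ballistic electron transit time tau_e = L*sqrt(2*m_e/(e*U_A)).
for U_A in (3e3, 17e3):         # experimental pulse-amplitude range, V
    tau_e = L * np.sqrt(2.0 * m_e / (e * U_A))
    print(f"U_A = {U_A/1e3:g} kV: tau_e = {tau_e*1e9:.1f} ns")
# -> about 3.1 ns and 1.3 ns, within the quoted 1-3 ns range.

# Field of the bare (immobile) ions, E_i ~ e*n*L/eps0.
n = 2e14                        # typical plasma density, m^-3
print(f"E_i ~ {e * n * L / eps0:.1e} V/m")   # ~ 1.8e5, i.e. ~2e5 V/m
\end{verbatim}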
Thus, after removal of the pulse field, residual electrons get accelerated by the ``residual'' field and start ionizing the neutral gas. A significant electric field will therefore be present until the excess ions are diluted by the newly generated electrons and ions. Since the ``residual'' field is due to positive bulk charge, electrons are trapped inside the gap, which provides ideal conditions for the ionization boost. Now let us qualitatively consider the dependence of the flash intensity on the pulse field $E_A$. When the pulse field is sufficiently small, $E_A\ll E_i$, it is effectively screened by the plasma, so that $E_{res}\sim E_A$. This naturally causes the flash intensity to grow with $E_A$. On the other hand, for $E_A\gg E_i$ the plasma cannot provide the pulse screening, since the ``residual'' field is limited by $E_i$. In this case, there are (practically) no electrons left in the discharge gap after the pulse~\textendash\, electrons have to be first generated before they can effectively ionize and excite the neutral gas. This implies that the flash intensity reaches its maximum at some $E_A\lesssim E_i$, i.e., it is a non-monotonic function of the ratio $E_A/E_i$. This non-monotonic dependence of the flash intensity on $E_A$ was observed in the experiment, as illustrated in Fig.~\ref{Fi:IUppUA}. The used values of $U_A$ provided the variation of $E_A$ in the range of $(0.6-3.4)\times10^5$~V/m, which was sufficient to observe evidence of both increase and decrease of the flash intensity at fixed plasma conditions (Fig.~\ref{Fi:IUppUA}b). Moreover, by varying $U_{pp}$ and, in this way, extending the range of $E_A/E_i$ (since $E_i$ grows with the plasma density which, in its turn, grows with $U_{pp}$), we were able to achieve the regimes of the major increase (Fig.~\ref{Fi:IUppUA}c) and decrease (Fig.~\ref{Fi:IUppUA}a) of the flash intensity with $U_A$, occurring at higher and lower values of the plasma density, respectively. The flash is therefore caused by the electric field of bulk ions, which remain uncompensated for a short time after the high-voltage pulse. The possibility of such a mechanism of discharge ignition was discussed in Ref.~\onlinecite{SchneiderTVT}. For instance, similar transient decompensation can ignite discharges in solid dielectric materials irradiated by pulsed electron beams of MeV energy \cite{MeVElectr}. Also, the so-called transient luminous events (TLEs) in the upper Earth atmosphere occur as a result of such a decompensation caused by lightning \cite{TLE}. \begin{figure}[t!] \centering \includegraphics [width=3.1in]{ImaxvsUASim.pdf} \caption{Simulated behavior of the emission peak during the flash stage, demonstrating the effect of the secondary electrons for (a) $\gamma=0.1$ and (b) $\gamma=0$. The results are for $p=3$~Pa, $U_{pp}=100$~V and different values of $U_A$. The dashed area indicates the high-voltage pulse.} \label{Fi:ISim} \end{figure} \textit{Comparison with PIC simulations.} Our simulations confirm the main qualitative findings. Figure~\ref{Fi:ISim}a presents the dependence of the simulated flash intensity on $U_A$. Its non-monotonic character becomes evident for $U_A$ between $4$ and $16$~kV. In the experiment, as we already mentioned, this range of $U_A$ was insufficient to clearly demonstrate the non-monotonic dependence. This suggests that the steady-state plasma density in the experiment was higher than in the simulation. \begin{figure}[t!]
\centering \includegraphics [width=3.1in]{PeakForm.pdf} \caption{Simulated evolution of plasma parameters (for $p=3$~Pa, $U_{pp}=100$~V, $U_A=4$~kV), showing the formation of the emission peak at the flash stage: (a) emission intensity $I_{sim}$, (b) mean electron energy $K$, (c) electron density $n_e$, (d) relaxation rates $\nu_K=\left|K^{-1}\partial K/\partial t\right|$ and $\nu_N=\left|N^{-1}\partial N/\partial t\right|$, where $N$ is determined by Eq.~(\ref{Eq:N}). The dashed area indicates the high-voltage pulse.} \label{Fi:PF} \end{figure} Figure~\ref{Fi:PF} illustrates the formation of the first emission peak. Here we discuss a particular example of a relatively small $E_A$ ($U_A=4$~kV), when a significant fraction of electrons is left in the discharge gap after the pulse. However, the same physics is also valid in the case of strong $E_A$. The pulse field and, later, the ``residual'' field (see two respective peaks in Fig.~\ref{Fi:PF}b) accelerate electrons to energies $K\sim100$~eV. After that, electrons start losing energy. At the same time, the number of electrons grows (Fig.~\ref{Fi:PF}c). Remarkably, the emission peak occurs very close to the local maximum of the electron energy loss rate, $\nu_K=\left|K^{-1}\partial K/\partial t\right|$ (compare Fig.~\ref{Fi:PF}a and Fig.~\ref{Fi:PF}d), suggesting that the main mechanism of electron energy loss is inelastic collisions. Therefore, one can argue that the emission peak occurs when the energy spectrum of electrons (which are gradually cooled down) becomes ``optimized'' for the impact excitation (whose cross section is a non-monotonic function of electron energy). The delay of the emission peak seen in simulations at higher $U_A$ (Fig.~\ref{Fi:ISim}a) is then explained by the much higher initial electron energy (due to much larger $E_A$ and $E_{res}$). We notice, however, that the position of the emission peak has a significantly different tendency with $U_A$ in the experiment, where it is practically independent of $U_A$. Figure~\ref{Fi:PF}d shows the behavior of two logarithmic derivatives, $\nu_K$ and $\nu_N=\left|N^{-1}\partial N/\partial t\right|$, where \begin{equation} N=\frac{n_i-n_e}{n_e} \label{Eq:N} \end{equation} is the relative density disparity, with $n_e$ and $n_i$ being the momentary electron and ion densities, respectively. The parameter $\nu_N$ reflects the decay rate of the ``residual'' field. We notice that $\nu_N>\nu_K$ during a certain period of time before the emission peak (from $110$~ns to $160$~ns). This suggests that even in a 1D case some diffusion cooling takes place: as the newly generated electrons and ions appear and dilute the excessive ion charge, the most energetic electrons are no longer trapped and leave the plasma. In 1D simulations the electrons can only be lost at the electrode surfaces, whereas in a real 3D discharge a significantly larger fraction of electrons can leave the discharge volume in the lateral direction and also be lost on the surfaces of dielectric insulators. Therefore, in a 3D discharge a higher diffusion cooling rate is expected. These additional losses can, in principle, reduce the sensitivity of the position of the emission peak to the initial electron energy spectrum. \subsubsection{Role of secondary electrons} In Sec.~\ref{Sc:PIC} we pointed out that surface electron production may play an important role at the initial stage of the discharge development.
For large $E_A$ and small $n_{res}$, the electron impact ionization is suppressed right after the pulse, and therefore electron emission from surfaces may become essential for the further evolution. In our case, the secondary electron emission can be produced by ions, neutral metastable particles, electrons, and photons. Since heavy particles are too slow, they can be \textit{a priori} excluded from the consideration on ns timescales. Electron-electron emission can also be ruled out because the typical yield for metallic electrodes never exceeds unity. At the same time, high-energy photons, being fast and insensitive to the electric field, certainly can affect the processes evolving on ns timescales. There are two sources of high-energy photons in our discharge: (i) bremsstrahlung generated during the pulse by energetic (keV) electrons hitting the electrode surface and (ii) VUV light emitted by resonant transitions in Ar atoms. The small electron-photon conversion efficiency ($\sim10^{-5}$ \cite{XRay}) suggests that bremsstrahlung photons cannot play any important role. As regards the effect of the resonance states of argon, it was taken into account in our simulations. Figure~\ref{Fi:ISim} shows the evolution of the plasma emission at the initial flash phase calculated for two different values of the photoemission yield. One can see that for $\gamma=0$ the peak emission intensity retains the same non-monotonic dependence on $U_A$ as in the case $\gamma=0.1$. However, after the maximum is reached, the peak intensity drops with $U_A$ much faster in the case $\gamma=0$. This effect can be easily understood: during the pulse a certain number of resonant Ar states is generated in the discharge. Since their decay time ($8.6$~ns) is comparable to the pulse duration, they mostly remain in the discharge gap after the pulse and supply the plasma with additional electrons. This leads to an effective increase of the residual electron density and, hence, of the flash intensity. The effect of Ar resonant states slowly decreases with $U_A$, since the excitation cross-section decreases with the electron energy (in the keV range). \subsection{Dark phase}\label{Sc:DP} \begin{figure}[t!] \centering \includegraphics [width=3.1in]{SimLongTime.pdf} \caption{Evolution of (a) experimentally measured light emission (combination of central cross-sections of Figs.~\ref{Fi:XTs}a and~\ref{Fi:XTs}b) and simulated (b) emission intensity, (c) average electron energy and (d) electron density. In (d), also the experimental microwave interferometric phase shift $\phi\propto n_e$ is shown (the steady-state value is close to the limit of detection and is therefore noisy). The shown results are for $U_{pp}=100$~V, $p=3$~Pa and $U_A=8$~kV. The \emph{steady-state discharge} is characterized by relatively small plasma density and low average energy of electrons. \emph{During the pulse} the electron density is drastically decreased, since electrons are swept away by the pulse field. When the high voltage is removed, the plasma density (see also Fig.~\ref{Fi:MWI}b) and average electron energy increase and acquire values much higher than those in a steady-state plasma (see Sec.~\ref{Sc:IEP} and Fig.~\ref{Fi:PF}). This occurs within a fraction of $\mu$s due to the presence of a strong ``residual'' field. The subsequent relaxation to the steady-state discharge is accompanied by the \emph{dark phase} lasting a fraction of ms (see Sec.~\ref{Sc:DP}).
The dashed area indicates the high-voltage pulse.} \label{Fi:SLT} \end{figure} Figures~\ref{Fi:SLT}a and~\ref{Fi:SLT}b present a typical multi-timescale evolution of the discharge emission in our experiments and simulations, respectively. A remarkable qualitative agreement is observed: the dark phase~\textendash~its position and duration~\textendash~is well reproduced in the simulations. High values of the plasma density during the dark phase (Fig.~\ref{Fi:MWI}) are obtained in the simulated discharge as well (compare the evolution of $\phi$ and $n_e$ in Fig.~\ref{Fi:SLT}d). We see that a tremendous increase of $n_e$ is accompanied by a steep reduction in the average electron energy (Fig.~\ref{Fi:SLT}c). By comparing Figs.~\ref{Fi:SLT}b and~\ref{Fi:SLT}c we notice that the minimum of emission intensity occurs about $200$~$\mu$s before the minimum of $K$, whereas the electron density is monotonically decreasing during the dark phase (Fig.~\ref{Fi:SLT}d). Obviously, the excitation processes (reflected by the emission intensity) are governed by the high-energy tail of the electron distribution function, whose kinetics can differ significantly from that of the mean energy $K$. We note that in simulations the RF peak-to-peak voltage was kept constant, whereas in experiments, where the discharge is a part of a real electrical circuit, steadiness of the peak-to-peak voltage cannot be guaranteed: a dramatic increase of the electron density inside the discharge gap causes a significant drop of the active resistance of the discharge, leading to voltage redistribution in the circuit. Such a mechanism is responsible for the formation of a dark phase during the ignition of dc glow discharges [\onlinecite{Amirov1}-\onlinecite{DP2}]. Our experiment is also not completely free from this effect. We found that after the pulse the RF peak-to-peak voltage on the electrode typically experiences a $15\%$ drop. This drop, however, is hardly enough to reduce the emission intensity even by a factor of two, whereas Figs.~\ref{Fi:SLT}a and~\ref{Fi:SLT}b show a reduction by an order of magnitude or more. Our simulations clearly suggest a different mechanism which leads to the electron cooling in the high-density capacitively-coupled RF plasma. To understand this mechanism, let us consider how the dielectric permittivity of a collisionless plasma, $\epsilon=1-(\omega_p/\omega)^2$ (where $\omega_p\sim n_e^{1/2}$ is the electron plasma frequency) varies after the pulse. Already in steady-state conditions $\omega_p/\omega\approx 10$, so that $\epsilon$ is strongly negative and the discharge is stabilized by a weak screening of the RF field. As the electron density grows after the pulse, the screening becomes even stronger. In this regime the penetration depth of the RF field into the plasma is $\delta\simeq c/\omega_p$ \cite{RFRaizer}. For the peak electron density $\sim10^{16}$~m$^{-3}$ (see Fig.~\ref{Fi:SLT}d) $\omega_p/\omega\approx 80$ and $\delta\approx5.3$~cm, which is very close to the interelectrode separation $L$. This indicates that the RF field at this stage is effectively screened by the high density plasma, and electrons can cool down. As a significant fraction of the excess charges leaves the plasma due to ambipolar diffusion, the RF field starts penetrating deeper into it and heats up the electrons again. In the electrotechnical sense, the discharge gap becomes strongly inductive during the dark phase and, therefore, efficiently reflects the RF power.
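The screening estimate can be checked with a short numerical sketch in the same way (our illustration, in SI units):
\begin{verbatim}
# Sketch: RF penetration depth delta = c/omega_p for the steady-state
# and post-pulse peak electron densities quoted above.
import numpy as np

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 3.0e8
for n_e in (2e14, 1e16):
    omega_p = np.sqrt(n_e * e**2 / (eps0 * m_e))  # plasma frequency, rad/s
    print(f"n_e = {n_e:.0e} m^-3: delta = {100 * c / omega_p:.1f} cm")
# steady state: delta ~ 38 cm >> L (weak screening);
# peak density:  delta ~ 5.3 cm ~ L (RF field effectively screened)
\end{verbatim}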
In our experiment the plasma relaxation was sometimes accompanied by two features which we were not able to reproduce in simulations: a ``knee'' following the bottom of the dark phase and an ``overshoot'' at its end (Fig.~\ref{Fi:SLT}a). A hint about the origin of the knee can be found in Ref.~\onlinecite{Aglow}, where similar features occurring at similar timescales were observed during the afterglow of an inductively-coupled low-pressure RF discharge in argon. The authors explained the observed features by the decay of excited atoms created as a result of three-body recombination. In our case, the electron impact excitation rate drops significantly during the dark phase and, therefore, the contribution of recombination (which is not taken into account in our simulations) to the population of the 2p levels might indeed be significant. Microwave interferometry measurements performed at higher pressures (Fig.~\ref{Fi:MWI}) deliver further information on the overshoot and provide additional support for the suggested mechanism of the dark phase formation. The plasma density during the overshoot is reduced below the steady-state level, which weakens the screening of the RF field and, therefore, leads to more efficient heating of electrons. This, however, should not be understood as a self-consistent explanation of the observed overshoot since the reason for the reduction of electron density remains unknown. Refs.~\cite{DP1, DP2} attribute similar features to the kinetics of metastable atoms, which is not considered in our simulations. \section{Conclusion} Our experiments and simulations showed that a high-voltage nanosecond pulse applied to a steady-state capacitively-coupled low-pressure weakly ionized plasma produces a profound long-lasting disturbance. The resulting effects are governed by a variety of different mechanisms operating in plasma at essentially different time scales. One can identify two principal regimes: the flash, lasting about a hundred ns (up to several hundred ns in the simulations) after the pulse, and the dark phase, with a duration from a few hundred $\mu$s to a few ms. Since the pulse field is comparable to the electric field of bare ions in the steady-state plasma, a significant fraction of electrons is swept away from the discharge gap during the pulse. Directly demonstrated in simulations, this effect also found an indirect confirmation in the experimental dependencies of the flash intensity on the pulse amplitude: the intensity increased with the pulse amplitude at higher densities of the steady-state plasma, whereas at lower densities it decreased. After the pulse field is removed, the residual electrons start accelerating in a strong field of immobile (at this stage) ions, which generates the flash. We showed that the flash intensity is maximal when the pulse field is somewhat smaller than the field of (immobile) bare ions. Secondary electron emission due to VUV radiation from the excited argon states turned out to be important at this stage. During the dark phase following the bright flash, the emission intensity drops below the steady-state value. Both the simulations and time-resolved microwave interferometry measurements showed that the electron density during this phase is much higher than that in the steady-state plasma. In such a dense plasma the penetration depth of the RF field decreases and becomes comparable to the interelectrode gap.
This screening effect leads to an effective cooling of the electrons and the subsequent decrease of the emission intensity. Thus, 1D3V PIC simulations provide a good qualitative explanation of the major features observed in our experiments. Investigation of the unresolved issues, such as the effect of the pulse amplitude on the flash delay or the origin of the knee and overshoot seen in the emission intensity during the dark phase, requires additional careful experiments and numerical simulations. \section{Acknowledgements} This work was partially carried out within the framework of the European Fusion Development Agreement and the French Research Federation for Fusion Studies.
\section{Introduction} Air pollution in the form of particulate matter is associated with negative health effects and is considered the largest contributor to premature deaths worldwide \cite{kim2015review}\cite{wong2015satellite}\cite{rajak2020short}\cite{shehab2019effects}. According to the World Health Organization (WHO), more than 90 percent of the world's population is exposed to harmful pollutants at levels exceeding up to five times the new guidelines updated by the WHO in September 2021 \cite{world2021global}. The major sources of air pollution are anthropogenic activities such as industry, transport with highly polluting cars, agriculture, and cooking with fossil fuels. Data on pollution levels and chemical composition are scarce in many low- and middle-income countries, particularly in Africa, owing to the high cost of the equipment needed to collect and analyze samples. In most low- and middle-income countries, the only data available comes from satellite-based estimates. A limited number of research outputs exist in some parts of the world, such as the Sub-Saharan African region. The existing literature on air pollution in Africa focuses mainly on the use of low-cost sensors to measure specific pollutants, most often particulate matter with a diameter of less than 2.5 micrometers \cite{okure2022characterization}\cite{raheja2022network} \cite{mcfarlane2021application} \cite{gahungu2022trend}. Some studies have considered short-term and localized campaigns for source apportionment \cite{schwander2014ambient} \cite{kalisa2018characterization}. This paper focuses on the spatio-temporal variations of air pollution across Africa and explores predictive models for future trends. The remainder of this work is organized as follows. The second section gives a brief description of the data and models used in this study. The third section presents results for spatio-temporal trends and predictions based on three data-driven models: Auto-Regressive Integrated Moving Average, Neural Networks, and Gaussian processes. We conclude the work in section four. \section{Data and Methods} \subsection{Data} In this work, we focus on particulate matter with a diameter of less than 2.5 micrometers (PM2.5). It is considered the most dangerous pollutant affecting human health because of its ability to penetrate deep into the lungs. Reliable measurements come from reference monitors, such as Beta Attenuation Monitors (BAMs), which are very expensive. This is the main reason why such monitors are not widely used in many African countries, where the available data comes from low-cost sensors. Table \textbf{[\ref{table}]} and the map in figure \textbf{[\ref{africa}]} below describe the African cities studied, based on air pollution data collected by the US Embassies across Africa. \begin{table}[ht!]
\caption{The summary of PM2.5 ($\SI{}{\micro\gram}/m^3$) levels from African cities, including the countries and cities, the time period, and the latitude and longitude of data collection.} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{ c@{\hspace{2mm}} c @{\hspace{1.5\tabcolsep}} c @{\hspace{1.5\tabcolsep}} c @{\hspace{2mm}}c } \toprule \textbf{African Cities} & \textbf{Country} & \textbf{Period} & \textbf{Latitude} & \textbf{Longitude }\\ \toprule Kigali & Rwanda & Feb - June 2022 & 1.9441° S & 30.0619° E \\ Conakry & Guinea & Jan 2020 - June 2022& 9.6412° N & 13.5784° W \\ Bamako & Mali& Oct 2019 - Jan 2022 & 12.6392° N & 8.0029° W \\ Lagos & Nigeria & Feb 2021 - June 2022 & 6.5244° N & 3.3792° E \\ Abuja& Nigeria & Feb 2021 - June 2022&9.0765° N & 7.3986° E \\ Libreville& Gabon & Apr 2021- Mar 2022 & 0.4162° N & 9.4673° E \\ Algiers& Algeria & Feb 2019 - June 2022& 36.7538° N & 3.0588° E \\ Accra& Ghana & Jan 2020 - June 2022 &5.6037° N & 0.1870° W \\ Kinshasa& DRC & Mar - June 2022 &4.4419° S & 15.2663° E \\ Kampala& Uganda & Feb 2017 - June 2022 &0.3476° N & 32.5825° E \\ Nairobi& Kenya & Mar 2021 - June 2022& 1.2921° S& 36.8219° E \\ N'Djamena& Tchad & May 2020 - June 2022 & 12.1348° N & 15.0557° E \\ Khartoum& Sudan & Jan 2020 - June 2022 & 15.5007° N & 32.5599° E \\ Addis Ababa& Ethiopia & Aug 2016 - June 2022 & 8.9806° N & 38.7578° E \\ Antananarivo& Madagascar & Jan 2020 - June 2022 & 18.8792° S & 47.5079° E \\ Abidjan& C\^{o}te d'Ivoire & Feb 2020 - June 2022 & 5.3600° N & 4.0083° W \\ \bottomrule \end{tabular} % } \label{table} \end{table} \FloatBarrier \begin{figure}[ht!] \centering \caption{Air pollution monitoring stations at US Embassies in Africa} \includegraphics[width=1.02\linewidth]{Photos/Remynew.pdf} \label{africa} \end{figure} \FloatBarrier \subsection{Methods} We model the data generation process with the Auto-Regressive Integrated Moving Average (ARIMA) model and two universal function approximators: neural networks and Gaussian processes. \subsection*{Auto-Regressive Integrated Moving Average} We model $y_t$, the observation at time $t$, as a function of the historical values $y_{0}, \cdots, y_{t-1}$: \begin{eqnarray} y_t=f(y_0,\cdots, y_{t-1}), \end{eqnarray} where \begin{align*} f= \alpha+ \beta_1 y_{t-1}+\beta_2 y_{t-2}+\cdots+ \beta_{p} y_{t-p} + \epsilon_t \\ \nonumber +\mu+W_t+\theta_1W_{t-1}+\theta_2W_{t-2}+\cdots+\theta_qW_{t-q} \nonumber, \end{align*} where $\beta_1, \cdots, \beta_p$ and $\theta_1, \cdots, \theta_q$ are the model coefficients, $p$ and $q$ are non-negative integer lag orders, $\alpha$ is the model intercept, $\mu$ is the long-run average, $\epsilon_t$ is an error term such that $\epsilon_t \sim \mathcal{N} (0,1)$, and $W_t= \sigma \times \epsilon_t$ is the shock of the process, with $\sigma$ the conditional standard deviation. \subsection*{Neural networks} We denote the observation at time $j$ by $x_j$, with $j=1, \cdots, n$, where $n$ is the size of the dataset. An approximation of the data generation process using a neural network can be expressed as follows: \begin{eqnarray} y=f\left (\sum_{j=1}^{n-1} W_j x_j+b \right), \end{eqnarray} where the $W_j$ are weights, $b$ is the bias, and $f$ is the activation function. The following activation functions are commonly used. \begin{itemize} \item Rectified Linear Unit (ReLU): \begin{eqnarray} f(\cdot)=\max(0,\cdot). \end{eqnarray} \item Sigmoid function: \begin{eqnarray} f(\cdot)=\frac{1}{1+\exp(-\cdot)}. \end{eqnarray} \item Hyperbolic tangent:
\begin{eqnarray} f(\cdot)=\tanh(\cdot)=\frac{\exp(\cdot)-\exp(-\cdot)}{\exp(\cdot)+\exp(-\cdot)}. \end{eqnarray} \end{itemize} \subsection*{Gaussian processes} Gaussian processes can be viewed as infinite-dimensional normal distributions, specified by a mean function $m(\cdot)$ and a covariance function $k(\cdot,\cdot')$ \cite{rasmussen2003gaussian}: \begin{eqnarray} f(\cdot)\sim GP(m(\cdot),k(\cdot,\cdot')). \end{eqnarray} The following covariance functions are commonly used: \begin{itemize} \item Radial Basis Function (RBF): \begin{eqnarray} k(x,x')=\sigma^2\exp\left(-{\frac{||x-x'||^2}{2l^2}}\right). \end{eqnarray} \item Linear function: \begin{eqnarray} k(x,x')=\sum_{i=1}^{N} \sigma_i^2 x_i x_{i}^{'}. \end{eqnarray} \end{itemize} \section{Results and Discussion} \subsection{Trend analysis} Air pollution levels and emissions in African cities differ according to each country's population, industrialization, urbanization, economic status, emission sources, physical regions, and meteorology. The trend analysis of PM2.5 levels included sixteen cities from across Africa. The analysis shows that the mass concentrations of PM2.5 in those cities are currently increasing, negatively impacting people's lives by causing respiratory and cardiovascular diseases. The hourly, daily, and seasonal analyses were made based on the cities' locations. Figure \textbf{[\ref{fig12}]} shows the hourly mean concentrations of PM2.5 in East African cities. From the figure, the highest hourly mean concentration of PM2.5 in the morning hours was observed in Kampala at 8:00 a.m., with a value of $\SI{77.46}{\micro\gram}/m^3$, and the trend then declined between 8:00 a.m. and 3:00 p.m. After 3:00 p.m., the hourly trend rises again as many workers with vehicles return home; emissions from biomass burning as a source of energy at home are also suggested to contribute. \vspace{0.9mm} Kigali ranks second in the morning rush hours, when workers are on their way to work, with a high hourly mean concentration at 9:00 a.m. PM2.5 levels in Kigali begin to rise again between 4:00 p.m. and late at night, with the highest hourly mean concentration of $\SI{54.82}{\micro\gram}/m^3$ recorded at midnight. These variations in PM2.5 levels in Kigali are thought to be caused by traffic, industries, and combustion activities. Other developing cities, such as Nairobi, Antananarivo, and Addis Ababa, have PM2.5 concentrations lower than those of Kigali and Kampala. Though the mass concentrations of PM2.5 in Nairobi are not as high as in cities such as Kampala, Kigali, and Addis Ababa, according to \textbf{\cite{gaita2014source}}, mineral dust and traffic-related factors account for $74\%$ of the annual PM2.5 concentrations of the entire country. The findings of \textbf{\cite{kinney2011traffic}} have shown that PM2.5 concentrations in Nairobi, particularly in the street and in areas adjacent to a city road, have exceeded the WHO guideline of $\SI{25 }{\micro\gram}/m^3$ as a 24-hour mean. The East African traffic network is expanding on a daily basis, resulting in high levels of traffic activity; however, the majority of vehicles in these cities are old and in poor condition. These issues contribute to the respiratory problems that pedestrians may experience along these busy road networks, particularly during morning and evening rush hours. \begin{figure}[ht!]
\centering \caption{Hourly PM2.5 concentrations in East Africa} \includegraphics[width=1.02\linewidth]{Photos/hourly_mean_PM25_Concentration_east_africa.pdf} \label{fig12} \end{figure} \FloatBarrier Environmental degradation has put pressure on West African countries as a result of rapid population growth, urbanization, and a growing economy \textbf{\cite{leon2021pm}}. Fine-particle concentrations in West Africa are frequently much higher than the WHO recommended limits \textbf{\cite{ world2021global}}. These fine particles are caused by human activities such as charcoal fires, waste combustion in cities, and savannah fires. Other particles originate in North Africa as a result of the wind blowing dust from the Sahara desert. Figure \textbf{[\ref{fig13}]} depicts the hourly mean mass concentrations of PM2.5 in West African cities. According to the figure, Bamako experiences the highest hourly mean PM2.5 concentrations due to its location. Emissions in Bamako have been linked to traffic, biomass burning, dust, and desert dust events. All of these factors are thought to have contributed to this city having the highest observable PM2.5 levels of any West African city studied. The morning peak in Bamako was observed at 9:00 a.m., with a value of $\SI{85.89}{\micro\gram}/m^3$; during the evening hours, the trend rose from 3:00 p.m. to 9:00 p.m., with the highest value of $\SI{115.95 }{\micro\gram}/m^3$ recorded at 9:00 p.m. This has resulted in this city having the highest PM2.5 emissions among the West African cities considered in the study. Nigeria, as one of Africa's fastest developing countries, is affected by the rise in mass concentrations of PM2.5 caused by human activities and industrialization in cities like Lagos and Abuja \textbf{\cite{ezeh2017elemental}}. According to the World Bank \textbf{\cite{ WorldBank2020lagos}}, Nigeria had the highest number of premature deaths due to ambient PM2.5, particularly in the cities mentioned above. Road transport, industrial emissions, and power generation are the primary sources of air pollution in Nigeria, though more research is needed to determine the contribution of each sector separately. Because some cars in Nigeria are older than five years, because of the quality of imported fuels (diesel and gasoline), and because of the unlimited means of transportation in cities, road transport contributes much more than the other sources \textbf{\cite{croitoru2020cost}}. All of these factors contributed to Abuja and Lagos having the highest mass concentrations of PM2.5 in the morning rush hours, particularly between 7:00 a.m. and 9:00 a.m., when compared to the remaining cities such as Accra, Conakry, and Bamako. Between 6:00 p.m. and 10:00 p.m. in Abuja, PM2.5 emissions begin to rise due to mineral dust, vehicle exhaust, industrial emissions, and the use of firewood as a cooking fuel. In Guinea, particularly in Conakry, some of the proposed air pollution emissions are caused by unregulated combustion and processing emissions from industrial sites, unregulated emissions from gasoline vehicles, and residential wood burning \textbf{\cite{weinstein2010characterization}}. According to \textbf{\cite{weinstein2010characterization}}, cement manufacturing plants, electric power plants, brick manufacturing operations, steel smelters, and medical waste incinerators are among the industries that are thought to contribute to the mass concentration of PM2.5 in Conakry.
Conakry has low hourly emissions when compared to cities such as Bamako, Lagos, and Abuja. Its highest hourly mean mass concentration of PM2.5 was recorded at 9:00 a.m., with a value of $\SI{40.25 }{\micro\gram}/m^3$, whereas the highest hourly mean concentrations of PM2.5 in the morning hours in Bamako, Lagos, and Abuja are $\SI{85.89 }{\micro\gram}/m^3$ at 9:00 a.m., $\SI{57.06 }{\micro\gram}/m^3$ at 8:00 a.m., and $\SI{51.94 }{\micro\gram}/m^3$ at 7:00 a.m., respectively. Other cities, such as Abidjan and Accra, which are close to the Atlantic Ocean, have emission rates similar to Conakry's, as shown by the hourly trends in figure \textbf{[\ref{fig13}]}. \begin{figure}[ht!] \centering \caption{Hourly PM2.5 concentrations in West Africa} \includegraphics[width=1.02\linewidth]{Photos/hourly_mean_PM25_Concentration_west_africa.pdf} \label{fig13} \end{figure} \FloatBarrier The hourly mean variation of PM2.5 in figure \textbf{[\ref{fig11}]} compares African cities from the central region. According to figure \textbf{[\ref{fig11}]}, the highest hourly mean mass concentrations of PM2.5 were observed in N'Djamena and Khartoum. These two cities are close to the Sahara desert, which influences PM2.5 emissions in the neighboring countries. As in other cities, emissions in N'Djamena increased during the morning hours from 6:00 a.m. to 9:00 a.m., reaching $\SI{104.53}{\micro\gram}/m^3$ at 8:00 a.m., while in the evening the highest hourly mean concentration of $\SI{139.83}{\micro\gram}/m^3$ was observed at 8:00 p.m. In Khartoum, the highest morning peak of $\SI{64.36}{\micro\gram}/m^3$ was observed at 8:00 a.m., and the levels of PM2.5 decreased after 8:00 a.m. until noon. The levels then increased gradually at a slow rate during the afternoon hours until the evening. The highest peak in the evening and late night was at 9:00 p.m., with a mass concentration of $\SI{76.94}{\micro\gram}/m^3$. Though the PM2.5 concentrations in Kinshasa and Libreville in figure \textbf{[\ref{fig11}]} are not as high as in the other cities, the sources of PM2.5 emissions include road transportation, poorly maintained vehicles, smoke from open-air barbeques, burning trash, and non-standard fuels such as gasoline and diesel \textbf{\cite{mcfarlane2021first}}. Gabon, as a developing country, has policies in place to control the quality of imported gasoline and diesel, but no clear policy to control vehicle emission standards and no air quality regulations. \begin{figure}[ht!] \centering \caption{Hourly PM2.5 concentrations in Central Africa} \includegraphics[width=1.02\linewidth]{Photos/hourly_mean_PM25_Concentration_central_africa.pdf} \label{fig11} \end{figure} \FloatBarrier Fuel combustion, power generation, and industrial facilities are some of the primary sources of PM2.5 in Algiers \textbf{\cite{belarbi2020road}}. Residential fireplaces and wood burning as a source of energy at home are two other sources that have increased emissions. This rise in PM2.5 mass concentrations in Algiers is linked to various cases of mortality and morbidity throughout the country. From figure \textbf{[\ref{fig14}]}, the highest and lowest hourly mean mass concentrations of PM2.5, $\SI{22.45}{\micro\gram}/m^3$ and $\SI{18.94}{\micro\gram}/m^3$, were observed at 10:00 a.m. and 7:00 a.m., respectively. The increase in PM2.5 mass concentrations in Algiers from 7:00 a.m. to 10:00 a.m. is associated with an increase in traffic activities, particularly traffic movements as people commute to work. PM2.5 levels dropped from 10:00 a.m.
through the afternoon hours to $\SI{19.44}{\micro\gram}/m^3$ at 6:00 p.m. Then, from 7:00 p.m. until late at night, the levels of PM2.5 in Algiers increased, reaching $\SI{21.908}{\micro\gram}/m^3$ at 2:00 a.m. \begin{figure}[ht!] \centering \caption{Hourly PM2.5 concentrations in North Africa} \includegraphics[width=1.02\linewidth]{Photos/hourly_mean_PM25_Concentration_north_africa.pdf} \label{fig14} \end{figure} \FloatBarrier The daily trend analysis in figure \textbf{[\ref{fig2}]} shows that, in some cities, weekdays (Monday through Friday) have higher PM2.5 concentrations than weekends (Saturday and Sunday). These differences in emissions between weekdays and weekends are linked to the air pollution resulting from human and traffic activities in East Africa. Human activities have influenced the rate of emissions during the week, and the decrease on weekends is due to reduced activity. In Kigali, Nairobi, and Antananarivo, the highest peak was observed on weekdays rather than weekends, whereas weekends had higher levels of PM2.5 than weekdays in Kampala and Addis Ababa. The lower levels of PM2.5 in Antananarivo on weekends compared to weekdays were caused by a decrease in traffic activity. The daily variation of the PM2.5 concentration in Kampala, Kigali, and Nairobi shows that the highest levels of PM2.5 were observed on Saturday, Wednesday, and Tuesday, with values of $\SI{57.35}{\micro\gram}/m^3$, $\SI{42.97}{\micro\gram}/m^3$, and $\SI{17.40}{\micro\gram}/m^3$, respectively. Since these cities are among the most populated in East Africa, the daily mass concentration of PM2.5 varies with daily activities. In Addis Ababa, weekday emissions were lower than weekend emissions, with Saturday having the highest daily mass concentration of $\SI{25.51}{\micro\gram}/m^3$. \begin{figure}[ht!] \centering \caption{Daily PM2.5 concentrations in East Africa} \includegraphics[width=1.02\linewidth]{Photos/daily_mean_PM25_Concentration_east_africa_1.pdf} \label{fig2} \end{figure} \FloatBarrier Figure \textbf{[\ref{fig3}]} shows the daily average PM2.5 for cities in West African countries. The mass concentration of PM2.5 is higher in cities such as Bamako, Lagos, and Abuja than in Accra, Abidjan, and Conakry. In these coastal cities, PM2.5 levels are lower than in the other cities, with the exception of Lagos, Africa's most populous city, where daily emissions that pollute the air are high. \begin{figure}[ht!] \centering \caption{Daily PM2.5 concentrations in West Africa} \includegraphics[width=1.02\linewidth]{Photos/daily_mean_PM25_Concentration_west_africa.pdf} \label{fig3} \end{figure} \FloatBarrier According to figure \textbf{[\ref{fig1}]}, the concentration of PM2.5 was higher on weekdays than on weekends in Khartoum and N'Djamena. The average mass concentration of PM2.5 in Khartoum was $\SI{78.78}{\micro\gram}/m^3$ during the week and $\SI{72.979}{\micro\gram}/m^3$ at the weekend. The average PM2.5 level in N'Djamena was $\SI{92.26}{\micro\gram}/m^3$ during the week and $\SI{84.64}{\micro\gram}/m^3$ on weekends. Comparing these results, weekday emissions are more significant than weekend emissions because weekdays have the most traffic, and the influence of agricultural and construction activities is visible on weekdays. Although daily emissions are significant in Khartoum and N'Djamena, they are not the same in Libreville and Kinshasa. The highest mass concentration of PM2.5 was $\SI{13.68}{\micro\gram}/m^3$ in Libreville on Thursday, and $\SI{4.62}{\micro\gram}/m^3$ in Kinshasa on Monday.
The daily emissions in these cities were not as high as in Khartoum and N'Djamena. \begin{figure}[ht!] \centering \caption{Daily PM2.5 concentrations in Central Africa} \includegraphics[width=1.02\linewidth]{Photos/daily_mean_PM25_Concentration_central_africa.pdf} \label{fig1} \end{figure} \FloatBarrier Figure \textbf{[\ref{fig4}]} shows the daily variation of PM2.5 levels in Algiers. The overall weekday mean concentration is $\SI{20.83}{\micro\gram}/m^3$, against $\SI{20.76}{\micro\gram}/m^3$ on weekends, indicating that emissions were slightly higher during the week than on weekends. The highest value of $\SI{21.33}{\micro\gram}/m^3$ was observed on Monday, while the lowest value of $\SI{20.409}{\micro\gram}/m^3$ was observed on Friday. \begin{figure}[ht!] \centering \caption{Daily PM2.5 concentrations in North Africa} \includegraphics[width=1.02\linewidth]{Photos/daily_mean_PM25_Concentration_north_africa.pdf} \label{fig4} \end{figure} \FloatBarrier Figure \textbf{[\ref{fig:sub-first_monthly}]} depicts the seasonal variation of PM2.5 concentrations in East African cities. During the dry season, mineral dust from unpaved road surfaces, especially when the wind blows, is one of the suggested contributors to the increase in particulate matter. Savannah fires during the dry season are associated with the carbonaceous aerosols detected in this region of East Africa. In figure \textbf{[\ref{fig:sub-first_monthly}]}, seasonal variations in PM2.5 levels were higher in Kampala and Addis Ababa than in other East African cities such as Nairobi and Antananarivo. The highest measured PM2.5 levels in Kampala during the dry season were observed in July, and as the country entered the short rainy season, emissions began to decline until the start of the short dry season. The monthly mean concentration of PM2.5 in the dry season is $\SI{67.29}{\micro\gram}/m^3$ in July and $\SI{73.51}{\micro\gram}/m^3$ in January, whereas the lowest values, observed in April and May, were $\SI{38.37}{\micro\gram}/m^3$ and $\SI{42.06}{\micro\gram}/m^3$, respectively. Similarly, seasonal variation was observed in Addis Ababa, with PM2.5 averaging $\SI{36.71}{\micro\gram}/m^3$ during the dry season and $\SI{21.17}{\micro\gram}/m^3$ during the rainy season. There are only two seasons in Madagascar: a hot, rainy season from November to April and a cooler, dry season from May to October. The highest monthly level of PM2.5 during the hot, rainy season was $\SI{36.11}{\micro\gram}/m^3$ in November, while it was $\SI{31.84}{\micro\gram}/m^3$ during the other season. Nairobi appears to have lower PM2.5 levels than the other cities, which implies that the factors expected to cause seasonal variations (mineral dust suspension, open-air waste burning, and agricultural burning) have emitted fewer pollutants than in other cities. PM2.5 concentrations in Nairobi are higher in July and August than in other months. \begin{figure}[ht!] \centering \caption{Monthly PM2.5 concentrations in East Africa} \includegraphics[width=1.02\linewidth]{Photos/PM25_Concentration_east_africa.pdf} \label{fig:sub-first_monthly} \end{figure} \FloatBarrier West African countries experience the Harmattan season, which occurs between November and March during the dry season \textbf{\cite{weinstein2010characterization}}. During this season, a dry and dusty northeasterly trade wind blows from the Sahara desert across West Africa into the Gulf of Guinea.
The Harmattan brings desert-like weather conditions that reduce humidity, raise daily temperatures, prevent rainfall formation, and produce large clouds of dust that cause dust storms or sandstorms. Tornadoes can form when the Harmattan interacts with monsoon winds. Moist equatorial air masses from the Atlantic Ocean blow from the south-west and bring the annual monsoon rains between May and September (the wet monsoon). The seasonal variation of PM2.5 levels in West African cities is described in figure \textbf{[\ref{fig:sub-second_monthly}]}. The decline of PM2.5 concentrations in the period from April to July is due to the rainy season across the region. In the rainy season, Conakry had the lowest monthly average PM2.5 concentrations of $\SI{8.40}{\micro\gram}/m^3$ and $\SI{9.37}{\micro\gram}/m^3$ in September and August, while in the dry season, Abuja, Bamako, and Lagos had the highest monthly average PM2.5 concentrations of $\SI{120.84}{\micro\gram}/m^3$, $\SI{106.92}{\micro\gram}/m^3$, and $\SI{93.02}{\micro\gram}/m^3$ in March, February, and January, respectively. \begin{figure}[ht!] \centering \caption{Monthly PM2.5 concentrations in West Africa} \includegraphics[width=1.02\linewidth]{Photos/PM25_Concentration_west_africa.pdf} \label{fig:sub-second_monthly} \end{figure} \FloatBarrier The monthly variations of PM2.5 levels in cities in Africa's central region are shown in figure \textbf{[\ref{fig:sub-third_monthly}]}. According to the figure, the highest levels of PM2.5 in N'Djamena were observed in March and February. After March, emissions decrease as the country enters the rainy season, which reduces the amount of dust in N'Djamena. The rate of emission in Libreville is lower than in Khartoum and N'Djamena; these two cities, being close to the Sahara desert, have high monthly mean mass concentrations of PM2.5. Khartoum has a subtropical desert climate, with some rainfall brought by the African monsoon from the south in July and September. High emissions occur in December and February, when the temperature is warm during the day and cool at night. This temperature variation, especially during the day, raises the rate of PM2.5 emissions in Khartoum. \begin{figure}[ht!] \centering \caption{Monthly PM2.5 concentrations in Central Africa} \includegraphics[width=1.02\linewidth]{Photos/PM25_Concentration_central_africa.pdf} \label{fig:sub-third_monthly} \end{figure} \FloatBarrier The seasonal variation of PM2.5 in Algiers is represented in figure \textbf{[\ref{fig:sub-fourth_monthly}]}. The monthly average PM2.5 level varies from month to month. Many sandstorms hit Algiers in January, February, and March, raising PM2.5 levels: the monthly mean mass concentrations of PM2.5 in February, January, and March were $\SI{22.86}{\micro\gram}/m^3$, $\SI{23.0005}{\micro\gram}/m^3$, and $\SI{24.58}{\micro\gram}/m^3$, respectively. PM2.5 levels rose again in May, June, and July, when the weather is warm. The lowest monthly mean PM2.5 concentration of $\SI{15.79}{\micro\gram}/m^3$ was observed in November, as the country entered the winter season. \begin{figure}[ht!]
\centering \caption{Monthly PM2.5 concentrations in North Africa} \includegraphics[width=1.02\linewidth]{Photos/PM25_Concentration_north_africa.pdf} \label{fig:sub-fourth_monthly} \end{figure} \FloatBarrier \subsection{Forecasting} Figures \textbf{[\ref{model1} - \ref{model15}]} show the forecasts of trends in real-time PM2.5 data for all of the cities studied. To train the models, the data were divided into two sets: training and testing. All three models were used to forecast PM2.5 levels in the various African cities. \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Kigali PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_kigali.pdf} \label{model1} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Conakry PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_conakry.pdf} \label{model2} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Bamako PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_bamako.pdf} \label{model3} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Lagos PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_lagos.pdf} \label{model33} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Abuja PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_abuja.pdf} \label{model4} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Libreville PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_libreville.pdf} \label{model5} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Algiers PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_algiers.pdf} \label{model6} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Accra PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_accra.pdf} \label{model7} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Kinshasa PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_kinshasa.pdf} \label{model8} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Kampala PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_kampala.pdf} \label{model9} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Nairobi PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_nairobi.pdf} \label{model10} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on N'Djamena PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_ndjamena.pdf} \label{model11} \end{figure} \FloatBarrier \begin{figure}[ht!]
\centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Khartoum PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_khartoum.pdf} \label{model12} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Addis Ababa PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_addis_ababa.pdf} \label{model13} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Antananarivo PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_antananarivo.pdf} \label{model14} \end{figure} \FloatBarrier \begin{figure}[ht!] \centering \caption{Forecasting with ARIMA, Neural Networks, and Gaussian Processes on Abidjan PM2.5 data} \includegraphics[width=1.02\linewidth]{Photos/all_models_abidjan.pdf} \label{model15} \end{figure} \FloatBarrier \section{Model Comparison} The prediction of air pollution, particularly of particulate matter less than 2.5 microns in diameter, is location dependent because the sources of emissions differ from city to city and from country to country. The forecasting results show that the neural network and ARIMA models outperformed Gaussian process regression in adapting to real-time PM2.5 trends. The models were compared using three statistical metrics: root mean square error, mean absolute error, and mean absolute percentage error, to determine how accurately each model adapts to actual PM2.5 levels. Table \textbf{[\ref{tab:my_label}]} compares the performance of ARIMA, Neural Networks, and Gaussian Processes. \begin{table}[ht!] \caption{The Root Mean Square Errors (RMSEs) of ARIMA, Neural Networks, and Gaussian Processes.} \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{ p{1.82cm}p{0.89cm}p{0.89cm}p{0.89cm} } \toprule \multicolumn{1}{c}{\textbf{US Embassy city}} &\multicolumn{3}{c}{\textbf{RMSE}}\\ \midrule & \textbf{{\footnotesize ARIMA}} &\textbf{{\footnotesize ANN}}&\textbf{{\footnotesize GPR}}\\ \midrule Kigali & 18.441 &17.770 &26.295 \\ Conakry & 10.544 & 11.386 &27.336 \\ Bamako &7.989 &8.138 &29.383 \\ Lagos &11.494 &11.932 &42.882 \\ Abuja & 28.168 &29.253 &67.153 \\ Libreville &3.756 &4.244 &3.888 \\ Algiers &5.785 &6.260 &10.954 \\ Accra &15.328 &16.672 &39.869 \\ Kinshasa& 0.789 &0.773 &1.631 \\ Kampala&12.179 &13.056 &27.059 \\ Nairobi&4.265 & 4.480 &2.692 \\ N'Djamena&108.71 &123.67 &165.84 \\ Khartoum& 60.488 &60.170 &231.69 \\ Addis Ababa& 5.801 &6.237 & 15.368 \\ Antananarivo& 4.282 &4.534 &5.664 \\ Abidjan& 7.660 &7.594 &20.509 \\ \bottomrule \end{tabular} % } \label{tab:my_label} \end{table} \FloatBarrier \vspace{0.9mm} According to the RMSE results, both ARIMA and the neural network outperform the Gaussian process. This means that neural networks, as a machine learning approach, can be used to analyze and predict time series data, and thus to address real-world problems like air pollution. Although the results of Gaussian process regression are not as precise as those of the other models, the study shows that it can also be used to forecast air quality data. \section{Conclusion} This study has focused on the analysis of PM2.5 variations across Africa. As has been demonstrated in this paper, data-driven models such as the Auto-Regressive Integrated Moving Average, Neural Networks, and Gaussian Process Regression can be used to forecast PM2.5 trends in Africa; a minimal sketch of how such models can be fit is given below.
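The following minimal sketch is our illustration only, not the code used in this study; the synthetic series, the 80/20 train-test split, the ARIMA order, the network architecture, and the Gaussian process kernel are all assumptions. It shows how the three models of Section 2 can be fit to an hourly PM2.5-like series and compared by RMSE on a held-out test set:
\begin{verbatim}
# Illustrative sketch only: fit ARIMA, a neural network, and a Gaussian
# process to a synthetic hourly series standing in for city PM2.5 data,
# then compare root mean square errors on a held-out test split.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
t_all = np.arange(500)
y = 40 + 10 * np.sin(2 * np.pi * t_all / 24) + rng.normal(0, 3, 500)

split = int(0.8 * len(y))                    # assumed 80/20 split
y_train, y_test = y[:split], y[split:]
h = len(y_test)

# ARIMA(p, d, q); the order (2, 1, 2) is an illustrative choice.
arima = ARIMA(y_train, order=(2, 1, 2)).fit()
pred_arima = arima.forecast(steps=h)

# Neural network: autoregression on a window of 24 lagged values,
# forecasting recursively one step at a time.
lag = 24
X_tr = np.array([y[i:i + lag] for i in range(split - lag)])
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                  random_state=0).fit(X_tr, y[lag:split])
window, pred_nn = list(y_train[-lag:]), []
for _ in range(h):
    yhat = nn.predict(np.array(window[-lag:])[None, :])[0]
    pred_nn.append(yhat)
    window.append(yhat)

# Gaussian process regression on the time index with an RBF kernel.
T = t_all.astype(float)[:, None]
gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(),
                              normalize_y=True).fit(T[:split], y_train)
pred_gp = gp.predict(T[split:])

for name, pred in [("ARIMA", pred_arima), ("ANN", pred_nn), ("GPR", pred_gp)]:
    print(name, "RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
\end{verbatim}
In practice, the hourly series of each city would replace the synthetic series, and the hyper-parameters would be tuned per city.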
Future work should consider other air pollutants that affect human health, such as PM10, SO$_2$, NO$_x$, CO, and O$_3$. \section{Acknowledgments} The authors, P.G. and J-R.K., would like to thank the African Institute for Mathematical Sciences for its assistance, as well as for the financial support provided by the Canadian government through Global Affairs Canada and the International Development Research Centre. \bibliographystyle{abbrv}
\section{Introduction}\label{sec:intro} Video recordings collapse a moving three-dimensional world onto a two-dimensional screen. How do the projected two-dimensional images vary with time? This apparent two-dimensional motion is called {\em optical flow}. More precisely, the optical flow at a frame in a video is a vector field, where the vector at each pixel points to where that pixel appears to move for the subsequent frame~\cite{beauchemin1995computation}. Algorithms estimating optical flow must exploit or make assumptions about the prior statistics of optical flow. Indeed, it is impossible to recover the optical flow vector field using only a video recording. For example, if one is given a video of a spinning barber's pole, one does not know (a priori) whether the pole is moving up, or instead spinning horizontally. The estimation of optical flow from a video sequence is a useful step in many computer vision tasks~\cite{barron1994performance,fleet2006optical}, including for example robotic motion. Therefore there is substantial interest in the statistics of optical flow. One example database of optical flow is from the video short \emph{Sintel}, which is a computer-generated 3D film. The movements and textures in \emph{Sintel} are complex and the scenes are relatively long. Furthermore, since the film is open source, the optical flow data is available for analysis (see Figure~\ref{fig:sampleFlow}), as described in detail by~\cite{butler2012naturalistic}. As no instrument measures ground-truth optical flow, databases of optical flow must be carefully generated, and \emph{Sintel} is one of the richest such datasets. \begin{figure}[htb] \centering \includegraphics[scale=0.1]{ambush4frame7horiz.png} \hspace{0.3mm} \includegraphics[scale=0.1]{ambush4frame7vert.png}\\ \vspace{1mm} \includegraphics[scale=0.1]{temple3frame22horiz.png} \hspace{0.3mm} \includegraphics[scale=0.1]{temple3frame22vert.png} \caption{Two sample optical flows extracted from the \emph{Sintel} database. (Left column) horizontal components and (Right column) vertical components of optical flow. White corresponds to flow in the positive direction ($+x$ or $+y$), and black corresponds to the negative direction.} \label{fig:sampleFlow} \end{figure} We study the nonlinear statistics of a space of high-contrast $3\times 3$ optical flow patches from the \emph{Sintel} dataset using the topological machinery of~\cite{carlsson2008local} and~\cite{Range}. We identify the topologies of dense subsets of this space using Vietoris--Rips complexes and persistent homology. The densest patches lie near a circle, the \emph{horizontal flow circle}~\cite{NEBinTDA}. In a more refined analysis, we select out the optical flow patches whose predominant direction of flow is in a small bin of angle values. The patches in each such bin are well-modeled by a circle; each such circle is explained by the nonlinear statistics of range image patches. These circles at different angles stitch together to form a torus model for optical flow. We explain the torus model via the mathematical data of a \emph{fiber bundle}, and experimentally verify the torus using zigzag persistence. One could use the torus model for the nonlinear statistics of optical flow for optical flow compression.
Indeed, a $3\times 3$ optical flow patch could be stored not as a list of 9 vectors in $\mathbb{R}^2$, but instead as: \begin{itemize}[noitemsep] \item an average flow vector in $\mathbb{R}^2$, \item two real numbers parametrizing a patch on a 2-dimensional torus, and \item a $3\times 3$ collection of error vectors, whose entries will tend to be small in magnitude. \end{itemize} This is the first step towards a Huffman-type code~\cite{huffman1952method}, as used for example in JPEG~\cite{wallace1992jpeg}, and considered for optical images in~\cite{carlsson2008local} using a Klein bottle model. \cite{perea2014klein} projects image patches to this Klein bottle and uses Fourier-theoretic ideas to create a rotation-invariant descriptor for texture; perhaps similar ideas with projections to the flow torus could be used to identify different classes of optical flow (for example, flow from an indoor scene versus flow from an outdoor scene). In Section~\ref{sec:related} we overview prior work, in Section~\ref{sec:topo} we introduce our topological techniques, and in Section~\ref{sec:space} we describe the spaces of high-contrast optical flow patches. We present our main results in Section~\ref{sec:res}. Our code is available at \url{https://bitbucket.org/Cross_Product/optical_flow/}. A preliminary version of this paper appeared in~\cite{adams2019nonlinear}. We have added an orientation check to distinguish between the torus and the Klein bottle, and furthermore, we describe the sense in which the base space of our fiber bundle model arises from the (nonlinear) statistics of range image patches. \section{Prior Work}\label{sec:related} In the field of computer vision, a computer takes in visual data, analyzes the data via various statistics, and then outputs information or a decision based on the data. Optical flow is commonly computed in computer vision tasks such as facial recognition~\cite{bao2009liveness}, autonomous driving~\cite{kitti2013dataset}, and tracking problems~\cite{horn1981determining}. Even though no instrument measures optical flow, there is a variety of databases that have reconstructed ground-truth optical flow via secondary means. The Middlebury dataset in~\cite{baker2011database} ranges from real stereo imagery of rigid scenes to realistic synthetic imagery; the database contains public ground-truth optical flow training data along with sequestered ground-truth data for the purpose of testing algorithms. The data from~\cite{ucl2012opticalFlow} consists of twenty different synthetic scenes with the camera and movement information provided. The KITTI Benchmark Suite~\cite{kitti2013dataset} uses a car mounted with two cameras to film short clips of pedestrians and cars; attached scanning equipment allows one to reconstruct the underlying ground-truth optical flow for data testing and error evaluation. The database by~\cite{roth2007spatial} does not include accompanying video sequences. Indeed, Roth and Black generate optical flow for a wide variety of natural scenes by pairing camera motions with range images (a range image contains a distance at each pixel); the resulting optical flow can be calculated from the geometry of the static scene and of the camera motion. Only static scenes (with moving cameras) are included in this database: no objects in the field of view move independently.
By contrast, the ground-truth \emph{Sintel} optical flow database by~\cite{butler2012naturalistic}, which we study in this paper, is computed directly by projecting the 3-dimensional geometry underlying the film. Foundational papers that have analyzed the statistics of optical images from the perspective of computational topology include~\cite{lee2003nonlinear}, which proposes a circular model for $3\times 3$ optical image patches, and~\cite{carlsson2008local}, which uses persistent homology to extend this circular model to both a three-circle model and a Klein bottle model for different dense core subsets. \section{Methods}\label{sec:topo} Using only a finite sampling from some unknown underlying space, it is possible to estimate the underlying space's topology using persistent homology, as done by~\cite{carlsson2008local} for optical image patches. For more information on homology see~\cite{armstrong2013basic,Hatcher}, for introductions to persistent homology see~\cite{Carlsson2009,EdelsbrunnerHarer,edelsbrunner2000topological,zomorodian2005computing}, and for applications of persistent homology to sensor networks, machine learning, biology, and medical imaging, see~\cite{PersistentImages,baryshnikov2009target,bendich2016persistent,bubenik2015statistical,chung2009persistence,lum2013extracting,Coordinate-free,topaz2015topological,xia2014persistent}. First we thicken our finite sampling $X$ into a larger space giving an approximate cellularization of the unknown underlying space. We use a \emph{Vietoris--Rips simplicial complex} of $X$ at scale $r\ge 0$, denoted $\vr{X}{r}$. The vertex set is some metric space (or data set) $(X,d)$, and $\vr{X}{r}$ has a finite subset $\sigma\subseteq X$ as a face whenever $\mathrm{diam}(\sigma)\leq r$ (i.e., whenever $d(x,x')\leq r$ for all vertices $x,x'\in\sigma$). By definition, $\vr{X}{r} \subseteq \vr{X}{r'}$ whenever $r\leq r'$, so this forms a nested sequence of spaces as the scale $r$ increases. For example, let $X$ be 21 points which (initially unknown to us) are sampled with noise from a circle. Figure~\ref{fig:rips} contains four nested Vietoris--Rips complexes built from $X$, with $r$ increasing. The black dots denote $X$. At first $r$ is small enough that a loop has not yet formed. As $r$ increases, we recover instead a figure-eight. For larger $r$, $\vr{X}{r}$ recovers a circle. Finally, $r$ is large enough that the loop has filled to a disk. \begin{figure}[htp] \centering \includegraphics[scale=0.23]{Rips2} \includegraphics[scale=0.23]{Rips3} \includegraphics[scale=0.23]{Rips4} \includegraphics[scale=0.23]{Rips6} \caption{Four nested Vietoris--Rips complexes, with $\beta_0$ equal to 1 in all four complexes, and with $\beta_1$ equal to 0, 2, 1, and 0.} \label{fig:rips} \end{figure} \begin{figure}[htp] \centering \subfigure{\includegraphics[width=3.5in]{barcode1}} \subfigure{\includegraphics[width=3.5in]{barcode2rescaled}} \caption{(Top) The $0$-dimensional persistence barcode associated to the dataset in Figure~\ref{fig:rips}. (Bottom) The $1$-dimensional persistence barcode associated to the same dataset.} \label{fig:circleBarcodes} \end{figure} Next we apply homology, an algebraic invariant. The $k$-th Betti number of a topological space, denoted $\beta_k$, roughly speaking counts the number of ``$k$-dimensional holes" in a space. More precisely, $\beta_k$ is the rank of the $k$-th homology group. As a first example, the number of 0-dimensional holes in any space is the number of connected components.
For an $n$-dimensional sphere with $n\ge1$, we have $\beta_0=1$ (one connected component) and $\beta_n=1$ (one $n$-dimensional hole). The choice of scale $r$ is important when attempting to estimate the topology of an underlying space by a Vietoris--Rips complex $\vr{X}{r}$ of a finite sampling $X$. Indeed, without knowing the underlying space, we do not know how to choose the scale $r$. We therefore use persistent homology~\cite{edelsbrunner2000topological,EdelsbrunnerHarer,zomorodian2005computing}, which allows us to compute the Betti numbers over a range of scale parameters $r$. Persistent homology relies on the fact that the map from a topological space $Y$ to its $k$-th homology group $H_k(Y)$ is a \emph{functor}: for $r\leq r'$, the inclusion $\vr{X}{r}\hookrightarrow \vr{X}{r'}$ of spaces induces a map $H_k\bigl(\vr{X}{r}\bigr)\to H_k\bigl(\vr{X}{r'}\bigr)$ between homology groups. Figure~\ref{fig:circleBarcodes} displays persistent homology barcodes, with the horizontal axis encoding the varying $r$-values. At a given scale $r$, the Betti number $\beta_k$ is the number of intervals in the dimension $k$ plot that intersect the vertical line through scale $r$. The dimension $0$ plot shows $21$ disjoint vertices joining into one connected component as $r$ increases. In the dimension $1$ plot, the two intervals correspond to the two loops that appear: each interval begins when a loop forms and ends when that loop fills to a disk. For a long range of $r$-values, the topological profile $\beta_0=1$ and $\beta_1=1$ is obtained. Hence, this barcode reflects the fact that our points $X$ were noisily sampled from a circle. Indeed, the idea of persistent homology is that long intervals in the persistence barcodes typically correspond to real topological features of the underlying space. In zigzag persistence~\cite{carlsson2009zigzag,ZigzagPersistence}, a generalization of persistent homology, the direction of the maps along a sequence of topological spaces is arbitrary. For example, given a large dataset $Y$, one may attempt to estimate the topology of $Y$ by instead estimating the topology of a number of smaller subsets $Y_i\subseteq Y$. Indeed, consider the following diagram of inclusion maps between subsets of the data. \begin{equation*} Y_1\hookrightarrow Y_1 \cup Y_2 \hookleftarrow Y_2\hookrightarrow Y_2 \cup Y_3\hookleftarrow Y_3\hookrightarrow\cdots \hookleftarrow Y_n. \end{equation*} Applying the Vietoris--Rips construction at scale parameter $r$ and $k$-dimensional homology, we obtain an induced sequence of linear maps \begin{center} \begin{tikzpicture} \node at (-0.5, 0.45) (a) {$H_k\bigl(\vr{Y_1}{r}\bigr)$}; \node at (1, -0.45) (b) {$H_k\bigl(\vr{Y_1 \cup\, Y_2}{r}\bigr)$}; \node at (2.5, 0.45) (c) {$H_k\bigl(\vr{Y_{2}}{r}\bigr)$}; \node at (4, -0.45) (d) {$\cdots$}; \node at (5.5, 0.45) (e){$H_k\bigl(\vr{Y_n}{r}\bigr)$}; \draw [->] (a) -- (b); \draw [->] (c) -- (b); \draw [->] (c) -- (d); \draw [->] (e) -- (d); \end{tikzpicture} \end{center} which is an example of a \textit{zigzag diagram}. Such a sequence of linear maps provides the ability to track features contributing to homology among the samples $Y_i$. Generators for $H_k(\vr{Y_i}{r})$ and $H_k(\vr{Y_{i+1}}{r})$ which map to the same generator of $H_k\bigl(\vr{Y_i \cup\, Y_{i+1}}{r}\bigr)$ indicate a feature common to both $Y_i$ and $Y_{i+1}$. By tracking features common to all samples $Y_i$, we can estimate the topology of $Y$ without explicitly computing the persistent homology of the entire dataset.
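For concreteness, the following minimal sketch (our illustration; the \texttt{ripser} and \texttt{persim} packages are assumptions about tooling, not the software used for the computations in this paper) computes Vietoris--Rips persistence barcodes for a noisy circle sample like the one in Figure~\ref{fig:rips}:
\begin{verbatim}
# Illustration only: Vietoris--Rips persistent homology of 21 points
# sampled with noise from a circle, as in the example of Figures 2-3.
import numpy as np
from ripser import ripser
from persim import plot_diagrams

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 21)
X = np.column_stack([np.cos(theta), np.sin(theta)])
X += rng.normal(0, 0.1, X.shape)

# Persistence diagrams in dimensions 0 and 1 across all scales r.
diagrams = ripser(X, maxdim=1)["dgms"]

# One long interval in dimension 1 reflects the underlying circle.
plot_diagrams(diagrams, show=True)
\end{verbatim}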
\section{Experiments on Spaces of Flow Patches}\label{sec:space} \textit{Sintel}~\cite{roosendaal2010} is an open-source computer-animated film containing a variety of realism-enhancing effects, including widely-varied motion, illumination, and blur. The MPI-\emph{Sintel} optical flow dataset~\cite{butler2012naturalistic} contains $1041$ optical flow fields from 23 indoor or outdoor scenes in this film. Each flow field is $1024\times 436$ pixels, and scenes are up to 49 frames long. In preprocessing steps similar to those of~\cite{Range,carlsson2008local,lee2003nonlinear}, we create two types of spaces of high-contrast optical flow patches, $X(k,p)$ and $X_\theta(k,p)$. The version $X_\theta(k,p)$ includes only those optical flow patches whose predominant angle is near $\theta\in[0,\pi)$. Step 1: From the MPI-\emph{Sintel} database, we choose a random set of $4\cdot10^5$ optical flow patches of size $3\times3$. Each patch is a matrix of ordered pairs, where we denote by $u_i$ and $v_i$ the horizontal and vertical components of the flow vector at pixel $i$, arranged as follows. \[\begin{bmatrix} (u_1,v_1) & (u_4,v_4) & (u_7,v_7)\\ (u_2,v_2) & (u_5,v_5) & (u_8,v_8)\\ (u_3,v_3) & (u_6,v_6) & (u_9,v_9) \end{bmatrix}\] We rearrange each patch $x$ to be a length-18 vector, \\ $x=(u_1, \ldots, u_9, v_1, \ldots, v_9)^T \in \mathbb{R}^{18}$. Let $u$ and $v$ be the vectors of horizontal and vertical flow, namely $u=(u_1, u_2,\ldots, u_9)^T$ and $v=(v_1, v_2,\ldots, v_9)^T$. Step 2: We compute the contrast norm $\|x\|_D$ for each patch $x$ by summing the squared differences between all adjacent pixels and then taking the square root: \begin{align*} \|x\|_D^2 &=\sum_{i \sim j} \|(u_i,v_i)-(u_j,v_j)\|^2 \\ & =\sum_{i \sim j}(u_i-u_j)^2+(v_i-v_j)^2 =u^TDu+v^TDv. \end{align*} Here $i\sim j$ denotes that pixels $i$ and $j$ are adjacent in the $3\times 3$ patch, and $D$ is a symmetric positive-definite $9\times9$ matrix that stores the adjacency information of the pixels in a $3\times 3$ patch~\cite{lee2003nonlinear}. Step 3: We study only high-contrast flow patches, which we expect to follow a different distribution than low-contrast patches. Indeed, we select those patches that have a contrast norm among the top 20\% of the entire sample. We replace each selected patch $x$ with its contrast-normalized patch $x/\|x\|_D$, mapping each patch onto the surface of an ellipsoid. Dividing by contrast norm zero is not a concern, as such patches are not high-contrast. Step 4: We further normalize the patches to have zero average flow. We replace each contrast-normalized vector $x$ with $(u_1-\bar{u}, \ldots, u_9-\bar{u}, v_1-\bar{v}, \ldots, v_9-\bar{v})^T$, where $\bar{u}=\frac{1}{9}\sum_{i=1}^{9}u_i$ is the average horizontal flow, and $\bar{v}=\frac{1}{9}\sum_{i=1}^{9}v_i$ is the average vertical flow. The significance of studying mean-centered optical flow patches is that one can represent any optical flow patch as its mean vector plus a mean-centered patch. Step 5: In the case of $X_\theta(k,p)$ (as opposed to $X(k,p)$), we compute the predominant direction of each mean-centered flow patch, as follows. For each $3 \times 3$ patch, construct a $9 \times 2$ matrix $X$ whose $i$-th row is $(u_i , v_i)\in\mathbb{R}^2$. Apply principal component analysis (PCA) to $X$ to retrieve the principal component with the greatest component variance (i.e., the direction that best approximates the deviation from the mean).
We define the \emph{predominant direction} of this patch to be the angle of this direction (in $[0,\pi)$ or $\mathbb{RP}^1$). Select only those patches whose predominant direction is in the range of angles from $\theta-\frac{\pi}{12}$ to $\theta+\frac{\pi}{12}$. Step 6: If we have more than 50,000 patches, then for the sake of computational feasibility we subsample down to 50,000 random patches. Step 7: At this stage we have at most 50,000 high-contrast normalized optical flow patches. We restrict to dense core subsets thereof, instead of trying to approximate the topology of such a diverse space. We use the density estimator $\rho_k$, where $\rho_k(x)$ is the distance from $x$ to its $k$-th nearest neighbor; $\rho_k$ is inversely related to the density. We obtain a more local (or global) estimate of density by decreasing (or increasing) the choice of $k$. Based on the density estimator $\rho_k$, we select out the top $p\%$ densest points. We denote this set of patches by $X(k,p)$ (or $X_\theta(k,p)$ in the case where Step~5 is performed). Some remarks on the preprocessing steps are in order. Studying only high-contrast patches (Step~3) prevents the dataset from being a ``cone" (with apex a constant gray patch), and hence contractible. The dataset is still extremely high-dimensional, however. Dividing by the contrast norm in Step~3 maps the data from $\mathbb{R}^{18}$ to a 17-dimensional sphere thereof, and Step~4 maps the data to a 15-dimensional sphere. Though the normalizations are important for our analysis, the normalized data is still 15-dimensional, and hence it is not at all clear that we will succeed in Section~\ref{sec:res} in finding 1- and 2-dimensional models for dense core subsets of this 15-dimensional data. Our models will only be for dense core subsets of the data, which are produced via the density thresholding in Step~7. Some optical flow patches, such as those created by zooming in or zooming out on a flat wall, are certainly present in the \emph{Sintel} dataset, but not with high-enough frequency to remain after the density thresholding in Step~7 nor to appear in our 1- and 2-dimensional models in Section~\ref{sec:res}. Though \emph{Sintel} is one of the richest optical flow datasets, we emphasize that it is created synthetically, and to some degree its statistics will vary from the optical flow in real-life videos. We would be interested in the patterns that arise in larger (say $5\times 5$ or $7\times 7$) patches, though in this paper we restrict attention to $3\times 3$ patches. \section{Results and Theory}\label{sec:res} Before describing our results on optical flow patches, we first describe the nonlinear statistics of range image patches (which contain a distance at each pixel), which will play an important role in the theory behind our results. \cite{lee2003nonlinear} observes that high-contrast $3\times3$ range patches from~\cite{huang2000statistics} cluster near binary patches. \cite{Range} uses persistent homology to find that the densest range clusters are arranged in the shape of a circle. After enlarging to $5\times5$ or $7\times7$ patches, the entire primary circle in Figure~\ref{fig:primBin}~(left) is dense. The patches forming the range primary circle are binary approximations to linear step edges; see Figure~\ref{fig:primBin}~(right).
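As a concrete companion to the preprocessing steps of Section~\ref{sec:space}, the following minimal sketch (our illustration, not the original pipeline; the 4-neighbor pixel adjacency used to build $D$ and the library choices are assumptions) implements Steps 2--4 and the density thresholding of Step 7:
\begin{verbatim}
# Illustration only: contrast norm via the adjacency matrix D (Step 2),
# contrast normalization of the top 20% (Step 3), mean-centering (Step 4),
# and the k-th nearest neighbor density estimate rho_k (Step 7).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def grid_laplacian():
    # Graph Laplacian of the 3x3 pixel grid; 4-neighbor adjacency is
    # our assumption for the relation i ~ j.
    idx = np.arange(9).reshape(3, 3)
    A = np.zeros((9, 9))
    for r in range(3):
        for c in range(3):
            for dr, dc in [(0, 1), (1, 0)]:
                if r + dr < 3 and c + dc < 3:
                    i, j = idx[r, c], idx[r + dr, c + dc]
                    A[i, j] = A[j, i] = 1
    return np.diag(A.sum(1)) - A

D = grid_laplacian()  # then u^T D u = sum over adjacent pairs (u_i - u_j)^2

def preprocess(patches):
    # patches: (N, 18) array with columns (u_1..u_9, v_1..v_9).
    u, v = patches[:, :9], patches[:, 9:]
    contrast = np.sqrt(np.einsum("ni,ij,nj->n", u, D, u)
                       + np.einsum("ni,ij,nj->n", v, D, v))   # Step 2
    keep = contrast >= np.quantile(contrast, 0.8)             # top 20%
    x = patches[keep] / contrast[keep, None]                  # Step 3
    x[:, :9] -= x[:, :9].mean(axis=1, keepdims=True)          # Step 4
    x[:, 9:] -= x[:, 9:].mean(axis=1, keepdims=True)
    return x

def dense_core(x, k=300, p=30):
    # rho_k = distance to the k-th nearest neighbor; keep densest p%.  Step 7
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(x).kneighbors(x)
    rho = dist[:, -1]
    return x[rho <= np.quantile(rho, p / 100)]
\end{verbatim}
\begin{figure}[htp] \centering \includegraphics[height=2.5cm]{primarycircle} \hspace{1mm} \includegraphics[scale=0.15]{binaryApprox} \caption{(Left) Range patch primary circle.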
White regions are background; black regions are foreground. (Right) The top row contains linear step edges; the bottom row contains their range image binary approximations.} \label{fig:primBin} \end{figure} \subsection{The horizontal flow circle}\label{ssec:hor} Using the nudged elastic band method,~\cite{NEBinTDA} found that the dense core subset $X(300,30)$ is well-modeled by a horizontal flow circle. We instead project onto suitable basis vectors in order to explain this circular model. Let $e_1, e_2, \ldots, e_8$ be the discrete cosine transform (DCT) basis for $3\times3$ scalar patches, normalized to have mean zero and contrast norm one~\cite{lee2003nonlinear}. We rearrange each $e_i$ to be a vector of length 9. For each $i=1,2, \ldots,8$, we define optical flow vectors $e_i^u=\begin{pmatrix}e_i \\ \vec{0}\end{pmatrix}$ and $e_i^v=\begin{pmatrix}\vec{0} \\ e_i\end{pmatrix}$, where $\vec{0}\in\mathbb{R}^9$ is the vector of all zeros. The vectors $e_i^u,e_i^v\in\mathbb{R}^{18}$ correspond respectively to optical flow in the horizontal and vertical directions; four of these basis vectors are shown in Figure~\ref{fig:dct}. We change coordinates from the canonical basis for $\mathbb{R}^{18}$ to the 16 basis vectors $e_1^u, \ldots, e_8^u, e_1^v, \ldots,e_8^v$ (only 16 basis vectors are needed to model patches with zero average flow). Projecting $X(300,30)$ onto basis vectors $e_1^u$ and $e_2^u$, as shown in Figure~\ref{fig:gk300c30}~(left), reveals the circular topology. \begin{figure}[htp] \centering \includegraphics[height=1.63in]{patches_k300p30.pdf} \hspace{1mm} \includegraphics[height=1.63in]{HorizontalFlowCircle.pdf} \caption{ (Left) Projection of $X(300,30)$ onto $e_1^u$ and $e_2^u$. (Right) The horizontal flow circle. The patch at angle $\alpha$ is $\cos(\alpha)e_1^u+\sin(\alpha)e_2^u$. } \label{fig:gk300c30} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=3in]{dct.pdf} \caption{In the $e_1$ and $e_2$ DCT patches, white pixels are positive and black negative. The arrows in the flow patches $e_1^u$, $e_2^u$, $e_1^v$, and $e_2^v$ show the optical flow vector field patch. } \label{fig:dct} \end{figure} We denote by $S^1$ the interval $[0,2\pi]$ with its endpoints identified. The patches in $X(300,50)$ lie near $\{\cos(\alpha)e_1^u+\sin(\alpha)e_2^u \ |\ \alpha\in S^1\}$, which we call the {\em horizontal flow circle}. To explain why the horizontal circle is high-density, we will use the statistics of both the camera motion database and the range image database. \begin{wrapfigure}{r}{0.6in} \centering \vspace{-\intextsep} \includegraphics[width=0.6in]{camera.pdf} \vspace{-2\intextsep} \end{wrapfigure} Any camera motion can be decomposed into six sub-motions: translation in the $x$, $y$, or $z$ direction (commonly referred to as right-left, up-down, or inward-outward translation), and rotation about the $x$, $y$, or $z$ axis (commonly referred to as pitch, yaw, or roll). For $\theta\in S^1$, we will refer to \emph{$\theta$ camera translation}, by which we mean translation of $\cos(\theta)$ units to the right, $\sin(\theta)$ units up, and no units inwards or outwards. The most common camera translations are when $\theta=0$ or $\pi$, i.e.\ when the camera is translated to the left or right, for example if the camera is mounted on a horizontally moving car or held by a horizontally walking human~\cite{roth2007spatial}.
Adams and Carlsson~\cite{Range} show that high-contrast range patches are dense near the range patch primary circle $\{\cos(\alpha)e_1+\sin(\alpha)e_2\ |\ \alpha\in S^1\}$ (see Figure~\ref{fig:primBin}(a)). Consider pairing the common horizontal ($\theta=0$ or $\pi$) camera translations with primary circle range patches. Under camera translation in the $xy$ plane, the flow vector at a foreground pixel has the same direction as, but greater magnitude than, the flow vector at a background pixel. After the mean-centering normalization in Step~4 of Section~\ref{sec:space}, a $\theta=0$ camera translation over the range patch $\cos(\alpha)e_1+\sin(\alpha)e_2$ produces the optical flow patch $\cos(\alpha)e_1^u+\sin(\alpha)e_2^u$, and a $\theta=\pi$ camera translation produces the flow patch $-\cos(\alpha)e_1^u-\sin(\alpha)e_2^u$. Hence when applied to all primary circle range patches, $\theta=0$ or $\pi$ translations produce the horizontal flow circle $\{\cos(\alpha)e_1^u+\sin(\alpha)e_2^u\ |\ \alpha\in S^1\}$ in Figure~\ref{fig:gk300c30}. \subsection{Fiber bundles}\label{ssec:fiber} Our torus model for the MPI-\emph{Sintel} optical flow dataset is closely related to the notion of a fiber bundle. A fiber bundle is a tuple $(E,B,f,F)$, where $E$, $B$, and $F$ are topological spaces, and where $f\colon E\to B$ is a continuous map satisfying the so-called \textit{local triviality} condition. The space $B$ is the \emph{base space}, $E$ is the \emph{total space}, and $F$ is the \emph{fiber}. The local triviality condition states that given $b\in B$, there exists an open set $U\subseteq B$ containing $b$ and a homeomorphism $\varphi\colon f^{-1}(U)\to U\times F$ such that $\text{proj}_{U}\circ \varphi=f|_{f^{-1}(U)}$, where $\text{proj}_U$ denotes the projection onto the $U$--component. In other words, we require $f^{-1}(U)$ to be homeomorphic to $U\times F$ in a consistent fashion. Therefore, for any $p\in B$, we have $f^{-1}(\{p\})\cong F$. Locally, the total space $E$ looks like $B\times F$, whereas globally, the different copies of the fiber $F$ may be ``twisted'' together to form $E$. Both the cylinder and the M\"{o}bius band are fiber bundles with base space the circle $S^1$, and with fibers the unit interval $[0,1]$. The cylinder is the product space $S^1\times [0,1]$, whereas in the M\"{o}bius band, the global structure encodes a ``half twist'' as one loops around the circle. Locally, however, both spaces look the same, as each has the same fiber above each point of $S^1$. The torus and the Klein bottle similarly are each fiber bundles over $S^1$, with fibers also $S^1$; indeed, they are the only two circle bundles over the circle. Here the torus is the product space $S^1\times S^1$, whereas the Klein bottle is again ``twisted''. We use persistent homology in Section~\ref{ssec:tor} to justify a model for the MPI-\emph{Sintel} dataset that is naturally equipped with the structure of a fiber bundle over a circle, with each fiber being a circle. A priori it is not clear whether this fiber bundle model should be the orientable torus or the nonorientable Klein bottle (which does occur in nature, as in the space of optical image patches~\cite{carlsson2008local}). Via an orientation check, we will furthermore show that this optical flow fiber bundle model is a torus. \subsection{A torus model for optical flow}\label{ssec:tor} We now describe the torus model for high-contrast patches of optical flow.
Define the map $f:S^1\times S^1 \to \mathbb{R}^{18}$ via \begin{equation*} f(\alpha,\theta)=\cos(\theta)\Bigl(\cos(\alpha)e_1^u+\sin(\alpha)e_2^u\Bigr)+\sin(\theta)\Bigl(\cos(\alpha)e_1^v+\sin(\alpha)e_2^v\Bigr). \end{equation*} For $(\alpha,\theta)\in S^1\times S^1$, this means that $f(\alpha,\theta)$ is the optical flow patch produced from $\theta$ camera translation over the primary circle range patch $\cos(\alpha)e_1+\sin(\alpha)e_2$. One obtains the horizontal flow circle by restricting to common camera motions $\theta\in\{0,\pi\}$, and by allowing the range patch parameter $\alpha\in S^1$ to be arbitrary. When both parameters are allowed to vary over $S^1$, we hypothesize that a larger model for flow patches is produced. Therefore, we ask: What is the image space $\mathrm{im}(f)$, i.e., what space do we get when both inputs $\alpha$ and $\theta$ are varied? \begin{figure}[htp] \centering \includegraphics[scale=0.13]{tor1}\\ \includegraphics[scale=0.13]{tor2} \hspace{5mm} \includegraphics[scale=0.13]{tor3} \caption{(Top) The domain of $f$, namely $\{(\alpha,\theta)\in S^1\times S^1\}$. (Bottom) The flow torus $\mathrm{im}(f)$. The horizontal axis is the angle $\alpha$, and the vertical axis is the angle $\theta$ (respectively $\theta-\alpha$ on the right). The horizontal flow circle is in red.} \label{fig:tor} \end{figure} \begin{wrapfigure}{r}{0.3in} \centering \vspace{-\intextsep} \includegraphics[width=0.3in]{samplePatch.pdf} \vspace{-\intextsep} \end{wrapfigure} Figure~\ref{fig:tor}~(top) shows the domain of $f$, namely $\{(\alpha,\theta)\in S^1\times S^1\}$. This space, obtained by identifying the outside edges of the square as indicated by the arrows, is a torus. In the insert to the right we show a sample patch on this torus. The black and white rectangles are the foreground and background regions, respectively, of the underlying range patch. The angle of the line separating the foreground and background regions is given by the parameter $\alpha$. The direction $\theta$ of camera translation is given by the black arrow ($>$, $\vee$, $<$, or $\wedge$) in the foreground rectangle. The black and white arrows together show the induced optical flow vector field $f(\alpha,\theta)$. In Figure~\ref{fig:tor}~(top), parameter $\alpha$ varies in the horizontal direction, and parameter $\theta$ varies in the vertical direction. For two points $(\alpha,\theta),(\alpha',\theta')\in S^1\times S^1$, we have \[f(\alpha,\theta)=f(\alpha',\theta') \Leftrightarrow (\alpha,\theta)=(\alpha',\theta') \mbox{ or } (\alpha,\theta)=(\alpha'+\pi,\theta'+\pi).\] This means that under the map $f$, points of Figure~\ref{fig:tor}~(top) that differ by the half-period shift $(\pi,\pi)$ produce the same flow patch. For instance, the horizontal flow circle appears twice in red (note the top and bottom edges of the square are identified). It follows that the image space $\mathrm{im}(f)$ is homeomorphic to the quotient space $\{(\alpha,\theta)\in S^1\times S^1\} / \sim$, where $\sim$ denotes the identification $(\alpha,\theta)\sim(\alpha+\pi,\theta+\pi)$. Since this identification is a fixed-point-free translation, the quotient of the torus is again a torus, and we refer to $\mathrm{im}(f)$ as the {\em optical flow torus}; see Figure~\ref{fig:tor}~(bottom). The right and left edges of the bottom right image are identified by shifting one upwards by half its length (not by twisting) before gluing, which suggests a change of coordinates. In Figure~\ref{fig:tor}~(bottom right) we plot the same flow torus, except that we replace the vertical parameter $\theta$ with $\theta-\alpha$. The horizontal flow circle in red now wraps once around one circular direction, and twice around the other direction.
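As a sanity check, the map $f$ and the half-period identification can be verified numerically. The sketch below is an illustrative aside: it reuses the simplified gradient stand-ins for $e_1,e_2$ from the sketch in Section~\ref{ssec:hor} (the identification depends only on the trigonometric structure of $f$, not on the particular basis), and it produces a point cloud on the flow torus for later experiments.
\begin{verbatim}
import numpy as np

def normalize(v):
    v = v - v.mean()
    return v / np.linalg.norm(v)

# Simplified stand-ins for the DCT vectors e1, e2 (see Section 3.1 sketch)
e1 = normalize(np.tile([-1.0, 0.0, 1.0], 3))
e2 = normalize(np.repeat([-1.0, 0.0, 1.0], 3))
z9 = np.zeros(9)
e1u, e2u = np.r_[e1, z9], np.r_[e2, z9]
e1v, e2v = np.r_[z9, e1], np.r_[z9, e2]

def f(alpha, theta):
    # theta camera translation over the range patch cos(a)e1 + sin(a)e2
    return (np.cos(theta) * (np.cos(alpha) * e1u + np.sin(alpha) * e2u)
            + np.sin(theta) * (np.cos(alpha) * e1v + np.sin(alpha) * e2v))

# Half-period identification: f(a, t) == f(a + pi, t + pi)
a, t = 0.7, 2.1
assert np.allclose(f(a, t), f(a + np.pi, t + np.pi))

# A point cloud on the flow torus, for use in later computations
grid = np.linspace(0, 2 * np.pi, 30, endpoint=False)
torus_sample = np.array([f(a, t) for a in grid for t in grid])
\end{verbatim}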
\begin{figure}[htb] \centering \includegraphics[scale=0.12]{angle0k300p50} \hspace{5mm} \includegraphics[scale=0.12]{angle2k300p50} \hspace{5mm} \includegraphics[scale=0.12]{angle4k300p50}\\ \vspace{5mm} \includegraphics[scale=0.12]{angle6k300p50} \hspace{5mm} \includegraphics[scale=0.12]{angle8k300p50} \hspace{5mm} \includegraphics[scale=0.12]{angle10k300p50} \caption{The set of patches in $X_\theta(300,50)$, projected to the plane spanned by the basis vectors $\cos(\theta)e_1^u+\sin(\theta)e_1^v$ (horizontal axis) and $\cos(\theta)e_2^u+\sin(\theta)e_2^v$ (vertical axis). From left to right, we show $\theta=0,\frac{\pi}{6},\frac{2\pi}{6},\frac{3\pi}{6},\frac{4\pi}{6},\frac{5\pi}{6}$. The projected circles group together to form a torus. } \label{fig:torus_projections} \end{figure} We provide experimental evidence that $\mathrm{im}(f)$, the optical flow torus, is a good model for high-contrast optical flow. Figures~\ref{fig:torus_projections} and~\ref{fig:torus_fiber_persistence} show that for any angle $\theta$, the patches $X_\theta(300,50)$ (with predominant flow in direction $\theta$) form a circle. These circles group together to form a torus, which furthermore has a natural fiber bundle structure. Indeed, the map from the torus to the predominant angle $\theta$ of each patch is a fiber bundle, whose total space is a torus, whose base space is the circle of all possible predominant angles $\theta$, and whose fibers are circles (arising from the range image primary circle in Figure~\ref{fig:primBin}). \begin{figure}[htb] \centering \includegraphics[height=1.6in]{angle3k300c50_diagram1} \hspace{10mm} \includegraphics[height=1.6in]{angle7k300c50_diagram1} \caption{The 1-dimensional persistent homology of Vietoris--Rips complexes of $X_\theta(300,50)$, computed in Ripser~\cite{bauer2017ripser}, illustrates that these data sets are well-modeled by circles (one significant 1-dimensional feature in the top left of each plot). Persistence diagrams contain the same content as persistence intervals, just in a different format: each point is a topological feature whose birth and death scales are given by its $(x,y)$ coordinates. We plot two sample angles: $\theta=\frac{3\pi}{12}$ (left) and $\theta=\frac{7\pi}{12}$ (right).} \label{fig:torus_fiber_persistence} \end{figure} To confirm that the circular fibers glue together to form a fiber bundle, we do a zigzag persistence computation. \begin{center} \begin{tikzpicture} \node at (-0.5, 0.45) (a) {$X_0(300,50)$}; \node at (1, -0.45) (b) {$X_0(300,50) \cup X_\frac{\pi}{12}(300,50)$}; \node at (2.5, 0.45) (c) {$X_\frac{\pi}{12}(300,50)$}; \node at (4, -0.45) (d) {$\cdots$}; \node at (5.5, 0.45) (e){$X_\frac{11\pi}{12}(300,50)$}; \draw [right hook ->] (a) -- (b); \draw [left hook ->] (c) -- (b); \draw [right hook ->] (c) -- (d); \draw [left hook ->] (e) -- (d); \end{tikzpicture} \end{center} Figure~\ref{fig:zigzag} shows the one-dimensional zigzag persistence of Vietoris--Rips complexes built on top of the zigzag diagram above, confirming that the circles piece together compatibly into a fiber bundle structure. \begin{figure}[htb] \centering \includegraphics[width=3.5in]{zig-zag_v4.pdf} \caption{A 1-dimensional zigzag persistence computation, showing that the circles in Figure~\ref{fig:torus_projections} glue together with the structure of a fiber bundle.
The 24 horizontal steps correspond to the 12 spaces $X_\theta(300,50)$ and the 12 intermediate spaces which are unions thereof.} \label{fig:zigzag} \end{figure} In more detail, for twelve different angle bins $\theta\in\{0,\frac{\pi}{12},\ldots,\frac{11\pi}{12}\}$ we construct the dense core subsets $X_{\theta}(300,50)$. For computational feasibility, we then downsample via sequential maxmin sampling~\cite{de2004topological} to reduce each set $X_{\theta}(300,50)$ to a subset of $50$ data points; Ripser computations show the persistent homology is robust to this downsampling. After building a zigzag filtration as described above, we use Dionysus~\cite{dionysus} to compute the zigzag homology barcodes in Figure~\ref{fig:zigzag}. The single long interval confirms that the circles indeed piece together compatibly. We have experimentally verified a fiber bundle model with base space a circle and with fiber a circle. As the only circle bundles over the circle are the torus and the Klein bottle, this rules out many possible shapes for our model --- our model can no longer be (for example) a sphere, a double torus, a triple torus, a projective plane, etc. It remains to identify this fiber bundle model as either the torus or the Klein bottle. One test would be to check whether the orientation on a generator for the 1-dimensional homology of $X_0(300,50)$ is preserved after looping once around the circle; generator orientation would be preserved for a torus but not for the Klein bottle. Another way to verify that this fiber bundle is a torus instead of a Klein bottle could be to use persistence for circle-valued maps~\cite{burghelea2013topological}, applied to the map from the total space to the circle that encodes the predominant angle $\theta$ of each flow patch. We instead take a computational approach that does not require tracking generators. We sample a collection of patches near the idealized optical flow torus, and compute the persistent homology of a witness complex construction both with $\mathbb{Z}/2\mathbb{Z}$ and $\mathbb{Z}/3\mathbb{Z}$ coefficients (Figure~\ref{fig:orientation}). We obtain the Betti signature $\beta_0=1$, $\beta_1=2$, $\beta_2=1$ with both choices of coefficients, confirming that we indeed have a torus. By contrast, a Klein bottle with $\mathbb{Z}/3\mathbb{Z}$ coefficients would lose one long bar in each of homological dimensions one and two, giving $\beta_0=1$, $\beta_1=1$, $\beta_2=0$. \begin{figure}[htb] \centering \includegraphics[width=3.8in]{Mod3coefficients.png} \caption{With $\mathbb{Z}/3\mathbb{Z}$ coefficients, the Betti number signature $\beta_0=1$, $\beta_1=2$, $\beta_2=1$ identifies the model as a torus instead of a Klein bottle.} \label{fig:orientation} \end{figure} The 2-dimensional flow torus model does not capture all common optical flow patches; for example it omits patches arising from zooming in, zooming out, or roll camera motions.
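A minimal version of this coefficient check can be run with the \texttt{ripser} package~\cite{bauer2017ripser}, which supports persistent homology over $\mathbb{Z}/p\mathbb{Z}$ via its \texttt{coeff} parameter. The sketch below is illustrative rather than a reproduction of our computation: it uses Vietoris--Rips persistence on a greedy subsample (\texttt{n\_perm}) in place of a witness complex, the input \texttt{torus\_sample} is the point cloud from the sketch in Section~\ref{ssec:tor}, and the bar-length threshold for counting ``long'' intervals is an ad hoc choice.
\begin{verbatim}
import numpy as np
from ripser import ripser

# torus_sample: an (n, 18) array of points on (or near) the flow torus,
# e.g. as generated in the earlier sketch (assumed given).
for coeff in (2, 3):
    dgms = ripser(torus_sample, maxdim=2, coeff=coeff, n_perm=300)['dgms']
    # Crude Betti estimates: count bars longer than an ad hoc threshold.
    betti = [int(np.sum(d[:, 1] - d[:, 0] > 0.5)) for d in dgms]
    print(f"Z/{coeff}Z Betti signature:", betti)
# A torus gives (1, 2, 1) over both fields; a Klein bottle would instead
# give (1, 2, 1) over Z/2Z but (1, 1, 0) over Z/3Z.
\end{verbatim}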
\section{Conclusions}\label{sec:con} We explore the nonlinear statistics of high-contrast $3\times 3$ optical flow patches from the computer-generated video short \emph{Sintel} using topological machinery, primarily persistent homology and zigzag persistence. Since no instrument can measure ground-truth optical flow, an understanding of the nonlinear statistics of flow is needed in order to serve as a prior for optical flow estimation algorithms. With a global estimate of density, the densest patches lie near a circle, the horizontal flow circle. Upon selecting the optical flow patches whose predominant direction of flow lies in a small bin of angle values, we find that the patches in each such bin are well-modeled by a circle. By combining these bins together we obtain a torus model for optical flow, which furthermore is naturally equipped with the structure of a fiber bundle over the circle of predominant flow directions, with circular fibers arising from common range image patches. \section{Acknowledgements} We thank Gunnar Carlsson, Bradley Nelson, Jose Perea, and Guillermo Sapiro for helpful conversations. \bibliographystyle{plain}
\section{Introduction} \label{intro} In his seminal paper \cite{G}, among many other results, Gromov gave an elegant combinatorial characterization of CAT(0) cubical complexes as simply connected cubical complexes in which the links of vertices are simplicial flag complexes. Based on this result, \cite{Ch_CAT,Rol} established a bijection between the 1-skeletons of CAT(0) cubical complexes and the median graphs, well-known in metric graph theory \cite{BaCh_survey}. A similar combinatorial characterization of CAT(0) simplicial complexes having regular Euclidean simplices as cells seems to be out of reach. Nevertheless, \cite{Ch_CAT} characterized the bridged complexes (i.e., the simplicial complexes having bridged graphs as 1-skeletons) as the simply connected simplicial complexes in which the links of vertices are flag complexes without embedded 4- and 5-cycles; the bridged graphs are exactly the graphs which satisfy one of the basic features of CAT(0) spaces: the balls around convex sets are convex. Bridged graphs have been introduced and characterized in \cite{FaJa,SoCh} as graphs without embedded isometric cycles of length greater than 3 and have been further investigated in several graph-theoretical and algebraic papers; cf. \cite{AnFa,BaCh_weak,Ch_bridged,Po,Po1} and the survey \cite{BaCh_survey}. Januszkiewicz-\' Swi\c atkowski \cite{JanSwi} and Haglund \cite{Ha} rediscovered this class of simplicial complexes (they call them {\it systolic complexes}), using them (and groups acting on them geometrically---{\it systolic groups}) fruitfully in the context of geometric group theory. Systolic complexes and groups turned out to be good combinatorial analogs of CAT(0) (nonpositively curved) metric spaces and groups; cf. \cite{Ha,JanSwi,O-ciscg,OsPr,Pr2,Pr3}. One of the characteristic features of systolic complexes, related to the convexity of balls around convex sets, is the following $SD_n(\sigma^*)$ property introduced in \cite{Osajda}: {\it if a simplex $\sigma$ of a simplicial complex $\bf X$ is located in the sphere of radius $n+1$ centered at some simplex $\sigma^*$ of $\bf X$, then the set of all vertices $x$ such that $\sigma\cup\{ x\}$ is a simplex and $x$ has distance $n$ to $\sigma^*$ is a nonempty simplex $\sigma_0$ of $\bf X$.} Relaxing this condition, Osajda \cite{Osajda} called a simplicial complex $\bf X$ {\it weakly systolic} if the property $SD_n(\sigma^*)$ holds whenever $\sigma^*$ is a vertex (i.e., a 0-dimensional simplex) of $\bf X$. He further showed that this $SD_n$ property is equivalent to the $SD_n(\sigma^*)$ property in which $\sigma^*$ is a vertex and $\sigma$ is a vertex or an edge (i.e., a 1-dimensional simplex) of $\bf X$. Finally, it is shown in \cite{Osajda} that weakly systolic complexes can be characterized as simply connected simplicial complexes satisfying some local combinatorial conditions, cf. also Theorem A below. This is analogous to the cases of $CAT(0)$ cubical complexes and systolic complexes. In graph-theoretical terms, the 1-skeletons of weakly systolic complexes (which we call {\it weakly bridged graphs}) satisfy the so-called triangle and quadrangle conditions \cite{BaCh_weak}, i.e., like median and bridged graphs, the weakly bridged graphs are weakly modular graphs. As is shown in \cite{Osajda} and in this paper, the properties of weakly systolic complexes closely resemble the properties of spaces of non-positive curvature.
The initial motivation of \cite{Osajda} for introducing weakly systolic complexes was to exhibit a class of simplicial complexes with some kind of simplicial nonpositive curvature that would include the systolic complexes and some other classes of complexes appearing in the context of geometric group theory. As already noted, systolic complexes are weakly systolic. Moreover, for every simply connected locally $5$-large cubical complex (i.e. $CAT(-1)$ cubical complex \cite{G}) there exists a canonically associated simplicial complex, which is weakly systolic \cite{Osajda}. In particular, the class of {\it weakly systolic groups}, i.e., groups acting geometrically by automorphisms on weakly systolic complexes, contains the class of $CAT(-1)$ cubical groups and is therefore essentially bigger than the class of systolic groups; cf. \cite{O-ciscg}. Other classes of weakly systolic groups are presented in \cite{Osajda}. The ideas and results from \cite{Osajda} made it possible to construct in \cite{O2} new examples of Gromov hyperbolic groups of arbitrarily large (virtual) cohomological dimension. Furthermore, Osajda \cite{Osajda} and Osajda-\' Swi\c atkowski \cite{OS} provide new examples of high dimensional groups with interesting asphericity properties. On the other hand, as we will show below, the class of weakly systolic complexes seems also to appear naturally in the context of graph theory and has not been studied before from this point of view. In this paper, we present further characterizations and properties of weakly systolic complexes and their 1-skeletons, weakly bridged graphs. Relying on techniques from graph theory, we establish dismantlability of locally finite weakly bridged graphs. This result is used to show some interesting nonpositive-curvature-like properties of weakly systolic complexes and groups (see \cite{Osajda} for other properties of this kind). As corollaries, we also get new results about systolic complexes and groups. We conclude this introductory section with the formulation of our main results (see the respective sections for all missing definitions and notations as well as for other related results). We start with a characterization of weakly systolic complexes proved in Section \ref{char}: \medskip \noindent {\bf Theorem A.} {\it For a flag simplicial complex $\bold X$ the following conditions are equivalent: \begin{itemize} \item[(a)] ${\bold X}$ is weakly systolic; \item[(b)] the 1-skeleton of ${\bold X}$ is a weakly modular graph without induced $C_4$; \item[(c)] the 1-skeleton of ${\bold X}$ is a weakly modular graph with convex balls; \item[(d)] the 1-skeleton of ${\bold X}$ is a graph with convex balls in which any $C_5$ is included in a 5-wheel $W_5$; \item[(e)] $\bold X$ is simply connected, satisfies the $\widehat{W}_5$-condition, and does not contain induced $C_4.$ \end{itemize} } \medskip In Section \ref{dismantlability} we prove the following result: \medskip\noindent {\bf Theorem B.} {\it Any LexBFS ordering of vertices of a locally finite weakly systolic complex $\bf X$ is a dismantling ordering of its 1-skeleton.} \medskip This dismantlability result has several consequences presented in Section \ref{dismantlability}. It also allows us to prove in Section \ref{fixedpt} the following fixed point theorem concerning group actions: \medskip\noindent {\bf Theorem C.} {\it Let $G$ be a finite group acting by simplicial automorphisms on a locally finite weakly systolic complex ${\bold X}$.
Then there exists a simplex $\sigma \in {\bold X}$ which is invariant under the action of $G$.} \medskip The barycenter of an invariant simplex is a point fixed by $G$. An analogous theorem holds in the case of $CAT(0)$ spaces; cf. \cite[Corollary 2.8]{BrHa}. As a direct corollary of Theorem C, we get the fixed point theorem for systolic complexes. This was conjectured by Januszkiewicz-\' Swi\c atkowski (personal communication) and Wise \cite{Wi}, and was later formulated in the collection of open questions \cite[Conjecture 40.1 on page 115]{Chat}. A partial result in the systolic case was proved by Przytycki \cite{Pr2}. In fact, in Section \ref{final}, based on a result of Polat \cite{Po} for bridged graphs, we prove an even stronger version of the fixed point theorem in this case. There are several important group theoretical consequences of Theorem C. The first one follows directly from this theorem and \cite[Remarks 7.7$\&$7.8]{Pr2}. \medskip\noindent {\bf Theorem D.} {\it Let $k\geq 6$. Free products of $k$-systolic groups amalgamated over finite subgroups are $k$-systolic. HNN extensions of $k$-systolic groups over finite subgroups are $k$-systolic.} \medskip The following result (Corollary \ref{conj} below) also has its $CAT(0)$ counterpart; cf. \cite[Corollary 2.8]{BrHa}: \medskip\noindent {\bf Corollary.} {\it Let $G$ be a weakly systolic group. Then $G$ contains only finitely many conjugacy classes of finite subgroups.} \medskip The next important consequence of the fixed point theorem concerns classifying spaces for proper group actions. Recall that if a group $G$ acts properly on a space $\bf X$ such that the fixed point set of any finite subgroup of $G$ is contractible (and therefore non-empty), then we say that $\bf X$ is a \emph{model for $\eg$}---the classifying space for finite groups. If additionally the action is cocompact, then $\bf X$ is a \emph{finite model for $\eg$}. A (finite) model for $\eg$ is in a sense a ``universal'' $G$-space (see \cite{Lu} for details). The following theorem is a direct consequence of Theorem C and Proposition \ref{inv set contr} below. \medskip\noindent {\bf Theorem E.} {\it Let $G$ act properly by simplicial automorphisms on a finite dimensional weakly systolic complex $\bf X$. Then $\bf X$ is a finite dimensional model for $\eg$. If, moreover, the action of $G$ on $\bf X$ is cocompact, then $\bf X$ is a finite model for $\eg$.} \medskip As an immediate consequence we get an analogous result about $\eg$ for systolic groups. This was conjectured in \cite[Chapter 40]{Chat}. Przytycki \cite{Pr3} showed that the Rips complex (with the constant at least $5$) of a systolic complex is an $\eg$ space. Our result gives a systolic---and thus much nicer---model of $\eg$ in that case. \medskip In the final Section \ref{final} we present some further results about systolic complexes and groups. Besides a stronger version of the fixed point theorem mentioned above, we remark on another approach to this theorem initiated by Zawi\' slak \cite{Z} and Przytycki \cite{Pr2}. In particular, our Proposition \ref{round} proves their conjecture about round complexes; cf. \cite[Conjecture 3.3.1]{Z} and \cite[Remark 8.1]{Pr2}. Finally, we show (cf. Claim \ref{Z}) how our results about $\eg$ apply to the questions of existence of particular boundaries of systolic groups (and thus to the Novikov conjecture for systolic groups with torsion). This relies on earlier results of Osajda-Przytycki \cite{OsPr}.
\section{Preliminaries} \subsection{Graphs and simplicial complexes} We continue with the basic definitions concerning graphs and simplicial complexes used in this paper. All graphs $G=(V,E)$ occurring here are undirected, connected, and without loops or multiple edges. The {\it distance} $d(u,v)$ between two vertices $u$ and $v$ is the length of a shortest $(u,v)$-path, and the {\it interval} $I(u,v)$ between $u$ and $v$ consists of all vertices on shortest $(u,v)$-paths, that is, of all vertices (metrically) {\it between} $u$ and $v$: $$I(u,v)=\{ x\in V: d(u,x)+d(x,v)=d(u,v)\}.$$ An induced subgraph of $G$ (or the corresponding vertex set $A$) is called {\it convex} if it includes the interval of $G$ between any pair of its vertices. By the {\it convex hull} conv$(W)$ of $W$ in $G$ we mean the smallest convex subset of $V$ (or induced subgraph of $G$) that contains $W.$ An {\it isometric subgraph} of $G$ is an induced subgraph in which the distances between any two vertices are the same as in $G.$ In particular, convex subgraphs are isometric. The {\it ball} (or disk) $B_r(x)$ of center $x$ and radius $r\ge 0$ consists of all vertices of $G$ at distance at most $r$ from $x.$ In particular, the unit ball $B_1(x)$ comprises $x$ and the neighborhood $N(x)$ of $x.$ The {\it sphere} $S_r(x)$ of center $x$ and radius $r\ge 0$ consists of all vertices of $G$ at distance exactly $r$ from $x.$ The ball $B_r(S)$ centered at a convex set $S$ is the union of all balls $B_r(x)$ with centers $x$ from $S.$ The {\it sphere} $S_r(S)$ of center $S$ and radius $r\ge 0$ consists of all vertices of $G$ at distance exactly $r$ from $S.$ A graph $G$ is called {\it thin} if for any two nonadjacent vertices $u,v$ of $G$ any two neighbors of $v$ in the interval $I(u,v)$ are adjacent. A graph $G$ is {\it weakly modular} \cite{BaCh_weak,BaCh_survey} if its distance function $d$ satisfies the following conditions: \medskip\noindent {\it Triangle condition} (T): for any three vertices $u,v,w$ with $1=d(v,w)<d(u,v)=d(u,w)$ there exists a common neighbor $x$ of $v$ and $w$ such that $d(u,x)=d(u,v)-1.$ \medskip\noindent {\it Quadrangle condition} (Q): for any four vertices $u,v,w,z$ with $d(v,z)=d(w,z)=1$ and $2=d(v,w)\le d(u,v)=d(u,w)=d(u,z)-1,$ there exists a common neighbor $x$ of $v$ and $w$ such that $d(u,x)=d(u,v)-1.$ An abstract {\it simplicial complex} ${\bold X}$ is a collection of sets (called {\it simplices}) such that $\sigma\in {\bold X}$ and $\sigma'\subseteq \sigma$ implies $\sigma'\in {\bold X}.$ The {\it geometric realization} $\vert {\bold X}\vert$ of a simplicial complex is the polyhedral complex obtained by replacing every face $\sigma$ of $\bf X$ by a ``solid'' regular simplex $|\sigma|$ of the same dimension such that realization commutes with intersection, that is, $|\sigma'|\cap |\sigma''|=|\sigma'\cap \sigma''|$ for any two simplices $\sigma'$ and $\sigma''.$ Then $\vert {\bold X}\vert=\bigcup\{ |\sigma|:\sigma\in {\bold X}\}.$ $\bold X$ is called {\it simply connected} if it is connected and if every continuous mapping of the 1-dimensional sphere $S^1$ into $|{\bold X}|$ can be extended to a continuous mapping of the disk $D^2$ with boundary $S^1$ into $|{\bold X}|$.
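The triangle and quadrangle conditions are finite and local, so for small graphs they can be verified by brute force; this is a convenient way to build intuition for weak modularity. The following sketch is purely illustrative (it is not part of the theory developed in this paper): it represents a finite connected graph as a Python adjacency dictionary, computes all distances by breadth-first search, and checks (T) and (Q) directly. For instance, it rejects the 5-cycle $C_5$ and accepts the 5-wheel $W_5$.
\begin{verbatim}
from itertools import combinations
from collections import deque

def bfs_distances(adj, source):
    # BFS distances from `source` to every vertex of the connected graph
    dist = {source: 0}
    queue = deque([source])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return dist

def is_weakly_modular(adj):
    # Brute-force check of the triangle (T) and quadrangle (Q) conditions
    d = {v: bfs_distances(adj, v) for v in adj}
    for u in adj:
        for v, w in combinations(adj, 2):
            common = set(adj[v]) & set(adj[w])
            closer = any(d[u][x] == d[u][v] - 1 for x in common)
            # (T): 1 = d(v,w) < d(u,v) = d(u,w)
            if d[v][w] == 1 and d[u][v] == d[u][w] > 1 and not closer:
                return False
            # (Q): d(v,w) = 2 and some common neighbor z of v,w satisfies
            #      2 <= d(u,v) = d(u,w) = d(u,z) - 1
            if d[v][w] == 2 and d[u][v] == d[u][w] >= 2:
                if any(d[u][z] == d[u][v] + 1 for z in common) and not closer:
                    return False
    return True

C5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
W5 = {**{i: [(i - 1) % 5, (i + 1) % 5, 5] for i in range(5)},
      5: list(range(5))}
print(is_weakly_modular(C5), is_weakly_modular(W5))  # False True
\end{verbatim}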
For a simplicial complex $\bold X$, denote by $V({\bold X})$ and $E({\bold X})$ the {\it vertex set} and the {\it edge set} of ${\bold X},$ namely, the set of all 0-dimensional and 1-dimensional simplices of ${\bold X}.$ The pair $(V({\bold X}),E({\bold X}))$ is called the {\it (underlying) graph} or the {\it 1-skeleton} of ${\bold X}$ and is denoted by $G({\bold X})$. Conversely, for a graph $G$ one can derive a simplicial complex ${\bold X}(G)$ (the {\it clique complex} of $G$) by taking all complete subgraphs (cliques) as simplices of the complex. A simplicial complex $\bold X$ is a {\it flag complex} (or a {\it clique complex}) if any set of vertices is included in a face of $\bold X$ whenever each pair of its vertices is contained in a face of ${\bold X}$ (in the theory of hypergraphs this condition is called conformality). A flag complex can therefore be recovered from its underlying graph $G({\bold X})$: the complete subgraphs of $G({\bold X})$ are exactly the simplices of ${\bold X}.$ The {\it link} of a simplex $\sigma$ in ${\bold X},$ denoted lk$(\sigma,{\bold X}),$ is the simplicial complex consisting of all simplices $\sigma'$ such that $\sigma \cap \sigma'=\emptyset$ and $\sigma\cup\sigma'\in {\bold X}.$ For a simplicial complex $\bold X$ and a vertex $v$ not belonging to ${\bold X},$ the {\it cone} with apex $v$ and base $\bold X$ is the simplicial complex $v\ast {\bold X}={\bold X}\cup \{ \sigma\cup \{ v\}: \sigma\in {\bold X}\}.$ For a simplicial complex $\bold X$ and any $k\ge 1,$ the {\it Rips complex} ${\bold X}_k$ is a simplicial complex with the same set of vertices as $\bold X$ and with a simplex spanned by any subset $S\subset V({\bold X})$ such that $d(u,v)\le k$ in $G({\bold X})$ for each pair of vertices $u,v\in S$ (i.e., $S$ has diameter $\le k$ in the graph $G({\bold X})$). It follows immediately from the definition that the Rips complex of any complex is a flag complex. Alternatively, the Rips complex ${\bold X}_k$ can be viewed as the clique complex ${\bold X}(G^k(\bold X))$ of the $k$th power of the graph of $\bold X$ (the {\it $k$th power} $G^k$ of a graph $G$ has the same set of vertices as $G$ and two vertices $u,v$ are adjacent in $G^k$ if and only if $d(u,v)\le k$ in $G$). \subsection{$SD_n$ property and weakly systolic complexes} The following generalization of systolic complexes has been presented by Osajda \cite{Osajda}. A flag simplicial complex ${\bold X}$ satisfies the property of {\it simple descent on balls} of radii at most $n$ centered at a simplex $\sigma^*$ (\emph{property $SD_n(\sigma^*)$} \cite{Osajda}) if for each $i=0,1,\ldots,n$ and each simplex $\sigma$ located in the sphere $S_{i+1}(\sigma^*)$ the set $\sigma_0:=V({\rm lk}(\sigma,{\bf X}))\cap B_i(\sigma^*)$ spans a non-empty simplex of $\bf X$. Systolic complexes are exactly the flag complexes which satisfy the $SD_n(\sigma^*)$ property for all simplices $\sigma^*$ and all natural numbers $n$. On the other hand, the 5-wheel is an example of a (2-dimensional) simplicial complex which satisfies the $SD_2$ property for vertices and triangles but not for edges. In view of this analogy and of subsequent results, we call {\it weakly systolic} a flag simplicial complex $\bold X$ which satisfies the $SD_n(v)$ property for all vertices $v\in V({\bold X})$ and for all natural numbers $n$. We also call {\it weakly bridged} the underlying graphs of weakly systolic complexes. It can be shown (cf.
Theorem \ref{weakly-systolic}) that $\bold X$ is a weakly systolic complex if and only if for each vertex $v$ and every $i$ it satisfies the following two conditions: \medskip\noindent {\it Vertex condition} (V): for every vertex $w \in S_{i+1}(v),$ the intersection $V({\rm lk}(w,{\bf X}))\cap B_i(v)$ is a single simplex; \medskip\noindent {\it Edge condition} (E): for every edge $e \in S_{i+1}(v),$ the intersection $V({\rm lk}(e,{\bf X}))\cap B_i(v)$ is nonempty. In fact, this is the original definition of a weakly systolic complex given in \cite{Osajda}. Notice that these two conditions imply that weakly systolic complexes are exactly the flag complexes whose underlying graphs are thin and satisfy the triangle condition. \subsection{Dismantlability of graphs and LC-contractibility of complexes} \label{dislc} Let $G=(V,E)$ be a graph and $u,v$ two vertices of $G$ such that any neighbor of $v$ (including $v$ itself) is also a neighbor of $u$. Then there is a retraction of $G$ to $G-v$ taking $v$ to $u$. Following \cite{HeNe}, we call this retraction a {\it fold} and we say that $v$ is {\it dominated} by $u.$ A finite graph $G$ is {\it dismantlable} if it can be reduced, by a sequence of folds, to a single vertex. In other words, an $n$-vertex graph $G=(V,E)$ is dismantlable if its vertices can be ordered $v_1,\ldots,v_n$ so that for each vertex $v_i, 1\le i<n,$ there exists another vertex $v_j$ with $j>i,$ such that $N_1(v_i)\cap V_i\subseteq N_1(v_j)\cap V_i,$ where $V_i:=\{ v_i,v_{i+1},\ldots,v_n\}.$ This order is called a {\it dismantling order}. Consider now for simplicial complexes $\bold X$ the analog of dismantlability investigated in the papers \cite{CiYa,Ma}. A vertex $v$ of $\bold X$ is {\it LC-removable} if lk$(v,{\bold X})$ is a cone. If $v$ is an LC-removable vertex of $\bold X$, then ${\bold X}-v:= \{ \sigma\in {\bold X}: v\notin \sigma\}$ is obtained from $\bold X$ by an {\it elementary LC-reduction} (link-cone reduction) \cite{Ma}. Then $\bold X$ is called {\it LC-contractible} \cite{CiYa} if there is a sequence of elementary LC-reductions transforming $\bold X$ to one vertex. For flag simplicial complexes, the LC-contractibility of $\bold X$ is equivalent to the dismantlability of its graph $G({\bold X})$, because an LC-removable vertex $v$ is dominated by the apex of the cone lk$(v,{\bold X})$ and, vice versa, the link of any dominated vertex $v$ is a cone having the vertex dominating $v$ as its apex. It is clear that LC-contractible simplicial complexes are collapsible (see also \cite[Corollary 6.5]{CiYa}). Dismantlable graphs are closed under retracts and direct products, i.e., they constitute a variety \cite{NowWin}. Winkler and Nowakowski \cite{NowWin} and Quilliot \cite{Qui83} characterized the dismantlable graphs as cop-win graphs, i.e., graphs in which a single cop captures the robber after a finite number of moves for all possible initial positions of the two players. The cops and robber game is a pursuit-evasion game played on finite (or infinite) undirected graphs in which the two players move alternately starting from their initial positions, where a move is to slide along an edge or to stay at the same vertex. The objective of the cop is to capture the robber, i.e., to be at some moment of time at the same vertex as the robber. The objective of the robber is to continue evading the cop.
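For finite graphs, dismantlability can be tested directly, using the standard fact that folds may be applied greedily: removing a dominated vertex preserves dismantlability, and a graph that folds down to a single vertex is cop-win by definition. The following sketch is again purely illustrative; it repeatedly searches the current graph for a dominated vertex and folds it away, reporting, for example, the 4-cycle as non-dismantlable and a path as dismantlable.
\begin{verbatim}
def is_dismantlable(adj):
    # Greedy dismantling: repeatedly fold away a dominated vertex.
    adj = {v: set(ns) for v, ns in adj.items()}
    while len(adj) > 1:
        dominated = next((v for v in adj
                          if any(u != v and adj[v] | {v} <= adj[u] | {u}
                                 for u in adj)), None)
        if dominated is None:
            return False            # no fold applies: not dismantlable
        for w in adj[dominated]:    # perform the fold G -> G - v
            adj[w].discard(dominated)
        del adj[dominated]
    return True

C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
P3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(is_dismantlable(C4), is_dismantlable(P3))  # False True
\end{verbatim}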
The simplest algorithmic way to order the vertices of a graph is to apply {\it Breadth-First Search} (BFS) starting from a root vertex (base point) $u.$ We number with 1 the vertex $u$ and put it on the initially empty queue. We repeatedly remove the vertex $v$ at the head of the queue and consequently number and place onto the queue all still unnumbered neighbors of $v$. BFS constructs a spanning tree $T_u$ of $G$ with the vertex $u$ as a root. Then a vertex $v$ is the {\it father} in $T_u$ of any of its neighbors $w$ in $G$ included in the queue when $v$ is removed (notation $f(w)=v$). The procedure is called once for each vertex $v$ and processes $v$ in $O(\deg(v))$ time, so the total complexity of the implementation is linear. Notice that the distance from any vertex $v$ to the root $u$ is the same in $G$ and in $T_u.$ Another method to order the vertices of a graph in linear time is the {\it Lexicographic Breadth-First Search} (LexBFS) proposed by Rose, Tarjan, and Lueker \cite{RoTaLu}. According to LexBFS, the vertices of a graph $G$ are numbered from $n$ to 1 in decreasing order. The {\it label} $L(w)$ of an unnumbered vertex $w$ is the list of its numbered neighbors. As the next vertex to be numbered, select the vertex with the lexicographically largest label, breaking ties arbitrarily. As in the case of BFS, we remove the vertex $v$ at the head of the queue and then number, according to the lexicographic order of their labels, all still unnumbered neighbors of $v$, placing them onto the queue. LexBFS is a particular instance of BFS, i.e., every ordering produced by LexBFS can also be generated by BFS. Anstee and Farber \cite{AnFa} established that bridged graphs are cop-win graphs. Chepoi \cite{Ch_bridged} noticed that any order of a bridged graph returned by BFS is a dismantling order. Namely, he showed a stronger result: {\it for any two adjacent vertices $v_i,v_j$ with $i<j,$ their fathers $f(v_i),f(v_j)$ either coincide or are adjacent, and moreover $f(v_j)$ is adjacent to $v_i$.} This property implies that bridged graphs admit a geodesic 1-combing and that the shortest paths participating in this combing are the paths to the root $u$ of the BFS tree $T_u$ \cite{Ch_CAT}. Similar results have been established in \cite{Ch_dpo} for larger classes of weakly modular graphs by using LexBFS instead of BFS. Notice that the notions of dismantlable graph, BFS, and LexBFS can be extended in a straightforward way to all locally finite graphs. Polat \cite{Po,Po1} defined dismantlability and BFS for arbitrary (not necessarily locally finite) graphs and extended the results of \cite{AnFa,Ch_bridged} to all bridged graphs. \subsection{Group actions on simplicial complexes} Let $G$ be a group acting by automorphisms on a simplicial complex $\bold X$. By $\mr {Fix}_G{\bold X}$ we denote the \emph{fixed point set} of the action of $G$ on $\bf X$, i.e. $\mr {Fix}_G{\bf X}=\lk x\in {\bf X}|\; Gx=\lk x\rk \rk$. Recall that the action is \emph{cocompact} if the orbit space $G\backslash {\bf X}$ is compact. The action of $G$ on a locally finite simplicial complex $\bf X$ is \emph{properly discontinuous} if stabilizers of simplices are finite. Finally, the action is \emph{geometric} (or $G$ \emph{acts geometrically} on $\bf X$) if it is cocompact and properly discontinuous. \section{Characterizations of weakly systolic complexes} \label{char} We continue with the characterizations of weakly systolic complexes and their underlying graphs; some of these characterizations have been presented also in \cite{Osajda}.
We denote by $C_k$ an induced $k$-cycle and by $W_k$ an induced $k$-wheel, i.e., an induced $k$-cycle $x_1,\ldots,x_k$ plus a central vertex $c$ adjacent to all vertices of $C_k.$ $W_k$ can also be viewed as a 2-dimensional simplicial complex consisting of $k$ triangles $\sigma_1,\ldots,\sigma_k$ sharing a common vertex $c$ and such that $\sigma_i$ and $\sigma_j$ intersect in an edge $x_ic$ exactly when $|j-i|=1\; (\mr{mod}\; k).$ In other words, lk$(c,{\bold X})=C_k.$ By $\widehat{W}_k$ we denote a $k$-wheel $W_k$ plus a triangle $ax_ix_{i+1}$ for some $i<k$ (we suppose that $a\ne c$ and that $a$ is not adjacent to any other vertex of $W_k$). We continue with a condition which basically characterizes weakly systolic complexes among simply connected flag simplicial complexes: \medskip\noindent {\sf $\widehat{W}_5$-condition}: {\it for any $\widehat{W}_5,$ there exists a vertex $v\notin \widehat{W}_5$ such that $\widehat{W}_5$ is included in $\rm{lk}(v,{\bold X}),$ i.e., $v$ is adjacent in $G({\bold X})$ to all vertices of $\widehat{W}_5$} (see Fig.~\ref{5-wheel}). \begin{figure}[t] \scalebox{0.60}{\input{5-wheel-victor.pstex_t}} \caption{The $\widehat{W}_5$-condition} \label{5-wheel} \end{figure} \begin{theorem}[Characterizations] \label{weakly-systolic} For a flag simplicial complex $\bold X$ the following conditions are equivalent: \begin{itemize} \item[(i)] ${\bold X}$ is weakly systolic; \item[(ii)] $\bold X$ satisfies the vertex condition (V) and the edge condition (E); \item[(iii)] $G({\bold X})$ is a weakly modular thin graph; \item[(iv)] $G({\bold X})$ is a weakly modular graph without induced $C_4$; \item[(v)] $G({\bold X})$ is a weakly modular graph with convex balls; \item[(vi)] $G({\bold X})$ is a graph with convex balls in which any $C_5$ is included in a 5-wheel $W_5$; \item[(vii)] $\bold X$ is simply connected, satisfies the $\widehat{W}_5$-condition, and does not contain induced $C_4.$ \end{itemize} \end{theorem} \begin{proof} The implications (i)$\Rightarrow$(ii) and (iii)$\Rightarrow$(iv) are obvious. \medskip (ii)$\Rightarrow$(iii): The condition (V) implies that all vertices of $I(u,v)$ adjacent to $v$ are pairwise adjacent, i.e., that $G({\bold X})$ is thin. On the other hand, from the condition (E) we conclude that if $1=d(v,w)<d(u,v)=d(u,w)=i+1,$ then $v$ and $w$ have a common neighbor $x$ in the sphere $S_{i}(u),$ implying the triangle condition. Finally, in thin graphs the quadrangle condition is automatically satisfied. This shows that $G({\bold X})$ is a weakly modular thin graph. \medskip (iv)$\Rightarrow$(v): To show that any ball $B_i(u)$ is convex in $G({\bold X}),$ since $G({\bold X})$ is weakly modular and $B_i(u)$ induces a connected subgraph, according to \cite{Ch_triangle} it suffices to show that the ball $B_i(u)$ is locally convex, i.e., {\it if $x,y\in B_i(u)$ and $d(x,y)=2,$ then $I(x,y)\subseteq B_i(u).$} Suppose by way of contradiction that $z\in I(x,y)\setminus B_i(u).$ Then necessarily $d(x,u)=d(y,u)=i$ and $d(z,u)=i+1.$ Applying the quadrangle condition, we infer that there exists a vertex $z'$ adjacent to $x$ and $y$ at distance $i-1$ from $u.$ As a result, the vertices $x,z,y,z'$ induce a forbidden 4-cycle, a contradiction.
\medskip (v)$\Rightarrow$(vi): Pick a 5-cycle induced by the vertices $x_1,x_2,x_3,x_4,x_5.$ Since $d(x_4,x_1)=d(x_4,x_2)=2,$ by the triangle condition there exists a vertex $y$ adjacent to $x_1,x_2,$ and $x_4.$ Since $G({\bold X})$ does not contain induced 4-cycles, $y$ must also be adjacent to $x_3$ and $x_5,$ yielding a 5-wheel. \medskip (v)$\Rightarrow$(i): Pick a simplex $\sigma$ in the sphere $S_{i+1}(u).$ Denote by $\sigma_0$ the set of all vertices $x\in S_i(u)$ such that $\sigma\cup \{ x\}$ is a simplex of $\bold X$. Since the balls of $G({\bold X})$ are convex, necessarily either $\sigma_0$ is empty or the vertices of $\sigma_0$ are pairwise adjacent, thus $\sigma_0$ and $\sigma\cup \sigma_0$ induce complete subgraphs of $G({\bold X}).$ Since $\bold X$ is a flag complex, $\sigma_0$ and $\sigma\cup \sigma_0$ are simplices. Notice that obviously $\sigma'\subseteq \sigma_0$ holds for any other simplex $\sigma'\subseteq S_i(u)$ such that $\sigma\cup\sigma'\in {\bold X}.$ Therefore, it remains to show that $\sigma_0$ is non-empty. Let $x$ be a vertex of $S_i(u)$ which is adjacent to the maximum number of vertices of $\sigma.$ Since $G({\bold X})$ is weakly modular and $\sigma$ is contained in $S_{i+1}(u)$, the vertex $x$ must be adjacent to at least two vertices of $\sigma.$ Suppose by way of contradiction that $x$ is not adjacent to a vertex $v\in \sigma.$ Pick any neighbor $w$ of $x$ in $\sigma.$ By the triangle condition, there exists a vertex $y\in S_i(u)$ adjacent to $v$ and $w.$ Since $w$ is adjacent to $x,y\in S_i(u)$ and $w\in S_{i+1}(u),$ the convexity of $B_i(u)$ implies that $x$ and $y$ are adjacent. Pick any other vertex $w'$ of $\sigma$ adjacent to $x.$ Since $x$ is not adjacent to $v$ and $G({\bold X})$ does not contain induced 4-cycles, the vertices $y$ and $w'$ must be adjacent. Hence, $y$ is adjacent to $v\in \sigma$ and to all neighbors of $x$ in $\sigma,$ contrary to the choice of $x.$ Thus $x$ is adjacent to all vertices of $\sigma,$ i.e., $\sigma_0\ne \emptyset.$ This shows that $\bold X$ satisfies the $SD_n(u)$ property. \medskip (vi)$\Rightarrow$(vii): To show that a flag complex ${\bold X}$ is simply connected, it suffices to prove that every simple cycle in the underlying graph of $\bold X$ is a modulo 2 sum of its triangular faces. Notice that the isometric cycles of an arbitrary graph $G$ constitute a basis of cycles of $G$. Since $G({\bold X})$ is a graph with convex balls, the isometric cycles of $G({\bold X})$ have length 3 or 5 \cite{FaJa,SoCh}. By (vi), any 5-cycle $C$ of $G({\bold X})$ extends to a 5-wheel, thus $C$ is a modulo 2 sum of triangles. Hence ${\bold X}$ is indeed simply connected. That $\bold X$ does not contain induced 4-cycles and 4-wheels follows from the convexity of balls. Finally, pick an extended 5-wheel $\widehat{W}_5:$ let $x_1,x_2,x_3,x_4,x_5$ be the vertices of the 5-cycle, $y$ be the center of the 5-wheel, and $x_1,x_2,z$ be the vertices of the pendant triangle. Since $x_3$ and $x_5$ are not adjacent and the balls of $G({\bold X})$ are convex, necessarily $d(z,x_4)=2.$ Let $u$ be a common neighbor of $z$ and $x_4.$ If $u$ is adjacent to one of the vertices $x_2$ and $x_3,$ then by convexity of balls it will also be adjacent to the second vertex and to $y$. And if $u$ is adjacent to $y,$ then it will be adjacent to $x_1$ and therefore to $x_5$ as well. Hence, in this case $u$ will be adjacent to all vertices $x_1,x_2,x_3,x_4,x_5,$ and $y,$ and we are done.
So, we can suppose that $u$ is not adjacent to any one of the vertices $x_1,x_2,x_3,x_5,$ and $y.$ As a result, we obtain two 5-cycles induced by the vertices $z,x_2,x_3,x_4,u$ and $z,x_1,x_5,x_4,u.$ Each of these cycles extends to a 5-wheel. Let $v$ be the center of the 5-wheel extending the first cycle. To avoid a 4-cycle induced by the vertices $x_2,v,x_4,y,$ the vertices $v$ and $y$ must be adjacent. Subsequently, to avoid a 4-cycle induced by the vertices $y,v,z,x_1,$ the vertices $v$ and $x_1$ must be adjacent. Finally, to avoid a 4-cycle induced by $x_1,v,x_4,x_5,$ the vertices $v$ and $x_5$ must be adjacent. In this way, we obtain that $v$ is adjacent to all six vertices of $\widehat{W}_5$, establishing the $\widehat{W}_5$-condition. \medskip (vii)$\Rightarrow$(iv): To prove this implication, as in \cite{Ch_CAT}, we will use minimal disk diagrams. Let ${\mathcal D}$ and ${\bold X}$ be two simplicial complexes. A map $\varphi:V({\mathcal D})\rightarrow V({\bold X})$ is called {\it simplicial} if $\varphi(\sigma)\in {\bold X}$ for all $\sigma\in {\mathcal D}.$ If ${\mathcal D}$ is a planar triangulation (i.e., the 1--skeleton of ${\mathcal D}$ is an embedded planar graph all of whose interior 2--faces are triangles) and $C=\varphi(\partial {\mathcal D}),$ then $({\mathcal D},\varphi)$ is called a {\it singular disk diagram} (or Van Kampen diagram) for $C$ (for more details see \cite[Chapter V]{LySch}). According to Van Kampen's lemma (\cite{LySch}, pp.150--151), for every cycle $C$ of a simply connected simplicial complex one can construct a singular disk diagram. A singular disk diagram with no cut vertices (i.e., its 1--skeleton is 2--connected) is called a {\it disk diagram.} A {\it minimal (singular) disk} of $C$ is a (singular) disk diagram ${\mathcal D}$ of $C$ with a minimum number of 2--faces. This number is called the {\it (combinatorial) area} of $C$ and is denoted Area$(C).$ The minimal disk diagrams $({\mathcal D},\varphi)$ of simple cycles $C$ in 1--skeletons of simply connected simplicial complexes have the following properties \cite{Ch_CAT}: (1) $\varphi$ bijectively maps $\partial {\mathcal D}$ to $C$ and (2) the image of a 2--simplex of $\mathcal D$ under $\varphi$ is a 2--simplex, and two adjacent 2--simplices of $\mathcal D$ have distinct images under $\varphi.$ Let $C$ be a simple cycle in the underlying graph $G({\bold X})$ of a flag simplicial complex $\bold X$ satisfying the condition (vii). \medskip\noindent {\bf Claim 1:} {\it If $C$ has length 5, then the minimal disk diagram of $C$ is a 5-wheel. Otherwise, $C$ admits a minimal disk diagram $\mathcal D$ which is a systolic complex, i.e., a plane triangulation all of whose inner vertices have degree $\ge 6.$} \medskip\noindent {\bf Proof of Claim 1:} First we show that any minimal disk diagram $\mathcal D$ of $C$ contains no interior vertices of degree 3 or 4. Let $x$ be any interior vertex of $\mathcal D$.
Let $x_1,\ldots,x_k$ be the neighbors of $x,$ where $\sigma_i=xx_ix_{i+1\,({\rm mod}\; k)}$ $(i=1,\ldots,k)$ are the faces incident to $x.$ Trivially, $k\geq 3.$ Suppose by way of contradiction that $k\leq 4.$ By properties of minimal disk diagrams, $\varphi(\sigma_1),\ldots,\varphi(\sigma_k)$ are distinct 2--simplices of ${\bold X}.$ If $k=3$ then the 2-simplices $\varphi(\sigma_1),\varphi(\sigma_2), \varphi(\sigma_3)$ of $\bold X$ intersect in $\varphi(x)$ and pairwise share an edge of ${\bold X}.$ Therefore they are contained in a 3--simplex of ${\bold X}.$ This implies that $\delta=\varphi(x_1)\varphi(x_2)\varphi(x_3)$ is a 2--face of ${\bold X}.$ Let ${\mathcal D}'$ be a disk triangulation obtained from ${\mathcal D}$ by deleting the vertex $x$ and the triangles $\sigma_1, \sigma_2,\sigma_3,$ and adding the 2--simplex $x_1x_2x_3.$ The map $\varphi: V({\mathcal D}')\rightarrow V({\bold X})$ is simplicial, because it maps $x_1x_2x_3$ to $\delta.$ Therefore $({\mathcal D}',\varphi)$ is a disk diagram for $C,$ contrary to the minimality of ${\mathcal D}.$ Now suppose that $x$ has four neighbors. The cycle $(x_1,x_2,x_3,x_4,x_1)$ is sent to a 4--cycle of lk$(\varphi(x),{\bold X}),$ in which two opposite vertices, say $\varphi(x_1)$ and $\varphi(x_3),$ are adjacent. Consequently, $\delta'=\varphi(x_1)\varphi(x_3)\varphi(x_2)$ and $\delta''=\varphi(x_1)\varphi(x_3)\varphi(x_4)$ are 2--faces of ${\bold X}.$ Let ${\mathcal D}'$ be a disk triangulation obtained from ${\mathcal D}$ by deleting the vertex $x$ and the triangles $\sigma_i (i=1,\ldots,4),$ and adding the 2--simplices $\sigma'=x_1x_3x_2$ and $\sigma''=x_1x_3x_4.$ The map $\varphi$ remains simplicial, since it sends $\sigma',\sigma''$ to $\delta',\delta'',$ respectively, contrary to the minimality of ${\mathcal D}.$ This establishes that the degree of each interior vertex $x$ of any minimal disk diagram is $\ge 5.$ Suppose now additionally that $\mathcal D$ is a minimal disk diagram of $C$ having a minimum number of inner vertices of degree 5. With some abuse of notation, we will denote the vertices of $\mathcal D$ and their images in $\bold X$ under $\varphi$ by the same symbols. Let $x$ be any interior vertex of $\mathcal D$ of degree 5 and let $x_1,\ldots,x_5$ be the neighbors of $x.$ If $C=(x_1,x_2,x_3,x_4,x_5,x_1),$ then we are done because $\mathcal D$ is a 5-wheel. Now suppose that one of the edges of the 5-cycle $(x_1,x_2,x_3,x_4,x_5,x_1),$ say $x_1x_2$, belongs in ${\mathcal D}$ to a second triangle $x_1x_2x_6.$ The minimality of $\mathcal D$ implies that $x_1,x_2,x_3,x_4,x_5,x_6$ induce in $\bold X$ a $\widehat{W}_5.$ By the $\widehat{W}_5$-condition, there exists a vertex $y$ of $\bold X$ which is adjacent to all vertices of this $\widehat{W}_5.$ Let ${\mathcal D}'$ be a disk triangulation obtained from ${\mathcal D}$ by deleting the vertex $x$ and the five triangles incident to $x$ as well as the triangle $x_1x_2x_6$ and replacing them by the six triangles of the resulting 6-wheel centered at $y$ (we call this operation a {\it flip}). The resulting map $\varphi$ remains simplicial. ${\mathcal D}'$ has the same number of triangles as $\mathcal D,$ therefore ${\mathcal D}'$ is also a minimal disk diagram for $C.$ The flip replaces the vertex $x$ of degree 5 by the vertex $y$ of degree 6; it preserves the degrees of all other vertices except the vertices $x_1$ and $x_2,$ whose degrees decrease by 1.
If the degrees of $x_1$ and $x_2$ in $\mathcal D$ are $\ge 7,$ then we will obtain a contradiction with the minimality of $\mathcal D$. The same contradiction is obtained when the degree of $x_1$ and/or $x_2$ is at most 6 but the respective vertex belongs to $\partial {\mathcal D}$. So suppose that $x_1$ is an interior vertex of $\mathcal D$ and that its degree is at most 6. If the degree of $x_1$ is 5, then in ${\mathcal D}'$ the degree of $x_1$ will be 4, which is impossible by what has been shown above because ${\mathcal D}'$ is also a minimal disk diagram and $x_1$ is an interior vertex of ${\mathcal D}'.$ Hence the degree of $x_1$ in $\mathcal D$ is 6 and its neighbors constitute an induced 6-cycle $(x_6,x_2,x,x_5,u,v,x_6).$ Using the fact established above that minimal disk diagrams for $C$ contain no interior vertices of degree 3 or 4, together with the fact that $\bold X$ does not contain induced $C_4,$ it can easily be shown that the vertices $u,v,x_6,x_2,x_3,x_4,x_5,x_1,x$ induce in $\bold X$ the same subgraph as in $\mathcal D:$ a 6-wheel centered at $x_1$ plus a 5-wheel centered at $x$ which share the two triangles $xx_1x_5$ and $xx_1x_2.$ The images in $\bold X$ of the vertices $x_5,y,x_6,v,u,x_1,x_4$ induce a $\widehat{W}_5$ consisting of a 5-wheel centered at $x_1$ and the pendant triangle $x_4yx_5.$ By the $\widehat{W}_5$-condition, there exists a vertex $z$ of $\bold X$ which is adjacent to all vertices of $\widehat{W}_5.$ If $z$ is adjacent in $\bold X$ to all vertices of the 7-cycle $(u,v,x_6,x_2,x_3,x_4,x_5,u),$ then replacing in $\mathcal D$ the 9 triangles incident to $x$ and $x_1$ by the 7 triangles of $\bold X$ incident to $z,$ we will obtain a disk diagram ${\mathcal D}''$ for $C$ having fewer triangles than $\mathcal D$, contrary to the minimality of $\mathcal D$. Therefore $z$ is different from $x$ and is not adjacent to one of the vertices $x_2,x_3.$ Since $x_1$ and $x_4$ are not adjacent and both $x$ and $z$ are adjacent to $x_1,x_4,$ to avoid an induced $C_4$ we conclude that $z$ is adjacent in $\bold X$ to $x.$ If $z$ is not adjacent to $x_2,$ then, since $x$ and $x_6$ are not adjacent, we will obtain a $C_4$ induced by $x,z,x_6,x_2.$ Thus $z$ is adjacent to $x_2,$ and therefore $z$ is not adjacent to $x_3.$ Since both $z$ and $x_3$ are adjacent to the nonadjacent vertices $x_2$ and $x_4,$ we will obtain a $C_4$ induced by $z,x_2,x_3,x_4.$ This final contradiction shows that all interior vertices of $\mathcal D$ have degrees $\ge 6,$ establishing Claim 1. \bigskip From Claim 1 we deduce that any simple cycle $C$ of the underlying graph of $\bold X$ admits a minimal disk diagram $\mathcal D$ which is either a 5-wheel or a systolic plane triangulation. We will call a {\it corner} of $\mathcal D$ any vertex $v$ of $\partial {\mathcal D}$ which belongs in $\mathcal D$ either to a unique triangle (first type) or to two triangles (second type). The corners of first type are the boundary vertices of degree two. The corners of second type are boundary vertices of degree three. In the first case, the two neighbors of $v$ are adjacent. In the second case, $v$ and its neighbors in $\partial {\mathcal D}$ are adjacent to the third neighbor of $v.$ From the Gauss--Bonnet formula and Claim 1 we infer that $\mathcal D$ contains at least three corners, and if $\mathcal D$ has exactly three corners then they are all of first type. Furthermore, if $\mathcal D$ contains four corners, then at least two of them are corners of first type.
Next we show that $G({\bold X})$ is weakly modular. To verify the triangle condition, pick three vertices $u,v,w$ with $1=d(v,w)<d(u,v)=d(u,w)=k.$ We claim that if $I(u,v)\cap I(u,w)=\{ u\},$ then $k=1.$ Suppose not. Pick two shortest paths $P'$ and $P''$ joining the pairs $u,v$ and $u,w,$ respectively, such that the cycle $C$ composed of $P',P''$ and the edge $vw$ has minimal Area$(C)$ (the choice of $v,w$ implies that $C$ is a simple cycle). Let $\mathcal D$ be a minimal disk diagram of $C$ satisfying Claim 1. Then either $\mathcal D$ has a corner $x$ different from $u,v,w$ or the vertices $u,v,w$ are the only corners of $\mathcal D.$ In the second case, $u,v,w$ are all three corners of first type, therefore the two neighbors of $v$ in $C$ will be adjacent. This means that $w$ will be adjacent to the neighbor of $v$ in $P',$ contrary to $I(u,v)\cap I(u,w)=\{ u\}.$ So, suppose that the corner $x$ exists. Assume that $x\in P'.$ Notice that $x$ is a corner of second type, for otherwise its neighbors $y,z$ in $P'$ would be adjacent, contrary to the assumption that $P'$ is a shortest path. Let $p$ be the vertex of $\mathcal D$ adjacent to $x,y,z.$ If we replace in $P'$ the vertex $x$ by $p,$ we obtain a new shortest path between $u$ and $v.$ Together with $P''$ and the edge $vw$ this path forms a cycle $C'$ whose area is strictly smaller than Area$(C),$ contrary to the choice of $C.$ This establishes the triangle condition. As to the quadrangle condition, suppose by way of contradiction that we can find distinct vertices $u,v,w,z$ such that $v,w\in I(u,z)$ are neighbors of $z$ and $I(u,v)\cap I(u,w)=\{u\},$ but $u$ is adjacent to neither $v$ nor $w.$ Again, select two shortest paths $P'$ and $P''$ between $u,v$ and $u,w,$ respectively, so that the cycle $C$ composed of $P',P''$ and the edges $vz$ and $zw$ has minimum area. Choose a minimal disk $\mathcal D$ of $C$ as in Claim 1. From the initial hypothesis concerning the vertices $u,v,w,z$ we deduce that $\mathcal D$ has at most one corner of first type, located at $u.$ Hence $\mathcal D$ contains at least four corners of second type. Since some corner $x$ is distinct from $u,v,w,z,$ proceeding in the same way as before we obtain a contradiction with the choice of the paths $P',P''.$ This shows that $u$ is adjacent to $v,w,$ establishing the quadrangle condition. Hence $G({\bold X})$ is a weakly modular graph without induced $C_4,$ concluding the proof of the implication (vii)$\Rightarrow$(iv) and of the theorem. \end{proof} \section{Properties of weakly systolic complexes} \label{prop} In this section, we establish some further combinatorial and geometrical properties of weakly systolic complexes, which are well known for systolic complexes \cite{Ha,JanSwi}. In particular, we show that weakly systolic complexes $\bold X$ satisfy the $SD_n(\sigma^*)$ property for facets $\sigma^*$ (maximal by inclusion simplices of $\bold X$). \begin{lemma}[Edges descend on balls] \label{edge_desc} Let $\sigma$ be a simplex of a weakly systolic complex $\bold X$. Let $e=zz'$ be an edge contained in the sphere $S_i(\sigma).$ Then there exists a vertex $w\in \sigma$ and a vertex $v\in B_{i-1}(\sigma)$ such that $v$ is adjacent to $z,z'$ and $d(v,w)=i-1.$ \end{lemma} \begin{proof} If there exists a vertex $w\in \sigma$ such that $z,z'\in S_i(w)$ then the assertion follows from the $SD_n$ property. Therefore suppose that such a vertex of $\bold X$ does not exist.
Let $w,w'$ be two vertices of $\sigma$ with $d(w,z)=d(w',z')=i.$ Since $d(w',z)=d(w,z')=i+1,$ we conclude that $z\in I(w,z').$ Since $w,z'\in B_i(w')$ and $z\notin B_i(w'),$ this contradicts the convexity of $B_i(w').$ \end{proof} \begin{lemma}[Big balls are convex] \label{bbac} Let $\sigma$ be a simplex of a weakly systolic complex $\bold X$ and let $i\ge 2.$ Then the ball $B_i(\sigma)$ is convex. In particular, $B_i(\sigma)\cap N(z)$ is a simplex for any vertex $z\in S_{i+1}(\sigma)$. \end{lemma} \begin{proof} Since $G({\bold X})$ is weakly modular and $B_i(\sigma)$ induces a connected subgraph, according to \cite{Ch_triangle} it suffices to show that $B_i(\sigma)$ is locally convex, i.e., if $x,y\in B_i(\sigma),$ $d(x,y)=2,$ and $z$ is a common neighbor of $x$ and $y,$ then $z\in B_i(\sigma)$. Suppose by way of contradiction that $z\in S_{i+1}(\sigma).$ Let $u$ and $v$ be vertices of $\sigma$ located at distance $i$ from $x$ and $y,$ respectively. If $u=v,$ then $x,y\in I(u,z),$ thus $x$ and $y$ must be adjacent because $G({\bold X})$ is thin. So, suppose that $u\ne v,$ i.e., $d(y,u)=d(z,u)=i+1$ holds. By the triangle condition there exists a common neighbor $w$ of $z$ and $y$ having distance $i$ to $u$. Since $x,w\in I(z,u)$ and $G({\bold X})$ is thin, the vertices $x$ and $w$ are adjacent; moreover, by the triangle condition, there exists a common neighbor $u'$ of $w$ and $x$ having distance $i-1$ to $u.$ If $d(w,v)=i+1,$ then $y,u'\in I(w,v),$ thus $y$ and $u'$ must be adjacent because $G({\bold X})$ is thin. As a result, we obtain a 4-cycle defined by $x,z,y,u'.$ Since $d(z,u)=i+1$ and $d(u',u)=i-1,$ the vertices $z$ and $u'$ cannot be adjacent, thus this 4-cycle must be induced, which is impossible. Hence $d(w,v)=i.$ Let $u''$ be a neighbor of $u$ in the interval $I(u,u')$ (it is possible that $u''=u'$). Since $d(y,u)=i+1,$ $d(u',u)=i-1,$ and $d(u',y)=2,$ we conclude that $u'\in I(y,u),$ yielding $u''\in I(u,u')\subset I(u,y).$ Since $v$ also belongs to $I(u,y)$ and $G({\bold X})$ is thin, the vertices $u''$ and $v$ must be adjacent. But in this case $d(x,v)\le 1+d(u',u'')+1=i,$ contrary to the assumption that $d(x,v)=i+1.$ This contradiction shows that $B_i(\sigma)$ is convex for any $i\ge 2.$ \end{proof} \begin{proposition}[$SD_n$ property for maximal simplices] \label{SD_n-max} A weakly systolic complex ${\bold X}$ satisfies the property $SD_n(\sigma^*)$ for any maximal simplex $\sigma^*$ of $\bold X$. \end{proposition} \begin{proof} Let $\sigma$ be a simplex of $\bold X$ located in the sphere $S_{i+1}(\sigma^*).$ For each vertex $v\in \sigma$ denote by $\sigma^*(v)$ the metric projection of $v$ in $\sigma^*,$ i.e., the set of all vertices of $\sigma^*$ located at distance $i+1$ from $v.$ Notice that the sets $\sigma^*(v)$ $(v\in \sigma)$ can be linearly ordered by inclusion. Indeed, if we suppose the contrary, then there exist two vertices $v',v''\in \sigma$ and vertices $u'\in \sigma^*(v')\setminus \sigma^*(v'')$ and $u''\in \sigma^*(v'')\setminus \sigma^*(v').$ This is impossible because in this case $v''\in I(v',u'')\setminus B_{i+1}(u')$ and $v',u''\in B_{i+1}(u'),$ contrary to the convexity of $B_{i+1}(u').$ Therefore the simplices $\sigma^*(v)$ $(v\in \sigma)$ can be linearly ordered by inclusion.
This means that $\sigma\subset S_{i+1}(u)$ holds for any vertex $u$ belonging to the intersection $\sigma^*_0=\bigcap\{ \sigma^*(v): v\in \sigma\}$ of all metric projections. Applying the $SD_n(u)$ property to $\sigma$ we conclude that the set of all vertices $x\in S_i(u)\subseteq S_i(\sigma^*)$ adjacent to all vertices of $\sigma$ is a non-empty simplex. Pick two vertices $x,y\in S_i(\sigma^*)$ adjacent to all vertices of $\sigma.$ Let $x\in S_i(u)$ and $y\in S_i(w)$ for $u,w\in \sigma^*_0.$ We assert that $x$ and $y$ are adjacent. Let $v$ be a vertex of $\sigma$ whose projection $\sigma^*(v)$ is maximal by inclusion. If $\sigma^*(v)=\sigma^*,$ then applying the $SD_n(v)$ property we conclude that there exists a vertex $v'$ at distance $i$ from $v$ and adjacent to all vertices of $\sigma^*,$ contrary to the maximality of $\sigma^*.$ Hence $\sigma^*(v)$ is a proper subsimplex of $\sigma^*.$ Let $s\in \sigma^*\setminus \sigma^*(v).$ Then $x,y\in I(v,s)$ and, since $G({\bold X})$ is thin, the vertices $x$ and $y$ must be adjacent. \end{proof} We conclude this section by showing that the systolic complexes are exactly the flag complexes satisfying $SD_n(\sigma^*)$ for all simplices $\sigma^*.$ \begin{proposition}\label{systolic} A simplicial flag complex $\bold X$ is systolic if and only if $\bold X$ satisfies the property $SD_n(\sigma^*)$ for all simplices $\sigma^*$ of ${\bold X}$ and all $n\ge 0.$ \end{proposition} \begin{proof} If $\bold X$ satisfies the property $SD_n(v)$ for all vertices, then $\bold X$ is weakly systolic by Theorem \ref{weakly-systolic}. Since $\bold X$ satisfies the property $SD_n(e)$ for all edges, $\bold X$ does not contain 5-wheels. Hence $\bold X$ is systolic. Conversely, let $\sigma^*$ be an arbitrary simplex of a systolic complex $\bold X$ and let $\sigma$ be a simplex belonging to $S_{i+1}(\sigma^*).$ Since $B_i({\sigma^*})$ is convex because $G({\bold X})$ is bridged, the set $\sigma_0$ of all vertices $x\in B_i(\sigma^*)$ such that $\sigma\cup\{ x\}\in {\bold X},$ if nonempty, is necessarily a simplex. Thus it suffices to show that $\sigma_0\ne\emptyset.$ As in the previous proof, for each vertex $v\in \sigma$ denote by $\sigma^*(v)$ the metric projection of $v$ in $\sigma^*.$ Then, as we showed in the proof of Proposition \ref{SD_n-max}, the sets $\sigma^*(v)$ can be linearly ordered by inclusion. Therefore there exists a vertex $u\in \sigma^*$ belonging to all projections $\sigma^*(v), v\in \sigma.$ Then $\sigma\subset S_{i+1}(u),$ whence $\sigma_0$ is nonempty because of the $SD_n(u)$ property. \end{proof} \section{Dismantlability of weakly bridged graphs} \label{dismantlability} In this section, we show that the underlying graphs of weakly systolic complexes are dismantlable and that a dismantling order can be obtained using LexBFS. Then we use this result to deduce several consequences about combings of weakly bridged graphs and about the collapsibility of weakly systolic complexes. Other consequences of dismantling are given in subsequent sections. \begin{theorem}[LexBFS dismantlability] \label{dismantle} Any LexBFS ordering of a locally finite weakly bridged graph $G=G({\bold X})$ is a dismantling ordering. In particular, locally finite weakly systolic complexes ${\bold X}$ and their Rips complexes ${\bold X}_k$ are LC-contractible and therefore collapsible. \end{theorem} \begin{proof} We will establish the result for finite weakly bridged graphs and finite weakly systolic complexes. The proof in the locally finite case is completely similar.
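Purely as an illustration of the statement (and not as part of the proof), the following minimal Python sketch computes a LexBFS ordering and checks the domination condition along it; the function names, the encoding of a graph as a dictionary of neighbor sets, and the tie-breaking rule are our own conventions and are not taken from the literature. Running it on the 5-wheel of Fig. \ref{5-wheel} confirms the theorem in this small case.
\begin{verbatim}
def lexbfs(adj, source):
    """Return the LexBFS order v_n, ..., v_1 starting from `source`."""
    labels = {v: [] for v in adj}      # lexicographic labels (lists of ints)
    labels[source] = [len(adj) + 1]    # force the basepoint to be picked first
    order, unnumbered = [], set(adj)
    while unnumbered:
        v = max(unnumbered, key=lambda x: labels[x])  # ties broken arbitrarily
        order.append(v)
        unnumbered.remove(v)
        k = len(adj) - len(order) + 1  # numbers n, n-1, ..., 1 are appended
        for w in adj[v] & unnumbered:
            labels[w].append(k)
    return order

def is_dismantling(adj, order):
    """Peel v_1, v_2, ... off the end of `order`; each peeled vertex must
    be dominated by a remaining neighbor u, i.e., all of its remaining
    neighbors lie in the closed neighborhood of u."""
    remaining = set(order)
    for v in reversed(order):
        remaining.remove(v)
        if not remaining:
            break
        nbrs = adj[v] & remaining
        if not any(nbrs <= (adj[u] & remaining) | {u} for u in nbrs):
            return False
    return True

# The 5-wheel: a central vertex c adjacent to all of the 5-cycle x1...x5.
W5 = {'c': {'x1', 'x2', 'x3', 'x4', 'x5'},
      'x1': {'c', 'x2', 'x5'}, 'x2': {'c', 'x1', 'x3'},
      'x3': {'c', 'x2', 'x4'}, 'x4': {'c', 'x3', 'x5'},
      'x5': {'c', 'x4', 'x1'}}
assert is_dismantling(W5, lexbfs(W5, 'x1'))
\end{verbatim}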
Let $v_n,\ldots,v_1$ be the total order returned by the LexBFS starting from the basepoint $u=v_n.$ Let $G_i$ be the subgraph of $G$ induced by the vertices $v_n,\ldots,v_i.$ For a vertex $v\ne u$ of $G,$ denote by $f(v)$ its father in the LexBFS tree $T_u,$ by $L(v)$ the list of all neighbors of $v$ labeled before $v,$ and by $\alpha(v)$ the label of $v$ (i.e., if $v=v_i,$ then $\alpha(v)=i$). We decompose the label $L(v)$ of each vertex $v$ into two parts $L'(v)$ and $L''(v):$ if $d(v,u)=i,$ then $L'(v)=L(v)\cap S_{i-1}(u)$ and $L''(v)=L(v)\cap S_i(u).$ Notice that in the lexicographic order of $L(v),$ all vertices of $L'(v)$ precede the vertices of $L''(v);$ in particular, the father of $v$ belongs to $L'(v).$ The proof of the theorem is a consequence of the following assertion, which we call the {\it fellow traveler property}: \medskip\noindent {\bf Fellow Traveler Property:} {\it If $v,w$ are adjacent vertices of $G,$ then their fathers $v'=f(v)$ and $w'=f(w)$ either coincide or are adjacent. If $v'$ and $w'$ are adjacent and $\alpha(w)<\alpha(v),$ then $w'$ is adjacent to $v$ and $v'$ is not adjacent to $w.$} \medskip Indeed, if this assertion holds, then we claim that $v_n,\ldots,v_1$ is a dismantling order. To see this, it suffices to show that any vertex $v_i$ is dominated in $G_i$ by its father $f(v_i)$ in the LexBFS tree $T_u.$ Pick any neighbor $v_j$ of $v_i$ in $G_i.$ We assert that $v_j$ coincides with or is adjacent to $f(v_i).$ This is obviously true if $f(v_j)=f(v_i).$ If $f(v_i)\ne f(v_j),$ then the Fellow Traveler Property implies that $f(v_i)$ and $f(v_j)$ are adjacent and, since $i<j,$ that $v_j$ is adjacent to $f(v_i).$ This shows that indeed any LexBFS order is a dismantling order. We will now establish the Fellow Traveler Property by induction on $i+1:=\max\{ d(u,v),d(u,w)\}.$ First suppose that $d(u,v)<d(u,w).$ Then $v,w'\in I(w,u)$ and, since $G$ is thin, we conclude that $v$ and $w'$ either coincide or are adjacent. In the first case we are done because $v$ (and therefore $w'$) is adjacent to its father $v'=f(v)$. If $v$ and $w'$ are adjacent, since $i=d(u,v)=d(u,w'),$ the vertices $v'$ and $f(w')$ coincide or are adjacent by the induction assumption. Again, if $v'=f(w'),$ the assertion is immediate. Now suppose that $v'$ and $f(w')$ are adjacent. Since $w'=f(w)$ was labeled before $v$ (otherwise the father of $w$ would be $v$ and not $w'$), $f(w')$ must be labeled before $v',$ therefore by the induction hypothesis we deduce that $v'=f(v)$ must be adjacent to $w'=f(w).$ This concludes the analysis of the case $d(u,v)<d(u,w).$ From now on, suppose that $d(u,v)=d(u,w)=i+1$ and that $\alpha(w)<\alpha(v).$ If the vertices $v'=f(v)$ and $w'=f(w)$ coincide, then we are done. If the vertices $v'$ and $w'$ are adjacent, then the vertices $v,w,w',v'$ define a 4-cycle which, by the $SD_n$ property (see Theorem \ref{weakly-systolic}), cannot be induced. Since $v$ was labeled before $w,$ the vertex $v'$ must be labeled before $w'.$ Therefore, if $v'$ is adjacent to $w,$ then LexBFS will label $w$ from $v'$ and not from $w',$ a contradiction. Thus $v'$ and $w$ are not adjacent, showing that $w'$ must be adjacent to $v,$ establishing the required assertion. So, assume by way of contradiction that the vertices $v'$ and $w'$ are not adjacent in $G.$ Then $w'$ is not adjacent to $v,$ otherwise $w',v'\in B_i(u)$ and $v\in I(v',w')\cap S_{i+1}(u),$ contrary to the convexity of the ball $B_i(u)$ (similarly, $v'$ is not adjacent to $w$).
Since $G$ is weakly modular by Theorem \ref{weakly-systolic}, by the triangle condition applied to the vertices $v,w,$ and $u,$ there exists a common neighbor $s$ of $v$ and $w$ located at distance $i$ from $u.$ Denote by $S$ the set of all such vertices $s.$ From the property $SD_n$ we infer that $S$ is a simplex of $\bold X$ (i.e., its vertices are pairwise adjacent in $G$). Since $v'$ and $w'$ do not belong to $S,$ necessarily all vertices of $S$ have been labeled later than $v'$ and $w'$ (but obviously before $v$ and $w$). Pick a vertex $s$ in $S$ with the largest label $\alpha(s)$ and set $z:=f(s).$ By the induction assumption applied to the pairs of adjacent vertices $\{ v',s\}$ and $\{ s,w'\},$ we conclude that the vertices of each of the pairs $\{ f(v'),z\}$ and $\{ z,f(w')\}$ either coincide or are adjacent. Moreover, in all cases, the vertex $z$ must be adjacent to the vertices $v'$ and $w'.$ \medskip\noindent {\bf Claim 1:} $C:=L'(v')=L'(s)=L'(w')$ and $z$ is the father of $v'$ and $w'.$ \medskip\noindent {\bf Proof of Claim 1:} Since $s$ was labeled later than $v'$ and $w',$ it suffices to show that $L'(v')=L'(s).$ Indeed, if this is the case, then necessarily $z$ is the father of $v'.$ Then, as $z$ is adjacent to $w'$ and $\alpha(w')<\alpha(v'),$ necessarily $z$ is also the father of $w'.$ Now, if $L'(w')$ and $L'(s)=L'(v')$ do not coincide, since $L'(v')$ lexicographically precedes $L''(v')$ and $L'(w')$ precedes $L''(w'),$ the fact that LexBFS labeled $v'$ before $w'$ means that $L'(v')$ lexicographically precedes $L'(w').$ Since $L'(s)=L'(v'),$ necessarily LexBFS would label $s$ before $w',$ a contradiction. This shows that $L'(s)=L'(v')$ implies the equality of the three labels $L'(v'),L'(s),$ and $L'(w').$ To show that $L'(v')=L'(s),$ since $\alpha(s)<\alpha(v'),$ it suffices to establish only the inclusion $L'(v')\subseteq L'(s).$ Suppose by way of contradiction that there exists a vertex in $L'(v')\setminus L'(s),$ i.e., a vertex $x\in S_{i-1}(u)$ which is adjacent to $v'$ but not adjacent to $s.$ Let $x$ be the vertex of $L'(v')\setminus L'(s)$ having the largest label $\alpha(x).$ Since $s$ was labeled by LexBFS later than $v',$ necessarily any vertex of $L'(s)\setminus L'(v')$ must be labeled later than $x.$ Notice that $x$ cannot be adjacent to $w',$ otherwise we obtain an induced 4-cycle formed by the vertices $v',s,w',x.$ Since $x$ is not adjacent to $v,w,$ and $s,$ we conclude that the vertices $v,w,w',z,v',s,x$ induce an extended 5-wheel. By the $\widehat{W}_5$-condition, there exists a vertex $t$ adjacent to all vertices of this $\widehat{W}_5.$ Hence $t\in S.$ Furthermore, $t$ must be adjacent to $z,$ otherwise we obtain a forbidden 4-cycle induced by the vertices $s,z,x,$ and $t.$ For the same reason, $t$ must be adjacent to any other vertex $z'$ belonging to $L'(v')\cap L'(s).$ This means that LexBFS will label $t$ before $s.$ Since $t$ belongs to $S$ and $\alpha(t)>\alpha (s),$ we obtain a contradiction with the choice of the vertex $s.$ This contradiction concludes the proof of Claim 1.
\medskip Since $v'$ and $w'$ are not adjacent and $G$ does not contain induced 4-cycles, any vertex $s'\ne s$ adjacent to $v'$ and $w'$ is also adjacent to $s.$ In particular, this shows that $L''(v')\cap L''(w')\subseteq L''(s).$ Therefore, if $L''(w')\subseteq L''(v'),$ then $L''(w')\subseteq L''(s).$ Since $v'\in L''(s)\setminus L''(w')$ and $L'(s)=L'(w')$ by Claim 1, we conclude that the vertex $s$ must be labeled before $w',$ contrary to the assumption that $\alpha(s)<\alpha(w').$ Therefore the set $B:=L''(w')\setminus L''(v')$ is nonempty. Since $v'$ was labeled before $w'$ and $L'(v')=L'(w')$ by Claim 1, we conclude that the set $A:=L''(v')\setminus L''(w')$ is nonempty as well. Let $p$ be the vertex of $A$ with the largest label $\alpha(p)$ and let $q$ be the vertex of $B$ with the largest label $\alpha(q).$ Since LexBFS labeled $v'$ before $w'$ and $L'(v')=L'(w'),$ necessarily $\alpha(q)<\alpha(p)$ holds. Since $p\in L''(v'),$ we obtain that $\alpha(w')<\alpha(v')<\alpha(p).$ Since $v'=f(v)$ and $w'=f(w),$ this shows that $p$ cannot be adjacent to the vertices $v$ and $w.$ If $s$ is adjacent to $p,$ then $p\in L''(s).$ But then from Claim 1 and the inclusion $L''(v')\cap L''(w')\subseteq L''(s)$ we infer that LexBFS must label $s$ before $w',$ contrary to the assumption that $\alpha(s)<\alpha(w').$ Therefore $p$ is not adjacent to $s$ either. On the other hand, since $\alpha(v')<\alpha(p),$ by the induction hypothesis applied to the adjacent vertices $p$ and $v',$ we infer that $z=f(v')$ must be adjacent to $p.$ Hence the vertices $v,w,w',z,v',s,p$ induce an extended 5-wheel. By the $\widehat{W}_5$-condition, there exists a vertex $t$ adjacent to all these vertices. Since $C:=L'(v')=L'(w')$ and $d(v',w')=2,$ to avoid induced 4-cycles, the vertex $t$ must be adjacent to any vertex of $C.$ For the same reason, $t$ must be adjacent to any vertex of $L''(v')\cap L''(w').$ Since additionally $t$ is adjacent to the vertex $p$ of $A$ with the highest label, necessarily $t$ will be labeled by LexBFS before $w'$ and $s.$ Since $t$ is adjacent to $v$ and $w,$ this contradicts the assumption that $w'=f(w).$ This shows that the initial assumption that $v'$ and $w'$ are not adjacent leads to a final contradiction. Hence the order returned by LexBFS is indeed a dismantling order of the weakly bridged graph $G=G({\bold X}).$ To show that any finite weakly systolic complex ${\bold X}$ is LC-contractible, it suffices to notice that, since ${\bold X}$ is a flag complex, the LC-contractibility of ${\bold X}$ is equivalent to the dismantlability of its graph $G({\bold X}),$ and hence the result follows from the first part of the theorem. To show that the Rips complex ${\bold X}_k$ is LC-contractible, since ${\bold X}_k$ is a flag complex, it suffices to show that its graph $G({\bold X}_k)$ is dismantlable. From the definition of ${\bold X}_k,$ the graph $G({\bold X}_k)$ coincides with the $k$th power $G^k$ of the underlying graph $G$ of $\bold X$. Now notice that if a vertex $v$ is dominated in $G$ by a vertex $u,$ then $u$ also dominates $v$ in the graph $G^k.$ Indeed, pick any vertex $x$ adjacent to $v$ in $G^k.$ Then $d(v,x)\le k$ in $G.$ Let $y$ be the neighbor of $v$ on some shortest path $P$ connecting the vertices $v$ and $x$ in $G.$ Since $u$ dominates $v,$ necessarily $u$ is adjacent to $y$ or $u=y.$ Hence $d(u,x)\le k$ in $G,$ therefore $u$ is adjacent to $x$ in $G^k.$ This shows that $v$ is dominated by $u$ in the graph $G^k$ as well.
Therefore the dismantling order of $G$ returned by LexBFS is also a dismantling order of $G^k,$ establishing that the Rips complex ${\bold X}_k$ is LC-contractible. This completes the proof of the theorem. \end{proof} \medskip\noindent \begin{remark} BFS orderings of weakly bridged graphs do not satisfy the property that each vertex is dominated by its father. For example, let $G$ be a 5-wheel whose vertices are labeled as in Fig. \ref{5-wheel}. If BFS starts from the vertex $x_1$ and orders the remaining vertices as $x_2,x_5,c,x_3,x_4,$ then the father of the last vertex $x_4$ is $x_5$; however, $x_5$ does not dominate $x_4$ in the whole graph. On the other hand, LexBFS starting from $x_1$ and continuing with $x_2$ will necessarily label the vertex $c$ next. As a consequence, $c$ will be the father of the last labeled vertex $x_4,$ and obviously $c$ dominates $x_4.$ Nevertheless, the order $x_1,x_2,x_5,c,x_3,x_4$ returned by BFS is a domination order of the 5-wheel. Is this true for all weakly bridged graphs? \end{remark} \begin{corollary} \label{Rips} Graphs of Rips complexes ${\bold X}_n$ of locally finite systolic and weakly systolic complexes are dismantlable. \end{corollary} \begin{corollary}\label{cop-win} Finite weakly bridged graphs are cop-win. \end{corollary} For a locally finite weakly bridged graph $G$ and an integer $k,$ denote by $G_k$ the subgraph of $G$ induced by the first $k$ labeled vertices in a LexBFS order, i.e., by the vertices of $G$ with the $k$ lexicographically largest labels. \begin{corollary}\label{G_k} Any $G_k$ is an isometric weakly bridged subgraph of $G.$ \end{corollary} \begin{proof} By Theorem \ref{dismantle}, LexBFS returns a dismantling order of $G$, hence any $G_k$ is an isometric subgraph of $G.$ Therefore $G_k$ is a thin graph, because any interval $I(x,y)$ in $G_k$ is contained in the interval of $G$ between $x$ and $y.$ Moreover, $G_k,$ as an isometric subgraph of $G$, does not contain isometric cycles of length $>5.$ Hence, by a result of \cite{SoCh,FaJa}, $G_k$ is a graph with convex balls. By Theorem \ref{weakly-systolic}(vi) it remains to show that any induced 5-cycle $C$ of $G_k$ is included in a 5-wheel. Suppose by the induction assumption that this is true for $G_{k-1}$; hence we may assume that $C$ contains the last labeled vertex of $G_k.$ Denote this vertex by $v.$ Let $x$ and $y$ be the neighbors of $v$ in $C.$ Let $v'=f(v)$ be the vertex (of $G_k$) dominating $v$ in $G_k.$ Since $C$ is induced, necessarily $v'$ is adjacent to $x$ and $y$ but different from these vertices. Denote by $C'$ the 5-cycle obtained by replacing in $C$ the vertex $v$ by $v'.$ If $C'$ is not induced, then $v'$ will be adjacent to a third vertex of $C,$ and since $G_k$ does not contain induced 4-cycles, $v'$ will be adjacent to all vertices of $C,$ showing that $C$ extends to a 5-wheel. So, suppose that $C'$ is induced. Applying the induction hypothesis to $G_{k-1},$ we conclude that $C'$ extends to a 5-wheel in $G_{k-1}.$ Let $w$ be the central vertex of this wheel. To avoid a 4-cycle induced by the vertices $x,y,v,$ and $w,$ necessarily $v$ and $w$ must be adjacent. Hence $C$ extends in $G_k$ to a 5-wheel centered at $w.$ This establishes that indeed $G_k$ is weakly bridged.
\end{proof} A {\it homomorphism} of a graph $G=(V,E)$ to itself is a mapping $\varphi: V\rightarrow V$ such that for any edge $uv\in E$ we have $\varphi(u)\varphi(v)\in E$ or $\varphi(u)=\varphi(v).$ A set $S\subset V$ is fixed by $\varphi$ if $\varphi(S)=S.$ A {\it simplicial map} on a simplicial complex ${\bold X}$ is a map $\varphi: V({\bold X})\rightarrow V({\bold X})$ such that for all $\sigma\in {\bold X}$ we have $\varphi({\sigma})\in {\bold X}.$ A simplicial map fixes a simplex $\sigma\in {\bold X}$ if $\varphi (\sigma)=\sigma$. Every simplicial map on $\bold X$ is a homomorphism of its graph $G({\bold X}).$ Every homomorphism of a graph $G$ is a simplicial map on its clique complex ${\bold X}(G).$ Therefore, if ${\bold X}$ is a flag complex, then the set of simplicial maps of ${\bold X}$ coincides with the set of homomorphisms of its graph $G({\bold X}).$ It is well known (see, for example, \cite[Theorem 2.65]{HeNe}) that any homomorphism of a finite dismantlable graph to itself fixes some clique. From Theorem \ref{dismantle} we know that the graphs of weakly systolic complexes as well as the graphs of their Rips complexes are dismantlable. Therefore, from the preceding discussion, we obtain: \begin{corollary}\label{fixed-clique} Any homomorphism of a finite weakly bridged graph $G=G({\bold X})$ to itself fixes some clique. Any simplicial map of a weakly systolic complex $\bold X$ to itself or of its Rips complex ${\bold X}_k$ to itself fixes some simplex of the respective complex. \end{corollary} Let $u$ be a base point of a graph $G.$ A {\it (geodesic)} $k$--{\it combing} \cite{ECHLPT} is a choice of a shortest path $P_{(u,x)}$ between $u$ and each vertex $x$ of $G,$ such that $P:=P_{(u,v)}$ and $Q:=P_{(u,w)}$ are $k$--fellow travelers for any adjacent vertices $v$ and $w$ of $G,$ i.e., $d(P(t),Q(t))\leq k$ for all integers $t\ge 0.$ One can imagine the union of the combing paths as a spanning tree $T_u$ of $G$ rooted at $u$ and preserving the distances from $u$ to all vertices. A natural way to comb a graph $G$ from $u$ is to run BFS and to take as the shortest path $P_{(u,x)}$ the unique path of the BFS-tree $T_u$. It is shown in \cite{Ch_CAT} that for bridged graphs this geodesic combing satisfies the 1-fellow traveler property. We now show that in the case of weakly bridged graphs the same combing property is satisfied by the paths of any LexBFS tree $T_u:$ \begin{corollary}\label{combing} Locally finite weakly bridged graphs $G$ admit a geodesic 1-combing defined by the paths of any LexBFS tree $T_u$ of $G$. \end{corollary} \begin{proof} Pick two adjacent vertices $v,w$ of $G$ and suppose that $w$ was labeled by LexBFS after $v.$ Then $d(u,v)\le d(u,w)=n.$ We proceed by induction on $n.$ Denote by $v'=f(v)$ and $w'=f(w)$ the fathers of $v$ and $w.$ By the definition of the combing, $v'$ and $w'$ are the neighbors of $v$ and $w$ in the combing paths $P_{(u,v)}$ and $P_{(u,w)},$ respectively. If $d(u,v)=d(u,w),$ then the {\it fellow traveler property} established in Theorem \ref{dismantle} shows that $v'$ and $w'$ either are adjacent or coincide. In the second case, $P_{(u,v)}$ and $P_{(u,w)}$ coincide everywhere except at the last vertices $v$ and $w.$ In the first case, since $d(u,v')=d(u,w')=n-1,$ the paths $P_{(u,v')}$ and $P_{(u,w')}$ are 1-fellow travelers. Since $P_{(u,v')}\subset P_{(u,v)}$ and $P_{(u,w')}\subset P_{(u,w)},$ we conclude that $P_{(u,v)}$ and $P_{(u,w)}$ are 1-fellow travelers as well.
Now suppose that $d(u,v)<d(u,w).$ If $w'=v,$ then $P_{(u,v)}\subset P_{(u,w)}$ and we are done. Otherwise, $w'$ is adjacent to $v$ and $v'.$ Applying the induction hypothesis to the combing paths $P_{(u,v')}\subset P_{(u,v)}$ and $P_{(u,w')}\subset P_{(u,w)},$ again we conclude that $P_{(u,v)}$ and $P_{(u,w)}$ are 1-fellow travelers. \end{proof} \section{Fixed point theorem} \label{fixedpt} In this section, we establish the fixed point theorem (Theorem C from the Introduction). We start with two auxiliary results. The first is an easy corollary of Theorem \ref{dismantle}: \begin{lemma}[Strictly dominated vertex] \label{str dominat} Let $\bold X$ be a finite weakly systolic complex. Then either $\bold X$ is a single simplex or it contains two vertices $v,w$ such that $B_1(v)$ is a proper subset of $B_1(w),$ i.e., $B_1(v)\subsetneq B_1(w).$ \end{lemma} \begin{proof} Let $v$ be the last vertex of $\bold X$ labeled by a LexBFS starting at the vertex $u$ (see Theorem \ref{dismantle}). If $d(u,v)=1,$ then the construction of our ordering implies that $B_1(u)=V({\bold X})$. Hence either there exists a vertex $w$ such that $B_1(w)\subsetneq V({\bold X})=B_1(u),$ and we are done, or every two vertices of $\bold X$ are adjacent, i.e., $\bold X$ is a simplex. Now suppose that $d(u,v)\geq 2$. From Theorem \ref{dismantle} we know that $B_1(v)\subseteq B_1(w)$, where $w$ is the father of $v$. Since $d(u,v)=d(u,w)+1\geq 2$, we conclude that $u\neq w$ and that $z\in B_1(w)\setminus B_1(v),$ where $z$ is the father of $w.$ Hence $B_1(v)$ is a proper subset of $B_1(w).$ \end{proof} \begin{lemma}[Elementary LC-reduction] \label{LC} Let $\bold X$ be a finite weakly systolic complex. Let $v,w$ be two vertices such that $B_1(v)$ is a proper subset of $B_1(w)$. Then the full subcomplex ${\bold X}_0$ of $\bf X$ spanned by all vertices of $\bold X$ except $v$ is weakly systolic. \end{lemma} \begin{proof} It is easy to see that ${\bf X}_0$ is simply connected (see also the discussion in Section \ref{dislc}). Thus, by Theorem \ref{weakly-systolic}, it suffices to show that ${\bf X}_0$ does not contain induced $4$-cycles and satisfies the $\widehat{W}_5$-condition. Since, by Theorem \ref{weakly-systolic}, $\bold X$ does not contain induced $C_4,$ the same is true for its full subcomplex ${\bf X}_0$. Let $\widehat{W}_5\subseteq {\bf X}_0$ be a given $5$-wheel plus a triangle as defined in Section \ref{char}. By Theorem \ref{weakly-systolic} there exists a vertex $v'\in {\bold X}$ adjacent in $\bold X$ to all vertices of $\widehat{W}_5$. If $v'\neq v,$ then $v'\in {\bf X}_0,$ and if $v'=v,$ then $\widehat{W}_5 \subseteq \mr{lk}(w,{\bf X}_0)$. In both cases all vertices of $\widehat{W}_5$ are adjacent to a common vertex of ${\bf X}_0$. Thus $\bold X_0$ also satisfies the $\widehat{W}_5$-condition, and hence the lemma follows. \end{proof} \begin{theorem}[The fixed point theorem] \label{fpt} Let $G$ be a finite group acting by simplicial automorphisms on a locally finite weakly systolic complex ${\bold X}$. Then there exists a simplex $\sigma \in {\bold X}$ which is invariant under the action of $G$. \end{theorem} \begin{proof} Let ${\bold X}'$ be the subcomplex of $\bold X$ spanned by the convex hull of the orbit $\{ gv:\; g\in G\}$ of an arbitrary vertex $v$ of ${\bold X}$. Then it is clear that ${\bold X}'$ is a bounded and $G$-invariant full subcomplex of $\bold X$. Moreover, as a convex subcomplex of a weakly systolic complex, ${\bold X}'$ is itself weakly systolic.
Thus there exists a minimal non-empty $G$-invariant subcomplex ${\bold X}_0$ of $\bold X$ which is itself weakly systolic. Since $\bold X$ is locally finite, ${\bold X}_0$ is finite. We assert that ${\bold X}_0$ must be a single simplex. Assume by way of contradiction that ${\bold X}_0$ is not a simplex. Then, by Lemma \ref{str dominat}, ${\bf X}_0$ contains two vertices $v,w$ such that $B_1(v)\subsetneq B_1(w)$ (i.e., $v$ is a strictly dominated vertex). Since the strict inclusion of $1$-balls is a transitive relation and ${\bold X}_0$ is finite, there exists a finite set $S$ of strictly dominated vertices of ${\bold X}_0$ with the following property: for every vertex $x\in S$ there is no vertex $y$ with $B_1(y)\subsetneq B_1(x)$. Let ${\bold X}_0'$ be the full subcomplex of $\bf X$ spanned by $V({\bold X}_0)\setminus S$. It is clear that ${\bold X}_0'$ is a non-empty $G$-invariant proper subcomplex of ${\bold X}_0$. By Lemma \ref{LC}, ${\bold X}_0'$ is weakly systolic. This contradicts the minimality of ${\bold X}_0$ and thus shows that ${\bold X}_0$ has to be a simplex. \end{proof} \begin{corollary}[Conj. classes of finite subgroups] \label{conj} Let $G$ be a group acting geometrically by automorphisms on a weakly systolic complex $\bf X$ (i.e., $G$ is weakly systolic). Then $G$ contains only finitely many conjugacy classes of finite subgroups. \end{corollary} \begin{proof} Suppose by way of contradiction that we have infinitely many conjugacy classes of finite subgroups represented by $H_1,H_2,\ldots\subset G$. Since $G$ acts geometrically on ${\bf X},$ there exists a compact subset $K\subset {\bf X}$ with $\bigcup_{g\in G} gK={\bf X}$. For $i=1,2,\ldots,$ let $\sigma_i$ be an $H_i$-invariant simplex of $\bf X$ (whose existence is ensured by the fixed point Theorem \ref{fpt}) and let $g_i\in G$ be such that $g_i(\sigma_i)\cap K\neq \emptyset$. Then $g_i(\sigma_i)$ is $g_iH_ig_i^{-1}$-invariant and $\bigcup_i g_iH_ig_i^{-1}$ is infinite. But for every element $g\in \bigcup_i g_iH_ig_i^{-1}$ we have $g(B_1(K))\cap B_1(K)\neq \emptyset,$ a contradiction with the properness of the $G$-action on $\bf X$. \end{proof} \section{Contractibility of the fixed point set} \label{contrfps} The aim of this section is to prove that the fixed point set of a group acting on a weakly systolic complex is contractible (Proposition \ref{inv set contr}). As explained in the Introduction, this result implies Theorem E asserting that weakly systolic complexes are models for $\eg$ for groups acting on them properly. Our proof closely follows Przytycki's proof of an analogous result in the case of systolic complexes \cite{Pr3}. There are, however, minor technical difficulties. In particular, since balls around simplices in weakly systolic complexes need not be convex, we have to work with other convex objects, which are defined as follows. For a simplex $\sigma$ of a simplicial complex $\bf X,$ set $K_0(\sigma)=\sigma$ and $K_i(\sigma)=\bigcap _{v\in \sigma} B_i(v)$ for $i=1,2,\ldots$. \begin{lemma}[Properties of $K_i(\sigma)$] \label{bint} Let $\sigma$ be a simplex of a weakly systolic complex $\bf X$. Then, for $i=0,1,2,\ldots$, $K_i(\sigma)$ is convex and $K_{i+1}(\sigma)\subseteq B_1(K_i(\sigma))$. \end{lemma} \begin{proof} Trivially, $K_0(\sigma)=\sigma$ is convex. For $i>0,$ $K_i(\sigma)$ is the intersection of the balls $B_i(v), v\in \sigma.$ By Theorem \ref{weakly-systolic}, balls around vertices are convex, whence $K_i(\sigma)$ is convex as well.
To establish the inclusion $K_{i+1}(\sigma)\subseteq B_1(K_i(\sigma)),$ pick any vertex $w\in K_{i+1}(\sigma).$ Let $l+1=d(w,\sigma)$ and denote by $\sigma_0$ the metric projection of $w$ in $\sigma$. By the property $SD_{l}(w),$ there exists a vertex $z\in S_{l}(w)$ adjacent to all vertices of the simplex $\sigma_0.$ Let $w'$ be a neighbor of $w$ in the interval $I(w,z).$ Then obviously $d(w',\sigma)=l$ and therefore $\sigma_0$ is the metric projection of $w'$ in $\sigma.$ Since $d(w',v)=d(w,v)-1$ for any vertex $v\in \sigma$ and $w\in K_{i+1}(\sigma),$ we conclude that $w'\in K_i(\sigma),$ whence $w\in B_1(w')\subset B_1(K_i(\sigma)).$ \end{proof} We now recall two general results proved in \cite{Pr3} which will be important in the proof of Proposition \ref{inv set contr}. \begin{proposition}[{\cite[Proposition 4.1]{Pr3}}] \label{p4.1p3} If $\mathcal C, \mathcal D$ are posets and $F_0,F_1\colon \mathcal C \to \mathcal D$ are functors such that for each object $c$ of $\mathcal C$ we have $F_0(c) \leq F_1(c)$, then the maps induced by $F_0$, $F_1$ on the geometric realizations of $\mathcal C,\mathcal D$ are homotopic. Moreover, this homotopy can be chosen to be constant on the geometric realization of the subposet of $\mathcal C$ of objects on which $F_0$ and $F_1$ agree. \end{proposition} \begin{proposition}[{\cite[Proposition 4.2]{Pr3}}] \label{p4.2p3} Let $F_0\colon \mathcal C' \to \mathcal C$ be the functor from the flag poset $\mathcal C'$ of a poset $\mathcal C$ into the poset $\mathcal C$, assigning to each object of $\mathcal C'$, which is a chain of objects of $\mathcal C$, its minimal element. Then the map induced by $F_0$ on the geometric realizations of $\mathcal C',\mathcal C$ (which are homeomorphic in a canonical way) is homotopic to the identity. \end{proposition} The following property of flag complexes will be crucial in the definition of the expansion by projection below. It says that in the weakly systolic case we can define projections on convex subcomplexes in the same way as projections on balls. \begin{lemma}[Projections on convexes] \label{proj} Let ${\bf X}$ be a simplicial flag complex and let $Y$ be a convex subset of ${\bf X}$. If a simplex $\sigma$ belongs to $S_1(Y),$ i.e., $\sigma \subseteq B_1(Y)$ and $\sigma \cap Y=\emptyset,$ then $\tau:=\mr{lk}(\sigma, {\bf X})\cap Y$ is a single simplex. \end{lemma} \begin{proof} By the definition of links, $\tau$ consists of all vertices $v$ of $Y$ adjacent in $G({\bf X})$ to all vertices of $\sigma$. Since the set $Y$ is convex and $\sigma$ is disjoint from $Y,$ necessarily the vertices of $\tau$ are pairwise adjacent. As $\bf X$ is a flag complex, $\tau$ is a simplex of $\bf X$. \end{proof} We call the simplex $\tau$ from the lemma above the \emph{projection of $\sigma$ on $Y$}. Now we are in a position to define the following notion, introduced (in a more general version) by Przytycki \cite[Definition 3.1]{Pr3} in the systolic case. Let $Y$ be a convex subset of a weakly systolic complex $\bf X$ and let $\sigma$ be a simplex in $B_1(Y)$. The \emph{expansion by projection} $e_Y(\sigma)$ of $\sigma$ is a simplex in $B_1(Y)$ defined in the following way: if $\sigma \subseteq Y,$ then $e_Y(\sigma)=\sigma;$ otherwise $e_Y(\sigma)$ is the join of $\sigma \cap S_1(Y)$ and its projection on $Y$. A version of the following simple lemma was proved in \cite{Pr3} in the systolic case; the proof given there is valid in our case as well.
\begin{lemma}[{\cite[Lemma 3.8]{Pr3}}] \label{L3.8p3} Let $Y$ be a convex subset of a weakly systolic complex $\bf X$ and let $\sigma_1\subseteq \sigma_2\subseteq\ldots \subseteq \sigma_n\subseteq B_1(Y)$ be an increasing sequence of simplices. Then the intersection $\left( \bigcap_{i=1}^{n}e_Y(\sigma_i)\right) \cap Y$ is nonempty. \end{lemma} Let $\sigma$ be a simplex of a weakly systolic complex $\bf X$. As in \cite{Pr3}, we define an increasing sequence of full subcomplexes ${\bf D}_{2i}(\sigma)$ and ${\bf D}_{2i+1}(\sigma)$ of the barycentric subdivision ${\bf X}'$ of $\bf X$ in the following way. Let ${\bf D}_{2i}(\sigma)$ be the subcomplex spanned by all vertices of ${\bf X}'$ corresponding to simplices of $\bf X$ which have all their vertices in $K_i(\sigma)$. Let ${\bf D}_{2i+1}(\sigma)$ be the subcomplex spanned by all vertices of ${\bf X}'$ which correspond to those simplices of $\bf X$ that have all their vertices in $K_{i+1}(\sigma)$ and at least one vertex in $K_i(\sigma)$. The proof of the main proposition of this section follows closely the proof of \cite[Proposition 1.4]{Pr3}. \begin{proposition}[Contractibility of the fixed point set] \label{inv set contr} Let $H$ be a group acting by simplicial automorphisms on a weakly systolic complex $\bf X$. Then the complex $\mr {Fix}_H {\bf X}'$ is contractible or empty. \end{proposition} \begin{proof} Assume that $\mr {Fix}_H{\bf X}'$ is nonempty and let $\sigma$ be a maximal $H$-invariant simplex. By ${\bf D}_i$ we will denote here ${\bf D}_i(\sigma)$. We will prove the following three assertions. \medskip\noindent (i) ${\bf D}_0\cap \mr {Fix}_H{\bf X}'$ is contractible; \medskip\noindent (ii) the inclusion ${\bf D}_{2i}\cap \mr {Fix}_H{\bf X}' \subseteq {\bf D}_{2i+1}\cap \mr {Fix}_H{\bf X}'$ is a homotopy equivalence; \medskip\noindent (iii) the identity on ${\bf D}_{2i+2}\cap \mr {Fix}_H{\bf X}'$ is homotopic to a mapping with image in ${\bf D}_{2i+1}\cap \mr {Fix}_H{\bf X}'\subseteq {\bf D}_{2i+2}\cap \mr {Fix}_H{\bf X}'$. \medskip As in the proof of \cite[Proposition 1.4]{Pr3}, the three assertions imply that ${\bf D}_{k}\cap \mr {Fix}_H{\bf X}'$ is contractible for every $k$; thus the proposition holds. To show (i), note that ${\bf D}_{0}\cap \mr {Fix}_H{\bf X}'$ is a cone over the barycenter of $\sigma$ and hence it is contractible. To prove (ii), let $\mathcal C$ be the poset of $H$-invariant simplices in $\bf X$ with vertices in $K_{i+1}(\sigma)$ and at least one vertex in $K_i(\sigma)$. Its geometric realization is ${\bf D}_{2i+1}\cap \mr {Fix}_H {\bf X}'$. Consider the functor $F\colon \mathcal C \to \mathcal C$ assigning to each object of $\mathcal C$ (i.e., to each such simplex of $\bf X$) its subsimplex spanned by its vertices in $K_i(\sigma)$. By Proposition \ref{p4.1p3}, the geometric realization of $F$ is homotopic to the identity (which is the geometric realization of the identity functor). Moreover, this homotopy is constant on ${\bf D}_{2i}\cap \mr {Fix}_H{\bf X}'$. The image of the geometric realization of $F$ is contained in ${\bf D}_{2i}\cap \mr {Fix}_H{\bf X}'$. Hence ${\bf D}_{2i}\cap \mr {Fix}_H{\bf X}'$ is a deformation retract of ${\bf D}_{2i+1}\cap \mr {Fix}_H{\bf X}',$ as desired. To establish (iii), let $\mathcal C$ be the poset of $H$-invariant simplices of ${\bf X}$ with vertices in $K_{i+1}(\sigma)$ and let $\mathcal C'$ be its flag poset. Let also $F_0\colon \mathcal C'\to \mathcal C$ be the functor assigning to each object of $\mathcal C'$ its minimal element; cf. Proposition \ref{p4.2p3}.
Now we define another functor $F_1\colon \mathcal C'\to \mathcal C$. For any object $c'$ of $\mathcal C'$, which is a chain of objects $c_1<c_2<\ldots<c_k$ of $\mathcal C$, recall that the $c_j$ are some $H$-invariant simplices in $K_{i+1}(\sigma)$. Let $c_j'=e_{K_i(\sigma)}(c_j).$ Then by Lemma \ref{L3.8p3} the intersection $\bigcap_{j=1}^{k}c_j'$ contains at least one vertex in $K_i(\sigma)$. Thus $\bigcap_{j=1}^{k}c_j'$ is an $H$-invariant non-empty simplex and hence it is an object of $\mathcal C$. We define $F_1(c')$ to be this object. In the geometric realization of $\mathcal C$, which is ${\bf D}_{2i+2}\cap \mr {Fix}_H{\bf X}'$, the object $F_1(c')$ corresponds to a vertex of ${\bf D}_{2i+1}\cap \mr {Fix}_H{\bf X}'$. It is obvious that $F_1$ preserves the partial order. Notice that for any object $c'$ of $\mathcal C'$ we have $F_0(c')\subseteq F_1(c')$; hence, by Proposition \ref{p4.1p3}, the geometric realizations of $F_0$ and $F_1$ are homotopic. We have that $F_0$ is homotopic to the identity by Proposition \ref{p4.2p3} and that $F_1$ has image in ${\bf D}_{2i+1}\cap \mr {Fix}_H{\bf X}',$ thus establishing (iii). \end{proof} \section{Final remarks on the case of systolic complexes} \label{final} In this final section, we restrict ourselves to the case of systolic complexes and present some further results in that case. First, using Lemma 3.10 and Theorem 3.11 of Polat \cite{Po} for bridged graphs, we prove a stronger version of the fixed point theorem for systolic complexes. Namely, Polat \cite{Po} established that for any subset $\ov Y$ of vertices of a graph with finite intervals, there exists a minimal isometric subgraph of this graph which contains $\ov Y.$ Moreover, if $\ov Y$ is finite and the graph is bridged, then \cite[Theorem 3.11(i)]{Po} shows that this minimal isometric (and hence bridged) subgraph is also finite. We continue with two lemmata which can be viewed as $G$-invariant versions of these two results of Polat \cite{Po}. \begin{lemma}[Minimal subcomplex] \label{minsubcx} Let a group $G$ act by simplicial automorphisms on a systolic complex $\bf X$. Let $\ov Y$ be a $G$-invariant set of vertices of $\bf X$. Then there exists a minimal $G$-invariant subcomplex $\bf Y$ of $\bf X$ containing $\ov Y$, which is itself a systolic complex. \end{lemma} \begin{proof} Let $\Sigma$ be a chain (with respect to the subcomplex relation) of $G$-invariant subcomplexes of $\bf X$ which contain $\ov Y$ and induce isometric subgraphs of the underlying graph of $\bf X$ (and thus are systolic complexes themselves). Then, as in the proof of \cite[Lemma 3.10]{Po}, we conclude that the subcomplex ${\bf Y}=\bigcap \Sigma$ is a minimal $G$-invariant subcomplex of $\bf X$ containing $\ov Y$ and which is itself a systolic complex. \end{proof} \begin{lemma}[Minimal finite subcomplex] \label{minfin} Let a group $G$ act by simplicial automorphisms on a systolic complex $\bf X$. Let $\ov Y$ be a finite $G$-invariant set of vertices of $\bf X$. Then there exists a minimal (as a simplicial complex) finite $G$-invariant subcomplex $\bf Y$ of $\bf X$, which is itself a systolic complex. \end{lemma} \begin{proof} Let $co(\ov Y)$ be the convex hull of $\ov Y$ in $\bf X$. The full subcomplex ${\bf Z}$ of $\bf X$ spanned by $co(\ov Y)$ is a bounded systolic complex. By Lemma \ref{minsubcx}, there exists a minimal $G$-invariant subcomplex $\bf Y$ of $\bf Z$ containing the set $\ov Y$ and which is itself a systolic complex.
Then, applying the proof of \cite[Theorem 3.11]{Po} to the bounded bridged graphs which are the underlying graphs of the systolic complexes $\bf Y$ and $\bf Z$, it follows that $\bf Y$ is finite. \end{proof} \begin{theorem}[The fixed point theorem] \label{fpt_sc} Let $G$ be a finite group acting by simplicial automorphisms on a systolic complex $\bf X$. Then there exists a simplex $\sigma \in {\bf X}$ which is invariant under the action of $G$. \end{theorem} \begin{proof} Let $\ov Y=Gv=\{ gv:\; g\in G\}$ for some vertex $v\in {\bf X}$. Then $\ov Y$ is a finite $G$-invariant set of vertices of $\bf X$ and thus, by Lemma \ref{minfin}, there exists a minimal finite $G$-invariant subcomplex $\bf Y$ of $\bf X$ which is itself a systolic complex. Then, in the same way as in the proof of Theorem \ref{fpt}, we conclude that there exists a simplex in $\bf Y$ that is $G$-invariant. \end{proof} \begin{remark} We believe that, as in the systolic case, the stronger version of Theorem \ref{fpt} holds also for weakly systolic complexes, i.e., that one can drop the assumption of local finiteness of $\bf X$ in Theorem \ref{fpt}. This would require extending some results of Polat (in particular, Theorems 3.8 and 3.11 of \cite{Po}) to the class of weakly bridged graphs. \end{remark} Zawi\' slak \cite{Z} initiated another approach to the fixed point theorem in the systolic case, based on the following notion of round subcomplexes. A systolic complex $\bf X$ of finite diameter $k$ is {\it round} (cf. \cite{Pr2}) if $\cap\{ B_{k-1}(v): v\in V({\bf X})\}=\emptyset.$ Przytycki \cite{Pr2} established that all round systolic complexes have diameter at most $5$ and used this result to prove that for any finite group $G$ acting by simplicial automorphisms on a systolic complex there exists a subcomplex of diameter at most 5 which is invariant under the action of $G$. Zawi\' slak \cite[Conjecture 3.3.1]{Z} and Przytycki (Remark 8.1 of \cite{Pr2}) conjectured that in fact the diameter of round systolic complexes must be at most 2. Zawi\'{s}lak \cite[Theorem 3.3.1]{Z} showed that if this is true, then $G$ has an invariant simplex, thus paving another way to the proof of Theorem \ref{fpt_sc}. We now show that a positive answer to the question of Zawi\' slak and Przytycki follows directly from an earlier result of Farber \cite{Fa} on diameters and radii of finite bridged graphs. \begin{proposition}[Round systolic complexes]\label{round} Any round systolic complex $\bf X$ has diameter at most 2. \end{proposition} \begin{proof} Let diam$({\bf X})$ and rad$({\bf X})$ denote the diameter and the radius of a systolic complex $\bf X$, i.e., the diameter and radius of its underlying bridged graph $G=G({\bf X})$. Recall that rad$({\bf X})$ is the smallest integer $r$ such that there exists a vertex $c$ of $\bf X$ (called a central vertex) so that the ball $B_r(c)$ of radius $r$ centered at $c$ covers all vertices of $\bf X$, i.e., $B_r(c)=V({\bf X}).$ Farber \cite[Theorem 4]{Fa} proved that if $G$ is a finite bridged graph, then $3\mr{rad}(G)\le 2\mr{diam}(G)+2.$ We will first show that this inequality holds for infinite bridged graphs $G$ of finite diameter containing no infinite simplices. Set $k:=\mr{rad}(G)\le \mr{diam}(G).$ By the definition of $\mr{rad}(G),$ the intersection of all balls of radius $k-1$ of $G$ is empty.
Then, using an argument of Polat (personal communication) presented below, we can find a finite set $Y$ of vertices of $G$ such that the intersection of the balls $B_{k-1}(v),$ $v$ running over all vertices of $Y,$ is still empty. By \cite[Theorem 3.11]{Po}, there exists a finite isometric bridged subgraph $H$ of $G$ containing $Y.$ From the choice of $Y$ we conclude that the radius of $H$ is at least $k$, while the diameter of $H$ is at most the diameter of $G.$ As a result, applying Farber's inequality to $H,$ we obtain $3\mr{rad}(G)\le 3\mr{rad}(H)\le 2\mr{diam}(H)+2\le 2\mr{diam}(G)+2,$ whence $3\mr{rad}(G)\le 2\mr{diam}(G)+2.$ To show the existence of a finite set $Y$ such that $\cap \{ B_{k-1}(v): v\in Y\}=\emptyset,$ we use an argument of Polat (personal communication). According to Theorem 3.9 of \cite{Po3}, any graph without isometric rays (in particular, any bridged graph of finite diameter) can be endowed with a topology, called the {\it geodesic topology}, so that the resulting topological space is compact. On the other hand, it is shown in \cite[Corollary 6.26]{Po4} that any convex set of a bridged graph containing no infinite simplices is closed in the geodesic topology. As a result, the balls of a bridged graph $G$ of finite diameter containing no infinite simplices are compact convex sets. Hence any family of balls with an empty intersection contains a finite subfamily with an empty intersection, showing that such a finite set $Y$ indeed exists. Now suppose that $\bf X$ is a round systolic complex and let $k:=\mr{diam}({\bf X}).$ Since $\bf X$ is round, one can easily deduce that $\mr{rad}({\bf X})=k$: indeed, if $\mr{rad}({\bf X})\le k-1$ and $c$ is a central vertex, then $c$ would belong to the intersection $\cap\{ B_{k-1}(v): v\in V({\bf X})\},$ which is impossible. Applying Farber's inequality to the (bridged) underlying graph of $\bf X$, we conclude that $3k\le 2k+2,$ whence $k\le 2.$ \end{proof} \begin{remark} It would be interesting to extend Proposition \ref{round} and the relationship of \cite{Fa} between radii and diameters to weakly systolic complexes. \end{remark} \medskip Osajda and Przytycki \cite{OsPr} constructed a $Z$-set compactification $\cx={\bf X} \cup \partial {\bf X}$ of a systolic complex $\bf X$. The main result there (\cite[Theorem 6.3]{OsPr}), together with Theorem E from the Introduction of our paper, allowed them to state the following result (without proving it): \begin{claim}[{\cite[Theorem 6.3 and Claim 14.2]{OsPr}}]\label{Z} Let a group $G$ act geometrically by simplicial automorphisms on a systolic complex $\bf X$. Then the compactification $\cx={\bf X}\cup \partial {\bf X}$ of ${\bf X}$ satisfies the following properties: \xms 1. $\cx$ is a Euclidean retract (ER); \xms 2. $\partial {\bf X}$ is a $Z$--set in $\cx$; \xms 3. for every compact set $K\subset {\bf X}$, $(gK)_{g\in G}$ is a null sequence; \xms 4. the action of $G$ on $\bf X$ extends to an action, by homeomorphisms, of $G$ on $\cx$; \xms 5. for every finite subgroup $F$ of $G$, the fixed point set $\mr {Fix}_F \cx$ is contractible; \xms 6. for every finite subgroup $F$ of $G$, the fixed point set $\mr{Fix}_F {\bf X}$ is dense in $\mr {Fix}_F \cx$. \end{claim} This result asserts that $\cx$ is an \emph{$EZ$-structure}, in the sense of Rosenthal \cite{Ro}, for a systolic group $G$; for details, see \cite{OsPr}. The existence of such a structure implies, by \cite{Ro}, the Novikov conjecture for $G$. \section*{Acknowledgements} Work of V.
Chepoi was supported in part by the ANR grant BLAN06-1-138894 (projet OPTICOMB). Work of D. Osajda was supported in part by MNiSW grant N201 012 32/0718 and by the ANR grants Cannon and Th\'eorie G\'eom\'etrique des Groupes. We are grateful to Norbert Polat for his help in the proof of Proposition \ref{round}. \begin{bibdiv} \begin{biblist} \bib{AnFa}{article}{ author={Anstee, R. P.}, author={Farber, M.}, title={On bridged graphs and cop-win graphs}, journal={J. Combin. Theory Ser. B}, volume={44}, date={1988}, number={1}, pages={22--28}, issn={0095-8956}, review={\MR{923263 (89h:05053)}}, } \bib{BaCh_weak}{article}{ author={Bandelt, Hans-J{\"u}rgen}, author={Chepoi, Victor}, title={A Helly theorem in weakly modular space}, journal={Discrete Math.}, volume={160}, date={1996}, number={1-3}, pages={25--39}, issn={0012-365X}, review={\MR{1417558 (97h:52006)}}, } \bib{BaCh_survey}{article}{ author={Bandelt, Hans-J{\"u}rgen}, author={Chepoi, Victor}, title={Metric graph theory and geometry: a survey}, conference={ title={Surveys on discrete and computational geometry}, }, book={ series={Contemp. Math.}, volume={453}, publisher={Amer. Math. Soc.}, place={Providence, RI}, }, date={2008}, pages={49--86}, review={\MR{2405677 (2009h:05068)}}, } \bib{BrHa}{book}{ author={Bridson, Martin R.}, author={Haefliger, Andr{\'e}}, title={Metric spaces of non-positive curvature}, series={Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, volume={319}, publisher={Springer-Verlag}, place={Berlin}, date={1999}, pages={xxii+643}, isbn={3-540-64324-9}, review={\MR{1744486 (2000k:53038)}}, } \bib{Chat}{collection}{ title={Guido's book of conjectures}, series={Monographies de L'Enseignement Math\'ematique [Monographs of L'Enseignement Math\'ematique]}, volume={40}, note={A gift to Guido Mislin on the occasion of his retirement from ETHZ June 2006; Collected by Indira Chatterji}, publisher={L'Enseignement Math\'ematique}, place={Geneva}, date={2008}, pages={189}, isbn={2-940264-07-4}, review={\MR{2499538}}, } \bib{Ch_triangle}{article}{ author={Chepoi, V. D.}, title={Classification of graphs by means of metric triangles}, language={Russian}, journal={Metody Diskret. Analiz.}, number={49}, date={1989}, pages={75--93, 96}, issn={0136-1228}, review={\MR{1114014 (92e:05041)}}, } \bib{Ch_bridged}{article}{ author={Chepoi, Victor}, title={Bridged graphs are cop-win graphs: an algorithmic proof}, journal={J. Combin. Theory Ser. B}, volume={69}, date={1997}, number={1}, pages={97--100}, issn={0095-8956}, review={\MR{1426753 (97g:05150)}}, } \bib{Ch_dpo}{article}{ author={Chepoi, Victor}, title={On distance-preserving and domination elimination orderings}, journal={SIAM J. Discrete Math.}, volume={11}, date={1998}, number={3}, pages={414--436 (electronic)}, issn={0895-4801}, review={\MR{1628110 (99h:05032)}}, } \bib{Ch_CAT}{article}{ author={Chepoi, Victor}, title={Graphs of some ${\rm CAT}(0)$ complexes}, journal={Adv. in Appl. Math.}, volume={24}, date={2000}, number={2}, pages={125--179}, issn={0196-8858}, review={\MR{1748966 (2001a:57004)}}, } \bib{CiYa}{article}{ author={Civan, Yusuf}, author={Yal{\c{c}}{\i}n, Erg{\"u}n}, title={Linear colorings of simplicial complexes and collapsing}, journal={J. Combin. Theory Ser. 
A}, volume={114}, date={2007}, number={7}, pages={1315--1331}, issn={0097-3165}, review={\MR{2353125 (2009a:05067)}}, } \bib{Fa}{article}{ author={Farber, Martin}, title={On diameters and radii of bridged graphs}, journal={Discrete Math.}, volume={73}, date={1989}, number={3}, pages={249--260}, issn={0012-365X}, review={\MR{983023 (90d:05192)}}, } \bib{FaJa}{article}{ author={Farber, Martin}, author={Jamison, Robert E.}, title={On local convexity in graphs}, journal={Discrete Math.}, volume={66}, date={1987}, number={3}, pages={231--247}, issn={0012-365X}, review={\MR{900046 (89e:05167)}}, } \bib{ECHLPT}{book}{ author={Epstein, David B. A.}, author={Cannon, James W.}, author={Holt, Derek F.}, author={Levy, Silvio V. F.}, author={Paterson, Michael S.}, author={Thurston, William P.}, title={Word processing in groups}, publisher={Jones and Bartlett Publishers}, place={Boston, MA}, date={1992}, pages={xii+330}, isbn={0-86720-244-0}, review={\MR{1161694 (93i:20036)}}, } \bib{G}{article}{ author={Gromov, M.}, title={Hyperbolic groups}, conference={ title={Essays in group theory}, }, book={ series={Math. Sci. Res. Inst. Publ.}, volume={8}, publisher={Springer}, place={New York}, }, date={1987}, pages={75--263}, review={\MR{919829 (89e:20070)}}, } \bib{Ha}{article}{ title ={Complexes simpliciaux hyperboliques de grande dimension}, author ={Haglund, Fr\' ed\' eric}, status ={preprint}, journal ={Prepublication Orsay}, volume ={71}, date ={2003}, eprint ={http://www.math.u-psud.fr/~biblio/ppo/2003/fic/ppo_2003_71.pdf} } \bib{HeNe}{book}{ author={Hell, Pavol}, author={Ne{\v{s}}et{\v{r}}il, Jaroslav}, title={Graphs and homomorphisms}, series={Oxford Lecture Series in Mathematics and its Applications}, volume={28}, publisher={Oxford University Press}, place={Oxford}, date={2004}, pages={xii+244}, isbn={0-19-852817-5}, review={\MR{2089014 (2005k:05002)}}, } \bib{JanSwi}{article}{ author={Januszkiewicz, Tadeusz}, author={{\'S}wi{\c{a}}tkowski, Jacek}, title={Filling invariants of systolic complexes and groups}, journal={Geom. Topol.}, volume={11}, date={2007}, pages={727--758}, review={\MR{2302501 (2008d:20079)}}, } \bib{Lu}{article}{ author={L{\"u}ck, Wolfgang}, title={Survey on classifying spaces for families of subgroups}, conference={ title={Infinite groups: geometric, combinatorial and dynamical aspects}, }, book={ series={Progr. Math.}, volume={248}, publisher={Birkh\"auser}, place={Basel}, }, date={2005}, pages={269--322}, } \bib{LySch}{book}{ author={Lyndon, Roger C.}, author={Schupp, Paul E.}, title={Combinatorial group theory}, note={Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 89}, publisher={Springer-Verlag}, place={Berlin}, date={1977}, pages={xiv+339}, isbn={3-540-07642-5}, review={\MR{0577064 (58 \#28182)}}, } \bib{Ma}{article}{ author={Matou{\v{s}}ek, Ji{\v{r}}{\'{\i}}}, title={LC reductions yield isomorphic simplicial complexes}, journal={Contrib. Discrete Math.}, volume={3}, date={2008}, number={2}, pages={37--39}, issn={1715-0868}, review={\MR{2455228 (2009g:55031)}}, } \bib{NowWin}{article}{ author={Nowakowski, Richard}, author={Winkler, Peter}, title={Vertex-to-vertex pursuit in a graph}, journal={Discrete Math.}, volume={43}, date={1983}, number={2-3}, pages={235--239}, issn={0012-365X}, review={\MR{685631 (84d:05138)}}, } \bib{O-ciscg}{article}{ author={Osajda, Damian}, title={Connectedness at infinity of systolic complexes and groups}, journal={Groups Geom. 
Dyn.}, volume={1}, date={2007}, number={2}, pages={183--203}, issn={1661-7207}, review={\MR{2319456 (2008e:20064)}}, } \bib{Osajda}{article}{ title ={A combinatorial non-positive curvature I: $SD_n$ property}, author ={Osajda, Damian}, status ={in preparation}, date={2009} } \bib{O2}{article}{ title ={A construction of hyperbolic Coxeter groups}, author ={Osajda, Damian}, status ={in preparation}, date={2009} } \bib{OsPr}{article}{ author={Osajda, Damian}, author={Przytycki, Piotr}, title={Boundaries of systolic groups}, journal={Geom. Topol.}, volume={13}, date={2009}, number={5}, pages={2807--2880}, } \bib{OS}{article}{ title ={On asymptotically hereditarily aspherical groups}, author ={Osajda, Damian}, author ={\'Swi{\c a}tkowski, Jacek}, status ={in preparation}, date={2009} } \bib{Po3}{article}{ author={Polat, Norbert}, title={Graphs without isometric rays and invariant subgraph properties. I}, journal={J. Graph Theory}, volume={27}, date={1998}, number={2}, pages={99--109}, issn={0364-9024}, review={\MR{1491562 (99a:05086)}}, } \bib{Po1}{article}{ author={Polat, Norbert}, title={On infinite bridged graphs and strongly dismantlable graphs}, journal={Discrete Math.}, volume={211}, date={2000}, number={1-3}, pages={153--166}, issn={0012-365X}, review={\MR{1735348 (2000k:05232)}}, } \bib{Po}{article}{ author={Polat, Norbert}, title={On isometric subgraphs of infinite bridged graphs and geodesic convexity}, note={Algebraic and topological methods in graph theory (Lake Bled, 1999)}, journal={Discrete Math.}, volume={244}, date={2002}, number={1-3}, pages={399--416}, issn={0012-365X}, review={\MR{1844048 (2003c:05070)}}, } \bib{Po4}{article}{ author={Polat, Norbert}, title={Fixed finite subgraph theorems in infinite weakly modular graphs}, journal={Discrete Math.}, volume={285}, date={2004}, number={1-3}, pages={239--256}, issn={0012-365X}, review={\MR{2062847 (2005c:05174)}}, } \bib{Pr2}{article}{ author={Przytycki, Piotr}, title={The fixed point theorem for simplicial nonpositive curvature}, journal={Math. Proc. Cambridge Philos. Soc.}, volume={144}, date={2008}, number={3}, pages={683--695} } \bib{Pr3}{article}{ author={Przytycki, Piotr}, title={$\underline EG$ for systolic groups}, journal={Comment. Math. Helv.}, volume={84}, date={2009}, number={1}, pages={159--169} } \bib{Qui83}{thesis}{ title ={Probl\`emes de jeux, de point fixe, de connectivit\'e et de repr\'esentation sur des graphes, des ensembles ordonn\'es et des hypergraphes}, language={French}, author ={Quilliot, Alain}, organization={Universit\'e de Paris VI}, date ={1983}, type ={Th\`ese de doctorat d'\'etat} } \bib{Rol}{article}{ title ={Poc sets, median algebras and group actions. An extended study of Dunwoody's construction and Sageev's theorem}, author ={Roller, Martin A.}, status ={preprint}, journal ={Univ. of Southampton Preprint Ser.}, date ={1998}, } \bib{RoTaLu}{article}{ author={Rose, Donald J.}, author={Tarjan, R. Endre}, author={Lueker, George S.}, title={Algorithmic aspects of vertex elimination on graphs}, journal={SIAM J. Comput.}, volume={5}, date={1976}, number={2}, pages={266--283}, issn={0097-5397}, review={\MR{0408312 (53 \#12077)}}, } \bib{Ro}{article}{ title ={Split injectivity of the Baum--Connes assembly map}, author ={Rosenthal, David}, status ={preprint}, date ={2003}, eprint ={arXiv: math/0312047} } \bib{SoCh}{article}{ author={Soltan, V. P.}, author={Chepoi, V.
D.}, title={Conditions for invariance of set diameters under $d$-convexification in a graph}, language={Russian, with English summary}, journal={Kibernetika (Kiev)}, date={1983}, number={6}, pages={14--18}, issn={0023-1274}, translation={ journal={Cybernetics}, volume={19}, date={1983}, number={6}, pages={750--756 (1984)}, issn={0011-4235}, }, review={\MR{765117 (86k:05102)}}, } \bib{Wi}{article}{ title ={Sixtolic complexes and their fundamental groups}, author ={Wise, Daniel T.}, status ={unpublished manuscript}, date={2003} } \bib{Z}{thesis}{ title ={O pewnych w\l asno\' sciach $6$-systolicznych kompleks\' ow symplicjalnych}, language={Polish}, author ={Zawi\' slak, Pawe\l}, organization={Wroc{\l}aw University}, date ={2004}, type ={M.Sc. thesis} } \end{biblist} \end{bibdiv} \end{document}
1,941,325,220,622
arxiv
\section{Introduction} Recent results in precision cosmology have created a number of puzzles. Among the unexplained phenomena are several apparent coincidences in which the energy densities of two components are comparable despite their different redshift scalings. Another is the existence of a negative-pressure cosmological fluid that accelerates the expansion of the universe. The model of mass-varying neutrinos (MaVaNs), introduced by Fardon et al. in~\cite{Fardon:2003eh}, suggests that the neutrino and dark energy densities have tracked each other throughout the lifetime of the universe through a new scalar field called the acceleron. The energy density of the scalar potential of this new field contributes to the dark energy. The authors showed that this new field is capable of explaining the present accelerated cosmological expansion.

Since the original MaVaN proposal, several authors have investigated the stability of the \emph{dark sector}, composed of neutrinos and the scalar field, under perturbations to the neutrino density~\cite{Afshordi:2005ym,Takahashi:2006jt}. Both groups subject the model to a hydrodynamic analysis. In this approximate picture, the speed of sound squared in the cosmological fluid, given by~\cite{Mukhanov:1990me}, \begin{equation} c_s^2=\frac{\dot{P}}{\dot{\rho}}=w+\frac{\dot{w}\rho}{\dot{\rho}}=w-\frac{\dot{w}}{3H(1+w)} \end{equation} is positive only if \begin{equation} \frac{\partial w}{\partial z} \geq -\frac{3w(1+w)}{1+z} \end{equation} where $z$ is the redshift. However, for nonrelativistic neutrinos, $\frac{\partial w}{\partial z}$ is a negative quantity, while the right-hand side is positive for $w$ close to $-1$. Since at least one neutrino must be nonrelativistic today, the dark sector appears unstable.

Additionally, Afshordi et al.\ perform a stability analysis using kinetic theory to account for neutrino streaming~\cite{Afshordi:2005ym}. This analysis also shows that perturbations in the neutrino field become unstable when the mass of the neutrino is of order the neutrino temperature. The ratio of mass to temperature at which the instability occurs is a function of the acceleron potential, but is approximately 7 for the potentials considered in this paper. Afshordi et al.\ also examine the result of such a phase transition in a neutrino component, and suggest that the unstable neutrino field may rapidly form nonlinear structures termed \emph{neutrino nuggets}, which redshift as dark matter and cannot drive the cosmic expansion.

In this paper, we will assume that the neutrinos are initially relativistic. As the universe expands and cools, the neutrinos become less relativistic until their mass and temperature are approximately equal. At this point, we assume that they decouple from the scalar field into some structure such as the one described by Afshordi et al. The resulting dark matter does not provide a large contribution to the energy density, and we will not include this contribution in the dark-sector energy. Note that in this paper the dark sector will always refer to the energy density contributions of the stable neutrinos and the scalar field.
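For concreteness, the minimal numerical sketch below evaluates both sides of the hydrodynamic stability criterion above. The profile $w(z)$ used here is a made-up toy with $\partial w/\partial z<0$, as appropriate for nonrelativistic neutrinos; it is illustrative only and is not a solution of the MaVaN equations.
\begin{verbatim}
import numpy as np

# Toy check of the hydrodynamic stability criterion
#   dw/dz >= -3 w (1+w) / (1+z).
# w(z) below is an illustrative toy profile, not a MaVaN solution.

def w(z):
    return -0.9 - 0.05 * z / (1.0 + z)

def dw_dz(z, h=1e-6):
    # simple central finite difference
    return (w(z + h) - w(z - h)) / (2.0 * h)

for z in [0.0, 0.5, 1.0, 2.0]:
    bound = -3.0 * w(z) * (1.0 + w(z)) / (1.0 + z)
    print(f"z={z:3.1f}  dw/dz={dw_dz(z):+.4f}  "
          f"bound={bound:+.4f}  stable={dw_dz(z) >= bound}")
\end{verbatim}
At $z=0$ the toy profile gives $\partial w/\partial z=-0.05$ against a bound of $+0.27$, reproducing the qualitative conclusion that the criterion fails once $w$ is close to $-1$.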
In a model with multiple neutrinos, the dark energy may still be driven by relativistic species even after the heavier components have become unstable. However, the coupling between the neutrinos and the acceleron creates a feedback mechanism. The shift in the acceleron expectation value when a neutrino becomes unstable can be sufficient to drive the mass of another species from relativistic to nonrelativistic, causing it to become unstable as well. This \emph{cascaded instability} can make all neutrinos unstable at about the same time that the heaviest neutrino becomes unstable.

This paper will examine a particular class of models, and will show that see-saw MaVaN models with flat scalar potentials suffer from precisely this problem. The timing between the instability in successive neutrinos depends strongly on the flatness of the scalar potential, but a flat potential is also required to generate the observed dark energy. This result may point toward models that do not suffer from a cascaded instability, and this paper concludes with a simple example.

\section{See-Saw Models} Consider a model with $n$ active neutrinos in which each neutrino is paired with a sterile counterpart, which is coupled to a new scalar field, \begin{equation} -L \supset \sum_{i=1}^n (M^\prime_i \nu_i N_i + \lambda A N_i N_i) \end{equation} where $\nu_i$ are the active neutrinos, $N_i$ are the sterile neutrinos, and $A$ is the acceleron scalar field. $M^\prime_i$ and $\lambda$ describe coupling strengths. If we assume that the normal see-saw limit holds, $\langle \lambda A\rangle \gg M^\prime_i$, the system reduces to an effective Lagrangian describing active neutrinos with a Majorana mass term, \begin{equation} -L_{eff} \supset \sum_{i=1}^n \frac{M_i^2}{A} \nu_i \nu_i \equiv \sum_{i=1}^n m_i \nu_i \nu_i \end{equation} where $M_i^2=M^{\prime 2}_i/\lambda$, and $m_i$ is the effective mass.

The energy density of the dark sector is \begin{equation} \rho_d=\rho_\nu+V(A) \label{eq_rho} \end{equation} where $V$ is the potential of the acceleron field. Assuming that the neutrino distribution function is a stretched thermal distribution, the value of the equation of state for this dark sector is (one derivation is given in~\cite{Peccei:2004sz}) \begin{equation} w=\frac{p_d}{\rho_d}= \frac{T^4 \sum_i \left[ 4 F\left(\frac{m_{i}^2}{T}\right)- J\left(\frac{m_{i}^2}{T}\right)\right]}{3 \left[T^4 \sum_i F\left(\frac{m_{i}^2}{T}\right) +V(A)\right]}-1 \label{eq_w} \end{equation} where $i$ runs over the active neutrino species, and we have defined the momentum integrals \begin{eqnarray} F(x)&=&\frac{1}{\pi^2}\int_0^\infty \frac{y^2\sqrt{y^2+x^2}dy}{e^y+1} \\ J(x)&=&\frac{x^2}{\pi^2}\int_0^\infty \frac{y^2dy}{\sqrt{y^2+x^2}(e^y+1)} \end{eqnarray} which satisfy $J(x)=x\,F^\prime(x)$.

Fardon et al.\ show in~\cite{Fardon:2003eh} that the system remains very close to the minimum of the effective potential and evolves adiabatically. For the model above, this minimization condition becomes \begin{equation} \frac{\partial V}{\partial A}=\sum_{i=1}^n\int_0^{\infty}\frac{dy}{\pi^2}\frac{m_{i} T^2 y^2}{(y^2+\frac{m_{i}^2}{T^2})^\frac{1}{2}}\frac{1}{1+e^y}\left(-\frac{\partial m_{i}}{\partial A}\right) \label{eq_min} \end{equation} where $y=\frac{p_\nu}{T}$. Commonly employed forms for the potential include a small power law ($V=BA^k$, $k\ll 1$), a logarithm ($V=B\log(A/A_0)$) and a quadratic ($V=BA^2$). After starting with the small power law case, it will be easy to generalize to all cases by assuming $\partial V/\partial A=BkA^{k-1}$ with $B$ and $k$ unrestricted.
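Although equation~(\ref{eq_min}) has no closed-form solution in general, it is straightforward to solve numerically. The sketch below does so for a single species and a power-law potential; the parameter values ($M=1$, $B=10^{-4}$, $k=0.1$, in arbitrary units) are illustrative choices, not fits.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def rhs(A, T, M=1.0):
    # Neutrino source term of eq_min for one species with
    # m = M^2/A, so that (-dm/dA) = M^2/A^2.  The thermal factor
    # 1/(1+e^y) is rewritten as e^{-y}/(1+e^{-y}) for stability.
    m = M**2 / A
    integrand = lambda y: (m * T**2 * y**2
                           / np.sqrt(y**2 + (m / T)**2)
                           * np.exp(-y) / (1.0 + np.exp(-y)))
    val, _ = quad(integrand, 0.0, 50.0)
    return val / np.pi**2 * M**2 / A**2

def solve_A(T, M=1.0, B=1e-4, k=0.1):
    # Root of dV/dA - (neutrino term) = 0 for V = B A^k.
    g = lambda A: B * k * A**(k - 1.0) - rhs(A, T, M)
    return brentq(g, 1e-6, 1e8)

for T in [1.0, 0.3, 0.1]:
    A = solve_A(T)
    print(f"T={T:4.2f}  A={A:9.3f}  m/T={1.0/(A*T):7.4f}")
\end{verbatim}
As the temperature falls the solution tracks the minimum adiabatically and $m/T$ grows toward order one, which is exactly the regime where the instability discussed above sets in.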
\section{Approximations} The expectation value of the acceleron at a particular value of $z$ is determined by the minimization equation~(\ref{eq_min}). Unfortunately, this equation in general does not have a closed-form solution. To examine its behavior it is useful to approximate the result in three regimes: relativistic ($m_\nu \ll T$), quasirelativistic ($m_\nu \sim T$), and nonrelativistic ($m_\nu \gg T$). In these approximations, the minimization equation becomes \begin{eqnarray} \label{eq_approxr} R: & \frac{\partial V}{\partial A} \simeq \frac{1}{A^3}\frac{M_i^4 T^2}{\pi^2}I_1\\ \label{eq_approxnr} NR: & \frac{\partial V}{\partial A} \simeq \frac{1}{A^2}\frac{M_i^2 T^3}{\pi^2}I_2\\ \label{eq_approxqr} QR: & \frac{\partial V}{\partial A} \simeq \frac{1}{A^3}\frac{M_i^4 T^2}{\pi^2}I_3 \end{eqnarray} with the unitless $O(1)$ integrals \begin{eqnarray} I_1&=&\int_0^\infty dy \frac{y}{1+e^y} \simeq 0.822 \\ I_2&=&\int_0^\infty dy \frac{y^2}{1+e^y} \simeq 1.803 \\ I_3&=&\int_0^\infty dy \frac{y^2}{(1+e^y)\sqrt{1+y^2}} \simeq 0.670 \end{eqnarray} A comparison of these approximations to the unapproximated numerical result for the two-neutrino model discussed in the next section is shown in figure~\ref{fig_approx1}. This figure shows the contribution of the derivative of the neutrino term to the minimization equation, i.e., the right-hand sides of equations~(\ref{eq_approxr}) and~(\ref{eq_approxnr}). The relativistic and nonrelativistic approximations (dashed curves) are a good match to the numerically calculated value in their respective regions of validity at high and low redshift. \begin{figure}[tbh] \includegraphics[width=8cm]{mav7-k0.1-approx.eps} \caption{The derivative of the neutrino energy density with respect to $A$. The solid line is a numerically determined result, and the dashed lines are approximations in the nonrelativistic (low $z$) and relativistic (high $z$) regimes.} \label{fig_approx1} \end{figure}
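The three constants above are easy to reproduce; a minimal numerical check:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Numerical check of the O(1) constants I1, I2, I3.  The factor
# 1/(1+e^y) is rewritten as e^{-y}/(1+e^{-y}) to avoid overflow.
fd = lambda y: np.exp(-y) / (1.0 + np.exp(-y))

I1, _ = quad(lambda y: y * fd(y), 0.0, np.inf)
I2, _ = quad(lambda y: y**2 * fd(y), 0.0, np.inf)
I3, _ = quad(lambda y: y**2 * fd(y) / np.sqrt(1.0 + y**2), 0.0, np.inf)
print(I1, I2, I3)   # ~0.822, ~1.803, ~0.670
\end{verbatim}
(Analytically, $I_1=\pi^2/12$ and $I_2=\tfrac{3}{2}\zeta(3)$; $I_3$ must be evaluated numerically.)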
\section{Two Active Neutrinos, Small Power Law} As a warm-up, consider the case of two active neutrinos, with $m_1 \gg m_2$, and a power-law potential with $0<k\ll1$. To first order, the derivative of the potential becomes (accurate to 10\% for typical small values of $k$) \begin{equation} \frac{\partial V}{\partial A} \simeq \frac{B k}{A} \end{equation} At large $z$, both neutrinos are relativistic. As the universe cools, the neutrino temperature decreases while the masses increase, and at some value of $z$ neutrino 1 becomes quasirelativistic and unstable. Due to the original mass hierarchy, neutrino 2 is still relativistic at this point.

Now consider the acceleron expectation value and the neutrino masses both before and after the transition of neutrino 1 to a dark matter phase. Before the transition, when neutrino 1 has mass of order the temperature, we have \begin{eqnarray} m_{1,before} \sim T \rightarrow A_{before} \sim \frac{M_1^2}{T} \\ m_{2,before}=\frac{M_2^2}{A_{before}} \sim \frac{M_2^2}{M_1^2}T \end{eqnarray} So our mass hierarchy implies $M_2^2 \ll M_1^2$. The minimization equation~(\ref{eq_min}), with the approximations from~(\ref{eq_approxr}) and~(\ref{eq_approxqr}), yields the value of the acceleron prior to the transition, \begin{equation} A_{before}=\frac{T}{\pi}\frac{1}{\sqrt{Bk}}\sqrt{M_1^4 I_3 + M_2^4 I_1} \end{equation} Since the acceleron suddenly changes value when neutrino 1 becomes unstable and stops sourcing it, we do not know a priori whether neutrino 2 will be R, NR or QR after the transition. By trying each in turn, we quickly find that only the QR assumption is consistent. For instance, the assumption that neutrino 2 stays relativistic yields a value for $m_2$ that is of order the temperature, which violates the assumption.

Looking carefully at the QR case, from the $m_1 \sim T$ condition before the transition, we have the relationship \begin{equation} \frac{\sqrt{B k}}{T}\frac{\pi}{\sqrt{I_3}} \sim T \end{equation} which provides the temperature at the time of transition in terms of the parameters of the system. Solving for the acceleron and the mass of neutrino 2 after the transition, we have \begin{eqnarray} A_{after}=\frac{M_2^2 T}{\pi}\sqrt{\frac{I_1}{Bk}} \\ m_{2,after}=\frac{\sqrt{Bk}}{T}\frac{\pi}{\sqrt{I_1}} \sim T \end{eqnarray} The picture that emerges from this small example is that after the first neutrino becomes unstable and decouples from the acceleron, the acceleron assumes a value that pushes the second neutrino into the quasirelativistic region. Once this occurs, the second neutrino will also soon become unstable. In the end there is nothing left to drive the dark energy.

\section{Generalized Models} The simple two-neutrino, small-power-law model generalizes easily to a larger number of neutrino species and to different potentials. Additional neutrinos do not improve the stability picture, but decreasing the flatness of the potential does. First consider the addition of a third neutrino that is much less massive than neutrino 1. This requires $M_2^2 \ll M_1^2$ and $M_3^2 \ll M_1^2$. After the transition, the value of the acceleron becomes \begin{equation} A_{after}=\frac{1}{\sqrt{Bk}}\frac{T}{\pi}\sqrt{M_2^4 I_1 + M_3^4 I_1} \end{equation} As a result, the masses of the second and third neutrinos become \begin{eqnarray} m_{2,after}=\sqrt{\frac{I_3}{I_1}}\frac{M_2^2}{\sqrt{M_2^4 + M_3^4}}T \\ m_{3,after}=\sqrt{\frac{I_3}{I_1}}\frac{M_3^2}{\sqrt{M_2^4 + M_3^4}}T \end{eqnarray} Neutrino 2 is relativistic only if $M_2 \ll M_3$, but similarly neutrino 3 is relativistic only if $M_3 \ll M_2$. Since we cannot satisfy both conditions, at least one of the remaining neutrinos is quasirelativistic and becomes unstable. Using similar arguments, it is easy to see that for the general case of $n$ neutrinos, all neutrinos become unstable within a short period of each other.

Now consider a more general potential, $V=BA^k$ (for arbitrary $k$), in the two-neutrino case. The mass of neutrino 2 after neutrino 1 becomes unstable is \begin{equation} m_{2,after} \simeq \left(\frac{I_3}{I_1}\right)^{\frac{1}{k+2}} M_1^{\frac{-2k}{k+2}} M_2^{\frac{2k}{k+2}} T \end{equation} In this case, to keep neutrino 2 relativistic, we require \begin{equation} \left(\frac{M_2}{M_1}\right)^{\frac{2k}{k+2}} \ll 1 \label{eq_kcond} \end{equation} Note that this does allow the light neutrino to continue to drive the acceleron, but only if the acceleron potential is steeper (larger $k$) than the small-$k$ power law we examined above. Unfortunately, such a potential predicts a dark energy equation of state that is in conflict with observation. The value of $k$ is connected to the present value of the equation of state by \begin{equation} k=-\frac{1+w_0}{w_0} \end{equation} Requiring a value of $w_0$ close to $-0.9$ yields a value of $k$ that lies in the region where the lighter neutrinos become unstable very quickly. Conversely, values of $k$ that are large enough to keep the lighter neutrinos relativistic predict a value of $w_0$ that is too large.

\section{Numerical Simulation} We verified the above relationships using an unapproximated numerical simulation.
An example of the dependence of the stability of the lighter neutrinos on the steepness of the acceleron potential is shown in figure~\ref{fig_ev1}. The plots show the evolution of the equation of state of the neutrino and dark energy components, as given in~(\ref{eq_w}) and calculated numerically, for a two-neutrino system with two values of $k$. Note that this $w$ does not include contributions from the dark matter terms that result from unstable neutrino components, nor from components that were not included in the dark sector as defined above. As $k$ increases, the lighter neutrino is longer-lived, and can continue to drive the dark energy closer to today ($z=0$). However, this also results in $w$ approaching a disallowed value. \begin{figure}[tbh] \includegraphics[width=8cm]{mav7-k0.1-ev.eps} \includegraphics[width=8cm]{mav7-k0.6-ev.eps} \caption{Simulation of the dark sector (neutrinos and dark energy) equation of state of a two-neutrino system under the power-law potential with two values of $k$. The dashed line is the result if the instability is ignored, while the solid line removes an unstable component from the dark sector. The solid lines end when all neutrinos have become unstable. The contributions of the dark matter formed by unstable neutrino components are not included here.} \label{fig_ev1} \end{figure} Comparing the set of approximations made in simplifying the minimization equation to the numerical results shows that they are accurate to within 15--50\%. Since the order-of-magnitude arguments above already carry multiplicative $O(1)$ corrections, this level of numerical inaccuracy does not affect our conclusions.

\section{Hybrid Model} The condition for the stability of the lighter neutrino species, given in equation~(\ref{eq_kcond}), suggests that we need a model that does not require a flat potential to generate the dark energy. There are several ways to achieve this, and here we discuss a particularly simple extension. This new model employs a second scalar field that is not directly coupled to the neutrinos but provides a large contribution to the dark energy. Since the configuration of the potential is borrowed from hybrid inflation (see~\cite{Linde:1993cn}), the new model is called the ``Hybrid MaVaN'' model.

The hybrid model includes a scalar field, $\sigma$, that acts as the \emph{waterfall field}. The potential is \begin{equation} V = b^2A^2+g^2A^2\sigma^2+(h^2-\alpha\sigma^2)^2 \end{equation} where $b$, $g$, $h$ and $\alpha$ are new coupling constants. The precise details of the potential are unimportant, as long as it supports a false minimum as discussed below. The hybrid potential has two distinct regions of behavior under variation of $\sigma$. If $2\alpha h^2>g^2A^2$, there are two minima at $\pm\sqrt{\frac{2\alpha h^2-g^2A^2}{2\alpha^2}}$. Otherwise, there is a single ``false minimum'' at $\sigma=0$. Forcing the field into this false minimum by requiring \begin{equation} A>\sqrt{2\alpha}h/g \end{equation} the potential becomes \begin{equation} V \rightarrow b^2A^2+h^4 \label{eq_hybridv} \end{equation} which includes a cosmological-constant type term $h^4$. This term dominates the acceleron contribution to the potential if $h\gg \sqrt{2\alpha}b/g$, and from equations~(\ref{eq_rho}) and~(\ref{eq_w}) it may also dominate over the neutrino contribution to the energy density.
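As a quick numerical sanity check, the sketch below (with illustrative, non-physical coupling values) confirms that the minimum in $\sigma$ collapses onto the false minimum at $\sigma=0$ once $A$ exceeds $\sqrt{2\alpha}h/g$:
\begin{verbatim}
import numpy as np

# Sanity check of the false minimum of the hybrid potential
#   V = b^2 A^2 + g^2 A^2 sigma^2 + (h^2 - alpha sigma^2)^2.
# The coupling values below are illustrative only.
b, g, h, alpha = 0.1, 1.0, 1.0, 0.5

def V(A, sigma):
    return (b**2 * A**2 + g**2 * A**2 * sigma**2
            + (h**2 - alpha * sigma**2)**2)

A_crit = np.sqrt(2.0 * alpha) * h / g    # threshold sqrt(2 alpha) h / g
sigma = np.linspace(-2.0, 2.0, 4001)
for A in [0.5 * A_crit, 2.0 * A_crit]:
    s_min = sigma[np.argmin(V(A, sigma))]    # one of the symmetric minima
    print(f"A/A_crit={A/A_crit:.1f}  |sigma_min|={abs(s_min):.3f}")
\end{verbatim}
Below the threshold the minimum sits at $|\sigma|=\sqrt{(2\alpha h^2-g^2A^2)/(2\alpha^2)}\approx 1.22$ for these values; above it, the minimum is at $\sigma=0$ and the potential reduces to equation~(\ref{eq_hybridv}).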
If we also assume the see-saw condition, $\lambda A\gg M^\prime_i$, then the model described in the previous sections can be used without any change other than adopting the form of the potential in equation~(\ref{eq_hybridv}). The quadratic dependence on $A$ in equation~(\ref{eq_hybridv}) means that the stability condition for the lighter neutrinos in equation~(\ref{eq_kcond}) is easily satisfied. The allowed parameter range is quite large, and it is straightforward to find coefficients that produce observationally allowed values of the neutrino masses and the equation of state. The numerical simulation of the evolution of one such model, with a hierarchy of masses and $h=0.06$, is shown in figure~\ref{fig_hybridw1}. The lighter two neutrinos stay relativistic until $z=0$, despite the instability in the massive neutrino. Note that this model both keeps the lighter neutrinos stable and has a dark-sector equation of state of $w=-1$ near $z=0$. \begin{figure}[tbh] \includegraphics[width=8cm]{mavh2-w.eps} \caption{Simulation of the equation of state of a three-neutrino hybrid-potential system. The kink at $z=200$ occurs when the heaviest neutrino becomes unstable. The remaining two neutrinos continue to be relativistic and stable until $z=0$.} \label{fig_hybridw1} \end{figure}

\section{Conclusion and Discussion} Several authors have found that a nonrelativistic MaVaN neutrino field is unstable to inhomogeneous fluctuations. By considering the evolution of the acceleron expectation value when a neutrino field becomes unstable, we have argued above that all neutrino fields in the theory are susceptible to a cascaded instability in which they all become unstable at nearly the same time. This occurs as long as the scalar potential has a nearly flat dependence on the acceleron and the neutrino masses vary inversely with the acceleron. Including a very light neutrino is not sufficient to avoid this problem. Since there are at least three neutrino species, and the atmospheric neutrino deficit requires at least one mass scale of order $0.05$~eV, the instability poses a constraint on all physical MaVaN models.

There are a number of possible resolutions. One is to increase the curvature of the scalar potential. In models with a single scalar field, this makes it difficult for the scalar field potential to generate the dark energy. However, this is easily remedied by including a second scalar field. A simple example is illustrated above in the hybrid MaVaN model. A similar potential arises naturally in supersymmetric models, such as those in~\cite{Fardon:2005wc} and~\cite{Takahashi:2005kw}. Another solution is to modify the theory so that the dark sector never reaches a state where the adiabatic condition applies. Such models do not suffer from the instability described above. One such theory is presented in~\cite{Brookfield:2005td}. Reducing the dependence of the acceleron on the heavy neutrino components also forms a class of possible solutions. If the acceleron is decoupled from each heavy component before it becomes unstable, the acceleron expectation value is not quickly driven to a new scale. The masses of the lighter neutrinos would be largely unaffected, avoiding the instability.

\section{Acknowledgments} I would like to thank Ann Nelson for useful conversations and guidance while working on this project. This work was supported in part by the Department of Energy.
\section{Introduction\label{sec:intro}} In recent years a lot of attention has been paid to the discrete aspects of location theory, and a large body of literature has been published on this topic \citep[see, e.g.,][]{Beasley85,ELP2004,EMPR2009,GLM2010,MNPV09,MNV2010,PRR2013,PT2005}. One of the reasons for this flourishing is the recent development of integer programming and the success of MIP solvers. In spite of that, the reader might notice that the mathematical origins of this theory emerged very close to some classical continuous problems, such as the well-known Fermat-Torricelli or Weber problem and the Simpson problem \citep[see, e.g.,][and the references therein]{DH2002,LNS2015, Nickel2005}. However, the continuous counterparts of location problems have mostly been analyzed and solved using geometric constructions valid in the plane and in three-dimensional space, which are difficult to extend when the dimension grows or when the problems are slightly modified to include side constraints \citep[]{BG21, BPP2017,CCMP1995,CMP1998,FMB2005,NPR2003,PR2011}. These problems, although very interesting, quickly fall within the field of global optimization and become very hard to solve. Even problems that might be considered \textit{easy}, such as the classical Weber problem with Euclidean norms, are most often solved with algorithms (such as the Weiszfeld algorithm, \cite{W1937}) whose convergence is not guaranteed in general \citep[][]{CT1989}. Moreover, most problems studied in continuous location assume that a single facility is to be located, since their multifacility counterparts lead to difficult non-convex problems \citep[]{B19,BEP2014,B1995,CMP1998, MPR19,MTE2012, Puerto2020,R1992,VRE2013}. Apart from their applicability to finding the \textit{optimal} positions of telecommunication services, these problems allow one to extend most classical clustering algorithms, such as $k$-means or $k$-median approaches.

Motivated by the recent advances on discrete multifacility location problems with ordered median objectives \citep[][]{Deleplanque2020,EPR21,FPP2014,LPP17,MPP2020} and the available results on conic optimization \citep{BEP2014,Puerto2020}, we want to address a family of difficult continuous multifacility location problems with ordered median objectives and distances induced by general $\ell_\tau$-norms, $\tau \geq 1$. These problems gather the essential elements of both areas (discrete and continuous) of location analysis, making their solution a challenging task. The continuous multifacility Weber problem has already been studied using branch-and-price methods \citep[][]{K1997,duMerle1999,RZ2007,VM2015}. In addition, in discrete location, these techniques have also been applied to the $p$-median problem \citep[see, e.g.,][]{ASV2007}. However, to the best of our knowledge, a branch-and-price approach for location problems with ordered median objectives has only been developed for the discrete version in \cite{Deleplanque2020}, apart from a multisource hyperplanes application \citep[][]{BJPP21}.
Our goal in this paper is to analyze the \textit{continuous multifacility monotone ordered median problem} (MFMOMP, for short), in which we are given a finite set of demand points, $\mathcal{A}$, and the goal is to find the optimal location of $p$ new facilities such that: (1) each demand point is allocated to a single facility; and (2) the measure of the goodness of the solution is an ordered weighted aggregation of the distances of the demand points to their closest facility \citep[see, e.g.,][]{Nickel2005}. We consider a general framework for the problem, in which the demand points (and the new facilities) lie in $\mathbb{R}^d$, the distances between points and facilities are $\ell_\tau$-norm based distances for $\tau \geq 1$, and the ordered median functions are assumed to be defined by non-decreasing monotone weights. These problems have been analyzed in \cite{BEP2016}, in which the authors provide a Mixed Integer Second Order Cone Optimization (MISOCO) reformulation of the problem able to solve, for the first time, problems of small to medium size (up to 50 demand points) using off-the-shelf solvers.

Our contribution in this paper is to introduce a new set partitioning-like reformulation (with side constraints) for this family of problems that allows us to develop a branch-and-price algorithm for solving it. This approach gives rise to a decomposition of the original problem into a master problem (set partitioning with side constraints) and a pricing problem that consists of a special form of the maximal weighted independent set problem combined with a single-facility location problem. We compare this new strategy with the one obtained by solving the MISOCO formulations using standard solvers. Our results show that it is worthwhile to use the new reformulation, since it allows us to solve larger instances and to reduce the gap when the time limit is reached. Moreover, we also exploit the structure of the branch-and-price approach to develop some new matheuristics for the problem that provide good-quality feasible solutions for fairly large instances with several hundred demand points.

The paper is organized in six sections and one appendix. Section \ref{sec:COMP} formally describes the problem considered in this paper, namely the MFMOMP, and develops MISOCO formulations. Section \ref{c4-ss31} is devoted to presenting the new set partitioning-like formulation and all the details of the branch-and-price algorithm proposed to solve it. There, we show how to obtain initial variables for the restricted master problem, discuss and formulate the pricing problem together with properties useful for handling it, and describe the branching strategies and variable selection rules implemented in our algorithm. The next section, namely Section \ref{sec:heur}, deals with some heuristic algorithms proposed to provide solutions for large-sized instances. In this section, we also describe how to solve the pricing problem heuristically, which gives rise to a matheuristic algorithm consisting of applying the branch-and-price algorithm but solving the pricing problem only heuristically (without certifying optimality). Obviously, since in this case the pricing problem does not certify optimality, we cannot ensure optimality of the solution of the master problem, although we always obtain feasible solutions. In addition, we also present another aggregation heuristic, based on clustering strategies, that provides bounds on the errors of the obtained solutions.
Section \ref{sec:computational} reports the results of an exhaustive computational study on different sets of points. There, we compare the standard formulations with the branch-and-price approach and also with the heuristic algorithms. The paper ends with some conclusions in Section \ref{sec:conclusions}. Finally, Appendix \ref{ap:norms} reports the details of the computational experiments for different norms, showing the usefulness of our approach.

\section{The continuous multifacility monotone ordered median problem}\label{sec:COMP} In this section we describe the problem under study and fix the notation for the rest of the paper. We are given a set of $n$ demand points in $\mathbb{R}^d$, $\mathcal{A} = \{a_1, \ldots, a_n\} \subset \mathbb{R}^{d}$, and $p\in \mathbb{N}$ ($p>0$). Our goal is to find $p$ new facilities located in $\mathbb{R}^d$ that minimize a function of the closest distances from the demand points to the new facilities. We denote the index sets of demand points and facilities by $I=\{1, \ldots, n\}$ and $J=\{1, \ldots, p\}$, respectively. Several elements are involved when finding the \textit{best} $p$ new facilities to provide service to the $n$ demand points. In what follows we describe them: \begin{itemize} \item \textit{Closeness Measure:} Given a demand point $a_i$, $i \in I$, and a server $x \in \mathbb{R}^d$, we use norm-based distances to measure the point-to-facility closeness. Thus, we consider the following measure for the distance between $a_i$ and $x$: $$ \delta_i(x) = \|a_i-x\|, $$ \noindent where $\|\cdot\|$ is a norm in $\mathbb{R}^d$. In particular, we will assume that the norm is either polyhedral or an $\ell_\tau$-norm (with $\tau\geq1$), i.e., $\delta_i(x) = \left(\displaystyle\sum_{l=1}^d |a_{il}-x_l|^\tau\right)^{\frac{1}{\tau}}$. \item \textit{Allocation Rule}: Given a set of $p$ new facilities, $\mathcal{X} = \{x_1, \ldots, x_p\} \subset \mathbb{R}^d$, and a demand point $a_i$, $i \in I$, once all the distances between $a_i$ and $x_j$ ($j\in J$) are calculated, one has to allocate the point to a single facility. As usual in the literature, we assume that each point is allocated to its closest facility, i.e., the closeness measure between $a_i$ and $\mathcal{X}$ is: $$ \delta_i(\mathcal{X}) = \min_{x \in \mathcal{X}} \delta_i(x), $$ \noindent and the facility $x\in \mathcal{X}$ attaining this minimum is the one to which $a_i$ is allocated (in case of ties among facilities, a random assignment is performed). \item \textit{Aggregation of Distances}: Given the set of demand points $\mathcal{A}$, the distances $\{\delta_i(\mathcal{X}): i \in I\} = \{\delta_1,\ldots,\delta_n\}$ must be aggregated. To this end, we use the family of ordered median criteria. Given $\lambda \in \mathbb{R}_+^n$, the $\lambda$-ordered median function is defined as: \begin{equation}\label{omf}\tag{OM} \textsf{OM}_\lambda(\mathcal{A}; \mathcal{X})= \displaystyle\sum_{i\in I} \lambda_i \;\delta_{(i)}, \end{equation} \noindent where $(\delta_{(1)}, \ldots, \delta_{(n)})$ is a permutation of $(\delta_1,\ldots,\delta_n)$ such that $\delta_{(1)}\leq \cdots \leq \delta_{(n)}$. Some particular choices of $\lambda$-weights are shown in Table \ref{tab:lambdas}. Note that most of the classical continuous location problems can be cast under this \textit{ordered median} framework, e.g., the multisource Weber problem, $\lambda=(1, \ldots, 1)$, or the multisource $p$-center problem, $\lambda=(0, \ldots, 0, 1)$.
\begin{table}[h] \centering \begin{adjustbox}{max width=1.0\textwidth} \small \begin{tabular}{clc} \hline Notation&$\lambda$-vector&Name\\ \cmidrule(lr){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-3} W&$ (1,\dots,1)$&$p$-Median\\ C&$(0,\dots,0,1)$&$p$-Center\\ K &$(0,\ldots,0,\overbrace{1,\dots,1}^k)$&$k$-Center\\ D &$ (\alpha,\ldots,\alpha,1)$&Centdian\\ S &$(\alpha,\ldots,\alpha,\overbrace{1,\dots,1}^k)$&$k$-Entdian\\[0.1cm] A &$(0=\frac{0}{n-1},\frac{1}{n-1},\frac{2}{n-1},\dots,\frac{n-2}{n-1},\frac{n-1}{n-1}=1)$&Ascendant\\[0.12cm] \hline \end{tabular} \end{adjustbox} \caption{Examples of ordered median aggregation functions. \label{tab:lambdas}} \end{table} \end{itemize}

Summarizing all the above considerations, given a set of $n$ demand points in $\mathbb{R}^d$, $\mathcal{A} = \{a_1, \ldots, a_n\} \subset \mathbb{R}^{d}$, and $\lambda \in \mathbb{R}_+^n$ (with $0 \leq \lambda_1\leq \cdots \leq \lambda_n$), the continuous multifacility monotone ordered median problem ($\mathbf{MFMOMP}_\lambda$) is the following: \begin{equation*} \label{mf:0}\tag{$\mathbf{MFMOMP}_\lambda$} \displaystyle\min_{\mathcal{X}=\{x_1, \ldots, x_p\} \subset \mathbb{R}^d} \textsf{OM}_\lambda(\mathcal{A};\mathcal{X}). \end{equation*} Observe that the problem above is $\mathcal{NP}$-hard, since the multisource $p$-median problem is just a particular instance of \eqref{mf:0} with $\lambda=(1, \ldots, 1)$ \citep[see][]{sherali1988np}. In the following result we provide a suitable Mixed Integer Second Order Cone Optimization (MISOCO) formulation for the problem. \begin{thm}\label{thm:1} Let $\|\cdot\|$ be an $\ell_{\tau}$-norm in $\mathbb{R}^d$, where $\tau = \frac{r}{s}$ with $r, s\in\mathbb{N}\setminus\{0\}$, $r>s$ and $\gcd(r,s)=1$, or a polyhedral norm. Then, \eqref{mf:0} can be formulated as a MISOCO problem. \end{thm} \begin{proof} First, assume that $\{\delta_i(\mathcal{X}): i \in I\} = \{\delta_1,\ldots,\delta_n\}$ are given. Then, sorting the elements and multiplying them by the $\lambda$-weights can be equivalently written as the following assignment problem \citep[see][]{BEP2014,BEP2016}, whose dual problem (right side) allows one to compute, alternatively, the value of $\textsf{OM}_\lambda(\delta_1,\ldots,\delta_n)$: \begin{align*} \displaystyle\sum_{k\in I} \lambda_k \delta_{(k)} = \, \max \quad&\displaystyle\sum_{i, k \in I} \lambda_k \delta_i \sigma_{ik} \qquad \qquad = & \min \quad & \displaystyle\sum_{i \in I} u_i + \displaystyle\sum_{k \in I} v_k\\ \mbox{s.t.}\quad&\displaystyle\sum_{k\in I} \sigma_{ik} = 1,\, \forall i \in I,& \mbox{s.t.}\quad & u_i + v_k \geq \lambda_k \delta_i,\; \forall k \in I, \\ &\displaystyle\sum_{i\in I} \sigma_{ik} = 1,\; \forall k \in I,& &u_i, v_k \in\mathbb{R} ,\; \forall i, k \in I.\\ &\sigma_{ik} \in [0,1],\; \forall i, k \in I.& & \end{align*} Now, we can embed the above representation of the ordered median aggregation of $\delta_1, \ldots, \delta_n$ into \eqref{mf:0}. On the other hand, we have to represent the allocation rule (closest distances). This family of constraints is given by $$ \delta_{i} = \min_{j\in J} \|a_i - x_j\|,\; \forall i \in I. $$ In order to represent it, we use the following set of decision variables: $$ w_{ij} = \left\{\begin{array}{cl} 1 & \mbox{if $a_i$ is allocated to $x_j$,}\\ 0 & \mbox{otherwise,} \end{array}\right. \;\; z_{ij} = \|a_i -x_j\|, \quad r_i = \min_{j\in J} \|a_i-x_j\|,\; \forall i \in I, j \in J. 
$$ Then, a \textit{compact} formulation for \eqref{mf:0} is: \begin{subequations} \makeatletter \def\@currentlabel{${\rm C}$} \makeatother \label{eq:C} \renewcommand{\theequation}{${\rm C}_{\arabic{equation}}$} \begin{align} \mathbf{(C)}\min \quad &\displaystyle\sum_{i\in I} u_i + \displaystyle\sum_{k \in I} v_k\nonumber\\ \mbox{s.t.} \quad & u_i + v_k \geq \lambda_k r_{i},\; \forall i, k\in I,\label{ctr:sortf}\\ &z_{ij} \geq \|a_i - x_j\|,\; \forall i\in I, j \in J, \label{ctr:normf} \\ & r_i \geq z_{ij} - M (1-w_{ij}),\; \forall i\in I, j \in J,\label{ctr:alloc1f} \\ & \displaystyle\sum_{j \in J} w_{ij} = 1,\; \forall i\in I,\label{ctr:alloc2f}\\ & x_j \in \mathbb{R}^d,\; \forall j \in J, \nonumber\\ & w_{ij} \in \{0,1\},\; \forall i\in I, j \in J,\nonumber\\ & z_{ij} \geq 0, \; \forall i\in I,j \in J,\nonumber\\ & r_i \geq 0,\; \forall i\in I,\nonumber \end{align} \end{subequations} \noindent where \eqref{ctr:alloc1f} allows computing the distance between each point and its closest facility, and \eqref{ctr:alloc2f} ensures single allocation of points to facilities. Here $M$ is a big enough constant, $M > \max_{i,k \in I} \|a_i - a_k\|$. Finally, in case $\|\cdot\|$ is the $\ell_{\frac{r}{s}}$-norm, constraint \eqref{ctr:normf}, as already proved in \cite{BEP2014}, can be rewritten as: \begin{align*} &t_{ijl} + a_{il} - x_{jl}\geq 0,\; \forall i\in I, j\in J, l=1, \ldots, d\\ &t_{ijl} - a_{il} + x_{jl}\geq 0,\; \forall i\in I, j\in J, l=1, \ldots, d,\\ &t_{ijl}^{r} \leq \xi^{s}_{ijl} z^{r-s}_{ij},\; \forall i\in I, j\in J, l=1, \ldots, d,\\ &\displaystyle\sum_{l=1}^d \xi_{ijl} \leq z_{ij},\; \forall i\in I, j\in J, \\ &\xi_{ijl} \geq 0,\; \forall i\in I, j\in J, l=1, \ldots, d. \end{align*} If $\|\cdot\|$ is a polyhedral norm, then \eqref{ctr:normf} is equivalent to: $$ \displaystyle\sum_{l=1}^d e_{l} (a_{il} -x_{jl}) \leq z_{ij}, \quad \forall i \in I, j\in J, e \in {\rm Ext}_{\|\cdot\|^o}, $$ \noindent where ${\rm Ext}_{\|\cdot\|^o} = \{e^o_1, \ldots, e^o_g\}$ is the set of extreme points of the unit ball of the dual norm of $\| \cdot\|$ (the polar set of its unit ball) \citep[see e.g.,][]{Nickel2005,WW85}. The final compact formulation depends on the norm, but in any case we obtain a MISOCO reformulation of \eqref{mf:0}. \end{proof} Note that \eqref{mf:0} is an extension of the single-facility ordered median location problem \citep[see, e.g.,][]{BEP2014}, where apart from finding the location of $p$ new facilities, the allocation patterns between demand points and facilities are also determined.
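For validation purposes it is convenient to have a brute-force reference evaluation of the objective at hand. The following sketch (Python, shown for the $\ell_1$-norm on random data; the instance is illustrative only) computes $\textsf{OM}_\lambda(\mathcal{A};\mathcal{X})$ directly from the definition and can be used to check any of the formulations in this paper on small instances:
\begin{verbatim}
import numpy as np

def ordered_median(A, X, lam, p_norm=1):
    """Brute-force OM_lambda(A; X): closest-facility distances,
    sorted non-decreasingly and aggregated with the lambda weights."""
    D = np.linalg.norm(A[:, None, :] - X[None, :, :],
                       ord=p_norm, axis=2)   # all point-facility distances
    delta = D.min(axis=1)                    # distance to closest facility
    return float(np.dot(lam, np.sort(delta)))

rng = np.random.default_rng(0)
A, X = rng.random((10, 2)), rng.random((3, 2))
lam_median = np.ones(10)       # multisource Weber / p-median weights
lam_center = np.eye(10)[-1]    # p-center weights (0,...,0,1)
print(ordered_median(A, X, lam_median), ordered_median(A, X, lam_center))
\end{verbatim}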
\end{enumerate} \end{prop} \section{A set partitioning-like formulation\label{c4-ss31}} The compact formulation shown in the previous section is affected by the size of $p$ and $d$, and it exhibits the same limitations as many other compact formulations for continuous location models even without ordering constraints. For this reason in the following we propose an alternative set partitioning-like formulation \citep[][]{duMerle1999,duMerle2002} that tries to improve the performance of solving Problem \eqref{mf:0}. Let $S\subset I$ be a subset of demand points that are assigned to the same facility. Let $R=(S,x)$ be the pair composed by a subset $S \subset I$ and facility $x\in \mathbb{R}^d$. We denote by $\delta_i^R$ the contribution of demand point $i \in S$ in the subset, and let $\delta^R=(\delta_i^R)_{(i\in S)}$ be the vector of distances induced by demand points in $S$ with respect to the facility $x$. Finally, for each pair $R=(S,x)$ we define the variable $$ y_R=\left\{ \begin{array}{ll} 1 & \mbox{if subset $S$ is selected and its associated facility is $x$,}\\ 0 & \mbox{otherwise.} \end{array} \right. $$ We denote by $\mathcal{R} = \{(S,x): S \subset I \text{ and } x \in \mathbb{R}^d\}$ The set partitioning-like formulation is \begin{subequations} \makeatletter \def\@currentlabel{${\rm MP}$} \makeatother \label{eq:MP} \renewcommand{\theequation}{${\rm MP}_{\arabic{equation}}$} \begin{align} \mathbf{(MP)}\min \quad &\displaystyle\sum_{i\in I}u_i+\sum_{k \in I}v_k\label{MP:of}\\ \mbox{s.t.} \quad&\displaystyle \sum_{R=(S,x): i\in S} y_R=1,\; \forall\,i\in I,\label{MP:eq1}\\ &\displaystyle \sum_{R\in \mathcal{R}}y_R= p,\label{MP:eq2}\\ &\displaystyle u_i+v_k\ge\lambda_k\displaystyle\sum_{R=(S,x): i\in S} \delta_i^Ry_R,\; \forall\,i,k \in I,\label{MP:eq3}\\ &y_R\in\{0,1\},\; \forall\,R\in \mathcal{R}.\label{MP:vars}\\ & u_i, v_k \in\mathbb{R} ,\; \forall i, k \in I. \end{align} \end{subequations} The objective function \eqref{MP:of} and constraints \eqref{MP:eq3} give the correct ordered median function of the distances from the demand points to the closest facility (see Section \ref{sec:COMP}). Constraints \eqref{MP:eq1} ensure that all demand points appear in exactly one set $S$ in each feasible solution. Exactly $p$ facilities are open due to constraint \eqref{MP:eq2}. Finally, \eqref{MP:vars} define the variables as binary. The reader might notice that this formulation has an exponential number of variables, and therefore in the following we describe the necessary elements to address its solution by means of a branch-and-price scheme. The crucial steps in the implementation of the branch-and-price approach are the following: \begin{enumerate} \item {\it Initial Pool of Variables:} Generation of initial feasible solutions induced by a set of initial subsets of demand points (and their costs). \item {\it Pricing Problem:} In set partitioning problems, instead of solving initially the problem with the whole set of variables, the variables have to be incorporated \textit{on-the-fly} by solving adequate pricing subproblems derived from previously computed solutions until the optimality of the solution is guaranteed. The pricing problem is derived from the relaxed version of the master problem and using the strong duality properties of the induced Linear Programming Problem. \item {\it Branching:} The rule that creates new nodes of the branch-and-bound tree when a fractional solution is found at a node of the search tree. We have adapted the Ryan and Foster branching rule to our problem. 
\item {\it Stabilization}: the convergence of column generation approaches can sometimes be erratic, since the values of the dual variables in the first iterations might oscillate, leading to variables of the master problem that will never appear in the optimal solution. Stabilization tries to avoid that behaviour. \end{enumerate} In what follows we describe how each of the items above is implemented in our proposal.

\subsection{Initial variables} In the solution process of the set partitioning-like formulation using a branch-and-price approach, it is convenient to generate an initial pool of variables before starting to solve the problem. An adequate selection of these initial variables might help to reduce the CPU time required to solve the problem. We apply an iterative strategy to generate this initial pool of $y$-variables. In the first iteration, we randomly generate $p$ positions for the facilities. The demand points are then allocated to their closest facilities, and at most $p$ subsets of demand points are generated. We incorporate the variables associated with these subsets into the master problem \eqref{eq:MP}. In the subsequent iterations, instead of generating $p$ new facilities, we keep those with the most associated demand points and randomly generate the remainder. After a fixed number of iterations, we have columns to define the restricted master problem and also an upper bound for our problem. Since the optimal positions of the facilities belong to a bounded set contained in the rectangular hull of the demand points, the random facilities are generated in the minimum hyperrectangle containing $\mathcal{A}$.

\subsection{The pricing problem\label{c4-sss31}} To apply the column generation procedure we define the restricted relaxed master problem of \eqref{eq:MP}, denoted \eqref{RRMP} in what follows. \begin{align*}\label{RRMP}\tag{RRMP} \min \quad&\displaystyle\sum_{i \in I}u_i+\sum_{k \in I} v_k&\textbf{Dual Multipliers}\\ \mbox{s.t.} \quad & \displaystyle \sum_{R=(S,x): i\in S}y_R \geq 1,\; \forall\,i\in I,&\alpha_i\ge 0\\ &\displaystyle -\sum_{R\in\mathcal{R}_0}y_R \ge-p,&\gamma\geq 0\\ &\displaystyle u_i+v_k-\lambda_k\displaystyle\sum_{R=(S,x): i\in S}\delta_i^Ry_R \geq 0,\; \forall\,i,k\in I, & \epsilon_{ik}\geq 0\\ &y_R \geq 0, \; \forall\,R\in \mathcal{R}_0,&\\ &u_i, v_k \in \mathbb{R},\; \forall i, k \in I, & \end{align*} \noindent where $\mathcal{R}_0\subset \mathcal{R}$ represents the initial pool of columns used to initialize the set partitioning-like formulation \eqref{eq:MP}. Constraints \eqref{MP:eq1} and \eqref{MP:eq2} are slightly modified from equations to inequalities in order to get nonnegative dual multipliers. This transformation keeps the equivalence with the original formulation, since the coefficients affecting the $y$-variables in constraint \eqref{MP:eq3} are nonnegative. The values of the distances are unknown beforehand because the facilities can be located anywhere in the continuous space. Hence, determining them requires solving continuous optimization problems.
By strong duality, the objective value of the continuous relaxation \eqref{RRMP} can be obtained as: \begin{align*}\tag{Dual RRMP} \max \quad &\displaystyle\sum_{i\in I} \alpha_i-p\gamma\\ \mbox{s.t.} \quad &\displaystyle\sum_{i \in I}\epsilon_{ik} =1,\; \forall\,k\in I,\\ &\displaystyle\sum_{k \in I}\epsilon_{ik}=1,\; \forall\,i\in I,\\ &\displaystyle\sum_{i\in S}\alpha_i-\gamma-\sum_{i\in S}\sum_{k\in I} \delta_i^R \lambda_k \epsilon_{ik} \leq 0,\; \forall\,R=(S,x)\in \mathcal{R}_0,\\ &\alpha_i,\gamma, \epsilon_{ik} \geq 0,\; \forall\,i,k\in I. \end{align*} Hence, for any variable $y_R$ in the master problem its reduced cost is $$c_R-z_R=\displaystyle-\sum_{i\in S}\alpha^*_i+\gamma^*+\sum_{i\in S}\sum_{k\in I} \delta_i^R \lambda_k\epsilon^*_{ik},$$ \noindent where $(\alpha^*,\gamma^*,\epsilon^*)$ is the dual optimal solution of the current \eqref{RRMP}. To certify optimality of the relaxed problem one has to check, implicitly, that the reduced costs of all the variables not currently included in \eqref{RRMP} are nonnegative. Otherwise, new variables must be added to the pool of columns. This can be done by solving the so-called pricing problem. The pricing problem consists of finding the minimum reduced cost among the variables not yet included. That is, we have to find the set $S\subset I$ and the position of the facility $x$ (its coordinates) which minimize the reduced cost. For a given set of dual multipliers, $(\alpha^*, \gamma^*, \epsilon^*) \geq 0$, the problem to be solved is \begin{align*} \displaystyle\min_{S,x} \quad &\displaystyle-\sum_{i\in S}\alpha^*_i+\gamma^*+\sum_{i\in S}\sum_{k\in I}\delta_i^{S}\lambda_k\epsilon^*_{ik}\\ \mbox{s.t.}\quad &\delta _i^S\ge\| x-a_i\|,\; \forall i\in S. \end{align*} This problem can be reformulated by means of a mixed integer program. We define binary variables $w_i$, $i\in I$, taking value $1$ if demand point $i$ belongs to $S$ and $0$ otherwise. We also define variables $r_{i}$, $i\in I$, representing the distance from demand point $i$ to the facility $x$ when $w_i=1$, and zero in case $w_i=0$. Finally, $z_i$, $i\in I$, are the distances from demand point $i$ to the facility $x$ in any case. \begin{align} \min \quad & -\displaystyle\sum_{i\in I} \alpha_i^*w_{i}+\gamma^*+\sum_{i\in I} c_i r_{i} \label{pricing:of}\\ \mbox{s.t.} \quad &z_i\ge \|x-a_{i}\|,\; \forall i\in I,\label{pricing:eq1}\\ &r_{i}+M(1-w_{i})\ge z_i,\; \forall i\in I,\label{pricing:eq2}\\ &w_{i}\in\{0,1\},\; \forall i\in I,\label{pricing:vars1}\\ &z_i,r_{i}\ge 0,\; \forall i\in I,\label{pricing:vars2} \end{align} \noindent where $M$ can be taken as the maximum distance between two demand points and $c_i=\sum_{k\in I} \lambda_k\epsilon^*_{ik}.$ The objective function \eqref{pricing:of} is the minimum reduced cost associated with the optimal solution of the pricing problem. Constraints \eqref{pricing:eq1} define the distances; as in Section \ref{sec:COMP}, this family of constraints is written \textit{ad hoc} for each given norm. Constraints \eqref{pricing:eq2} set the $r$-variables correctly. Finally, constraints \eqref{pricing:vars1} and \eqref{pricing:vars2} give the domains of the variables. As shown in the proof of Theorem \ref{thm:1}, the above problem can be formulated as a MISOCO problem in the case of polyhedral or $\ell_\tau$-norms.
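Observe that, for the $\ell_1$-norm and a fixed subset $S$, the inner location problem $\min_x \sum_{i\in S} c_i\|x-a_i\|_1$ separates by coordinates into one-dimensional weighted-median problems, a classical fact for the $\ell_1$ single-facility median. The following sketch (not part of the exact pricer above) exploits this to evaluate candidate columns cheaply; the duals $\alpha^*$, $\gamma^*$ and the coefficients $c_i=\sum_k \lambda_k\epsilon^*_{ik}$ are assumed to be available from the current \eqref{RRMP}, and the instance shown is random, illustrative data:
\begin{verbatim}
import numpy as np

def weighted_median(values, weights):
    # Smallest v whose cumulative weight reaches half the total:
    # optimal for min sum_i weights_i * |v - values_i|.
    o = np.argsort(values)
    cum = np.cumsum(weights[o])
    return values[o][np.searchsorted(cum, 0.5 * cum[-1])]

def price_column(A, S, alpha, gamma, c):
    # Best l1 facility for the fixed subset S, coordinate by
    # coordinate, and the reduced cost of the column (S, x).
    x = np.array([weighted_median(A[S, l], c[S])
                  for l in range(A.shape[1])])
    dist = np.abs(A[S] - x).sum(axis=1)          # l1 distances to x
    return x, -alpha[S].sum() + gamma + (c[S] * dist).sum()

rng = np.random.default_rng(1)
A, alpha, c = rng.random((30, 2)), rng.random(30), 0.1 * rng.random(30)
print(price_column(A, np.arange(10), alpha, gamma=0.5, c=c))
\end{verbatim}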
The so-called \textit{Farkas pricing} should be defined when feasibility is not ensured from the beginning. We have addressed this issue with our initial pool of variables. Furthermore, the feasibility of the master problem could be lost when branching during the branch-and-price process. In our case, we claim that Farkas pricing is not necessary, because the feasibility of \eqref{eq:MP} is ensured by adding an artificial variable $y_{(I,x_0)}$ that satisfies \eqref{MP:eq1}, is never fixed to zero during the search, and whose distances $\delta_i^R$, $i\in I$, are big enough.

When the pricing problem is solved to optimality, one can obtain a theoretical lower bound even if more variables must be added. The following remark explains how this result applies to our particular problem. \begin{rmk} When \eqref{eq:MP}, embedded in a branch-and-price algorithm, uses binary variables and the number of them that can take value one is bounded above, it is possible to determine a theoretical lower bound \citep[see][]{DL2005}. For our problem, the number of $y$-variables taking value 1 in any feasible solution is $p$; therefore, the bound is $p$ due to \eqref{MP:eq2}. Let $z_{RRMP}$ be the current objective value of \eqref{RRMP} and $\overline c_R$ the reduced cost of the variable defined by $R=(S,x)$. Then, the lower bound is \begin{equation}LB=z_{RRMP}+p\min_{S,x}\overline c_R.\label{eq:LB}\end{equation} It is important to remark that it can be computed at every node of the branch-and-bound tree. This fact is particularly useful at the root node to accelerate the optimality certification, or for big instances where the linear relaxation is not solved within the time limit. \end{rmk}

To add a variable to the master problem it suffices to find one variable $y_R$ with negative reduced cost. This search can be performed by solving the pricing problem exactly, although that might entail a high computational load. Alternatively, one could solve the pricing problem heuristically, hoping to find variables with negative reduced costs. In what follows, this approach will be called the heuristic pricer. The key observation is to check whether a candidate facility is promising. Given the coordinates of a facility, $x$, we construct a set $S$, compatible with the conditions of the node of the branch-and-bound tree, by allocating demand points in $S$ to $x$ whenever the reduced cost $ c_R-z_R=\gamma^*+\sum_{i\in S}e_i < 0$, where $e_i=-\alpha^*_i+\delta_i(x)\sum_{k\in I} \lambda_k\epsilon^*_{ik}$. In that case, the variable $y_{(S,x)}$ is a candidate to be added to the pool of columns. Here, we detail how the heuristic pricer is implemented at the root node; for deeper nodes in the branch-and-bound tree we refer the reader to Section \ref{section:branching}. At the root node, there is a very easy procedure to solve this problem: we simply select the points with negative contribution, i.e., we define $S=\{i\in I: e_i<0\}$ and, in case $c_R-z_R<0$, the variable $y_{(S,x)}$ can be added to the problem. Additionally, the region where the facility is generated can be significantly reduced, in particular to the hyperrectangle defined by the demand points with negative $e_i$. In both the exact and the heuristic pricer we use multiple pricing, i.e., several columns are added to the pool at each iteration whenever possible. In the exact pricer we take advantage of the fact that the solver stores different solutions besides the optimal one, so it might provide more than one column with negative reduced cost. In the heuristic pricer we add the best variables in terms of reduced cost as long as their associated reduced costs are negative.
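At the root node this procedure reduces to a few lines. The sketch below samples candidate facilities in the bounding box of all demand points (the reduced hyperrectangle mentioned above would be a further refinement) and keeps every column with negative reduced cost; the dual values $\alpha^*$, $\gamma^*$ and the coefficients $c_i$ are assumed given:
\begin{verbatim}
import numpy as np

def root_heuristic_pricer(A, alpha, gamma, c, n_trials=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = A.min(axis=0), A.max(axis=0)   # bounding box of A
    columns = []
    for _ in range(n_trials):
        x = rng.uniform(lo, hi)
        e = -alpha + c * np.abs(A - x).sum(axis=1)   # e_i for this x
        S = np.flatnonzero(e < 0.0)                  # S = {i : e_i < 0}
        rc = gamma + e[S].sum()                      # reduced cost
        if S.size > 0 and rc < 0.0:
            columns.append((S, x, rc))
    columns.sort(key=lambda col: col[2])    # most negative first
    return columns
\end{verbatim}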
\subsection{Branching}\label{section:branching} When the relaxed \eqref{eq:MP} is solved but the solution is not integer, the next step is to define an adequate branching rule to explore the search tree. In this problem we can apply an adaptation of the Ryan and Foster branching rule \citep[][]{RF1981}. Given a solution with fractional $y$-variables in a node, it might occur that \begin{equation} 0<\sum_{R: i_1,i_2\in S}y_R<1 \text{ for some }i_1,i_2\in I,\; i_1< i_2. \label{eq:fractional} \end{equation} Provided that this happens, in order to find an integer solution, we create the following branches from the current node: \begin{itemize} \item {\bf Left branch: }$i_1$ and $i_2$ must be served by different facilities. $$\sum_{R: i_1,i_2\in S}y_R=0.$$ \item {\bf Right branch: }$i_1$ and $i_2$ must be served by the same facility. $$\sum_{R: i_1,i_2\in S}y_R=1.$$ \end{itemize} \begin{rmk} The above information is easily translated to the pricing problem by adding one constraint for each of the branches: 1) $w_{i_1}+w_{i_2}\le1$ for the left branch; and 2) $w_{i_1}=w_{i_2}$ for the right branch. \end{rmk}

It might also happen that, even though some $y_R$ is fractional, $\sum_{R: i_1,i_2\in S}y_R$ is integer for all $ i_1,i_2\in I$, $i_1< i_2$. The following result allows us to use this branching rule nevertheless, and provides a procedure to recover a feasible solution encoded in the current solution of the node. \begin{thm} \label{th:y(s,x)} If $\sum_{S\ni i_1,i_2}y_{(S,x)}\in \{0,1\} $ for all $ i_1,i_2\in I$ such that $i_1< i_2$, then an integer feasible solution of \eqref{eq:MP} can be determined. \end{thm} \begin{proof} Let $\mathcal{X}_S$ be the set of all servers which are part of a variable $y_{(S,x)}$ belonging to the pool of columns; we define $\mathcal{X}_S$ for all used subsets $S$. First, it is proven in \cite{BJNSV1998} that, under the hypothesis of the theorem, the following expression holds for any set $S$ in a partition: $$\sum_{x\in \mathcal{X}_S}y_{(S,x)}\in \{0,1\}.$$ If $\sum_{x\in \mathcal{X}_S}y_{(S,x)}=0$, then $y_{(S,x)}=0$ for all $x\in \mathcal{X}_S$ because of the nonnegativity of the variables. However, if \begin{equation}\label{c4:form-c1} \sum_{x\in \mathcal{X}_S}y_{(S,x)}=1 \end{equation} then $y_{(S,x)}$ could be fractional for some $x\in \mathcal{X}_S$. Observe that, currently, the distance associated with demand point $i\in S$ in the problem is $$\delta_i^S=\sum_{x\in \mathcal{X}_S}y_{(S,x)}\delta_i(x).$$ Thus, from the above we construct a new facility $x^*$ for $S$, \begin{equation}\label{c4:form-c2} x_{\ell}^*=\sum_{x\in \mathcal{X}_S}y_{(S,x)}x_{\ell},\; \forall \ell=1,\dots,d, \end{equation} so that $ \delta_i(x^*)\le \delta_i^S, \; \forall i\in S$. Indeed, by the triangle inequality and by \eqref{c4:form-c1}, $$\delta_i(x^*)=\| x^*- a_i\| =\| \sum_{x\in \mathcal{X}_S}y_{(S,x)}(x-a_i)\|\le \sum_{x\in \mathcal{X}_S}y_{(S,x)}\|x-a_i\| =\delta_i^S,$$ for all $i\in S$, the inequality being strict unless the vectors $x-a_i$, $x\in \mathcal{X}_S$, are collinear. Finally, we have constructed the variable $y_{(S,x^*)}=1$ as part of a feasible integer solution of the master problem \eqref{eq:MP}. Therefore, either the solution is already binary, or the fractional solution attains the same objective value as an alternative optimal solution which is binary.
\end{proof} Among all the possible choices of pairs $i_1,i_2$ verifying \eqref{eq:fractional}, we propose to select the one provided by the following rule: \begin{align} \displaystyle \arg\max_{i_1,i_2:0<\sum_{R: i_1,i_2\in S}y_R<1}\left\{\theta\min\left\{\sum_{R: i_1,i_2\in S}y_R,1-\sum_{R: i_1,i_2\in S}y_R\right\}+(1-\theta)\frac{1}{\|a_{i_1}-a_{i_2}\|}\right\}.\label{eq:thetarule} \tag{$\theta$-rule}\end{align} This rule favors the most fractional pairs, but also pays attention to pairs of demand points that are close to each other in the solution, assuming they will be part of the same variable taking value one in the optimal solution. It has been successfully applied to a related discrete ordered median problem \citep{Deleplanque2020}. We can choose $\theta\in[0,1]$: for $\theta = 0$, the closest demand points among the pairs with fractional sum will be selected; for $\theta = 1$, the most fractional branching will be applied.

The above branching rule affects the heuristic pricer procedure, since not all $S\subset I$ are compatible with the branching conditions leading to a node. When branching decisions have to be respected, the pricing problem becomes more complex. Therefore, we developed a greedy algorithm which generates heuristic variables respecting the branching decisions at the current node. This algorithm uses the information from the branching rule to build the new variable to add. The candidate set $S$ is built by means of a greedy algorithm similar to the one presented in \cite{Sakai2003}. First, we construct a graph of incompatibilities $G=(V,E)$, with $V$ and $E$ defined as follows: for each maximal subset of demand points $i_1< i_2< \dots<i_{m}$ that, according to the branching decisions, have to be assigned to the same subset, we include a vertex $v_{i_1}$ with weight $\omega_{i_1}=\displaystyle\sum_{i\in \{i_1,\dots,i_m\}}e_i$; then, for each pair $v_i,v_{i'}\in V$ such that $i$ and $i'$ cannot be assigned to the same subset at the current node, we define $\{v_i,v_{i'}\}\in E$. The subset $S$ minimizing the reduced cost for a given $x$ can be calculated by solving the Maximum Weighted Independent Set problem over $G$. We solve this problem heuristically, applying a variant of the algorithm proposed in \cite{Sakai2003}.
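A minimal sketch of such a greedy procedure is given below. Vertex weights are taken as $-\omega_v$, so only vertices with positive weight can decrease the reduced cost; the selection rule $\text{weight}/(\text{degree}+1)$ is one common greedy variant and not necessarily the exact rule of \cite{Sakai2003}:
\begin{verbatim}
# Greedy maximum weighted independent set sketch.  adj maps each
# vertex (one per merged group of demand points) to the set of
# vertices it is incompatible with; weight[v] = -omega_v.

def greedy_mwis(adj, weight):
    alive = {v for v in adj if weight[v] > 0.0}
    chosen = []
    while alive:
        # pick the vertex with the best weight-to-conflict ratio
        v = max(alive,
                key=lambda u: weight[u] / (len(adj[u] & alive) + 1))
        chosen.append(v)
        alive -= adj[v] | {v}    # drop v and all its conflicts
    return chosen

adj = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}
weight = {1: 0.8, 2: 1.0, 3: 0.7, 4: 0.2}
print(greedy_mwis(adj, weight))   # [1, 3, 4], total weight 1.7
\end{verbatim}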
\subsection{Convergence}\label{sec:convergence} Due to the huge number of variables that might arise in column generation procedures, it is very important to check for possible degeneracy of the algorithm. Accelerating the convergence has traditionally been addressed by means of stabilization techniques. In recent papers \citep{BCPP2021,BJPP21} it has been shown how heuristic pricers avoid degeneracy. The ideas of stabilization and heuristic pricers have in common that, in the first iterations, neither adds the variable with the minimum associated reduced cost, which helps steer the process in the right direction. For the sake of readability, all the computational analysis is included in Section \ref{sec:computational}. There, the reader can see how our heuristic pricer needs fewer variables to certify optimality than the exact pricer from medium-sized instances onwards, thereby accelerating the convergence. \section{Matheuristic approaches\label{sec:heur}} \eqref{mf:0} is an $\mathcal{NP}$-hard combinatorial optimization problem, and both the compact formulation and the proposed branch-and-price approach are limited by the number of demand points $(n)$ and facilities $(p)$ to be considered. Indeed, as we will see in Section \ref{sec:computational}, the two exact approaches are only capable of solving small- and medium-sized instances to optimality. In this section we derive two different matheuristic procedures capable of handling larger instances in reasonable CPU times. The first approach is based on using the branch-and-price scheme but solving the pricing problem only heuristically. The second is an aggregation-based approach that will also allow us to derive theoretical error bounds on the approximation. \subsection{Heuristic pricer} The matheuristic procedure described here has been successfully applied in the literature; see, e.g., \cite{AZ2021,BCPP2021,Deleplanque2020} and the references therein. Recall that our pricing problem is $\mathcal{NP}$-hard. In order to avoid the exact procedure for large-sized instances, where not even a single iteration could be solved exactly, we propose a matheuristic. It consists of solving each pricing problem heuristically. The drawback of doing so is that we do not have a theoretical lower bound during the process. Nevertheless, for instances where the time limit is reached, we are able to visit more nodes in the branch-and-bound tree, which could allow us to obtain better incumbent solutions than the unfinished exact procedure. \subsection{Aggregation schemes}\label{sec:aggregation} The second matheuristic approach that we propose is based on applying aggregation techniques to the input data (the set of demand points). This type of approach has been successfully applied to solve large-scale continuous location problems \citep[see][]{BG21,BJPP21,BPS2018,CS90,daskin89,irawan16}. Let $\mathcal{A}=\{a_1,\ldots,a_n\}\subset \mathbb{R}^d$ be a set of demand points. In an aggregation procedure, the set $\mathcal{A}$ is replaced by a multiset $\mathcal{A}^\prime=\{a_1^\prime, \ldots, a_n^\prime\}$, where each point $a_i$ in $\mathcal{A}$ is assigned to a point $a^\prime_i$ in $\mathcal{A}^\prime$. In order to be able to solve \eqref{mf:0} for $\mathcal{A}^\prime$, the number of distinct elements of $\mathcal{A}^\prime$ is assumed to be smaller than the cardinality of $\mathcal{A}$, so that several $a_i$ might be assigned to the same $a_i^\prime$. Once the points in $\mathcal{A}$ are aggregated into $\mathcal{A}^\prime$, the procedure consists of solving \eqref{mf:0} for the demand points in $\mathcal{A}^\prime$. We get a set of $p$ optimal facilities for the aggregated problem, $\mathcal{X}^\prime=\{x_1^\prime, \ldots, x_p^\prime\}$, with associated objective value $\textsf{OM}_\lambda(\mathcal{A}^\prime;\mathcal{X}^\prime)^*$. These positions can also be evaluated in the original objective function of the problem for the demand points $\mathcal{A}$, $\textsf{OM}_\lambda(\mathcal{A};\mathcal{X}^\prime)^*$. The following result allows us to obtain an upper bound on the error incurred when aggregating demand points. \begin{thm} Let $\mathcal{X}^*$ be the optimal solution of \eqref{mf:0} and $\Delta = \displaystyle\max_{i=1,\ldots,n} \|a_i - a_i^\prime\|$. Then \begin{equation} | \text{\sf{OM}}_\lambda(\mathcal{A}; \mathcal{X}^*) - \text{\sf{OM}}_\lambda(\mathcal{A}^\prime; \mathcal{X}^\prime)| \le 2 \Delta . \end{equation} \end{thm} \begin{proof} During the proof we assume, without loss of generality, that $\sum_{i\in I} \lambda_i=1$.
By the triangle inequality and the monotonicity and sublinearity of the ordered median function, we have that ${\rm \textsf{OM}}_\lambda (\mathcal{A};\mathcal{X}) \leq \textsf{OM}_\lambda (\mathcal{A}^\prime;\mathcal{X}^*) + \textsf{OM}_\lambda (\mathcal{A}^\prime;\mathcal{A})$. Since $\Delta \geq \|a_i-a_i^\prime\|$ for all $i \in I$, $|\textsf{OM}_\lambda (\mathcal{A};\mathcal{X}^*) - \textsf{OM}_\lambda(\mathcal{A}^\prime;\mathcal{X}^*)|\le \displaystyle\sum_{i\in I} \lambda_i \Delta = \Delta$. Finally, the result is obtained by applying \citep[][Theorem 5]{geoffrion1977objective}. \end{proof} There are different strategies to reduce the dimensionality by aggregating points. In our computational experiments we consider two distinct approaches: \textit{$k$-Means Clustering} (KMEAN) and \textit{Pick The Farthest} (PTF). In the former we replace the original points by the obtained centroids, while in the latter an initial demand point from $\mathcal{A}$ is chosen at random and each subsequent point is selected as the demand point farthest from the last one chosen, until the predefined number of points is reached \citep[see][for further details on this procedure]{daskin89}. \section{Computational study} \label{sec:computational} In order to test the performance of our branch-and-price and our matheuristic approaches, we report the results of our computational experience. We consider different sets of instances used in the location literature, with sizes ranging from 20 to 654 demand points in the plane. In all of them, the number of facilities to be located, $p$, ranges over $\{2, 5, 10\}$, and we solve the instances for the $\lambda$-vectors in Table \ref{tab:lambdas}, $\{\mbox{W, C, K, D, S, A}\}$. We set $k = \frac{n}{2}$ for the $k$-Center and $k$-Entdian, and $\alpha = 0.9$ for the Centdian and $k$-Entdian. For the sake of readability, we restrict the computational study of this document to $\ell_1$-norm based distances. However, the reader can find extensive computational results for other norms in Appendix \ref{ap:norms}. The models were coded in C and solved with SCIP v.7.0.2 \citep{scip}, using SoPlex 5.0.2 as the underlying optimization solver, on a Mac OS Catalina machine with an Intel Xeon W clocked at 3.2 GHz and 96 GB of RAM. \subsection{Computational performance of the branch-and-price procedure}\label{sec:compBaP} In this section we report the results for our branch-and-price approach based on the classical dataset provided by \cite{EWC74}. From this dataset, we randomly generate five instances for each size $n \in \{20,30,40,45\}$, and we also consider the complete original instance with $n=50$. Together with the number of servers $p$ and the different ordered weighted median functions (\texttt{type}), a total of 378 instances has been considered. Firstly, concerning convergence (Section \ref{sec:convergence}), each line in Table \ref{tab:Heur20-40} shows the average results of 45 instances, five for each type of ordered median objective function to be minimized \{W, D, S\} and $p\in\{2, 5, 10\}$, solved to optimality. The results have been split by size ($n$) and by \texttt{Heurvar}: \texttt{FALSE} when only the exact pricer is used; \texttt{TRUE} if the heuristic pricer is used and the exact pricer is called only when the heuristic one does not provide new columns to add. The reader can see how the number of variables needed to certify optimality (\texttt{Vars}) becomes smaller with the heuristic pricer as the size of the instance increases.
Additionally, a second effect is a reduction of the CPU time (\texttt{Time}), owing to the smaller number of calls to the exact pricer (\texttt{Exact}), even though the total number of iterations (\texttt{Total}) is larger. Hence, we will use the heuristic pricer for the rest of the experiments. \begin{table} \centering{ \begin{adjustbox}{max width=1.0\textwidth} \begin{tabular}{ccrrrrrrrrrr}\hline $n$& \texttt{Heurvar}& \multicolumn{2}{c}{ \texttt{Iterations}} & \texttt{Vars} & \texttt{Time} \\ \cmidrule(lr){1-2}\cmidrule(lr){3-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6} & & \texttt{Exact} &\texttt{Total} \\ \cline{1-6}\multirow{2}{*}{20} & \texttt{FALSE} & 13 & 13 & 2189 & 64.92 \\ & \texttt{TRUE} & 4 & 23 & 2219 & 18.02 \\ \cline{1-6}\multirow{2}{*}{30} & \texttt{FALSE} & 15 & 15 & 2827 & 1034.97 \\ & \texttt{TRUE} & 3 & 60 & 2856 & 191.84 \\ \cline{1-6}\multirow{2}{*}{40} & \texttt{FALSE} & 50 & 50 & 4713 & 9086.33 \\ & \texttt{TRUE} & 13 & 136 & 4511 & 2229.21 \\ \hline \end{tabular}% \end{adjustbox} \caption{Average number of pricer iterations, variables and time using the combined heuristic and exact pricers or only using the exact pricer \label{tab:Heur20-40}}} \end{table}% Secondly, we have tuned the value of $\theta$ for the branching rule \eqref{eq:thetarule} for each of the objective functions (different values for the $\lambda$-vector) based on our computational experience. In Table \ref{tab:gaptheta} we show the average gap at termination of the above-mentioned 378 instances when we apply our branch-and-price approach with a time limit of 2 hours. \begin{table}[H] \centering \begin{adjustbox}{max width=1.0\textwidth} \begin{tabular}{crrrrrrr} \hline \texttt{type} &$\theta = 0.0$ &$\theta = 0.1$ &$\theta = 0.3$ &$\theta = 0.5$ &$\theta = 0.7$ &$\theta = 0.9$ &$\theta = 1.0$\\ \cmidrule(lr){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6}\cmidrule(lr){7-7}\cmidrule(lr){8-8} W & 0.04 & 0.04 & 0.04 & 0.04 & 0.04 & 0.04 &\bf 0.02 \\ C &\bf 27.94 & 28.34 & 28.29 & 28.47 & 28.64 & 28.74 & 28.19 \\ K & 12.83 & 12.63 & 12.80 &\bf 12.46 & 12.73 & 13.15 & 12.88 \\ D & 0.09 & 0.07 & 0.09 & 0.09 & 0.09 & 0.09 &\bf 0.02 \\ S & 0.11 & 0.14 & 0.14 & 0.14 & 0.14 & 0.13 &\bf 0.10 \\ A & 7.73 & 7.66 & 7.69 & 7.71 & 7.64 & 7.73 &\bf 7.33 \\ \hline \end{tabular} \end{adjustbox} \caption{\texttt{GAP(\%)} for $\ell_1$-norm, \cite{EWC74} instances\label{tab:gaptheta}} \end{table} Therefore, we set $\theta = 0$ for the Center problem, $\theta = 0.5$ for the $k$-Center, and $\theta = 1$ for the $p$-median, Centdian, $k$-Entdian and Ascendant objectives. Recall that when we use $\theta = 0$ we are selecting a pure distance-based branching rule. In contrast, when $\theta = 1$, we select the most fractional variable. When $\theta= 0.5$ we use a hybrid selection between the two extremes of \eqref{eq:thetarule}. In the following, the above fixed parameters will be used in the computational experiments for both exact and matheuristic methods. The average results obtained for the \cite{EWC74} instances, with a CPU time limit of 2 hours, are shown in Table \ref{tab:EilonL1}. There, for each combination of $n$ (size of the instance), $p$ (number of facilities to be located) and \texttt{type} (ordered median objective function to be minimized), we provide the average results for the $\ell_1$-norm, with a comparison between the compact formulation \eqref{eq:C} (Compact) and the branch-and-price approach (B\&P).
The table is organized as follows: the first column gives the CPU time in seconds needed to solve the problem (\texttt{Time}) and, within parentheses, the number of unsolved instances (\texttt{\#Unsolved}); the second column shows the gap at the root node; the third one gives the gap at termination, i.e., the remaining MIP gap in percentage (\texttt{GAP(\%)}) when the time limit is reached, and 0.00 otherwise; the fourth column shows the number of variables (\texttt{Vars}) needed to solve the problem; the fifth column shows the number of nodes (\texttt{Nodes}) explored in the branch-and-bound tree; and the last one reports the RAM memory (\texttt{Memory (MB)}), in megabytes, required during the execution. Within each column, we highlight in bold the best result between the two formulations, namely Compact or B\&P. \begin{table} \centering{ \begin{adjustbox}{max width=0.9\textwidth} \vspace*{-6cm} \begin{tabular}{cccrcrcrrrrrrrrrr}\hline $n$ & \texttt{type} &$p$ & \multicolumn{4}{c}{\texttt{Time (\#Unsolved)}} & \multicolumn{2}{c}{\texttt{GAProot(\%)}} & \multicolumn{2}{c}{\texttt{GAP(\%)}} & \multicolumn{2}{c}{\texttt{Vars}} & \multicolumn{2}{c}{\texttt{Nodes}} & \multicolumn{2}{c}{\texttt{Memory (MB)}} \\ \cmidrule(lr){1-3}\cmidrule(lr){4-7}\cmidrule(lr){8-9}\cmidrule(lr){10-11}\cmidrule(lr){12-13}\cmidrule(lr){14-15}\cmidrule(lr){16-17} &&& \multicolumn{2}{c}{Compact} & \multicolumn{2}{c}{B\&P}& Compact & B\&P & Compact & B\&P &Compact & B\&P&Compact & B\&P&Compact & B\&P\\ \hline \multirow{18}{*}{20} & \multirow{3}{*}{W} & 2 &\bf 1.59 &\bf( 0 )& 22.90 &( 0 )& 93.92 &\bf 0.00 & 0.00 & 0.00 &\bf 224 & 2131 & 9518 &\bf 1 &\bf 4 & 103 \\ & & 5 & 1588.99 &( 1 )&\bf 8.34 &\bf( 0 )& 100.00 &\bf 0.00 & 3.38 &\bf 0.00 &\bf 470 & 2408 & 10967305 &\bf 1 & 1278 &\bf 49 \\ & & 10 & --- &( 5 )&\bf 3.95 &\bf( 0 )& 100.00 &\bf 0.46 & 43.84 &\bf 0.00 &\bf 880 & 2127 & 19785215 &\bf 2 & 3425 &\bf 28 \\ \cline{2-17} & \multirow{3}{*}{C} & 2 &\bf 0.06 &\bf( 0 )& 237.96 &( 4 )& 78.92 &\bf 22.59 &\bf 0.00 & 10.78 &\bf 224 & 97635 &\bf 7 & 4652 &\bf 4 & 2239 \\ & & 5 &\bf 12.58 &\bf( 0 )& --- &( 5 )& 100.00 &\bf 29.46 &\bf 0.00 & 17.16 &\bf 470 & 15251 & 40379 &\bf 18660 &\bf 12 & 464 \\ & & 10 &\bf 511.69 &\bf( 2 )& 1831.83 &( 4 )& 100.00 &\bf 37.64 &\bf 7.59 & 20.28 &\bf 880 & 4243 & 7928195 &\bf 21617 & 725 &\bf 160 \\ \cline{2-17} & \multirow{3}{*}{K} & 2 &\bf 0.35 &\bf( 0 )& 1412.69 &( 1 )& 91.43 &\bf 7.55 &\bf 0.00 & 1.42 &\bf 224 & 37917 &\bf 630 & 670 &\bf 3 & 953 \\ & & 5 &\bf 243.88 &\bf( 0 )& 404.99 &( 3 )& 100.00 &\bf 15.40 &\bf 0.00 & 3.85 &\bf 470 & 9363 & 657827 &\bf 6642 &\bf 77 & 279 \\ & & 10 & 32.22 &( 4 )&\bf 3156.63 &\bf( 2 )& 100.00 &\bf 18.53 & 36.95 &\bf 3.26 &\bf 880 & 4071 & 12150962 &\bf 9244 & 2265 &\bf 111 \\ \cline{2-17} & \multirow{3}{*}{D} & 2 &\bf 2.18 &\bf( 0 )& 30.36 &( 0 )& 93.78 &\bf 0.03 &\bf 0.00 & 0.00 &\bf 224 & 2135 & 9222 &\bf 1 &\bf 5 & 108 \\ & & 5 & 1535.82 &( 1 )&\bf 12.18 &\bf( 0 )& 100.00 &\bf 0.00 & 6.69 &\bf 0.00 &\bf 470 & 2401 & 8972062 &\bf 1 & 1225 &\bf 49 \\ & & 10 & 5030.79 &( 4 )&\bf 6.88 &\bf( 0 )& 100.00 &\bf 0.46 & 48.19 &\bf 0.00 &\bf 880 & 2127 & 15660031 &\bf 4 & 2798 &\bf 28 \\ \cline{2-17} & \multirow{3}{*}{S} & 2 &\bf 2.24 &\bf( 0 )& 54.23 &( 0 )& 93.77 &\bf 0.16 & 0.00 & 0.00 &\bf 224 & 2119 & 7677 &\bf 1 &\bf 5 & 106 \\ & & 5 & 1238.87 &( 1 )&\bf 15.75 &\bf( 0 )& 100.00 &\bf 0.06 & 4.40 &\bf 0.00 &\bf 470 & 2401 & 8141244 &\bf 2 & 745 &\bf 50 \\ & & 10 & --- &( 5 )&\bf 7.61 &\bf( 0 )& 100.00 &\bf 0.53 & 50.12 &\bf 0.00 &\bf
880 & 2126 & 16072018 &\bf 5 & 2835 &\bf 28 \\ \cline{2-17} & \multirow{3}{*}{A} & 2 &\bf 0.85 &\bf( 0 )& 783.95 &( 1 )& 91.63 &\bf 4.45 &\bf 0.00 & 0.35 &\bf 224 & 16973 & 1340 &\bf 400 &\bf 4 & 738 \\ & & 5 &\bf 411.21 &\bf( 0 )& 2304.77 &( 0 )& 100.00 &\bf 10.18 &\bf 0.00 & 0.00 &\bf 470 & 7405 & 878975 &\bf 1697 &\bf 126 & 222 \\ & & 10 & 60.27 &( 4 )&\bf 883.79 &\bf( 1 )& 100.00 &\bf 17.10 & 31.87 &\bf 1.73 &\bf 880 & 3288 & 10637608 &\bf 2721 & 1723 &\bf 79 \\ \cline{1-17}\multirow{18}{*}{30} & \multirow{3}{*}{W} & 2 &\bf 139.91 &\bf( 0 )& 526.92 &( 0 )& 93.86 &\bf 0.00 & 0.00 & 0.00 &\bf 334 & 3142 & 787145 &\bf 1 &\bf 38 & 260 \\ & & 5 & --- &( 5 )&\bf 64.66 &\bf( 0 )& 100.00 &\bf 0.00 & 52.05 &\bf 0.00 &\bf 700 & 2963 & 17382888 &\bf 1 & 8647 &\bf 109 \\ & & 10 & --- &( 5 )&\bf 19.51 &\bf( 0 )& 100.00 &\bf 0.00 & 76.26 &\bf 0.00 &\bf 1310 & 2472 & 12250097 &\bf 1 & 4692 &\bf 55 \\ \cline{2-17} & \multirow{3}{*}{C} & 2 &\bf 0.11 &\bf( 0 )& 39.44 &( 4 )& 79.19 &\bf 21.41 &\bf 0.00 & 15.46 &\bf 334 & 125429 &\bf 66 & 931 &\bf 8 & 1443 \\ & & 5 &\bf 30.64 &\bf( 0 )& 1564.58 &( 4 )& 100.00 &\bf 31.68 &\bf 0.00 & 22.73 &\bf 700 & 30216 & 69019 &\bf 2817 &\bf 19 & 389 \\ & & 10 &\bf 4212.55 &\bf( 3 )& --- &( 5 )& 100.00 &\bf 34.18 &\bf 16.67 & 27.51 &\bf 1310 & 12928 & 9619002 &\bf 6027 & 1823 &\bf 190 \\ \cline{2-17} & \multirow{3}{*}{K} & 2 &\bf 4.44 &\bf( 0 )& 409.69 &( 4 )& 90.88 &\bf 8.65 &\bf 0.00 & 7.58 &\bf 334 & 45846 & 8511 &\bf 147 &\bf 10 & 1696 \\ & & 5 & 2956.65 &( 4 )&\bf 5199.43 &\bf( 3 )& 100.00 &\bf 12.01 & 17.79 &\bf 5.82 &\bf 700 & 18893 & 12169516 &\bf 815 & 2570 &\bf 534 \\ & & 10 & --- &( 5 )&\bf 2740.67 &\bf( 4 )& 100.00 &\bf 18.84 & 69.60 &\bf 12.31 &\bf 1310 & 7416 & 9299590 &\bf 2992 & 3105 &\bf 187 \\ \cline{2-17} & \multirow{3}{*}{D} & 2 &\bf 201.28 &\bf( 0 )& 454.39 &( 0 )& 93.77 &\bf 0.00 & 0.00 & 0.00 &\bf 334 & 3087 & 757445 &\bf 1 &\bf 49 & 258 \\ & & 5 & --- &( 5 )&\bf 65.46 &\bf( 0 )& 100.00 &\bf 0.00 & 57.16 &\bf 0.00 &\bf 700 & 2957 & 9914066 &\bf 1 & 7439 &\bf 111 \\ & & 10 & --- &( 5 )&\bf 21.25 &\bf( 0 )& 100.00 &\bf 0.00 & 79.34 &\bf 0.00 &\bf 1310 & 2464 & 10108803 &\bf 1 & 4631 &\bf 55 \\ \cline{2-17} & \multirow{3}{*}{S} & 2 &\bf 203.04 &\bf( 0 )& 370.63 &( 0 )& 93.68 &\bf 0.00 &\bf 0.00 & 0.00 &\bf 334 & 3184 & 566382 &\bf 1 &\bf 41 & 263 \\ & & 5 & --- &( 5 )&\bf 160.85 &\bf( 0 )& 100.00 &\bf 0.03 & 56.47 &\bf 0.00 &\bf 700 & 2963 & 9283122 &\bf 2 & 7054 &\bf 112 \\ & & 10 & --- &( 5 )&\bf 42.86 &\bf( 0 )& 100.00 &\bf 0.09 & 79.91 &\bf 0.00 &\bf 1310 & 2469 & 9530286 &\bf 3 & 4686 &\bf 56 \\ \cline{2-17} & \multirow{3}{*}{A} & 2 &\bf 21.89 &\bf( 0 )& 3640.13 &( 2 )& 91.15 &\bf 4.46 &\bf 0.00 & 3.26 &\bf 334 & 12721 & 26764 &\bf 41 &\bf 12 & 845 \\ & & 5 & 5403.72 &( 4 )&\bf 2750.01 &\bf( 3 )& 100.00 &\bf 7.76 & 28.99 &\bf 2.60 &\bf 700 & 8615 & 8044660 &\bf 188 & 2288 &\bf 357 \\ & & 10 & --- &( 5 )&\bf 804.71 &\bf( 4 )& 100.00 &\bf 13.38 & 70.51 &\bf 6.64 &\bf 1310 & 5529 & 7159232 &\bf 1465 & 2364 &\bf 165 \\ \cline{1-17}\multirow{18}{*}{40} & \multirow{3}{*}{W} & 2 & 4028.70 &( 4 )&\bf 1675.34 &\bf( 0 )& 93.79 &\bf 0.01 & 12.34 &\bf 0.00 &\bf 444 & 5211 & 26828725 &\bf 1 & 2515 &\bf 645 \\ & & 5 & --- &( 5 )&\bf 1647.86 &\bf( 0 )& 100.00 &\bf 0.02 & 67.11 &\bf 0.00 &\bf 930 & 4028 & 12240990 &\bf 3 & 10977 &\bf 229 \\ & & 10 & --- &( 5 )&\bf 348.57 &\bf( 0 )& 100.00 &\bf 0.09 & 81.57 &\bf 0.00 &\bf 1740 & 4001 & 7841923 &\bf 2 & 4267 &\bf 125 \\ \cline{2-17} & \multirow{3}{*}{C} & 2 &\bf 0.25 &\bf( 0 )& --- &( 5 )& 75.52 &\bf 30.52 &\bf 
0.00 & 29.73 &\bf 444 & 136451 &\bf 237 & 259 &\bf 15 & 1541 \\ & & 5 &\bf 116.02 &\bf( 0 )& --- &( 5 )& 100.00 &\bf 42.30 &\bf 0.00 & 41.65 &\bf 930 & 27041 & 195892 &\bf 158 &\bf 42 & 224 \\ & & 10 &\bf 3022.45 &\bf( 4 )& --- &( 5 )& 100.00 &\bf 36.47 &\bf 31.47 & 33.88 &\bf 1740 & 12733 & 7126207 &\bf 667 & 2024 &\bf 110 \\ \cline{2-17} & \multirow{3}{*}{K} & 2 &\bf 58.78 &\bf( 0 )& --- &( 5 )& 90.67 &\bf 14.52 &\bf 0.00 & 14.52 &\bf 444 & 14164 & 93918 &\bf 11 &\bf 27 & 897 \\ & & 5 & --- &( 5 )& --- &( 5 )& 100.00 &\bf 21.45 & 56.58 &\bf 21.44 &\bf 930 & 10132 & 6803627 &\bf 28 & 5632 &\bf 360 \\ & & 10 & --- &( 5 )& --- &( 5 )& 100.00 &\bf 19.04 & 75.08 &\bf 17.71 &\bf 1740 & 8823 & 5436226 &\bf 280 & 2606 &\bf 198 \\ \cline{2-17} & \multirow{3}{*}{D} & 2 & 5908.68 &( 4 )&\bf 436.48 &\bf( 1 )& 93.67 &\bf 0.02 & 15.22 &\bf 0.01 &\bf 444 & 5669 & 16542227 &\bf 2 & 3164 &\bf 709 \\ & & 5 & --- &( 5 )&\bf 855.62 &\bf( 1 )& 100.00 &\bf 0.11 & 68.93 &\bf 0.08 &\bf 930 & 4094 & 7984937 &\bf 2 & 10233 &\bf 233 \\ & & 10 & --- &( 5 )&\bf 331.54 &\bf( 0 )& 100.00 &\bf 0.07 & 83.85 &\bf 0.00 &\bf 1740 & 4004 & 5704188 &\bf 2 & 4413 &\bf 126 \\ \cline{2-17} & \multirow{3}{*}{S} & 2 & 4977.44 &( 4 )&\bf 429.96 &\bf( 1 )& 93.60 &\bf 0.47 & 14.33 &\bf 0.47 &\bf 444 & 5195 & 12853124 &\bf 1 & 2430 &\bf 657 \\ & & 5 & --- &( 5 )&\bf 2159.56 &\bf( 1 )& 100.00 &\bf 0.14 & 70.18 &\bf 0.02 &\bf 930 & 4082 & 7715457 &\bf 4 & 9805 &\bf 233 \\ & & 10 & --- &( 5 )&\bf 615.35 &\bf( 0 )& 100.00 &\bf 0.17 & 84.62 &\bf 0.00 &\bf 1740 & 3999 & 5409994 &\bf 4 & 4687 &\bf 126 \\ \cline{2-17} & \multirow{3}{*}{A} & 2 &\bf 533.79 &\bf( 0 )& --- &( 5 )& 90.85 &\bf 8.30 &\bf 0.00 & 8.19 &\bf 444 & 6506 & 455652 &\bf 3 &\bf 48 & 769 \\ & & 5 & --- &( 5 )& --- &( 5 )& 100.00 &\bf 14.76 & 58.65 &\bf 14.17 &\bf 930 & 5538 & 3684557 &\bf 10 & 3409 &\bf 331 \\ & & 10 & --- &( 5 )& --- &( 5 )& 100.00 &\bf 12.35 & 74.52 &\bf 10.21 &\bf 1740 & 6285 & 4403838 &\bf 161 & 2000 &\bf 214 \\ \cline{1-17}\multirow{18}{*}{45} & \multirow{3}{*}{W} & 2 & --- &( 5 )&\bf 483.59 &\bf( 1 )& 94.05 &\bf 0.04 & 27.06 &\bf 0.02 &\bf 499 & 7219 & 24989615 &\bf 2 & 5854 &\bf 1085 \\ & & 5 & --- &( 5 )&\bf 1745.55 &\bf( 2 )& 100.00 &\bf 0.32 & 71.65 &\bf 0.27 &\bf 1045 & 4855 & 11473640 &\bf 4 & 11171 &\bf 374 \\ & & 10 & --- &( 5 )&\bf 635.43 &\bf( 0 )& 100.00 &\bf 0.03 & 83.54 &\bf 0.00 &\bf 1955 & 4239 & 5717627 &\bf 1 & 3767 &\bf 168 \\ \cline{2-17} & \multirow{3}{*}{C} & 2 &\bf 0.46 &\bf( 0 )& --- &( 5 )& 74.99 &\bf 39.01 &\bf 0.00 & 38.99 &\bf 499 & 109398 & 628 &\bf 104 &\bf 17 & 1364 \\ & & 5 &\bf 144.75 &\bf( 0 )& --- &( 5 )& 100.00 &\bf 40.69 &\bf 0.00 & 40.62 &\bf 1045 & 20483 & 215522 &\bf 31 &\bf 44 & 176 \\ & & 10 & --- &( 5 )& --- &( 5 )& 100.00 &\bf 32.80 & 37.16 &\bf 31.54 &\bf 1955 & 14204 & 6469050 &\bf 219 & 1915 &\bf 110 \\ \cline{2-17} & \multirow{3}{*}{K} & 2 &\bf 342.18 &\bf( 0 )& --- &( 5 )& 91.23 &\bf 16.93 &\bf 0.00 & 16.93 &\bf 499 & 10490 & 497310 &\bf 4 &\bf 59 & 845 \\ & & 5 & --- &( 5 )& --- &( 5 )& 100.00 &\bf 22.98 & 64.55 &\bf 22.98 &\bf 1045 & 6631 & 5434589 &\bf 11 & 6520 &\bf 295 \\ & & 10 & --- &( 5 )& --- &( 5 )& 100.00 &\bf 16.68 & 77.74 &\bf 16.48 &\bf 1955 & 8738 & 4555667 &\bf 78 & 2681 &\bf 220 \\ \cline{2-17} & \multirow{3}{*}{D} & 2 & --- &( 5 )&\bf 364.11 &\bf( 1 )& 93.96 &\bf 0.02 & 29.64 &\bf 0.02 &\bf 499 & 6473 & 14042725 &\bf 1 & 6338 &\bf 973 \\ & & 5 & --- &( 5 )&\bf 1744.42 &\bf( 2 )& 100.00 &\bf 0.17 & 73.98 &\bf 0.11 &\bf 1045 & 4731 & 7322624 &\bf 3 & 10301 &\bf 365 \\ & & 10 & --- &( 5 
)&\bf 667.13 &\bf( 0 )& 100.00 &\bf 0.02 & 84.73 &\bf 0.00 &\bf 1955 & 4231 & 5228591 &\bf 1 & 4875 &\bf 169 \\ \cline{2-17} & \multirow{3}{*}{S} & 2 & --- &( 5 )&\bf 623.35 &\bf( 2 )& 93.87 &\bf 0.10 & 28.98 &\bf 0.09 &\bf 499 & 7260 & 10776521 &\bf 2 & 4776 &\bf 1093 \\ & & 5 & --- &( 5 )& --- &( 5 )& 100.00 &\bf 0.72 & 76.35 &\bf 0.62 &\bf 1045 & 4899 & 7356115 &\bf 7 & 10281 &\bf 378 \\ & & 10 & --- &( 5 )&\bf 1848.85 &\bf( 0 )& 100.00 &\bf 0.18 & 85.38 &\bf 0.00 &\bf 1955 & 4258 & 4378226 &\bf 4 & 4283 &\bf 168 \\ \cline{2-17} & \multirow{3}{*}{A} & 2 &\bf 4681.25 &\bf( 0 )& --- &( 5 )& 91.28 &\bf 11.43 &\bf 0.00 & 11.43 &\bf 499 & 6849 & 3057137 &\bf 2 &\bf 121 & 975 \\ & & 5 & --- &( 5 )& --- &( 5 )& 100.00 &\bf 17.39 & 65.13 &\bf 17.17 &\bf 1045 & 5476 & 2139768 &\bf 4 & 2517 &\bf 415 \\ & & 10 & --- &( 5 )& --- &( 5 )& 100.00 &\bf 10.28 & 76.58 &\bf 8.88 &\bf 1955 & 6105 & 3577556 &\bf 56 & 1976 &\bf 244 \\ \cline{1-17}\multirow{18}{*}{50} & \multirow{3}{*}{W} & 2 & --- &( 1 )&\bf 331.87 &\bf( 0 )& 94.13 &\bf 0.00 & 34.44 &\bf 0.00 &\bf 554 & 8094 & 24416531 &\bf 1 & 6585 &\bf 1464 \\ & & 5 & --- &( 1 )&\bf 410.87 &\bf( 0 )& 100.00 &\bf 0.00 & 76.08 &\bf 0.00 &\bf 1160 & 5292 & 9438723 &\bf 1 & 9646 &\bf 466 \\ & & 10 & --- &( 1 )&\bf 1005.02 &\bf( 0 )& 100.00 &\bf 0.00 & 84.68 &\bf 0.00 &\bf 2170 & 4914 & 5017512 &\bf 1 & 3878 &\bf 225 \\ \cline{2-17} & \multirow{3}{*}{C} & 2 &\bf 0.34 &\bf( 0 )& --- &( 1 )& 75.02 &\bf 30.33 &\bf 0.00 & 30.31 &\bf 554 & 80356 & 367 &\bf 37 &\bf 22 & 1064 \\ & & 5 &\bf 379.06 &\bf( 0 )& --- &( 1 )& 100.00 &\bf 41.37 &\bf 0.00 & 41.37 &\bf 1160 & 15314 & 443313 &\bf 14 &\bf 137 & 143 \\ & & 10 & --- &( 1 )& --- &( 1 )& 100.00 &\bf 37.49 & 46.67 &\bf 36.94 &\bf 2170 & 12538 & 5837062 &\bf 213 & 2937 &\bf 98 \\ \cline{2-17} & \multirow{3}{*}{K} & 2 &\bf 1135.78 &\bf( 0 )& --- &( 1 )& 91.28 &\bf 15.46 &\bf 0.00 & 15.46 &\bf 554 & 10042 & 1334361 &\bf 3 &\bf 84 & 926 \\ & & 5 & --- &( 1 )& --- &( 1 )& 100.00 &\bf 24.86 & 68.28 &\bf 24.86 &\bf 1160 & 6541 & 4368607 &\bf 4 & 6095 &\bf 324 \\ & & 10 & --- &( 1 )& --- &( 1 )& 100.00 &\bf 23.05 & 79.98 &\bf 23.04 &\bf 2170 & 7164 & 2448072 &\bf 15 & 1851 &\bf 205 \\ \cline{2-17} & \multirow{3}{*}{D} & 2 & --- &( 1 )&\bf 328.07 &\bf( 0 )& 94.07 &\bf 0.00 & 37.30 &\bf 0.00 &\bf 554 & 8035 & 12235346 &\bf 1 & 7096 &\bf 1485 \\ & & 5 & --- &( 1 )&\bf 4430.48 &\bf( 0 )& 100.00 &\bf 0.08 & 78.70 &\bf 0.00 &\bf 1160 & 5415 & 5502769 &\bf 5 & 8618 &\bf 485 \\ & & 10 & --- &( 1 )&\bf 1408.05 &\bf( 0 )& 100.00 &\bf 0.00 & 86.81 &\bf 0.00 &\bf 2170 & 4914 & 3617149 &\bf 2 & 4412 &\bf 219 \\ \cline{2-17} & \multirow{3}{*}{S} & 2 & --- &( 1 )&\bf 516.57 &\bf( 0 )& 93.95 &\bf 0.00 & 37.29 &\bf 0.00 &\bf 554 & 7579 & 8797114 &\bf 1 & 4912 &\bf 1387 \\ & & 5 & --- &( 1 )& --- &( 1 )& 100.00 &\bf 0.57 & 79.68 &\bf 0.57 &\bf 1160 & 5704 & 5750451 &\bf 5 & 9004 &\bf 508 \\ & & 10 & --- &( 1 )&\bf 3413.97 &\bf( 0 )& 100.00 &\bf 0.00 & 87.36 &\bf 0.00 &\bf 2170 & 4962 & 3126980 &\bf 5 & 5175 &\bf 230 \\ \cline{2-17} & \multirow{3}{*}{A} & 2 & --- &( 1 )& --- &( 1 )& 91.49 &\bf 10.36 & 19.20 &\bf 10.36 &\bf 554 & 8056 & 3163853 &\bf 2 &\bf 1161 & 1369 \\ & & 5 & --- &( 1 )& --- &( 1 )& 100.00 &\bf 19.06 & 67.25 &\bf 18.75 &\bf 1160 & 5872 & 2177800 &\bf 4 & 3290 &\bf 542 \\ & & 10 & --- &( 1 )& --- &( 1 )& 100.00 &\bf 10.17 & 78.48 &\bf 9.36 &\bf 2170 & 5764 & 2718050 &\bf 16 & 1862 &\bf 268 \\ \cline{1-17}\multicolumn{3}{c}{\bf Total Average:} & 645.10 &( 229 )&\bf 772.80 &\bf( 171 )& 96.71 &\bf 10.19 & 35.81 &\bf 7.98 &\bf 897 
& 13958 & 6581088 &\bf 1111 & 3014 &\bf 427 \\ \hline \end{tabular}% \end{adjustbox} \caption{Results for \cite{EWC74} instances for $\ell_1$-norm \label{tab:EilonL1}}} \end{table}% The branch-and-price algorithm is able to solve to optimality 58 instances more than the compact formulation. However, for some instances (mainly Center and $k$-Center problems, or when $p=2$) the instances solved with the compact formulation need less CPU time. Thus, a first conclusion is that decomposition techniques become more important as $p$ increases, because their number of variables does not depend so strongly on this parameter. A second conclusion from the results is that the branch-and-price is a very powerful tool when the gap at the root node is close to zero, which does not happen when a large percentage of the positions of the $\lambda$-vector are zero. Concerning the memory used by the tested formulations, the compact formulation needs larger branch-and-bound trees to deal with fractional solutions, whereas the branch-and-price uses more variables. Since the average gap at termination for the branch-and-price algorithm is much smaller than the one obtained by the compact formulation (7.98\% against 35.81\%), we will use decomposition-based algorithms to study medium- and large-sized instances. \subsection{Computational performance of the matheuristics} Finally, in this section, we show the performance of our matheuristic procedures on larger instances. Firstly, we test them for $n=50$, where the solutions can be compared with the theoretical bounds provided by the exact method. Secondly, we compare them using a larger instance of 654 demand points in the following section. In order to obtain Table \ref{tab:Heur50}, instances generated with the 50 demand points described in \cite{EWC74} are solved with 18 different configurations of ordered weighted median functions and number of open servers. A time limit of 2 hours was fixed for this experiment. Each of these 18 problems has been solved by means of the following strategies: the branch-and-price procedure (B\&P); the heuristic used to generate initial columns (InitialHeur); the decomposition-based heuristic (Matheur); and the aggregation-based approaches of Section \ref{sec:aggregation} (KMEAN-20, KMEAN-30, PTF-20, PTF-30) for $|\mathcal{A}^\prime|=\{20,30\}$. The reported results are the CPU time and the gap (\texttt{GAP$_{LB}$(\%)}), which is calculated with respect to the lower bound provided by the branch-and-price algorithm when the time limit is reached. Thereby we have a theoretical gap that tells us exactly the room for improvement left to our heuristics. The branch-and-price method always reports the best performance except in one instance. In general it yields smaller gaps and, on average, it is better not to spend the time solving the exact pricer, letting the algorithm instead keep adding columns or branching before certifying optimality. Thus, with the Matheur strategy we obtain an average gap of 11.23\%. In fact, this matheuristic finds the optimal solution (certified by the exact method) in at least six instances. Concerning the time, the other heuristics obtain good-quality solutions in much smaller CPU times. For the Eilon dataset, aggregating into more demand points gives better results, although the CPU time increases. However, for some instances where the aggregated problem with 20 points is solved to optimality while the one with 30 points is not, the first option could work better under a fixed time limit.
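To make the aggregation step of Section \ref{sec:aggregation} concrete, the following minimal sketch (Python/NumPy, purely illustrative and not the code used in the experiments; the function names are ours, and the nearest-aggregated-point assignment is our assumption) implements one reading of the PTF rule and evaluates the quantity $\Delta$ appearing in the error bound of the theorem above.
\begin{verbatim}
import numpy as np

def pick_the_farthest(A, m, seed=0):
    """PTF aggregation: start from a random demand point and repeatedly
    add the not-yet-chosen point farthest from the last one chosen."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(A)))]
    remaining = set(range(len(A))) - set(chosen)
    while len(chosen) < m:
        last = A[chosen[-1]]
        nxt = max(remaining, key=lambda j: np.linalg.norm(A[j] - last))
        chosen.append(nxt)
        remaining.remove(nxt)
    return A[chosen]

def aggregation_delta(A, Aprime):
    """Delta = max_i ||a_i - a'_i||, assigning each a_i to its nearest
    aggregated point; the theorem bounds the error by 2 * Delta."""
    dists = np.linalg.norm(A[:, None, :] - Aprime[None, :, :], axis=2)
    return dists.min(axis=1).max()
\end{verbatim}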
\begin{table} \centering{ \begin{adjustbox}{max width=1.0\textwidth} \begin{tabular}{ccrrrrrrrrrrrrrrcc}\hline \texttt{type} &$p$ & \multicolumn{2}{c}{ B\&P} & \multicolumn{2}{c}{ InitialHeur}& \multicolumn{2}{c}{ Matheur}& \multicolumn{2}{c}{ KMEAN-20}& \multicolumn{2}{c}{ KMEAN-30}& \multicolumn{2}{c}{ PTF-20}& \multicolumn{2}{c}{ PTF-30} \\ \cmidrule(lr){1-2}\cmidrule(lr){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8}\cmidrule(lr){9-10}\cmidrule(lr){11-12}\cmidrule(lr){13-14}\cmidrule(lr){15-16} & & \texttt{Time} &\texttt{GAP$_{LB}$(\%)}& \texttt{Time} &\texttt{GAP$_{LB}$(\%)}& \texttt{Time} &\texttt{GAP$_{LB}$(\%)}& \texttt{Time} &\texttt{GAP$_{LB}$(\%)}& \texttt{Time} &\texttt{GAP$_{LB}$(\%)}& \texttt{Time} &\texttt{GAP$_{LB}$(\%)}& \texttt{Time} &\texttt{GAP$_{LB}$(\%)} \\ \hline \cline{1-16}\multirow{3}{*}{W} & 2 & 331.87 &\bf 0.00 & 0.00 & 1.44 & 134.54 &\bf 0.00 & 74.60 & 3.10 & 5204.85 & 2.80 & 35.14 & 7.58 & 7200.18 & 2.50 \\ & 5 & 410.87 &\bf 0.00 & 0.00 & 11.53 & 9.51 &\bf 0.00 & 21.47 & 12.55 & 398.93 & 11.25 & 16.38 & 17.88 & 187.03 & 7.90 \\ & 10 & 1005.02 &\bf 0.00 & 0.00 & 21.57 & 2.84 &\bf 0.00 & 4.23 & 17.16 & 59.08 & 8.79 & 16.11 & 23.45 & 57.35 & 9.88 \\ \cline{1-16}\multirow{3}{*}{C} & 2 & 7200.64 & 43.49 & 0.00 & 43.49 & 6.61 & 39.68 & 7200.16 & 47.05 & 7200.52 & 32.36 & 7200.21 &\bf 31.46 & 7200.47 & 41.39 \\ & 5 & 7200.19 & 70.56 & 0.00 & 73.45 & 7200.00 &\bf 46.66 & 7200.09 & 82.76 & 7200.10 & 76.52 & 7200.09 & 94.74 & 7200.09 & 72.70 \\ & 10 & 7200.19 & 58.58 & 0.00 & 101.43 & 7200.18 &\bf 28.93 & 7200.16 & 104.98 & 7200.19 & 73.44 & 7200.16 & 154.72 & 7200.18 & 70.54 \\ \cline{1-16}\multirow{3}{*}{K} & 2 & 7200.13 &\bf 18.28 & 0.00 & 21.40 & 227.64 & 19.79 & 3589.02 & 21.19 & 7200.40 & 22.00 & 7200.35 & 20.05 & 7200.57 & 18.81 \\ & 5 & 7200.34 & 33.09 & 0.00 & 33.09 & 392.07 &\bf 22.37 & 7200.10 & 26.84 & 7200.15 & 36.08 & 7200.10 & 42.99 & 7200.13 & 30.43 \\ & 10 & 7200.28 & 29.94 & 0.00 & 41.73 & 864.26 &\bf 11.79 & 7200.17 & 35.38 & 7200.20 & 19.91 & 7200.17 & 41.97 & 7200.19 & 15.93 \\ \cline{1-16}\multirow{3}{*}{D} & 2 & 328.07 &\bf 0.00 & 0.00 & 1.45 & 130.00 &\bf 0.00 & 42.82 & 3.76 & 5213.35 & 2.82 & 18.77 & 7.70 & 5105.83 & 2.47 \\ & 5 & 4430.48 &\bf 0.00 & 0.00 & 11.43 & 10.73 &\bf 0.00 & 7.87 & 12.53 & 162.45 & 10.95 & 17.50 & 16.36 & 191.88 & 8.13 \\ & 10 & 1408.05 &\bf 0.00 & 0.00 & 21.38 & 3.06 & 0.43 & 4.76 & 17.00 & 50.22 & 10.18 & 22.26 & 23.27 & 135.57 & 9.07 \\ \cline{1-16}\multirow{3}{*}{S} & 2 & 516.57 &\bf 0.00 & 0.00 & 1.62 & 111.73 &\bf 0.00 & 25.43 & 2.97 & 7200.20 & 2.59 & 35.77 & 7.34 & 7200.27 & 2.33 \\ & 5 & 7200.43 &\bf 0.57 & 0.00 & 12.28 & 14.96 & 0.64 & 36.68 & 12.60 & 498.07 & 11.81 & 41.88 & 18.33 & 746.57 & 8.26 \\ & 10 & 3413.97 &\bf 0.00 & 0.00 & 21.24 & 3.22 & 0.66 & 10.99 & 16.86 & 29.98 & 7.98 & 50.52 & 23.29 & 114.10 & 8.97 \\ \cline{1-16}\multirow{3}{*}{A} & 2 & 7200.39 & 11.56 & 0.00 & 11.56 & 155.18 & 10.91 & 2611.77 &\bf 9.05 & 7200.15 & 10.62 & 7200.41 & 13.49 & 7200.09 & 13.69 \\ & 5 & 7200.40 & 23.08 & 0.00 & 23.61 & 325.35 &\bf 14.00 & 7200.10 & 19.96 & 7200.16 & 18.63 & 7200.11 & 26.03 & 7200.13 & 22.18 \\ & 10 & 7200.24 & 10.33 & 0.00 & 31.65 & 129.77 &\bf 6.29 & 7200.17 & 17.48 & 7200.21 & 16.63 & 7200.17 & 33.49 & 7200.21 & 21.82 \\ \cline{1-16}\multicolumn{2}{c}{\bf Total Average:} & 4658.23 & 16.64 & 0.00 & 26.97 & 940.09 &\bf 11.23 & 3157.25 & 25.73 & 4645.51 & 20.85 & 3614.23 & 33.56 & 4763.38 & 20.39 \\ \hline \end{tabular}% \end{adjustbox} \caption{Heuristic results for instances of $n=50$, \cite{EWC74} 
\label{tab:Heur50}}} \end{table}% Table \ref{tab:Heur654} has a structure similar to that of the previous table. However, for $n=654$ the branch-and-price is not able to provide a good lower bound, even when increasing the time limit to 24 hours or using \eqref{eq:LB}. Hence, here we calculate \texttt{GAP$_{Best}$(\%)} as the gap with respect to the best known integer solution. For these large-sized problems, the best solutions are found, on average, by the decomposition-based matheuristic, but the improvement over the initial heuristic is null in some cases. Some instances have the best performance using the KMEAN-20 or PTF-20 matheuristics. No big improvement is observed when taking 30 points instead of 20 for the aggregation method. To find an explanation for this, Figure \ref{fig:626Wp5} depicts the aggregation (triangular points) and the solution (square points) for a particular instance. The reader can see how the demand points are concentrated by zones. Adding more points to $\mathcal{A}^\prime$ gives importance to some aggregated points that do not properly represent the original data of this instance of $n=654$. In this case, we have an example for which the aggregation algorithm works better under the {\em less is more} paradigm. \begin{table} \centering{ \begin{adjustbox}{max width=1.0\textwidth} \begin{tabular}{ccrrrrrrrrrrrrrrcc}\hline \texttt{type} &$p$ & \multicolumn{2}{c}{ B\&P} & \multicolumn{2}{c}{ InitialHeur}& \multicolumn{2}{c}{ Matheur}& \multicolumn{2}{c}{ KMEAN-20}& \multicolumn{2}{c}{ KMEAN-30}& \multicolumn{2}{c}{ PTF-20}& \multicolumn{2}{c}{ PTF-30} \\ \cmidrule(lr){1-2}\cmidrule(lr){3-4}\cmidrule(lr){5-6}\cmidrule(lr){7-8}\cmidrule(lr){9-10}\cmidrule(lr){11-12}\cmidrule(lr){13-14}\cmidrule(lr){15-16} & & \texttt{Time} &\texttt{GAP$_{Best}$(\%)}& \texttt{Time} &\texttt{GAP$_{Best}$(\%)}& \texttt{Time} &\texttt{GAP$_{Best}$(\%)}& \texttt{Time} &\texttt{GAP$_{Best}$(\%)}& \texttt{Time} &\texttt{GAP$_{Best}$(\%)}& \texttt{Time} &\texttt{GAP$_{Best}$(\%)}& \texttt{Time} &\texttt{GAP$_{Best}$(\%)} \\ \hline \cline{1-16}\multirow{3}{*}{W} & 2 & 86441.18 & 9.74 & 0.06 & 9.74 & 86441.18 & 9.74 & 13.31 & \bf 0.00 & 755.99 & 30.34 & 32.59 & 10.32 & 697.98 & 29.60 \\ & 5 & 86444.11 & \bf 0.00 & 0.06 & 0.51 & 86444.11 & \bf 0.00 & 6.56 & 28.51 & 97.79 & 58.33 & 7.46 & 49.02 & 74.85 & 55.91 \\ & 10 & 86439.34 & \bf 0.00 & 0.06 & 15.39 & 86439.34 & \bf 0.00 & 2.64 & 74.95 & 31.49 & 84.48 & 1.58 & 123.62 & 68.11 & 103.19 \\ \cline{1-16}\multirow{3}{*}{C} & 2 & 86407.88 & 4.16 & 0.07 & 4.16 & 86407.32 & \bf 0.00 & 420.97 & 8.91 & 86400.83 & 3.43 & 5765.05 & \bf 0.00 & 86401.06 & \bf 0.00 \\ & 5 & 86407.52 & 6.54 & 0.06 & 8.92 & 51003.35 & 4.47 & 86424.42 & 32.61 & 86400.09 & \bf 0.00 & 86400.15 & 20.98 & 86400.09 & 1.64 \\ & 10 & 86408.46 & 18.78 & 0.06 & 18.78 & 83212.74 & 1.85 & 42425.62 & 29.26 & 86400.17 & \bf 0.00 & 86400.16 & 60.83 & 86400.13 & 0.64 \\ \cline{1-16}\multirow{3}{*}{K} & 2 & 86424.57 & 0.92 & 0.06 & 0.92 & 86424.57 & 0.92 & 274.39 & \bf 0.00 & 40549.49 & 10.08 & 365.79 & 8.36 & 15942.12 & 6.04 \\ & 5 & 86424.89 & \bf 0.00 & 0.06 & \bf 0.00 & 86424.89 & \bf 0.00 & 854.51 & 24.50 & 86400.13 & 47.40 & 2868.28 & 34.04 & 86400.26 & 28.16 \\ & 10 & 86425.31 & \bf 0.00 & 0.06 & \bf 0.00 & 86425.31 & \bf 0.00 & 6695.50 & 88.17 & 86400.19 & 55.15 & 13189.85 & 97.91 & 86400.14 & 70.97 \\ \cline{1-16}\multirow{3}{*}{D} & 2 & 86440.98 & 11.02 & 0.07 & 11.02 & 86440.98 & 11.02 & 29.22 & \bf 0.00 & 737.65 & 31.24 & 23.55 & 11.31 & 669.43 & 30.96 \\ & 5 & 86440.05 & \bf 0.00
& 0.06 & 0.51 & 86440.05 & \bf 0.00 & 15.08 & 29.39 & 123.71 & 60.16 & 6.47 & 51.81 & 96.06 & 61.08 \\ & 10 & 86439.96 & \bf 0.00 & 0.06 & 15.35 & 86439.96 & \bf 0.00 & 7.19 & 73.21 & 21.26 & 75.23 & 3.05 & 122.59 & 106.33 & 103.25 \\ \cline{1-16}\multirow{3}{*}{S} & 2 & 86440.27 & 8.02 & 0.07 & 8.02 & 86440.27 & 8.02 & 19.02 & \bf 0.00 & 427.39 & 27.13 & 27.17 & 9.27 & 515.89 & 25.98 \\ & 5 & 86439.85 & \bf 0.00 & 0.07 & 0.52 & 86439.85 & \bf 0.00 & 9.14 & 28.84 & 200.77 & 57.12 & 6.68 & 49.57 & 143.32 & 59.67 \\ & 10 & 86440.73 & \bf 0.00 & 0.06 & 15.80 & 86440.73 & \bf 0.00 & 6.69 & 75.73 & 32.23 & 81.06 & 2.44 & 122.72 & 107.15 & 103.53 \\ \cline{1-16}\multirow{3}{*}{A} & 2 & 86439.83 & 3.58 & 0.06 & 3.59 & 86439.83 & 3.58 & 264.51 & \bf 0.00 & 11439.37 & 8.96 & 416.03 & 6.13 & 6083.81 & 7.90 \\ & 5 & 86443.12 & \bf 0.00 & 0.06 & \bf 0.00 & 86443.12 & \bf 0.00 & 256.67 & 23.94 & 40907.89 & 38.23 & 414.57 & 31.07 & 15986.23 & 29.51 \\ & 10 & 86438.72 & \bf 0.00 & 0.06 & \bf 0.00 & 86438.72 & \bf 0.00 & 4315.73 & 39.30 & 86400.18 & 65.88 & 29377.52 & 69.98 & 86400.14 & 66.13 \\ \cline{1-16}\multicolumn{2}{c}{\bf Total Average:} & 86432.60 & 3.49 & 0.06 & 6.29 & 84288.13 & \bf 2.20 & 7891.18 & 30.96 & 34095.92 & 40.79 & 12517.13 & 48.86 & 31049.62 & 43.56 \\ \hline \end{tabular}% \end{adjustbox} \caption{Heuristic results for instances of $n=654$, \cite{Beasley} \label{tab:Heur654}}} \end{table}% \begin{figure} \begin{subfigure}{0.5\textwidth} \centering \fbox{\includegraphics[scale=0.3]{InitialHeur.jpeg}} \caption{InitialHeur} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \fbox{\includegraphics[scale=0.3]{matheuristico.jpeg}} \caption{Matheur} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \fbox{\includegraphics[scale=0.3]{20kmean.jpeg}} \caption{KMEAN-20} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \fbox{\includegraphics[scale=0.3]{30kmean.jpeg}} \caption{KMEAN-30} \end{subfigure}% \caption{Solutions for $n=654$ \citep[][]{Beasley}, W, $p=5$, and $\ell_1$-norm} \label{fig:626Wp5} \end{figure} \section{Conclusions}\label{sec:conclusions} In this work, the Continuous Multifacility Monotone Ordered Median Problem is analyzed. In this problem, facilities can be located anywhere in a continuous space; to solve it, we have proposed two exact methods, namely a compact formulation and a branch-and-price procedure, both using binary variables. Throughout the paper, we give full details of the branch-and-price algorithm and all its crucial steps: master problem, restricted relaxed master problem, pricing problem, initial pool of columns, feasibility, convergence, and branching. Moreover, theoretical and empirical results have proven the usefulness of the obtained lower bound. Using that bound, we have tested the newly proposed matheuristics. The decomposition-based heuristics have shown a very good performance in the computational experiments. For large-sized instances, it is worth highlighting that the exact procedure has improved the initial heuristic from 6.29\% to 3.49\% on average, which in real applications could make a difference.
Among the extensive computational experiments and configurations of the problem, we highlight the usefulness of the branch-and-price approach for medium- to large-sized instances, but also the utility of the compact formulation and the aggregation-based heuristics for small values of $p$ or for some particular ordered weighted median functions. Further research on the topic includes the design of similar branch-and-price approaches for other continuous facility location and clustering problems, specifically the application of set-partitioning column generation methods to hub location and covering problems with generalized upgrading \citep[see, e.g.,][]{BM19}, where the index set for the $y$-variables must be adequately defined. \section*{Acknowledgements} The authors of this research acknowledge financial support from the Spanish Ministerio de Ciencia y Tecnolog\'ia, Agencia Estatal de Investigaci\'on and Fondos Europeos de Desarrollo Regional (FEDER) via project PID2020-114594GB-C21. The first, third and fourth authors also acknowledge partial support from projects FEDER-US-1256951, Junta de Andaluc\'ia P18-FR-1422, CEI-3-FQM331, FQM-331, and NetmeetData: Ayudas Fundaci\'on BBVA a equipos de investigaci\'on cient\'ifica 2019. The first and second authors were partially supported by research group SEJ-584 (Junta de Andaluc\'ia). The second author was supported by Spanish Ministry of Education and Science grant number PEJ2018-002962-A and the Doctoral Program in Mathematics at the Universidad de Granada. The third author also acknowledges the grant Contrataci\'on de Personal Investigador Doctor (Convocatoria 2019) 43 Contratos Capital Humano L\'inea 2 Paidi 2020, supported by the European Social Fund and Junta de Andaluc\'ia. \bibliographystyle{apa}
\section{Introduction} Photometric monitoring of Wolf-Rayet (W-R) binaries revealed that many of them display a shallow eclipse when the W-R star passes in front of its O-type companion (e.g.\ Lamontagne et al.\ \cite{lamontagne}). These so-called atmospheric eclipses arise when part of the light of the O-type companion is absorbed by the wind of the W-R star. In a few cases, the light curve displays an eclipse at both conjunctions and the analysis of this phenomenon can provide important information about the physical parameters of W-R stars and their winds. In this context, the most famous example is V444\,Cyg (WN5 + O6\,V), which has been extensively investigated by the Moscow group (e.g.\ Antokhin \& Cherepashchuk \cite{Igor} and references therein). Cherepashchuk and coworkers (e.g.\ Cherepashchuk \cite{Cher75}, Antokhin \& Cherepashchuk \cite{Igor} and references therein) developed a sophisticated method to handle the ill-posed problem of light curve inversion for V444\,Cyg. Based on the minimum {\it a priori} assumptions about the transparency function, this method not only yields the radii of both components, but also provides information about the structure of the WN5's stellar wind. However, because of a number of fundamental hypotheses that are not necessarily valid for all eclipsing W-R binaries, this method cannot be readily applied to all eclipsing W-R + O systems. For eccentric systems in particular (e.g.\ WR22, Gosset et al.\ \cite{wr22}) some assumptions (such as spherical symmetry of the problem) break down and a different technique must be used. We initiated our study of the observed light changes of eclipsing Wolf-Rayet binary systems when two of us (J.B. and C.P.) tried to confirm the 25.56 day period found by Hoffmann et al.\ (\cite{Hoffmann}) for the SMC star HD\,5980, the first extragalactic Wolf-Rayet binary then known to display eclipses. This exercise led to the discovery of the correct orbital period of HD\,5980, $P = 19.266 \pm 0.003$ days (Breysacher \& Perrier \cite{BP80}), the light curve revealing, in addition, a rather eccentric orbit of $e = 0.47$ assuming $i = 80^{\circ}$. However, because of the uncertainties in the depth of both minima, caused by an insufficient number of observations, no detailed quantitative analysis of this preliminary light curve could be attempted. Its relatively long period and large eccentricity ensure that HD\,5980 is an interesting object in which to study the structure of a W-R envelope, and a photometric monitoring of this system was initiated to define the shape of its light curve in a more accurate way. More than 700 observations were collected. After realizing that none of the existing `classical tools' was suited to our purpose -- the decoding of the light changes of a {\it partially-eclipsing system} characterized by an {\it eccentric orbit} and containing one component with an {\it extended atmosphere} -- we started to develop another approach to the solution of light curves. The technique of light curve analysis applied to V444\,Cyg by Smith \& Theokas (\cite{ST}), which is based on Kopal's fundamental work (cf.\ Kopal \cite{Kop1}, \cite{Kop2}), appeared as an attractive approach to the solution of our problem. This method is based on the interpretation of the observed light changes in the {\it frequency-domain}, i.e., not the light curve as a function of time, but its {\it Fourier-like integral transform}. We now describe in detail the method we developed for the study of Wolf-Rayet eclipsing binaries. 
A preliminary application of this technique to the light curve of HD\,5980 prior to its 1994 LBV-like eruption (see e.g., Bateson \& Jones \cite{BJ}, Barb\'a et al.\ \cite{Barba}, Heydari-Malayeri et al.\ \cite{HM}) was presented by Breysacher \& Perrier (\cite{BP91}, hereafter BP91). We reanalyse the pre-outburst observational data using an improved version of the software tool. Revised values for the physical parameters of HD\,5980 are derived. A synthetic light curve based on the elements thus obtained allows a quality assessment of the new results. A more detailed study of HD\,5980, including the analysis of the light curve obtained after the eruption (Sterken \& Breysacher \cite{Ster97}), will be presented in a forthcoming paper. \section{Analysis of the light changes in the frequency-domain} In this section, we present the fundamental equations of our method, we then introduce the mathematical functions to model the transparency and limb-darkening functions and consider the specific problem of eccentric orbits. \subsection{The basic equations} We refer to the fundamental work of the Manchester group (cf.\ Kopal \cite{Kop1}, \cite{Kop2}; Smith \cite{Smith}), and first consider an eclipsing system consisting of two spherical stars revolving around the common centre of gravity in circular orbits, and appearing in projection on the sky as uniformly bright discs. The system is seen at an inclination angle $i$. When star 1 of fractional luminosity $L_{1}$ and radius $r_{1}$ is partly eclipsed by star 2 of fractional luminosity $L_{2}$ and radius $r_{2}$ (Fig.\,\ref{fig-1}), the apparent brightness $l$ of the system (maximum light between minima taken as unity) is given by \begin{equation} l (r_{1},r_{2},\delta,J) = 1 - \int\int_{A} J(r)\,d\sigma \label{eqn1}, \end{equation} where $\delta$ is the apparent separation of the centres of the two discs, $J$ represents the distribution of brightness over the apparent disc of the star undergoing eclipse, and $d\sigma$ stands for the surface element. The distances $r_1$, $r_2$, and $\delta$ are expressed in units of the orbital separation. The integral in Eq.\,(\ref{eqn1}) provides the apparent `loss of light' displayed by the system when an area $A(r_{1},r_{2},\delta)$ of star 1 is eclipsed (see Fig.\,\ref{fig-1}). The assumption that star 1 is uniformly bright yields \begin{equation} J(r) = \frac{L_{1}}{\pi r_{1}^{2}} \label{eqn2}. \end{equation} Combining Eqs.\,(\ref{eqn1}) and (\ref{eqn2}), we obtain \begin{equation} 1 - l (r_{1},r_{2},\delta,J) = \frac{L_{1}}{\pi r_{1}^{2}} \int\int_{A} d\sigma = \alpha L_{1}, \end{equation} where $\alpha$ is the ratio of the mutual area of eclipse to the area of the disc of the eclipsed star, and is a function of $r_{1}$, $r_{2}$, and $\delta$ (Kopal \cite{Kop1}). A generalisation of these concepts to the case of spherical stars with arbitrary limb-darkening laws $J(r)$ was presented by Smith (\cite{Smith}). \begin{figure}[htb] \begin{center} \resizebox{8cm}{6.4cm}{\includegraphics{11707fg1.eps}} \end{center} \caption{Geometry of the eclipse of star 1 by star 2. The integral of Eq.\,(\ref{eqn1}) is evaluated over the hatched area $A(r_1,r_2,\delta)$. 
The points inside this area can be specified by the coordinates $(r, s)$ corresponding to the two intersections of the circle with radius $r$ and centre O$_1$ and the circle with radius $s$ centred on O$_2$.\label{fig-1}} \end{figure} For an orbital period $P$ and an epoch of conjunction $t_0$, we define the phase angle $\theta$ at a time $t$ to be \begin{equation} \theta = \frac{2\,\pi}{P}\,(t-t_0) \label{eqn4}. \end{equation} As proposed by Kopal (\cite{Kop1}, \cite{Kop2}), we focus our attention on the area subtended by the light curve in the $(l, \sin^{2m}\theta)$ plane, where $m$ is a positive integer number $(m = 1, 2, 3,...)$, as shown in Fig.\,\ref{fig-2}. The areas $A_{2m}$ between the lines $l = 1$, $\sin^{2m}\theta = 0$, and the true light curve are then given by the integrals \begin{equation} A_{2m} = \int_{0}^{\theta_{\rm fc}} (1 - l)\,d(\sin^{2m}\theta) \label{eqn5}, \end{equation} which are hereafter referred to as the {\it moments of the eclipse}, of index $m$, where $\theta_{\rm fc}$ denotes the phase angle of the first contact ($\delta (\theta_{\rm fc}) = r_1 + r_2$) of the eclipse. \begin{figure}[htb] \begin{center} \resizebox{8cm}{6.5cm}{\includegraphics{11707fg2.eps}} \end{center} \caption{Light curve of an eclipse in the $(l, \sin^{2m}\theta)$ plane. $\theta_{\rm fc}$ corresponds to the phase angle of first contact, whilst $\theta_{\rm t}$ represents the phase angle corresponding to the beginning of the total eclipse. The shaded area illustrates the moment $A_{2m}$ and $\lambda = 1 - l(\theta = 0)$.\label{fig-2}} \end{figure} Combining Eqs.\ (\ref{eqn1}) and (\ref{eqn5}), Kopal (\cite{Kop1},\cite{Kop2}) and Smith (\cite{Smith}) demonstrated that, based on certain assumptions, it is possible to \begin{itemize} \item[$\bullet$] derive analytical expressions of the moments of the eclipse in terms of the physical parameters ($i, r_1, r_2, L_1, L_2$) of a binary system with a circular orbit consisting of uniformly bright spherical stars, \item[$\bullet$] invert these relationships to determine the parameters of the system in terms of the moments $A_{2m}$ that can be empirically obtained from the data, \item[$\bullet$] extend this treatment to the case of partial eclipses of stars with an arbitrary (yet analytical) limb-darkening law. \end{itemize} Going one step further, Smith \& Theokas (\cite{ST}) generalised the above concepts to derive a convenient mathematical solution to the problem of an atmospheric eclipse, i.e., an eclipse of a limb-darkened star by a star surrounded by an extended atmosphere. In such an eclipse, at each position inside the area $A$ specified by the coordinates $r$ and $s$ (see Fig.\,\ref{fig-1}), a fraction of the light emitted by star 1 is absorbed by the atmosphere of star 2. To account for these transparency effects, a transparency function $F(s)$ is introduced into Eq.\,(\ref{eqn1}), so that the total amount of light seen by the observer becomes \begin{equation} l (r_{1},r_{2},\delta,J,F) = 1 - \int\int_{A} J(r) F(s)\,d\sigma \label{eqn6}. \end{equation} Considering that in the case of an atmospheric eclipse, it might be interesting to give more weight to the data close to mid-minimum, Smith \& Theokas (\cite{ST}) also introduced an alternative set of moments $B_{2m}$ defined by \begin{equation} B_{2m} = - \int_{0}^{\theta_{\rm fc}} (1 - l)\,d(\cos^{2m}\theta) \label{eqn7}. 
\end{equation} The use of the kernel $d(\cos^{2m}\theta)$ in Eq.\,(\ref{eqn7}) places more emphasis on the data points close to mid-eclipse, which leads to smaller errors than in the case of the $A_{2m}$ moments defined by means of the $d(\sin^{2m}\theta)$ kernel (Theokas \& Smith 1983, private communication). We define $\varepsilon$ to be the mean error in an individual data point; the relative error in the light curve $\varepsilon/(1 - l(\theta))$ increases for data points near $\theta_{\rm fc}$. While these points are given more weight by the kernel of the $A_{2m}$ moments, the converse situation holds for the $B_{2m}$ moments, where the kernel reaches its peak for a given $m$ in a zone where $1 - l(\theta)$ is closer to its maximum value. By definition, the $A_{2m}$ and $B_{2m}$ moments are related to each other by means of \begin{equation} B_{2m} = \sum^{m}_{p=1} \frac{m!}{(m-p)!\,p!}\,(-1)^{p+1}\,A_{2p} \label{eqn8}. \end{equation} We now concentrate on the $B_{2m}$ moments because they are equally well suited to the analysis of the primary and secondary minima with the transparency and limb-darkening functions adopted in the present study (cf.\ Section 2.2). Since there are a number of typos in the paper by Smith \& Theokas (\cite{ST}), we provide below the mathematical details of the method. The infinitesimal element of area $d\sigma$ of Eq.\,(\ref{eqn6}) is expressed as \begin{equation} d\sigma = \frac{1}{2} \frac{\partial^{2}}{\partial r \partial s} \left( \pi r^{2} \alpha(r,s,\delta) \right) dr ds \label{eqn9}. \end{equation} The following general expression for the $B_{2m}$ moments was derived by Smith \& Theokas (\cite{ST}) \begin{eqnarray} B_{2m} & = &\lambda + \int_{0}^{r_{1}} \int_{0}^{r_{2}} J(r)\,F(s)\,ds\,dr \nonumber\\ & & \times \frac{\partial^{2}}{\partial r \partial s} \left( \int_{0}^{\theta_{\rm fc}} \cos^{2m}\theta\,\frac{\partial (\pi\,r^{2}\,\alpha (r,s,\delta))}{\partial \theta} d\theta \right), \end{eqnarray} where $1 - l(\theta = 0)$ is defined as $\lambda$ (see Fig.\,\ref{fig-2}).
The analytical expressions obtained for the $B_{2m}$'s for $m = 1, 2, 3, 4,$ and $5$ are thus \begin{equation} B_{2} = \lambda - \csc^{2}{i}\,(P - I_{1}R_{1}r_{2}^{2} - \psi_{1}), \end{equation} \begin{equation} B_{4} = \lambda - \csc^{4}{i}\,(P - 2\,I_{1}R_{1}r_{2}^{2} + I_{2}R_{1}r_{1}^{2}r_{2}^{2} + I_{1}R_{2}r_{2}^{4} - \psi_{2}), \end{equation} \begin{eqnarray} B_{6} & = & \lambda - \csc^{6}{i}\,(P - 3\,I_{1}R_{1}r_{2}^{2} + 3\,I_{2}R_{1}r_{1}^{2}r_{2}^{2} + 3\,I_{1}R_{2}r_{2}^{4} \nonumber \\ & & - I_{3}R_{1}r_{1}^{4}r_{2}^{2} - I_{1}R_{3}r_{2}^{6} - 3\,I_{2}R_{2}r_{1}^{2}r_{2}^{4} - \psi_{3}), \end{eqnarray} \begin{eqnarray} B_{8} & = & \lambda - \csc^{8}{i}\,(P - 4\,I_{1}R_{1}r_{2}^{2} + 6\,I_{2}R_{1}r_{1}^{2}r_{2}^{2} + 6\,I_{1}R_{2}r_{2}^{4} \nonumber \\ & & - 4\,I_{3}R_{1}r_{1}^{4}r_{2}^{2} - 4\,I_{1}R_{3}r_{2}^{6} - 12\,I_{2}R_{2}r_{1}^{2}r_{2}^{4} + I_{4}R_{1}r_{1}^{6}r_{2}^{2} \nonumber \\ & & + I_{1}R_{4}r_{2}^{8} + 6\,I_{3}R_{2}r_{1}^{4}r_{2}^{4} + 6\,I_{2}R_{3}r_{1}^{2}r_{2}^{6} - \psi_{4}), \end{eqnarray} and \begin{eqnarray} B_{10} & = & \lambda - \csc^{10}{i}\,(P - 5\,I_{1}R_{1}r_{2}^{2} + 10\,I_{2}R_{1}r_{1}^{2}r_{2}^{2} \nonumber \\ & & + 10\,I_{1}R_{2}r_{2}^{4} - 10\,I_{3}R_{1}r_{1}^{4}r_{2}^{2} - 10\,I_{1}R_{3}r_{2}^{6} \nonumber \\ & & - 30\,I_{2}R_{2}r_{1}^{2}r_{2}^{4} + 5\,I_{4}R_{1}r_{1}^{6}r_{2}^{2} + 30\,I_{3}R_{2}r_{1}^{4}r_{2}^{4} \nonumber \\ & & + 30\,I_{2}R_{3}r_{1}^{2}r_{2}^{6} + 5\,I_{1}R_{4}r_{2}^{8} - I_{5}R_{1}r_{1}^{8}r_{2}^{2} \nonumber \\ & & - 10\,I_{4}R_{2}r_{1}^{6}r_{2}^{4} - 20\,I_{3}R_{3}r_{1}^{4}r_{2}^{6} - 10\,I_{2}R_{4}r_{1}^{2}r_{2}^{8} \nonumber \\ & & - I_{1}R_{5}r_{2}^{10} - \psi_{5}). \end{eqnarray} The coefficients $P$, $I_{m}$, $R_{m}$, and $\psi_{m}$ are defined (cf.\ Smith \& Theokas 1980) by the equations \begin{equation} P(r_{1},r_{2},J,F) = \int_{0}^{min(r_{1},r_{2})} J(r) F(r) 2 \pi r dr, \end{equation} \begin{equation} I_{m}(r_{1},J) = \int_{0}^{r_{1}} \frac{J(r)}{r_{1}^{2m-2}} \frac{\partial}{\partial r} (\pi r^{2m}) dr, \end{equation} \begin{equation} R_{m}(r_{2},F) = \int_{0}^{r_{2}} \frac{F(s)}{r_{2}^{2m}} \frac{\partial}{\partial s} (s^{2m}) ds, \end{equation} and \begin{eqnarray} \psi_m & = & \psi_{m} (r_{1},r_{2},i,J,F) = \frac{L_1}{\pi\,r_1^2}\,\left(\frac{1 + 2\,m\,(r_1 + r_2)^2}{6\,(r_1 + r_2)^2} \right. \nonumber \\ & & \times (\cos^2{i} - (r_2 - r_1)^2)^{3/2} - \sqrt{\cos^2{i} - (r_2 - r_1)^2} \nonumber \\ & & - \frac{m\,|r_2 - r_1|}{8\,(r_1 + r_2)^2}\,(\cos^4{i} - (r_2 - r_1)^4) \nonumber \\ & & + |r_2 - r_1|\,\left.\arctan{\left(\frac{\sqrt{\cos^2{i} - (r_2 - r_1)^2}}{|r_2 - r_1|}\right)} \right). \end{eqnarray} \subsection{The transparency and limb-darkening functions} In our method, the transparency of the W-R wind is described by an analytical function that depends on a limited number of parameters. Since the functional form of the transparency is adopted {\it a priori}, our choice will obviously have a direct influence on the parameters derived for the system. Therefore, it is important to clearly specify the assumptions made in our approach. To avoid confusion with the standard symbols employed by Smith and Theokas (\cite{ST}), from now on, the various radii in our model will be denoted $\rho_{i}$ (i=1,2,3), where $\rho_1$ stands for the radius of the O-star that undergoes the eclipse. The use of the subscripts $e$ and $a$ will indicate whether the components are seen in emission or absorption. 
The first simplifying hypothesis is that all composite parts of the system are supposed to be spherically symmetrical. This means that the method is not applicable to close binary systems in which the components depart strongly from a spherical form as a result of tidal distortion and where {\it ellipticity} and {\it reflection} effects are both present. \begin{figure}[h] \begin{center} \resizebox{8cm}{7.6cm}{\includegraphics{11707fg3.eps}} \end{center} \caption{Schematic view of the transparency law across the disc of the W-R component as given by Eq.\,(\ref{eqn20}). The solid line shows the amount of light absorbed along the line of sight of impact parameter $s$. The individual contributions due to the opaque core and the semi-transparent extended atmosphere are illustrated.\label{fig-3}} \end{figure} Because a principal objective of the proposed technique of light curve analysis is the determination of the structure of a W-R envelope, a composite model consisting of an opaque core and a surrounding extended atmosphere was adopted for the W-R component. As a consequence, while for the transparency function $F(s)$ of the eclipsing W-R star, of radius $r_{0}$, Smith \& Theokas (\cite{ST}) simply adopted \begin{equation} F(s) = F_{y_a} (r_{0},\upsilon) = y_a \left[1 - \upsilon \left(\frac{s}{r_{0}}\right)^{2} \right] \hspace*{5mm} {\rm for} \hspace{2mm} s < r_{0}, \label{eqn19} \end{equation} where $\upsilon$ is the coefficient of transparency, we chose as a first step a transparency law of the form \begin{equation} F(s) = F_{1-y_a} (\rho_{3a},0) + F_{y_a} (\rho_{2a},\upsilon) \label{eqn20}, \end{equation} where the radius of the opaque core of the W-R star is $\rho_{3a}$ and that of the extended eclipsing envelope is $\rho_{2a}$ ($\geq \rho_{3a}$, see Fig.\ \ref{fig-3}). This transparency law was used in the preliminary analysis of the light curve of HD\,5980 by BP91 and is motivated by the fact that it corresponds to a physically more realistic model of the W-R star than that defined in Eq.\,(\ref{eqn19}), although the advantage of relative mathematical simplicity is still preserved. For the brightness distribution $J(r)$ over the W-R disc (important for the eclipse of the W-R component by the O-star), a law very similar to that of the transparency function is adopted \begin{equation} J(r) = J(0) \left[ J_{1-y_e}(\rho_{3e},0) + J_{y_e}(\rho_{2e},u_2) \right] \label{eqn21}, \end{equation} where $J(0)$ is the central surface brightness, $u_2$ is the coefficient of limb-darkening, and $J_{y_e}$ is defined as \begin{equation} J_{y_e}(r_{0},u) = y_e \left[ 1 - u \left(\frac{r}{r_{0}}\right)^2 \right] \label{eqn22}. \end{equation} When the W-R star is eclipsed, the radius of the core, assumed to be of uniform brightness, becomes $\rho_{3e}$ and that of the limb-darkened envelope $\rho_{2e}$. We note that the core and envelope radii of the W-R component seen in emission or absorption may differ.
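For illustration, the composite law of Eqs.\,(\ref{eqn19}) and (\ref{eqn20}) is straightforward to evaluate numerically. The following minimal sketch (Python/NumPy; the function names are our own, and each component is taken to vanish beyond its radius, as implied by Fig.\,\ref{fig-3}) reproduces the transparency profile.
\begin{verbatim}
import numpy as np

def F_y(s, y, r0, upsilon):
    """F_y(r0, upsilon) of Eq. (19): y * (1 - upsilon*(s/r0)^2)
    for s < r0, and zero outside that radius."""
    s = np.asarray(s, dtype=float)
    return np.where(s < r0, y * (1.0 - upsilon * (s / r0) ** 2), 0.0)

def transparency(s, y_a, rho3a, rho2a, upsilon):
    """Composite transparency law of Eq. (20): an opaque core of
    radius rho3a plus a semi-transparent envelope of radius
    rho2a >= rho3a."""
    return F_y(s, 1.0 - y_a, rho3a, 0.0) + F_y(s, y_a, rho2a, upsilon)
\end{verbatim}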
The radius $\rho_{2a}$ corresponds to the farthest position in the stellar wind where the residual optical depth along the line of sight produces a variation in the light curve that can be distinguished against the intrinsic photometric variability of the W-R star and the photometric errors. Similarly, $\rho_{2e}$ is the outer radius of the W-R envelope that emits a measurable fraction of the light in the considered waveband. A clear difference between our approach and that of Antokhin \& Cherepashchuk (\cite{Igor}) concerns the functional behaviour of $F(s)$: whereas in our model, $F(s)$ is a convex function over the entire range $s \in [0, \rho_{2a}]$, Antokhin \& Cherepashchuk (\cite{Igor}) use a convexo-concave function, where the concave part corresponds to the stellar wind. As a consequence, $F(s)$ given by Eq.\,(\ref{eqn20}) decreases at a slower rate over the wind than the transparency law inferred by Antokhin \& Cherepashchuk. In contrast to Smith \& Theokas (\cite{ST}), who neglect this effect, we take into account the limb-darkening of the OB-type component. Assuming that the formula employed by these authors to represent the brightness distribution across the W-R disc also applies to normal stars, we adopt the following limb-darkening law for the OB star \begin{equation} J(r) = \frac{L_{\rm O}}{\pi \rho_{1}^{2} (1 - u_1 + u_1^{2}/3)} \left[ 1 - u_1 \left(\frac{r}{\rho_{1}}\right)^{2} \right]^{2}, \end{equation} where $L_{\rm O}$ is the luminosity of the OB star, of radius $\rho_{1}$, and $u_1$ is the coefficient of limb-darkening at the effective wavelength of the photometric filter considered. Using these laws of transparency and limb-darkening, we then derived the expressions for $P$, $I_{m}$, $R_{m}$ (see Appendix\,\ref{app}), and $\psi_{m}$, and hence the final equations for the moments $B_{2m}$, corresponding to the primary and secondary minima. \subsection{The orbital eccentricity} The treatment of elliptical orbits in the frequency-domain was also addressed by Kopal (\cite{Kop2}). The problem still concerns the determination of the elements of the eclipse from the moments -- $B_{2m}$ in the present case -- derived from the light curve, but accounting for the eccentricity $e$ and the longitude of periastron $\omega$. In the definition of the $B_{2m}$ moments, the phase-angle $\theta$ is no longer identical to the mean anomaly $M$ but rather has to be replaced by a linear function of the true anomaly $v$ \begin{equation} \theta = v + \omega - \frac{\pi}{2}. \end{equation} The $d(\cos^{2m}{\theta})$ kernel in Eq.\,(\ref{eqn7}) thus becomes \begin{equation} d(\cos^{2m}{\theta}) = d \left[\sin^{2m}{(v+\omega)}\right]. \end{equation} As a consequence, the empirical values of $B_{2m}$ cannot be derived from the observed data until a proper conversion of the phase angles into true anomalies has been completed. This can be achieved either by a numerical inversion of Kepler's equation or by the well-known asymptotic expansion of elliptical motion (e.g.\ Danjon \cite{danjon}, Kopal \& Al-Naimiy \cite{KAN}) \begin{eqnarray} v & = & M + (2\,e - \frac{1}{4}\,e^{3})\sin{M} + (\frac{5}{4}\,e^{2} - \frac{11}{24}\,e^{4})\sin{2\,M} \nonumber \\ & & + \frac{13}{12}\,e^{3}\sin{3\,M} + \frac{103}{96}\,e^{4}\sin{4\,M} + ... \end{eqnarray} Regardless of the technique used to compute the true anomaly, this conversion evidently requires an {\it a priori} knowledge of $e$ as well as $\omega$.
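For completeness, a minimal sketch of the first option, the numerical inversion of Kepler's equation $M = E - e\sin{E}$ followed by the exact conversion from eccentric to true anomaly, is given below (in Python; the tolerance, iteration limit, and function name are arbitrary choices of ours):
\begin{verbatim}
import numpy as np

def true_anomaly(M, e, tol=1.0e-12, max_iter=50):
    # Newton-Raphson inversion of Kepler's equation M = E - e sin E
    E = np.asarray(M, dtype=float).copy()
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    # exact conversion from eccentric anomaly E to true anomaly v
    return 2.0 * np.arctan2(np.sqrt(1.0 + e) * np.sin(E / 2.0),
                            np.sqrt(1.0 - e) * np.cos(E / 2.0))
\end{verbatim}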
For a given value of the inclination $i$, these parameters can be derived by inversion of the equations (see e.g.\ Kopal \& Al-Naimiy \cite{KAN}) \begin{eqnarray} \Delta\Phi & = & \frac{1}{2} + \frac{e\,\cos{\omega}}{\pi}\{1 + \csc^{2}{i} \nonumber \\ & & - \frac{e^{2}}{2}\,[\frac{8}{3}\,\cos^{2}{\omega} - 2 + O(\cot^2{i})]\} \label{eqn-27},\end{eqnarray} and \begin{equation} e\sin\omega = \frac{d_{2}-d_{1}}{4 \sin{\left( \frac{d_{1}+d_{2}}{4}\right)}}\,\left[1 - \frac{\cot^{2}{i}}{\sin^{2}{\left(\frac{d_{1}+d_{2}}{4}\right)}}\right]^{-1} \label{eqn-28}, \end{equation} where $\Delta\Phi$, $d_{1}$, and $d_{2}$ are, respectively, the phase displacement of the minima and the durations of the primary and secondary stellar core eclipses. These latter quantities are determined directly from the observed light curve. Since the orbital inclination of an eclipsing binary system is likely to be rather large, the $O(\cot^2{i})$ term in the coefficient of the $e^3$ term of Eq.\,(\ref{eqn-27}) can be neglected. The empirical `elliptical' moments of the light curve then provide the elements of the binary exactly as in the `circular' case. One must ensure that the resulting values of the radii have been reduced to a constant unit of length. In our code, we therefore express all distances in units of the semi-major axis $a$ of the relative orbit. Our composite model adopted for the W-R star has different radii ($\rho_{2,3a}$ and $\rho_{2,3e}$) depending on whether the W-R component is seen as an eclipsing or an eclipsed disc. Because of this, although each individual half-eclipse provides an independent solution for the elements, the complete determination of the elements requires a combination of the solutions obtained for both minima and, because of the non-zero eccentricity, for both the descending and ascending branches of each. \section{Decoding of the light curve} \subsection{Empirical determination of the moments} We consider the light curve of a W-R binary system of eccentricity $e$ and period $P$, derived for a given photometric bandpass. The data are assumed to consist of a list of entries that provide for each observation the orbital phase $\Phi_{i}$ and the measured intensity $l_{i}$. To normalize the brightness scale, a mean intensity value is derived well outside the eclipses, during a phase-interval where the system is assumed to display (constant) maximum light. The $l_i$ value of each data point is then divided by this mean to normalize the light curve to unity. It has to be emphasized, however, that this does not necessarily imply that $L_{1} + L_{2} = 1$ for the W-R binary. The luminosity of a third photometrically unresolved component along the line of sight may indeed contribute to the observed brightness as well, thereby leading to a brightness distribution such that $L_{1} + L_{2} < 1$ in the final solution for the eclipse. In the case of an eccentric orbit, the eclipse-free mean $l_{i}$ value is preferably taken around apastron to avoid as much as possible any luminosity increase that could occur around periastron as a result of enhanced interaction effects between the components.\\ The determination of the moments $B_{2m}$ requires a smoothed light curve, which can be obtained, for instance, from a spline fit to the observed points with special attention to the minima.
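As an illustration only, such a smoothed curve could be obtained with a standard smoothing spline, sketched below in Python; the smoothing factor \texttt{s} is a free parameter that must be tuned by hand with special attention to the minima, and duplicate phases should be averaged beforehand:
\begin{verbatim}
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_light_curve(phi, l, s=None):
    # Smoothing-spline approximation of the light curve l(Phi);
    # the abscissae must be strictly increasing, hence the sort.
    order = np.argsort(phi)
    return UnivariateSpline(phi[order], l[order], s=s)
\end{verbatim}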
However, this task can become difficult if the descending or ascending branch of either minimum is ill-defined because of an uneven sampling of the observations or intrinsic photometric variability in the W-R star (see e.g.\ the case of WR\,22, Gosset et al.\ \cite{wr22}). A clustering of the points, in particular, is a serious handicap for the method. A preliminary processing step, in which the observational data are filtered to reduce the scatter, then allows us to obtain a smooth light curve $l(\Phi)$. The next step consists of determining the quantities $d_{1}$, $d_{2}$, and $\Delta\Phi$ (see above) from the smoothed light curve. For an assumed value of the orbital inclination $i$, the parameters $e$ and $\omega$ are obtained by means of an inversion of Eqs.\,(\ref{eqn-27}) and (\ref{eqn-28}). With these values of $e$ and $\omega$, the orbital phases $\Phi_{i}$ are converted into true anomalies. The moments $B_{2m}$, which take into account the eccentricity effect, are obtained in practice by summation, using the following expression \begin{eqnarray} B_{2m} & = & \sum_{i=1}^{N-1} \left(\cos^{2m}{(\Theta_{i})} - \cos^{2m}{(\Theta_{i+1})}\right) \nonumber \\ & & \times \left(1 - \frac{l(\Theta_{i}) + l(\Theta_{i+1})}{2}\right), \end{eqnarray} where $\Theta_i$ are the predefined angles at which the smoothed light curve is sampled, $N$ is defined by the constant step $\Delta\Theta = \Theta_{i+1} - \Theta_{i}$ adopted, and the value of $\Theta_{1}$ corresponds to the first contact of the eclipse. The $l(\Theta_{i})$ values refer to the normalized smoothed light curve, and by definition $l = 1$ for $|\Theta| > |\Theta_{1}|$. Since the individual data points are affected by observational errors, the empirical moments derived from them are themselves affected by errors. The uncertainty associated with the moments $B_{2m}$ can be evaluated using the following equation (see Al-Naimiy \cite{AN1}; Smith \& Theokas \cite{ST}) \begin{eqnarray} \Delta B_{2m} & = &\frac{1}{\sqrt{n}}\,\left\{\frac{1}{n}\,\sum_{j=1}^{n} \left[l_{j} - l(\theta_{j})\right]^2\right\}^{1/2} \nonumber \\ & & \times \left(\cos^{2m}\theta_{1} - \cos^{2m}\theta_{n}\right), \end{eqnarray} where $n$ is the number of observed points over the considered eclipse, and $l_{j} - l(\theta_{j})$ is the difference between the observed point of index $j$ and the smoothed light curve at $\theta_j$. The angles $\theta_{1}$ and $\theta_{n}$ refer, respectively, to the first and last observed data point over the relevant part of the light curve. \subsection{Solution for the elements} For each half-eclipse, there are five non-linear algebraic equations to be solved simultaneously for the elements $\rho_{1}$, $\rho_{2a}$ or $\rho_{2e}$, $\rho_{3a}$ or $\rho_{3e}$, $i$, $u_{1}$ or $u_{2}$, $y_{a}$ or $y_{e}$, $\upsilon$, $L_{1}$ or $L_{2}$, the meanings of which are summarized below for convenience: $L_{1}$ = luminosity of the OB-type star, $L_{2}$ = luminosity of the W-R star, $i$ = inclination angle of the orbit, $\rho_{1}$ = radius of the OB-type star, $\rho_{2a,e}$ = radius of the W-R envelope seen in absorption or emission, $\rho_{3a,e}$ = radius of the W-R opaque core seen in absorption or emission, $u_{1}$ = limb-darkening coefficient of the OB-type star, $u_{2}$ = limb-darkening coefficient of the W-R envelope, $\upsilon$ = transparency coefficient of the W-R envelope (cf. Fig.\ \ref{fig-3}), and $y_{a,e}$ = contribution of the W-R envelope in absorption or emission (cf. Fig.
\ref{fig-3}). This large number of variables can fortunately always be reduced to a smaller number for both the primary and secondary minima, as explained hereafter. The preliminary determination of the orbital inclination $i$, for instance, already eliminates one variable. At the primary minimum, when the OB star is in front, according to our composite model $\upsilon \equiv 0$ by definition. Since we must also have $0 \leq u_{2} \leq 1$, solutions can be sought for a set of discrete values of the parameter $u_{2}$ in this interval, so that the remaining variables are $\rho_{1}$, $\rho_{2e}$, $\rho_{3e}$, $y_{e}$, and $L_{2}$. At the secondary minimum, when the W-R star eclipses the OB component, $u_{1}$ is the limb-darkening coefficient of the OB star. The value of $u_1$ can be adopted following e.g.\ the tabulated values supplied by Klinglesmith \& Sobieski (\cite{KS}). According to our model, we now have $0 \leq \upsilon \leq 1$, so that again, after selection of a sample of $\upsilon$ values, we can proceed to solve the equations for the remaining parameters $\rho_{1}$, $\rho_{2a}$, $\rho_{3a}$, $y_{a}$, and $L_{1}$ only. The solution of the system of as many as five non-linear equations $$B_{2m}(\rho_{1}, \rho_{2a}, \rho_{3a}, i, u_1, y_a, \upsilon, L_1, L_2) = B_{2m}({\rm observed})$$ or $$B_{2m}(\rho_{1}, \rho_{2e}, \rho_{3e}, i, u_2, y_e, \upsilon, L_1, L_2) = B_{2m}({\rm observed})$$ is obtained by minimizing the $\chi^2$ compiled from the residuals of these equations \begin{equation} \chi^2 = \sum_{m=1}^N \frac{|B_{2m}({\rm computed}) - B_{2m}({\rm observed})|^2}{\Delta\,B_{2m}^2}. \end{equation} This minimization is achieved by means of Powell's technique (e.g., Press et al.\ \cite{NumRec} and references therein). \section{Application to HD\,5980} \subsection{HD\,5980: a peculiar system} HD\,5980 $\equiv$ AB\,5 (Azzopardi \& Breysacher \cite{Azzo79}) is associated with NGC\,346, the largest H\,{\sc ii} region + OB star cluster in the Small Magellanic Cloud. This remarkable W-R binary, which underwent an LBV-type event in August 1994, is presently recognized as a key object for improving our understanding of massive star evolution. HD\,5980 is a rather complex system because it consists of at least three stars: two stars form the eclipsing binary with the 19.266\,day period, whilst the third component, an O-star, which is detected by means of a set of absorption lines and by its third light (see also below), could be a member of a highly eccentric 96.5\,day period binary (Schweickhardt \cite{Schweick}, Foellmi et al.\ \cite{Foellmi}). Whether or not the third star is physically bound to the eclipsing binary currently remains unclear. Before the LBV eruption, both components of the eclipsing binary already showed emission lines in their spectra and were thus classified as Wolf-Rayet stars (Niemela \cite{Niemela}). However, as shown by the analysis of the spectra taken during and after the LBV event, at least the star that underwent the eruption (hereafter called star A) was not a classical, helium-burning, Wolf-Rayet object, but rather a WNha star, i.e., a rather massive star with substantial amounts of hydrogen present in its outer layers (Foellmi et al.\ \cite{Foellmi}). These WNha stars have wind properties that are intermediate between those of extreme Of stars and classical WN stars. A summary of the light changes exhibited by HD\,5980 was presented by Breysacher (\cite{Brey97}).
The technique of light curve analysis described above is applied to HD\,5980 prior to the outburst. Given that star A, the component in front of its companion (hereafter called star B) during primary eclipse, was a WNha star, and since there are no indications of wind effects in the primary eclipse, we assume that this component behaves as an OB-star in the light curve. A total of 705 measurements obtained with the Stroemgren $v$ filter, described in BP91, are taken into consideration. The shape of the resulting light curve does not allow us to use the ill-defined ascending branch of the primary eclipse (star A in front) for the analysis. Therefore, only three half-minima will be considered. Compared to the preliminary analysis carried out by BP91, the software tool presently used has been upgraded, allowing us for instance to assess the quality of the solution by means of a synthetic light curve. \subsection{Solutions of the light curve} From the smoothed light curve, the durations of the primary and secondary stellar core eclipses as well as the separation between the core eclipses are first measured to be $d_1 = 0.062 \pm 0.005$, $d_2 = 0.095 \pm 0.005$, and $\Delta\,\Phi = 0.362$. All durations are expressed as phase intervals (i.e.\ fractions of the orbital cycle). The corresponding values of the eccentricity $e$ and the longitude of periastron $\omega$ are: $e = 0.314 \pm 0.007$ and $\omega = 132.5^{\circ} \pm 1.5^{\circ}$. These values are in fairly good agreement with those derived for these parameters by Breysacher \& Fran\c{c}ois (\cite{BF}) ($e = 0.30 \pm 0.02$, $\omega = 135^{\circ} \pm 10^{\circ}$) from a completely different approach based on the analysis of the width variation of the He\,{\sc ii} $\lambda$\,4686 line using the analytical colliding-wind model devised by L\"{u}hrs (\cite{LU}). From radial-velocity studies, Kaufer et al.\ (\cite{Kau02}) and Niemela et al.\ (\cite{Niemela2}) also found that $e = 0.297 \pm 0.036$ and $e = 0.28$, respectively. The values of the moments of the light curve of HD\,5980 are listed in Table\,\ref{moments}. The inclination angle of the orbit can easily be derived with reasonable accuracy. For each of the three half-minima, a quick analysis is carried out for a number of plausible discrete values of $i$ ($i = 82^{\circ}, 83^{\circ}, \ldots, 89^{\circ}$), and the value of $i$ finally adopted is the one for which the closest agreement is obtained between the three solutions provided for the radius of star A and the radius of the opaque core of star B. These quantities are fundamental elements of the system and the combination of the solutions of the descending and ascending branches of both minima must indeed provide, at the end of the detailed analysis, a unique value for $\rho_{1}$ and very similar -- if not identical -- values for $\rho_{3a}$ and $\rho_{3e}$. The above conditions are fulfilled for $85^{\circ} \leq i \leq 87^{\circ}$; we therefore adopt $i = 86^{\circ} \pm 1^{\circ}$. \begin{table} \caption{Values of the moments of the eclipses of HD\,5980 used throughout this paper. These values are derived from the light curve prior to the 1994 LBV event.
\label{moments}} \begin{tabular}{l c c c} \hline & Primary eclipse & \multicolumn{2}{c}{Secondary eclipse} \\ \cline{3-4} & Ingress & Ingress & Egress \\ \hline\hline B$_2$ & $.00935 \pm .00026$ & $.00832 \pm .00041$ & $.00481 \pm .00024$ \\ B$_4$ & $.01801 \pm .00048$ & $.01542 \pm .00065$ & $.00922 \pm .00043$ \\ B$_6$ & $.02604 \pm .00067$ & $.02164 \pm .00080$ & $.01326 \pm .00059$ \\ B$_8$ & $.03351 \pm .00083$ & $.02721 \pm .00088$ & $.01699 \pm .00071$ \\ B$_{10}$ & $.04046 \pm .00097$ & $.03226 \pm .00094$ & $.02044 \pm .00082$ \\ \hline \end{tabular} \end{table} The second step in the procedure consists of solving the equations for $i = 86^{\circ}$, each half-eclipse being treated in a completely independent manner. We recall that all radii are reduced to the semi-major axis of the relative orbit. We first consider the descending branch of the primary eclipse. Solutions for $\rho_{1}, \rho_{2e}, \rho_{3e}, L_{2}$, and $y_{e}$ are sought for discrete values of the parameter $u_{2}$ (0.1, 0.2, 0.3, \ldots, 1), with rather broad variation ranges allowed for the variable parameters. Each search sequence is based on a series of 1000 trials. A first set of results is obtained to provide convergence rates (i.e., an estimate of the likelihood of the derived solutions) and a $\sigma$ value for each parameter. A second iteration with reduced variation ranges (the original one $\pm \sigma$) for the parameters leads to a second set of solutions with improved convergence rates and lower $\sigma$ values. The process is repeated five times, until a stabilization of the parameter values becomes noticeable. For the last iteration, the grid of $u_{2}$ values is enlarged significantly and the respective convergence rates of the corresponding solutions are used to determine the best-fit $u_{2}$ value and to estimate the error on this parameter. The results obtained are $\rho_{1} = 0.150 \pm 0.004$, $\rho_{2e} = 0.257 \pm 0.018$, $\rho_{3e} = 0.110 \pm 0.005$, $L_{2} = 0.300 \pm 0.016$, $u_{2} = 0.58 \pm 0.07$, and $y_{e} = 0.19 \pm 0.03$. For the secondary eclipse, solutions are sought for $\rho_1$, $\rho_{2a}$, $\rho_{3a}$, $L_1$, and $y_a$. The choice of the parameter $u_1$ for the limb-darkening law (Klinglesmith \& Sobieski \cite{KS}) of star A has little impact on the other parameters. We repeated the fitting procedure for different values of $u_1$ (0.1, 0.3, 0.5, 0.7) and recovered the same solutions within the error bars. The largest sensitivity was found for $\rho_{2a}$ when $u_1 = 0.7$. In this rather unlikely case, $\rho_{2a}$ exceeds its usual value by $1.5\,\sigma$. In the following, we thus focus on the results obtained with $u_1 = 0.3$, which seems a reasonable value for star A (Klinglesmith \& Sobieski \cite{KS}). The descending and ascending branches are analysed separately and solutions are sought for discrete values of the parameter $\upsilon$ (0.1, 0.2, 0.3, \ldots, 1). The same iterative procedure as described above for the primary eclipse is applied.
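The iterate-and-shrink search described above can be summarized by the following sketch (in Python, with \texttt{scipy}'s Powell minimizer standing in for the technique of Press et al.; \texttt{moment\_model} is a hypothetical user-supplied routine returning the theoretical moments for a given set of elements, and all names and bookkeeping details are ours, not those of the actual code):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def chi2(p, B_obs, dB_obs, moment_model):
    # residuals of the moment equations, weighted by their errors
    return np.sum(np.abs(moment_model(p) - B_obs)**2 / dB_obs**2)

def iterate_solution(bounds, B_obs, dB_obs, moment_model,
                     n_trials=1000, n_iter=5, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_iter):
        sols = []
        for _ in range(n_trials):
            p0 = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
            res = minimize(chi2, p0, method='Powell',
                           args=(B_obs, dB_obs, moment_model))
            sols.append(res.x)
        sols = np.asarray(sols)
        centre, sigma = sols.mean(axis=0), sols.std(axis=0)
        # shrink the variation ranges to centre +/- sigma
        bounds = list(zip(centre - sigma, centre + sigma))
    return centre, sigma
\end{verbatim}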
The resulting solutions for the secondary descending branch are $\rho_{1} = 0.160 \pm 0.006$, $\rho_{2a} = 0.259 \pm 0.018$, $\rho_{3a} = 0.110 \pm 0.005$, $L_{1} = 0.385 \pm 0.028$, $y_{a} = 0.19 \pm 0.03$, and $\upsilon = 0.60 \pm 0.20$; and for the secondary ascending branch $\rho_{1} = 0.163 \pm 0.008$, $\rho_{2a} = 0.290 \pm 0.019$, $\rho_{3a} = 0.105 \pm 0.003$, $L_{1} = 0.412 \pm 0.026$, $y_{a} = 0.20 \pm 0.03$, and $\upsilon = 0.45 \pm 0.15$. The differences between the solutions provided by the three half-minima are relatively small compared to the errors, and average values are derived for the parameters. A distinction between {\it absorption} and {\it emission} values no longer appears to be necessary. The error in the mean for each parameter $p_{i}$ is computed in a conservative approach using the expression \begin{equation} \sigma = \left( \frac{\sum_{i=1}^n(p_i - \langle p \rangle)^2 + \sum_{i=1}^n \sigma_{i}^2}{nN} \right)^{1/2}\label{eqn-33}, \end{equation} where $n$ is the number of values used in the mean and $N$ is the number of independent sets (half-eclipses) of values. For an analysis completed in this way, the values adopted for the parameters of the stellar components in the HD\,5980 binary system are given in Table\,\ref{solutions}. \begin{figure*}[htb] \begin{center} \resizebox{16cm}{!}{\includegraphics{11707fg4.eps}} \end{center} \caption{Top panel: the observed light curve of HD\,5980 in the Stroemgren $v$ filter as a function of orbital phase. The lower left and lower right panels show the synthetic light curves for the primary and secondary eclipses, respectively, compared to the actual data. The synthetic light curves were computed using the mean parameters of the solutions found by our programme. \label{fig-4}} \end{figure*} \begin{table} \caption{Final `best-fit' values of the model parameters of the stars in the HD\,5980 binary system for $i = 86^{\circ}$ and prior to the 1994 LBV outburst.\label{solutions}} \begin{tabular}{l c c c} \hline & Primary eclipse & \multicolumn{2}{c}{Secondary eclipse} \\ \cline{3-4} & Ingress & \multicolumn{2}{c}{Ingress \& Egress} \\ \hline\hline star A & \multicolumn{3}{c}{$\rho_1 = 0.158 \pm 0.005$} \\ & & \multicolumn{2}{c}{$L_1 = 0.398 \pm 0.021$} \\ \hline star B & \multicolumn{3}{c}{$\rho_3 = 0.108 \pm 0.003$}\\ & \multicolumn{3}{c}{$\rho_2 = 0.269 \pm 0.014$}\\ & $L_2 = 0.300 \pm 0.016$ & & \\ & \multicolumn{3}{c}{$y = 0.19 \pm 0.02$} \\ & $u_2 = 0.58 \pm 0.07$ & \multicolumn{2}{c}{$\upsilon = 0.52 \pm 0.14$} \\ \hline \end{tabular} \end{table} \subsection{Discussion} Figure \ref{fig-4} shows how the synthetic light curve derived from the above elements fits the observational data for both the primary and secondary eclipses. While the bottom and the wings are fairly well fitted, the transparency law that we adopted for the extended envelope of star B is probably still too crude to allow a perfect match to the observations of the descending and ascending sides of both minima. As a consequence, the size of this envelope is probably slightly overestimated by our model. An asymmetry in this envelope, inferred by BP91, is difficult to ascertain from the present study. The difference between the values of $\rho_{2a}$ provided by the two branches of the secondary eclipse is indeed only marginally significant, given the errors. The fact that $L_1 + L_2 \neq 1$ provides independent confirmation of an unresolved source of third light along the line of sight, accounting for an additional relative luminosity of $L_{3} = 0.302$.
In principle, one would expect a better control of the uniqueness of the solution if the brightness ratios $L_2/L_1$ and $L_3/L_1$ in the optical could be fixed independently of our light curve analysis. However, in the specific case of HD\,5980, it is impossible to infer these brightness ratios from spectroscopy. In this system, there have been changes in both the spectroscopic and photometric properties of the binary components, and the brightness ratios are thus epoch-dependent. Spectrophotometric brightness ratios would have to be estimated based on the dilution of the emission lines. However, as shown by recent spectroscopic studies (e.g.\ Foellmi et al.\ \cite{Foellmi}), these emission lines contain epoch-dependent contributions from components A and B, as well as from the wind-wind interaction region. In our analysis of the light curve, we thus preferred not to constrain the brightness ratios {\it a priori}. Our results can, however, be checked {\it a posteriori} against the results obtained in the UV domain. Assuming that an O7 supergiant, which does not partake in the orbit, was responsible for this third light, Koenigsberger et al.\ (\cite{Ko94}) estimated a value of 2.8 for the ratio $(L_{1}+L_{2})/L_{3}$, which was considered to be consistent with the value of 2.0 obtained from the BP91 elements. It is worth noting that the value of 2.3 derived from the present analysis is in even closer agreement with the estimate of Koenigsberger et al.\ (\cite{Ko94}). \begin{figure*}[t!hb] \begin{minipage}{6cm} \begin{center} \resizebox{6cm}{!}{\includegraphics{11707f5a.eps}} \end{center} \end{minipage} \begin{minipage}{6cm} \begin{center} \resizebox{6cm}{!}{\includegraphics{11707f5b.eps}} \end{center} \end{minipage} \begin{minipage}{6cm} \begin{center} \resizebox{6cm}{!}{\includegraphics{11707f5c.eps}} \end{center} \end{minipage} \caption{Distribution of the parameters derived from the inversion of the moments of the light curve in various parameter planes. The left and middle panels are derived from the primary eclipse, whilst the rightmost panel presents the solutions of the secondary eclipse. The final solutions derived from each eclipse as well as their $1-\sigma$ error bars are overplotted in each panel. \label{fig-locus}} \end{figure*} Adopting $M_{v} = -7.5$ for the global absolute visual magnitude of HD\,5980 (cf. Koenigsberger et al.\ \cite{Ko94}), the magnitudes of the binary components are $M_{v}({\rm star~A}) = -6.50$ and $M_{v}({\rm star~B}) = -6.19$. Because of the difficulties in assigning the various lines of the spectrum of HD\,5980 to a specific component, there have been few attempts to establish a full SB2 orbital solution (Niemela et al.\ \cite{Niemela2}, Foellmi et al.\ \cite{Foellmi}). Therefore, our knowledge of the component masses of the eclipsing binary remains uncomfortably poor, even though we are dealing with an eclipsing system. Foellmi et al.\ (\cite{Foellmi}) used a multi-component fit of the N\,{\sc iv} $\lambda$\,4058 and N\,{\sc v} $\lambda$\,4603 lines to derive absolute masses of 58 -- 79\,M$_{\odot}$ for star A and 51 -- 67\,M$_{\odot}$ for star B. These values differ significantly from the estimates of Niemela et al.\ (\cite{Niemela2}), who attributed the entire N\,{\sc iv} $\lambda$\,4058 line to star A and the entire N\,{\sc v} $\lambda$\,4603 emission to star B when inferring minimum masses of $m\,\sin^3{i} = 28$ and 50\,M$_{\odot}$, respectively.
Adopting the solution of Foellmi et al.\ (\cite{Foellmi}), the sum of the masses of the components of the eclipsing binary equals 109 -- 146\,M$_{\odot}$. This corresponds to a semi-major axis of 0.67 -- 0.74\,AU (144 -- 159\,R$_{\odot}$) for the binary orbit, and to stellar radii of 22.7 -- 25.1\,R$_{\odot}$ for star A, 15.6 -- 17.2\,R$_{\odot}$ for the core of star B, and 38.7 -- 42.8\, R$_{\odot}$ for its envelope. Alternatively, one could estimate absolute masses from the visual magnitudes evaluated above by means of mass-luminosity relations derived, e.g., from massive-star evolutionary models. However, this approach is hampered by a lack of precise knowledge of the spectral types of the stars (and hence their bolometric corrections) and by the fact that the components of HD\,5980 can probably not be considered `normal' early-type stars. We considered the correlations between the various free parameters by plotting them in pairs (see Fig.\,\ref{fig-locus}). For this purpose, we used the results of 10\,000 trials for each eclipse. In most cases, we do not observe obvious correlations; the solutions are scattered over a limited part of the parameter plane (see e.g., the radius of the core of star B, $\rho_3$, versus that of star A, $\rho_1$, derived from the primary eclipse). However, the solutions show a clear trend if we plot the luminosity of star A ($L_1$) as a function of the radius of the envelope of star B ($\rho_{2a}$, evaluated from the secondary eclipse), and even fall along a clearly defined locus if we plot the luminosity of star B ($L_2$) as a function of the radius of star A ($\rho_1$, derived from the primary eclipse). These trends can be understood at least qualitatively. In fact, for the primary eclipse (the opaque core of star A occulting star B), an increase in the radius of star A implies that a larger fraction of the light of star B is removed. To account for the observed depth of the light curve, the total luminosity of star B must decrease. On the other hand, during secondary eclipse (star B in front of star A), an increase in the radius of the semi-transparent envelope of star B will produce deeper and broader wings of the secondary eclipse. To account for the observed eclipse shape, the model must react with an increase in the surface brightness (and hence the luminosity) of star A. In principle, the photometric variability of HD\,5980 as detected through medium-band filter observations could be affected by the Doppler shift of strong emission lines that fall in the wavelength range covered by the photometric filters. To quantify this effect, we simulated an observation of the star through the ESO Stroemgren filters. For this purpose, we used the spectrum of HD\,5980 taken from the spectrophotometric catalogue of Morris et al.\ (\cite{Morris}) kindly provided to us by Dr.\ J.-M.\ Vreux. Our calculations indicate that less than 1\% of the flux in the $v$ band comes from emission lines. Therefore, we do not expect the Doppler shift to have any significant effect on the photometric variability discussed here. The situation would be quite different if we were dealing with data taken in the $b$ band. There, about 30\% of the flux is produced by emission lines (especially the strong He\,{\sc ii} $\lambda$\,4686 line).
In this case, the Doppler motion as well as variations in the line intensity caused by the wind-wind interaction (e.g., Breysacher et al.\ \cite{BMN}, Breysacher \& Fran\c{c}ois \cite{BF}) could lead to significant variations in the observed $b$ magnitude. \begin{figure}[htb] \begin{center} \resizebox{8cm}{!}{\includegraphics{11707fg6.eps}} \end{center} \caption{Observed $v$ magnitudes of HD\,5980 in the phase interval 0.04 -- 0.16. The various symbols indicate data from different orbital cycles. Periastron passage occurs at phase 0.061.\label{fig-peri}} \end{figure} Finally, we note that the data points of the light curve suggest an increase in the brightness of the HD\,5980 system at orbital phases after $\phi \sim 0.04$ (see Fig.\,\ref{fig-4}). Whilst this increase could be caused by light reflections from the wind interaction region (which should have its concavity roughly turned towards the observer) near periastron, we caution that this apparent trend is actually inferred from observations from a single campaign (cycle 39 in Fig.\ \ref{fig-peri}) and could therefore be related to intrinsic variability of the WR star rather than represent a genuine phase-locked effect. \section{Conclusions} We have presented a method that allows us to treat the problem of atmospheric eclipses in the light curves of moderately wide, eccentric Wolf-Rayet + O binary systems in a semi-analytical way. We have then applied this method to the light curve of the peculiar system HD\,5980 prior to its 1994 outburst. Despite the non-uniform sampling of the light curve, the fact that we are analysing data from different orbital cycles (hence affected differently by the intrinsic variability of the WR star), and the limitations of our assumptions on the properties of the WR envelope, our method yields consistent results when applied to the primary or secondary eclipse. We have been able to constrain some of the physical parameters of this system, although the lack of a consistent SB2 spectroscopic orbital solution prevents us from obtaining fully model-independent parameters. As a next step, we will try to generalize our method to the analysis of the eclipses of a system harbouring two stars, both with extended atmospheres. This should allow us to analyse the light curve of HD\,5980 observed after the LBV eruption.
\section*{Abstract} Urbanization leads to an increase of traffic in cities. The Macroscopic Fundamental Diagram (MFD) suggests describing urban traffic at a zonal level in order to measure and control traffic. However, for a proper estimation, all data need to be available. The main question discussed in this paper is: \textit{How to derive a network-wide traffic state estimate?} We follow up on literature suggesting that the operational estimate be based on the speed of a limited sample of cars sharing floating car data (FCD). We propose an initial step in which an MFD is constructed based on FCD; this MFD is then used in step 2, the operational traffic state estimation. For operational traffic state estimation, i.e., the real-time traffic state estimation, the penetration rate is unknown. For both steps, we assess the impact of errors in the estimation. In light of the errors, we also formulate an indicator that shows when the method would yield implausible results, for instance in case of an incident. The method has been tested using microsimulation. A 26\% error in the estimated average density is found for an FCD penetration rate of 1\%; increasing the penetration rate to 30\% reduces the error in estimated average density to 7\%. \section{Introduction} With the Macroscopic Fundamental Diagram (MFD), traffic operations can be described at a zonal level, i.e. at the level of a (sub)network. The MFD shows to what extent an increase in average vehicle density (proportional to the vehicle accumulation in an area) reduces average vehicle speed, and thereby possibly vehicle flow. Given the high number of roads in a town and the many connections and turning movements of vehicles, this higher aggregation level makes the MFD a very suitable tool for describing traffic operations in urban environments for many different applications. Like many traffic variables of interest, the variables that describe the MFD relationship, i.e. average density (or vehicle accumulation), network average speed and/or average flow (production), are difficult to observe directly and completely. In the ultimate and ideal case, trajectories of all vehicles in the network over a long time period are available, and the MFD quantities can be derived directly using Edie's relationships \cite{Edi:1965}. Unfortunately, this situation will not likely occur any time soon, implying that we need to estimate the MFD variables from whatever data \textit{are} available. Currently, many cities have loop detectors installed on major roads and at approaches of major intersections. Moreover, it is expected that in the near future, data from some vehicles will be available. One might argue that this is already the case, with some providers collecting this type of data. Examples are Google, Apple, and TomTom; moreover, some vehicles with connectivity capabilities (e.g., Tesla) will be able to send information on their position and speed to a centralized server. \vknew{\citev{gayah2013} propose a clever way to use FCD to obtain a traffic state, which can be used to support network-wide traffic control strategies. In order to test their methodology, they use a simulated network of the city of Orlando. Their methodology takes advantage of the fact that each point on the MFD is related to a unique speed value. Thus, knowing the average speed of the probe vehicles allows them to infer from the MFD the density to which that speed corresponds. This is a very promising method, but it requires the MFD to be known.
\citev{gayah2013} use full information to obtain this MFD beforehand, which is unrealistic in practice.} In this paper we elaborate and expand on this method, assessing the uncertainties that arise when the method is applied in practice. Three issues stand out. The first concerns the estimation of the traffic state under normal conditions: an important step is that an MFD should be obtained a priori. We present a method to obtain an MFD using FCD and combine this with loop detector data. This yields errors, because of the low penetration rate of FCD and because of the uneven distribution of the vehicles sharing their data. Secondly, the observed speed of the equipped vehicles is not necessarily the same as the speed of all vehicles, which yields an estimation error. We will assess this error and its combination with the error in estimating the MFD. Thirdly, the MFD might change under incident conditions, which could lead to an estimated traffic state that is far from reality. We discuss how these situations can be assessed with the FCD at hand. The questions discussed in this paper are hence: \textit{How can we reliably estimate the network traffic state (average network density)}, and \textit{How can the (un)reliability of the estimate be indicated}? The key methods laid out in the paper do exactly this. Roughly speaking, the traffic state estimation method has two major steps. In the first step (initialization), FCD is fused with loop detector data to obtain an MFD. In the second step (operational estimates), the FCD speed is combined with the estimated MFD curve to estimate the traffic state (i.e., the average density); this second step is not new, and follows the ideas of \citev{gayah2013}. The intermediate step of the MFD -- rather than a direct up-scaling of the number of equipped cars towards density -- means the penetration rate of vehicles sharing FCD does not need to be known in the second step, and moreover, the estimated traffic state is less dependent on fluctuations of the penetration rate. The second contribution is the estimation of the reliability of the estimated state. To this end, we consider the rate of change of the speed of the FCD as an indicator of a possible disruptive incident, which causes the MFD to change. Note that this paper summarizes the main results extensively elaborated in the thesis of \citev{Mer:2016}. The paper is organized as follows. In section \ref{sec_existing_methods} we briefly review the literature related to state estimation in general and MFD estimation in particular. The method is then described in section \ref{sec_method}. We test it using microscopic simulation in section \ref{sec_simulations}. In this test, we vary the penetration rate of FCD from 1\% to 30\% to assess the capabilities of the method under low penetration rates. We explicitly discuss the errors in the method, and the range of possible outcomes. An indicator for the reliability of the method is presented in section \ref{sec_reliability}; this indicator is tested using simulations of incidents. The paper closes with the conclusions (section \ref{sec_conclusions}). \section{Traffic estimation}\label{sec_existing_methods} Estimating the traffic state using data from multiple sources is one of the key research themes within the traffic flow and ITS community today. Real-time traffic estimates provide the basis for traffic prediction, traffic management and control, and information provision to travelers and road authorities.
Reliable traffic state estimates also result in consistent historical databases that can be used for modeling, analyses, evaluation of measures and policies, and for research itself. This section briefly introduces the challenges in estimating the traffic state in general (section \ref{sec_TSE}), and motivates how the MFD can be used to do so at the network level (section \ref{sec_litMFD}). \subsection{Traffic state estimation}\label{sec_TSE} As noted in the introduction, ideally, the traffic state at any level of scale (on a lane, link or entire network) is reconstructed by using trajectories of all vehicles. \citev{leclercq2014} shows that the trajectory method can produce the MFD very accurately in all network shapes and thus suggests that this method can be used as a basis to evaluate and compare other methods---something feasible in simulation-based studies. The same advice holds for state estimation in general. To estimate state variables in the field we need to use estimation techniques in combination with whatever data \textit{are} available. Such traffic state estimation methods involve (apart from the data available from various types of sensors): (1) theories (assumptions, models, equations) that describe the relation between the data and the state variables of interest; and (2) data assimilation techniques that combine data and model predictions and in the process address measurement and modeling errors. The most popular types of state estimation approaches are \textit{sequential Bayesian estimation techniques} that combine traffic flow simulation models with, for example, (extended) Kalman filters \cite{Hinsbergen2012a,Wang2005a}, ensemble Kalman filters \cite{Yuan2015a,Work2008a}, unscented Kalman filters \cite{Ngoduy2011a}, particle filters \cite{Wang2016a}, and many variations on the same theme to assimilate data from a multitude of different sensors to estimate density, space mean speed, travel time, fundamental diagram parameters, and much more. The great appeal of these approaches is that they combine knowledge of traffic dynamics with principled data assimilation techniques to ``optimally'' balance the uncertainty and limitations of traffic flow models with the inherent uncertainty in the data. The quotes indicate that in many cases (due to a variety of reasons) no optimality guarantees are possible. Nonetheless, there is a huge body of evidence that supports the effectiveness of Bayesian sequential estimation in traffic estimation (and in many other estimation problems). The great advantage is that these methods provide an integrated solution for network-wide state estimation (and prediction), and that they use tractable behavioral and physical relationships, which makes them highly suitable for what-if reasoning, control optimization, and application under non-recurrent conditions. The price for this explanatory power is that these (simulation) model-based methods are generally complex to design and maintain, and are sensitive to data errors, in particular to systematic bias in these data. Moreover, they require many (partially unobservable) inputs (e.g. traffic demand, control inputs) and contain many parameters that need to be calibrated or even predicted from data. At the other end of the spectrum we find purely statistical (data-driven) methods that use no (traffic flow) assumptions at all, but rely on statistical correlations to infer traffic variables on particular links over particular time periods from historical and real-time data.
These approaches may be based on relatively simple linear models \cite{Kong2013a} or on highly advanced deep-learning architectures \cite{Yu2017a} and everything in between. These methods are becoming increasingly popular, particularly for large-scale network-wide estimation and prediction, but they lack the explanatory power to uncover unobserved state variables, like vehicular density. Whereas an integrated Bayesian approach to estimate density in an entire (urban) network would provide a powerful solution to also estimate network-level quantities (by aggregation), the data requirements for such an approach still seem infeasible in many cases. For our purpose, more parsimonious and direct approaches to estimate network density are preferable. One category of these approaches is what we term data-data-consistency approaches, which explicitly exploit semantic differences between different data (where one data source may provide supporting evidence to correct a second source) and utilise direct analytical or statistical relationships (e.g. travel time equals distance over speed, density equals the spatial derivative of the cumulative flow, the fundamental relationship, etc.) to infer state variables. Examples include methods that fuse counts with travel times to estimate vehicle accumulation \cite{Bhaskar2014a,VanLintHoogendoorn2015}, or methods that fuse local speeds and travel times to compute/correct space mean speeds \cite{Ou2008b}. In our case, we want to exploit the (shape of the) MFD itself in the estimation of network density. To this end, we briefly explore different approaches to estimate the MFD (variables) from data. \subsection{MFD estimation}\label{sec_litMFD} If only detector data are available to obtain the MFD, the network flow can be calculated sufficiently accurately. However, the same does not apply for the vehicle speed and the network density. As the literature has shown \citep{buisson2009,Cou:2011}, the location of the detectors can significantly influence the network speed estimation. When the detectors are near the stop line, most of the captured vehicle speeds are low and consequently, the situation further upstream of the traffic light is not taken into account. This issue seriously affects the validity of the detector data for estimating the MFD. \citev{nagle2014} suggest a methodology that overcomes these disadvantages. Their method uses the generalized definitions of Edie \citep{Edi:1965} to estimate the network-wide variables from probe vehicle data. However, in order to apply these formulas, the data of all the vehicle trajectories are necessary, whereas usually only a small number of vehicles serves as probes. The authors suggest this difficulty can be overcome as long as the ratio of the probe vehicles is known. In order to acquire the ratio, they proposed dividing the number of vehicles that were tracked by GPS in the analysis area for a specific time period by the number of vehicles that crossed the detectors in the same area and period. Microsimulation tests of their methodology showed that a 20\% probe penetration rate can provide accurate estimations for any traffic state, making this a robust methodology to acquire the MFD. This study also has some limitations. For one, it assumes that the probe vehicles are uniformly distributed across the network, which is not realistic in many cases. \citev{leclercq2014} also combines data from probe vehicles with loop detector data, but in a slightly different way.
From the probe vehicles, the average network speed is derived, and from the loop detectors, the average network flow. Their results show that a 20\% probe penetration rate can significantly improve the estimation of the network speed for the MFD. However, this study has the limitation that the dynamics of the network are not captured completely, since the loop detectors are not placed everywhere in the network. \citev{Amb:2016} also describe a problem similar to ours: fusing loop detector data and FCD for MFD estimation. They succeed in finding an MFD with an elegant method. However, they need the loop detector data throughout the whole estimation process, i.e., for MFD estimation as well as for real-time traffic state estimation. \citev{du2015} try to overcome these limitations by proposing an MFD estimation method without the conditions that the probe penetration rate is homogeneous and that detectors are placed on all links. Their approach estimates the appropriate average probe penetration rates from weighted harmonic means, with the weights being the travel times and distances of the individual probe vehicles. They test their methodology with microsimulation and the results show that it is a very effective approach. \citev{du2015} suggest that since mobile probe data are becoming more and more available, this methodology can soon also be applied with real data. \citev{tsubota2014}, finally, is one of the few studies that applied such a data fusion approach using real data, for the city of Brisbane, Australia. Similarly to \cite{Bhaskar2014a,VanLintHoogendoorn2015}, their approach aims to estimate the network density by combining loop detector data, traffic signal timings and probe vehicle data, exploiting the relationship between cumulative counts and travel times between consecutive detector locations. The common theme of these approaches is that data fusion indeed seems the way forward in estimating the MFD (variables). Unfortunately, without large-scale ground-truth data, we do need to resort to simulation to assess the quality of the methods. This is also our approach. We combine a number of elements of existing methods, and contribute to the existing literature by designing a data fusion MFD estimation method that works well for low penetration rates of FCD. Moreover, we develop a method that endogenously computes the quality of this estimate. \section{Proposed traffic state estimator}\label{sec_method} \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{combo_steps_paper2} \caption{Schematic representation of the two steps of the traffic state estimation process} \label{mfd_fusion} \end{figure*} In this research, we use two stages in the estimation process. They are graphically shown in Figure~\ref{mfd_fusion}. First, the MFD is estimated, and then, with the use of the (speeds of the) FCD and the MFD, the traffic state is found. \subsection{Step 1: creating MFDs} The ground truth for the traffic operations is formed by the generalised speed, flow and density according to Edie's definitions \citep{Edi:1965}, scaled by the network length. According to Edie's formulas, the flow $q$ in the network for any arbitrary area in space-time is calculated by dividing the total distance traveled by all vehicles by the amount of space (in this case, the total lane length $L$ in km) and the amount of time (the aggregation time $T$).
This hence also applies to the average flow $P$: \begin{equation} \label{flow_ed} P_\textrm{Edie}=\frac{\sum_{ i}^{n}d_{i}}{LT} \end{equation} In this equation, $d_{i}$ is the travel distance of vehicle $i$ and $n$ is the total number of vehicles that use the network during the time interval. Note that the average flow is proportional to $P_\textrm{Edie}$, differing by a scaling with the network length. Similarly, following Edie's definitions, the density $k$ is computed by dividing the total time spent by the amount of space and the amount of time. This also holds at the network scale, hence for the average density $A_\textrm{Edie}$ we find \begin{equation} \label{density_ed} A_\textrm{Edie}=\frac{\sum_{ i}^{n}t_{i}}{LT} \end{equation} In this equation, $t_{i}$ is the travel time of vehicle $i$. The total number of vehicles in the system, the accumulation $N$, can be derived from the average density: \begin{equation} \label{accum_ed} N=A_\textrm{Edie} L \end{equation} The speed $v$ can be determined by the ratio of average flow and average density: \begin{equation} \label{speed_ed} v=\frac{q}{k}= \frac{P}{A}=\frac{\sum_{i}^{n}d_{i}}{\sum_{i}^{n}t_{i}} \end{equation} The first step in the method is to create the MFD. This is done by combining loop detector data and floating car data. The main idea on how to fuse the data is based on the fact that the vehicle trajectories offer representative information, but only a subset of them is available. However, if the relative size of the subset (the penetration rate) is known, then the variables calculated from the subset can be scaled to the full set of vehicles. In other words, if the penetration rate of the known vehicle trajectories in relation to the total number of vehicles (denoted $\eta_\textrm{traj}$) is calculated, the information from the subset of vehicle trajectories can be divided by this penetration rate to represent the entire network traffic state. This penetration rate of the known vehicle trajectories can be calculated by relating the total number of vehicles that cross the detectors in every time interval to the number of vehicle trajectories that traverse the links with the respective detectors. The average flow and the average density calculated from the subset of the known vehicle trajectories using Edie's definitions can then be divided by this proportion to represent the total set of vehicles. In an algorithm format, the proposed data fusion approach of the 1\textsuperscript{st} step is: \begin{enumerate} \item From the detector data, calculate the total number of vehicles crossing the detectors in the aggregation period $N_\textrm{det}$. \item For the subset of known vehicle trajectories, calculate the average density $A_\textrm{Edie}$ (Equation \ref{density_ed}), the average flow $P_\textrm{Edie}$ (Equation \ref{flow_ed}), as well as the number of vehicle trajectories that traverse the locations where detectors are located, $N_\textrm{traj}$. \item For each time interval, measure the total number of counts at all detectors combined ($N_\textrm{det}$), and calculate which fraction of this number is represented in the known vehicle trajectories: \begin{equation} \eta_\textrm{traj}=\frac{N_\textrm{traj}}{N_\textrm{det}} \end{equation} It is important to note that for the vehicle numbers, we simply add all observations for all detectors (i.e., we do not do so on a link-by-link basis), relying on the law of large numbers, which apparently also works in constructing an MFD in the first place.
The value $\eta_\textrm{traj}$ found in this way can be used as an estimate of the penetration rate. \item For each time interval, calculate the average density $A$ and average flow $P$, as \begin{equation} A=\frac{A_\textrm{Edie}}{\eta_\textrm{traj}} \end{equation} \begin{equation} P=\frac{P_\textrm{Edie}}{\eta_\textrm{traj}} \end{equation} \item Construct the MFD of the network, finding the mathematical function that best fits the data points of $A$ and $P$. \end{enumerate} \subsection{Fitting a functional form MFD} Step 2 of the method requires a functional form of the MFD. The requirements that the formula of the MFD needs to fulfil are: \begin{enumerate}\itemsep0em \item The formula needs to follow a concave shape, due to the opposite trends of the two MFD branches: the free-flow and the congested branch. In the free-flow branch, the flow increases rapidly when the density increases and the slope of the curve is quite steep. In the congested branch, the flow decreases gradually as the density increases, so the slope is less steep. \item The MFD has zero flow for zero density, so the fitting formula needs to have a zero constant term. \item The derivative of the MFD function at zero density should be equal to the free flow speed. \end{enumerate} The often-used expression for a third-order polynomial fit without a constant (e.g., \citep{Kno:2013TRRGMFD_A10}) does not fulfil the first criterion; initial fits in fact show that the second turning point of the third-order polynomial often lies below the jam density, hence the flow of the fitted fundamental diagram \emph{increases} for high densities. Therefore, we fit another function with no more parameters. We opt for a combination of a Greenshields \citep{Gre:1934} fundamental diagram (parabolic) with elements of a Drake \citep{Dra:1967} fundamental diagram (exponential decay). We fit \begin{equation} P=p_1 \cdot A \cdot e^{\lambda A} + p_2 \cdot A^2 \cdot e^{2 \lambda A} \end{equation} Examples of the functional form can be found in section \ref{sec_MFDresults}. This function fulfils the criteria and at the same time has only 3 parameters to calibrate: $p_1$, $p_2$ and $\lambda$. The parameter $\lambda$ can be interpreted as the non-linearity in the density; with $\lambda=0$, one would obtain a symmetrical parabola (the Greenshields fundamental diagram). The parameters are found using the data points $P_{\textrm{net},i}$ and $A_{\textrm{net},i}$. The subscript $i$ denotes an individual measurement point over a time interval (and a spatial extent). We estimate the matching average flow according to the average density at data point $i$ and the MFD: \begin{equation} \widetilde{P}_i= p_1 \cdot A_{\textrm{net},i} \cdot e^{\lambda A_{\textrm{net},i}} + p_2 \cdot A_{\textrm{net},i}^2 \cdot e^{2 \lambda A_{\textrm{net},i}} \end{equation} Next, we take the root mean squared error of the average flows for all time intervals: \begin{equation} \textrm{RMSE}=\sqrt{\left.\sum_{i=1}^{n}\left(\left(\widetilde{P}_i-P_{\textrm{net},i}\right)^2\right)\right/n} \end{equation} The RMSE depends on the parameters. Fitting the MFD consists of adjusting the parameters such that the RMSE is minimized. We use the \textit{fminsearch} function in Matlab for this. Using the fundamental relationship of \eqref{speed_ed}, we can describe the speed $v$ as a mathematical function of the average density $A$: \begin{equation} v(A)=\frac{P}{A}=p_1 \cdot e^{\lambda A} + p_2 \cdot A \cdot e^{2 \lambda A} \end{equation} If the parameters are known, we also have the relationship between average density and speed.
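To make both steps concrete, the sketch below (in Python) fits the functional form to the data points with a Nelder--Mead search, the analogue of Matlab's \textit{fminsearch}, and then inverts the speed--density relationship as required in step 2 below; the starting point \texttt{x0} and the density bracket \texttt{A\_max} are illustrative assumptions of ours:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize, brentq

def P_mfd(A, p1, p2, lam):
    # functional form: P = p1 A e^(lam A) + p2 A^2 e^(2 lam A)
    return p1 * A * np.exp(lam * A) + p2 * A**2 * np.exp(2.0 * lam * A)

def fit_mfd(A_net, P_net, x0=(50.0, -0.5, -0.02)):
    # minimize the RMSE between fitted and observed average flows
    rmse = lambda x: np.sqrt(np.mean((P_mfd(A_net, *x) - P_net)**2))
    return minimize(rmse, x0, method='Nelder-Mead').x

def density_from_speed(v_obs, p1, p2, lam, A_max=150.0):
    # step 2: solve v(A) = v_obs for A; since each speed corresponds
    # to a unique density on the MFD, a bracketing root finder works
    f = lambda A: (p1 * np.exp(lam * A)
                   + p2 * A * np.exp(2.0 * lam * A) - v_obs)
    return brentq(f, 1.0e-6, A_max)
\end{verbatim}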
\subsection{Step 2: using FCD and MFDs to find the traffic state}\label{sec_step2} The MFD obtained from the 1\textsuperscript{st} step can be used in the 2\textsuperscript{nd} step to calculate the traffic state based on FCD from an unknown penetration rate. The second part of the schematic flow chart in Figure \ref{mfd_fusion} depicts the 2\textsuperscript{nd} step of the process. In the MFD relationship for the network, each speed value corresponds to a unique density value. Taking advantage of this, speed data from any data source can be used to derive the average network density and thus the point on the MFD at which the traffic network operates. In algorithmic form, the proposed density estimation approach of the 2\textsuperscript{nd} step is: \begin{enumerate} \item Given the relationship between speed and density for the network and speed measurements from any available data source, calculate the average network density (a numerical sketch of this inversion is given after this list). \item Use the derived average network density to indicate the traffic state on the MFD. \end{enumerate}
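As an illustration of the first item above, the following sketch numerically inverts the fitted speed-density relationship $v(A)$; since $v(A)$ decreases monotonically with density over the relevant range, a standard root-finding routine recovers the unique density matching a measured speed. The parameter values are illustrative assumptions only.

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def v_of_A(A, p1, p2, lam):
    # Speed-density relation v(A) = p1*exp(lam*A) + p2*A*exp(2*lam*A)
    return p1 * np.exp(lam * A) + p2 * A * np.exp(2 * lam * A)

def density_from_speed(v_m, p1, p2, lam, A_max=150.0):
    # Solve v(A) = v_m for A; v(A) decreases monotonically with density
    # over the physically relevant range, so the root is unique
    return brentq(lambda A: v_of_A(A, p1, p2, lam) - v_m, 1e-6, A_max)

# Illustrative parameters and a measured network speed of 25 km/h
p1, p2, lam = 55.0, -0.25, -0.02
print(density_from_speed(25.0, p1, p2, lam))  # average density [veh/km]
\end{verbatim}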
\section{Case study}\label{sec_simulations} The process described in section \ref{sec_method} is tested using a case study. The setup of this test is described in section \ref{sec_expsetup} and the results in section \ref{sec_MFDresults}. \subsection{Experimental setup}\label{sec_expsetup} For the verification of the method we use microsimulation. That way, the ground truth data are available in the form of trajectories, hence the ground truth average density and average flow can be determined using \eqref{accum_ed} and \eqref{flow_ed}. We use S-Paramics for this purpose. We simulated a part of the municipality of Leidschendam-Voorburg, located in the province of South Holland. The model has been developed and calibrated for another project by the engineering company Sweco. The remainder of this section first presents the basic configuration characteristics and then the traffic demand of the simulated network. The traffic network of the town of Leidschendam-Voorburg includes different types of roads such as freeways, arterial roads and urban streets. It is connected to the A4 highway, and the N14 crosses through the city via tunnels. A map of the simulated area is presented in Figure \ref{map}. \begin{figure} \subfigure[Map (source: OpenStreetMap)]{\includegraphics[width=.5\textwidth]{leidschendam_real_openstreetmap}\label{map}} \subfigure[The simulated network with the location of the loop detectors indicated with green lines, and the location of the incidents (letters)]{\includegraphics[width=.5\textwidth]{leidschendam_model_incident2} \label{fig:leidschendam_model}\label{incident_locations}} \subfigure[Traffic demand during the morning peak period of the simulation]{\includegraphics[width=.5\textwidth]{Demand_profile_of_the_morning_period}\label{demand}} \caption{The network of Leidschendam-Voorburg} \end{figure} The simulated traffic network of Leidschendam-Voorburg in Paramics covers an area of about 7 km\textsuperscript{2} and has a total road length of 29.9 km. It includes 67 zones connected with 491 nodes and 1068 links. The road types present in the network are urban roads with speed limits ranging from 30 km/h to 80 km/h and highways with speed limits ranging from 80 km/h to 120 km/h. In total, the simulated network has 63 junctions, of which 17 are signalized and the rest are flow controlled. The vehicle types in the simulation are single vehicle units (e.g., cars and LGVs), buses, trams, HGVs and bicycles. The network is covered with 65 traffic loop detectors, of which 6 are used for the tram and/or the bus. The configuration of the simulated network of Leidschendam-Voorburg can be seen in Figure \ref{fig:leidschendam_model}. The locations of the loop detectors across the network's intersections are shown in green. The simulated time period of the analysis is the morning peak period from 06:00-10:00, when higher congestion is observed in the real network. The profile of the demand during the morning peak period is presented in Figure \ref{demand}. The demand is given for 5-minute intervals. As can be seen, traffic increases gradually with a peak from 07:30 to 09:00, and then demand decreases until 10:00. \begin{figure*} \subfigure[Ground truth (100\% penetration rate, full data availability)]{\includegraphics[width=.5\textwidth]{Ground-Truth_MFD}\label{fig:gt_mfd}} \subfigure[Sampling data and data fusion]{\includegraphics[width=.5\textwidth]{Data_Fusion_MFD}\label{fig:est_mfd}}\\ \subfigure[Comparison of fitted ground truth and data fusion MFD]{\includegraphics[width=.5\textwidth]{Comparison_of_Ground-Truth_and_Data_Fusion_MFD} \label{fig:mfds}} \subfigure[Comparison between the estimated and the real network densities]{\includegraphics[width=.5\textwidth]{Real_and_Estimated_Accumulation}\label{fig:dens_comp}} \caption{MFDs: ground truth and data fusion MFD} \end{figure*} \begin{table*} \centering \caption{Comparison between the parameters of the Data Fusion MFD and the Ground-Truth MFD} \label{table_comparison} \begin{tabular}{rlcccc} \toprule &&{Properties}& Ground-truth MFD & Data Fusion MFD & Units\\ \midrule \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{Parameters \&}}}& \parbox[t]{2mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{fitness}}}& $\lambda$& -0.02 & -0.02 & - \\ && $p_1$& 54.4 & 58.05 & - \\ && $p_2$& 0.19 & -0.25 & - \\ && Adjusted $R^{2}$ & 0.94 &0.90 & - \\ && RMSE & 49.51 & 69.20 & vehicles/h \\ \midrule \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{Characterising}}}& \parbox[t]{2mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{points}}} &&&&\\ && Free Flow Speed & 54.4 & 58.05 & km/h \\ && Capacity & 1021 & 1022 & vehicles/h \\ && Critical Density & 48 & 52 & vehicles/km\\ && Critical Speed &21.3& 19.7 & km/h \\&&&&&\\ \midrule \end{tabular} \end{table*} In order to capture the variability in the traffic process, we perform 6 different runs. This section will later show that these runs are comparable. For each of these runs, we assume a different penetration rate. We check the method for penetration rates of 30\%, 20\%, 10\%, 5\%, 3\% and 1\%. Note that whether a vehicle sends its speed is chosen randomly; hence, the probe vehicles are not necessarily equally distributed over the various links in the network. In addition, different runs have different demand levels to cover all network conditions (from free flowing to congested), for which we take 100\%, 110\% and 120\% of the original demand. The resulting estimated accumulations are compared with the real accumulations taken from 100\% of vehicle trajectories to validate their accuracy. \subsection{Results}\label{sec_simresults} This section presents the results of applying the method to the network of Leidschendam-Voorburg. First, the results of the MFD fitting procedure are described, and then the results of step 2, i.e. the traffic state estimation.
\subsubsection{Constructed MFDs}\label{sec_MFDresults} First, figure~\ref{fig:gt_mfd} shows the MFD for the case with full data availability (100\% FCD); the shape is crisp, and a clear relationship between average density and average flow is visible. The fit of the MFD follows the points nicely, confirming the chosen functional form. Figure~\ref{fig:est_mfd} shows the MFD based on the data fused from loop detectors and trajectories (limited FCD). The shape is also very crisp, and very similar to the MFD for the full data. Here too, a good fit is possible. To compare the fits, and implicitly see how much the reduction of the penetration rate influences the fitted MFD, both fits are plotted in figure~\ref{fig:mfds}. The free flow branch and the peak are almost identical; only for the congested branch, the MFD based on limited data is slightly lower. A comparison of the parameters and the characterising points of both MFDs is found in table~\ref{table_comparison}. This shows that the parameters and characteristics of the MFDs are very similar. All in all, an MFD can be fitted very well with the available data. The fit described here is based on a combination of various points (i.e., combining all runs, each having a different penetration rate, and hence all points in the figure). Next, we discuss the quality of the fit for the various penetration rates. \subsubsection{Quality of traffic state estimation} Following the process of the second step as described in section \ref{sec_step2}, the traffic state, hence the average density, is estimated. Figure~\ref{fig:dens_comp} compares the estimated and real average density. The error seems to be rather constant throughout the entire density domain. \begin{figure} \includegraphics[width=.5\textwidth]{Error_in_estimated_states} \caption{Mean percentage error for the densities estimated at various penetration rates $\eta_\textrm{traj}$}\label{dens_comp_figure} \end{figure} Figure~\ref{dens_comp_figure} shows the mean percentage error for each fraction of vehicle trajectories. The higher the fraction of known vehicle trajectories, the lower the mean error. This means that the estimated densities are closer to reality when the penetration rate increases, which is in line with expectations. For the lowest penetration rate tested (1\%), the error is 26\%. The error almost halves to 14\% at a 3\% penetration rate. It then remains relatively constant up to a penetration rate of 20\%. For the highest penetration rate (30\%), the error is 7.6\%. Note that these reported errors are the errors after the fitted MFD is applied. Hence, for a 100\% penetration rate, these errors do not go to 0; the remaining error reflects the natural stochastic fluctuations of traffic. The fit quality fluctuates between different random draws of the probe vehicles. These fluctuations are small, ranging from 0.52 veh/km (standard deviation) for a 1\% penetration rate down to 0.15 veh/km for a 30\% penetration rate. These results offer a first indication of how the different penetration rates influence the accuracy of the traffic state estimation. Nevertheless, they should only be considered a validation test of how well the proposed process works under different conditions.
To draw conclusions on the required penetration rate of floating car data, additional information is needed on whether an exact traffic state estimate is required, or whether it suffices to know only if the network is congested or not. The desired accuracy level can be determined depending on the purpose the traffic state estimation will be used for. \section{Reliability of estimate}\label{sec_reliability} Whereas the previous section presented an estimation method for the traffic state, this section presents the reliability of the method. This is first mathematically derived, and then the confidence interval is shown for the example case described above. Section \ref{sec_incident} tests a measure to identify whether the estimated traffic state should be considered unreliable as a result of an abnormal event within the network. \subsection{Propagation of errors}\label{sec_uncertainty} Both steps of the process of section \ref{sec_method} (i.e., finding the shape of the MFD and getting the state from the speed of the FCD) are subject to an error. To illustrate the first error, that of fitting the MFD, error bounds are added in the speed-average density plane, see figure~\ref{fig_kv_MFD_bounds}. The bounds seem to widen for higher densities, but that is an optical illusion. The width of the bounds is twice the standard deviation of the points to the line, in both directions. If the errors between the points and the line are normally distributed, 95\% of the points fall within the bounds. For each real speed, there is a probability distribution of the matching density, which we indicate by $f(A|\spd_\textrm{real})$. In the second step, an error is introduced because the speeds of the sample of vehicles might differ from the speed of all vehicles. We relate the estimated speed to the real speed using the ground truth information from the simulation and, similar to the densities, we determine the confidence intervals. We denote the probability density function of the measured (sample-mean) speed $\spd_m$ given the real speed $\spd_\textrm{real}$ as $f(\spd_m|\spd_\textrm{real})$. \begin{figure*}[t!] \subfigure[MFD with confidence intervals]{\includegraphics[width=.5\textwidth]{Accumulation-speed_with_error_bounds}\label{fig_kv_MFD_bounds}} \subfigure[Schematic representation of the combination of the probabilities]{\includegraphics[width=.5\textwidth]{joint_probability2.jpg} \label{fig_conceptuncertenty}}\\ \subfigure[Probability of the estimated density values given speed]{\includegraphics[width=.5\textwidth]{Probability_for_average_density_given_speed}\label{fig_3duncertenty}} \subfigure[Probabilities of estimated densities given the speed measurements]{\includegraphics[width=.5\textwidth]{Average_density_PDF_for_given_speed}\label{fig_examples_speeddensity}} \caption{Combined errors in estimation} \end{figure*} Both errors have a joint effect, which is discussed in the remainder of this section. Given a speed measurement $\spd_m$, we want to estimate the probability density function $f(A|\spd_m)$ of the average density given this speed measurement. What we have is the probability density function of the average density given the real speed, $f(A|\spd_\textrm{real})$, and the probability density function of the measured speeds given the real speeds, $f(\spd_m|\spd_\textrm{real})$.
In this section we combine the uncertainties, as graphically indicated in figure~\ref{fig_conceptuncertenty}. Basically, all joint elements of ($A, \spd_m$) are given a weight, such that a cross section of the graph in figure~\ref{fig_conceptuncertenty} (later constructed in figure~\ref{fig_3duncertenty}) for a constant $\spd_m$ gives a function over the average density $A$. Scaling this function such that its integral over density equals one gives the probability density function. We now present this system in equations. We consider the average density $A$ and the speed values $\spd_m$ as dependent events, so the conditional probability of $A$ given $\spd_m$ is defined as (Kolmogorov definition): \begin{equation} \label{cond_prob} f(A|\spd_m)=\frac{f(A,\spd_m)}{f(\spd_m)} \end{equation} where $f(A,\spd_m)$ is the joint probability density function of average density and speed. Using conditional probabilities, we can rewrite: \begin{equation} f(A,\spd_m)=\int_{0}^{\spd_\textrm{max}}f(A|\spd_m,\spd_\textrm{real})f(\spd_m|\spd_\textrm{real}) f(\spd_\textrm{real}) d\spd_\textrm{real} \end{equation} Inside the integral, $\spd_\textrm{real}$ is given; once the real speed is known, the measured speed carries no additional information on the density, i.e. $f(A|\spd_m,\spd_\textrm{real})=f(A|\spd_\textrm{real})$, and this simplifies to \begin{equation} f(A,\spd_m)= \int_{0}^{\spd_\textrm{max}} f(A|\spd_\textrm{real}) f(\spd_m|\spd_\textrm{real}) f(\spd_\textrm{real}) d\spd_\textrm{real} \label{eq_fkv} \end{equation} Similarly, using conditional probabilities we can rewrite $f(\spd_m)$ as \begin{equation} f(\spd_m)=\int_{0}^{\spd_\textrm{max}}f(\spd_m|\spd_\textrm{real}) f(\spd_\textrm{real}) d\spd_\textrm{real} \label{eq_fv} \end{equation} Substituting equation~\ref{eq_fkv} and equation~\ref{eq_fv} into equation~\ref{cond_prob}, and assuming a uniform prior $f(\spd_\textrm{real})$ on $[0,\spd_\textrm{max}]$ so that the prior factors out of both integrals, we obtain: \begin{multline} f(A|\spd_m)=\\ \frac{\int_{0}^{\spd_\textrm{max}}f(A|\spd_\textrm{real}) f(\spd_m|\spd_\textrm{real})f(\spd_\textrm{real}) d\spd_\textrm{real}}{\int_{0}^{\spd_\textrm{max}}f(\spd_m|\spd_\textrm{real}) f(\spd_\textrm{real}) d\spd_\textrm{real}}=\\ \frac{\int_{0}^{\spd_\textrm{max}}f(A|\spd_\textrm{real}) f(\spd_m|\spd_\textrm{real}) d\spd_\textrm{real}}{\int_{0}^{\spd_\textrm{max}}f(\spd_m|\spd_\textrm{real}) d\spd_\textrm{real}} \end{multline} Assuming a maximum average speed $\spd_\textrm{max}$ of 60 km/h (a high estimate under urban conditions), we know all right-hand side elements of the equation and we can numerically compute the conditional probability; a numerical sketch is given below. The resulting probability density function is shown in figure~\ref{fig_3duncertenty}. Figure~\ref{fig_examples_speeddensity} shows some examples of resulting curves for various measured speeds $\spd_m$. Lower speeds, corresponding to higher average densities, lead to a higher absolute uncertainty, but the uncertainty relative to the average density decreases with higher average densities.
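The conditional density above can be evaluated directly. The sketch below discretises the real speed on $[0,\spd_\textrm{max}]$ and approximates both integrals with the trapezoidal rule; the Gaussian conditional models are illustrative stand-ins for the empirically determined distributions, and all numerical values are assumptions.

\begin{verbatim}
import numpy as np

V_MAX = 60.0  # maximum average speed [km/h]

def f_A_given_vreal(A, v_real, params, sigma_A=5.0):
    # Density given real speed: Gaussian around the inverted MFD
    # relation (illustrative stand-in for the fitted distribution)
    p1, p2, lam = params
    A_ax = np.linspace(0.01, 150.0, 1500)
    v_ax = p1 * np.exp(lam * A_ax) + p2 * A_ax * np.exp(2 * lam * A_ax)
    A_mean = np.interp(-v_real, -v_ax, A_ax)  # invert decreasing v(A)
    return np.exp(-0.5 * ((A - A_mean) / sigma_A) ** 2)

def f_vm_given_vreal(v_m, v_real, sigma_v=2.0):
    # Measured (sample-mean) speed given real speed: Gaussian model
    return np.exp(-0.5 * ((v_m - v_real) / sigma_v) ** 2)

def f_A_given_vm(A_grid, v_m, params, n_v=600):
    # f(A|v_m) = int f(A|vr) f(vm|vr) dvr / int f(vm|vr) dvr
    v_real = np.linspace(0.0, V_MAX, n_v)
    w = f_vm_given_vreal(v_m, v_real)
    fA = np.array([f_A_given_vreal(A_grid, vr, params) for vr in v_real])
    pdf = np.trapz(fA * w[:, None], v_real, axis=0) / np.trapz(w, v_real)
    return pdf / np.trapz(pdf, A_grid)  # normalise over density

A_grid = np.linspace(0.0, 150.0, 300)
pdf = f_A_given_vm(A_grid, v_m=25.0, params=(55.0, -0.25, -0.02))
\end{verbatim}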
\subsection{Effect of exceptional conditions}\label{sec_incident} In abnormal conditions, the MFD might not hold, which invalidates the method proposed here. An important question in this respect is to what extent we can recognize these situations from the data. We are hence looking for an indicator predicting the reliability of the estimate. We propose to consider the \emph{rate of change of the speed as a function of time}. An incident would cause cars to stop upstream of the incident, without cars moving downstream of the incident. Therefore, the network speed would reduce more rapidly than usual: the jam grows quickly because the outflow is lower than the regular capacity. To test the method under incidents, as well as the above-mentioned indicator, we simulate different scenarios, each with a different location of the incident. The locations of the incidents in the network can be seen in Figure \ref{incident_locations}. Some incident locations are already highly congested (the intersection with the four incidents A, C, D and E), one is on the highway (G), one in the network center (B) and one on a smaller road of less significance (F). The road incidents are simulated for one hour from 07:00-08:00. During the road incidents, one lane is blocked, reducing the capacity of the respective road segment. \begin{figure*}[t!] \subfigure[Estimated MFDs with incidents]{\includegraphics[width=.5\textwidth]{MFD_and_Incidents_Databis}\label{incident_mfd}} \subfigure[Speed measurements during the simulation of the incidents]{\includegraphics[width=.5\textwidth]{Average_network_speedsbis}\label{incident_speed}}\\ \subfigure[Speed differences after 5 minutes]{\includegraphics[width=.5\textwidth]{Speed_differences5_minutesbis}\label{speeddiff5mins}} \subfigure[Speed differences after 10 minutes]{\includegraphics[width=.5\textwidth]{Speed_differences10_minutesbis}\label{speeddiff10mins}} \caption{Effect of the incidents. Different colored lines are different incident scenarios. The incidents were simulated from 07:00 to 08:00, indicated with dashed black lines (figures b-d).} \end{figure*} Depending on the location, the traffic state differs. Figure~\ref{incident_mfd} shows the MFDs created with sample data. The speeds at incident locations B, D, and F fall within the bounds. For the other scenarios the speed falls outside the bounds: scenarios with incidents at locations A, C, E (slightly), and G. In all these cases, the speed is lower than the speed without an incident. That means that the indicator should predict that the state estimated during these incidents is not reliable. Consider what would happen if there were no reliability indicator, for instance for the traffic state of the magenta square on the blue line in figure~\ref{incident_mfd}. The estimated traffic state would be the point where a line with the estimated speed (magenta dotted line) meets the MFD, i.e. at an average density of slightly over 90 veh/km. In reality, the average density is around 63 veh/km. This erroneous estimation happens for the lines which lie outside the confidence interval. We now introduce an indicator that signals whether the estimated traffic state is out of the ordinary, i.e. whether the traffic state lies outside the band and the estimation procedure is not working properly. Since speed is the basis of our method, we also use speed as the basis of the reliability indicator. As can be seen in figure~\ref{incident_speed}, speeds drop rapidly for those incidents for which the MFD falls outside the uncertainty bands. Figures~\ref{speeddiff5mins} and \ref{speeddiff10mins} show the speed differences over a 5 and 10 minute interval respectively. Incidents at locations for which the MFD falls outside the bounds also show the largest speed drops. A threshold on the speed drop can thus indicate to what extent the network is operating normally, and hence whether the method developed in this paper can be used to estimate the traffic state; a sketch of such an indicator is given below.
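The indicator itself is simple to implement. The following sketch flags intervals where the mean network speed dropped faster than a threshold; the default of 5.75 km/h per 5 minutes is the value found for our network (see below), while the sampling interval and the differencing scheme are illustrative choices.

\begin{verbatim}
import numpy as np

def unreliable_intervals(speeds, dt_min=1.0, window_min=5.0,
                         drop_threshold=5.75):
    # speeds: mean network speeds [km/h] sampled every dt_min minutes.
    # Returns a boolean array; True marks intervals where the speed
    # dropped by more than drop_threshold km/h over window_min minutes,
    # i.e. where the MFD-based estimate should be considered unreliable.
    speeds = np.asarray(speeds, dtype=float)
    lag = int(round(window_min / dt_min))
    drops = speeds[:-lag] - speeds[lag:]   # positive = decreasing speed
    return drops > drop_threshold
\end{verbatim}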
The value for this speed threshold is expected to differ per network, both in the time window and in the magnitude of the speed drop. Observing the drop over a longer time window could make the reliability indicator itself more reliable; however, it would also delay the signalling of incidents. A combination of time windows, one for early signalling and one for more reliable signalling, could also be used. A larger network is expected to suffer less from an incident, simply because the affected part is a smaller fraction of the network. On the other hand, the bounds are then probably also tighter due to a smaller effect of stochastic fluctuations in normal conditions. Ultimately, testing on the network itself should yield the network-specific bounds. For our network, a speed reduction of 5.75 km/h in 5 minutes was a good threshold. For incident sites B, D, and F, reductions never exceeded this threshold. For incident sites C, G, and especially A, the change well exceeded the threshold. For site E, the change is around the threshold. This is very well aligned with the bounds on the MFD: incidents for which the change exceeds the threshold fall outside the bounds, and vice versa. Similarly, and more reliably, a threshold of an 8.5 km/h speed reduction over a 10-minute interval coincides well with falling inside or outside the reliability bounds. Thus, we find that the estimation procedure and the bounds are useful also in case of small network disruptions. The evolution of estimated network speeds over time is a good indicator of the size of the network disruption. If it exceeds a threshold value, the method should not be used. \section{Conclusions}\label{sec_conclusions} In this paper we developed a two-step estimation method for average network density by fusing loop detector data and probe vehicle data with low penetration rates. In the first step, a calibration step, we use loop detector data and floating car data (of a limited number of vehicles) to construct the MFD. In the second step, the traffic state (network density) is estimated based on the speed of a limited number of probe vehicles and the MFD. We explicitly address the uncertainty in the process, caused by the creation of the MFD based on data fusion, as well as the uncertainty in the traffic speed given the speed of the FCD. We conclude the following: \begin{itemize} \item The main conclusion is that there is a simple, easily implementable method which works at acceptable accuracy at low penetration rates. The MFD used for the operational speed estimate can be initialized with loop detector data and floating car data. Through simulation we show a mean estimation error of less than 15\% for penetration rates of 3\% and higher. \item The method also endogenously provides an error bound on the estimate of the average density. The accuracy of estimation is in absolute terms better for lower average densities; the relative accuracy is better for higher average densities. In both cases, the resulting traffic estimates are well differentiable (i.e., one knows the approximate level of service for a network). \item Finally, a method has been presented which can predict the reliability of the estimate using the FCD. This is based on the rate of change of the speed as a function of time. As long as this rate does not exceed a threshold, the method can be used. If the speed changes too rapidly, this indicates that a major disruption has happened and the method should not be used. \end{itemize} The method has been tested for a simulated network.
Indeed, traffic dynamics in a real network might differ. The goal of this paper was to check the consistency between the estimation method and the ground truth, and this consistency turns out to be remarkably good. Next steps are to test the method on a real network, using real data. \bibliographystyle{trb}
\section*{Supplementary Information} \section{Introduction}\label{SecIntro} This Supplementary Information document contains background information on the experimental methods and additional results, followed by a discussion of the interaction potential calculation, for both undressed and dressed potentials. Towards the end, the analytical solution of the many-body dynamics is presented, with detailed derivations of how to include dissipation caused by spontaneous emission and black-body induced decay. The theoretical curves referred to in the main text and the figures are all calculated from the analytical solution including all dissipation effects, unless stated otherwise. \section{Laser system}\label{SecLaserSyst} \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{Gross_EDfig1.pdf}% \caption{\textbf{Ramsey sequences with and without spin echo.} \textbf{a}, Microwave pulses on the $\ket{F=1, m_F=-1}$ to $\ket{F=2, m_F=-2}$ transition with length $t_p$ and areas $\Omega_{mw}t_p=\pi/2$ are shown in blue, normalised dressing laser pulses recorded by a photodiode are shown in green. This sequence is used to measure the collective field $\ensuremath{\Delta_{\vec{i}}^{(\mathrm{coll})}}\xspace$, for Fig. 1d and Fig.~\ref{fig:S2}. \textbf{b}, Sequence including an intermediate echo pulse with area $\pi$, used to obtain the correlations shown in Figs. 2 and 3. } \label{fig:S1} \end{figure*} Excitation and coupling to the Rydberg state at $297\,\mathrm{nm}$ with a single photon have several advantages over the more conventional indirect coupling in an off-resonant two-photon configuration. Next to the higher achievable coupling strength and the flexibility when working with Rydberg p-states, both the light shift and the off-resonant scattering on the $D2$ line in Rubidium are negligible for direct excitation. The required UV light at $297\,$nm was generated in two doubling steps. We started with a diode laser at $1190\,$nm which was stabilised to a passively stable reference cavity with a finesse of approximately $10,000$. Frequency tunability was provided by stabilising a sideband of a fibre-coupled electro-optic modulator on the cavity resonance. The light at $1190\,$nm was amplified with a Raman fibre amplifier system and frequency doubled in a single pass through a periodically poled lithium niobate crystal, providing a stable output power of up to $2\,$W at $595\,$nm. This light was used to seed a home-built resonant doubling cavity generating $297\,$nm light, yielding $250-300\,$mW of optical power in the ultraviolet. After spatial filtering with a $25\,\mu$m pin-hole, the light was focused onto the atoms with a waist of $44(5)\,\mu$m. We estimate the power at the position of the atoms to be $45(10)\,\mathrm{mW}$. The Rydberg resonance was measured by detecting the ground state atom loss versus laser detuning for a fixed excitation pulse time $<10\,\mu$s. Adjusting the parameters allowed us to measure resonance widths of approximately $70\,$kHz (Fig.~\ref{fig:S2}), which is likely still limited by power broadening but sets a conservative upper bound of $60\,$kHz on the laser linewidth at $297\,$nm. \section{Experimental sequence and dressing pulses}\label{SecRamsey} To probe the Rydberg-dressed interactions, we employed Ramsey interferometry. Typical sequences are shown in Fig.~\ref{fig:S1}. We prepared the system in $\ket{F=1, m_F=-1}=\ket{\downarrow}$, which was coupled to $\ket{F=2, m_F=-2}=\ket{\uparrow}$ with a microwave Rabi coupling of $\Omega_{mw}/2\pi=12.5\,\mathrm{kHz}$.
The interferometric sequence was initialized with a microwave pulse of area $\pi/2$, preparing an equal superposition of the two spin states. Next, the dressing laser was switched on for a variable time $t/2$, before a microwave spin echo pulse of area $\pi$ was applied. After a second dressing phase of identical duration, a final $\pi/2$ microwave pulse closed the interferometer and the $\ket{\downarrow}$ component was detected with single site resolution after a resonant pushout of the atoms in the state $\ket{\uparrow}$. To compare our results with the theoretically expected signal, the finite rise time of the dressing pulses had to be taken into account. This was done using the integrated interaction phase $\Phi_0=\int\Omega(t)^4/(8|\Delta|^3)\,\mathrm{d}t$ in the theoretical models, where $\Omega(t)$ denotes the instantaneous Rabi coupling, which saturates after a finite rise time at a value $\Omega_s$. The temporal shape of the pulses is captured by two time constants and determined by a combination of the rise time of the acousto-optic modulator and the bandwidth of the intensity stabilisation used to generate the pulses; a numerical sketch of this correction is given below.\\ For the Ramsey fringe measurement shown in Fig. 1d in the main text, we used a sequence without the intermediate $\pi$ echo pulse and a single dressing phase of variable time $t$ (Fig.~\ref{fig:S1}a).
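To illustrate the role of the finite pulse rise time, the following sketch numerically evaluates the integrated interaction phase $\Phi_0=\int\Omega(t)^4/(8|\Delta|^3)\,\mathrm{d}t$. Note that the actual pulses are described by two time constants, so the assumed single-exponential shape $\Omega(t)=\Omega_s(1-e^{-t/\tau})$ and the numerical values below are purely illustrative.

\begin{verbatim}
import numpy as np

# Illustrative parameters (angular frequencies)
Omega_s = 2 * np.pi * 1.33e6   # saturated Rabi coupling [rad/s]
Delta   = 2 * np.pi * 6.0e6    # laser detuning [rad/s]
tau     = 1.0e-6               # assumed single rise-time constant [s]
t_pulse = 20.0e-6              # total dressing pulse duration [s]

t = np.linspace(0.0, t_pulse, 2000)
Omega_t = Omega_s * (1.0 - np.exp(-t / tau))   # assumed pulse shape

# Integrated interaction phase Phi_0 = int Omega(t)^4/(8|Delta|^3) dt
Phi_0 = np.trapz(Omega_t**4 / (8.0 * abs(Delta)**3), t)
print(np.degrees(Phi_0))  # interaction phase in degrees
\end{verbatim}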
\section{Collective longitudinal field and Rabi frequency calibration}\label{SecCollField} Next to the direct long range spin-spin interaction, Rydberg-dressing induces a collective longitudinal field for each spin due to the interaction with all its neighbours within the interaction range, c.f. Eq.~1 in the main text. This shift can be directly extracted from the data by performing a Ramsey experiment, which is sensitive to single particle shifts (Fig.~\ref{fig:S2}). The frequency of the Ramsey fringes shown here is dominated by the single particle light shift induced by the dressing laser, \begin{equation}\label{eq:shift} \delta(\Delta)=-\frac{\Delta}{2}+\frac{1}{2}\sqrt{\Omega^2+\Delta^2}\approx\frac{\Omega^2}{4\Delta}. \end{equation} We repeated such measurements for different detunings $\Delta$ and observed the expected increase of the oscillation frequency $\nu$ (extracted from the data via an exponentially damped sinusoidal fit) with decreasing $\Delta$ (Fig.~\ref{fig:S2}b). However, a fit of the data $\nu(\Delta)$ with the single particle formula $\delta(\Delta)$ and the Rabi frequency $\Omega$ as a free parameter shows a clear systematic residual, which is removed when taking into account the mean field shift $\ensuremath{\Delta_{\vec{i}}^{(\mathrm{coll})}}\xspace$ (cf. Eq.~1 in main text). The latter is achieved with the relation $\nu(\Delta)=\delta(\Delta)-N_{\text{eff}}\frac{\Omega^4}{16\Delta^3}$ and the effective atom number $N_{\text{eff}}$ contributing to the shift. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Gross_EDfig2.pdf}% \caption{\textbf{Extraction of the collective longitudinal field for $J=1/2$ and $J=3/2$.} \textbf{a}, Fraction of detected spin down atoms in a central elliptical region of interest with a radius of $0.7$ times the average Mott insulator radius for varying detuning ($\Delta/2\pi=16, 12, 8, 6\,\mathrm{MHz}$ from top to bottom) after a Ramsey sequence without echo. The solid blue line shows a fit of a damped sinusoidal oscillation to extract the frequency $\nu(\Delta)$. The errorbars denote the standard error of the mean. \textbf{b}, The upper panel shows the extracted dependence of $\nu$ on the detuning $\Delta$ (red points), including fits with and without $\ensuremath{\Delta_{\vec{i}}^{(\mathrm{coll})}}\xspace$ (red and green solid lines). The errorbars mark the $1\sigma$ confidence interval of the fit. The inset shows a high resolution Rydberg resonance measurement by optical pushout of ground state atoms (blue points) with a Gaussian fit (blue solid line). The pictogram in the lower left corner illustrates the collective longitudinal field shift $\ensuremath{\Delta_{\vec{i}}^{(\mathrm{coll})}}\xspace$. Errorbars on the datapoints denote the standard error of the mean. The lower panel shows the residuals of the two different fits in the upper panel. The zero position is marked by the grey dashed line. The lines are guides to the eye. \textbf{c}, Same analysis as in b for Ramsey oscillations in a spin system dressed to the Rydberg state $31P_{3/2}$ $m_J=-3/2$. Contrary to b, here the oscillation frequency is negative due to the red detuning $\Delta<0$. The insets show an exemplary time trace of the mean fraction of spin down atoms $N_{\downarrow}$ for $\Delta/2\pi=-12\,$MHz and a single realisation of a spin down atom density distribution for one specific time of the oscillation with $\Delta/2\pi=-8\,\mathrm{MHz}$. The ring shaped density distribution illustrates the influence of the system boundary on the collective field. The mean atom numbers shown in this graph are obtained by averaging the result of $5-10$ experimental shots.} \label{fig:S2} \end{figure*} Leaving $\Omega$ and $N_{\text{eff}}$ as fit parameters, we obtained the best agreement for $N_{\text{eff}}=11(2)$ particles within the interaction range, which agrees well with the simple estimate $\pi R_c^2=12.5$ for the $31P_{1/2}$ state with a cut-off radius of $R_c/\alat=2.02$. At the same time this fit provides a calibration value for the Rabi coupling, yielding $\Omega_s/2\pi=1.33(7)\,\mathrm{MHz}$. The error of $5\%$ takes into account both pulse-to-pulse and day-to-day fluctuations. In the preceding analysis, we take the time dependence of $\Omega$ discussed in the previous section into account by rescaling the time axis as $\tau=\int\Omega^2(t)\,\mathrm{d}t/\Omega_s^2$, where we normalise to the asymptotic value $\Omega_s$ reached when the dressing light power has reached its maximum. This rescaling is suggested by the $\Phi\propto\Omega^2$ dependence of the phase $\Phi$ due to the dominating light shift. The rescaling factor of the time axis amounts to approximately $0.85$ for short dressing times, but for longer times between $20$ and $80\,\mu$s it merely leads to a compression of the time axis by a factor of $0.95$. Slight uncertainties in the saturation value $\Omega_s$, and hence the compression factor, do not alter the results of the quantitative analysis of Fig.~\ref{fig:S2} within the errorbars. The analysis of the collective longitudinal field also relies on accurate knowledge of $\Delta$ and hence the resonance position. Precise spectroscopy enabled the measurement of high resolution resonance curves, shown in the inset of Fig.~\ref{fig:S2}, from which the line center was extracted with an uncertainty of less than $10\,\mathrm{kHz}$. This, in combination with slight drifts during the day, determines the experimental uncertainty of $30\,\mathrm{kHz}$ for the detuning $\Delta$ assumed throughout the paper.
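For concreteness, a minimal sketch of this calibration fit is given below (in Python); the synthetic data points stand in for the measured fringe frequencies, and the numerical values are illustrative only.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def nu_model(Delta, Omega, N_eff):
    # Ramsey frequency: single-particle light shift plus collective
    # field, nu = delta(Delta) - N_eff * Omega^4 / (16 Delta^3).
    # All frequencies in MHz (factors of 2*pi divided out).
    delta_ls = -Delta / 2.0 + np.sqrt(Omega**2 + Delta**2) / 2.0
    return delta_ls - N_eff * Omega**4 / (16.0 * Delta**3)

# Illustrative data: detunings [MHz] and fringe frequencies [MHz]
Delta_data = np.array([6.0, 8.0, 12.0, 16.0])
nu_data = nu_model(Delta_data, 1.33, 11.0)  # synthetic "measurement"

popt, pcov = curve_fit(nu_model, Delta_data, nu_data, p0=(1.0, 10.0))
print("Omega/2pi = %.2f MHz, N_eff = %.1f" % tuple(popt))
\end{verbatim}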
Furthermore, from the collective longitudinal field shift one can determine whether the induced long range interaction is attractive, as in our case, or repulsive: the observed $\nu(\Delta)$ decreases due to the interactions if the sign of the interaction potential differs from the sign of the detuning. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Gross_EDfig3.pdf}% \caption{ \label{fig:S3} \textbf{Level structure relevant for dressing to $31P_{1/2}$ and $31P_{3/2}$.} \textbf{a}, Dressing configuration for coupling to $31P_{1/2}$ with the magnetic offset field of strength $B_{z}=28.6\,$G in $z$-direction, yielding a differential Zeeman splitting of $\Delta_Z/2\pi=26.7\,\mathrm{MHz}$ between the $m_J$ states. The detuning was chosen to be $\Delta/2\pi=6\,\mathrm{MHz}$ to the blue of $m_J=+1/2$, which is coupled by the $\sigma_+$ polarisation component of the dressing laser. \textbf{b}, Dressing configuration for coupling to $31P_{3/2}$ with the same magnetic field as in a, yielding a differential Zeeman splitting of $\Delta_Z/2\pi=53.5\,\mathrm{MHz}$ between the $m_J$ states. The detuning was chosen to be $\Delta/2\pi=-6\,\mathrm{MHz}$, to the red of $m_J=-3/2$, which is coupled by the $\sigma_-$ polarisation component of the dressing laser. Coupling to $m_J=+1/2$ with the equally strong $\sigma_+$ polarisation component can be neglected for the dressing potential due to the additional Zeeman detuning $\Delta_2$ and the strong scaling of the dressing potential $U(0)\propto1/\Delta^3$. \textbf{c}, For demonstrating anisotropic interactions, the magnetic field was set in the $x-y$ plane of the atoms, aligned with the direction of the excitation laser. In the field of strength $B_{xy}=0.43\,\mathrm{G}$, the Zeeman states are split by $\Delta_Z/2\pi=802\,\mathrm{kHz}$. Using $\sigma_-$ polarisation, only the $m_J=-3/2$ state is optically coupled. The detuning for the anisotropy measurement was chosen to be $\Delta/2\pi=-12\,\mathrm{MHz}$ with respect to the $31P_{3/2}$, $m_J=-3/2$ state. } \end{figure*} We repeated the same measurement also for the state $31P_{3/2}$, which is coupled to the ground state via the $\ket{F=2, m_F=-2}$ to $\ket{J=3/2, m_J=-3/2}$ transition. Fig.~\ref{fig:S2} summarises the result for a region of interest spanning an elliptical central area of the Mott insulator containing approximately half the number of atoms of the full sample. As expected, the absolute value of the Ramsey oscillation frequency increases with decreasing $|\Delta|$. The sign of the oscillation frequency indicates a decreasing transition frequency due to the dressing light. However, similarly to the observation for $31P_{1/2}$, the fit with $\Omega$ as the only free parameter does not capture the dependence of the oscillation frequency on $\Delta$. The fit residual vanishes only when taking into account the collective longitudinal field shift $\ensuremath{\Delta_{\vec{i}}^{(\mathrm{coll})}}\xspace$. From the fit, we conclude that $N_{\text{eff}}=19(1)$ particles contribute to this shift. The deviation from the expected value of $R_c^2\pi\approx3^2\pi\approx 28.3$ could be due to a reduced mean density caused by loss processes, or to residual finite size effects, as particles on the edge of the initially prepared sample experience a smaller collective shift, leading to a decreased detected average shift. From the same fit, we extract a Rabi coupling of $\Omega_s/2\pi=1.9(1)\,\mathrm{MHz}$ to the Rydberg state. For the measurements shown in Fig.
3a in the main text, the optical power was reduced and we worked with a Rabi frequency of $\Omega_s/2\pi=1.16(6)\,\mathrm{MHz}$. The same procedure was applied to calibrate the Rabi frequency for the anisotropy measurement shown in Fig. 3b in the main text, and we obtained a value of $\Omega_s/2\pi=2.45(12)\,\mathrm{MHz}$. \section{Interaction potentials induced by Rydberg-dressing\label{SecInteractionPotentials}} \subsection{Single particle excitation scheme \label{SecSingleParticleExcitation}} Fig.~\ref{fig:S3} shows the laser and state configurations relevant for the data shown in Figs.~2-4 in the main text. In all cases, we applied a magnetic field to lift the Zeeman degeneracy of the $\ket{F=2}$ ground state manifold. We ensured that the corresponding Zeeman splitting greatly exceeds the Rabi frequency of the applied microwave pulses in order to isolate the $\ket{F=1,m_F=-1}\rightarrow\ket{F=2,m_F=-2}$ transition as the spin system. Therefore the orientation of the magnetic field defines the quantisation axis for the effective spins. The $31P_{1/2}$ Rydberg state (Fig.~\ref{fig:S3}a) is coupled by the $\sigma_{+}$ polarisation component of the excitation beam, which propagated in the $x-y$ plane at an angle of $90^\circ$ with respect to the magnetic field $B_{z}=28.6\,\mathrm{G}$ applied along the $z$-direction (c.f. Fig.~1b). Hence, the effective spins were defined orthogonal to the plane of the lattice. The two Rydberg Zeeman sublevels $m_J=-1/2$ and $m_J=+1/2$ are split in this field by $\Delta_Z/2\pi=26.7\,\mathrm{MHz}$. Due to optical selection rules, only the state $\ket{J=1/2, m_J=+1/2}$ is coupled to the ground state. We chose a blue detuning of $\Delta/2\pi=6\,\mathrm{MHz}$ to operate in the weak dressing regime. For the data shown in Fig.~3a of the main text, the ground state $\ket{F=2, m_F=-2}$ is coupled to the $31P$ $\ket{J=3/2, m_J=-3/2}$ Rydberg state with a red detuning of $\Delta/2\pi=-6\,\mathrm{MHz}$ (Fig.~\ref{fig:S3}b). Since we used a linearly polarised excitation beam with the polarisation axis lying in the plane of the atomic lattice, the $\ket{J=3/2, m_J=+1/2}$ state is optically coupled as well. However, the large Zeeman splitting $\Delta_Z/2\pi=53.5\,\mathrm{MHz}$ of the Rydberg manifold due to the applied magnetic field, $B_{z}=28.6\,\mathrm{G}$, rendered this coupling negligible. Again the effective spins formed by the two ground states were defined perpendicular to the optical lattice, along the magnetic field axis. For the data shown in Fig.~3b of the main text, we used a smaller magnetic field $B_{xy}=0.43\,\mathrm{G}$ that was aligned in the plane of the atoms along the diagonal of the square lattice set by the optical trapping fields. Optical coupling was provided by the $\sigma_-$-polarised Rydberg-excitation laser propagating along the magnetic field direction. Therefore, in this case, the quantisation axis of the effective spins was defined in the plane of the optical lattice. \subsection{Calculation of the Rydberg-Rydberg atom interaction potential}\label{SecBarePotentials} For the calculations of the Rydberg-Rydberg interaction potentials we consider two atoms with an interatomic separation vector $\bf R$.
The uncoupled ground states do not participate in the Rydberg-dressing, such that it suffices to consider the ground state $\ket{\rm g} = \ket{F = 2, m_F = -2}$ and the manifold of Rydberg states $\ket{e} = |n L J m_J\rangle$, with principal quantum number $n$, orbital angular momentum $L$, and total angular momentum $J$ with its projection $m_J$ along the quantisation axis. Introducing pair states, $\ket{\alpha\beta}$ ($\alpha,\beta={\rm g},e$), the corresponding atomic Hamiltonian can be written as \begin{equation}\label{eq:HA} \hat{H}_{\rm A}=-\sum_e \Delta_e(\ket{{\rm g}e}\bra{{\rm g}e}+\ket{e{\rm g}}\bra{e{\rm g}})-\sum_{e,e^\prime} (\Delta_e+\Delta_{e^\prime})\ket{e\Rydblvl^\prime}\bra{e\Rydblvl^\prime}, \end{equation} where $\Delta_e$ denotes the laser detuning with respect to a given Rydberg state, with $\Delta_{e_0}=\Delta$ being the detuning of the targeted Rydberg state, i.e., $\ket{e_0}=\ket{31 P J m_J}$ for the measurements presented in this work. Note that the detunings $\Delta_e$ also depend on $m_J$ through the Zeeman shift by the external magnetic field, whose direction we chose to define the quantisation axis. For these Rydberg states the minimum atomic distance set by the lattice constant is sufficiently large to justify a leading-order description of the electrostatic atomic interaction in terms of the dipole-dipole interactions, in atomic units, \begin{equation}\label{eq:Vdd} \hat{V}_{\rm dd}=\sum_{e,\bar{e},e^\prime,\bar{e}^\prime}\left[\frac{{\bf{d}}_{e,\bar{e}} \cdot {\bf{d}}_{e^{\prime},\bar{e}^{\prime}}}{R^3} - \frac{3({\bf{d}}_{e,\bar{e}}\cdot {\bf R})({\bf{d}}_{e^\prime,\bar{e}^\prime}\cdot {\bf R})}{R^5}\right]\ket{\bar{e}\bar{e}^\prime}\bra{e\Rydblvl^\prime}, \end{equation} that couple atomic pair states with corresponding one-body transition matrix elements ${\bf d}_{e,\bar{e}}=-\bra{e}{\bf r}\ket{\bar{e}}$. We have diagonalised $\hat{H}_A+\hat{V}_{\rm dd}$ using a large basis set of atomic pair states around the laser coupled asymptote, $\ket{e_0e_0}$, ensuring convergence for distances relevant to our experiments. This yields a set of potential curves $V_{\mu}({\bf R})$ (Fig.~\ref{fig:S4}) corresponding to the molecular eigenstates \begin{equation}\label{eq:MolecularEigenstates} \ket{\mu({\bf R})} = \sum_{e\Rydblvl^\prime} c^{(\mu)}_{e\Rydblvl^\prime}({\bf R}) \ket{e\Rydblvl^\prime}, \end{equation} which depend on the distance and orientation ($\bf R$) of the interacting atomic pair through the coefficients $c^{(\mu)}_{e\Rydblvl^\prime}({\bf R})$. The effective potential generated by Rydberg-dressing is determined both by the potential curves, $V_\mu$, and by the associated molecular states, $\ket{\mu}$.\\ \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Gross_EDfig4.png} \caption{\textbf{Calculated interaction potentials.} {\bf a} and {\bf b}, Upper panels: Rydberg-Rydberg atom potential curves relevant for coupling to $31P_{1/2}$ and $31P_{3/2}$ in a magnetic field of $B_z=28.6\,\mathrm{G}$ aligned with the $z$ axis. Each of the shown asymptotes belongs to a combination of $m_J$ such that the asymptotic splitting corresponds to twice the Zeeman splitting $\Delta_Z$. The intensity of the red colouring indicates the relative coupling strength $\bar{\alpha}_{e_0{\rm g}}^{(\mu)}$ [Eq.(\ref{eq:amu})] for a given potential curve $V_\mu$. Lower panels: Resulting interaction potential $U$ between two Rydberg-dressed ground state ($\ket{\tilde{g}}$) atoms.
The dashed vertical line marks the range $R_c$ of the effective interaction, determined by the detuning and the interaction potential of the most strongly coupled pair-state according to $|V_\mu(R_{\rm c})|=2|\Delta|$. The inset of b provides a closer view at small $R$. {\bf c} and {\bf d}, Same plots as a and b, for a magnetic field $B_{xy} = 0.43\,\mathrm{G}$ aligned in-plane with the atoms, and the atomic separation vector ${\bf R}$ either aligned with the magnetic field c, or perpendicular to it d, as depicted in the insets. The dressing-induced interactions have been obtained for Rabi frequencies of a $\Omega_s/2\pi=1.33\,$MHz, b $\Omega_s/2\pi=1.25\,$MHz and c, d $\Omega_s/2\pi=2.45\,$MHz and detunings of a $\Delta/2\pi=6\,$MHz, b $\Delta/2\pi=-6\,$MHz and c, d $\Delta/2\pi=-12\,$MHz. } \label{fig:S4} \end{figure*} \subsection{Calculation of the ground state interaction potential induced by Rydberg-dressing}\label{SecDressedPotentials} We now turn our attention to the laser coupling \begin{widetext} \begin{equation} \hat{H}_{\rm L}=\frac{\Omega}{2}\sum_e\alpha_e \ket{{\rm gg}}\left(\bra{{\rm g}e}+\bra{e{\rm g}}\right)+\frac{\Omega}{2}\sum_{e,\mu}\bar{\alpha}_{e{\rm g}}^{(\mu)}\left(\ket{{\rm g}e}+\ket{e{\rm g}}\right)\bra{{\mu}({\bf R})}+{\rm h.c.}, \end{equation} \end{widetext} where the coefficients $\alpha_{e}$ account for the different laser-coupling strength of a given Rydberg state $\ket{e}$ relative to the Rabi frequency $\Omega$ of the target state $\ket{e_0}$, for which $\alpha_{e_0}=1$. The first term describes the laser-coupling of the two-atom ground state to the singly excited pair states, while the second term corresponds to the subsequent coupling to a doubly excited molecular Rydberg state $\ket{\mu}$. The corresponding coefficient \begin{equation}\label{eq:amu} \bar{\alpha}_{e{\rm g}}^{(\mu)}=\sum_{e^\prime}\alpha_{e^\prime}c^{(\mu)}_{e\Rydblvl^\prime} \end{equation} accounts for the laser-coupling to a given molecular state relative to $\Omega$ and follows from the calculated form of $\ket{\mu({\bf R})}$ given in Eq.(\ref{eq:MolecularEigenstates}). The red colouring of the molecular potential curves shown in Fig.~\ref{fig:S4} corresponds to $\bar{\alpha}_{e_0{\rm g}}^{(\mu)}$ and is thus indicative of the laser-coupling to a given curve via the target state $\ket{e_0}$. The distance dependent Rabi frequencies $\bar{\alpha}_{e{\rm g}}^{(\mu)}\Omega$ in conjunction with the potential curves $V_\mu$ give rise to a distance dependent light shift $\delta E({\bf R})$ of the dressed ground state $\ket{\tilde{\rm g}\tilde{\rm g}}$. The resulting effective potential due to Rydberg-dressing thus emerges as the difference between the two-atom light shift and its asymptotic value, $U({\bf R})=\delta E({\bf R})-\delta E(|{\bf R}|\rightarrow\infty)$. We obtain this potential by numerically diagonalising the full Hamiltonian $\hat{H}_{\rm A}+\hat{V}_{\rm dd}+\hat{H}_{\rm L}$. Fig.~\ref{fig:S4} shows the potentials obtained for dressing to the $31P_{1/2}$ (Fig.~\ref{fig:S4}a) and $31P_{3/2}$ (Figs.~\ref{fig:S4}b-d) states, respectively. Qualitatively, the potentials exhibit the expected soft-core shape, with a $1/R^6$ tail at large $R$ saturating to a finite plateau value near $R = 0$, while quantitative differences arise from the more complex potential structure compared to the assumption of simple van der Waals interactions between Rydberg atoms.
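For orientation, the idealised two-level case with a plain van der Waals interaction admits a familiar closed-form soft-core potential; the sketch below evaluates it numerically. It serves only as a reference for the qualitative shape discussed above, with plateau $U(0)=\Omega^4/(8\Delta^3)$ and a $1/R^6$ tail; the parameter values are illustrative and the formula does not capture the multilevel structure of the full calculation.

\begin{verbatim}
import numpy as np

def dressed_potential(R, Omega, Delta, C6):
    # Perturbative soft-core potential between two Rydberg-dressed
    # ground-state atoms for a plain van der Waals curve V(R)=C6/R^6
    # (two-level approximation, hbar = 1, angular frequency units).
    # Limits: U -> Omega^4/(8 Delta^3) for R -> 0,
    #         U -> (Omega/(2 Delta))^4 * C6/R^6 for large R.
    Rc = (abs(C6) / (2.0 * abs(Delta)))**(1.0 / 6.0)  # soft-core radius
    U0 = Omega**4 / (8.0 * Delta**3)                  # plateau height
    return U0 / (1.0 + (R / Rc)**6)

# Illustrative values: Omega, Delta in 2*pi MHz, C6 in 2*pi MHz um^6
R = np.linspace(0.5, 10.0, 200)   # interatomic distance [um]
U = dressed_potential(R, Omega=1.33, Delta=6.0, C6=1.0e3)
\end{verbatim}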
Our exact results reveal sharp resonances in the plateau region, which arise from resonances with molecular potentials $V_\mu$ that feature a finite value of $\bar{\alpha}_{e{\rm g}}^{(\mu)}$. Since $\bar{\alpha}_{e{\rm g}}^{(\mu)}\ll1$, these resonances are generally very narrow, but they can nevertheless cause resonant Rydberg-state excitation, which leads to increased losses. Through a careful choice of the targeted Rydberg states, such resonant losses are avoided at the interatomic distances set by our optical lattice. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Gross_EDfig5.pdf}% \caption{ \label{fig:S5} \textbf{Dynamics of atom number histograms.} We show atom number histograms for varying interaction phases $\Phi_0$ (as in Fig. 2 in the main text), with and without spin sensitive detection (upper and lower rows, respectively). The dashed (solid) lines in the histograms are results of fits with a sum of Gaussian peaks for the left (right) peaks. The red dashed line marks the post-selection threshold, the yellow shaded area indicates the data used to calculate $\gtwo$ shown in Fig. \ref{fig:S6}. The histograms were extracted from the atom number in a region of interest of $9\times9$ central sites to suppress the effect of edge fluctuations of the sample. } \end{figure*} Figs.~\ref{fig:S4}a and b show the potentials used for the calculations of Figs. 2 and 3a of the main text. As can be seen, changing the angular momentum from $J=1/2$ to $J=3/2$ allows us to switch the sign of the most relevant Rydberg-Rydberg atom potential curve and thereby makes it possible to generate attractive (Fig.~\ref{fig:S4}a) and repulsive (Fig.~\ref{fig:S4}b) effective spin interactions. Moreover, the absolute value of the Rydberg-Rydberg atom interaction is significantly enhanced for $J=3/2$, leading to a longer range of the corresponding potential. In both of these cases we used a linearly polarised excitation laser whose propagation direction and polarisation vector lie in the plane of the atomic lattice. As this laser-field orientation breaks the rotational symmetry, one might expect an anisotropy of the dressing-induced interactions. However, we have applied a large magnetic field, $B_z=28.6\,\mathrm{G}$, orthogonal to the lattice plane, which causes a Zeeman shift that greatly exceeds both the Rabi frequency, $\Omega$, and the detuning $\Delta$ of the Rydberg-excitation laser (Fig.~\ref{fig:S3}). Hence, the magnetic field dictates the underlying symmetry and thereby forces the induced interactions to be isotropic, which is directly reflected in our correlation measurements shown in Figs.~2a and 3a of the main text. This situation changes dramatically for the measurements shown in Fig. 3b of the main text, where we used a smaller magnetic field, $B_{xy}=0.43\,\mathrm{G}$, now aligned in the plane of the atoms along the propagation axis of the circularly polarised excitation beam. As shown in Figs.~\ref{fig:S4}c and d, this leads to strongly anisotropic interactions. Note that the actual potential curves $V_{\mu}$ remain nearly isotropic, acquiring just a weak anisotropy due to the applied magnetic field. However, the composition of the molecular states in Eq. (\ref{eq:MolecularEigenstates}) is very sensitive to the orientation of the distance vector ${\bf R}$ with respect to the quantisation axis, which is fixed relative to the wave vector of the excitation laser and the magnetic field direction.
As a result, the laser effectively couples to different curves depending on the orientation, as indicated by the red colouring in Figs.~\ref{fig:S4}c and d. Eventually, this leads to the strong anisotropy of the calculated dressing-induced interactions and of our measured correlation functions shown in Fig. 3b of the main text. These examples demonstrate the high degree of tunability via the choice of the targeted Rydberg state as well as by controlling the strength and orientation of the applied magnetic field and excitation laser. \section{Correlated loss and post-selection}\label{SecPostSelection} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Gross_EDfig6.pdf}% \caption{ \label{fig:S6} \textbf{Dynamics and correlations on post-selected data.} \textbf{a}, Azimuthally averaged $\gtwo$-correlations with post-selection for increasing interaction phases $\Phi_0$ (yellow datapoints). The grey shaded area denotes the region of uncertainty for the measured spin correlation due to the $5\%$ fluctuations and uncertainty of the Rabi frequency $\Omega_s$. The solid yellow line represents a best fit of the correlation amplitude adjusting $\Omega$, shifted by the constant offset $\gtwo_{\infty}$ (grey dashed line, see main text for details). \textbf{b}, Fraction of atoms extracted from the Gaussian fit to the measured atom number histograms (Fig. \ref{fig:S5}) in a region of interest of $9\times9$ sites with and without spin selective detection (blue and red points). The grey shaded area indicates the expected fraction of spin down atoms assuming fully coherent Ising dynamics for a Rabi frequency range $\Omega_s/2\pi=1.33(7)\,\mathrm{MHz}$, with the center indicated by the dashed blue line. The red solid line indicates the initial state filling of $97\%$ in our experiment. The grey dashed line marks the expected fraction of $1$ for a perfectly filled initial state.} \end{figure*} To obtain deeper insight into the loss processes observed in the experiment, we analyse in detail the atom number histograms underlying the mean atom numbers shown in Fig. 4, on a region of interest of $9\times9$ sites in the center of the initial Mott insulator. The set of atom number histograms is shown in Fig.~\ref{fig:S5}. The observed long tail for short times and the bimodality for later times are not expected from the coherent theory without dissipation, and they support the hypothesis of a loss process which is correlated in the sense that a single trigger event leads to a significant loss of atoms in the whole ensemble. Assuming this kind of process, one can explain the bimodal histograms as well as the spatial offset of the correlation measurements. In order to analyse the measurement results independently of this loss process, we fit the histograms with a sum of two Gaussians (Fig.~\ref{fig:S5}). The strong bimodality for long times allows us to interpret the right Gaussian peak as mostly consisting of measurement outcomes where no trigger event has happened. This is confirmed by using the position of this peak as an estimate for the mean atom number, which agrees well with the coherent theory without dissipation, both with and without spin selective detection (Fig.~\ref{fig:S6}); a similar measurement could reveal the scaling of interaction induced dephasing \cite{Mukherjee2015}. Notably, the data without spin selective detection shows no significant decay up to the largest times, which is in accordance with the lifetime of $2.2\,\mathrm{ms}$ expected if the ideal dressing approximation is applied.
Focusing on the left peaks, the ratio of the center positions for the measurements with and without resolving the spin equals approximately $1/2$ for interaction phases $\Phi_0\geq23(5)\,\mathrm{^{\circ}}$, which indicates that before the final $\pi/2$-pulse one spin component is fully eliminated from the sample by the loss process. We can furthermore compare the integrated weight within the right fitted Gaussian peak for detection with or without spin-resolved measurement. Here, we obtain very similar weights for the two cases, which allows us to conclude that the phase acquired in events without a trigger event is not significantly affected by the loss process. Exploiting the strong bimodality of the atom number histograms also allows us to analyse the spin-spin correlation function $\gtwo$ (Fig. 2) on post-selected datasets. If only events are kept where the detected fraction of atoms within the region of interest of $9\times9$ sites is larger than $1/2$, we obtain the spin-spin correlation shown in Fig.~\ref{fig:S6}. Especially for the small interaction phase $\Phi_0=12(3)\,\mathrm{^\circ}$, the bimodality of the histogram has not yet fully developed, which hints at a finite time scale associated with the loss of the sample after the initial black-body trigger event. While this introduces some ambiguity into the post-selection procedure, setting a cutoff fraction of $1/2$ appears reasonable since a $\ket{\downarrow}$-fraction lower than this value cannot be expected from the purely unitary dynamics at early times. For the largest two interaction phases $\Phi_0\geq35(7)\,\mathrm{^\circ}$, however, the post-selection mainly retains events in the high atom number peak, which predominantly result from the unitary dynamics, undisturbed by dissipative processes. Comparing the $\gtwo$ amplitude for increasing interaction phase $\Phi_0$ with the ideal analytic result without loss, and taking into account the $5\%$ variation of the Rabi frequency $\Omega_s/2\pi=1.33(7)\,\mathrm{MHz}$, we obtain quantitative agreement. Whereas the correlation amplitude is not changed significantly, apart from an increase for the largest interaction phases $\Phi_0$, the constant global offset $\gtwo_{\infty}$ is strongly reduced, as expected when the loss events with a correlation range exceeding the size of the whole sample are removed. For the $31P_{3/2}$ anisotropy data shown in Fig. 3b in the main text, the anisotropic correlation signal nearly disappears into the background when we analyse the data without post-selection (Fig.~\ref{fig:S7}). As in the $31P_{1/2}$ case, this background stems from the atom loss initiated by black-body radiation. However, the post-selection filters out the corresponding events effectively and yields a vanishing offset for $\gtwo$, in quantitative agreement with the fully coherent dynamics predicted by our theory (Fig.~3b). Note that Fig.~3b shows the only post-selected dataset in the main text. \section{Theoretical description of the spin dynamics}\label{SecTheoSpinDynamics} Although the applied pulse sequences (Fig.~\ref{fig:S1}) can generate sizeable spin correlations and entanglement, one can nevertheless formulate an exact solution for the final many-body state. This is possible because the dynamics induced by the microwave pulses and the Rydberg-dressing stages separately permit an analytical description.
The time evolution operator $\hat{U}_{\pi/2}=\bigotimes_i \hat{U}_{\frac{\pi}{2}}^{(i)}$ of the microwave $\pi/2$-pulse factorizes into one-body operators acting on particle $i$ \begin{equation} \hat{U}_{\frac{\pi}{2}}^{(i)} = \frac{1}{\sqrt{2}}\matfour{1}{-\mathrm{i}}{-\mathrm{i}}{1}\;, \end{equation} transforming each spin state as $\ket{\downarrow} \to (\ket{\downarrow} - \mathrm{i}\ket{\uparrow})/\sqrt{2}$ and $\ket{\uparrow} \to (\ket{\uparrow} - \mathrm{i}\ket{\downarrow})/\sqrt{2}$. Similarly, we have $\hat{U}_{\pi}=\bigotimes_i \hat{U}_{\pi}^{(i)}$ with \begin{equation} \hat{U}_{\pi}^{(i)} = \hat{U}_{\frac{\pi}{2}}^{(i)}\hat{U}_{\frac{\pi}{2}}^{(i)}=\matfour{0}{-\mathrm{i}}{-\mathrm{i}}{0}\;. \end{equation} The Hamiltonian that governs the spin evolution during the Rydberg-dressing stage is given by Eq.(1) of the main text. For the analysis below it is more convenient to rewrite the Hamiltonian as \begin{equation}\label{EqHamiltonianInteractionPhase} \hat{H}_{\rm dr} = \sum_{i<j} U_{ij}(t) \hat{\sigma}_{\uparrow\uparrow}^{(i)}\hat{\sigma}_{\uparrow\uparrow}^{(j)} + \sum_i \delta(t) \hat{\sigma}_{\uparrow\uparrow}^{(i)} \end{equation} in terms of projection operators $\hat{\sigma}_{\uparrow\uparrow}^{(i)}=\ket{\uparrow}_i\bra{\uparrow}$ for the $i^{\rm th}$ atom. The single-atom energy shift is given by Eq.(\ref{eq:shift}) and $U_{ij}$ denotes the time-dependent Ising spin interaction generated by the Rydberg-dressing pulse. The corresponding time evolution operator over the dressing period \begin{equation} \hat{U}_{\rm dr}=\bigotimes_i\exp\left(-\mathrm{i}\varphi \hat{\sigma}_{\uparrow\uparrow}^{(i)}-\mathrm{i}\sum_{j>i}\Phi_{ij} \hat{\sigma}_{\uparrow\uparrow}^{(i)}\hat{\sigma}_{\uparrow\uparrow}^{(j)}\right) \end{equation} induces correlated phase rotations while leaving the spin-state populations unchanged. Here we have defined the phases \begin{equation} \varphi =\int \delta(t) {\rm d}t\:,\quad\Phi_{ij} =\int U_{ij}(t) {\rm d}t \end{equation} where the time integration is understood to extend over a single dressing pulse. Note that $\ket{\uparrow}$ refers to the dressed hyperfine ground state, assuming adiabatic following of the varying Rabi frequency $\Omega(t)$. In particular $\ket{\uparrow}=\ket{F=2,m_F=-2}\equiv\ket{g_\uparrow}$ if $\Omega=0$ during the microwave pulses. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Gross_EDfig7.pdf}% \caption{\textbf{Atom number histogram for anisotropy measurement.} The number histogram of atoms remaining after the Ramsey sequence including spin echo (Fig. 3b) shows strong bimodality. Evaluating the spin-spin correlation on the events in the left peak of the histogram, as well as on the full dataset, shows no significant structure but a large offset $\gtwo_{\infty}$ (left and middle panels above the figure), whereas post-selection on the events in the right peak bears out the anisotropic correlation (right panel above the figure). } \label{fig:S7} \end{figure} \subsection{Ramsey spectroscopy}\label{SecTheoRamsey} In our experiment we prepare all atoms in the $\ket{\downarrow}$-state, i.e. the initial many-body state is $\ket{\Psi(0)}=\ket{\downarrow\downarrow\ldots\downarrow}$.
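Before proceeding, the one-body pulse matrices introduced above can be checked numerically; the following quick illustration (ours, not part of the original derivation) assumes the $(\ket{\downarrow},\ket{\uparrow})$ basis ordering.
\begin{verbatim}
# Sanity check of the single-particle pulse unitaries:
# U_pi/2 maps |down> -> (|down> - i|up>)/sqrt(2), and U_pi = U_pi/2 @ U_pi/2.
import numpy as np

U_half = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)
down = np.array([1, 0], complex)
up = np.array([0, 1], complex)

assert np.allclose(U_half @ down, (down - 1j * up) / np.sqrt(2))
assert np.allclose(U_half @ up, (up - 1j * down) / np.sqrt(2))
assert np.allclose(U_half @ U_half, np.array([[0, -1j], [-1j, 0]]))
print("pulse unitaries consistent with the text")
\end{verbatim}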
Application of the first $\pi/2$-pulse then creates a superposition state \begin{equation} \hat{U}_{\frac{\pi}{2}}\ket{\Psi(0)}=\sum_{\sigma_1, \ldots, \sigma_N} f_{\sigma_1}f_{\sigma_2} \ldots f_{\sigma_N} \ket{\sigma_1 \sigma_2 \ldots \sigma_N}, \end{equation} of all possible spin configurations, where $\sigma_i \in \{\downarrow, \uparrow\}$ and $f_{\sigma_i}$ denotes the amplitude of state $\ket{\sigma_i}$ for atom $i$, i.e. $f_{\downarrow} = 1/\sqrt{2}$ and $f_{\uparrow} = -\mathrm{i}/\sqrt{2}$. The subsequent dressing stage does not affect the populations of these $N$-body state components but causes a state-dependent phase picked up by each component $\ket{\sigma_1 \sigma_2 \ldots \sigma_N}$. Applying the time evolution operator given above, we can therefore write the many-body state at the end of the dressing stage as \begin{equation}\label{eq:psi} \ket{\Psi } = \sum_{\sigma_1, \ldots, \sigma_N} \prod_{i} \br{f_{\sigma_i} \mathrm{e}^{-\mathrm{i} \varphi s^{(\uparrow)}_{\sigma_i}}\mathrm{e}^{-\mathrm{i} \sum_j \Phi_{ij} s^{(\uparrow)}_{\sigma_i}s^{(\uparrow)}_{\sigma_j}}}\ket{\sigma_1 \sigma_2 \ldots \sigma_N}, \end{equation} where $s^{(\uparrow)}_{\sigma_i}$ is defined by $\hat{\sigma}_{\uparrow\uparrow}^{(i)}\ket{\sigma_i}=s^{(\uparrow)}_{\sigma_i}\ket{\sigma_i}$, i.e., $s^{(\uparrow)}_{\sigma_i}=1$ if $\sigma_i = \uparrow$ and $0$ otherwise. The Ramsey interferometer is closed by a final $\pi/2$ pulse, such that we can calculate any observable after the Ramsey sequence from \begin{equation}\label{eq:exp} \langle\hat{O}\rangle = \bra{\Psi }\hat{U}_{\frac{\pi}{2}}^{-1} \hat{O} \hat{U}_{\frac{\pi}{2}} \ket{\Psi }. \end{equation} For instance, for the spin projection operator $\hat{\sigma}_{\downarrow\downarrow}^{(i)}$, measured in our experiment, we have \begin{equation}\label{eq:sdd} \hat{U}_{\frac{\pi}{2}}^{-1} \hat{\sigma}_{\downarrow\downarrow}^{(i)} \hat{U}_{\frac{\pi}{2}} = \frac{1}{2} \left[\hat{\sigma}_{\downarrow\downarrow}^{(i)} + \hat{\sigma}_{\uparrow\uparrow}^{(i)} + \mathrm{i}(\hat{\sigma}_{\uparrow\downarrow}^{(i)} - \hat{\sigma}_{\downarrow\uparrow}^{(i)})\right], \end{equation} where $\hat{\sigma}_{\uparrow\downarrow}^{(i)}=\ket{\uparrow}_i\bra{\downarrow}$. Combining Eqs.(\ref{eq:psi})-(\ref{eq:sdd}) we finally obtain for the Ramsey signal \begin{equation}\label{EqRamseyCoherent} \langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle = \frac{1}{2} - \frac{1}{2} \mathrm{Re}\brs{\mathrm{e}^{-\mathrm{i} \varphi}\prod_{k \neq i} \br{\frac{1}{2} + \frac{1}{2}\mathrm{e}^{-\mathrm{i} \Phi_{ik}}}}. \end{equation} \begin{figure*} \includegraphics[width=0.8\textwidth]{Gross_EDfig8.pdf} \caption{\textbf{Spontaneous decay channels for bare and dressed atomic states.} \textbf{a}, Schematic representation of the bare atomic levels, indicating decay rates from the Rydberg state. \textbf{b}, Dressed levels (up to first order perturbation theory): the $\ket{g_\uparrow}$ state is dressed with the Rydberg level $\ket{e}$, such that the new eigenstates $\ket{\uparrow}$ and $\ket{\tilde{e}}$ acquire a light shift.
Additionally, new decay channels open up out of the dressed ground state $\ket{\uparrow}$ to all other states, including itself.} \label{fig:S8} \end{figure*} \subsection{Spin echo sequence}\label{SecTheoEcho} Repeating the same steps for the spin echo sequence, we can write the $N$-body wave function after the second dressing stage as \begin{eqnarray} \ket{\Psi} &=& \sum_{\sigma_1, \ldots, \sigma_N } \prod_{i} \Big(\tilde{f}_{\sigma_i} \mathrm{e}^{-\mathrm{i} \left[\varphi^{(1)} s_{\sigma_i}^{(\downarrow)} + \varphi^{(2)}s_{\sigma_i}^{(\uparrow)}\right]} \nonumber\\ &&\times \mathrm{e}^{-\mathrm{i} \sum_j \left[ \Phi_{ij}^{(1)}s_{\sigma_i}^{(\downarrow)}s_{\sigma_j}^{(\downarrow)} + \Phi_{ij}^{(2)} s_{\sigma_i}^{(\uparrow)} s_{\sigma_j}^{(\uparrow)}\right]}\Big)\ket{\sigma_1 \sigma_2 \ldots \sigma_N}, \end{eqnarray} where $\tilde{f}_{\sigma_i}$ is the single particle amplitude after the $\pi$ pulse ($\tilde{f}_{\downarrow} = -1 / \sqrt{2}$ and $\tilde{f}_{\uparrow} = -\mathrm{i} / \sqrt{2}$) and $s_{\sigma_j}^{(\downarrow)}=1-s_{\sigma_j}^{(\uparrow)}$. The phases $\varphi^{(1)}$ and $\varphi^{(2)}$ denote the total accumulated single particle phases of the first and second dressing pulse, respectively. Similarly, $\Phi_{ij}^{(1)}$ and $\Phi_{ij}^{(2)}$ denote the total accumulated interaction phases during the first and second dressing pulse, respectively. Using this expression in Eq.~(\ref{eq:sdd}) we obtain for the spin echo signal \begin{equation}\label{EqEchoSignalCoherent} \langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle = \frac{1}{2} + \frac{1}{2}\mathrm{Re} \brs{\mathrm{e}^{\mathrm{i} [\varphi^{(1)} - \varphi^{(2)}]}\prod_{j\neq i} \frac{1}{2}\br{\mathrm{e}^{\mathrm{i}\Phi_{ij}^{(1)}}+\mathrm{e}^{-\mathrm{i}\Phi_{ij}^{(2)}}}}. \end{equation} Ideally, the two dressing pulses are identical, such that $\varphi^{(1)} = \varphi^{(2)} = \varphi$ and $\Phi_{jk}^{(1)} = \Phi_{jk}^{(2)} \equiv \Phi_{jk}$. Using the experimentally determined laser pulses, we have studied the effects of field fluctuations and found them to be negligible for all relevant observables. Hence, we neglect potential asymmetries between the two pulses, such that Eq.~(\ref{EqEchoSignalCoherent}) reduces to \begin{equation}\label{EqEchoCoherent} \langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle = \frac{1}{2} + \frac{1}{2}\prod_{j\neq i}\cos (\Phi_{ij}). \end{equation} Following the same procedure, one can readily obtain the corresponding correlation function \begin{widetext} \begin{eqnarray} g_{ij}^{(2)}&=& \langle \hat{\sigma}_{\downarrow\downarrow}^{(i)} \hat{\sigma}_{\downarrow\downarrow}^{(j)}\rangle - \langle \hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle \langle \hat{\sigma}_{\downarrow\downarrow}^{(j)}\rangle \nonumber \\ &=& \frac{1}{8}\left(\prod_{k\neq i,j}\cos\Phi_{k,ij}^{(+)}+\prod_{k\neq i,j}\cos\Phi_{k,ij}^{(-)}\right)-\frac{1}{4}\cos^2\Phi_{ij}\prod_{k\neq i,j} \cos\Phi_{ik} \cos\Phi_{jk}\label{Eqg2Coherent}\\ &\approx&\frac{\Phi_{ij}^2}{4}\:\:({ \Phi_{ij}\ll1}), \end{eqnarray} \end{widetext} where $\Phi_{k,ij}^{(\pm)}=\Phi_{ik}\pm\Phi_{jk}$, and $\Phi_{ii} = 0$. In the last step we have assumed $\Phi_{ij}\ll1$, valid for short dressing times, in which case the correlation function directly reflects the shape of the interaction potential. \subsection{Including spontaneous emission}\label{SecSpontEm} In the presence of dissipative effects, the induced spin dynamics still permits an analytical treatment \cite{Foss-Feig2013}.
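As an aside, the coherent closed-form expressions just derived, Eqs.~(\ref{EqEchoCoherent}) and (\ref{Eqg2Coherent}), are straightforward to evaluate numerically before dissipation is included. The sketch below (ours, purely illustrative) assumes a soft-core interaction phase profile $\Phi(r)=\Phi_0/[1+(r/r_c)^6]$ on a small square lattice; the parameter values are not those of the experiment.
\begin{verbatim}
# Evaluate the coherent spin-echo signal and g2 correlation on an L x L lattice.
import numpy as np

L = 11
phi0, rc = 0.4, 2.0                  # assumed peak phase and soft-core radius
coords = np.array([(x, y) for x in range(L) for y in range(L)], float)
r = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
Phi = phi0 / (1.0 + (r / rc) ** 6)   # assumed soft-core interaction phase
np.fill_diagonal(Phi, 0.0)           # Phi_ii = 0 by convention

N = L * L
i = N // 2                                            # central site
mask_i = np.arange(N) != i
echo_i = 0.5 + 0.5 * np.prod(np.cos(Phi[i, mask_i]))  # Eq. (EqEchoCoherent)

def g2(i, j):
    """Coherent spin-echo correlation, Eq. (Eqg2Coherent)."""
    k = (np.arange(N) != i) & (np.arange(N) != j)
    plus = np.prod(np.cos(Phi[i, k] + Phi[j, k]))
    minus = np.prod(np.cos(Phi[i, k] - Phi[j, k]))
    conn = np.cos(Phi[i, j])**2 * np.prod(np.cos(Phi[i, k]) * np.cos(Phi[j, k]))
    return (plus + minus) / 8.0 - conn / 4.0

j = i + 1                                             # nearest neighbour
print(f"echo signal at centre: {echo_i:.4f}")
print(f"g2(i,i+1) = {g2(i, j):.4e}  (short-time approx: {Phi[i, j]**2/4:.4e})")
\end{verbatim}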
Here, we first focus on the effects of spontaneous emission, i.e. the spontaneous decay of a Rydberg state to one of the atomic ground states. In the following, it is important to distinguish between the bare (undressed) ground state $\ket{g_\uparrow} = \ket{F = 2, m_F = -2}$ and the (dressed) eigenstate in the presence of the excitation laser light, $\ket{\uparrow}$. As illustrated in Fig.~\ref{fig:S8}a, we need to distinguish three different decay channels. These are the decay of the Rydberg state to the targeted ground state $\ket{g_\uparrow}$ with a rate $\Gamma_\uparrow$, described by the jump operator \begin{align} \hat{\mathcal{C}}_{2} &= \sqrt{\Gamma_\uparrow} \ket{g_\uparrow}\bra{e}\;, \end{align} the decay of the Rydberg state to the uncoupled ground state $\ket{\downarrow}$ with a rate $\Gamma_\downarrow$, described by the jump operator \begin{align} \hat{\mathcal{C}}_{1} &= \sqrt{\Gamma_\downarrow} \ket{\downarrow}\bra{e}\;, \end{align} and the decay of the Rydberg state to the remaining ground state manifold denoted by $\ket{g_0}$ with a rate $\Gamma_0$, described by the jump operator \begin{align} \hat{\mathcal{C}}_{0} &= \sqrt{\Gamma_0} \ket{g_0}\bra{e}\;. \end{align} The corresponding master equation describing the time evolution of the density matrix during the dressing stage thus reads \begin{widetext} \begin{align} \pdd{\hat{\rho}(t)}{t} = -\frac{\mathrm{i}}{\hbar} \comm{\hat{H}_{\rm dr}}{\hat{\rho}} + \sum_{i=1}^N\sum_{k} \br{\hat{\mathcal{C}}_{k,i} \hat{\rho} \C^\dagger_{k,i} - \frac{1}{2}\br{\C^\dagger_{k,i} \hat{\mathcal{C}}_{k,i} \hat{\rho} + \hat{\rho} \C^\dagger_{k,i}\hat{\mathcal{C}}_{k,i}}}.\label{EqLindbladDiagonal} \end{align} \end{widetext} Next, we need to express the jump operators in the basis of the dressed spin states. As illustrated in Fig.~\ref{fig:S8}b, this leads to an effective dephasing of the dressed $\ket{\uparrow}$ state and opens up additional decay channels to the remaining ground states. The corresponding jump operators are given by \begin{align} \hat{\mathcal{J}}_{02} &= \sqrt{\Gamma_0}{\frac{\Omega}{2|\Delta|}} \ket{g_0}\bra{\uparrow},\label{EqJ02}\\ \hat{\mathcal{J}}_{12} &= \sqrt{\Gamma_{\downarrow}}{\frac{\Omega}{2|\Delta|}} \ket{\downarrow}\bra{\uparrow},\\ \hat{\mathcal{J}}_{22} &= \sqrt{\Gamma_{\uparrow}}{\frac{\Omega}{2|\Delta|}} \ket{\uparrow}\bra{\uparrow}. \end{align} In addition, one obtains a dissipative transition from the dressed ground state to the dressed Rydberg state $\ket{\tilde e}$, which is predominantly composed of $\ket{e}$. The associated jump operator is given by \begin{align} \hat{\mathcal{J}}_{32} &= \sqrt{\gamma_{\uparrow}}\br{\frac{\Omega}{2|\Delta|}}^2 \ket{\tilde{e}}\bra{\uparrow}.\label{EqJ32} \end{align} The decay processes out of $\ket{\tilde e}$ are approximately equal to the bare atomic jump operators \begin{eqnarray}\label{EqJk3} \hat{\mathcal{J}}_{k0} &=& \sqrt{\Gamma_{0}} \ket{g_0}\bra{\tilde e},\nonumber\\ \hat{\mathcal{J}}_{k1} &=& \sqrt{\Gamma_{\downarrow}} \ket{\downarrow}\bra{\tilde e},\nonumber\\ \hat{\mathcal{J}}_{k2} &=& \sqrt{\Gamma_{\uparrow}} \ket{\uparrow}\bra{\tilde e}, \end{eqnarray} apart from an additional dephasing term \begin{align}\label{EqJ33} \hat{\mathcal{J}}_{33} &= \sqrt{\Gamma_{\uparrow}}\br{\frac{\Omega}{2|\Delta|}} \ket{\tilde e}\bra{\tilde e}.
\end{align} Since the adiabatic laser dressing does not populate the dressed Rydberg state, and the dissipative coupling Eq.~(\ref{EqJ32}) is suppressed by a factor $\Omega/2|\Delta|$, we can neglect all jump operators involving $\ket{\tilde e}$. With this simplification the master equation in the dressed-state representation reads \begin{widetext} \begin{align} \pdd{\hat{\rho}(t)}{t} = -\frac{\mathrm{i}}{\hbar} \comm{\hat{H}_{\rm dr}}{\hat{\rho}} + \sum_{m = 0}^2 \sum_{i} \gamma_m \br{\hat{\mathcal{J}}_{m, i} \hat{\rho} \J^\dagger_{m, i} - \frac{1}{2}\br{\J^\dagger_{m,i} \hat{\mathcal{J}}_{m,i} \hat{\rho} + \hat{\rho} \J^\dagger_{m,i}\hat{\mathcal{J}}_{m,i}}}, \label{EqLindblad} \end{align} \end{widetext} where we have defined the operators for particle $i$ \begin{align} \hat{\mathcal{J}}_{0,i} &= \ket{g_0}_i\bra{\uparrow},\label{EqJ0}\\ \hat{\mathcal{J}}_{1,i} &= \ket{\downarrow}_i\bra{\uparrow},\\ \hat{\mathcal{J}}_{2,i} &= \ket{\uparrow}_i\bra{\uparrow}, \end{align} and the effective rates \begin{align} \gamma_0 &= {\Gamma_{0}}\br{\frac{\Omega}{2\Delta}}^2,\label{EqRate0}\\ \gamma_\downarrow &= {\Gamma_{\downarrow}}\br{\frac{\Omega}{2\Delta}}^2,\\ \gamma_\uparrow &= {\Gamma_{\uparrow}}\br{\frac{\Omega}{2\Delta}}^2\label{EqRate2}. \end{align} This diagonal form of the Lindblad equation (\ref{EqLindblad}) enables the analytic treatment of the correlated spin dynamics. Within a Monte Carlo wave function approach \cite{Molmer1993a}, the expectation value of an operator is evaluated as a sum over coherent quantum trajectories interspersed with discrete jumps. Each trajectory is weighted by the corresponding jump probability, determined by the rates (\ref{EqRate0})--(\ref{EqRate2}). Formally, for some operator $\hat{O}$, \begin{equation}\label{EqQMC} \expval{\hat{O}} = \sum_{\textrm{traj.}} P(\textrm{traj.}) \expval{\hat{O}}_{\textrm{traj.}}, \end{equation} where $P(\textrm{traj.})$ is the probability for a given quantum trajectory and $\langle \hat{O} \rangle_{\textrm{traj.}}$ is the expectation value of $\hat{O}$ at the end of the trajectory. For our particular pulse sequences, the expectation values of the individual trajectories can be calculated by applying all jump operators at the initial time \cite{Foss-Feig2013}. Each trajectory expectation value can then be evaluated using techniques similar to those described for the unitary evolution, allowing for an analytical evaluation of the sum in Eq.~(\ref{EqQMC}). Assuming that the dressing field is a simple square pulse of duration $t$, the Ramsey signal in the presence of spontaneous emission can be written in the compact form \begin{align}\label{EqRamseySpontEmission} \langle \hat{\sigma}_{\downarrow\downarrow}^{(i)} \rangle &= \frac{1}{4} + \frac{\mathrm{e}^{-(\gamma_0 + \gamma_\downarrow) t}\gamma_0 + \gamma_\downarrow}{4 (\gamma_0 + \gamma_\downarrow)} - \frac{1}{2}\mathrm{e}^{-(\gamma_0 + \gamma_\downarrow + \gamma_\uparrow) t/2}\ \mathrm{Re} \br{\Xi}, \end{align} with \begin{widetext} \begin{equation} \Xi = \frac{\mathrm{e}^{-\mathrm{i}\varphi }}{2^{N-1}} \prod_{k \neq i} \br{1 + \mathrm{e}^{-(\gamma_0 + \gamma_{\downarrow})t} \mathrm{e}^{-\mathrm{i}\Phi_{ik} } + (\gamma_0 + \gamma_\downarrow)\frac{1 - \mathrm{e}^{-(\gamma_0+\gamma_{\downarrow})t}\mathrm{e}^{-\mathrm{i}\Phi_{ik} }}{\gamma_0 + \gamma_\downarrow + \mathrm{i} \Phi_{ik} }}.
\end{equation} \end{widetext} In Fig.~\ref{fig:S9} we show the fraction \begin{equation}\label{eq:fdown} f_\downarrow=N^{-1}\sum_i\langle \hat{\sigma}_{\downarrow\downarrow}^{(i)} \rangle \end{equation} obtained from Eq.(\ref{EqRamseySpontEmission}) in comparison to the coherent result Eq.~(\ref{EqRamseyCoherent}) for typical parameters of our experiment. Clearly, the influence of spontaneous emission is negligible for the timescales considered in our measurements, highlighting the promise of Rydberg-dressing for observing unitary dynamics of interacting spins. \subsection{Effects of black-body radiation}\label{SecBBrad} As discussed in the main text, our experiments suggest the presence of additional dissipation that induces loss of a sizeable fraction of the atoms but is triggered with a small probability. We attribute this loss to black-body radiation (BBR), which stimulates incoherent transitions to other Rydberg states. This process has fundamentally different consequences for Rydberg-dressing. As described in the previous section, spontaneous emission from the virtually excited Rydberg states causes comparably weak decoherence, in particular because the dominant quantum jump processes leave the atoms within the Rydberg-dressed ground state manifold. The situation is profoundly different for BBR-induced transitions between Rydberg states. Here, the associated jump process fully projects the atomic state onto a Rydberg state, and thereby creates a real Rydberg atom out of a virtual Rydberg excitation. Not only does this take an atom out of the dressed ground state manifold, but it also generates Rydberg states ($n^\prime S$ or $n^\prime D$) that feature direct dipole-dipole interactions with the laser-coupled $nP$ states. Such strong interactions are expected to cause substantial line broadening, facilitating near-resonant laser excitation of Rydberg states, which eventually leads to rapid atom loss through decay out of the spin manifold ($\ket{\downarrow}$ and $\ket{\uparrow}$) and through mechanical motion induced by the optical lattice potential or the Rydberg-Rydberg interactions. Here we do not attempt to describe the detailed dynamics of these loss processes, but propose a phenomenological model assuming an instantaneous loss of all atoms in the Rydberg-dressed state $\ket{\uparrow}$, triggered by one-body BBR-induced transitions with a rate $\gamma_{\rm BB}$. Within the above Monte Carlo wave function approach \cite{Molmer1993a}, this process is described by a projection of a given $N$-body state $\ket{\Psi}$ onto $\br{\ket{\downarrow}\bra{\downarrow} + \ket{0}\bra{\uparrow}}^{\otimes N} \ket{\Psi}$. The complete loss of all population in the $\ket{\uparrow}$ state stops any further dynamics during the dressing stage, which greatly reduces the complexity of the problem and the number of possible quantum trajectories. For both pulse sequences used in our experiments (Fig.~\ref{fig:S1}), each atom carries a $50\%$ $\ket{\uparrow}$-population during the dressing stage. The probability for a BBR transition to occur is thus \begin{equation} p_{\rm BB}=\frac{1}{2}-\frac{1}{2}\exp\!\left(-\gamma_{\rm BB}\int_0^{t/2} \beta(\tilde{t})^2 {\rm d}\tilde{t}\right)\;, \end{equation} where $\beta(t)^2=\Omega(t)^2/(4\Delta^2)$ is the Rydberg-state population of the dressed $\ket{\uparrow}$-state defined in the main text and the integration extends over a single dressing pulse of duration $t/2$.
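For orientation, the trigger probability per pulse implied by this formula is easy to estimate for a square dressing pulse, where $\int_0^{t/2}\beta(\tilde{t})^2\,{\rm d}\tilde{t}=\beta^2 t/2$; all numbers in the sketch below are assumed order-of-magnitude inputs, not the parameters of our experiment.
\begin{verbatim}
# Illustrative estimate of the BBR trigger probability for a square pulse.
import numpy as np

gamma_BB = 1.0e3                   # assumed BBR transition rate (1/s)
Omega = 2 * np.pi * 10.0e6         # assumed dressing Rabi frequency (rad/s)
Delta = 2 * np.pi * 50.0e6         # assumed laser detuning (rad/s)
t_half = 1.0e-6                    # assumed single-pulse duration t/2 (s)

beta2 = (Omega / (2 * Delta)) ** 2             # Rydberg admixture beta^2
p_BB = 0.5 - 0.5 * np.exp(-gamma_BB * beta2 * t_half)
print(f"beta^2 = {beta2:.3f}, p_BB = {p_BB:.2e} per atom and pulse")
\end{verbatim}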
Since the decay can occur only once per dressing pulse, the probability that \emph{none} of the $N$ atoms undergoes a BBR-induced transition during a single dressing period is \begin{equation} P_0 = (1-p_{\rm BB})^N \approx \exp\br{-\frac{N}{2} \gamma_{\rm BB}\int_0^{t/2} \beta(\tilde{t})^2 {\rm d}\tilde{t}}. \end{equation} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{Gross_EDfig9.pdf} \caption{\textbf{Influence of spontaneous decay on spin coherence.} Ramsey signal, Eq.~(\ref{eq:fdown}), as a function of the dressing time $t$. The black solid line shows the fully coherent result, Eq.~(\ref{EqRamseyCoherent}), and the gray line shows the result in the presence of spontaneous emission, Eq.~(\ref{EqRamseySpontEmission}). Calculations were performed for $31P_{1/2}$ Rydberg states using the effective spin-spin interaction potential shown in Fig.~\ref{fig:S4}a. For these and all other parameters of our experiment, effects of spontaneous emission are practically negligible, even at large times.} \label{fig:S9} \end{center} \end{figure} \subsubsection{Ramsey spectroscopy}\label{SecRamseyBB} For the Ramsey sequence (Fig.~\ref{fig:S1}a), the collective loss leaves an average of $N/2$ atoms in the $\ket{\downarrow}$-state, half of which is subsequently transferred to the $\ket{\uparrow}$-state by the final $\pi/2$ pulse, giving a final population of $\langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle=1/4$. The total Ramsey signal is simply obtained, according to Eq.~(\ref{EqQMC}), as the weighted sum of the signals with and without a BBR-transition \begin{equation}\label{EqRamseyBB} \langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle = P_0 \langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle_{\rm c} + \frac{1}{4}(1 - P_0), \end{equation} where $\langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle_{\rm c}$ denotes the result of the unitary evolution given by Eq.~(\ref{EqRamseyCoherent}). This expression has been used to obtain the theory data shown in Fig. 1d of the main text. \subsubsection{Spin echo sequence}\label{SecEchoBB} For the spin echo sequence we need to consider four different types of quantum trajectories: the completely unitary evolution, a BBR-transition occurring during the first dressing pulse, a BBR-transition occurring during the second dressing pulse, and a BBR-transition occurring during both dressing pulses. As for the Ramsey sequence, the unitary evolution yields a $50\%$ population in the Rydberg-dressed state, such that the probability for a BBR-transition to occur during exactly one of the two dressing pulses is given by $2P_0(1-P_0)$. Following the same arguments as above, the final spin echo signal conditioned on having exactly one loss event is again $\langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle=1/4$. Since the loss destroys all spin correlations, the corresponding two-body expectation values are simply $\langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\hat{\sigma}_{\downarrow\downarrow}^{(j)}\rangle=1/16$. If a BBR-transition occurs during both of the dressing pulses, it causes complete atom loss and thus does not contribute to any relevant observable.
The remaining number of atoms after the two dressing pulses of total time $t$ is thus given by \begin{equation}\label{EqEchoBB_N} \begin{split} N &= P_0^2 N(0) + N(0) P_0 [1 - P_0]\\ &= N(0) \exp\br{-\frac{N(0)}{2} \gamma_{\rm BB}\int_0^{t/2} \beta(\tilde{t})^2 {\rm d}\tilde{t}}\;, \end{split} \end{equation} and the spin echo signal can be written as \begin{equation}\label{EqEchoBB_spin} \langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle = P_0^2 \langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle_{\rm c} + \frac{1}{2} P_0 [1 - P_0], \end{equation} where $\langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle_{\rm c}$ is given by Eq.~(\ref{EqEchoCoherent}). Eqs.~(\ref{EqEchoBB_N}) and (\ref{EqEchoBB_spin}) have been used for the theory curves in Fig. 4 of the main text. Similarly, we obtain for the spin correlation function \begin{equation} g_{ij}^{(2)} = P_0^2\left( g_{ij,{\rm c}}^{(2)}+\langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle_{\rm{c}}\langle\hat{\sigma}_{\downarrow\downarrow}^{(j)}\rangle_{\rm c}\right) + \frac{1}{8}P_0 [1 - P_0] - \expval{\hat{\sigma}_{\downarrow\downarrow}^{(i)}}\expval{\hat{\sigma}_{\downarrow\downarrow}^{(j)}}, \label{Eqg2BB} \end{equation} where $\langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle_{\rm c}$, $g_{ij, c}^{(2)}$ and $\langle\hat{\sigma}_{\downarrow\downarrow}^{(i)}\rangle$ are given by Eq.~(\ref{EqEchoCoherent}), Eq.~(\ref{Eqg2Coherent}) and Eq.~(\ref{EqEchoBB_spin}), respectively. Eq.~(\ref{Eqg2BB}) is used for the theoretical data shown in Figs. 2 and 3a of the main text. \end{document}
\section{Introduction} A Weyl semimetal possesses a pair of, or pairs of, nondegenerate Dirac cones with opposite chirality.~\cite{shindo,murakami,wan,yang,burkov1,burkov2,WK, delplace,halasz,sekine} The pair of Dirac cones can be nondegenerate if time-reversal symmetry or inversion symmetry is broken. The electronic property of a Weyl semimetal is significantly influenced by the position of a pair of Weyl nodes in reciprocal and energy spaces, where a Weyl node represents the band-touching point of each Dirac cone. In the absence of time-reversal symmetry, a pair of Weyl nodes is separated in reciprocal space. In this case, low-energy states with chirality appear on the surface of a Weyl semimetal~\cite{wan} if the Weyl nodes are projected onto two different points in the corresponding surface Brillouin zone. A notable feature of such chiral surface states is that they propagate only in a given direction, which depends on the position of the Weyl nodes. This gives rise to an anomalous Hall effect.~\cite{burkov1} If inversion symmetry is also broken in addition to time-reversal symmetry, a pair of Weyl nodes is also separated in energy space. In this case, the propagating direction of chiral surface states is tilted according to the deviation of the Weyl nodes. If only inversion symmetry is broken, no chiral surface state appears since the pair of Weyl nodes coincides in reciprocal space. To date, some materials have been experimentally identified as Weyl semimetals.~\cite{weng,huang1,xu1,lv1,lv2,xu2,souma,kuroda} Let us focus on the case in which a Weyl semimetal with a pair of Weyl nodes at $\mib{k}_{\pm} = (0,0,\pm k_{0})$ is in the shape of a long prism parallel to the $z$ axis. In this case, chiral surface states appear on the side of the system. In the presence of inversion symmetry, they typically propagate in a direction perpendicular to the $z$ axis; thus, we expect that a spontaneous charge current appears to circulate in the system near the side surface. If inversion symmetry is additionally broken, the propagating direction is tilted to the $z$ direction; thus, an electron in the chiral surface state shows spiral motion around the system.~\cite{baireuther} Thus, we expect that a spontaneous charge current has a nonzero component in the $z$ direction. This longitudinal component must be canceled out by the contribution from bulk states if they are integrated over a cross section parallel to the $xy$ plane. In this paper, we theoretically examine whether a spontaneous charge current appears in the ground state of a Weyl semimetal. Our attention is focused on the case where time-reversal symmetry is broken. We calculate the spontaneous charge current induced near the side of the system by using a simple model with particle-hole symmetry. We find that no spontaneous charge current appears when the Fermi level, $E_{F}$, is located at the band center, which is set equal to $0$ hereafter, implying that the contribution from chiral surface states is completely canceled out by that from bulk states. However, once $E_{F}$ deviates from the band center, the spontaneous charge current appears to circulate around the side surface of the system and its direction of flow is opposite for the cases of electron doping (i.e., $E_{F} > 0$) and hole doping (i.e., $E_{F} < 0$). The circulating charge current is shown to be robust against weak disorder. 
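As a simple numerical illustration of Eq.~(\ref{eq:exp-E}) (ours, with the parameter values used later in Sect.~4), one can scan the upper band along the $k_{z}$ axis and verify that it vanishes only at the two Weyl nodes:
\begin{verbatim}
# Upper band of Eq. (eq:exp-E) at gamma = 0; zeros mark the Weyl nodes.
import numpy as np

A, B, t, a = 1.0, 0.5, 0.5, 1.0     # illustrative parameters (Sect. 4 values)
k0 = 3 * np.pi / (4 * a)            # k0 a = 3*pi/4

def E_plus(kx, ky, kz):
    Delta = -2 * t * (np.cos(kz * a) - np.cos(k0 * a))
    Lam = Delta + 2 * B * (2 - np.cos(kx * a) - np.cos(ky * a))
    return np.sqrt(A**2 * (np.sin(kx * a)**2 + np.sin(ky * a)**2) + Lam**2)

kz = np.linspace(-np.pi / a, np.pi / a, 201)   # grid containing +-3*pi/(4a)
E_axis = E_plus(0.0, 0.0, kz)
nodes = kz[np.isclose(E_axis, 0.0, atol=1e-10)]
print("zeros of E on the kz axis:", nodes)     # ~ +-2.356 = +-3*pi/(4a)
\end{verbatim}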
In the absence of inversion symmetry, we show that chiral surface states induce the longitudinal component of a spontaneous charge current near the side surface, which is compensated by the contribution from bulk states appearing beneath the side surface. This longitudinal component is shown to be fragile against disorder. In the next section, we present a tight-binding model for Weyl semimetals and show the absence of a spontaneous charge current when the Fermi level is located at the band center. In Sect.~3, we derive a tractable continuum model from the tight-binding model and analytically determine the magnitude of a spontaneous charge current induced by the deviation of the Fermi level from the band center. In Sect.~4, we numerically study the behaviors of a spontaneous charge current by using the tight-binding model. We also examine the effect of disorder on the spontaneous charge current. The last section is devoted to a summary and discussion. We set $\hbar = 1$ throughout this paper. \section{Model} Let us introduce a tight-binding model for Weyl semimetals on a cubic lattice with lattice constant $a$. Its Hamiltonian is given by $H = H_{0}+H_{x}+H_{y}+H_{z}$ with~\cite{yang,burkov1} \begin{align} H_{0} & = \sum_{l,m,n} |l,m,n \rangle h_{0} \langle l,m,n| , \\ H_{x} & = \sum_{l,m,n} \left\{ |l+1,m,n \rangle h_{x} \langle l,m,n| + {\rm h.c.} \right\} , \\ H_{y} & = \sum_{l,m,n} \left\{ |l,m+1,n \rangle h_{y} \langle l,m,n| + {\rm h.c.} \right\} , \\ H_{z} & = \sum_{l,m,n} \left\{ |l,m,n+1 \rangle h_{z} \langle l,m,n| + {\rm h.c.} \right\} , \end{align} where the indices $l$, $m$, and $n$ are respectively used to specify lattice sites in the $x$, $y$, and $z$ directions and \begin{align} |l,m,n \rangle \equiv \left[ |l,m,n \rangle_{\uparrow}, |l,m,n \rangle_{\downarrow} \right] \end{align} represents the two-component state vector with $\uparrow, \downarrow$ corresponding to the spin degree of freedom. The $2 \times 2$ matrices are \begin{align} h_{0} & = \left[ \begin{array}{cc} 2t\cos(k_{0}a) + 4B & 0 \\ 0 & -2t\cos(k_{0}a) - 4B \end{array} \right] , \\ h_{x} & = \left[ \begin{array}{cc} -B & \frac{i}{2}A \\ \frac{i}{2}A & B \end{array} \right] , \\ h_{y} & = \left[ \begin{array}{cc} -B & \frac{1}{2}A \\ -\frac{1}{2}A & B \end{array} \right] , \\ h_{z} & = \left[ \begin{array}{cc} -t+i\gamma & 0 \\ 0 & t+i\gamma \end{array} \right] , \end{align} where $0 < k_{0} < \pi/a$, and the other parameters, $A$, $B$, $t$, and $\gamma$, are assumed to be real and positive. The Fourier transform of $H$ is expressed as \begin{align} \mathcal{H}(\mib{k}) = \left[ \begin{array}{cc} \Lambda(\mib{k})+2\gamma\sin(k_{z}a) & \Theta_{-}(k_{x},k_{y}) \\ \Theta_{+}(k_{x},k_{y}) & -\Lambda(\mib{k})+2\gamma\sin(k_{z}a) \end{array} \right] , \end{align} where $\Lambda(\mib{k}) = \Delta(k_{z}) +2B\sum_{\alpha=x,y}[1-\cos(k_{\alpha}a)]$ and $\Theta_{\pm}(k_{x},k_{y}) = A[\sin(k_{x}a)\pm i\sin(k_{y}a)]$ with \begin{align} \Delta(k_{z}) = - 2t\left[\cos(k_{z}a)-\cos(k_{0}a)\right] . \end{align} It can be seen that inversion symmetry is broken if $\gamma \neq 0$. The energy dispersion of this model is given as \begin{align} \label{eq:exp-E} E = & 2\gamma\sin(k_{z}a) \pm \Big\{ A^{2}\bigl(\sin^{2}(k_{x}a)+\sin^{2}(k_{y}a)\bigr) \nonumber \\ & \hspace{-4mm} + \left[\Delta(k_{z}) +2B\bigl(2-\cos(k_{x}a)-\cos(k_{y}a)\bigr)\right]^{2} \Big\}^{\frac{1}{2}} , \end{align} indicating that a pair of Weyl nodes appears at $\mib{k}_{\pm} = (0,0,\pm k_{0})$. Note that $E = 0$ at the band center. 
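The particle-hole relation of Eq.~(\ref{eq:ph-symmet}) can also be checked directly in Bloch form, where $\Gamma_{\rm ph}=\sigma_{x}K$ combined with $\mathbf{k}\to-\mathbf{k}$ implies $\sigma_{x}\mathcal{H}(-\mathbf{k})^{*}\sigma_{x}=-\mathcal{H}(\mathbf{k})$ for any $\gamma$; the following sketch (ours, illustrative parameters) verifies this at random wave vectors:
\begin{verbatim}
# Numerical check of sigma_x H(-k)^* sigma_x = -H(k) for the Bloch Hamiltonian.
import numpy as np

A, B, t, gamma, a = 1.0, 0.5, 0.5, 0.1, 1.0
k0 = 3 * np.pi / (4 * a)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def H(k):
    kx, ky, kz = k
    Delta = -2 * t * (np.cos(kz * a) - np.cos(k0 * a))
    Lam = Delta + 2 * B * (2 - np.cos(kx * a) - np.cos(ky * a))
    Tm = A * (np.sin(kx * a) - 1j * np.sin(ky * a))   # Theta_-
    Tp = A * (np.sin(kx * a) + 1j * np.sin(ky * a))   # Theta_+
    g = 2 * gamma * np.sin(kz * a)
    return np.array([[Lam + g, Tm], [Tp, -Lam + g]], dtype=complex)

rng = np.random.default_rng(1)
for _ in range(5):                                    # a few random k points
    k = rng.uniform(-np.pi / a, np.pi / a, 3)
    lhs = sx @ H(-k).conj() @ sx
    assert np.allclose(lhs, -H(k)), "particle-hole relation violated"
print("sigma_x H(-k)* sigma_x = -H(k) verified at random k points")
\end{verbatim}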
The energy of each Weyl node is located at the band center of $E = 0$ at $\gamma = 0$, whereas it deviates from there if $\gamma \neq 0$. In Sect.~4, we focus on the lattice system of a rectangular parallelepiped with $L$ sites in both the $x$ and $y$ directions and $N$ sites in the $z$ direction (see Fig.~1). In this setup, chiral surface states appear on its side. \begin{figure}[btp] \begin{center} \includegraphics[width=5cm]{Weyl-rect.eps} \end{center} \caption{ Lattice system considered in the text: a rectangular parallelepiped with $L$ sites in both the $x$ and $y$ directions and $N$ sites in the $z$ direction. } \end{figure} Now, we show that no spontaneous charge current appears if $E_{F}$ is located at the band center on the basis of the particle-hole symmetry inherent in the model introduced above. Let us define the operator $\Gamma_{\rm ph}$ as \begin{align} \Gamma_{\rm ph} = \sigma_{x}K , \end{align} where $\sigma_{x}$ and $K$ are respectively the $x$ component of the Pauli matrices and the complex conjugate operator. The tight-binding Hamiltonian $H$ satisfies \begin{align} \label{eq:ph-symmet} \Gamma_{\rm ph}^{-1}H\Gamma_{\rm ph} = - H . \end{align} Let us denote eigenstates of $H$ in the valence band as $|q\rangle_{v}$ and those in the conduction band as $|q\rangle_{c}$ with $q = 1,2,3,\dots$, where $H|q\rangle_{v}=\epsilon_{q}^{v}|q\rangle_{v}$ with $\epsilon_{q}^{v} < 0$ and $H|q\rangle_{c}=\epsilon_{q}^{c}|q\rangle_{c}$ with $\epsilon_{q}^{c} > 0$. Here, $q$ labels the eigenstates in descending order (i.e., $0\ge\epsilon_{1}^{v}\ge\epsilon_{2}^{v}\ge\epsilon_{3}^{v}\ge\dots$) in the valence band and in ascending order (i.e., $0\le\epsilon_{1}^{c}\le\epsilon_{2}^{c}\le\epsilon_{3}^{c}\le\dots$) in the conduction band. Equation~(\ref{eq:ph-symmet}) allows us to set $\epsilon_{q}^{v}=-\epsilon_{q}^{c}$ with \begin{align} |q\rangle_{c} = \Gamma_{\rm ph} |q\rangle_{v} . \end{align} We introduce the charge current operator $j_{\alpha}$ defined on an arbitrary site, where $\alpha = x,y,z$ specifies the direction of flow. For example, the current operator $j_{x}$ on the $(l,m,n)$th site is given by \begin{align} j_{x} = -e(-i)\bigl[ |l+1,m,n\rangle h_{x} \langle l,m,n| - {\rm h.c.} \bigr] . \end{align} It may be more appropriate to state that this is defined on the link connecting the $(l,m,n)$th and $(l+1,m,n)$th sites. We can show that any $j_{\alpha}$ is invariant under the transformation of $\Gamma_{\rm ph}$ as \begin{align} \Gamma_{\rm ph}^{-1}j_{\alpha}\Gamma_{\rm ph} = j_{\alpha} . \end{align} In the ground state with $E_{F} = 0$, the expectation value of any $j_{\alpha}$ is expressed as \begin{align} \langle j_{\alpha} \rangle = \sum_{q \ge 1} {}_{v}\langle q |j_{\alpha}|q\rangle_{v} . \end{align} By using the relations given above and the completeness of the set of eigenstates consisting of $\{|q\rangle_{v}\}$ and $\{|q\rangle_{c}\}$, we can show that \begin{align} \langle j_{\alpha} \rangle & = \frac{1}{2} \sum_{q \ge 1} \left[ {}_{v}\langle q |j_{\alpha}|q\rangle_{v} +{}_{c}\langle q |j_{\alpha}|q\rangle_{c} \right] \nonumber \\ & = \frac{1}{2}{\rm tr}\{ j_{\alpha} \} = 0 , \end{align} indicating that a spontaneous charge current completely vanishes everywhere in the system at $E_{F} = 0$. That is, although chiral surface states carry a circulating charge current, their contribution is completely canceled out by that from bulk states. 
Note that the above argument based on particle-hole symmetry is not restricted to the two-orbital model used in this study and is also applicable to the four-orbital model introduced in Ref.~\citen{vazifeh}. \section{Analytical Approach} Before performing numerical simulations in the case of $E_{F} \neq 0$, we analytically study the behaviors of chiral surface states in a cylindrical Weyl semimetal. To do so, we apply the analytical approach given in Ref.~\citen{takane1}, which was developed to describe unusual electron states in a Weyl semimetal: chiral surface states~\cite{okugawa} and chiral modes along a screw dislocation.~\cite{imura,sumiyoshi} It is convenient to modify the tight-binding Hamiltonian by taking the continuum limit in the $x$ and $y$ directions, leaving the lattice structure in the $z$ direction so that the resulting model has a layered structure. After the partial Fourier transformation in the $z$ direction, the Hamiltonian is reduced to \begin{align} \mathcal{H} = \left[ \begin{array}{cc} \tilde{\Lambda} + 2\gamma\sin(k_{z}a) & \tilde{A}(\hat{k}_{x}-i\hat{k}_{y}) \\ \tilde{A}(\hat{k}_{x}+i\hat{k}_{y}) & -\tilde{\Lambda} + 2\gamma\sin(k_{z}a) \end{array} \right] , \end{align} where $\tilde{\Lambda} = \Delta(k_{z})+\tilde{B}(\hat{k}_{x}^{2}+\hat{k}_{y}^{2})$ with $\hat{k}_{x}=-i\partial_{x}$, $\hat{k}_{y}=-i\partial_{y}$, $\tilde{A} = Aa$, and $\tilde{B} = Ba^{2}$. We adapt this model to a cylindrical Weyl semimetal of radius $R$ by using the cylindrical coordinates $(r,\phi)$ with $r = \sqrt{x^{2}+y^{2}}$ and $\phi = \arctan(y/x)$. Let $\Psi(r,\phi) = {}^t\!(F,G)$ be an eigenfunction of $\mathcal{H}$ for a given $k_{z}$. It is convenient to rewrite $F$ and $G$ as $F = e^{i\lambda\phi} f(r)$ and $G = e^{i(\lambda+1)\phi} g(r)$, where $\lambda$ is the azimuthal quantum number. Then, in terms of $\psi(r,\phi) = {}^t\!(f,g)$ for given $k_{z}$ and $\lambda$, the eigenvalue equation is written as \begin{align} \label{eq:ev-Eq1} \left[ \begin{array}{cc} \Delta(k_{z})-\tilde{B}\mathcal{D}_{\lambda} & \tilde{A}\left(-i\partial_{r}-i\frac{\lambda+1}{r}\right) \\ \tilde{A}\left(-i\partial_{r}+i\frac{\lambda}{r}\right) & -\Delta(k_{z})+\tilde{B}\mathcal{D}_{\lambda+1} \end{array} \right]\psi = \tilde{E}\psi , \end{align} where $\tilde{E} = E -2\gamma\sin(k_{z}a)$ and \begin{align} \label{eq:def-D} \mathcal{D}_{\lambda} = \partial_{r}^{2}+\frac{1}{r}\partial_{r} - \frac{\lambda^{2}}{r^{2}} . \end{align} As demonstrated in Ref.~\citen{takane1}, if $\tilde{B}$ is finite but very small, the eigenvalue equation, Eq.~(\ref{eq:ev-Eq1}), can be decomposed into two separate equations: the Weyl and supplementary equations. The Weyl equation for $f$ and $g$ is given by \begin{align} \label{eq:f-first} \left(\mathcal{D}_{\lambda}-\Lambda_{-}\right)f = 0 , \hspace{3mm} \left(\mathcal{D}_{\lambda+1}-\Lambda_{-}\right)g = 0 , \end{align} while the supplementary equation is \begin{align} \label{eq:f-second} \left(\mathcal{D}_{\lambda}-\Lambda_{+}\right)f = 0 , \hspace{3mm} \left(\mathcal{D}_{\lambda+1}-\Lambda_{+}\right)g = 0 , \end{align} where \begin{align} \label{eq:def-Lambda} \Lambda_{-} = -\frac{\tilde{E}^{2}-\Delta^{2}}{\tilde{A}^{2}} , \hspace{6mm} \Lambda_{+} = \frac{\tilde{A}^{2}}{\tilde{B}^{2}} . \end{align} Here, $f$ and $g$ are related by the original eigenvalue equation with a finite but very small $\tilde{B}$. Note that the restriction on $\tilde{B}$ (i.e., $\tilde{B}$ is very small) does not significantly affect the behaviors of chiral surface states. 
Indeed, the energy of chiral surface states does not depend on $\tilde{B}$ as seen in Eq.~(\ref{eq:disp-CS_state}). We hereafter focus on chiral surface states, which appear only in the case of $|\Delta(k_{z})|>|\tilde{E}|$. The solutions of both the Weyl and supplementary equations are expressed by modified Bessel functions in this case. By superposing two solutions that asymptotically increase in an exponential manner, we can describe spatially localized states near the side. With $\eta \equiv \sqrt{\Delta^{2}-\tilde{E}^{2}}/\tilde{A}$ and $\kappa \equiv \tilde{A}/\tilde{B}$, the general solution for given $\lambda$ and $k_{z}$ is written as \begin{align} \psi = a \left[ \begin{array}{c} I_{|\lambda|}(\eta r) \\ -i\frac{\Delta-\tilde{E}}{\sqrt{\Delta^{2}-\tilde{E}^{2}}} I_{|\lambda+1|}(\eta r) \end{array} \right] + b \left[ \begin{array}{c} I_{|\lambda|}(\kappa r) \\ iI_{|\lambda+1|}(\kappa r) \end{array} \right] , \end{align} where the first and second terms respectively arise from the Weyl and supplementary equations. The boundary condition of $\psi(R)={}^t\!(0,0)$ requires \begin{align} \frac{\Delta-\tilde{E}}{\sqrt{\Delta^{2}-\tilde{E}^{2}}} = -\frac{I_{|\lambda+1|}(\kappa R)}{I_{|\lambda|}(\kappa R)} \frac{I_{|\lambda|}(\eta R)}{I_{|\lambda+1|}(\eta R)} , \end{align} indicating that a relevant solution is obtained only in the case of $\Delta(k_{z}) < 0$, which holds when $k_{z} \in (-k_{0}, k_{0})$. The eigenvalue of energy is approximately determined as \begin{align} \label{eq:disp-CS_state} E = \frac{\tilde{A}}{R}\left(\lambda+\frac{1}{2}\right)+2\gamma\sin(k_{z}a) . \end{align} In the case of $\gamma = 0$, the dispersion is flat (i.e., independent of $k_{z}$), representing a characteristic feature of the chiral surface state. An electron in the chiral surface state propagates in the anticlockwise direction viewed from above. The dispersion becomes dependent on $k_{z}$ if $\gamma \neq 0$, indicating that the group velocity is tilted to the $z$ direction. Consequently, an electron in the chiral surface state circulates around the side surface in a spiral manner.~\cite{baireuther} This implies that a spontaneous charge current in the $z$ direction can be induced near the side surface if $\gamma \neq 0$. Now, we roughly determine the magnitude of a spontaneous circulating current in the system consisting of $N$ layers. The circulating charge current carried by each chiral surface state is \begin{align} J_{\phi}^{0} = -e\frac{\tilde{A}}{2\pi R} , \end{align} which flows in the clockwise direction. Note that the chiral surface state with $E(\lambda)$ appears only when $k_{z} \in (-k_{0},k_{0})$. If $k_{z}$ deviates from this interval, the state is continuously transformed to a bulk state, which is spatially extended over the entire system. Let us determine the total charge current $J_{\phi}$. Since $J_{\phi}$ vanishes at $E_{F} = 0$, we need to collect the contributions to $J_{\phi}$ arising from the chiral surface states with $E(\lambda)$ satisfying $0 < E < E_{F}$ if $E_{F} > 0$. Here, it is assumed that the contribution from bulk states in the same interval of energy is not important since their wavelength is relatively long and hence they cannot induce a short-wavelength response localized near the surface. If $E_{F} < 0$, the total charge current is obtained by collecting the contributions to $J_{\phi}$ arising from the states with $E(\lambda)$ satisfying $E_{F} < E < 0$ and then reversing its sign. 
In addition to the condition for $\lambda$, it is important to note that the chiral surface states are stabilized only when $|k_{z}| < k_{0}$. Assuming that $k_{z}$ is given by $k_{z}^{j} = j\pi/[(N+1)a]$ with $j = 1,2,3,\dots$ in the system consisting of $N$ layers, we require $k_{z}^{j} < k_{0}$. From the observation given above, we find that the spontaneous charge current per layer is \begin{align} \label{eq:J_phi-av} \frac{J_{\phi}}{N} = -e\frac{k_{0}a}{2\pi^{2}} E_{F} , \end{align} which depends on $E_{F}$ as well as $k_{0}$. This indicates that the direction of flow reverses depending on the sign of $E_{F}$. That is, the spontaneous current flows in opposite directions for the cases of electron doping and hole doping. Note that, since $J_{\phi}/N$ is independent of $R$, we expect that the circulating charge current will be insensitive to the geometry of the Weyl semimetal. Thus, we expect that Eq.~(\ref{eq:J_phi-av}) can be applied to the system of a rectangular parallelepiped, which is treated in the next section. \begin{figure}[btp] \begin{center} \includegraphics[width=6cm]{Weyl-disp_gam0.eps} \includegraphics[width=6cm]{Weyl-disp_gam1.eps} \end{center} \caption{ Energy dispersion as a function of $k_{z}$ in the cases of (a) $\gamma/A =0$ and (b) $0.1$. For the purely bulk states, only a quarter of the corresponding branches are shown for clarity. } \end{figure} \section{Numerical Results} We focus on the lattice system of the rectangular parallelepiped shown in Fig.~1, which occupies the region of $1 \le l, m \le L$ and $1 \le n \le N$, under the open boundary condition in the three spatial directions. We set $L = 50$ and $N = 30$ with the following parameters: $B/A =0.5$, $t/A = 0.5$, and $k_{0}a = 3\pi/4$. We consider the cases of $\gamma/A = 0$ and $0.1$ for $E_{F}/A = \pm 0.1$ and $\pm 0.2$. The energy dispersion for the infinitely long system with a cross-sectional area of $L^{2}$ is shown in Fig.~2 in the cases of (a) $\gamma/A = 0$ and (b) $0.1$ as a function of $k_{z}$. In the case of $\gamma/A = 0$, branches with flat dispersion are uniformly distributed near $E = 0$ in the region of $-k_{0} < k_{z} < k_{0}$. They represent chiral surface states localized near the side surface of the system. These states should induce a circulating charge current near the side surface when $E_{F} \neq 0$. The dispersion of these states becomes slightly upward to the right in the case of $\gamma/A = 0.1$, implying the appearance of a spontaneous charge current in the $z$ direction. Hereafter, we numerically study how the spontaneous charge current appears in this system depending on $E_{F}$ and $\gamma$. \begin{figure}[btp] \begin{center} \includegraphics[width=6.0cm]{Weyl-j20_0_0xz.eps} \includegraphics[width=6.0cm]{Weyl-j10_0_0xz.eps} \includegraphics[width=6.0cm]{Weyl-jm10_0_0xz.eps} \includegraphics[width=6.0cm]{Weyl-jm20_0_0xz.eps} \end{center} \caption{(Color online) Spatial distribution of $j_{y}$ normalized by $eA$ in the cross section parallel to the $xz$ plane; (a) $E_{F}/A = 0.2$, (b) $0.1$, (c) $-0.1$, and (d) $-0.2$. } \end{figure} Firstly, let us examine the behaviors of a spontaneous charge current that circulates around the system near the side surface. To do so, we calculate the distribution of the spontaneous charge current in the $y$ direction through the cross section parallel to the $xz$ plane at the center of the system (dotted line in Fig.~1). 
Precisely speaking, $j_{y}$ on each link connecting the $(l,25,n)$th and $(l,26,n)$th sites is calculated for $1 \le l \le 50$ and $1 \le n \le 30$. Figure~3 shows the results for $j_{y}$ normalized by $eA$ for $\gamma/A = 0$ and $E_{F}/A = 0.2$, $0.1$, $-0.1$, and $-0.2$. The results for $\gamma/A = 0.1$ are not shown since they are almost identical to those for $\gamma/A = 0$. We observe that $j_{y} > 0$ near $l = 1$ and $j_{y} < 0$ near $l = 50$ in the case of $E_{F} > 0$, whereas $j_{y} < 0$ near $l = 1$ and $j_{y} > 0$ near $l = 50$ in the case of $E_{F} < 0$. This indicates that the spontaneous charge current circulates around the system in the clockwise direction viewed from above when $E_{F} > 0$, whereas it circulates in the anticlockwise direction when $E_{F} < 0$ (see Fig.~1). That is, its direction of flow is opposite for the cases of $E_{F} > 0$ and $E_{F}<0$. We also observe that $j_{y}$ increases with increasing $E_{F}$ in accordance with Eq.~(\ref{eq:J_phi-av}). Equation~(\ref{eq:J_phi-av}) predicts $|J_{y}|/N \approx 0.024 \times eA$ in the case of $E_{F}/A = \pm 0.2$, where $J_{y}$ represents the total charge current induced near each side surface of height $N$. This result is consistent with those shown in Figs.~3(a) and 3(d). \begin{figure}[btp] \begin{center} \includegraphics[width=6.0cm]{Weyl-j20_1_0xy.eps} \includegraphics[width=6.0cm]{Weyl-j10_1_0xy.eps} \includegraphics[width=6.0cm]{Weyl-jm10_1_0xy.eps} \includegraphics[width=6.0cm]{Weyl-jm20_1_0xy.eps} \end{center} \caption{(Color online) Spatial distribution of $j_{z}$ normalized by $eA$ in the cross section parallel to the $xy$ plane; (a) $E_{F}/A = 0.2$, (b) $0.1$, (c) $-0.1$, and (d) $-0.2$. } \end{figure} Secondly, let us examine the behaviors of a spontaneous charge current in the longitudinal direction. We calculate the distribution of the charge current in the $z$ direction through the cross section parallel to the $xy$ plane at the center of the system (broken line in Fig.~1). Precisely speaking, $j_{z}$ on each link connecting the $(l,m,15)$th and $(l,m,16)$th sites is calculated for $1 \le l \le 50$ and $1 \le m \le 50$. Figure~4 shows the results for $j_{z}$ normalized by $eA$ for $\gamma/A = 0.1$ and $E_{F}/A = 0.2$, $0.1$, $-0.1$, and $-0.2$. The results for $\gamma/A = 0$ are not shown since $j_{z}$ vanishes everywhere in this case. Again, Fig.~4 indicates that $j_{z}$ increases with increasing $E_{F}$ and that its sign is opposite for the cases of $E_{F} > 0$ and $E_{F}<0$. Note that a relatively large current appears near the side surface, particularly near the corners, while a small current flowing in the opposite direction is distributed beneath the side surface. The former is induced by chiral surface states, while the latter originates from bulk states. These two contributions cancel each other out if they are integrated over the cross section; thus, the total charge current in the $z$ direction completely vanishes. \begin{figure}[btp] \begin{center} \includegraphics[width=6.0cm]{Weyl-j10_0_3xz.eps} \includegraphics[width=6.0cm]{Weyl-j10_0_4xz.eps} \includegraphics[width=6.0cm]{Weyl-j10_0_45xz.eps} \end{center} \caption{(Color online) Spatial distribution of $\langle j_{y}\rangle$ normalized by $eA$ in the cross section parallel to the $xz$ plane at $E_{F}/A = 0.1$; (a) $W/A = 3$, (b) $4$, and (c) $4.5$. 
} \end{figure} \begin{figure}[btp] \begin{center} \includegraphics[width=6.0cm]{Weyl-j10_1_2xy.eps} \includegraphics[width=6.0cm]{Weyl-j10_1_3xy.eps} \end{center} \caption{(Color online) Spatial distribution of $\langle j_{z}\rangle$ normalized by $eA$ in the cross section parallel to the $xy$ plane at $E_{F}/A = 0.1$; (a) $W/A = 2$, and (b) $3$. } \end{figure} Finally, we examine the effect of disorder~\cite{chen2,shapourian,LOS,gorbar, takane2,yoshimura} on the spontaneous charge current by adding the impurity potential term \begin{align} H_{\rm imp} = \sum_{l,m,n} |l,m,n \rangle \left[ \begin{array}{cc} V_{1}^{(l,m,n)} & 0 \\ 0 & V_{2}^{(l,m,n)} \end{array} \right] \langle l,m,n| \end{align} to the Hamiltonian $H$, where $V_{1}$ and $V_{2}$ are assumed to be uniformly distributed within the interval of $[-W/2,+W/2]$. Previous studies have shown that a Weyl semimetal phase is robust against weak disorder up to a critical disorder strength, $W_{\rm c}$,~\cite{chen2,shapourian} and that chiral surface states also persist as long as $W < W_{\rm c}$.~\cite{takane2} In the case of $B/A =0.5$, $t/A = 0.5$, and $k_{0}a = 3\pi/4$, the critical disorder strength is $W_{\rm c}/A \sim 4$. The ensemble averages, $\langle j_{y}\rangle$ and $\langle j_{z}\rangle$, are calculated over $500$ samples with different impurity configurations at $E_{F}/A = 0.1$ for a given value of $W/A$. In calculating $j_{y}$ and $j_{z}$ for a given impurity configuration, we take account of only the contribution from electron states with an energy $E$ satisfying $0 < E < E_{F}$, assuming that electron states below the band center have no contribution owing to cancellation between them. This assumption is not strictly justified here since particle-hole symmetry is broken by the impurity potential. Nonetheless, this should be a good approximation after taking the ensemble average. Figure~5 shows the results for $\langle j_{y}\rangle$ normalized by $eA$ for $\gamma/A = 0$ and $E_{F}/A = 0.1$ with $W/A = 3$, $4$, and $4.5$. We observe that the circulating charge current is robust against disorder up to $W/A \sim 4$ but is suppressed when $W/A$ exceeds $4$. This behavior is consistent with an observation reported previously.~\cite{takane2} Figure~6 shows the results for $\langle j_{z}\rangle$ normalized by $eA$ for $\gamma/A = 0.1$ and $E_{F}/A = 0.1$ with $W/A = 2$ and $3$. We observe that $\langle j_{z}\rangle$ is significantly suppressed in the case of $W/A = 3$, although the circulating charge current is almost unaffected in this case. This indicates that the charge current in the $z$ direction is more fragile than the circulating charge current against the mixing of chiral surface states and bulk states due to disorder. \section{Summary and Discussion} We theoretically studied a spontaneous charge current due to chiral surface states in the ground state of a Weyl semimetal. We analytically and numerically determined the magnitude of the charge current induced near the side surface of the system. It is shown that no spontaneous charge current appears when the Fermi level, $E_{F}$, is located at the band center. It is also shown that, once $E_{F}$ deviates from the band center, the spontaneous charge current appears to circulate around the side surface of the system and its direction of flow is opposite for the cases of electron doping and hole doping. The circulating current is shown to be robust against weak disorder. 
Let us focus on the two features revealed in this paper: the appearance of a spontaneous charge current except at the band center and the reversal of its direction of flow as a function of $E_{F}$. As they are derived by using a model possessing particle-hole symmetry, a natural question arises: do these features manifest themselves even in the absence of particle-hole symmetry? The answer is yes. The disappearance of the spontaneous charge current reflects the fact that the contribution from chiral surface states is completely canceled out by that from bulk states. As the spontaneous charge current due to chiral surface states is localized near the side surface, this cancellation should be mainly caused by bulk states with a short wavelength, occupying the bottom region of the energy band far from the band center. Hence, if $E_{F}$ is varied near the band center, the contribution from bulk states is almost unaffected but that from chiral surface states is significantly changed, depending on $E_{F}$ in a roughly linear manner. This behavior should take place regardless of the presence or absence of particle-hole symmetry. Thus, we expect that the features of the spontaneous charge current still manifest themselves even in the absence of particle-hole symmetry, although the point of the disappearance shifts away from the band center. \section*{Acknowledgment} This work was supported by JSPS KAKENHI Grant Numbers JP15K05130 and JP18K03460.
\section*{Acknowledgments} The interest of this study became apparent to us during a discussion with Chris Vale at the BEC 2019 conference in Sant Feliu de Guixols. We also thank Hadrien Kurkjian for helpful remarks on calculating the density-density response function, even if he ultimately preferred to collaborate with other people on the subject \cite{arxiv}. Finally, note that the date of submission of this work is much later than that of the corresponding preprint \cite{hal}; indeed, we had to withdraw our previous submission to a journal which proved unable to produce a referee report. \section{Introduction} It is now possible to prepare in the laboratory a gas of cold fermionic atoms of spin $1/2$ trapped in a flat-bottomed box potential \cite{buxida1}, hence spatially homogeneous \cite{buxida2,buxida3,buxida4}. These atoms experience an attractive van der Waals-type $s$-wave interaction between opposite spin states $\uparrow$ and $\downarrow$, of negligible range $b$ and with a scattering length $a$ tunable by means of a magnetic Feshbach resonance \cite{Thomas2002,Salomon2003,Grimm2004b,Ketterle2004,Salomon2010,Zwierlein2012}. One can arrange for the gas to be unpolarized, that is, for it to contain the same number of particles in $\uparrow$ and $\downarrow$. At the very low temperatures reached experimentally, one may then assume, to a first approximation, that all the fermions bind into $\uparrow\downarrow$ pairs, the equivalent for our neutral system of the Cooper pairs of superconductors, these pairs moreover forming a condensate and a superfluid, as predicted by BCS theory. In the thermodynamic limit, at fixed excitation wave vector $\mathbf{q}$, the zero-temperature excitation spectrum of the system features a broken-pair continuum of the form $\varepsilon_{\mathbf{q}/2+\mathbf{k}}+\varepsilon_{\mathbf{q}/2-\mathbf{k}}$, where $k\mapsto \varepsilon_\mathbf{k}$ is the dispersion relation of a broken-pair fragment and the relative wave vector $\mathbf{k}$ of the two fragments runs over the whole three-dimensional Fourier space. We restrict ourselves here to the usual case, in which the bound pairs have a sufficiently large radius (compared with the mean distance between fermions) to exhibit a pronounced composite-boson character, that is, $k\mapsto\varepsilon_\mathbf{k}$ reaches its minimum at a wave number $k_{0}>0$. If $0<q<2 k_{0}$, the density of states of the broken-pair continuum then exhibits, on the real energy axis, two singularity points $\varepsilon_1(q) < \varepsilon_2(q)$, and even a third one $\varepsilon_3(q) > \varepsilon_2(q)$ for sufficiently small $q/k_0$. By analytic continuation of the eigenenergy equation through the interval $[\varepsilon_1(q),\varepsilon_2(q)]$, one finds that the continuum hosts a collective pair-breaking excitation mode, of complex energy $z_\mathbf{q}$ deviating quadratically in $q$ from its limit $2\Delta$ at $q=0$, where $\Delta$ is the order parameter of the pair condensate, taken real and positive at equilibrium.
This is predicted both in the weak-coupling limit $\Delta\ll\varepsilon_F$ \cite{AndrianovPopov}, where $\varepsilon_F$ is the Fermi energy of the gas, and in the strong-coupling limit $\Delta \approx \varepsilon_F$ \cite{PRL2019,CRAS2019}; according to the time-dependent BCS theory used, it suffices that the chemical potential of the gas be positive, $\mu>0$, for $k_0>0$ to hold. Note that the continuum mode is often called the amplitude mode, or even the Higgs mode \cite{Varma}, but the analogy with high-energy physics is only approximate \cite{Benfatto} and the dispersion relation given in reference \cite{Varma} is incorrect. Note also that other continuum modes can be obtained by analytic continuation through the intervals $[\varepsilon_2(q),+\infty[$, $[\varepsilon_2(q),\varepsilon_3(q)]$ and $[\varepsilon_3(q),+\infty[$ (if $\varepsilon_3(q)$ exists), and that the regime $k_0=0$ ($\mu<0$ according to BCS theory) also admits continuum modes by continuation through $[\varepsilon_1(q),+\infty[$ (there is in this case only one singularity point), even in the limit $\mu/\varepsilon_F\to -\infty$ where the bound pairs reduce to elementary bosons \cite{CRAS2019}. These exotic modes generally have a complex energy $z_\mathbf{q}$ whose real part, at small $q$, lies far from their analytic-continuation interval, and whose imaginary part is large (in absolute value) at large $q$, which makes them hard to observe according to the criteria of reference \cite{PRL2019}; we therefore ignore them here. The question is how to bring the ordinary continuum branch to light. The question matters because this branch has so far remained unobserved: the oscillations of the order parameter at angular frequency $2\Delta/\hbar$ detected in a superconductor \cite{Shimano,Sacuto} or in a gas of cold fermionic atoms \cite{Koehl} after a spatially homogeneous excitation have a damped power-law time dependence $\sin(2\Delta t/\hbar+\phi)/t^\alpha$ rather than a purely sinusoidal one \cite{Kogan,Altshuler,Gurarie}, and do not result from a discrete mode of the superfluid but simply from a generic effect of the edge $2\Delta$ of the broken-pair continuum \cite{Orbach,Stringari}; indeed, at $q=0$, as the theory predicts, the eigenenergy equation admits as its only root the zero energy, starting point of the Anderson-Bogoliubov acoustic branch, and certainly not the energy $2\Delta$, even after analytic continuation to the lower half-plane \cite{PRL2019}. By contrast, under certain conditions, the linear response functions (or susceptibilities) $\chi$ of the system to an excitation of angular frequency $\omega$ and {\sl nonzero} wave vector $\mathbf{q}$ must exhibit, as functions of the angular frequency, a peak centered near $\omega= \re z_\mathbf{q}/\hbar$ and of approximate half-width $\im z_\mathbf{q}/\hbar$, on top of the broad response background of the continuum, which is characteristic of a modal contribution. 
This is indeed the case for the modulus-modulus response function $\chi_{|\Delta| |\Delta|}(\mathbf{q},\omega)$, where one looks at the effect, on the modulus of the order parameter, of an excitation of the modulus of the order parameter {\sl via} a spatiotemporal modulation of the scattering length \cite{PRL2019}; such an excitation is, however, difficult to implement. By contrast, the density excitation of a cold-atom gas by a Bragg pulse, by means of two laser beams with angular-frequency difference $\omega$ and wave-vector difference $\mathbf{q}$, is a well-established laboratory technique, which has given rise to a genuine Bragg spectroscopy \cite{Bragg1,Bragg2,Bragg3,Bragg4,Bragg5}. Depending on whether, following the Bragg pulse, one measures the variation of the total density $\rho$ of the gas (by absorption or dispersion of a laser beam \cite{KetterleVarenna}) or of the modulus $|\Delta|$ of its order parameter (by interferometry \cite{Iacopo} or by bosonization of the bound pairs $\uparrow\downarrow$ through a fast Feshbach ramp \cite{KetterleVarenna,Ketterletourb}), one accesses the response function $\chi_{\rho\rho}(\mathbf{q},\omega)$ or $\chi_{|\Delta|\rho}(\mathbf{q},\omega)$. On the one hand, the density-density susceptibility of a superfluid Fermi gas has been the subject of numerous theoretical \cite{crrth1,crrth2,crrth3,crrth4,crrth5,crrth6,crrth7} and experimental \cite{Bragg3,Bragg4,Bragg5} studies, but without the slightest attention being paid to the continuum mode; on the other hand, the modulus-density susceptibility has rarely been computed and, to our knowledge, never measured with cold atoms. The goal of the present work is to fill these two gaps, at least at the theoretical level. \section{Response functions in BCS theory} Our pair-condensed Fermi gas, initially prepared at equilibrium at zero temperature, is subjected to a density excitation, that is, to a perturbation of its Hamiltonian of the form \begin{equation} \label{eq:001} \hat{W}(t)=\int \mathrm{d}^3r\, U(\mathbf{r},t) \sum_{\sigma=\uparrow,\downarrow} \hat{\psi}_\sigma^{\dagger}(\mathbf{r}) \hat{\psi}_\sigma(\mathbf{r}) \end{equation} where the real potential $U(\mathbf{r},t)$ depends on time and space, and the field operators $\hat{\psi}_\sigma(\mathbf{r})$ and $\hat{\psi}^\dagger_\sigma(\mathbf{r})$, written in the Schr\"odinger picture, annihilate and create a fermion in spin state $\sigma$ at point $\mathbf{r}$ and obey the usual fermionic anticommutation relations. When $U(\mathbf{r},t)$ is sufficiently weak or applied for a sufficiently short time, the response of the system on an observable $\hat{O}$ is linear, that is, the deviation $\delta\langle\hat{O}\rangle$ of the mean value of $\hat{O}$ from its equilibrium value is a linear functional of $U$, described by a susceptibility $\chi_{O\rho}$. 
We restrict ourselves here to two observables, the total density $\rho$ and the modulus $|\Delta|$ of the complex order parameter $\Delta$ defined in reference \cite{RMP}: \begin{eqnarray} \label{eq:002a} \delta\rho(\mathbf{r},t) &\!\!\!=\!\!\!& \int\mathrm{d}^3r'\int \mathrm{d} t' \chi_{\rho\rho}(\mathbf{r}-\mathbf{r}',t-t') U(\mathbf{r}',t') \\ \label{eq:002b} \delta|\Delta|(\mathbf{r},t) &\!\!\!=\!\!\!& \int\mathrm{d}^3r'\int \mathrm{d} t' \chi_{|\Delta|\rho}(\mathbf{r}-\mathbf{r}',t-t') U(\mathbf{r}',t') \end{eqnarray} Since the initial state of the system is stationary and spatially homogeneous, the susceptibilities depend only on the time and position differences; they are also causal, hence retarded (they vanish for $t<t'$). In practice, the Bragg excitation mentioned in the introduction corresponds to the light-shift potential $U(\mathbf{r},t)=U_0 \mathrm{e}^{\mathrm{i}(\mathbf{q}\cdot\mathbf{r}-\omega t)}+\mbox{c.c.}$, where the amplitude $U_0$ is complex. It thus gives access, as seen by inserting $U(\mathbf{r},t)$ into (\ref{eq:002a}) and (\ref{eq:002b}), to the spatiotemporal Fourier transform of the susceptibilities: \begin{equation} \label{eq:003} \chi(\mathbf{q},\omega) \equiv \int\mathrm{d}^3r\int\mathrm{d} t\, \mathrm{e}^{\mathrm{i}[(\omega+\mathrm{i}\eta)t-\mathbf{q}\cdot\mathbf{r}]} \chi(\mathbf{r},t) \quad\quad(\eta\to 0^+) \end{equation} The factor $\mathrm{e}^{-\eta t}$ ensuring the convergence of the time integral is customary for retarded Green's functions \cite{CCT}. To obtain an approximate expression of the susceptibilities from the time-dependent variational BCS theory, it is convenient to use a cubic lattice model of spacing $b$ in the quantization volume $[0,L]^3$ with periodic boundary conditions, letting $b$ tend to zero and $L$ to infinity at the end of the calculations. The fermions of mass $m$ have the free-space dispersion relation $\mathbf{k}\mapsto E_\mathbf{k}=\hbar^2 k^2/2m$ on the first Brillouin zone $\mathcal{D}=[-\pi/b,\pi/b[^3$, extended beyond it by periodicity. They interact through the binary contact potential $V(\mathbf{r}_i,\mathbf{r}_j)=g_0 \delta_{\mathbf{r}_i,\mathbf{r}_j}/b^3$, with a bare coupling constant $g_0$ adjusted so as to reproduce the scattering length $a$ of the experiment \cite{Houches,livreZwerger}: $1/g_0=1/g-\int_{\mathcal{D}}\frac{\mathrm{d}^3k}{(2\pi)^3} \frac{1}{2E_\mathbf{k}}$ where $g=4\pi\hbar^2a/m$ is the effective coupling constant. The grand-canonical ground state of the gas of chemical potential $\mu$ (of arbitrary sign in this section) is approximated by the usual state $|\psi_0\rangle$, a coherent state of pairs of fermions breaking the $U(1)$ symmetry: it is the vacuum of the fermionic quasiparticle annihilation operators $\hat{\gamma}_{\mathbf{k}\sigma}$ defined below. The BCS variational ansatz extends to the time-dependent case \cite{Varenna}, and the order parameter is simply \begin{equation} \Delta(\mathbf{r},t)=g_0 \langle\psi(t)|\hat{\psi}_\downarrow(\mathbf{r})\hat{\psi}_\uparrow(\mathbf{r})|\psi(t)\rangle \end{equation} To obtain $\chi(\mathbf{q},\omega)$, the simplest route is to consider an excitation impulsive in time and with a well-defined nonzero wave vector, $U(\mathbf{r},t)=\hbar\epsilon \cos(\mathbf{q}\cdot\mathbf{r})\delta(t)$, with $\epsilon\to 0$. 
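As a simple consistency check (ours, not in the original), one may insert the Bragg drive directly into (\ref{eq:002a}): with the $\eta$ prescription of (\ref{eq:003}) and the reality of $\chi_{\rho\rho}(\mathbf{r},t)$, which implies $\chi_{\rho\rho}(-\mathbf{q},-\omega)=\chi_{\rho\rho}(\mathbf{q},\omega)^*$, the forced density modulation in the long-time regime reads \[ \delta\rho(\mathbf{r},t) = U_0\, \chi_{\rho\rho}(\mathbf{q},\omega)\, \mathrm{e}^{\mathrm{i}(\mathbf{q}\cdot\mathbf{r}-\omega t)} + \mbox{c.c.} \] so that measuring the amplitude and phase of $\delta\rho$ at $(\mathbf{q},\omega)$ gives direct access to $\chi_{\rho\rho}(\mathbf{q},\omega)$.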
Time-dependent perturbation theory gives the state vector just after the perturbation, to first order in $\epsilon$: \begin{equation} |\psi(0^+)\rangle\simeq\left[1\!-\!\mathrm{i}\epsilon\!\int\!\mathrm{d}^3r\, \cos(\mathbf{q}\cdot\mathbf{r}) \sum_{\sigma} \hat{\psi}_\sigma^{\dagger}(\mathbf{r}) \hat{\psi}_\sigma(\mathbf{r})\right]|\psi(0^-)\rangle \simeq \left[1\!+\!\frac{\mathrm{i}\epsilon}{2}\sum_{\mathbf{k}} (U_+V_-\!+\!U_-V_+) (\hat{\gamma}^\dagger_{+\uparrow}\hat{\gamma}^\dagger_{-\downarrow}\!+\!\mathbf{q}\leftrightarrow\!-\mathbf{q}) \right]|\psi_0\rangle \label{eq:005} \end{equation} Here, the indices $+$ and $-$ refer to the wave numbers $\mathbf{q}/2+\mathbf{k}$ and $\mathbf{q}/2-\mathbf{k}$, the coefficients $U_\mathbf{k}=[\frac{1}{2}(1+\xi_\mathbf{k}/\varepsilon_\mathbf{k})]^{1/2}$ and $V_\mathbf{k}=[\frac{1}{2}(1-\xi_\mathbf{k}/\varepsilon_\mathbf{k})]^{1/2}$ are the amplitudes of the quasiparticle modes on particles and holes, and $\mathbf{k}\mapsto\varepsilon_\mathbf{k}=(\xi_\mathbf{k}^2+\Delta^2)^{1/2}$ is their BCS dispersion relation, with $\xi_\mathbf{k}=E_\mathbf{k}-\mu+g_0\rho/2$.\footnote{We had to use the modal expansions of the field operators, $\hat{\psi}_\uparrow(\mathbf{r})=L^{-3/2} \sum_\mathbf{k} (\hat{\gamma}_{\mathbf{k}\uparrow} U_\mathbf{k}-\hat{\gamma}^\dagger_{-\mathbf{k}\downarrow}V_\mathbf{k}) \mathrm{e}^{\mathrm{i}\mathbf{k}\cdot\mathbf{r}}$ and $\hat{\psi}_\downarrow(\mathbf{r})=L^{-3/2} \sum_\mathbf{k} (\hat{\gamma}_{\mathbf{k}\downarrow}U_\mathbf{k}+\hat{\gamma}^\dagger_{-\mathbf{k}\uparrow}V_\mathbf{k})\mathrm{e}^{\mathrm{i}\mathbf{k}\cdot\mathbf{r}}$.} The evolution of the density and of the order parameter for a very weak coherent state of {\sl quasiparticle} pairs such as (\ref{eq:005}) (which of course remains a strongly coherent state of pairs of fermionic atoms) has been computed with time-dependent BCS theory \cite{HadrienThese,Annalen}; by specializing the general expressions of reference \cite{CRAS2019}, we find for $t>0$: \begin{equation} \begin{pmatrix} 2\mathrm{i}\Delta(\delta\theta)_\mathbf{q}(t) \\ 2(\delta|\Delta|)_\mathbf{q}(t) \\ (\delta\rho)_\mathbf{q}(t) \end{pmatrix} = (-\mathrm{i}\epsilon) \int_{\mathrm{i}\eta+\infty}^{\mathrm{i}\eta-\infty} \frac{\mathrm{d} z}{2\mathrm{i}\pi} \frac{\mathrm{e}^{-\mathrm{i} z t/\hbar}}{M(z,\mathbf{q})} \begin{pmatrix} \Sigma_{13}(z,\mathbf{q}) \\ \Sigma_{23}(z,\mathbf{q}) \\ \Sigma_{33}(z,\mathbf{q}) \end{pmatrix} \end{equation} where $\theta(\mathbf{r},t)=\arg\Delta(\mathbf{r},t)$ is the phase of the order parameter and $X_\mathbf{q}$ is the Fourier coefficient of $X(\mathbf{r})$ on the plane wave $\mathrm{e}^{\mathrm{i}\mathbf{q}\cdot\mathbf{r}}$. 
We had to introduce the $3\times 3$ matrix, a function of the complex energy $z$ in the upper half-plane and of the wave number, \begin{equation} \label{eq:007} M(z,\mathbf{q}) = \begin{pmatrix} \Sigma_{11}(z,\mathbf{q}) & \Sigma_{12}(z,\mathbf{q}) & \phantom{1}-g_0 \Sigma_{13}(z,\mathbf{q}) \\ \Sigma_{12}(z,\mathbf{q}) & \Sigma_{22}(z,\mathbf{q}) & \phantom{1}-g_0 \Sigma_{23}(z,\mathbf{q}) \\ \Sigma_{13}(z,\mathbf{q}) & \Sigma_{23}(z,\mathbf{q}) & 1-g_0\Sigma_{33}(z,\mathbf{q}) \end{pmatrix} \end{equation} described by the six independent coefficients: \begin{equation} \begin{array}{lll} \displaystyle \Sigma_{11}(z,\mathbf{q})=\!\!\int_{\mathcal{D}}\!\frac{\mathrm{d}^3k}{(2\pi)^3} \left(\frac{(\varepsilon_++\varepsilon_-)(\varepsilon_+\varepsilon_-+\xi_+\xi_-+\Delta^2)}{2\varepsilon_+\varepsilon_-[z^2-(\varepsilon_++\varepsilon_-)^2]}+\frac{1}{2\varepsilon_\mathbf{k}}\right) & \quad & \displaystyle \Sigma_{12}(z,\mathbf{q})=\!\!\int_{\mathcal{D}}\!\frac{\mathrm{d}^3k}{(2\pi)^3} \frac{z(\xi_+\varepsilon_-+\xi_-\varepsilon_+)}{2\varepsilon_+\varepsilon_-[z^2-(\varepsilon_++\varepsilon_-)^2]} \\ \displaystyle \Sigma_{22}(z,\mathbf{q})=\!\!\int_{\mathcal{D}}\!\frac{\mathrm{d}^3k}{(2\pi)^3} \left(\frac{(\varepsilon_++\varepsilon_-)(\varepsilon_+\varepsilon_-+\xi_+\xi_--\Delta^2)}{2\varepsilon_+\varepsilon_-[z^2-(\varepsilon_++\varepsilon_-)^2]}+\frac{1}{2\varepsilon_\mathbf{k}}\right) & \quad & \displaystyle \Sigma_{13}(z,\mathbf{q})=\!\!\int_{\mathcal{D}}\!\frac{\mathrm{d}^3k}{(2\pi)^3} \frac{z \Delta (\varepsilon_++\varepsilon_-)}{2\varepsilon_+\varepsilon_-[z^2-(\varepsilon_++\varepsilon_-)^2]} \\ \displaystyle \Sigma_{33}(z,\mathbf{q})=\!\!\int_{\mathcal{D}}\!\frac{\mathrm{d}^3k}{(2\pi)^3} \frac{(\varepsilon_++\varepsilon_-)(\varepsilon_+\varepsilon_--\xi_+\xi_-+\Delta^2)}{2\varepsilon_+\varepsilon_-[z^2-(\varepsilon_++\varepsilon_-)^2]} & \quad & \displaystyle \Sigma_{23}(z,\mathbf{q})=\!\!\int_{\mathcal{D}}\!\frac{\mathrm{d}^3k}{(2\pi)^3} \frac{\Delta (\varepsilon_++\varepsilon_-)(\xi_++\xi_-)}{2\varepsilon_+\varepsilon_-[z^2-(\varepsilon_++\varepsilon_-)^2]} \end{array} \end{equation} Specializing (\ref{eq:002a}) and (\ref{eq:002b}) to the impulsive excitation considered, one readily obtains for our lattice model: \begin{eqnarray} \label{eq:009a} \chi_{\rho\rho}(\mathbf{q},\omega) = (0,0,1) \cdot \frac{2}{M(z,\mathbf{q})} \left. \begin{pmatrix} \Sigma_{13}(z,\mathbf{q}) \\ \Sigma_{23}(z,\mathbf{q}) \\ \Sigma_{33}(z,\mathbf{q}) \end{pmatrix}\right\vert_{z=\hbar\omega+\mathrm{i}\eta} \\ \label{eq:009b} \chi_{|\Delta|\rho}(\mathbf{q},\omega) = (0,1,0)\cdot \frac{1}{M(z,\mathbf{q})} \left. \begin{pmatrix} \Sigma_{13}(z,\mathbf{q}) \\ \Sigma_{23}(z,\mathbf{q}) \\ \Sigma_{33}(z,\mathbf{q}) \end{pmatrix}\right\vert_{z=\hbar\omega+\mathrm{i}\eta} \end{eqnarray} which we shall exploit for continuous space in what follows. \section{In the BEC-BCS crossover} \label{sec:CBE-BCS} In the one-parameter space measuring the interaction strength, the BEC-BCS crossover is the intermediate zone between the strong-attraction limit $k_F a\to 0^+$, where the ground state of the system is a Bose-Einstein condensate (BEC) of $\uparrow\downarrow$ dimers of size $a$ small compared to the mean interparticle distance, and the weak-attraction limit $k_F a\to 0^-$, where the ground state is a BCS state of bound $\uparrow\downarrow$ pairs of size $\xi \approx \hbar^2 k_F/m\Delta$ much larger than the interatomic distance. 
Here $k_F=(3\pi^2\rho)^{1/3}$ is the Fermi wave number of the gas. The crossover thus corresponds to the regime $1\lesssim k_F |a|$, which is also the one in which superfluid gases of cold fermionic atoms are prepared in practice, so as to avoid strong particle losses by three-body collisions in the BEC limit and exceedingly low critical temperatures in the BCS limit. Now, our lattice model must always have a spacing $b\ll 1/k_F$ to faithfully reproduce the physics of continuous space. One thus also has $b\ll|a|$, and one is led to let $b$ tend to zero at fixed scattering length. The first Brillouin zone $\mathcal{D}$ is then replaced by the whole Fourier space $\mathbb{R}^3$. In the definition of the $\Sigma_{ij}$, this introduces no ultraviolet divergence; it does trigger one in the expression of $1/g_0$, which makes $g_0$ tend to zero in the matrix (\ref{eq:007}): \begin{equation} \label{eq:010} g_0\to 0 \end{equation} The dispersion relation of the BCS excitations reduces to $\varepsilon_k=[(E_k-\mu)^2+\Delta^2]^{1/2}$; it admits a minimum $\Delta$ at a wave number $k_0>0$, and the gas admits a collective excitation branch of the continuum starting from $2\Delta$ \cite{PRL2019,CRAS2019}, when the chemical potential $\mu$ is $>0$, as we assume from now on. Similarly, the expressions (\ref{eq:009a}) and (\ref{eq:009b}) of the susceptibilities simplify as follows: \begin{equation} \label{eq:011} \chi_{\rho\rho}= \frac{2\left|\begin{array}{lll} \Sigma_{11} & \Sigma_{12} & \Sigma_{13}\\ \Sigma_{12} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{13} & \Sigma_{23} & \Sigma_{33} \end{array}\right|}{\left|\begin{array}{ll} \Sigma_{11} & \Sigma_{12}\\ \Sigma_{12} & \Sigma_{22} \end{array}\right|}\,, \quad\quad \chi_{|\Delta|\rho}= \frac{\left|\begin{array}{ll} \Sigma_{11} & \Sigma_{13}\\ \Sigma_{12} & \Sigma_{23} \end{array}\right|} {\left|\begin{array}{ll} \Sigma_{11} & \Sigma_{12}\\ \Sigma_{12} & \Sigma_{22} \end{array}\right|} \end{equation} where $|A|$ is the determinant of the matrix $A$, and where the dependence of the $\chi$ on $(\mathbf{q},\omega)$ and of the $\Sigma_{ij}$ on $(z,\mathbf{q})$ is kept implicit for brevity.\footnote{\label{note:Cramer} Indeed, the vector $\mathbf{x}=M^{-1}\mathbf{s}$ is the solution of the system $M \mathbf{x}=\mathbf{s}$, which we solve by Cramer's method, with $\mathbf{s}$ the column vector of (\ref{eq:009a}) and (\ref{eq:009b}), to obtain its coordinates $x_2$ and $x_3$.} The value of $\chi_{\rho\rho}$ agrees with equation (118) of reference \cite{crrth7}. Let us now look for a possible trace of the continuum mode in the response functions in the limit of small wave numbers, $q\to 0$, where the imaginary part of the complex energy $z_\mathbf{q}$ is smallest. This is where the mode has {\sl a priori} the best chance of standing out clearly, as a narrow peak in the angular frequency $\omega$, from the broad response background of the continuum. 
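To make footnote \ref{note:Cramer} fully explicit (this little verification is ours), recall Cramer's rule: for $M\mathbf{x}=\mathbf{s}$, the $j$-th unknown is $x_j=\det M_j/\det M$, where $M_j$ is $M$ with its $j$-th column replaced by $\mathbf{s}$. With $g_0=0$, the third column of $M$ in (\ref{eq:007}) reduces to $(0,0,1)^{\rm T}$, so that expanding along it gives \[ \det M = \left|\begin{array}{ll} \Sigma_{11} & \Sigma_{12}\\ \Sigma_{12} & \Sigma_{22} \end{array}\right|, \qquad \chi_{|\Delta|\rho}=x_2=\frac{1}{\det M} \left|\begin{array}{lll} \Sigma_{11} & \Sigma_{13} & 0\\ \Sigma_{12} & \Sigma_{23} & 0 \\ \Sigma_{13} & \Sigma_{33} & 1 \end{array}\right| = \frac{\left|\begin{array}{ll} \Sigma_{11} & \Sigma_{13}\\ \Sigma_{12} & \Sigma_{23} \end{array}\right|} {\left|\begin{array}{ll} \Sigma_{11} & \Sigma_{12}\\ \Sigma_{12} & \Sigma_{22} \end{array}\right|} \] in agreement with (\ref{eq:011}); $\chi_{\rho\rho}=2x_3$ is obtained in the same way.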
In this limit, under the condition $q\ll k_0\min(\Delta/\mu,(\mu/\Delta)^{1/2})$ \cite{CRAS2019}, the continuum branch has a quadratic dispersion in $q$: \begin{equation} \label{eq:012} z_\mathbf{q} \underset{q\to 0}{=} 2\Delta + \zeta_0 \frac{\hbar^2 q^2}{2m} \frac{\mu}{\Delta} + O(q^3) \end{equation} The coefficient $\zeta_0$ is the solution in the lower complex half-plane of a transcendental equation given in reference \cite{PRL2019} (generalizing that of \cite{AndrianovPopov} to the BEC-BCS crossover). It is plotted as a function of the interaction strength in figure \ref{fig:fit}, and its limiting behaviors are given in \cite{PRL2019}; let us retain here that its real part is positive for $\Delta/\mu<1.21$ and negative otherwise. Let us therefore compute the response functions on the real axis of angular frequencies near the continuum mode, imposing the same scaling law in wave number as in (\ref{eq:012}): \begin{equation} \label{eq:013} \hbar\omega \equiv 2\Delta + \nu \frac{\hbar^2q^2}{2m} \frac{\mu}{\Delta}\quad\quad (\nu\in\mathbb{R}) \end{equation} that is, letting $q$ tend to zero at arbitrary fixed reduced frequency $\nu$. This amounts to observing the frequency axis around $2\Delta/\hbar$ under a diverging magnification $\propto q^{-2}$. In what follows, it will be convenient to set \begin{equation} \label{eq:014} \zeta=\nu+\mathrm{i}\eta \quad\quad(\eta\to 0^+) \end{equation} by analogy with $z=\hbar\omega+\mathrm{i}\eta$ in (\ref{eq:009a}) and (\ref{eq:009b}). Since the singularity points $\varepsilon_1(q)$ and $\varepsilon_2(q)$ of the density of states of the broken-pair continuum at fixed $\mathbf{q}$, mentioned in the introduction, satisfy $\varepsilon_1(q)=2\Delta$ and $\varepsilon_2(q)=2\Delta+(\mu/\Delta)\hbar^2q^2/2m+O(q^4)$ \cite{PRL2019}, one expects in the limit $q\to 0$ that the response functions admit frequency singularities at $\nu=0$ and $\nu=1$, and that the analytic continuation of the eigenenergy equation to $\im\zeta <0$ must be performed by passing between the points $\nu=0$ (i.e.\ $\hbar\omega=\varepsilon_1(q)$) and $\nu=1$ (i.e.\ $\hbar\omega=\varepsilon_2(q)+O(q^4)$) to find the continuum mode. The fact that these singularities survive at finite $\nu$ when $q\to 0$ immediately dispels a false hope: even though its energy width tends to zero as $q^2$, the continuum mode cannot lead to a very narrow peak, in relative value, in the frequency dependence of the response functions, because the ``broad'' contribution of the continuum exhibits a structure varying on the same scale $\propto q^2$. The third singularity point $\varepsilon_3(q)=(\mu^2+\Delta^2)^{1/2}+O(q^2)$, on the other hand, is irrelevant, being pushed out to $\nu=+\infty$ by the change of scale (\ref{eq:013}). 
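Equivalently (our restatement of the magnification just mentioned), since $\varepsilon_2(q)-\varepsilon_1(q)=(\mu/\Delta)\hbar^2q^2/2m+O(q^4)$, the reduced frequency locates $\hbar\omega$ within the window between the first two singularities, \[ \nu \underset{q\to 0}{\simeq} \frac{\hbar\omega-\varepsilon_1(q)}{\varepsilon_2(q)-\varepsilon_1(q)} \] so that $\nu=0$ and $\nu=1$ mark the two singularity points at every small $q$.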
For the computation of $\chi$ itself, let us take up the method of reference \cite{PRL2019} for expanding the quantities $\Sigma_{ij}$ in powers of $q$ at fixed $\nu$: it does not suffice to naively expand the integrals over $\mathbf{k}$ under the integral sign in powers of $q$; the shell of wave vectors $\mathbf{k}$ of thickness $\propto q$ around the sphere $k=k_0$ must be treated separately, as it generally gives the dominant contribution, since the energy denominators there take extremely small values, of order $q^2$.\footnote{After switching to spherical coordinates of axis $\mathbf{q}$, one splits the integration domain over the modulus $k$ into the two components $I=[k_0-Aq,k_0+Aq]$ and $J=\mathbb{R}^+\setminus I$, where $A\gg 1$ is fixed. On $J$, one directly expands the integrand in powers of $q$ at fixed $k$. On $I$, one performs the change of variable $k=k_0+qK$ and then expands the integrand in powers of $q$ at fixed $K$. One collects the contributions of $I$ and $J$ order by order in $q$, then lets $A$ tend to $+\infty$ in the coefficients of the monomials $q^n$. In the results (\ref{eq:017}), the contribution of $J$ is negligible except in $\delta\Sigma_{23}^{[2]}$ and $\delta\Sigma_{33}^{[2,3]}$.} The forms (\ref{eq:011}) would lead to rather lengthy calculations because, at dominant order in $q$, the first and last columns of the determinants in the numerators are equivalent ($\Sigma_{i3}\sim\Sigma_{1i}$, $1\leq i\leq 3$), which yields a vanishing result and forces one to seek the subdominant orders of the $\Sigma_{ij}$. Fortunately, one can perform clever linear combinations without changing the value of these determinants, subtracting the first column from the last and then, only in $\chi_{\rho\rho}$, subtracting the first row from the third, so that: \begin{equation} \label{eq:015} \chi_{\rho\rho}=\frac{2\left|\begin{array}{lll} \Sigma_{11} & \Sigma_{12} & \delta\Sigma_{13}\\ \Sigma_{12} & \Sigma_{22} & \delta\Sigma_{23} \\ \delta\Sigma_{13} & \delta\Sigma_{23} & \delta\Sigma_{33} \end{array}\right|}{\left|\begin{array}{ll} \Sigma_{11} & \Sigma_{12}\\ \Sigma_{12} & \Sigma_{22} \end{array}\right|}, \quad\quad \chi_{|\Delta|\rho}=\frac{\left|\begin{array}{ll} \Sigma_{11} & \delta\Sigma_{13}\\ \Sigma_{12} & \delta\Sigma_{23} \end{array}\right|} {\left|\begin{array}{ll} \Sigma_{11} & \Sigma_{12}\\ \Sigma_{12} & \Sigma_{22} \end{array}\right|}\hspace{-3mm} \end{equation} with \begin{equation} \label{eq:016} \delta\Sigma_{13}\equiv \Sigma_{13}-\Sigma_{11}, \quad \delta\Sigma_{23}\equiv \Sigma_{23}-\Sigma_{12},\quad \delta\Sigma_{33}\equiv \Sigma_{33}+\Sigma_{11}-2\Sigma_{13} \end{equation} One then expands these $\delta\Sigma$ after recomputing their integrands by linear combination of the integrands of the $\Sigma_{ij}$. 
It suffices here to know the dominant order of the $\delta\Sigma$ and that of the remaining $\Sigma_{ij}$, except for $\delta\Sigma_{33}$, where the subdominant order is required: \begin{eqnarray} \label{eq:017} &\hspace{-5mm}&\check{\Sigma}_{11}^{[-1]}=\frac{\check{\Delta}}{8\mathrm{i}\pi} \asin \frac{1}{\sqrt{\zeta}}\,,\quad\check{\Sigma}_{22}^{[1]}=\frac{\zeta\asin\frac{1}{\sqrt{\zeta}} + \sqrt{\zeta-1}}{16\mathrm{i}\pi\check{\Delta}}\,,\quad\check{\Sigma}_{12}^{[0]}=\frac{\sqrt{\mathrm{e}^{2\tau}\!-\!1}}{-(2\pi)^2} \left[\re\Pi(\mathrm{e}^{\tau},\mathrm{i} \mathrm{e}^{\tau})\!-\!\Pi(-\mathrm{e}^{\tau},\mathrm{i} \mathrm{e}^{\tau})\!+\!\frac{K(\mathrm{i}\mathrm{e}^{\tau})}{\sh\tau}\right]\,,\nonumber \\ &\hspace{-5mm}& \delta\check{\Sigma}_{13}^{[1]}=\frac{\mathrm{i}\sqrt{\zeta-1}}{16\pi\check{\Delta}}\,,\quad\delta\check{\Sigma}_{23}^{[2]}=\frac{(2/3-\zeta)}{2\check{\Delta}^2}\check{\Sigma}_{12}^{[0]}-\frac{\sqrt{1\!-\!\mathrm{e}^{-2\tau}}}{24\pi^2} [E(\mathrm{i}\mathrm{e}^\tau)\!-\!\mathrm{e}^\tau\ch\tau K(\mathrm{i}\mathrm{e}^\tau)]\,,\nonumber\\ &\hspace{-5mm}& \delta\check{\Sigma}_{33}^{[2]}=\frac{\sqrt{1\!-\!\mathrm{e}^{-2\tau}}}{24\pi^2\check{\Delta}}[E(\mathrm{i}\mathrm{e}^\tau)+\coth\tau\, K(\mathrm{i}\mathrm{e}^\tau)]\,,\quad\delta\check{\Sigma}_{33}^{[3]}=\frac{(\zeta\!-\!2)\sqrt{\zeta\!-\!1}+\zeta^2\asin\frac{1}{\sqrt{\zeta}}}{64\mathrm{i}\pi\check{\Delta}^3} \end{eqnarray} where $\check{\Sigma}_{ij}^{[n]}$ ($\delta\check{\Sigma}_{ij}^{[n]}$) is the coefficient of $\check{q}^n$ in the Taylor expansion of $\check{\Sigma}_{ij}$ ($\delta\check{\Sigma}_{ij}$). The first three identities already appear in \cite{PRL2019,CRAS2019}. The caron accent signals the nondimensionalization of energies by $\mu$ ($\check{\Delta}=\Delta/\mu$), of wave numbers by $k_0$ ($\check{q}=q/k_0$) and of the $\Sigma_{ij}$ by $k_0^3/\mu$, with here $k_0=(2m\mu)^{1/2}/\hbar$. We used the expressions of several integrals over $k$ in terms of complete elliptic integrals $K$, $E$ and $\Pi$ of the first, second and third kind \cite{GR}, in particular those given in reference \cite{Strinati}, after setting $\sh\tau=1/\check{\Delta}$ for brevity.\footnote{We also used, for $x\geq 0$, $E(\mathrm{i} x)=\sqrt{1\!+\!x^2}E(x/\sqrt{1\!+\!x^2})$ and $K(\mathrm{i} x)=K(x/\sqrt{1\!+\!x^2})/\sqrt{1\!+\!x^2}$ \cite{GR}. 
Thus, for example, $\int_0^{+\infty}\!\mathrm{d}\check{k} \frac{\check{k}^2\check{\xi}_k}{\check{\varepsilon}_k^3}=K(\mathrm{i}\mathrm{e}^\tau)\sqrt{\mathrm{e}^{2\tau}\!-\!1}/2$.} We finally obtain the small-$q$ behavior of the response functions: \begin{eqnarray} \label{eq:018a} &\hspace{-5mm}&\check{\chi}_{\rho\rho}\stackrel{\nu\,\mbox{\scriptsize fixed}}{\underset{q\to 0}{=}} 2\check{q}^2\delta\check{\Sigma}_{33}^{[2]} +2\check{q}^3\left[\delta\check{\Sigma}_{33}^{[3]} +\frac{2\check{\Sigma}^{[0]}_{12}\delta\check{\Sigma}_{13}^{[1]}\delta\check{\Sigma}_{23}^{[2]}-\check{\Sigma}_{22}^{[1]}\delta\check{\Sigma}_{13}^{[1]2}-\check{\Sigma}_{11}^{[-1]}\delta\check{\Sigma}_{23}^{[2]2}}{\check{\Sigma}_{11}^{[-1]}\check{\Sigma}_{22}^{[1]}-\check{\Sigma}_{12}^{[0]2}}\right]+O(\check{q}^4)\\ &\hspace{-5mm}&\chi_{|\Delta|\rho}\stackrel{\nu\,\mbox{\scriptsize fixed}}{\underset{q\to 0}{=}}\check{q} \frac{\check{\Sigma}_{11}^{[-1]}\delta\check{\Sigma}_{23}^{[2]}-\check{\Sigma}_{12}^{[0]}\delta\check{\Sigma}_{13}^{[1]}}{\check{\Sigma}_{11}^{[-1]}\check{\Sigma}_{22}^{[1]}-\check{\Sigma}_{12}^{[0]2}}+O(\check{q}^2) \label{eq:018b} \end{eqnarray} where $\chi_{\rho\rho}$ is expressed in units of $k_0^3/\mu$ and $\chi_{|\Delta|\rho}$ is naturally dimensionless. A more explicit dependence on the reduced frequency $\nu$ is obtained by taking the limit $\eta\to 0^+$ as in equation (\ref{eq:014}): \begin{equation} \asin\frac{1}{\sqrt{\zeta}} \underset{\eta\to 0^+}{\to} \left\{ \begin{array}{ll} -\mathrm{i}\argsh \frac{1}{\sqrt{-\nu}} & \mbox{if }\nu<0 \\ \frac{\pi}{2}-\mathrm{i}\argch \frac{1}{\sqrt{\nu}} & \mbox{if }0<\nu<1 \\ \asin\frac{1}{\sqrt{\nu}} & \mbox{if }1<\nu \end{array} \right. \quad\mbox{and}\quad \sqrt{\zeta-1} \underset{\eta\to 0^+}{\to} \left\{ \begin{array}{ll} \mathrm{i}\sqrt{1-\nu} & \mbox{if }\nu<1 \\ \sqrt{\nu-1} & \mbox{if }\nu>1 \end{array} \right. \end{equation} It allows one to check that the coefficient $\check{\chi}_{\rho\rho}^{[3]}$ of the order-$\check{q}^3$ contribution in (\ref{eq:018a}) and the coefficient $\check{\chi}_{|\Delta|\rho}^{[1]}$ of the order-$\check{q}$ contribution in (\ref{eq:018b}) have a vanishing imaginary part for $\nu<0$ (this was to be expected and occurs in the response functions at all orders in $q$, since the density of states of the broken-pair continuum $\mathbf{k}\mapsto\varepsilon_{\mathbf{q}/2+\mathbf{k}}+\varepsilon_{\mathbf{q}/2-\mathbf{k}}$ vanishes at energies $<2\Delta$) and a vanishing real part for $\nu>1$ (this for a physical reason unknown to us, and which does not hold at all orders in $q$). 
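As a quick check (ours) of the first of these statements, take $\nu<0$: the limits above give, for instance, \[ \check{\Sigma}_{11}^{[-1]} \to -\frac{\check{\Delta}}{8\pi}\,\argsh\frac{1}{\sqrt{-\nu}}\,, \qquad \delta\check{\Sigma}_{13}^{[1]} \to -\frac{\sqrt{1-\nu}}{16\pi\check{\Delta}}\,, \] which are real, and all the other coefficients in (\ref{eq:017}) become real in the same way, so that the imaginary parts of $\check{\chi}_{\rho\rho}^{[3]}$ and $\check{\chi}_{|\Delta|\rho}^{[1]}$ indeed vanish below the continuum edge, as announced.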
These coefficients have a finite, real limit at $\nu=0$, reached slowly (logarithmically, with a deviation varying as $1/\ln|\nu|$), \begin{eqnarray} \label{eq:021a} \lim_{\nu\to 0} \check{\chi}_{\rho\rho}^{[3]}(\nu) &=& -\frac{1}{16\pi\check{\Delta}^3} -32\pi\check{\Delta} [\delta\check{\Sigma}_{23}^{[2]}(\nu=0)]^2 \\ \label{eq:021b} \lim_{\nu\to 0} \check{\chi}_{|\Delta|\rho}^{[1]}(\nu) &=& 16\pi\check{\Delta}\,\delta\check{\Sigma}_{23}^{[2]}(\nu=0) \end{eqnarray} which gives rise to a sharp, vertical-tangent structure in the $\nu$ dependence, as in reference \cite{PRL2019}; they exhibit at $\nu=1$ a $|\nu-1|^{1/2}$ singularity, on the real part for $\nu\to 1^-$ and on the imaginary part for $\nu\to 1^+$, which this time gives rise to a mere angular point with vertical tangent (see figure \ref{fig:gra} below). Let us now analyze the results (\ref{eq:018a},\ref{eq:018b}) physically. First, the dominant term (of order $q^2$) in the density-density response function is of little interest for our study: it is insensitive to the continuum mode, since the functions $\Sigma_{ij}(z,\mathbf{q})$, even after continuation to the lower complex half-plane, have no pole. Fortunately, it constitutes a background independent of the reduced frequency $\nu$, as can be checked on equation (\ref{eq:017}); one can therefore get rid of it experimentally by considering the difference \begin{equation} \label{eq:019} \check{\chi}_{\rho\rho}(\check{q},\nu)-\check{\chi}_{\rho\rho}(\check{q},\nu_0) \end{equation} where $\nu$ is the running variable and the reference reduced frequency $\nu_0$ is fixed. One also notices that this $q^2$ background is real, so that it does not contribute to the imaginary part of $\chi_{\rho\rho}$, which is often what is actually measured in experiments \cite{Bragg5}. By contrast, the subdominant term (of order $q^3$) in $\chi_{\rho\rho}$ is sensitive to the continuum mode: since it contains functions $\Sigma_{ij}$ in its denominator, its analytic continuation to the lower complex half-plane through the interval $\nu\in[0,1]$ admits a pole at $\zeta=\zeta_0$, where the complex number $\zeta_0$ is that of equation (\ref{eq:012}), with a nonzero residue, see figure \ref{fig:res}a. The same conclusion holds for the dominant term (of order $q$) in the modulus-density response function, see figure \ref{fig:res}b.\footnote{\label{note:pro} The analytic continuation of $\zeta\mapsto\check{\chi}_{\rho\rho}^{[3]}$ and $\zeta\mapsto\check{\chi}_{|\Delta|\rho}^{[1]}$ from the upper to the lower half-plane is performed through the interval $[0,1]$ (joining their singularities at $\nu=0$ and $\nu=1$ on the real axis) via the substitutions $\asin\frac{1}{\sqrt{\zeta}} \to \pi-\asin\frac{1}{\sqrt{\zeta}}$ and $\sqrt{\zeta-1}\to -\sqrt{\zeta-1}$, as in reference \cite{PRL2019}. The analytic continuation of the denominator on the right-hand side of equations (\ref{eq:018a}) and (\ref{eq:018b}) yields precisely the function of \cite{PRL2019} of which $\zeta_0$ is a root. 
There is no other continuation interval to consider, because the denominator of $\check{\chi}_{\rho\rho}^{[3]}$ and $\check{\chi}_{|\Delta|\rho}^{[1]}$ (extended to $\mathbb{C}\setminus\mathbb{R}$ by the relations $\Sigma_{ij}(z)=[\Sigma_{ij}(z^*)]^*$) has a branch cut neither on $\nu\in]-\infty,0]$ (owing to the vanishing of the density of states of the broken-pair continuum) nor on $\nu\in[1,+\infty[$ (owing to the compensation of the discontinuities of $\check{\Sigma}_{11}^{[-1]}$ and $\check{\Sigma}_{22}^{[1]}$ on this half-line, which amount to a mere sign change).}\ \footnote{\label{note:pro2} The analytic continuation of the functions $\Sigma_{ij}(z)$ from $\im z>0$ to $\im z<0$ is given by $\Sigma_{ij}\!\!\downarrow\!\!(z)=\Sigma_{ij}(z)-\frac{2\mathrm{i}\pi}{(2\pi)^3} \rho_{ij}(z)$ in terms of the spectral densities defined on $\mathbb{R}^+$ by $\im\Sigma_{ij}(\varepsilon+\mathrm{i} 0^+)=-\frac{\pi}{(2\pi)^3}\rho_{ij}(\varepsilon)$ \cite{Noz}, which it suffices here to know on the continuation interval between the first two branch points $\varepsilon_1(q)=2\Delta$ and $\varepsilon_2(q)$ \cite{PRL2019}. Then $\rho_{13}(\varepsilon)=(\frac{2m}{\hbar^2})^2 \frac{\pi\varepsilon}{2 q} K(\mathrm{i}\sh\Omega)$, $\rho_{23}(\varepsilon)=0$, $\rho_{33}(\varepsilon)=(\frac{2m}{\hbar^2})^2 \frac{\pi\Delta}{q} E(\mathrm{i}\sh\Omega)$, with $\Omega=\argch\frac{\varepsilon}{2\Delta}$. The other $\rho_{ij}(\varepsilon)$ can be found in references \cite{PRL2019,CRAS2019}.} Unsurprisingly, in these figures plotted as functions of $q$, the residue $Z$ of the continuum mode is complex, since both the response functions and the energy $z_\mathbf{q}$ are complex. In practice, the phase of $Z$ matters little (the pole is unique, and its contribution cannot interfere with that of another pole); it is its modulus that characterizes the spectral weight of the mode. We therefore plot $|Z|$ (or, more precisely, the coefficient of its dominant order in $q$) as a function of the interaction strength in figure \ref{fig:fit}. \begin{figure}[t] \centerline{\includegraphics[width=6cm,clip=]{figresa1.pdf}\hspace{2cm}\includegraphics[width=6cm,clip=]{figresb1.pdf}} \centerline{\includegraphics[width=6cm,clip=]{figresa2.pdf}\hspace{2cm}\includegraphics[width=6cm,clip=]{figresb2.pdf}} \caption{Complex spectral weight of the continuum mode in the density-density (column a) and modulus-density (column b) response functions, that is, residue $Z_{\rho\rho}$ or $Z_{|\Delta|\rho}$ of the analytic continuation of $\chi_{\rho\rho}(\mathbf{q},z/\hbar)$ or $\chi_{|\Delta|\rho}(\mathbf{q},z/\hbar)$ from $\im z>0$ to $\im z<0$ (through the interval between their first two singularities $\varepsilon_1(q)$ and $\varepsilon_2(q)$ on $\mathbb{R}^+$) at the pole $z_\mathbf{q}$ (complex energy of the mode), as a function of the wave number $q$, on the $\mu>0$ side of the BEC-BCS crossover (\ref{eq:010}), for $\check{\Delta}=1/2$ (row 1) and $\check{\Delta}=2$ (row 2). The residues have been divided by the power of $q$ ensuring a finite, nonzero limit at $q=0$. In black: real part; in red: imaginary part. Circles joined by dotted lines: numerical results drawn from the general forms (\ref{eq:011}); the analytic continuation is performed as in \cite{PRL2019,CRAS2019} by the spectral-density method of reference \cite{Noz}, see our note \ref{note:pro2}. 
Horizontal dashed lines: limit at $q=0$, drawn from the analytic results (\ref{eq:018a},\ref{eq:018b}) continued as in note \ref{note:pro}. Caron accent: nondimensionalization of $\Delta$ by $\mu$ ($\check{\Delta}=\Delta/\mu$), of $q$ by $k_0=(2m\mu)^{1/2}/\hbar$ ($\check{q}=q/k_0$), of $Z_{\rho\rho}$ by $k_0^3$ and of $Z_{|\Delta|\rho}$ by $\mu$. } \label{fig:res} \end{figure} However, physical measurements take place on the real frequency axis. We have therefore plotted $\check{\chi}^{[3]}_{\rho\rho}$ and $\check{\chi}_{|\Delta|\rho}^{[1]}$ as functions of the reduced frequency $\nu$ in figure \ref{fig:gra}, for two values of the interaction strength. The narrow structures hoped for should lie on the analytic-continuation interval $\nu\in [0,1]$, almost directly above the pole $\zeta_0$, hence near the solid green vertical line. For $\check{\Delta}=1/2$ (first row of the figure), we are in the favorable case $\re\zeta_0\in [0,1]$; indeed, $\check{\chi}^{[3]}_{\rho\rho}$ exhibits, on the interval $[0,1]$, a shoulder-shaped structure with a maximum and a minimum, on its real part as well as on its imaginary part; better still, $\check{\chi}^{[1]}_{|\Delta|\rho}$ exhibits, on the same interval, a rather pronounced bump on its real part, admittedly quite far from the green line, and a fairly clear dip on its imaginary part, close to the line. For $\check{\Delta}=2$ (second row of the figure), we are in the unfavorable case $\re\zeta_0<0$; the response functions should then retain a trace of the continuum mode on the interval $\nu\in[0,1]$ only through the wing of the associated complex resonance, no longer in the form of extrema; unfortunately, the observed structures remain essentially the same as for $\check{\Delta}=1/2$, which casts serious doubt on their connection with the continuum mode.\footnote{When $\re\zeta_0<0$, one should not in general hope to see a bump or dip associated with the continuum mode in the response functions on the interval $\nu\in]-\infty,0[$. Indeed, this physical interval is separated from the pole by the end of the branch cut $[0,1]$ that had to be folded down onto $]-\infty, 0]$ to perform the analytic continuation, the other end being folded onto $[1,+\infty[$.} \begin{figure}[t] \centerline{\includegraphics[width=6cm,clip=]{figgraa1.pdf}\hspace{2cm}\includegraphics[width=6cm,clip=]{figgrab1.pdf}} \centerline{\includegraphics[width=6cm,clip=]{figgraa2.pdf}\hspace{2cm}\includegraphics[width=6cm,clip=]{figgrab2.pdf}} \caption{First coefficient sensitive to the continuum mode in the small-wave-number expansion (\ref{eq:018a}, \ref{eq:018b}) of the density-density (column a, order $q^3$) and modulus-density (column b, order $q$) response functions, on the $\mu>0$ side of the BEC-BCS crossover (\ref{eq:010}), for $\check{\Delta}=1/2$ (row 1) and $\check{\Delta}=2$ (row 2), as functions of the reduced frequency $\nu$ of equation (\ref{eq:013}) (using the variable $\nu$ rather than $\omega$ amounts to observing the frequency axis around $2\Delta/\hbar$ with a diverging magnification $\propto q^{-2}$ that exactly compensates the narrowing of the continuum mode as $q\to 0$). Solid black line: real part; solid red line: imaginary part. 
Vertical dotted lines: positions $\nu=0$ and $\nu=1$ of the singularities. Green vertical line: reduced real part $\re\zeta_0$ of the complex energy of the continuum mode. The extrema marked by an arrow are a physical signature of the continuum mode on the real frequency axis (see the text and figure \ref{fig:lie}). Caron accent: nondimensionalization of $\Delta$ by $\mu$, of $q$ by $k_0=(2m\mu)^{1/2}/\hbar$, of $\chi_{\rho\rho}(\mathbf{q},\omega)$ by $k_0^3/\mu$; $\chi_{|\Delta|\rho}(\mathbf{q},\omega)$ is already dimensionless.} \label{fig:gra} \end{figure} To find out what is really going on, we analytically continue the coefficients $\check{\chi}^{[3]}_{\rho\rho}(\zeta)$ and $\check{\chi}_{|\Delta|\rho}^{[1]}(\zeta)$ to the lower complex half-plane $\im\zeta<0$ as in note \ref{note:pro}, then locate the extrema of the real or imaginary part of these coefficients, that is, their abscissa $\nu_R$, on the horizontal line $\zeta=\nu_R+\mathrm{i}\nu_I$ of fixed ordinate $\nu_I$, and finally plot the locus of these extrema as $\nu_I$ spans the interval $]\im\zeta_0,0]$, see figure \ref{fig:lie}. This locus is the union of continuous lines (its connected components); some of them, but not all\footnote{A line of maxima and a line of minima of $\re\chi$ ($\im\chi$) can join at an end point where $\re\partial_{\nu_R}^2\chi\!=\!0$ ($\im\partial_{\nu_R}^2\chi\!=\!0$).}, converge to the pole $\zeta_0$.\footnote{At $\zeta_0$, one generally expects to see one line of minima and one line of maxima of the real part and of the imaginary part converge, that is, four lines in total. Indeed, a meromorphic function $f(\zeta)$ in the vicinity of its pole $\zeta_0$ is equivalent to $Z/(\zeta-\zeta_0)$, where $Z$ is the residue. From the decompositions into real and imaginary parts $Z=a+\mathrm{i} b$, $\zeta-\zeta_0=x+\mathrm{i} y$, and the rescaling $x=y X$, where $y>0$ is the distance from the horizontal line $\zeta=\nu_R+\mathrm{i}\nu_I$ to the pole, we get $f(\zeta)\sim y^{-1} (\frac{a X+b}{X^2+1} +\mathrm{i} \frac{bX-a}{X^2+1})$. Now, for any $u\in\mathbb{R}$, the function $X\mapsto (X+u)/(X^2+1)$ admits on $\mathbb{R}$ a minimum at $-u-\sqrt{1+u^2}$ and a maximum at $-u+\sqrt{1+u^2}$. Hence, if $a\neq 0$ ($b\neq 0$), two lines of extrema of the real (imaginary) part converge to $\zeta_0$. For $\check{\Delta}=1/2$, the line of minima of $\re\check{\chi}^{[3]}_{\rho\rho}$ arrives almost horizontally (from the right, with slope $\simeq -a/2b$, see figure \ref{fig:lie}a1) because the residue is almost purely imaginary, $\check{Z}_{\rho\rho}\sim (-0.003+0.04\mathrm{i})\check{q}^5$, as seen in figure \ref{fig:res}a1.} Any extremum of $\check{\chi}^{[3]}_{\rho\rho}$ or $\check{\chi}_{|\Delta|\rho}^{[1]}$ on the real interval $\nu \in ]0,1[$ connected continuously to the pole by one of these lines is unmistakably a physical signature of the continuum mode, observable in the associated response function; the other extrema on the real axis are not. 
Hence the verdict on figure \ref{fig:gra}: for $\check{\Delta}=1/2$, only the maximum of $\re\check{\chi}^{[3]}_{\rho\rho}$, the maximum of $\im\check{\chi}^{[3]}_{\rho\rho}$, the maximum of $\re\check{\chi}_{|\Delta|\rho}^{[1]}$ and the minimum of $\im\check{\chi}_{|\Delta|\rho}^{[1]}$ on $\nu\in]0,1[$ are physical signatures of the continuum mode; for $\check{\Delta}=2$, only the maximum of $\re\check{\chi}^{[3]}_{\rho\rho}$, the maximum of $\im\check{\chi}^{[3]}_{\rho\rho}$ and the maximum of $\re\check{\chi}_{|\Delta|\rho}^{[1]}$ on $\nu\in]0,1[$ qualify. \begin{figure}[t] \centerline{\includegraphics[width=6cm,clip=]{nfigliea1.pdf}\hspace{2cm}\includegraphics[width=6cm,clip=]{nfiglieb1.pdf}} \centerline{\includegraphics[width=6cm,clip=]{nfigliea2.pdf}\hspace{2cm}\includegraphics[width=6cm,clip=]{nfiglieb2.pdf}} \caption{Locus of the extrema of the functions $\nu_R\mapsto\re\chi^{[n]}\!\!\!\downarrow\!\!(\zeta=\nu_R+\mathrm{i}\nu_I)$ (in black) and $\nu_R\mapsto\im\chi^{[n]}\!\!\!\downarrow\!\!(\zeta=\nu_R+\mathrm{i}\nu_I)$ (in red) as the imaginary part $\nu_I$ of $\zeta$ varies. Here $\chi^{[n]}\!\!\!\downarrow\!\!(\zeta)$ is the coefficient of $q^n$ in the small-$q$ expansion of the susceptibility $\chi(\mathbf{q},\omega)$ at fixed $\nu$ in (\ref{eq:013}), see equations (\ref{eq:018a},\ref{eq:018b}), analytically continued from $\im\zeta>0$ to $\im\zeta<0$ through $\nu\in[0,1]$ as indicated by the arrow $\downarrow$ in the notation $\chi\!\!\downarrow$. Column (a): $\chi=\chi_{\rho\rho}$ and $n=3$; column (b): $\chi=\chi_{|\Delta|\rho}$ and $n=1$. The Fermi gas is in the BEC-BCS crossover on the $\mu>0$ side: $\Delta/\mu=1/2$ (row 1) and $\Delta/\mu=2$ (row 2). Solid line: the extremum is a maximum. Dashed line: the extremum is a minimum. Blue cross: reduced complex energy $\zeta_0$ of the continuum mode. Vertical dotted lines: positions $\nu=0$ and $\nu=1$ of the singularities of $\chi^{[n]}(\nu)$ on the real axis.} \label{fig:lie} \end{figure} Ultimately, it remains to be seen to what extent the position and spectral weight of the continuum mode can be extracted from measurements of the response functions on the reduced-frequency interval $\nu\in [0,1]$. To this end, we propose a very simple fit of the susceptibilities $\chi(\mathbf{q},\omega)$ by the sum of the pole contribution of the collective mode and of a slowly varying affine background describing the broad response of the continuum: \begin{equation} \label{eq:022} \check{\chi}_{|\Delta|\rho}^{[1]}(\nu)|_{\rm fit} = \frac{A}{\nu-B} +C +D\nu \quad (A,B,C,D\in\mathbb{C}) \end{equation} taking as example the modulus-density response limited to its dominant order in $q$. The fit function is balanced in its quest for accuracy, since it describes the background with the same number of adjustable complex parameters ($C$ and $D$) as the resonance ($A$ and $B$), that is, one parameter more than in reference \cite{PRL2019}. The fit is performed on a subinterval $[\nu_1,\nu_2]$ of $[0,1]$ so as to avoid the singularities at the edges. 
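To illustrate how the fit (\ref{eq:022}) can be carried out in practice, here is a minimal numerical sketch (ours, not the authors' code; the synthetic data and the starting values are hypothetical), which splits the four complex parameters into eight real unknowns and feeds the stacked real and imaginary parts of the residual to a standard least-squares routine: \begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def model(nu, A, B, C, D):
    # Pole plus affine background, as in Eq. (22): A/(nu - B) + C + D*nu
    return A / (nu - B) + C + D * nu

def fit_chi(nu, chi):
    # Split the 4 complex parameters into 8 real unknowns; stack the
    # real and imaginary parts of the residual for the real-valued solver.
    def residual(p):
        A, B, C, D = (p[0] + 1j*p[1], p[2] + 1j*p[3],
                      p[4] + 1j*p[5], p[6] + 1j*p[7])
        r = model(nu, A, B, C, D) - chi
        return np.concatenate([r.real, r.imag])
    # Hypothetical starting point: small resonance, pole below the real
    # axis inside [0, 1] (A must not start at 0, or B would have no
    # effect on the residual and the Jacobian would be singular).
    p0 = [0.1, 0.0, 0.5, -0.2, 0.0, 0.0, 0.0, 0.0]
    sol = least_squares(residual, p0)
    return (sol.x[0] + 1j*sol.x[1], sol.x[2] + 1j*sol.x[3],
            sol.x[4] + 1j*sol.x[5], sol.x[6] + 1j*sol.x[7])

# Synthetic test data of exactly the form (22), with hypothetical values
nu = np.linspace(0.1, 0.9, 60)          # 60 points, as in the paper's fits
zeta0, Z0 = 0.25 - 0.30j, 0.10 + 0.05j  # mode position and residue
chi = Z0/(nu - zeta0) + (0.2 - 0.1j) + (0.05 + 0.02j)*nu
A, B, C, D = fit_chi(nu, chi)
print(B, A)  # B recovers zeta0 and A recovers Z0 on noise-free data
\end{verbatim} On noise-free data of exactly the form (\ref{eq:022}) the solver recovers the pole and residue; on the true coefficients $\check{\chi}^{[1]}_{|\Delta|\rho}(\nu)$ the accuracy is that discussed around figure \ref{fig:fit}.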
The result is very encouraging for the modulus-density response, see figure \ref{fig:fit}a: one obtains a good approximation of the complex energy and spectral weight of the mode, even for $\check{\Delta}>1.21$, where $\re \zeta_0<0$ and the pole no longer lies below the analytic-continuation (and measurement) interval $\nu\in [0,1]$. By contrast, the result is poor for the density-density response, see figure \ref{fig:fit}b, except perhaps for the width of the resonance. To understand this difference in success between the observables $|\Delta|$ and $\rho$, we computed the relative height $h_{\rm rel}$ of the resonance contribution above the background.\footnote{The function $\chi(\nu)$ being given, one defines the background by $F(\nu)=\chi(\nu)-Z_0/(\nu-\zeta_0)$, $\zeta_0$ being the pole of the analytic continuation of $\chi$ to the lower complex half-plane and $Z_0$ the associated residue. Then $h_{\rm rel}=|Z_0/(\nu_0-\zeta_0)|/|F(\nu_0)|$ with $\nu_0=\max(\nu_1,\re\zeta_0)$.} For $\check{\Delta}<2$, we always find $h_{\rm rel}>1$ for the modulus of the order parameter, but $h_{\rm rel}<1$ for the density. For example, for $\check{\Delta}=1/2$, $h_{\rm rel}^{|\Delta|\rho}\simeq 1.8$ whereas $h_{\rm rel}^{\rho\rho}\simeq 0.37$. The problem is thus that the complex resonance does not emerge sufficiently from the background in the density-density response. This problem becomes prohibitive in the weak-interaction limit, where $h_{\rm rel}^{\rho\rho}\to 0$, whereas it does not arise for the modulus, since $h_{\rm rel}^{|\Delta|\rho}\to 2.338\ldots$ for $\nu_1<\re\zeta_0$, see section \ref{sec:bcs}. \begin{figure}[t] \centerline{\includegraphics[width=6cm,clip=]{figfita.pdf}\hspace{2cm}\includegraphics[width=6cm,clip=]{figfitb.pdf}} \caption{Extraction of the complex energy of the continuum mode and of its spectral weight by the four-complex-parameter fit (\ref{eq:022}) of the response functions $\chi(\mathbf{q},\omega)$ in the BEC-BCS crossover on the $\mu>0$ side (the value $\check{\Delta}=0.2$ corresponding to $k_{\rm F}a\simeq -1$ in BCS theory). (a) From a fit of the coefficient $\check{\chi}^{[1]}_{|\Delta|\rho}(\nu)$ of $\check{q}$ in the expansion (\ref{eq:018b}) of the modulus-density response function. (b) Same for the coefficient $\check{\chi}^{[3]}_{\rho\rho}(\nu)$ of $\check{q}^3$ in (\ref{eq:018a}). In black (red): real (imaginary) part of the coefficient $\zeta_0$ in the quadratic start (\ref{eq:012}) of the complex energy. In green: spectral weight $\Pi_0$ of the mode in the response function considered (this is the modulus of the residue of the analytic continuation of $\check{\chi}^{[n]}$ at $\zeta_0$, so that $\Pi_0=\lim_{\check{q}\to 0} \check{\Delta}|\check{Z}_{|\Delta|\rho}|/\check{q}^3$ in (a) and $\Pi_0=\lim_{\check{q}\to 0} \check{\Delta}|\check{Z}_{\rho\rho}|/\check{q}^5$ in (b), where the $\check{Z}$ are those of figure \ref{fig:res} and the change of scale (\ref{eq:013}) has been taken into account). Solid line: exact values. Dashed line: values drawn from the fit on the reduced-frequency interval $\nu\in [1/10,9/10]$. Dotted line: same for $\nu\in [1/5, 4/5]$. The fit interval was discretized into 60 regularly spaced points. 
Note the factor of $100$ on $\Pi_0$ in (b).} \label{fig:fit} \end{figure} \section{In the weak-interaction BCS limit} \label{sec:bcs} The weak-attraction regime $k_F a\to 0^-$, although of little relevance to cold-atom experiments, has a definite theoretical interest: it is there that the BCS theory we use is the most quantitative and reliable. A clever way of taking the continuum limit of our lattice model corresponds to the chain of inequalities $0 < -a \ll b \ll 1/k_F$: one can still replace the integration domain $\mathcal{D}$ by $\mathbb{R}^3$ in the definition of the $\Sigma_{ij}$, but since $|a|/b \ll 1$, the interaction between fermions is now in the Born regime of scattering theory, so that one may approximate $g_0$ by $g$ in the matrix (\ref{eq:007}), \begin{equation} \label{eq:023} g_0 \to g \end{equation} and the Hartree mean-field shift survives in the BCS spectrum, with $\xi_\mathbf{k}=E_\mathbf{k}-\mu+\rho g/2$. To first order in $k_F a$, the zero-temperature equation of state of the gas contains precisely this Hartree term, $\mu=\varepsilon_F+\rho g/2$; the corresponding BCS spectrum is simply $\varepsilon_\mathbf{k}=[(E_\mathbf{k}-\varepsilon_F)^2+\Delta^2]^{1/2}$, in agreement with reference \cite{AndrianovPopov}, and reaches its minimum at the wave number $k_0=k_F$. The expressions (\ref{eq:011}) of the response functions obtained for $g_0=0$ no longer suffice. Let us recompute them starting from the general expressions (\ref{eq:009a},\ref{eq:009b}) and performing the substitution (\ref{eq:023}) in them. Proceeding as in note \ref{note:Cramer}, we obtain\footnote{In the $3\times 3$ Cramer determinant in the numerator of $\chi_{|\Delta|\rho}$, one adds to the third column the second one multiplied by $g$, so as to reduce it to a $2\times 2$ determinant.} \begin{equation} \label{eq:025} \chi_{\rho\rho}=\frac{2\left|\begin{array}{lll} \Sigma_{11} & \Sigma_{12} & \Sigma_{13} \\ \Sigma_{12} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{13} & \Sigma_{23} & \Sigma_{33} \end{array}\right|}{\det M}\,,\quad\quad \chi_{|\Delta|\rho} = \frac{\left|\begin{array}{ll} \Sigma_{11} & \Sigma_{13}\\ \Sigma_{12} & \Sigma_{23} \end{array}\right|}{\det M} \end{equation} with the understanding that the $\chi$ are evaluated at $(\mathbf{q},\omega)$ and the $\Sigma_{ij}$ at $(z=\hbar\omega+\mathrm{i} 0^+,\mathbf{q})$. 
Now, the determinant $\det M$ is a linear function of the third column vector of $M$, so that \begin{equation} \label{eq:026} \det M = \left|\begin{array}{ll} \Sigma_{11} & \Sigma_{12}\\ \Sigma_{12} & \Sigma_{22} \end{array}\right| - g \left|\begin{array}{lll} \Sigma_{11} & \Sigma_{12} & \Sigma_{13}\\ \Sigma_{12} & \Sigma_{22} & \Sigma_{23} \\ \Sigma_{13} & \Sigma_{23} & \Sigma_{33} \end{array}\right| \end{equation} Dividing the numerator and denominator of (\ref{eq:025}) by the first term on the right-hand side of (\ref{eq:026}), we bring out the susceptibilities (\ref{eq:011}) obtained for $g_0=0$, which we denote by $\chi^{g_0=0}$ and which allow a very simple writing of the sought susceptibilities at nonzero $g_0=g$: \begin{equation} \label{eq:027} \chi_{\rho\rho}= \frac{\chi_{\rho\rho}^{g_0=0}}{1-\frac{g}{2}\chi_{\rho\rho}^{g_0=0}}\,, \quad\quad \chi_{|\Delta|\rho}=\frac{\chi_{|\Delta|\rho}^{g_0=0}}{1-\frac{g}{2}\chi_{\rho\rho}^{g_0=0}} \end{equation} The forms (\ref{eq:027}) are typical of RPA theory \cite{Anderson}, to which our linearized time-dependent BCS theory is equivalent up to incoming quantum fluctuations \cite{HadrienThese}. Such forms (though not the explicit expressions we give for them) already appear in references \cite{crrth2,crrth3}. It is then easy to resume the study of the response functions in the vicinity of the continuum mode at small wave number, letting $q$ tend to zero at fixed reduced frequency $\nu$ as in (\ref{eq:013}). Since $\chi_{\rho\rho}^{g_0=0}(\mathbf{q},\omega)$ then varies at second order in $q$, the denominators in (\ref{eq:027}) can be approximated by $1$ and the results (\ref{eq:018a},\ref{eq:018b}) carry over directly. Remarkably, the whole small-$q$ discussion of section \ref{sec:CBE-BCS} is in fact independent of the precise value of $g_0$ and applies also to the case $g_0=g$, with the obvious exception of the value of the function $\xi_\mathbf{k}$ and of the BCS spectrum $\varepsilon_\mathbf{k}$, as well as of the position $k_0$ of its minimum. In the present limit $k_F a\to 0^-$, the equilibrium order parameter tends exponentially to zero, $\Delta/\varepsilon_F\sim 8\mathrm{e}^{-2}$ $\exp(-\pi/2 k_F |a|)$ according to BCS theory \cite{Randeriavert}. This greatly simplifies our results. Let us thus give the coefficients of $\check{q}^3$ and of $\check{q}$ in the expansions (\ref{eq:018a},\ref{eq:018b}) of the response functions at dominant order in $\Delta$:\footnote{One uses section 4.6.3 of reference \cite{CRAS2019} for $\Sigma_{12}$ and the known expansions of the elliptic integrals for the rest. 
The expansions (\ref{eq:018a},\ref{eq:018b}) assume $q\xi\ll 1$, hence $\check{q}\ll \check{\Delta}$, whence the order of the limits: $q\to 0$ first, then $\Delta\to 0$.} \begin{eqnarray} \label{eq:028a} \check{\chi}_{\rho\rho}^{[3]}(\nu)\underset{\check{\Delta}\to 0}{\sim} \frac{(\zeta-2)\sqrt{\zeta-1}+\zeta^2\asin \frac{1}{\sqrt{\zeta}} -\frac{2(\zeta-1)}{\asin \frac{1}{\sqrt{\zeta}}}}{32\mathrm{i}\pi \check{\Delta}^3} \\ \label{eq:028b} \check{\chi}_{|\Delta|\rho}^{[1]}(\nu)\underset{\check{\Delta}\to 0}{\sim} \frac{2}{\mathrm{i}\pi}\left[\frac{1+\zeta \ln \frac{\check{\Delta}}{8\mathrm{e}}}{\zeta\asin \frac{1}{\sqrt{\zeta}} +\sqrt{\zeta-1}} -\frac{\frac{1}{2} \ln \frac{\check{\Delta}}{8\mathrm{e}}}{\asin \frac{1}{\sqrt{\zeta}}}\right] \end{eqnarray} where the energies are this time in units of $\varepsilon_F$ ($\check{\Delta}=\Delta/\varepsilon_F$) and the wave numbers in units of $k_0=k_F$ ($\check{q}=q/k_F$). At this order, in contrast to the modulus response, the density response no longer exhibits a pole in its analytic continuation, hence carries no trace of the continuum mode\footnote{One must expand $\check{\chi}_{\rho\rho}^{[3]}(\zeta)$ to order $\check{\Delta}^{-1}$ to find a pole in its analytic continuation, of residue $\zeta_0^{(0)} (1+\zeta_0^{(0)}\ln\frac{\check{\Delta}}{8\mathrm{e}})^2/(2\mathrm{i}\pi^3\check{\Delta}\sqrt{\zeta_0^{(0)}-1})$ with $\zeta_0^{(0)}\simeq 0.2369-0.2956\mathrm{i}$.}; moreover, on the open interval $\nu\in]0,1[$, its real (imaginary) part is a purely increasing (decreasing) function of $\nu$, without any extremum, in contrast with figure \ref{fig:gra}a1. \section{Conclusion} At zero temperature, within the time-dependent BCS approximation, we have computed the linear response $\chi(\mathbf{q},\omega)$ of a superfluid gas of spin-$1/2$ fermions to a Bragg excitation of wave vector $\mathbf{q}$ and angular frequency $\omega$, of the kind routinely realized in cold-atom experiments. For a chemical potential $\mu>0$, we studied this response analytically in the limit of small wave numbers $q\to 0$, the deviation of $\hbar\omega$ from the edge $2\Delta$ of the broken-pair continuum being scaled $\propto q^2$, like that of the complex energy $z_\mathbf{q}$ of the continuum mode, a mode unobserved to date; although many theoretical works have been devoted to the response functions of a Fermi superfluid, our analytic results on $\chi$ in this narrow frequency window $\hbar\omega -2 \Delta =O(q^2)$ are, to our knowledge, original. In the BEC-BCS crossover, where the order parameter $\Delta$ is comparable to $\mu$, the continuum mode gives rise to frequency bumps or dips in the response functions of the density and of the modulus of the order parameter. A simple fit of these functions by the sum of a complex resonance and of a frequency-affine background allows one to estimate the complex energy $z_\mathbf{q}$ and the spectral weight of the mode, with good accuracy for the modulus response, even when $\re z_\mathbf{q}<2\Delta$, so that the mode does not lie below the analytic-continuation interval (in the theory) and measurement interval (in the experiment). This augurs well for an observation in the near future. 
\section*{Acknowledgements} The interest of this study became apparent to us during a discussion with Chris Vale at the BEC 2019 conference in Sant Feliu de Guixols. We also thank Hadrien Kurkjian for useful remarks on the calculation of the density-density response function, even though he ultimately preferred to collaborate with others on the subject \cite{arxiv}. Let us note, finally, that the submission date of the present work is much later than that of the corresponding preprint \cite{hal}; we indeed had to withdraw our previous submission to a journal that proved incapable of producing a referee report.
\section{Non-stationary quantum walks on the cycle} Consider a non-oriented graph where all the $N$ nodes have the same degree $d$ and assume that, at each time step, a walker makes a choice out of a set of $d$ elements, $\{ 1,...,d \}$, a {\it (generalized) coin}, with probabilities $p_1,p_2,..., p_d$, respectively. The walker starts from a given node of the graph and moves in a direction determined by the choice in $\{1,...,d\}$. After time $t$, the walker will have a probability $P(j,t)$ of being found in the node $j$, $j=1,...,N$. Such a system is known as a {\it random walk} on a graph. A {\it quantum walk} is the quantum counterpart of a random walk in that both the walker and the coin are seen as quantum systems of dimensions $N$ and $d$, respectively. At each step an operation $C$ is performed on the coin system and then an operation is performed on the walker system. The latter operation depends on the state of the coin system. Quantum walks have recently received considerable attention due to the fact that they can model quantum algorithms and generate interesting quantum states. There are several review papers on quantum walks, their use, dynamics, implementations and generalizations (see, e.g., \cite{Kempe1}, \cite{Kendon1}). In most studies presented so far, the coin operation $C$ is fixed and repeated at each time step. We shall call this type of quantum walk {\it stationary}, while quantum walks where the coin operation is allowed to change at each time step will be called {\it non-stationary}. Studies exist on how the parameters of $C$ affect the behavior of the quantum walk \cite{KenCon}. The non-stationary case has been considered in both numerical and analytic studies where the coin operation is allowed to change at each step according to a prescribed sequence or at random \cite{quattro}, \cite{uno}, \cite{tre}, \cite{due}. It is shown that, for certain walks, the presence of random noise in $C$ at each step, the so-called {\it unitary noise}, causes a behavior similar to that of the classical random walk. A non-stationary setting can also be considered for other types of random walks, such as classical walks on groups (as for example the Heisenberg group) \cite{French1}, \cite{French2}, \cite{French3}. These systems are of current interest as models of quantum dynamics. The role of the coin process is played by a dynamical system which may be characterized by a time-varying transformation, therefore giving rise to a non-stationary random walk on a group. Similar questions to the ones treated here, in particular concerning the set of achievable distributions, can be asked in that setting as well. In this paper, we approach the study of non-stationary quantum walks from the point of view of {\it design} and {\it control} \cite{MikoBook}. We consider the coin operation $C$ as a control variable which we can change at each step in order to obtain a desired behavior. The first questions that arise in this setting are therefore about the type of behavior that can be obtained (in particular the type of probability distributions) and whether there are significant differences with the stationary case. This paper is a first study in this direction. Current proposals for implementations of stationary quantum walks (see \cite{Kempe1} and references therein) may be modified in order to obtain a non-stationary walk. This is discussed for example in \cite{tre} for a specific experimental proposal, where a variable coin operation can be obtained by varying the duration of a laser pulse.
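Purely as a numerical point of reference for the classical behavior invoked above and below, here is a minimal simulation sketch (in Python; the function and parameter names are ours and purely illustrative, not from any cited implementation) of an unbiased classical random walk on a cycle; for an odd number of nodes the empirical distribution $P(j,t)$ tends to the uniform value $1/N$ at large $t$:
\begin{verbatim}
import random
from collections import Counter

def classical_walk_on_cycle(N=7, steps=200, trials=5000):
    # Unbiased classical random walk on the N-cycle: at each step a
    # fair coin (d = 2) decides a move of +1 or -1 (mod N).
    counts = Counter()
    for _ in range(trials):
        j = 0                      # walker starts at node 0
        for _ in range(steps):
            j = (j + random.choice((1, -1))) % N
        counts[j] += 1
    # Empirical P(j, t); for N odd this tends to 1/N for every j.
    return [counts[j] / trials for j in range(N)]

print(classical_walk_on_cycle())
\end{verbatim}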
\vspace{0.25cm} The quantum walk on the cycle is the simplest finite dimensional quantum walk. The study of stationary quantum walks on the cycle was started in \cite{Aharonov}. In this case, as a consequence of the reversibility of the evolution, the probability distribution $P(j,t)$ does not converge to a constant value as $t \rightarrow \infty$. This is in contrast with the classical random walk on the cycle, whose probability distribution converges to a uniform distribution independently of the initial state. For this reason, a Cesaro type of alternative probability distribution is introduced, which is the average of $P(j,t)$ over an interval of time $[0,t)$. With this definition, a uniform limit distribution is obtained for an odd number of positions $N$, which is independent of the initial state as long as the latter is localized at one given position. For $N$ even, there is a much richer behavior and different limit distributions are obtained for different initial states, as discussed in \cite{cinque}, \cite{sei}. From an experimental point of view, having a uniform Cesaro type limit probability distribution means that there is equal probability of finding the walker in any one of the positions by measuring at a random time over a very large interval. In dealing with non-stationary walks on the cycle, the first question concerns the type of (non-Cesaro) probability distributions that can be obtained. This question is of interest in the use of random walks for algorithmic purposes. In fact, there exist several computational algorithms which are based on sampling from a given set of objects according to a prescribed distribution \cite{Sinclair}. These algorithms are referred to as {\it randomized algorithms}. If one uses a quantum walk to implement one of these algorithms, one can obtain the desired sample by measuring the position of the walker. One natural question concerns the set of possible distributions available. We shall answer this question in Lie algebraic terms in this paper for a non-stationary walk on the cycle and will show that it is possible for the probability distribution to converge to a {\it uniform} distribution. There are at least two reasons to consider the uniform distribution with special attention. One is that it offers an example of a {\it limit} distribution (in the non-Cesaro sense) which is not available in the stationary case. In fact, we will show that it is possible to reach a separable state of the form $|1\rangle \otimes |w\rangle$, where $|1\rangle$ is a state of a two dimensional coin and $|w\rangle$ is a state of the walker with all positions equiprobable (cf. formulas (\ref{tobeshown}) and (\ref{sulmod}) below). Since the coin operation is arbitrary, we can set it equal to the identity in the following steps, and in the subsequent evolution the walker will simply move from one position to the next around the cycle, so that the probability of each position remains uniform. The probability distribution therefore reaches the uniform value and then stays constant. This is an example of a behavior different from the stationary case. The second reason to consider the uniform distribution in more detail is that this is the limit of the corresponding classical random walk on the cycle. Therefore, if an algorithm uses this feature of the classical counterpart, it can be implemented with the non-stationary quantum walk. For example, a randomized algorithm which requires sampling from a uniform distribution can be implemented by measuring the position of the walker at a large time.
If we use the Cesaro definition of the probability distribution, we could perform a measurement but would have to select (again) a random time over a large interval. A quantum walk on the cycle is a bipartite quantum system ${\cal C} \otimes {\cal W}$, where the system ${\cal C}$, the coin, is a two level system with orthonormal basis states $|+1 \rangle$ and $|-1 \rangle$. The system $\cal W$, the walker, is an $N$-level system with orthonormal basis states $|0 \rangle,$ $ |1 \rangle,$...,$|N-1\rangle$. At the $t$-th time-step, one performs a coin operation of the form $C_t \otimes {\bf 1}$, where $C_t$ is an arbitrary (special) unitary operation on the two dimensional Hilbert space associated to ${\cal C}$, i.e., an element of $SU(2)$. This is followed by a conditional shift $S$ on the Hilbert space associated to ${\cal W}$ defined as $$ S|c\rangle \otimes |j \rangle =|c\rangle \otimes |(j +c) \,\text{mod}\, N\rangle. $$ By considering the standard basis $|e_j\rangle $, $j=1,...,2N$, defined by $|e_j\rangle:=|1 \rangle \otimes |j-1\rangle$, and $|e_{j+N}\rangle :=|-1\rangle \otimes |j-1 \rangle$, $j=1,...,N$, the matrix representation of the operator $C_t \otimes {\bf 1}$ is $C_t \otimes {\bf 1}_{N \times N}$, where ${\bf 1}_{N \times N}$ is the $N \times N$ identity,\footnote{We replace this notation by ${\bf 1}$ when there is no ambiguity on the dimension.} $\otimes$ denotes the Kronecker product of matrices, and $C_t \in SU(2)$. The matrix representation of the operator $S$ is the block diagonal matrix $\text{diag}(F,F^T)$, where $F$ is the basic circulant permutation matrix, that is, \be{F} S:=\text{diag}(F,F^T), \qquad F:=\pmatrix{0 & 0 & \cdot & \cdot & \cdot & 0 & 1 \cr 1 & 0 & \cdot & \cdot & \cdot & 0 & 0 \cr 0 & 1 & \cdot & \cdot & \cdot & 0 & 0 \cr \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \cr \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \cr \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \cr 0 & \cdot & \cdot & \cdot & \cdot & 1 & 0}. \end{equation} \vspace{0.25cm} The probability of finding the walker in state $|j-1\rangle$, $j=1,...,N$, is the sum of the probabilities of finding the state of the composite system ${\cal C} \otimes {\cal W}$ in $|1\rangle \otimes |j -1\rangle:=|1,j-1\rangle$ and $|-1\rangle \otimes |j -1\rangle:=|-1,j-1\rangle$. That is, if $|\psi\rangle$ is the state of the composite system, \be{proba} P(j-1,t)=|\langle \psi(t)| 1 ,j -1\rangle|^2+|\langle \psi(t)| -1 , j -1\rangle|^2=|\langle \psi(t)|e_j\rangle|^2 + |\langle \psi(t)|e_{j+N}\rangle|^2. \end{equation} By writing \be{psiw} |\psi \rangle:=\sum_{k=1}^{2N} \alpha_k |e_k\rangle, \qquad \sum_{k=1}^{2N} |\alpha_k|^2=1, \end{equation} we have \be{prt} P(j-1,t)=|\alpha_j|^2+ |\alpha_{j+N}|^2, \qquad j=1,...,N. \end{equation} \vspace{0.25cm} In what follows we shall make the following standing assumption. \vspace{0.25cm} \noindent {\bf Assumption:} $N$ is an {\it odd} number. \vspace{0.25cm} We use this technical assumption in several steps, and in particular in Theorem \ref{evolutions} to show that the matrix $S$ defined in (\ref{F}) is in a certain Lie group (cf. formula (\ref{TogetS})). In the stationary case, problems with the $N$ even case arise from the degeneracy of eigenvalues in the basic $S (C \otimes {\bf 1})$ operation. As we have mentioned, this leads to a very different and richer behavior with respect to the $N$ odd case.
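To fix ideas, the step just described is easy to simulate; the following minimal sketch (in Python with NumPy; the helper names are ours and purely illustrative) builds $S$ from (\ref{F}), applies steps $S(C_t \otimes {\bf 1})$, and reads off $P(j,t)$ as in (\ref{prt}):
\begin{verbatim}
import numpy as np

def cycle_shift(N):
    # S = diag(F, F^T), with F the basic circulant permutation of (F).
    F = np.roll(np.eye(N), 1, axis=0)          # F|j> = |j+1 mod N>
    Z = np.zeros((N, N))
    return np.block([[F, Z], [Z, F.T]])

def walk_step(psi, C, N):
    # One step of the walk: psi -> S (C kron 1_N) psi, with C in SU(2).
    return cycle_shift(N) @ np.kron(C, np.eye(N)) @ psi

def position_distribution(psi, N):
    # P(j-1, t) = |alpha_j|^2 + |alpha_{j+N}|^2, cf. (prt).
    a = np.abs(psi) ** 2
    return a[:N] + a[N:]

N = 5
theta = 0.3                                    # a rotation in SU(2)
C = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
psi = np.zeros(2 * N, dtype=complex)
psi[0] = 1.0                                   # |e_1> = |1> kron |0>
for t in range(3):
    psi = walk_step(psi, C, N)                 # C may depend on t
print(position_distribution(psi, N))
\end{verbatim}
In a non-stationary walk one simply supplies a different $C_t \in SU(2)$ at each step.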
\section{Characterization of the admissible evolutions} In this section, we characterize the set of unitary evolutions available for a non-stationary quantum walk on the cycle, that is, the set of available state transfers. This is the set of finite products of operators of the form $S(C \otimes {\bf 1}_{N \times N})$, where $S$ is defined in (\ref{F}) and $C \in SU(2)$. We denote such a set by ${\bf G}$. The set ${\bf G}$ is a group. It is in fact a Lie group, as is shown in the following theorem. In order to state this theorem, we need to recall some properties of circulant matrices \cite{Davis} and define two Lie algebras. Circulant $N \times N$ matrices with complex entries form a vector space over the real numbers. Each matrix is determined by its first row, since all the other rows can be obtained by cyclic permutation of the first one. Moreover, every complex circulant matrix $R$ can be written as a linear combination, with complex coefficients, of the basic permutation matrix $F$ defined in (\ref{F}) and its powers from $0$ to $N-1$, i.e., \be{Expa} R:=\sum_{l=0}^{N-1}a_l F^l, \end{equation} with $N$ complex coefficients $a_0,...,a_{N-1}$. All circulant matrices commute. If we require that $R$ be not only circulant but also skew-Hermitian, then we must have \be{skewHer} R^\dagger=a_0^* {\bf 1}+\sum_{l=1}^{N-1}a_l^*F^{lT}=-R=-a_0{\bf 1}- \sum_{l=1}^{N-1}a_lF^l, \end{equation} and with a change of index $l \rightarrow N-l$ and using ${F^{N-l}}^T=F^l$, we have \be{skewHer2} R^\dagger=a_0^* {\bf 1}+\sum_{l=1}^{N-1}a_{N-l}^*F^l=-a_0{\bf 1}- \sum_{l=1}^{N-1}a_lF^l. \end{equation} This gives the relations \be{rt} a_0^*=-a_0, \qquad a_{N-l}^*=-a_l, \quad l=1,...,\frac{N-1}{2}. \end{equation} Equations (\ref{rt}) constitute $N$ independent relations on the $2N$ real parameters of $R$ and show that the space of skew-Hermitian circulant matrices is a real vector space of dimension $N$. \vspace{0.25cm} Denote by $\cal L$ the Lie algebra spanned by the $2N \times 2N$ skew-Hermitian matrices of the form \be{ghi} L_1:=\pmatrix{R & 0 \cr 0 & -R} \quad \text{and} \quad L_2:=\pmatrix{0 & Q \cr -Q^\dagger & 0}, \end{equation} with $R$ a skew-Hermitian circulant $N \times N$ matrix and $Q$ a general circulant matrix. It is easily seen that this is in fact a Lie algebra of (real) dimension $3N$; that it is closed under the Lie bracket is a consequence of the fact that the product of two circulant matrices is another circulant matrix. Notice, in particular, that matrices of the type $L_1$ form an Abelian subalgebra of dimension $N$. We denote by $e^{\cal L}$ the connected Lie group associated to ${\cal L}$. \vspace{0.25cm} \bt{evolutions} The set ${\bf G}$ of possible evolutions of a non-stationary quantum walk is the Lie group $e^{\cal L}$. \end{theorem} \begin{proof} We define an auxiliary Lie algebra ${\cal L}'$, prove that ${\bf G}=e^{{\cal L}'}$, and then prove that ${\cal L}={\cal L}'$. The claim then follows from the correspondence between Lie groups and Lie algebras. We denote by ${\cal L}'$ the Lie algebra generated by the set \be{EFFE1} {\cal F}:=\{ su(2)\otimes {\bf 1}, S \left( su(2) \otimes {\bf 1} \right) S^T,..., S^{N-1}\left(su(2) \otimes {\bf 1} \right) S^{(N-1)T}\}, \end{equation} where $S$ is defined in (\ref{F}) and $T$ denotes transposition. To show that ${\bf G} \subseteq e^{{\cal L}'}$, it is enough to show that both $C \otimes {\bf 1}$, $C \in SU(2)$, and $S$ are in $e^{{\cal L}'}$.
This fact is obvious for $C \otimes {\bf 1}$, since this is the exponential of an element in $su(2) \otimes {\bf 1}$. For $S$, we consider the elements $\pmatrix{0 & -1 \cr 1 &0} \otimes {\bf 1}$ and $S^{\frac{N-1}{2}}\left(\pmatrix{0 & 1 \cr -1 &0} \otimes {\bf 1}\right) S^{\left(\frac{N-1}{2}\right)T}$, both in $e^{{\cal L}'}$, and calculate with (\ref{F}) \be{TogetS} \left[\pmatrix{0 & -1 \cr 1 &0} \otimes {\bf 1} \right]\left[S^{\frac{N-1}{2}}\left(\pmatrix{0 & 1 \cr -1 &0} \otimes {\bf 1}\right) S^{\left(\frac{N-1}{2}\right)T}\right]= \end{equation} $$\pmatrix{F^{(N-1)T} & 0 \cr 0 & F^{N-1}}=\pmatrix{F & 0 \cr 0 & F^T}:=S.$$ We have used $F^{(N-1)T}=F$. To show that $e^{{\cal L}'} \subseteq {\bf G}$, it is enough to show that every element of the type $S^j \left( X\otimes {\bf 1} \right)S^{jT}$, with $X \in SU(2)$, $j=0,...,N-1$, can be written as a finite product of elements of the form $S\left(C \otimes {\bf 1}\right)$ with $C \in SU(2)$.\footnote{Recall that every element of a connected Lie group can be obtained as the finite product of exponentials of a set of generators of the corresponding Lie algebra (see, e.g., \cite{JS}) and the exponential map is surjective on $SU(2)$ (see, e.g., \cite{SagleWalde}).} This is readily seen because, with $X\in SU(2)$, for every $j$, \be{obvrel} S^j \left(X \otimes {\bf 1}_{N \times N} \right) S^{jT}= \left( S^j \left( X \otimes {\bf 1}_{N \times N}\right) \right) \left( S^{N-j} \left({\bf 1}_{2 \times 2} \otimes {\bf 1}_{N \times N} \right) \right). \end{equation} \vspace{0.25cm} To conclude the proof, we show that ${\cal L}={\cal L}'$ by showing that ${\cal F} \subseteq {\cal L}$ and that a basis of ${\cal L}$ can be obtained as (repeated) Lie brackets and/or linear combinations of elements of ${\cal F}$ in (\ref{EFFE1}). A general matrix in ${\cal F}$ has the form, with $A \in su(2)$, \be{hjk} S^j \left( A \otimes {\bf 1}_{N \times N} \right) S^{jT}= \end{equation} $$\pmatrix{F^j & 0 \cr 0 & F^{jT}} \pmatrix{ib{\bf 1}_{N \times N}& \alpha {\bf 1}_{N \times N} \cr -\alpha^* {\bf 1}_{N \times N} & -ib{\bf 1}_{N \times N}} \pmatrix{F^{jT} & 0 \cr 0 & F^{j}} = \pmatrix{ib {\bf 1}_{N \times N} & \alpha F^{2j} \cr -\alpha^* (F^{2jT}) & -ib {\bf 1}_{N \times N}}, $$ with arbitrary $b$ real and $\alpha$ complex, $j=0,...,N-1$. This is clearly in ${\cal L}$. Elements of the form $L_2$ in (\ref{ghi}) are real linear combinations of elements of the form $\pmatrix{0 & \gamma F^{k} \cr -\gamma^* F^{kT} & 0}$, which are of the form in (\ref{hjk}) with $b=0$, $\gamma=\alpha$ and $j=\frac{k}{2}$ for $k$ even and $j=\frac{N+k}{2}$ for $k$ odd. A basis for the real elements of the type $L_1$ is given by the $\frac{N-1}{2}$ linearly independent elements \be{lop} \pmatrix{F^j-F^{jT} & 0 \cr 0 & -(F^j-F^{jT})}, \qquad j=1,...,\frac{N-1}{2}. \end{equation} These are obtained as Lie brackets of $\pmatrix{0&{\bf 1}_{N \times N} \cr -{\bf 1}_{N \times N}&0}$ and $\pmatrix{0&F^j \cr -F^{jT}&0}$, which are both of type $L_2$. A basis for the purely imaginary elements of type $L_1$ is given by the $\frac{N+1}{2}$ linearly independent elements of the type \be{ppp} \pmatrix{i(F^j+F^{jT}) & 0 \cr 0 & -i (F^j+F^{jT})}, \qquad j=0,...,\frac{N-1}{2}, \end{equation} which are obtained as Lie brackets of $\pmatrix{0 & F^j \cr -F^{jT} &0}$ and $\pmatrix{0 & i {\bf 1}_{N \times N} \cr i {\bf 1}_{N \times N} & 0}$. This completes the proof of the theorem. \end{proof}
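The matrix identity (\ref{TogetS}) used in the proof is also easy to confirm numerically for small odd $N$; a quick check (a sketch in Python with NumPy, under the same conventions for $F$ and $S$ as in the earlier sketch):
\begin{verbatim}
import numpy as np

def check_identity(N):
    # Verify [ (0,-1;1,0) kron 1 ] S^m [ (0,1;-1,0) kron 1 ] S^{mT} = S
    # for m = (N-1)/2 and N odd, cf. (TogetS).
    F = np.roll(np.eye(N), 1, axis=0)
    Z = np.zeros((N, N))
    S = np.block([[F, Z], [Z, F.T]])
    J = np.array([[0.0, -1.0], [1.0, 0.0]])
    m = (N - 1) // 2
    Sm = np.linalg.matrix_power(S, m)
    lhs = np.kron(J, np.eye(N)) @ Sm @ np.kron(-J, np.eye(N)) @ Sm.T
    return np.allclose(lhs, S)

print(all(check_identity(N) for N in (3, 5, 7, 9)))   # expect: True
\end{verbatim}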
\section{Obtaining the uniform distribution} The Lie group ${\bf G}= e^{\cal L}$, having dimension $3N$, is not isomorphic, for $N \geq 3$, to $SU(2N)$ (which has dimension $4N^2-1$) nor to $Sp(N)$ (which has dimension $N(2N+1)$). Therefore ${\bf G}=e^{\cal L}$ is not transitive on the complex sphere of dimension $2N$, which means that there are state transfers for the quantum system of coin and walker which are not induced by any transformation in ${\bf G}$ \cite{confracon}. Some state transfers are of special interest. In particular, we are interested in whether a state of the form \be{iopl} |\psi_{in}\rangle:=|\psi_{coin} \rangle \otimes |0 \rangle, \end{equation} that is, a state corresponding to the walker with certainty in position $|0\rangle$, can be transferred to a state corresponding to the uniform distribution. This is a state where the probability $P(j-1,t)$ in (\ref{proba}) is equal to $\frac{1}{N}$, for every $j=1,...,N$, at some $t$; that is, the walker is found in any position with the same probability. Since, $\forall \, C \in SU(2)$, $C \otimes {\bf 1}_{N \times N} \in e^{\cal L}$, we can assume, without loss of generality, that $|\psi_{coin}\rangle $ in (\ref{iopl}) is $|1\rangle$, so that the problem is to transfer the state $|e_1 \rangle:=[1,0,...,0]^T$ to a state with the desired property. We shall show in the following that such a state transfer is possible. \bt{Transfer} There exists a matrix $L$ in ${\cal L}$ such that \be{tobeshown} e^L|e_1 \rangle=\pmatrix{r_1 \cr r_2 \cr \vdots \cr r_N \cr 0 \cr 0 \cr \vdots \cr 0}, \end{equation} where \be{sulmod} |r_1|^2=|r_2|^2=\cdot \cdot \cdot =|r_N|^2=\frac{1}{N}. \end{equation} \end{theorem} In order to prove this theorem, we first prove a lemma. Recall the definition of the Fourier matrix $\Phi$ of order $N$ (see, e.g., \cite{Davis}). This is defined so that its conjugate transpose is \be{fi} \Phi^\dagger:=\frac{1}{\sqrt{N}}\pmatrix{ 1 & 1 & 1 & 1& \ldots & 1 \cr 1 & \omega & \omega^ 2 & \omega^3 & \ldots & \omega^{N-1} \cr 1 & \omega^2 & \omega^4 & \omega^6 & \ldots & \omega^{2(N-1)} \cr 1 & \omega^3 & \omega^6 & \omega^9 & \ldots & \omega^{3(N-1)} \cr \vdots & \vdots & \vdots & \vdots & & \vdots \cr 1 & \omega^{N-1} & \omega^{2(N-1)} & \omega^{3(N-1)} & \ldots & \omega^{(N-1)(N-1)}}, \end{equation} where $\omega$ is the $N$-th root of unity, that is, $\omega:=e^{i \frac{2 \pi}{N}}$. The Fourier matrix $\Phi$ is unitary. \bl{basiclemma} Define \be{defp} x_l:=\frac{l(l-1)\pi}{N}, \qquad l=0,1,...,N-1. \end{equation} Then \be{tyh} \pmatrix{r_1 \cr r_2 \cr r_3 \cr \vdots \cr r_N}:=\frac{1}{\sqrt{N}}\Phi^\dagger \pmatrix{e^{i x_0} \cr e^{ix_1} \cr e^{ix_2} \cr \vdots \cr e^{ix_{N-1}}}\end{equation} has the property (\ref{sulmod}). \end{lemma} \begin{proof} From (\ref{fi}) and (\ref{tyh}), we obtain \be{erreh} r_h=\frac{1}{N} \left(1+\sum_{l=1}^{N-1} \omega^{({h-1})l} e^{ix_l} \right), \qquad h=1,...,N. \end{equation} This, using the definition of $\omega$, gives \be{err} r_h=\frac{1}{N} \left(1+\sum_{l=1}^{N-1} e^{\frac{i2\pi({h-1})l}{N}} e^{ix_l} \right). \end{equation} We calculate $|r_h|^2$, $h=1,...,N$, as \be{normaquad} |r_h|^2=r_h^*r_h=\frac{1}{N^2}\sum_{l_1 , l_2=0}^{N-1} e^{i \frac{2 \pi}{N}(l_2 -l_1)(h-1)} e^{i(x_{l_2}- x_{l_1})} =\end{equation} $$ \frac{1}{N}+\frac{2}{N^2}\sum_{\{l_1, l_2\}=0}^{N-1} \text{Re} \left( e^{i\frac{2 \pi}{N}(l_2-l_1)(h-1)} e^{i(x_{l_2}-x_{l_1})} \right).
$$ The sum in the last term runs over all pairs of indices $\{ l_1,l_2 \}$ with $l_1 \not= l_2$, where only one of $\{l_1, l_2\}$ and $\{l_2, l_1\}$ is chosen. Because of the presence of the real part `$\text{Re}$', it does not matter which of the two is chosen. We now show that, with the choice (\ref{defp}), the last term of this expression is zero for every $h$, which will prove the claim that $|r_h|^2=\frac{1}{N}$. It is convenient to re-write the sum by regrouping elements corresponding to $l_2-l_1=p \,\text{mod}\, N$, for $p=1,...,N-1$. This means $l_2-l_1=p$ or $l_1-l_2=N-p$. We have \be{hjl} \sum_{\{l_1, l_2\}=0}^{N-1} \text{Re} \left( e^{i\frac{2 \pi}{N}(l_2-l_1)(h-1)} e^{i(x_{l_2}-x_{l_1})} \right)=\end{equation} $$\sum_{p=1}^{N-1} \text{Re} \left( \sum_{l_2-l_1=p} e^{i(l_2-l_1)(h-1) \frac{2\pi}{N}} e^{i(x_{l_2}-x_{l_1})}+ \sum_{l_1-l_2=N-p} e^{i(l_2-l_1)(h-1) \frac{2\pi}{N}} e^{i(x_{l_2}-x_{l_1})}\right). $$ Making the substitution $l_1=l$ and $l_2=l+p$ in the first term of the sum and the substitution $l_1=l$ and $l_2=l-(N-p)$ in the second term, this sum becomes \be{somme} \sum_{p=1}^{N-1} \text{Re} \left( e^{ip(h-1)\frac{2\pi}{N}} \left( \sum_{l=0}^{N-1-p} e^{i(x_{l+p}-x_l)}+ \sum_{l=N-p}^{N-1} e^{i(x_{l-(N-p)}-x_l)} \right)\right). \end{equation} We now show that, with the choice (\ref{defp}), the content of the innermost parenthesis in the above expression, i.e., \be{Q} M:=M(p):=\sum_{l=0}^{N-1-p} e^{i(x_{l+p}-x_l)}+ \sum_{l=N-p}^{N-1} e^{i(x_{l-(N-p)}-x_l)}, \end{equation} is zero for each $p$, which will conclude the proof of the Lemma. Substituting (\ref{defp}) into (\ref{Q}) and carrying out some algebraic manipulations, we obtain \be{QE} M(p)=\sum_{l=0}^{N-1-p} e^{i\frac{2 \pi}{N}\left(\frac{p(p-1)}{2}+pl \right)}+ \sum_{l=N-p}^{N-1}e^{i\frac{2 \pi}{N}\left(\frac{(N-p)(N-p+1)}{2}-(N-p)l \right)}= \sum_{l=0}^{N-1} e^{i\frac{2 \pi}{N}\left(\frac{p(p-1)}{2}+pl \right)}. \end{equation} Thus we have \be{almostfin} M(p)=e^{i \frac{2 \pi}{N} \frac{p(p-1)}{2}} \sum_{l=0}^{N-1} e^{i \frac{2\pi pl}{N}} = e^{i \frac{2 \pi}{N} \frac{p(p-1)}{2}}\frac{1-e^{i 2 \pi p}}{1-e^{i\frac{ 2 \pi p}{N}}}=0 \ \ \ \ \forall p \, \neq 0 \,\text{mod}\, N. \end{equation} This concludes the proof of the Lemma. \end{proof} We are now ready to prove Theorem \ref{Transfer}. \begin{proof} (Proof of Theorem \ref{Transfer}) We choose $L$ as a matrix of the form $L_1$ in (\ref{ghi}), so that $e^{L}$ has the form \be{formeL} e^{L}=\pmatrix{e^{R} & 0 \cr 0 & e^{-R}}, \end{equation} with $R$ a general skew-Hermitian $N \times N$ circulant matrix. The problem is therefore to find a circulant matrix $R$ so that \be{pa} e^R\pmatrix{1 \cr 0 \cr \vdots \cr 0}= \pmatrix{r_1 \cr r_2 \cr \vdots \cr r_N}, \end{equation} with $r_1,...,r_N$ satisfying (\ref{sulmod}). Any circulant matrix $R$ is diagonalized by the Fourier matrix (\ref{fi}) of the corresponding dimension, that is, \be{pppa} R=\Phi^\dagger \Lambda \Phi, \end{equation} with $\Lambda$ diagonal. Conversely, every matrix of the form on the right-hand side is circulant \cite{Davis}. Moreover, if $\Lambda=\text{diag} (i \lambda_0,i\lambda_1,...,i \lambda_{N-1})$, with $\lambda_l$, $l=0,...,N-1$, real numbers, then $R$ is skew-Hermitian. In this case, we have \be{hjh4} e^{R} \pmatrix{1 \cr 0 \cr \vdots \cr 0}=\Phi^\dagger e^{\Lambda} \Phi \pmatrix{1 \cr 0 \cr \vdots \cr 0}=\Phi^\dagger e^{\Lambda}\frac{1}{\sqrt{N}}\pmatrix{1 \cr 1 \cr \vdots \cr 1}=\Phi^\dagger \frac{1}{\sqrt{N}} \pmatrix{e^{i \lambda_0} \cr e^{i \lambda_1}\cr \vdots \cr e^{i \lambda_{N-1}}}.
\end{equation} Choosing $\lambda_l=x_l$, $l=0,...,N-1$, with the definition (\ref{defp}), the theorem follows from Lemma \ref{basiclemma}. \end{proof} Other states with the same property can be obtained by applying a transformation $U \otimes {\bf 1}$, $U \in SU(2)$, which is in ${\bf G}$. In particular, notice that the state (\ref{tobeshown}) is a separable state. \section{Conclusion} Non-stationary quantum walks have properties which distinguish them from stationary ones. Moreover, they are amenable to study with the methods of quantum control. In fact, several problems, such as obtaining a given evolution, can be seen as control problems where the evolution of the coin plays the role of the control. In this paper we have shown that, in contrast to the stationary case, a non-stationary quantum walk on the cycle may converge to a constant distribution and in particular to a uniform distribution, as for classical random walks. A constructive approach to achieve this and other evolutions of interest for general quantum walks will be the subject of future research. \vspace{0.25cm} \noindent {\bf Acknowledgement} D.D. is grateful to Mark Hillery for helpful conversations on the topic of quantum walks. The authors also thank the reviewers for useful suggestions. D. D'Alessandro's research was supported by NSF under Career Grant ECS-0237925. \vspace{0.25cm}
\section{Introduction} The 6$\,$cm transition of formaldehyde (H$_2$CO, J$_{K_a K_c} = 1_{11} - 1_{10}$) was among the first molecular lines detected in the interstellar medium. This transition is ubiquitously observed in absorption against Galactic continuum sources (e.g., Araya et al. 2002) as well as against the 2.7$\,$K Cosmic Microwave Background (e.g., Palmer et al. 1969). In contrast, only a handful of H$_2$CO 6$\,$cm emitters have been detected: H$_2$CO megamaser emission has been confirmed toward three galaxies (Araya, Baan \& Hofner 2004a), and in our Galaxy H$_2$CO emission has been detected toward five sources: (quasi) thermal emission toward the Orion BN/KL region, and maser emission toward NGC$\,$7538 IRS1, Sgr B2, G29.96$-$0.02, and recently IRAS$\,$18566+0408 (Araya et al. 2005). All Galactic emitters are found in close proximity to signposts of young massive stars, indicating that the physical conditions necessary for H$_2$CO 6$\,$cm emission occur in early phases of massive star formation. In addition, the low detection rate of H$_2$CO masers may be a consequence of specific and short-lived physical conditions in massive star forming regions; thus, H$_2$CO masers could become a valuable astrophysical probe if the excitation mechanism of the maser were known. However, the physical mechanism for H$_2$CO 6$\,$cm maser emission is still not understood. Currently, the only quantitative model proposed to explain H$_2$CO 6$\,$cm masers is that of Boland \& de Jong (1981), where the level inversion is caused by the radio continuum from a background compact H{\small II} region. Unfortunately, this model cannot explain most of the known H$_2$CO 6$\,$cm masers (e.g., Hoffman et al. 2003). The lack of progress in the understanding of the H$_2$CO 6$\,$cm maser mechanism is in part due to the small sample of known H$_2$CO maser regions. A larger sample of H$_2$CO maser regions would not only serve to further test the Boland \& de Jong (1981) model, but also to investigate the dependence of H$_2$CO masers on a larger variety of physical environments, e.g., to check whether H$_2$CO masers are preferentially associated with shocked gas or with more quiescent molecular material that could be radiatively excited. With the goal of increasing the number of known H$_2$CO 6$\,$cm maser regions, we conducted in 2002 and 2003 a survey for H$_2$CO 6$\,$cm emission with the Arecibo telescope (Araya et al. 2004b). In the survey we observed massive star forming regions characterized by high molecular densities and low radio continuum emission at 6$\,$cm, and detected H$_2$CO 6$\,$cm maser emission toward IRAS$\,$18566+0408 (Araya et al. 2005). Motivated by our first Arecibo survey, we recently completed a second survey with the GBT and VLA\footnote{The 100$\,$m Green Bank Telescope (GBT) and the Very Large Array (VLA) are operated by the National Radio Astronomy Observatory (NRAO), a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.}. The main result of our second survey, which is reported in this letter, is the discovery of a new H$_2$CO 6$\,$cm maser toward the massive star forming region G23.71$-$0.20. \section{Observations and Data Reduction} On 2005 January 10, we conducted an H$_2$CO 6$\,$cm maser survey toward 10 massive star forming regions with the VLA in the A configuration. The sources were selected based on single dish H$_2$CO spectra that had been obtained by our group with the Arecibo and GBT telescopes (Watson et al. 2003, Sewilo et al.
2004a), and that were consistent with H$_2$CO absorption blended with emission. We detected radio continuum and H$_2$CO 6$\,$cm absorption toward several regions, and a new H$_2$CO 6$\,$cm emitter toward G23.71$-$0.20 (IRAS$\,$18324$-$0820). This latter detection is the topic of this letter. The results for the non-emitting regions will be presented in a future paper (Araya et al. {\it in prep.}). Further observations of the G23.71$-$0.20 region were conducted on 2005 April 24, with the VLA in the B configuration. These observations were intended to confirm the detection. In Table~1, we list details of the spectral line observations conducted with the VLA in the A and B configurations. The data were calibrated and imaged in AIPS following standard spectral-line reduction procedures. Only external calibration (i.e., no self-calibration) was used to obtain the complex gain corrections for data calibration. We did not detect radio continuum in individual channels; hence, continuum subtraction was not necessary. No bandpass calibration was necessary due to the narrow bandwidth. \vspace{-0.7cm} \section{Results and Discussion} \subsection{A New H$_2$CO 6$\,$cm Maser in the Galaxy} Using the VLA in the A configuration we detected H$_2$CO 6$\,$cm emission toward G23.71$-$0.20. The emission was detected in two channels (i.e., width 0.76$\,$km s$^{-1}$) with an rms noise of 4.5$\,$mJy$\,$beam$^{-1}$$\,$ in a naturally weighted map. The peak intensity is I$_{\nu} =\,$60$\,$mJy$\,$beam$^{-1}$~ at the velocity V$_{max} =\,$79.2$\,$km s$^{-1}$. The maximum intensity of the H$_2$CO emission is located at $\alpha$(J2000) = 18$^{\rm h}$35$^{\rm m}$12\fsecs37, $\delta$(J2000) = $-$08\arcdeg 17\arcmin39\farcs3. The emission feature is unresolved ($\theta_{source} <~$0\farcs4), which implies a lower limit on the brightness temperature of $\sim$30000$\,$K, i.e., the emission is due to a maser mechanism. The intensity of this new maser is similar to the 70$\,$mJy$\,$beam$^{-1}$ of the maser in G29.96$-$0.02 (Pratap, Menten, \& Snyder 1994). On the other hand, the maser in G23.71$-$0.20 is narrower in comparison with other H$_2$CO masers (e.g., the FWHM of the maser in IRAS$\,$18566+0408 is 1.6$\,$km s$^{-1}$, Araya et al. 2005, whereas the FWHM of the new maser is smaller than 0.8$\,$km s$^{-1}$). The VLA-B observations confirmed the existence of the H$_2$CO 6$\,$cm maser in G23.71$-$0.20. As in the VLA-A observations, the maser is detected in two channels (0.76$\,$km s$^{-1}$) with a signal-to-noise ratio greater than 4 (rms $\sim$ 6.0$\,$mJy$\,$beam$^{-1}$). We measured peak velocity and intensity values of V$_{max} = 79.2$$\,$km s$^{-1}$~ and I$_{\nu} =\,$44$\,$mJy$\,$beam$^{-1}$~(T$_B > 400\,$K), respectively. The H$_2$CO peak intensity measured with the VLA-B is less than the peak intensity measured with the VLA-A; however, the two values are consistent within 5$\sigma$. We combined the VLA-A and B {\it uv} data to produce the map and spectrum shown in Figure~1. The spectrum in Figure~1 shows that H$_2$CO emission may also be present in the two lower-velocity channels blueward of the peak channel. \vspace{-0.7cm} \subsection{An Overview of the G23.71$-$0.20 Massive Star Forming Region} G23.71$-$0.20 is a massive star forming region located in the Scutum constellation. Sewilo et al. (2004a) reported H110$\alpha$ emission from the region at an LSR velocity of 76.5$\,$km s$^{-1}$; hence, the H$_2$CO maser LSR velocity is coincident (within $\sim$2.7$\,$km s$^{-1}$) with the H110$\alpha$ line center.
This velocity correspondence implies that the H$_2$CO maser is associated with the G23.71$-$0.20 massive star forming region. Based on the LSR velocity of the H110$\alpha$ line, the two possible kinematic distances of G23.71$-$0.20 are 4.9$\,$kpc and 11$\,$kpc. Sewilo et al. (2004a) considered the far kinematic distance (i.e., D$_{LSR} \sim 11\,$kpc) as more likely. Becker et al. (1994) (see also White, Becker, \& Helfand 2005) detected 6$\,$cm radio continuum from the region. In Figure~2 we show their VLA-C 6$\,$cm continuum map\footnote{Multi-Array Galactic Plane Imaging Survey, http://third.ucllnl.org/gps/.}. As the figure illustrates, the H$_2$CO maser is not coincident with the strong radio continuum emission, but rather lies in an area of more diffuse emission. Apart from the 6$\,$cm detection by White et al. (2005), no other high-angular resolution detection of the radio continuum at frequencies above 2$\,$GHz\footnote{The region was detected in VLA L-band (20$\,$cm) surveys: NVSS (Condon et al. 1998), and the Multi-Array Galactic Plane Imaging Survey (White et al. 2005).} is available. Future high-angular resolution observations of the radio continuum of the region at several frequencies are required to test whether the H$_2$CO maser in G23.71$-$0.20 can be explained by the Boland \& de Jong (1981) model. The radio continuum source in G23.71$-$0.20 is coincident with IRAS$\,$18324$-$0820, which has the characteristic far-infrared color of ultra-compact H{\small II} regions (Sewilo et al. 2004a). Assuming a distance of 11$\,$kpc, isotropic emission, and following the formulation of Casoli et al. (1986), we estimate a bolometric luminosity of $\sim 2.4\times 10^5$$\,$L$_{\odot}$, which corresponds to the luminosity of an $\sim$O6 ZAMS star (Panagia 1973). High angular resolution studies of molecular line transitions toward G23.71$-$0.20 are as scarce as the radio continuum observations. The only available interferometric molecular data of the region are from Walsh et al. (1998). They detected seven CH$_3$OH 6.7$\,$GHz maser spots in the region. In Figure~2, we show the positions of these masers. The CH$_3$OH 6.7$\,$GHz masers are all coincident with the H$_2$CO 6$\,$cm maser within $\sim$1$\arcsec$, which is the absolute positional accuracy of the CH$_3$OH masers (Walsh et al. 1998). The masers are spread over a velocity range from 74.9 to 81.4$\,$km s$^{-1}$, which encompasses the velocity of the H$_2$CO maser (79.2$\,$km s$^{-1}$). G23.71$-$0.20 has been detected in several single dish surveys for CH$_3$OH masers: Blaszkiewicz \& Kus (2004) (12.2$\,$GHz CH$_3$OH masers); Schutte et al. (1993), Slysh et al. (1999), Szymczak, Hrynek, \& Kus (2000) (6.7$\,$GHz CH$_3$OH masers). In all cases, the CH$_3$OH masers show velocities similar to that of the H$_2$CO maser. G23.71$-$0.20 was also observed in CS J=2-1 by Bronfman, Nyman, \& May (1996). They detected CS J=2-1 emission at a velocity of 68.3$\,$km s$^{-1}$, which is approximately 10$\,$km s$^{-1}$~lower than that of the H$_2$CO maser. Finally, Han et al. (1998) conducted a survey of 22$\,$GHz H$_2$O masers at the Purple Mountain Observatory, and reported the detection of a 35.6$\,$Jy H$_2$O maser at an LSR velocity of $-$40.3$\,$km s$^{-1}$. This radial velocity is very different from the velocity of the H$_2$CO maser (79.2$\,$km s$^{-1}$); thus, the association of this H$_2$O maser with the H$_2$CO and CH$_3$OH masers is unclear.
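As an aside, the order of magnitude of the brightness temperature lower limit quoted in section 3.1 can be recovered from the published numbers via the Rayleigh-Jeans relation; the sketch below (in Python; we model the unresolved source as a uniform disk whose diameter equals the $0\farcs4$ upper limit, an assumption on our part since the text does not state a convention) returns $\sim$3$\times 10^4\,$K:
\begin{verbatim}
import math

def t_brightness(S_jy, freq_ghz, theta_arcsec):
    # Rayleigh-Jeans brightness temperature for flux density S_jy (Jy)
    # from a uniform disk of angular diameter theta_arcsec.
    c, k = 2.99792458e8, 1.380649e-23           # SI constants
    S = S_jy * 1e-26                            # Jy -> W m^-2 Hz^-1
    lam = c / (freq_ghz * 1e9)
    theta = theta_arcsec * math.pi / 648000.0   # arcsec -> rad
    omega = math.pi * (theta / 2.0) ** 2        # disk solid angle
    return S * lam ** 2 / (2.0 * k * omega)

# 60 mJy in a source smaller than 0.4" at the 4.83 GHz H2CO line:
print(t_brightness(0.060, 4.83, 0.4))           # ~2.8e4 K, a lower limit
\end{verbatim}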
We searched the MSX and Spitzer/IRAC GLIMPSE images for infrared (IR) emission in the vicinity of the H$_2$CO maser position. The 21.3 $\mu$m emission seen in the MSX E band (angular resolution 18$''$) partly traces the two main 6$\,$cm continuum emission regions to the SE and SW of the maser position, but also shows extended emission near the maser position. The Spitzer/IRAC GLIMPSE data show a compact IR source located within $0\farcs3$ of the H$_2$CO maser position. This source is detected in all four IRAC bands. In Figure~3 we show a color--coded image of the Spitzer data (blue is the average of the 3.6 and 4.5 $\mu$m data, green is 5.8 $\mu$m, and red is 8.0 $\mu$m), and we mark the position of the H$_2$CO maser with a cross. While the [3.6]--[4.5] color of 2.1 is indicative of a deeply embedded object, the [5.8]--[8.0] color of 0.1 is peculiar in combination with the first color (see Indebetouw et al. 2006). The [5.8]--[8.0] color could be caused by unresolved source multiplicity and/or the presence of PAH emission features. In addition, strong and broad $\sim$10 $\mu$m silicate absorption could be affecting the 8.0 $\mu$m band. High angular resolution infrared observations are underway to further study this IR source. \subsection{Implication for the Nature of H$_2$CO 6$\,$cm Masers} The inversion mechanism of H$_2$CO 6$\,$cm masers is unclear (e.g., Mehringer, Goss, \& Palmer 1994). Besides the model by Boland \& de Jong (1981), which is based on radiative excitation by radio continuum, Hoffman et al. (2003) suggested that H$_2$CO 6$\,$cm masers could be collisionally pumped. The new detection of an H$_2$CO 6$\,$cm maser in G23.71$-$0.20 and the coincidence of the maser with a compact Spitzer source (Figure~3) suggest another possibility: H$_2$CO 6$\,$cm masers could be radiatively pumped by IR photons in massive star forming regions. A connection between FIR radiation and H$_2$CO 6$\,$cm masers has been noticed before in megamaser galaxies; e.g., the data by Araya et al. (2004a) suggest a correlation between the luminosity of H$_2$CO megamaser lines and FIR luminosity (see also Baan, Haschick, \& Uglesich 1993). In addition, Litvak (1970) pointed out that absorption of infrared radiation, in conjunction with large H$_2$CO optical depths and an H$_2$CO density comparable to that of OH maser regions, may result in H$_2$CO maser emission. Quantitative models have been developed to explain OH masers via FIR radiative excitation (e.g., Henkel, G\"usten, \& Baan 1987; Cesaroni \& Walmsley 1991); however, to date there exist no equivalent models exploring the pumping of the observed H$_2$CO 6$\,$cm masers by IR photons. The idea of FIR pumping of H$_2$CO masers is compelling; however, a quantitative test of this hypothesis is not practical, since (as mentioned in section 3.2) the available data toward the H$_2$CO maser in G23.71$-$0.20 do not provide sufficient constraints to carry out an analysis of the inversion mechanism. Specifically, the FIR fluxes measured by IRAS toward G23.71$-$0.20 are likely to trace the extended IR region to the SW of the maser position (see Figure~3) and not the FIR properties of the compact Spitzer source. In the case of G23.71$-$0.20, the other proposed pumping mechanisms cannot currently be tested either. The radio continuum at the position of the maser is blended with emission from two nearby continuum regions; thus, neither the gain of the maser nor the emission measure of the ionized gas at the maser position can be determined.
In addition, the thermal molecular data (e.g., CS) are insufficient to establish whether the H$_2$CO maser is associated with a molecular clump or outflow, and we also lack high-quality information about shock tracers like H$_2$O masers that may be associated with H$_2$CO masers (e.g., Araya et al. 2005). On the other hand, the detection of maser emission in G23.71$-$0.20 supports the trend that H$_2$CO masers reside very near massive young stellar objects (YSOs), i.e., closer than a few thousand AU. For example, the maser in G29.96$-$0.02 is coincident with a hot molecular core (Pratap et al. 1994) that probably contains a massive circumstellar disk (Olmi et al. 2003); the maser in IRAS$\,$18566+0408 is coincident with a 2MASS and a bright Spitzer/GLIMPSE source (Araya et al. 2005, 2006 in prep.) and the central continuum object has been classified as a massive disk candidate (Zhang 2005). Also, the maser in NGC$\,$7538 IRS1 (which shows an H$_2$CO NE$-$SW velocity gradient, Hoffman et al. 2003) is located at the position of a hypercompact H{$\,$\small II} region candidate (Sewilo et al. 2004b) that harbors a possible circumstellar disk oriented in a NE$-$SW direction (De Buizer \& Minier 2005). The close association of H$_2$CO masers with massive YSOs is in contrast to other molecular masers (e.g., 44$\,$GHz CH$_3$OH masers, Kurtz, Hofner, \& Vargas-\'Alvarez 2004) that can be found at larger distances from massive YSOs and are probably related to the interaction of jets and outflows with the surrounding material of the molecular cloud. If the H$_2$CO masers are associated with shocked gas, then they might trace the regions where accretion from a mass reservoir onto a massive circumstellar disk occurs. \section{Summary} Using the VLA in the A configuration we detected H$_2$CO emission from the massive star forming region G23.71$-$0.20. Based on the brightness temperature limit of the detection (T$_B > 30000\,$K), the line must be due to a maser mechanism; i.e., this source is the fifth region in the Galaxy where H$_2$CO 6$\,$cm maser emission has been detected. The FWHM of the line is $< 0.8$$\,$km s$^{-1}$, i.e., narrower than other H$_2$CO masers detected with the VLA. The maser was independently confirmed by VLA-B observations of the region, which were conducted approximately three months after the initial detection. The LSR velocity and position of the maser closely correspond to those of CH$_3$OH 6.7$\,$GHz masers detected by Walsh et al. (1998) with the Australia Telescope Compact Array. We found a compact Spitzer/IRAC IR source, possibly a deeply embedded young stellar object, coincident with the H$_2$CO maser. The detection of H$_2$CO maser emission in G23.71$-$0.20 supports the trend that H$_2$CO 6$\,$cm masers are located very near massive YSOs. \acknowledgments Support for this work was provided by the NSF through award GSSP 05-0006 from the NRAO. Part of the observations presented in this paper were conducted as part of a VLA student project directed by D. Westpfahl at NMT. P. H. acknowledges support from NSF grant AST-0454665. H. L. was supported by a Postdoc stipend of the German Max Planck Society. We also acknowledge an anonymous referee for comments that improved the manuscript. This research has made use of NASA's Astrophysics Data System and is based in part on observations made with the Spitzer Space Telescope, operated by JPL, CalTech under contract with NASA.
\section{Introduction} \medskip The class of affine semigroup rings is rich with examples that combine the flavors of convex geometry and commutative algebra. The structure of the semigroup ring $K[S]$ is intimately related to the structure of the affine semigroup $S$ and the cone $\ensuremath{\mathrm{pos}}(S)$ spanned by $S$. For example, it is well known that $K[S]$ is normal if and only if $S$ contains all integral points of $\ensuremath{\mathrm{pos}}(S)$ (see \cite{BH}). Normal affine semigroup rings are Cohen-Macaulay by a theorem of Hochster \cite{Hoch}. Ishida \cite{Ish} characterized the S$_2$-ness of $K[S]$ in terms of $S$ and the facets of $\ensuremath{\mathrm{pos}}(S)$. In \cite{GW1} Goto and Watanabe announced a characterization of those affine semigroups $S$ for which $K[S]$ is Cohen-Macaulay; Hoa and Trung gave a corrected characterization in terms of both $S$ and the cone spanned by $S$ over the rational numbers in \cite{HT} and \cite{HT2}. In an earlier paper \cite{Vi} the author characterized those affine semigroup rings which satisfy Serre's condition R$_1$. In this note we characterize those affine semigroup rings $K[S]$ over an arbitrary field $K$ which satisfy condition $\mathrm{R}_{\ell}$ of Serre. Our characterization is in terms of the face lattice of the positive cone $\ensuremath{\mathrm{pos}}(S)$ of $S$. We start by recalling some basic facts about the faces of $\ensuremath{\mathrm{pos}}(S)$ and consequences for the monomial primes of $K[S]$. After proving our characterization we turn our attention to the Rees algebras of a special class of monomial ideals in a polynomial ring over a field. We may view these as affine semigroup rings; the associated affine semigroups are in some sense complementary to the class of polytopal semigroups introduced and studied by Bruns, Gubeladze, and Trung in \cite{BG} and \cite{BGT}. In this special case, most of the characterizing criteria are always satisfied. We give examples of nonnormal affine semigroup rings that satisfy $\mathrm{R}_2$. For the fundamentals on convex geometry we refer the reader to \cite{Ew}, \cite{Grun}, or \cite{Zieg}. For background on monoids and semigroup rings one can consult \cite{Gil}. We make the standard assumptions that an affine semigroup $S$ is a subsemigroup of $\mathbb{Z}^n$ and that $\ensuremath{\mathrm{grp}}(S) = \mathbb{Z}^n$ for some positive integer $n$. Consider the \textit{positive cone} $\ensuremath{\mathrm{pos}}(S)$ of $S$ defined by $$\ensuremath{\mathrm{pos}}(S) = \{ c_1\boldsymbol{\alpha}_1 + c_2\boldsymbol{\alpha}_2 + \cdots + c_m\boldsymbol{\alpha}_m \mid m \ge 0, \boldsymbol{\alpha}_i \in S, c_i \in \mathbb{R}_{\ge} \; (i = 1, \ldots, m)\},$$ where $\mathbb{R}_{\ge}$ denotes the set of nonnegative real numbers. Recall that $\ensuremath{\mathrm{pos}}(S)$ is a polyhedral cone, that is, $\ensuremath{\mathrm{pos}}(S)$ is the intersection of finitely many positive halfspaces $H_i^+ = \{ \boldsymbol{\alpha} \in \mathbb{R}^n \mid \sigma_i(\boldsymbol{\alpha}) \ge 0\}$, where $\sigma_i$ is a linear form on $\mathbb{R}^n$. Since $S$ is a finitely generated subsemigroup of $\mathbb{Z}^n$ we may assume that each $\sigma_i$ has rational coefficients, that is, $\ensuremath{\mathrm{pos}}(S)$ is a \textit{rational polyhedral cone}. After scaling we may assume that the coefficients of each $\sigma_i$ are relatively prime integers; we call such a form a \textit{primitive linear form}.
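Concretely, the primitive representative of a rational supporting form is obtained by clearing denominators and then dividing by the greatest common divisor of the resulting integer coefficients; a small illustrative sketch (in Python; the function name is ours):
\begin{verbatim}
from fractions import Fraction
from functools import reduce
from math import gcd

def primitive_form(coeffs):
    # Scale a linear form with rational coefficients so that its
    # coefficients become relatively prime integers.
    fracs = [Fraction(c) for c in coeffs]
    common = reduce(lambda a, b: a * b // gcd(a, b),
                    (f.denominator for f in fracs), 1)
    ints = [int(f * common) for f in fracs]
    g = reduce(gcd, (abs(i) for i in ints))
    return [i // g for i in ints]

# e.g. the form (3/2)a - (1/2)b - (1/2)c scales to 3a - b - c:
print(primitive_form(["3/2", "-1/2", "-1/2"]))   # [3, -1, -1]
\end{verbatim}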
Recall that a \textit{supporting hyperplane} of $\ensuremath{\mathrm{pos}}(S)$ is a hyperplane $H$ such that $\ensuremath{\mathrm{pos}}(S) \cap H \ne \emptyset$ and $\ensuremath{\mathrm{pos}}(S)$ lies on one side of $H$. A \textit{face} of $\ensuremath{\mathrm{pos}}(S)$ is the intersection of $\ensuremath{\mathrm{pos}}(S)$ and a supporting hyperplane. The faces of $\ensuremath{\mathrm{pos}}(S)$ are again rational polyhedral cones. By the \textit{dimension} $\dim(F)$ of $F$ we mean the dimension of the vector space spanned by $F$ and by the \textit{codimension} $\mathrm{codim}(F)$ we mean $n - \dim(F)$. A face of codimension one is called a \textit{facet}. If we represent a polyhedral cone $C$ as an irredundant intersection of positive halfspaces $C= \cap_{i = 1}^r H_i^+$ then $F_1 = C \cap H_1, \ldots, F_r = C \cap H_r$ are the facets of $C$ (see \cite[Section 2.6]{Grun}). By a \textit{monomial} in $\mathcal{R}=K[S]$ we mean an element of the form $x^{\boldsymbol{\alpha}}$ and by a \textit{monomial ideal} we mean an ideal generated by monomials. There is an order-reversing bijective correspondence between the nonempty faces of $\ensuremath{\mathrm{pos}}(S)$ and the monomial primes of $\mathcal{R}$. Indeed, the monomial prime of $\mathcal{R}$ corresponding to the nonempty face $F$ of $\ensuremath{\mathrm{pos}}(S)$ is $P_F = (x^{\boldsymbol{\alpha}} \mid \boldsymbol{\alpha} \in S \setminus F )$ (e.g. see \cite{BH}). We let $\mathfrak{m}$ denote the ideal of $K[S]$ generated by all noninvertible monomials; it is the maximal monomial prime of $K[S]$. Notice that we are considering the zero ideal to be monomial. Finally, we let $\mathbf{e}_1, \ldots , \mathbf{e}_n$ denote the standard basis vectors for $\mathbb{R}^n$. The maximal proper faces of a polyhedral cone $C$ are precisely the facets of $C$. If $P = P_F$ is the monomial prime of height $d$ in an affine semigroup ring $K[S]$ corresponding to the face $F$ of $\ensuremath{\mathrm{pos}}(S)$, then there exists a chain of monomial primes of length $d$ descending from $P$ (see \cite[Prop. 1.2.1]{GW2}) and $\mathrm{ht}(P) = \mathrm{codim}(F)$. \section{Serre's Regularity Condition for Affine Semigroup Rings} We start this section with a basic result about $\mathbb{Z}^n$-graded rings, which is crucial for our purposes. A version for $\mathbb{Z}$-graded rings is well known (e.g. see \cite[Ex. 2.24]{BH}). Then we specialize to the case of an affine semigroup ring defined over a field. Recall that if $P$ is a prime ideal of a $\mathbb{Z}^n$-graded ring $R$ then $P^*$ denotes the largest homogeneous ideal of $R$ that is contained in $P$ and $R_{(P)}$ the homogeneous localization of $R$ at $P$, i.e., $R_{(P)} = S^{-1}R$, where $S$ is the set of homogeneous elements of $R$ that are not in $P$. The graded ring $R_{(P)}$ is an example of a ${}^*$local ring, that is, a graded ring with a unique maximal homogeneous ideal. \begin{prop} \label{prop regular graded ring} Let $R$ be a Noetherian $\mathbb{Z}^n$-graded ring. Then, $R$ is regular if and only if $R_{(P)}$ is regular, for every homogeneous prime ideal $P$ of $R$. Furthermore, for a ${}^*$local ring $R$ with unique maximal homogeneous ideal $\mathfrak{m}$, the ring $R$ is regular if and only if $R_{\mathfrak{m}}$ is regular. \end{prop} \begin{proof} First suppose that $R$ is regular. Let $P$ be a homogeneous prime ideal. Then, $R_P$ is a regular local ring. We must show that the ${}^*$local ring $\mathcal{R} := R_{(P)}$ is regular.
If $P\mathcal{R}$ is a maximal ideal of $\mathcal{R}$ then $R_{(P)} = R_P$ is a regular local ring. So assume $P\mathcal{R}$ is not maximal and choose a prime ideal $Q$ of $R$ such that $Q^*= P$. By assumption, $R_P = R_{Q^*} \cong \mathcal{R}_{Q^*\mathcal{R}}$ is a regular local ring. Let $\mathcal{P}$ be any prime ideal of $\mathcal{R}$. Then $\mathcal{P}^* \subseteq Q^*\mathcal{R}$, so $\mathcal{R}_{\mathcal{P}^*}$ is regular, and hence $\mathcal{R}_{\mathcal{P}}$ is regular by \cite[Prop. 1.2.5]{GW2}. Now suppose that all homogeneous localizations of $R$ at homogeneous primes are regular. Let $P$ be any prime. Then $R_{P^*}$ is a regular local ring since it is a localization of $R_{(P^*)}$. Hence $R_P$ is a regular local ring by \cite[Prop. 1.2.5]{GW2}. Thus $R$ is a regular ring. Now suppose $(R, \mathfrak{m})$ is a ${}^*$local ring and that $R_{\mathfrak{m}}$ is regular. Let $P$ be any homogeneous prime ideal. Then, $P \subseteq \mathfrak{m}$ implies $R_P$ is regular. Since $R = R_{(\mathfrak{m})}$ we may deduce that $R$ is regular by the first part of the proof. The other implication is immediate. \end{proof} Now we limit our attention to an affine semigroup ring $\mathcal{R} = K[S]$ over a field $K$. We let $\mathfrak{m}$ denote the ideal generated by the noninvertible monomials in $\mathcal{R}$. \begin{notn} {\rm Notice that for elements $\alpha, \beta$ of the affine semigroup $S$ the monomial quotient $x^{\alpha}/x^{ \beta} \in \mathcal{R}_{\mathfrak{m}}$ if and only if $x^{\alpha}/x^{ \beta} \in \mathcal{R}$. One way to see this is to use the fact that the colon ideal $(x^{\beta}\mathcal{R}: x^{\alpha})$ is monomial. Now let $P$ be a monomial prime of $\mathcal{R}$ corresponding to the nonempty face $F$ of $\ensuremath{\mathrm{pos}}(S)$. Replacing $S$ by $S_F := S - S \cap F$ and $\mathcal{R}$ by $K[S_F] \cong K[S]_{(P)}$, where $K[S]_{(P)}$ denotes the homogeneous localization at $P$, we see that $$K[\ensuremath{\mathrm{grp}}(S)] \cap K[S]_P = K[S]_{(P)}.$$ We will identify $\ensuremath{\mathrm{grp}}(S \cap F)$ with the group of invertible monomials in $\mathcal{R}_P$ and refer to $S_F$ as the localization of $S$ at $F$. Let $S_0$ denote the subgroup of invertible elements in the affine semigroup $S$ and let $\widetilde{S}$ denote the quotient monoid $S/S_0$. It is well known that $K[S]$ is regular if and only if $S$ is the direct sum of a free abelian group $\mathbb{Z}^{\ell}$ and a free abelian monoid $\mathbb{N}^k$ (e.g. see \cite[Ex. 6.1.11]{BH}). We shall need a slight variant of this result whose proof we omit.} \end{notn} \begin{prop} \label{prop: K[S] regular} The affine semigroup ring $K[S]$ is regular if and only if $\widetilde{S} \cong \mathbb{N}^k$, where $k = \dim(K[S]_{\mathfrak{m}})$. \end{prop} Suppose the affine semigroup ring is regular. Notice that if the images of the elements $\gamma_1, \ldots , \gamma_k \in S \setminus S_0$ form a free basis for $\widetilde{S}$, then the monomials $x^{\gamma_1} , \ldots , x^{\gamma_k}$ form a regular system of parameters for $K[S]_{\mathfrak{m}}$. The local version of the above proposition is the following. \begin{prop} Let $P$ be a prime ideal in the affine semigroup ring $\mathcal{R} = K[S]$ over a field $K$ corresponding to the face $F$ of $\ensuremath{\mathrm{pos}}(S)$. Then, $\mathcal{R}_P$ is regular if and only if the quotient $\widetilde{S_F}$ is free. \end{prop} \begin{proof} Notice that the group of units in $S_F$ is $\ensuremath{\mathrm{grp}}(S \cap F)$.
By Proposition \ref{prop regular graded ring}, $\mathcal{R}_P$ is regular if and only if the homogeneous localization $\mathcal{R}_{(P)} \cong K[S_F]$ is regular. The result now follows immediately from Proposition \ref{prop: K[S] regular}. \end{proof} We now turn our attention to an alternate characterization of the regularity condition in the spirit of a result in \cite{Vi}. One advantage of the alternate characterization is that it can be easily checked using the program NORMALIZ \cite{Norm}. We first prove some auxiliary results. \begin{lemma}\label{lemma 1} Let $F = \ensuremath{\mathrm{pos}}(S) \cap H$ be a face of the positive cone of the affine semigroup $S$ and $\gamma_1 , \ldots , \gamma_k \in S$. Suppose that $\widetilde{S_F}$ is a free abelian monoid and the images of $\gamma_1 , \ldots , \gamma_k $ form a basis. Let $P$ denote the monomial prime of $K[S]$ corresponding to $F$. The following assertions hold. \begin{enumerate} \item $F$ is contained in precisely $k$ facets $F_i = \ensuremath{\mathrm{pos}}(S) \cap H_i$ of $\ensuremath{\mathrm{pos}}(S)$, and hence $F = F_1 \cap \cdots \cap F_k$; \item $\sigma_i(\gamma_j) = \delta_{i j}$ for all $1 \le i,j \le k$, where $\sigma_i$ is the primitive linear form corresponding to $ H_i$; and \item $\ensuremath{\mathrm{grp}}(S \cap F) = \ensuremath{\mathrm{grp}}(S) \cap H_1 \cap \cdots \cap H_k$. \end{enumerate} \end{lemma} \begin{proof} (1) Since $x^{\gamma_1} , \ldots , x^{\gamma_k}$ is a regular system of parameters for $\mathcal{R}_P$, we know that $x^{\gamma_i}\mathcal{R}_P$ is a height one prime of $\mathcal{R}_P$. Thus there exist facets $F_i$ of $\ensuremath{\mathrm{pos}}(S)$ corresponding to the height one primes $P_i$ of $\mathcal{R}$ such that $x^{\gamma_i}\mathcal{R}_P = P_i\mathcal{R}_P \, (i = 1, \ldots , k)$. We have $P_1 + \cdots + P_k = P$ since they are both primes contained in $P$ and they are equal after localizing at $P$. Suppose $G$ is a facet of $\ensuremath{\mathrm{pos}}(S)$ containing $F$ and let $Q = P_G$. Then, $Q\mathcal{R}_P = x^{\delta}\mathcal{R}_P$ for some monomial $x^{\delta}$, since $\mathcal{R}_P$ is a UFD. Since $x^{\delta} \in P_1 + \cdots + P_k$, we must have $x^{\delta} \in P_i$ for some $i$. Hence $Q = P_i$. So $F_1, \ldots , F_k$ are precisely the facets of $\ensuremath{\mathrm{pos}}(S)$ containing $F$ and $P_1, \ldots , P_k$ are precisely the height one primes contained in $P$. Thus $F = F_1 \cap \cdots \cap F_k$. (2) By construction, $\sigma_i(\gamma_i) > 0$. Suppose that $\sigma_i(\gamma_j) > 0$ for some $j \ne i$. Then $x^{\gamma_j} \in P_i$, which implies $P_j\mathcal{R}_P \subseteq P_i\mathcal{R}_P$, which is absurd. Hence $\sigma_i(\gamma_j) = 0$ for $ i \ne j$. Since $\sigma_i$ is primitive, we must have $\sigma_i(\gamma_i) =1$. (3) Suppose that $\alpha, \beta \in S$ and $\alpha - \beta \in H_1 \cap \cdots \cap H_k$. There exist nonnegative integers $a_1, \ldots , a_k, b_1, \ldots, b_k $ such that $\alpha - \sum a_i\gamma_i, \beta - \sum b_i\gamma_i \in \ensuremath{\mathrm{grp}}(S \cap F)$. Since $\alpha - \beta \in H_1 \cap \cdots \cap H_k$, we must have $a_i = b_i \; (i = 1 , \ldots , k)$ by (2). Hence $\alpha - \beta \in \ensuremath{\mathrm{grp}}(S \cap F)$. Since the opposite containment is clear we have equality of groups.
\end{proof} \begin{lemma}\label{lemma 2} Let $F$ be a face of $\ensuremath{\mathrm{pos}}(S)$ that is the intersection of $k$ facets $F_1 = \ensuremath{\mathrm{pos}}(S) \cap H_1, \ldots , F_k = \ensuremath{\mathrm{pos}}(S) \cap H_k$ of $\ensuremath{\mathrm{pos}}(S)$ and let $\sigma_i$ be the primitive linear form associated with $H_i \; (i = 1, \ldots , k)$. Suppose that \begin{enumerate} \item there exist $\gamma_1 , \ldots , \gamma_k \in S$ such that $\sigma_i(\gamma_j) = \delta_{i j} \mbox{ for all } 1 \le i, j \le k$; and \item $\ensuremath{\mathrm{grp}}(S \cap F) = \ensuremath{\mathrm{grp}}(S) \cap H_1 \cap \cdots \cap H_k$. \end{enumerate} Then, $\widetilde{S_F}$ is a free monoid with basis given by the images of $\gamma_1 , \ldots , \gamma_k$ in the quotient monoid. \end{lemma} \begin{proof} The proof is straightforward. Suppose $\alpha \in S$ and $\sigma_i(\alpha) = a_i \; (i = 1, \ldots , k)$. Then $\alpha - (\sum a_i \gamma_i) \in \ensuremath{\mathrm{grp}}(S) \cap H_1 \cap \cdots \cap H_k = \ensuremath{\mathrm{grp}}(S \cap F)$ implies that the image of $\sum a_i \gamma_i$ in the quotient monoid is $\overline{\alpha}$. Suppose that $a_i , b_i \in \mathbb{N}$ and $\sum a_i \overline{\gamma_i} = \sum b_i \overline{\gamma_i}$. Then there exists $\mu \in \ensuremath{\mathrm{grp}}(S \cap F)$ such that $\sum a_i\gamma_i + \mu = \sum b_i \gamma_i$. By condition (1) we must have $a_i = b_i \; (i = 1, \ldots , k)$. Hence $\widetilde{S_F}$ is free with the asserted basis. \end{proof} Combining the previous two lemmas we immediately obtain the following characterization. \begin{theorem}\label{thm: R_k} An affine semigroup ring $\mathcal{R} = K[S]$ satisfies condition $\mathrm{R}_{\ell}$ of Serre if and only if for each positive integer $k \le \ell$ and any face $F$ of $\ensuremath{\mathrm{pos}}(S)$ such that $\mathrm{ht}(P_F) = k$ there exist facets $F_1, F_2, \ldots, F_k$ of $\ensuremath{\mathrm{pos}}(S)$ such that $F = F_1 \cap F_2 \cap \cdots \cap F_k$ and the following conditions hold: \begin{enumerate} \item there exist $\boldsymbol{\gamma}_1, \ldots, \boldsymbol{\gamma}_k \in S$ such that $\sigma_i(\boldsymbol{\gamma}_j) = \delta_{i j} \mbox{ for all } 1 \le i, j \le k$; and \item $\ensuremath{\mathrm{grp}}(S \cap F_1 \cap \cdots \cap F_k) = \ensuremath{\mathrm{grp}}(S) \cap H_1 \cap \cdots \cap H_k$. \end{enumerate} \end{theorem} We end this section with an example of an affine semigroup ring that satisfies condition R$_2$ of Serre but does not satisfy condition S$_2$. It was inspired by an example suggested to the author by I. Swanson. \begin{example} Suppose $K$ is a field and consider the semigroup $S$ of $\mathbb{Z}^3$ generated by the following vectors: $$(1,0,0), (1,3,0), (1,0,3), (1,1,0), (1,2,0), (1,0,1), (1,0,2), (1,2,1), \mbox{and } (1,1,2).$$ Notice that $\ensuremath{\mathrm{grp}}(S) = \mathbb{Z}^3$ and that $\ensuremath{\mathrm{pos}}(S) = H_2^+ \cap H_3^+ \cap H_4^+$, where $H_2$ and $H_3$ are the indicated coordinate hyperplanes and $H_4$ is defined by the primitive linear form $\sigma$, where $\sigma(a,b,c) = 3a -b -c$. Thus $\ensuremath{\mathrm{pos}}(S)$ has 3 facets $F_2, F_3, F_4$ and 3 codimension 2 faces $F_{2 3} = F_2 \cap F_3, F_{2 4} = F_2 \cap F_4, F_{3 4} = F_3 \cap F_4$. One checks that $\ensuremath{\mathrm{grp}}(S \cap F_2) = \ensuremath{\mathrm{grp}}(\{ (1,0,0), (1,0,1) \}) = \ensuremath{\mathrm{grp}}(S) \cap H_2$ and by symmetry, $\ensuremath{\mathrm{grp}}(S \cap F_3) = \ensuremath{\mathrm{grp}}(S) \cap H_3$.
We also have $\ensuremath{\mathrm{grp}}(S \cap F_4) = \ensuremath{\mathrm{grp}}(\{ (1,3,0), (1,0,3), (1,2,1), (1,1,2)\}) = \ensuremath{\mathrm{grp}}(\{ (1,0,3), (0,1,-1)\}) = \ensuremath{\mathrm{grp}}(S) \cap H_4$. One can also verify that $\ensuremath{\mathrm{grp}}(S \cap F_2 \cap F_3) = \ensuremath{\mathrm{grp}}(\{ (1,0,0) \}) = \ensuremath{\mathrm{grp}}(S) \cap H_2 \cap H_3$, $\ensuremath{\mathrm{grp}}(S \cap F_2 \cap F_4) = \ensuremath{\mathrm{grp}}(\{ (1,0,3) \}) = \ensuremath{\mathrm{grp}}(S) \cap H_2 \cap H_4$, and by symmetry $\ensuremath{\mathrm{grp}}(S \cap F_3 \cap F_4) = \ensuremath{\mathrm{grp}}(S) \cap H_3 \cap H_4$. So the group conditions for the affine semigroup ring $K[S]$ to satisfy R$_2$ are satisfied. For each codimension 2 face $F_{ij}$ we must produce 2 vectors $\boldsymbol{\gamma}_i, \boldsymbol{\gamma}_j$ satisfying $\sigma_i(\boldsymbol{\gamma}_j) = \delta_{i j}$, where $\sigma_2, \sigma_3$ are the coordinate functions and $\sigma_4 = \sigma$ is defined above. The vectors are given below. $$\begin{array}{ccc} (i,j) &\boldsymbol{\gamma}_i & \boldsymbol{\gamma}_j \\ (2,3) & (1,1,0) & (1,0,1) \\ (2,4) & (1,1,2) & (1,0,2) \\ (3,4) & (1,2,1) & (1,2,0) \end{array}$$ Thus condition (1) in Theorem \ref{thm: R_k} is satisfied and we may conclude that $K[S]$ is regular in codimension 2. However, as we shall now see, $K[S]$ is not normal so cannot possibly satisfy S$_2$. Notice that $(1,1,1) = \frac{1}{3}(1,0,0) + \frac{1}{3}(1,3,0) + \frac{1}{3}(1,0,3) \Rightarrow (1,1,1) \in \ensuremath{\mathrm{pos}}(S)$. However, $(1,1,1) \notin S$ and hence $K[S]$ is not normal. There is another, more transparent way to see that $K[S]$ is not normal. Consider the injective homomorphism $\varphi: \mathbb{Z}^3 \rightarrow \mathbb{Z}^3$ defined by $\varphi(\mathbf{e}_1) = (3,0,0), \\ \varphi(\mathbf{e}_2) = (-1,1,0), \mbox{ and }\varphi(\mathbf{e}_3) = (-1,0,1)$. The image of $S$ is the semigroup $S'$ generated by the vectors $$(3,0,0), (0,3,0), (0,0,3), (2,1,0), (1,2,0), (2, 0,1), (1,0,2), (0,2,1), (0,1,2).$$ Notice that we have listed all 3-tuples of non-negative integers whose components sum to 3 except $(1,1,1)$. This isomorphism of semigroups induces an isomorphism of $K[S]$ and $K[S'] \cong K[x^3, y^3, z^3, x^2y, xy^2, x^2z, xz^2, y^2z, yz^2]$. The latter has normalization \\ $K[x^3, y^3, z^3, xyz, x^2y, xy^2, x^2z, xz^2, y^2z, yz^2]$, which is the 3rd Veronese subring of \\ $K[x,y,z]$. Hence $K[S]$ is not normal. \end{example} \section{The Rees Algebras of a Special Class of Monomial Ideals} We now look at the Rees algebras of a special class of integrally closed monomial ideals. \begin{notn}\label{blam} { \rm Let $\boldsymbol{\lambda} = (\ensuremath{{\lambda}}_1, \ldots, \ensuremath{{\lambda}}_n)$ be a tuple of positive integers and \\ $J = J(\boldsymbol{\lambda}) = (x_1^{\ensuremath{{\lambda}}_1}, \ldots, x_n^{\ensuremath{{\lambda}}_n})$, where the ideal is taken inside the polynomial ring \\$K[\xvec{n}] =: R$, and $I = I(\boldsymbol{\lambda}) = \bar{J}$. Thus $I$ is an integrally closed monomial ideal with minimal monomial reduction $J$. Let $L = \ensuremath{\mathrm{lcm}}(\ensuremath{{\lambda}}_1, \ldots , \ensuremath{{\lambda}}_n)$, $\omega_i = L/\ensuremath{{\lambda}}_i \; (i = 1, \ldots , n)$, and $\boldsymbol{\omega} = (\omega_1 , \ldots , \omega_n)$.
Notice that $L = dw$, where $d = \gcd(\ensuremath{{\lambda}}_1 , \ldots , \ensuremath{{\lambda}}_n)$.} \end{notn} We will characterize those monomial ideals $I(\boldsymbol{\lambda})$ whose Rees algebras satisfy $\mathrm{R}_{\ell}$ for some $\ell < n$. Observe that the Rees algebra $R[It]$ of a monomial ideal $I$ can always be identified with an affine semigroup ring over $K$. Namely, if $I = (x^{\boldsymbol{\beta}_1}, \ldots, x^{\boldsymbol{\beta}_r})$ and $S(I) = \langle (\mathbf{e}_1,0), \ldots, (\mathbf{e}_n,0),(\boldsymbol{\beta}_1,1), \ldots, (\boldsymbol{\beta}_r,1) \rangle \subseteq \mathbb{N}^{n+1}$, then $R[It] \cong K[S(I)]$. In case $\mathcal{R} := R[It]$ is the Rees algebra of $I = I(\boldsymbol{\lambda})$, the condition that every height $k$ monomial prime corresponds to an intersection of precisely $k$ facets and condition (2) of Theorem \ref{thm: R_k} hold automatically, as we shall see below. First we describe the height $k$ monomial primes of $\mathcal{R}$. The facets $F_{\sigma}, F_1, \ldots , F_{n+1}$ of $\ensuremath{\mathrm{pos}}(S)$ are cut out by the supporting hyperplanes \\ $H_{\sigma}, H_1, \ldots, H_{n+1}$, where $\sigma(\boldsymbol{\alpha}, a_{n+1})=\boldsymbol{\omega} \cdot \boldsymbol{\alpha} - La_{n+1}$ and $ H_1, \ldots, H_{n+1}$ are the coordinate hyperplanes in $\mathbb{R}^{n+1}$. Notice that with this notation the generating set for $S(I)$ is $$ \{ (a_1,\ldots,a_n,d) \in \mathbb{N}^{n+1} \mid d \le 1 \mbox{ and } a_1 \omega_1 + \cdots + a_n \omega_n \ge dL \}.$$ Notice that $(\mathbf{e}_1,0), \ldots , (\mathbf{e}_n,0), (\ensuremath{{\lambda}}_1\mathbf{e}_1,1) \in S$ and hence $\ensuremath{\mathrm{grp}}(S) = \mathbb{Z}^{n+1}$, i.e., $S$ is full-dimensional. The following description of the height one monomial primes of $R[It]$ appeared in \cite{Vi}. \begin{lemma}\label{lem: ht 1 monl prime} For a monomial ideal $I=I(\boldsymbol{\lambda})$ the height one monomial primes of $R[It]$ are as follows: \begin{eqnarray*} P_i &= & (x_i) + (x^{\boldsymbol{\beta}_j}t \mid \mathbf{e}_i \le_{pr} \boldsymbol{\beta}_j) \quad (i=1, \ldots, n);\\ P_{n+1} &=& (x^{\boldsymbol{\beta}_1}t, \ldots, x^{\boldsymbol{\beta}_r}t); \mbox{ and} \\ P_{\sigma} &= & (x_1, \ldots, x_n) + (x^{\boldsymbol{\beta}_j}t \mid \sigma(\boldsymbol{\beta}_j,1) > 0). \end{eqnarray*} \end{lemma} We next describe the height $k$ monomial primes. Towards this end we show that every codimension $k$ face of $\ensuremath{\mathrm{pos}}(S)$ is the intersection of precisely $k$ facets of $\ensuremath{\mathrm{pos}}(S)$ for each $k$ such that $1 \le k \le n$. Notice that \begin{eqnarray*} \ensuremath{\mathrm{pos}}(S(I)) &=& \ensuremath{\mathrm{pos}}(S(J)) \\ &=& \ensuremath{\mathrm{pos}}((\mathbf{e}_1,0), \ldots, (\mathbf{e}_n,0), (\ensuremath{{\lambda}}_1\mathbf{e}_1,1), \ldots , (\ensuremath{{\lambda}}_n\mathbf{e}_n,1)). \end{eqnarray*} Alternate proofs that the following are the height $k$ monomial primes of $R[It]$ can be found in \cite{Co}. In the next few paragraphs, given integers $1 \le i_1 < i_2 < \cdots < i_k \le n$ let $1 \le j_1< \cdots < j_{n-k} \le n$ be such that $\{i_1, \ldots , i_k, j_1, \ldots , j_{n-k} \} = \{ 1 , \ldots , n \}$. \begin{lemma}\label{lem: face 1} For $k < n$ and integers $1 \le i_1 < i_2 < \cdots < i_k \le n$, let \\ $F= F_{i_1} \cap \cdots \cap F_{i_k}$.
Then, \begin{enumerate} \item $ F = \ensuremath{\mathrm{pos}}((\mathbf{e}_{j_1},0), \ldots , (\mathbf{e}_{j_{n-k}},0), (\ensuremath{{\lambda}}_{j_1}\mathbf{e}_{j_1},1), \ldots , (\ensuremath{{\lambda}}_{j_{n-k}}\mathbf{e}_{j_{n-k}},1));$ \item $\mathrm{codim}(F_{i_1} \cap \cdots \cap F_{i_k}) = k$ and $P_F = (x_{i_1}, \ldots , x_{i_k}) + (x^{\boldsymbol{\beta}}t \mid (\boldsymbol{\beta} ,1) \in S \setminus F);$ and \item $\widetilde{S_F}$ is free with basis given by the images of $(\mathbf{e}_{i_1},0), \ldots , (\mathbf{e}_{i_k}, 0)$ and hence $R[It]$ localized at $P_F$ is regular. \end{enumerate} \begin{proof} Let $\sigma_i(\boldsymbol{\alpha}, a_{n+1}) = a_i$ for $1 \le i \le n+1$. The first assertion is a consequence of the fact that $\sigma_i$ of a sum of vectors in $\ensuremath{\mathrm{pos}}(S)$ is zero if and only if $\sigma_i$ of each summand is zero. The codimension statement follows immediately. The description of the corresponding monomial prime comes from looking at the generators of the prime ideal $S \setminus F$ of $S$. To see that the images of $(\mathbf{e}_{i_1},0), \ldots , (\mathbf{e}_{i_k}, 0)$ generate the quotient monoid it suffices to consider generators of $S$ of the form $(\boldsymbol{\beta},1)$ that are not in $S \cap F$. For such a generator, writing $\boldsymbol{\beta} = (b_1, \ldots , b_n)$, we have $b_{i_s} > 0 $ for some $s$. Then, \begin{eqnarray*} (\boldsymbol{\beta},1) &\equiv &(( \boldsymbol{\beta} ,0) - (\ensuremath{{\lambda}}_{j_1} \mathbf{e}_{j_1}, 0)) + (\ensuremath{{\lambda}}_{j_1} \mathbf{e}_{j_1}, 1) \pmod{\ensuremath{\mathrm{grp}}(S \cap F)} \\ &\equiv& (b_{i_1}\mathbf{e}_{i_1},0) + \cdots + (b_{i_k}\mathbf{e}_{i_k},0) \hspace{.35in} \pmod{ \ensuremath{\mathrm{grp}}(S \cap F)} \end{eqnarray*} Thus the images of $(\mathbf{e}_{i_1},0), \ldots , (\mathbf{e}_{i_k}, 0)$ form a free basis for the quotient $\widetilde{S_F}$, since uniqueness of representation is clear. \end{proof} Next we consider the case where one of the facets in the intersection is $F_{n+1}$. The proof follows from the same observations as the preceding proof, so we omit it. \begin{lemma} \label{lem: face 2} For integers $1 \le i_1 < i_2 < \cdots < i_{k} \le n$ let $F=F_{i_1} \cap \cdots \cap F_{i_{k}} \cap F_{n+1}$. The following hold. \begin{enumerate} \item $F= \ensuremath{\mathrm{pos}}((\mathbf{e}_{j_1}, 0) , \ldots, (\mathbf{e}_{j_{n-k}}, 0)).$ \item $\mathrm{codim}(F) = k+1$ and the corresponding monomial prime is \\ $P_F = (x_{i_1}, \ldots , x_{i_k}) + (x^{\boldsymbol{\beta}}t \mid (\boldsymbol{\beta}, 1) \in S).$ \item If $k < n$ then $\widetilde{S_F}$ is free with basis given by the images of the vectors \\ $(\mathbf{e}_{i_1},0), \ldots , (\mathbf{e}_{i_k},0), (\ensuremath{{\lambda}}_{j_1}\mathbf{e}_{j_1},1)$. In case $k < n$ the Rees algebra $R[It]$ localized at $P_F$ is regular. \end{enumerate} \end{lemma} Notice that $F_1 \cap \cdots \cap F_n = \{(0, \ldots , 0)\} = F_{n+1} \cap F_{\sigma}$ and the corresponding monomial prime is $\mathfrak{m} = (x_1 , \ldots , x_n) + (x^{\boldsymbol{\beta}}t \mid (\boldsymbol{\beta} , 1) \in S)$, the maximal monomial prime of $K[S]$. In this case, the codimension drops by more than the expected amount. We can still realize the apex of the cone as the intersection of $n+1$ facets, namely $ \{(0, \ldots , 0)\} = F_1 \cap \cdots \cap F_{n+1}$. Notice $S_0 = \{ 0 \}$ and $\widetilde{S}$ is not free provided that $n > 1$. Now we involve the facet $F_{\sigma}$. By the above lemmas these are the only faces we need to worry about when characterizing which Rees algebras $R[It]$ are regular in codimension $k \le n$.
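For instance, with $n = 2$ and $\boldsymbol{\lambda} = (2,3)$ we have $L = 6$, $\boldsymbol{\omega} = (3,2)$, and $I(\boldsymbol{\lambda}) = \overline{(x^2,y^3)} = (x^2, xy^2, y^3)$, so that $S(I) = \langle (1,0,0), (0,1,0), (2,0,1), (1,2,1), (0,3,1) \rangle$. The face $F = F_1 \cap F_{\sigma}$ consists of the elements $(0, a_2, a_3)$ of $\ensuremath{\mathrm{pos}}(S)$ with $2a_2 = 6a_3$, that is, $F = \ensuremath{\mathrm{pos}}((3\mathbf{e}_2,1))$, and the corresponding height two monomial prime is $P_F = (x) + (x^2t, xy^2t)$, in accordance with the next lemma.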
The proof of the next lemma is a consequence of the same observations and is omitted. \begin{lemma} \label{lem: face 3} For integers $1 \le i_1 < i_2 < \cdots < i_k \le n$ let $F = F_{i_1} \cap \cdots \cap F_{i_k} \cap F_{\sigma}$. The following hold. \begin{enumerate} \item $F = \ensuremath{\mathrm{pos}}((\ensuremath{{\lambda}}_{j_1}\mathbf{e}_{j_1},1), \ldots , (\ensuremath{{\lambda}}_{j_{n-k}}\mathbf{e}_{j_{n-k}},1)).$ \item $\mathrm{codim}(F_{i_1} \cap \cdots \cap F_{i_k} \cap F_{\sigma} ) = k+1$ and the corresponding monomial prime is $P_F = (x_{i_1}, \ldots , x_{i_k}) + (x^{\boldsymbol{\beta}}t \mid (\boldsymbol{\beta}, 1) \in S \setminus F).$ \end{enumerate} \end{lemma} We now show that condition (2) of Theorem \ref{thm: R_k} is always satisfied by the positive dimensional faces of $\ensuremath{\mathrm{pos}}(S)$ that are contained in $F_{\sigma}$. The condition is a priori true for positive dimensional faces not contained in $F_{\sigma}$ by Lemmas \ref{lem: face 1} and \ref{lem: face 2}. \begin{lemma}\label{lem: ker Z^n to Z} Let $\boldsymbol{\omega} = (\omvec{n})$ be a tuple of positive integers with $n \ge 2$ and let $\phi: \mathbb{Z} ^n \rightarrow \mathbb{Z} $ be defined by $\phi(\boldsymbol{\alpha}) = \boldsymbol{\omega} \cdot \boldsymbol{\alpha}$. For each $1 \le i < j \le n$ let $r_{ij} = \gcd(\omega_i, \omega_j)$. Then, the kernel of $\phi$ is generated by the tuples $\boldsymbol{\mu}_{ij} = \frac{\omega_j}{r_{ij}}\mathbf{e}_i - \frac{\omega_i}{r_{ij}}\mathbf{e}_j \; \; (1 \le i < j \le n)$, where $\mathbf{e}_1, \ldots , \mathbf{e}_n$ are the standard basis vectors for $\mathbb{Z}^n$. Furthermore, for integers $1 \le i_1 < i_2 < \cdots < i_k \le n$ we have that $\ker(\phi) \cap H_{i_1} \cap \cdots \cap H_{i_k}$ is generated by the vectors $\boldsymbol{\mu}_{ij}$ lying in $H_{i_1} \cap \cdots \cap H_{i_k}$. \end{lemma} \begin{proof} This lemma is used in Gr\"{o}bner basis theory, but for the reader's convenience we will supply a proof. We proceed by induction on $n$, the case $n=2$ being straightforward. Suppose $n > 2$ and that the assertion holds for $n - 1$. Assume $\boldsymbol{\beta} = (b_1, \ldots, b_n) \in \ker(\phi)$. Let $g = \gcd(\omega_1, \ldots, \omega_n)$. Then, \begin{equation} \label{eq:ind1} b_n\frac{\omega_n}{g}= - (b_1\frac{\omega_1}{g} + \cdots + b_{n-1}\frac{\omega_{n-1}}{g}). \end{equation} Since $\gcd(\omega_1/g, \ldots , \omega_n/g) = 1$, the integer $\gcd(\omega_1/g, \ldots , \omega_{n-1}/g)$ is relatively prime to $\omega_n/g$ and divides the right-hand side of (\ref{eq:ind1}), hence divides $b_n$. Thus \begin{equation} \label{eq:ind2} b_n = \sum_{i=1}^{n-1} c_i\frac{\omega_i}{g} = \sum_{i=1}^{n-1} c_is_i\frac{\omega_i}{r_{in}}, \end{equation} where $r_{in} = s_ig \, (i=1, \ldots, n-1)$. Then, \begin{equation} \label{eq:ind3} \boldsymbol{\beta} + \sum_{i=1}^{n-1} c_is_i\boldsymbol{\mu}_{in} = (b_1^{\prime}, \ldots, b_{n-1}^{\prime},0) \in \ker(\phi), \end{equation} and the assertion then follows from the induction hypothesis. Notice that if $\boldsymbol{\beta} \in \ker(\phi) \cap H_{i_1} \cap \cdots \cap H_{i_k}$ then each step only involves vectors in $H_{i_1} \cap \cdots \cap H_{i_k}$. \end{proof} This enables us to prove that in our setting the group property is automatic. The following two results appear in the unpublished thesis of H. Coughlin \cite{Co}. \begin{lemma} \label{lem: group 2} For integers $1 \le i_1 < i_2 < \cdots < i_k \le n$ we always have $$\ensuremath{\mathrm{grp}}(S \cap F_{i_1} \cap \cdots \cap F_{i_k} \cap F_{\sigma} ) = \ensuremath{\mathrm{grp}}(S) \cap H_{i_1} \cap \cdots \cap H_{i_k} \cap H_{\sigma} .$$ \end{lemma} \begin{proof} Recall that $\ensuremath{\mathrm{grp}}(S) = \mathbb{Z}^{n+1}$.
The containment $\ensuremath{\mathrm{grp}}(S \cap F_{i_1} \cap \cdots \cap F_{i_k} \cap F_{\sigma} ) \subseteq \ensuremath{\mathrm{grp}}(S) \cap H_{i_1} \cap \cdots \cap H_{i_k} \cap H_{\sigma} = \mathbb{Z}^{n+1}\cap H_{i_1} \cap \cdots \cap H_{i_k} \cap H_{\sigma} $ is clear. If $k = n$ then $\ensuremath{\mathrm{grp}}(S) \cap H_{i_1} \cap \cdots \cap H_{i_k} \cap H_{\sigma} = \mathbb{Z}^{n+1} \cap H_1 \cap \cdots \cap H_n \cap H_{\sigma} = \{ \mathbf{0} \}$ and the assertion follows. Now assume $k < n$. Suppose $(\boldsymbol{\beta},d) \in \mathbb{Z}^{n+1}\cap H_{i_1} \cap \cdots \cap H_{i_k} \cap H_{\sigma} $. Then, $\boldsymbol{\beta} - d\ensuremath{{\lambda}}_{j_1}\mathbf{e}_{j_1} \in \ker(\phi)$, where $\phi$ is defined as in the preceding lemma. By that lemma and the fact that $(\ensuremath{{\lambda}}_{j_1}\mathbf{e}_{j_1},1) \in S \cap F_{i_1} \cap \cdots \cap F_{i_k} \cap F_{\sigma} $, it suffices to show that $(\boldsymbol{\mu}_{ij},0) \in \ensuremath{\mathrm{grp}}(S \cap F_{i_1} \cap \cdots \cap F_{i_k} \cap F_{\sigma})$ for all $1 \le i < j \le n$ with $\boldsymbol{\mu}_{i j} \in H_{i_1} \cap \cdots \cap H_{i_k}$, where we view each $H_{i_s}$ as a coordinate hyperplane in either $\mathbb{R}^n$ or $\mathbb{R}^{n+1}$. Let $1 \le i < j \le n$, set $r = \gcd(\omega_i,\omega_j)$ and choose $d \in \mathbb{Z}$ such that $0 \le d\ensuremath{{\lambda}}_j - \frac{\omega_i}{r} < \ensuremath{{\lambda}}_j$. Multiplying by $\omega_j$ and dividing by $\omega_i$, we get $0 \le d\ensuremath{{\lambda}}_i - \frac{\omega_j}{r} < \ensuremath{{\lambda}}_i$, which implies $0 < \frac{\omega_j}{r} - (d-1)\ensuremath{{\lambda}}_i \le \ensuremath{{\lambda}}_i$. Then, \begin{eqnarray*} (\boldsymbol{\mu}_{ij},0) &=& (\frac{\omega_j}{r}\mathbf{e}_i +(d\ensuremath{{\lambda}}_j - \frac{\omega_i}{r})\mathbf{e}_j,d) - d(\ensuremath{{\lambda}}_j\mathbf{e}_j,1) \\ & = & ((\frac{\omega_j}{r} - (d-1)\ensuremath{{\lambda}}_i)\mathbf{e}_i + (d\ensuremath{{\lambda}}_j - \frac{\omega_i}{r})\mathbf{e}_j,1) + (d-1)(\ensuremath{{\lambda}}_i\mathbf{e}_i,1)-d(\ensuremath{{\lambda}}_j\mathbf{e}_j,1) \\ & \in & \ensuremath{\mathrm{grp}}(S \cap F_{i_1} \cap \cdots \cap F_{i_k} \cap F_{\sigma}), \end{eqnarray*} since each of the three tuples involved is an element of $S$ lying in $F_{i_1} \cap \cdots \cap F_{i_k} \cap F_{\sigma}$. \end{proof} Combining Theorem \ref{thm: R_k} with the preceding lemma we obtain the following result. \begin{theorem} \label{thm: R_k for Rees} For a positive integer $\ell < n$ the Rees algebra of $I = I(\boldsymbol{\lambda})$ over a field satisfies condition $\mathrm{R}_{\ell + 1}$ of Serre if and only if for all sequences of positive integers $1 \le i_1 < \cdots < i_{\ell} \le n$ there exist $\boldsymbol{\gamma}_i = (\boldsymbol{\beta}_i , 1) \in \mathbb{N}^{n+1} \; (i = 1 , \ldots , \ell +1)$ such that $\sigma_i(\boldsymbol{\gamma}_j) = \delta_{i j} \, (1 \le i,j \le \ell + 1)$, where $\sigma_1 , \ldots , \sigma_{\ell +1}$ are the primitive linear forms associated with the hyperplanes $H_{i_1} , \ldots , H_{i_{\ell}}, H_{\sigma}$. \end{theorem} \begin{proof} First let us determine when $\mathrm{R}_{\ell + 1}$ holds. We need only consider faces contained in $F_{\sigma}$. By Lemma \ref{lem: group 2} and Theorem \ref{thm: R_k} we need only show that condition (1) of Theorem \ref{thm: R_k} holds for such faces. Let $1 \le k \le \ell +1$ and let $Q$ be a height $k$ monomial prime corresponding to a face contained in the facet $F_{\sigma}$.
Then $Q$ is contained in a height $\ell +1$ monomial prime corresponding to a face contained in the facet $F_{\sigma}$ by Lemma \ref{lem: face 3}. Thus it suffices to establish condition (1) of Theorem \ref{thm: R_k} for height $\ell+1$ monomial primes whose faces are contained in the facet $F_{\sigma}$. Hence it is necessary and sufficient that there exist vectors $\boldsymbol{\gamma}_i \in S(I) \; (i = 1 , \ldots , \ell +1)$ such that $\sigma_i(\boldsymbol{\gamma}_j) = \delta_{i j}$, where $\sigma_1 , \ldots , \sigma_{\ell +1}$ are the primitive linear forms associated with the hyperplanes $H_{i_1} , \ldots , H_{i_{\ell}}, H_{\sigma}$. Recall that the generators of $S(I)$ have $(n+1)^{\mathrm{st}}$ component 0 or 1. Write each $\boldsymbol{\gamma}_j$ as a sum of generators of $S(I)$. First suppose that $1 \le j \le \ell $. The condition that $\sigma_i(\boldsymbol{\gamma}_j) = \delta_{i j} \; (i = 1, \ldots , \ell + 1)$ forces some summand $(\boldsymbol{\beta}_j, 1)$ of $\boldsymbol{\gamma}_j$ to satisfy $\sigma_i(\boldsymbol{\beta}_j,1) = \delta_{i j}$ for all $1 \le i \le \ell +1$. Replacing $\boldsymbol{\gamma}_j$ by this summand we may assume that $\boldsymbol{\gamma}_j = (\boldsymbol{\beta}_j,1).$ Now consider the summands involved in the expression for $\boldsymbol{\gamma}_{\ell + 1}$. Consider first the possibility that all summands have $(n+1)^{\mathrm{st}}$ component 0. Then, each summand has positive $\sigma$-value, so there is only one summand and $\boldsymbol{\gamma}_{\ell +1} = (\mathbf{e}_j ,0)$, where $j \in \{j_1, \ldots , j_{n-\ell} \}$ and $\omega_j = 1$. In this case $( (L+1)\mathbf{e}_{j} ,1)$ also satisfies the requirements and we may replace $\boldsymbol{\gamma}_{\ell+1}$ by $((L+1)\mathbf{e}_{j} ,1)$. The remaining possibility is that some summand has $(n+1)^{\mathrm{st}}$ component 1 and again we may replace $\boldsymbol{\gamma}_{\ell+1}$ by this summand. In any case, we may assume $\boldsymbol{\gamma}_{\ell + 1}$ has $(n+1)^{\mathrm{st}}$ component 1. Conversely, if vectors $(\boldsymbol{\beta}_j, 1) \in \mathbb{N}^{n+1}$ satisfying $\sigma_i(\boldsymbol{\beta}_j,1) = \delta_{i j}$ for all $1 \le i, j \le \ell +1$ exist, they are automatically in $S(I)$ since $I$ is integrally closed. \end{proof} We now state the result entirely in terms of the integers $L, \omega_1, \ldots , \omega_n$ determined by the vector $\boldsymbol{\lambda} = (\ensuremath{{\lambda}}_1 , \ldots , \ensuremath{{\lambda}}_n)$.
\begin{cor} \label{cor: R_k for Rees} For a positive integer $\ell < n$ the Rees algebra of $ I(\boldsymbol{\lambda})$ over a field satisfies condition $\mathrm{R}_{\ell + 1}$ of Serre if and only if for all sequences of positive integers $1 \le i_1 < \cdots < i_{\ell} \le n$ and $1 \le j_1 < \cdots < j_{n-\ell} \le n$ such that $\{1, \ldots , n \} = \{i_1 , \ldots , i_{\ell} \} \cup \{ j_1 , \ldots , j_{n-\ell} \} $, we have $$L - \omega_{i_1}, \ldots , L - \omega_{i_{\ell}}, L + 1 \in \langle \omega_{j_1}, \ldots , \omega_{j_{n-\ell}} \rangle.$$ \end{cor} \begin{proof} By Theorem \ref{thm: R_k for Rees} it suffices to show that for a sequence of positive integers $1 \le i_1 < \cdots < i_{\ell} \le n$ and $1 \le j_1 < \cdots < j_{n-\ell} \le n$ such that $ \{1, \ldots , n \} = \{i_1 , \ldots , i_{\ell} \} \cup \{ j_1 , \ldots , j_{n-\ell} \} $, vectors $\boldsymbol{\gamma}_s = (\boldsymbol{\beta}_s, 1) \in \mathbb{N}^{n+1} \; (s = 1, \ldots, \ell +1)$ exist such that $\sigma_i(\boldsymbol{\gamma}_j) = \delta_{i j} \; \mbox{ for all }1 \le i, j \le \ell + 1$, where the $\sigma_i$ are as above, if and only if $L - \omega_{i_1}, \ldots , L - \omega_{i_{\ell}}, L + 1 \in \langle \omega_{j_1}, \ldots , \omega_{j_{n-\ell }} \rangle.$ Suppose first that vectors $\boldsymbol{\gamma}_s = (\boldsymbol{\beta}_s, 1) \in \mathbb{N}^{n+1}$ satisfying the necessary conditions exist. By our requirements, $\boldsymbol{\gamma}_s= (\mathbf{e}_{i_s} + a_{j_1}\mathbf{e}_{j_1} + \cdots + a_{j_{n - \ell}}\mathbf{e}_{j_{n - \ell}},1)$ for each $s = 1, \ldots , \ell$, where the coefficients $a_{j_1} , \ldots , a_{j_{n-\ell}}$ are nonnegative integers depending on $s$. The existence of such vectors is equivalent to the conditions $L - \omega_{i_1}, \ldots , L - \omega_{i_{\ell}} \in \langle \omega_{j_1}, \ldots, \omega_{j_{n-\ell}} \rangle.$ We must also have $\boldsymbol{\gamma}_{\ell + 1} = (a_{j_1}\mathbf{e}_{j_1} + \cdots + a_{j_{n - \ell}}\mathbf{e}_{j_{n - \ell}} ,1)$, where the coefficients $a_{j_1} , \ldots , a_{j_{n-\ell}}$ are nonnegative and $ a_{j_1}\omega_{j_1} + \cdots + a_{j_{n - \ell}}\omega_{j_{n - \ell}} - L = 1$. The existence of such a $\boldsymbol{\gamma}_{\ell + 1}$ is equivalent to $L + 1 \in \langle \omega_{j_1}, \ldots , \omega_{j_{n-\ell }} \rangle.$ \end{proof} Applying this result for values of $\ell$ close to $n$ gives simple descriptions of when the Rees algebra of $I(\boldsymbol{\lambda})$ is regular in codimension $\ell$. \begin{cor} The Rees algebra of $ I(\boldsymbol{\lambda}) \subset K[\xvec{n}]$ over a field $K$ is regular in codimension $n$ if and only if $\boldsymbol{\lambda} = \ensuremath{{\lambda}}(1,1, \ldots , 1)$ and hence $I(\boldsymbol{\lambda}) = \mathfrak{m}^{\ensuremath{{\lambda}}}$. \end{cor} \begin{proof} The Rees algebra of $I$ satisfies $\mathrm{R}_{n}$ if and only if for every sequence \\$1 < \cdots < i-1 < i+1 < \cdots < n$ of length $n-1$, we have $$L - \omega_1, \ldots , L - \omega_{i-1} , L - \omega_{i+1} , \ldots , L-\omega_n , L+1 \in \langle \omega_i \rangle.$$ In particular, $L+1 = a\omega_i$ for some $a \ge 0$, which implies $1 = \omega_i(a - \ensuremath{{\lambda}}_i)$. Thus each $\omega_i = 1$. Conversely, if all the $\omega_i = 1$ then the necessary conditions are satisfied. \end{proof} \begin{cor} \label{cor: 2} Suppose that $n \ge 3$. The Rees algebra of $ I(\boldsymbol{\lambda}) \subset K[\xvec{n}]$ over a field $K$ is regular in codimension $n-1$ if and only if the integers $\omega_i$ are pairwise relatively prime.
\end{cor} \begin{proof} The sequences of length $n-2$ arise from omitting two integers $1 \le i < j \le n$. For each pair $1 \le i < j \le n$ we must have $$L - \omega_k \in \langle \omega_i , \omega_j \rangle \mbox{ for all } k \ne i,j \mbox{ and } L+1 \in \langle \omega_i , \omega_j \rangle.$$ Write $L+1 = a\omega_i + b\omega_j$ for $a, b \ge 0$ and read modulo $\omega_i$ to obtain the congruence $b\omega_j \equiv 1 \pmod{\omega_i}$. Hence $\omega_i$ and $\omega_j$ are relatively prime. This holds for all pairs $1 \le i<j \le n$. Conversely, if the integers $\omega_i$ are pairwise relatively prime then every integer at least $(\omega_i - 1)(\omega_j -1)$ is in $\langle \omega_i, \omega_j \rangle$. So $L + 1 \mbox { and } L - \omega_k = \omega_k(g \prod_{s \ne k} \omega_s - 1) \in \langle \omega_i, \omega_j \rangle$, where $g = \gcd(\ensuremath{{\lambda}}_1, \ldots , \ensuremath{{\lambda}}_n)$. \end{proof} If $n = 2$ the Rees algebra $R[I(\boldsymbol{\lambda})t]$ is normal and hence regular in codimension $n - 1 = 1$ without any additional assumptions. Corollary \ref{cor: 2} says that if $n=3$ the Rees algebra of $I(\boldsymbol{\lambda})$ is regular in codimension 2 if and only if the $\omega_i$ are pairwise relatively prime. H. Coughlin \cite{Co} proved this special case and also that this condition is sufficient for $R[I(\boldsymbol{\lambda})t]$ to be normal. This result combined with an earlier result of Reid, Roberts, and Vitulli \cite{RRV} has an interesting consequence, which we now present. \begin{notn} {\rm For an $\mathbb{N}$-graded ring $A$ and a positive integer $t$ we let $A_{\ge t}$ denote the homogeneous ideal $A_{\ge t} = \oplus_{s \ge t} A_s$. Further assume that $A$ is generated as an $A_0$-algebra by homogeneous elements $\xvec{n}$ of positive degrees $\omega_1 , \ldots , \omega_n$, respectively. Notice that for a tuple of positive integers $\boldsymbol{\lambda}$, with $L = \ensuremath{\mathrm{lcm}}(\lambda_1, \ldots, \lambda_n )$ and $\omega_i = L/\lambda_i \, (i = 1 , \ldots , n)$ as in (\ref{blam}), if we define a new grading on $R = K[\xvec{n}]$ by declaring $\deg(x_i) = \omega_i \, (i = 1 , \ldots , n)$, then $I(\boldsymbol{\lambda}) = R_{\ge L} = R_{\ge dw}$, where $d = \gcd(\ensuremath{{\lambda}}_1, \ldots , \ensuremath{{\lambda}} _n)$. Ideals of the form $A_{\ge t}$ have been studied by E. Hyry and K. E. Smith in connection with Kawamata's Conjecture, which speculates that every adjoint ample line bundle on a smooth variety admits a nonzero section (e.g. see \cite{HS1} and \cite{HS2}). They also arise as test ideals in tight closure theory as illustrated in \cite[Remark 3]{F}.} \end{notn} \begin{prop} Let $R= K[x,y,z]$ be a polynomial ring over a field $K$ and let $a,b,c$ be pairwise relatively prime positive integers. Set $S = K[x^a, y^b, z^c]$ and $L = abc$. Then, the ideal $\mathfrak{a} = S_{\ge L}$ is normal, that is, $\mathfrak{a}^t = S_{\ge tL}$ for all $t \ge 1$. \end{prop} \begin{proof} Observe that the homogeneous ideal $ \mathfrak{a} = S_{\ge L}$ is integrally closed and that the integral closure of $ \mathfrak{a}^t$ is $ S_{\ge t L}$ for $t \ge 1$ (e.g. see the discussion in \cite{RRV}). By Proposition 3.7 of \cite{RRV}, to prove that $\mathfrak{a}$ is normal it suffices to show that $\mathfrak{a}^2 = S_{\ge 2L}$. For this we proceed as in the proof of Theorem III.2.2 of \cite{Co}. Suppose that $x^{ua}y^{vb}z^{wc} \in S_{\ge 2L}$ is a minimal monomial generator. In particular, $ua + vb + wc \ge 2L$.
We must exhibit a decomposition $(u,v,w) = (u_1,v_1,w_1)+(u_2,v_2,w_2)$ with $u_ia + v_ib + w_ic \ge L \, (i=1,2)$. If $u \ge L/a$, $v \ge L/b$ or $w \ge L/c$, it is clear that we can do this. For example, if $u \ge L/a$ write $(u,v,w) = (L/a,0,0) + (u - L/a,v,w)$. Thus it suffices to assume that $u < L/a$, $v < L/b$ and $w < L/c$. Notice that this forces the sum of any two of $ua, vb,$ and $wc$ to be strictly greater than $L$ and each summand to be positive. We consider three cases. First suppose that either $a, b,$ or $c$ is 1. For example, suppose $a=1$. Say $u + vb = L + u_2$. Then $0 < u_2 < u$ and $$(u,v,w) = (u-u_2,v,0) + (u_2, 0, w)$$ is the desired decomposition. Thus we may assume that $a,b,c > 1$. Now suppose that $u < L/2a $, $v < L/2b $ or $w < L/2c $. Without loss of generality we may assume that $w < L/2c $. Then, $L - wc > L/2 \ge (a-1)(b-1)$, so there exist $u_1, v_1 \in \mathbb{N}$ such that $u_1a + v_1b = L - wc$. Since $vb < L$, we have $ua + wc > L$. Now $u_1a \le L - wc < ua$ implies $u_1 < u$. Similarly, $v_1 < v$. Therefore, $$(u,v,w) = (u_1,v_1,w) + (u-u_1, v-v_1,0)$$ is the desired decomposition. Finally, assume that $u \ge L/2a $, $v \ge L/2b $ and $w \ge L/2c $. Set $w_1 = \lceil L/2c \rceil$. Then, $L - w_1c \ge L - (ab+1)c/2 = (c/2)(ab-1) > (a-1)(b-1)$ and we may write $u_1a + v_1b = L - w_1c$ for some $u_1, v_1 \in \mathbb{N}$. Notice that $u_1a \le L - w_1c \le L/2$ and hence $u_1 \le L/2a \le u$. Similarly, $v_1 \le v$. Thus $$(u,v,w) = (u_1,v_1,w_1) + (u-u_1,v-v_1,w-w_1)$$ is the desired decomposition. \end{proof} We now present an example of a Rees algebra $\mathcal{R} = R[It]$ of a monomial ideal that satisfies $\mathrm{R}_2$ but is not normal. By the above remarks, in order to find an example we must work over a polynomial ring in at least 4 indeterminates. The following example is due to H. Coughlin \cite{Co}. The example was first explored using the program NORMALIZ \cite{Norm} of Bruns and Koch. \begin{example}\label{ex: weirdex} Let $\boldsymbol{\lambda}=(1443,37,21,91)$. Define $I=I(\boldsymbol{\lambda})$, $S=S(I)$, and $\mathcal{R}$ as above. We claim $\mathcal{R}$ is not normal but satisfies the equivalent conditions for R$_2$. Hence, $\mathcal{R}$ does not satisfy S$_2$. \end{example} We first show that $\mathcal{R}$ is not normal. Note that $L=10101$ and $\boldsymbol{\omega} = (7, 273, 481, 111)$. The vector $\boldsymbol{\alpha}=(2,36,1,89)$ satisfies $\boldsymbol{\omega} \cdot \boldsymbol{\alpha}=2L$, so that $x^{\boldsymbol{\alpha}} \in \overline{I^2}$. Direct computation shows that $\boldsymbol{\alpha}$ is not the sum of two exponent vectors of monomials in $I$ and hence $x^{\boldsymbol{\alpha}} \notin I^2$. Thus $\mathcal{R}$ is not normal. We now show that $\mathrm{R}_2$ holds. As in the proof of Theorem \ref{thm: R_k for Rees} we need only deal with the height two monomial primes corresponding to the faces $G_1, G_2, G_3, G_4$, where $G_i = \ensuremath{\mathrm{pos}}(S) \cap H_i \cap H_{\sigma}$. For each $G_i$ we must exhibit a pair of elements $\boldsymbol{\gamma}_i$ and $\boldsymbol{\gamma}_6$ in $S$ such that $\sigma_a(\boldsymbol{\gamma}_b) = \delta_{a b}$ for $a, b \in \{ i, 6 \}$, where $\sigma_6 = \sigma$. The following vectors satisfy these conditions: $$\begin{array}{ccc} i&\boldsymbol{\gamma}_i & \boldsymbol{\gamma}_6\\ 1&(1,18,8,12,1) &(0,8,1,67,1) \\ 2&(35,1,1,82,1)&(16,0,0,90,1)\\ 3&(35,1,1,82,1) & (16,0,0,90,1)\\ 4&(220,1,17,1,1) & (275,0,17,0,1)\\ \end{array}$$ For instance, for $i = 4$ one checks that $\sigma(\boldsymbol{\gamma}_4) = 7 \cdot 220 + 273 + 481 \cdot 17 + 111 - 10101 = 0$ and $\sigma(\boldsymbol{\gamma}_6) = 7 \cdot 275 + 481 \cdot 17 - 10101 = 1$. Thus $\mathcal{R}$ satisfies R$_2$ by Theorem~\ref{thm: R_k for Rees}.
Since $\mathcal{R}$ in particular satisfies $\mathrm{R}_1$ but is not normal, it follows from Serre's criterion for normality that $\mathcal{R}$ does not satisfy S$_2$.
\section{Data Set} \label{sec:dataset} The data used in this paper was collected by a project called Untrue.News, a search engine designed to find fake stories on the internet \cite{woloszyn2020untrue}. It uses an open-source web crawler for searching fake news. This web crawler is connected to a natural language processing pipeline that uses automatic and semi-automatic strategies for data enrichment. In this way, approximately 30,000 English documents were retrieved, dating back to the year 1995. For our research we used 25,886 English sentences from these documents. Documents in further languages, which were not used for our project, can be found at untrue.news. \citet{woloszyn2020untrue} use four categories to classify their documents: \begin{itemize} \item \textbf{TRUE}: completely accurate statements \item \textbf{FALSE}: completely false statements \item \textbf{MIXED}: partially accurate statements with some elements of falsity \item \textbf{OTHER}: special articles that do not provide a clear verdict or do not match any other categories \end{itemize} These categories can be found in \cite{tchechmedjiev2019claimskg} and are fewer than those defined in the schema.org/ClaimReview markup. For performing the text attacks, we only used the TRUE and FALSE statements. In Table \ref{table:multiclass}, classification results can be found for the models trained with all four categories. \section{Conclusion} \label{sec:conclusion} This paper aimed to answer the question: how vulnerable is automatic fake news detection to adversarial attacks? We tested this by checking whether automated augmentation of fake news sentences (FALSE statements) would lead to TRUE classifications. This would allow them to bypass the fake news detection mechanisms. Our results show that using the Python library TextAttack allows automated changing of the classification for 65.15\% of the sentences. Flair, the only model using word-level embeddings (contextual string embeddings), seems less vulnerable to attacks, with a Success Rate of 54.77\%. The other two models, which use document embeddings, show 72.45\% (BERTweet) and 68.22\% (RoBERTa). Furthermore, word-level swaps are noticeably more successful, with an average of 76.87\%, compared to character-level or mixed swaps with an average of 55.21\%. Consequently, the models are more vulnerable to attacks using semantically correct sentences with changed meaning than to attacks using typos. Overall, it seems possible to bypass the classifiers with these attacks. However, our results do not consider that a human will be able to spot obvious spelling mistakes in sentences. Furthermore, a human will recognize unfitting words in sentences with higher accuracy. Looking at the augmented sentences, we think that many of them will be recognized as FALSE (\textit{TrOmp} \textit{hsut} down American \textit{airportA} on 4 \textit{Jul} \textit{201B} or Hollywood Action Star \textit{Christelle} Chan Dead). \\ As a conclusion of these results, we think that the policy initiatives blocking and deprioritisation should be avoided if possible, as these methods do not completely remove fake news statements from users' feeds. This makes it easier to target these statements with automated attacks, as shown in our research. A possible scenario would be to attack these sentences until they are unblocked or re-prioritised in a user's feed. This would lead to a dangerous spread of fake news in social networks. \section*{Acknowledgments} We would like to thank Dr.
Vinicius Woloszyn for his support and supervision of the project. \section{Results and Discussion} \label{sec:results} The results discussed in this section can also be found in our GitHub repository \cite{githubcode}, including the respective implementations. \subsection{Training the Models} The first step of our experiment is to train state-of-the-art machine learning models to detect fake news. Since our dataset includes classes beyond TRUE and FALSE, we trained RoBERTa, BERTweet and Flair Embeddings both as binary and as multiclass classifiers. \subsubsection{Binary Classification} \label{sec:binclass} For the binary classification, RoBERTa, BERTweet and Flair Embeddings were only trained on TRUE and FALSE statements. Their precision, recall and F1-scores, depicted in Table \ref{table:binclass}, demonstrate that the BERT-based models performed best with scores of $>80\%$. Overall, BERTweet had the best results; however, the difference to RoBERTa is minimal, even though the models were pre-trained on entirely different datasets. With an F1-score of $\sim$70\%, Flair Embeddings received the worst score, which could be traced back to the small size of the dataset. \input{train_results/classification_bin} \subsubsection{Multiclass Classification} We also trained all three models on the four existing classes. Due to the higher classification complexity, all validation scores (shown in Table \ref{table:multiclass}) were about 10--20\% worse than for the binary classification. It is worth mentioning that RoBERTa performed slightly better than BERTweet, contrary to the less complex classification in Section \ref{sec:binclass}. Flair Embeddings' scores were affected the most and dropped to $\sim$50\%. \input{train_results/classification_multi} \subsection{Adversarial Attacks} The next step is to apply adversarial attacks on the dataset using TextAttack. Approximately 40 FALSE statements were sampled from the dataset and attacked using the TextAttack recipes. Due to the poor multiclass classification performance, we applied all attacks on the binary-trained models. The results can be found in Table \ref{bert-attacks-table} for BERTweet, in Table \ref{roberta-attacks-table} for RoBERTa and in Table \ref{flair-attacks-table} for Flair. The tables contain the percentage of sentences for which the attack was Successful, Failed or Skipped. Table \ref{reduction-feng-table} shows the percentages obtained for InputReductionFeng for all three models and the mean value of score improvement over all sentences for each model. To see the distribution of word-level and character-level attacks, Table \ref{mean-success-score} was generated. It contains the mean value of the Success percentages that can be found in Tables \ref{bert-attacks-table}, \ref{roberta-attacks-table} and \ref{flair-attacks-table}. \\ \input{train_results/attack_bertweet} \input{train_results/attack_roberta} BERTweet and RoBERTa both use document embeddings. They show similar results. For BERTweet, the word-level attack IGAWang had the highest success rate with 90\%. CheckList2020 was the least successful with 20\%. For RoBERTa, the word-level attack TextFoolerJin2019 was the most successful with 92.5\% and CheckList2020 the least successful with 7.5\%. With the exception of IGAWang, the recipes show the same order in success rates. The character-level attack DeepWordBug seems to be very successful for these two models, ranking second for RoBERTa and third for BERTweet. This is surprising since in the overall ranking in Table \ref{mean-success-score} it only ranked seventh.
This might be an indication that document embeddings are more vulnerable to character-level attacks than word embeddings. \input{train_results/attack_flair} In comparison to the BERT-based models, Flair classified a lot more inputs wrongly, which is reflected in the higher percentage of skipped statements in Table \ref{flair-attacks-table}. Nevertheless, it proved to be a lot less vulnerable towards adversarial attacks, considering that the best performing recipe TextFoolerJin2019 only reached a success rate of 66\% (vs 90\% and 92.5\%). As with the previous models, CheckList2020 performed worst, this time with a success rate of only 2\%. Both pure character-level attack recipes failed most of their attacks, with fail rates of 48\% (DeepWordBug) and 54\% (Pruthi). Overall, it appears that the contextualization used by Flair makes the model a lot more robust towards the word- and character-level TextAttack recipes used. \input{train_results/attack_inputreductionfeng} BERTweet seems to be the most vulnerable to InputReductionFeng attacks, but the difference in score increase is not very high ($\pm$0.01). Overall, these attacks seem to show similar results across all models. \input{train_results/recepy_rates} Table 8 shows that the four top-ranking attacks are word-level attacks. It seems that the models are more vulnerable to word-level attacks than to character-level or mixed attacks (character \& word). \\ Finally, the Success Rate for every model was calculated using Equation \ref{eq:1}: \begin{equation} \label{eq:1} S_r = \frac{1}{a} \sum_{i=1}^{a} \frac{s_{i}}{s_{i} + f_{i}} \end{equation} with $S_r$ being the success rate, $s_i$ the number of successful attacks and $f_i$ the number of failed attacks for recipe $i$, and $a$ the number of attack recipes. \input{train_results/sucess_rates} The skipped statements were not taken into the calculation, as they are dependent on the model training and not on TextAttack. The skipped values show which statements were not predicted correctly by the model in the first place. Thus, they depend on the model and reflect the model's accuracy. For the success rates, we wanted to see TextAttack's efficiency on statements that the model would correctly classify as fake news. Our results for the Success Rates underline the results seen in Tables \ref{bert-attacks-table}, \ref{roberta-attacks-table} and \ref{flair-attacks-table}, showing that Flair is less vulnerable to the attacks with $S_r = 54.77\%$ than the other two models with $S_r \approx 70\%$. In conclusion, this gives us a Total Success Rate of 65.15\% for adversarial attacks on fake news detection using TextAttack. \section{Related Work} \label{sec:relatedwork} \subsection{Fake News Detection} Fake news is used to manipulate the general opinion of readers about a certain topic \cite{surveyzhouzafarani}. Unlike typical ``clickbait'' articles, which use misleading and eye-catching headlines, fake news articles are usually quite long and wordy, consisting of inaccurate or invented plots \cite{chakraborty2016stop}. This gives rise to the assumption of a well-researched and factually correct article. The reader thus does not notice how their personal opinion about a certain topic is deliberately manipulated. Fake news detection refers to any kind of identification of such fake news. Due to the speed at which digital news is produced today, effective, automated fake news detection requires the use of machine learning tools.
Previous research has mainly focused on fake news in social media and fake news in online news articles \cite{q1_fakeflow}. There are various models for the automatic detection of fake news, which are based on different heuristics. For example, Ghanem et al. \cite{q1_fakeflow} discuss the effectiveness of the ``FakeFlow'' model, which incorporates both word embeddings and affective information such as emotions, moods or hyperbolic words, based on four different datasets. The model receives several small text segments as input instead of an entire article. The result of the study was that this model is more effective than most state-of-the-art models: it generated similar results with fewer resources. Another study \cite{q2_claimreview} investigates the possibility of using artificial intelligence for the automatic generation of ClaimReview. ClaimReview is the web markup introduced in 2015 that allows search engines to access fact-checked articles. The basic idea of the so-called ``fact check'' is that journalists and fact checkers identify misinformation and prevent it from spreading. Accordingly, it is important that fact-checked articles are highlighted and shared by users. Furthermore, research is currently looking at when and why a news article is identified as fake news and when it is not \cite{q3_defend}. This research on ``explainable fake news detection'' aims to improve the detection performance of the algorithms. For this purpose, both news content and user comments are used as data input. \subsection{Adversarial Attack} Adversarial attacks are part of adversarial machine learning, which has become increasingly important in the field of applied artificial intelligence in recent years. In an adversarial attack, the input data of a neural network is intentionally manipulated to test whether it is robust enough to still deliver the same outputs. Such manipulated inputs are called ``adversarial examples'' \cite{via_NLP}. Such a neural network is described as a ``fake news detector'' in the context of this paper. The reason for research in the field of adversarial attacks is that more and more attempts are being made to outsmart fake news detectors on the internet. This can be done, for example, by changing the spelling of a word so that the word remains easily interpretable for humans but not for an algorithm (from ``Barack'' to ``B4r4ck''). For this reason, the input data is always manipulated in such a way that a human hardly notices any differences from the non-manipulated input data. In the area of object recognition, for example, only individual pixels are slightly changed, which the human eye can hardly perceive. Examples of adversarial examples in the field of fake news detection are: \begin{itemize} \item \textbf{Fact distortion:} Here, some words are changed or exaggerated. It can be about people, time or places. \item \textbf{Subject-object-exchange:} By exchanging subject and object in a sentence, the reader is confused as to who is the executor and who is the recipient. \item \textbf{Cause confounding:} Here, either false causal connections are made between events or certain passages of a text are omitted. \end{itemize} Up to now, vulnerabilities to adversarial attacks have been identified in all application areas of neural networks. Especially in the context of increasingly safety-critical tasks for neural networks (e.g., autonomous driving), methods for detecting false or distorted input data are becoming more and more relevant \cite{textattack}.
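To make such character-level manipulations concrete, the following minimal Python sketch (our own illustration, not one of the attack frameworks cited here) applies the kind of look-alike character substitution that turns ``Barack'' into ``B4r4ck'':

\begin{verbatim}
import random

# Characters and their visually similar substitutes.
SUBSTITUTIONS = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "5"}

def perturb(text, rate=0.3, seed=0):
    """Replace a fraction of substitutable characters with look-alikes.

    A human can still read the result, but a model matching exact
    tokens may no longer recognize the original words.
    """
    rng = random.Random(seed)
    return "".join(
        SUBSTITUTIONS[ch.lower()]
        if ch.lower() in SUBSTITUTIONS and rng.random() < rate
        else ch
        for ch in text
    )

print(perturb("Barack Obama announced new airport regulations"))
\end{verbatim}

Real attack recipes differ mainly in how they choose \emph{which} characters or words to change: instead of a fixed random rate, they typically query the target model and greedily pick the perturbations that reduce its confidence the most.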
\subsection{Regulating disinformation with artificial intelligence} According to the study ``Regulating disinformation with artificial intelligence'' \cite{euregulation}, disinformation is defined as `false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit'. This definition is based on the ``Final report of the High Level Expert Group on Fake News and Online Disinformation'' \cite{expert_group}, which additionally specifies that the term ``disinformation'' does not include fundamentally illegal content such as hate speech or incitement to violence. Nor does it include misinformation that is clearly not misleading, such as satire or parody. As a further delimitation, the paper defines the term ``misinformation'', which refers to any information that is unintentionally or accidentally false or inaccurate. Marsden and Meyer explain the causes of disinformation in the online context and the responses that have been formulated from a technological perspective. Furthermore, they analyze the impact of AI-disinformation initiatives on freedom of expression, media pluralism and democracy. The issue of disinformation is a long-standing historical problem of human society; it has only gained a global effect through automation, the internet and new technologies. Even in the past, people faced the challenges of filtering out false information or illegal content from newly emerging media such as newspapers, radio or television. On the one hand, there is currently a desire within the European Commission to take action against illegal and unwanted content through online intermediaries. On the other hand, the intervention and regulation of content on the internet is also viewed critically, because it runs against the basic concept of the internet with its freedom of expression. However, to stop the global spread of disinformation, a restriction of freedom of expression is necessary. Accordingly, for measures to filter fake news, there are three principles that must be satisfied when restricting freedom of expression \cite{euregulation}: \begin{itemize} \item Measures must be established by law \item Measures must be legitimate and shown to be necessary \item Measures must be the least restrictive method of pursuing the goal \end{itemize} There are two different ways to detect and remove disinformation: human moderation and technical moderation by AI. Over time, AI solutions have become increasingly effective in detecting and removing illegal or unwanted content, but they also raise the question of who is the judge in deciding what is legal or illegal and what is wanted or unwanted in society. The problem here is that neither law nor technology can be truly neutral. Both reflect the values and priorities of those who designed them. This principle is called ``garbage in -- garbage out''. According to expert opinions, AI can help to make an initial filtering, especially for texts and articles, which is then checked by humans. If the AI is wrong, there is always the possibility to reverse the decision. For this checking and removal, companies also hire cheap subcontractors in remote countries. Policy initiatives in the past have focused on making internet intermediaries more responsible for reducing disinformation on their platforms. While actual content creators were responsible for their content in the past, now the platforms have to take more and more responsibility for the content of the individual actors.
The reactions of intermediary sites such as Facebook and YouTube are technological initiatives that identify certain content and remove it in different ways or do not publish it at all. The three most popular initiatives are filtering, blocking and deprioritisation: \textbf{Filtering} is the most effective method. There are two different types of filtering: ex ante and ex post, that is, filtering before content goes online and filtering after it has already been published. With the exception of obviously illegal content (e.g. child abuse images or terrorist content) or disruptive content (e.g. spam, viruses), ex-post removal is preferable. The reason for this is that there is better legal justification for it. One example of filtering is YouTube Content ID. With YouTube Content ID, uploaded files are matched against databases of works provided by copyright holders. If a match is found, copyright owners can decide whether to block, monetise or track a video containing their work. \textbf{Blocking} is probably the most widespread method. It is used by users, email providers, search engines, social media platforms and network and internet access providers. Similar to filtering, blocking can take place ex ante or ex post, i.e. after knowledge, request or order. Blocking means that the content is not completely removed but that certain users are prevented from accessing it. For this reason, it provides one significant advantage: content can be blocked depending on the provider's terms of use or the laws of a particular region. For example, content may be blocked in certain places but available in others, e.g. if some countries allow certain content by law while others do not. The last option is \textbf{deprioritisation}. In the context of disinformation, deprioritisation means that content is de-emphasized in users' feeds. This takes place when correct content from certain providers is displayed side by side with incorrect content. The wrong information is then identified and deprioritised so that it is displayed further down and not as prominently anymore. \section{Introduction} The spreading of disinformation throughout the web has become a serious problem for a democratic society. The dissemination of fake news has become a profitable business and a common practice among politicians and content producers. A recent study entitled ``Regulating disinformation with artificial intelligence'' \cite{euregulation} examines the trade-offs involved in using automated technology to limit the spread of disinformation online. Based on this study, this paper discusses the social and technical problems that Automatic Content Moderation (ACM) poses to freedom of expression. \\ Although AI and Natural Language Generation have evolved tremendously in the last decade, there are still concerns regarding the potential implications of automatically using AI to moderate content. One problem is that automatic moderation of content on social networks will accelerate a race in which AI will be created to counter-attack AI. \\ Adversarial machine learning is a technique that attempts to fool models by exploiting vulnerabilities and compromising the results. For example, by changing particular words - e.g., from ``Barack'' to ``b4r4ck'' - it is possible to mislead classifiers and bypass automatic detection filters. Recent works \cite{zhou2019fake} show that state-of-the-art machine learning models are vulnerable to these attacks.
This study relies on state-of-the-art attack techniques to dive deep into the vulnerabilities of fake news detection. The goal is to experiment with adversarial attacks to discover and quantify the vulnerabilities of fake news classifiers. Therefore, this work aims to answer the following research question: \emph{How vulnerable is fake news detection to adversarial attacks?} The remainder of this paper is organized as follows. Section \ref{sec:relatedwork} discusses the background and previous related works. Section \ref{sec:experimentdesign} describes the design of our experiments, and Section \ref{sec:results} presents the results and discussion. Section \ref{sec:conclusion} summarizes our conclusions and presents future research directions. \section{Experiment Design} \label{sec:experimentdesign} To understand the vulnerability of models trained to detect fake news, we split our experiment into two steps: first, training state-of-the-art machine learning models using the dataset described in Section \ref{sec:dataset}, and second, applying adversarial attacks on the dataset using TextAttack to manipulate the trained models into classifying fake news as TRUE news. \subsection{Fake News Detection as a Classification Problem} Two types of classifiers were used: \textbf{Bin}, a binary classifier, in which the positive class represents true news and the negative class news that is not true, and \textbf{Mult}, a multiclass classifier, where each document is classified as either true news, fake news, mixed or other. \subsection{Pre-trained Models} In this subsection, we present the applied pre-trained models and BERT \cite{bert}, on which two of the models are based. In this context, a token describes a single word of a given text and a segment describes a sequence of tokens. The trained models will later be attacked by the TextAttack recipes described in Section \ref{sec:textattack}. \subsubsection{BERT} The BERT (Bidirectional Encoder Representations from Transformers) model was originally trained on two datasets: \textit{BookCorpus} \cite{bookcorpus} and \textit{English Wikipedia}. \\ BERT analyzes text by taking concatenations of two segments. The total length of the concatenation is bounded by a parameter $T$, and the concatenation is processed as one input with special tokens in between describing the \textit{beginning} and \textit{end} of the sequence, as well as the separation point of the two segments. \\ During pretraining, BERT applies a \textit{Masked Language Model} (MLM) and a \textit{Next Sentence Prediction} (NSP). For MLM, BERT selects 12\% of the input tokens and replaces them with a \textit{[MASK]} token and another 1.5\% with a random vocabulary token. It then proceeds with predicting the selected tokens. With NSP, BERT makes a binary prediction on whether two segments in a text are adjacent. \subsubsection{RoBERTa} The pre-trained RoBERTa (Robustly optimized BERT approach) \cite{roberta} is an optimized version of the original BERT model. The authors use a total concatenation length of $T=512$ (longer than previously used) and expand the used datasets to: \textit{BookCorpus}, \textit{CC-News} \cite{ccnews}, \textit{OpenWebText} \cite{openwebtext} and \textit{Stories} \cite{storiesdataset}. Furthermore, RoBERTa trains using a bigger batch size, removes NSP and changes the masking for MLM each time a new input is given to the model, thereby avoiding constant masking across training epochs.
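To indicate how such a pre-trained model can be turned into a fake news classifier, the following minimal sketch loads RoBERTa with a binary classification head via the HuggingFace \texttt{transformers} library. This is one common way to do it; the checkpoint name and label mapping below are illustrative assumptions, not the exact setup of our experiments:

\begin{verbatim}
import torch
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer)

# Pre-trained RoBERTa encoder plus a fresh 2-way classification head
# (assumed label mapping: 0 = FALSE, 1 = TRUE).
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)

inputs = tokenizer("Hollywood action star reported dead.",
                   return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
# Class probabilities; meaningful only after fine-tuning on the
# TRUE/FALSE sentences described in Section 2 (sec:dataset).
print(logits.softmax(dim=-1))
\end{verbatim}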
\subsubsection{BERTweet} BERTweet \cite{bertweet} is a pre-trained model that uses elements from both BERT and RoBERTa. For instance, its architecture is taken from BERT, while the pre-training procedure is ``copied'' from RoBERTa. The main difference is the training data: the only dataset used consists of 850M English tweets, together amounting to 80GB of data. \subsubsection{Flair Embeddings} The main difference between Flair Embeddings \cite{flairembeddings} and the other models is the way they capture words: each token is treated as a sequence of characters. Furthermore, Flair Embeddings, also called contextual string embeddings, take the context into consideration, i.e. the words that appear before and after a given token. A word is therefore embedded depending on the sequence of words it appears in. For our training we use the embeddings \textit{news-forward} and \textit{news-backward}, which were both trained on a 1 billion word corpus. \subsection{Parameterization} The training of each model runs for $E=15$ epochs, since with the small number of training samples all models start converging after approximately 7--8 epochs. The mini batch size is set to $b=32$. For BERTweet and RoBERTa we set the learning rate to $lr=3\times10^{-5}$, while for Flair Embeddings to $lr=0.1$. Finally, for Flair Embeddings we also set the anneal factor (the factor by which the learning rate is annealed) to $a_f=0.5$ and the patience (the number of epochs without improvement before the learning rate is annealed) to $p=5$. \noindent $tp$ is the number of positive instances correctly classified as positive, $tn$ the number of negative instances correctly classified as negative, $fp$ the number of negative instances wrongly classified as positive, and $fn$ the number of positive instances wrongly classified as negative. We defined positive instances as fake news websites and negative instances as reliable news websites. \subsection{Text Attack Recipes} \label{sec:textattack} The following attack recipes from TextAttack \cite{textattack} were applied to the dataset (described in Section \ref{sec:dataset}) to manipulate the three introduced models into classifying FALSE statements as TRUE, thereby misinterpreting fake news as true news. A usage sketch is given at the end of this section. \begin{itemize} \item \textbf{DeepWordBug}: Generates small text perturbations in a black-box setting. It uses different types of character transformations (swapping, substituting, deleting and inserting) with greedy replace-1 scoring \cite{gao2018blackbox}. \item \textbf{Pruthi}: Simulates common typos, with a focus on the QWERTY keyboard. Uses character insertion, deletion and swapping \cite{pruthi2019combating}. \item \textbf{TextBugger}: These attacks were optimized to perform with real world applications. They use space insertions, character deletion and swapping. Additionally, they substitute characters with similar looking letters (e.g. o with 0) and replace words with their top nearest neighbor in a context-aware word vector space \cite{Li_2019}. \item \textbf{PSOZang}: Word level attack using a sememe-based word substitution strategy as well as particle swarm optimization \cite{Zang_2020}. \item \textbf{PWWSRen2019}: These attacks focus on maintaining lexical correctness, grammatical correctness as well as semantic similarity by using synonym swaps. Words to swap are prioritized by a combination of their saliency score and maximum word-swap effectiveness \cite{ren-etal-2019-generating}. \item \textbf{TextFoolerJin2019}: Word swap with their 50 closest embedding nearest neighbors.
Optimized on BERT \cite{jin2020bert}. \item \textbf{IGAWang}: Implemented as an adversarial attack defense method. Uses counter-fitted word embedding swap \cite{wang2021natural}. \item \textbf{BAEGarg2019}: Uses a BERT masked language model transformation. It uses the language model for token replacement to best fit the overall context \cite{garg2020bae}. \item \textbf{CheckList2020}: Inspired by the principles of behavioral testing. Uses changes in names, numbers and locations as well as contraction and extension \cite{ribeiro2020accuracy}. \item \textbf{InputReductionFeng}: This attack concentrates on the least important words in a sentence. It iteratively removes the word with the lowest importance value until the model changes its prediction. The importance is measured by the change in confidence of the original prediction when removing the word from the original sentence \cite{feng2018pathologies}. \end{itemize} With the exception of \textit{InputReductionFeng}, each recipe has three possible results for the attack on each sentence. \textit{Success} means the text attack resulted in a wrong classification. \textit{Skipped} means that the model classified the sentence wrongly to begin with, so the sentence does not need to be manipulated. \textit{Fail} means the model still classified the sentence correctly. \textit{InputReductionFeng} uses \textit{Maximized} to indicate that the model's uncertainty was maximized: the reduced, rubbish input is assigned its original label with higher confidence than the original valid input. \textit{Skipped} is used when the model classified the sentence wrongly to begin with, so the sentence does not need to be manipulated.
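The promised usage sketch: the recipes above can be run through TextAttack's Python API roughly as follows. This follows the pattern in the TextAttack documentation (v0.3.x); the checkpoint name is a stand-in, since the models fine-tuned for this study are not published under any such name.

\begin{verbatim}
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import DeepWordBugGao2018
from textattack.datasets import Dataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Hypothetical stand-in checkpoint for a fine-tuned news classifier.
name = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build one of the recipes from the list above and attack a toy dataset.
attack = DeepWordBugGao2018.build(wrapper)
dataset = Dataset([("this report is accurate", 1),
                   ("aliens endorsed the candidate", 0)])
Attacker(attack, dataset, AttackArgs(num_examples=2)).attack_dataset()
# TextAttack logs one result per example: Success / Fail / Skipped.
\end{verbatim}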
\section{Introduction and definitions} Arrowhead matrices and diagonal-plus-rank-one matrices arise in many applications, and computations with such matrices are part of many important linear algebra algorithms (for details see \cite{GV96, NEG14, JSB15}). We give unified formulas for matrix-vector multiplications, determinants, and inverses for both types of matrices having elements in commutative and noncommutative associative algebras. Our results complement and extend the existing results in the literature. All matrices are in $\mathbb{F}^{n\times n}$ where $\mathbb{F}\in\{\mathbb{R},\mathbb{C},\mathbb{H}\}$, where $\mathbb{H}$ is the non-commutative field of quaternions. Each formula requires $O(n)$ arithmetic operations. In Section 1, we give basic definitions. In Section 2, we give formulas for the multiplication of vectors with both types of matrices. In Section 3, we give formulas for determinants, and in Section 4, we give formulas for inverses of both types of matrices. In Section 5, we explain how the formulas apply to block matrices. Discussion and conclusions are in Section 6. \subsection{Quaternions} {\em Quaternions} are a non-commutative associative number system that extends complex numbers, introduced by Hamilton \cite{Ham53, Ham66}. For {\em basic quaternions} $\mathbf {i}$, $\mathbf {j}$, and $\mathbf {k}$, the quaternions have the form $$q=a+b\ \mathbf {i} +c\ \mathbf {j} +d\ \mathbf {k},\quad a,b,c,d \in \mathbb{R}.$$ The {\em multiplication table} of basic quaternions is the following: $$\begin{array}{c|ccc} \times & \mathbf {i} & \mathbf {j} & \mathbf {k} \\ \hline \mathbf {i} & -1 & \mathbf {k} & -\mathbf {j} \\ \mathbf {j} & -\mathbf {k} & -1 & \mathbf {i} \\ \mathbf {k} & \mathbf {j} & -\mathbf {i} & -1 \end{array}$$ {\em Conjugation} is given by $$\bar q=a-b\ \mathbf {i} -c\ \mathbf {j} -d\ \mathbf {k}.$$ Then, $$\bar q q=q\bar q=|q|^2=a^2+b^2+c^2+d^2.$$ Let $f(x)$ be a complex analytic function. The value $f(q)$, where $q\in\mathbb{H}$, is computed by evaluating the extension of $f$ to the quaternions at $q$ (see \cite{Sud79}). All of the above is implemented in the Julia \cite{Julia} package Quaternions.jl \cite{Quaternions}. Quaternions are {\em homomorphic} to $\mathbb{C}^{2\times 2}$: $$ q\to \begin{bmatrix}a+b\,\mathbf{i} & c+d\, \mathbf{i}\\-c+d\, \mathbf{i} & a-b\, \mathbf{i}\end{bmatrix}\equiv C(q),$$ with eigenvalues $q_s=a+\mathbf{i}\sqrt{b^2+c^2+d^2}$ and $\bar q_s$. \subsection{Arrowhead and DPR1 matrices} Let $\star$ denote the transpose of a real matrix, the conjugate transpose (adjoint) of a complex or quaternionic matrix, and the conjugate of a scalar. The {\em arrowhead matrix} (Arrow) is a matrix of the form $$ A=\begin{bmatrix} D & u \\v^\star & \alpha \end{bmatrix}\equiv \operatorname{Arrow}(D,u,v,\alpha),$$ where $D$ is a diagonal matrix with diagonal elements $d_i\equiv D_{ii}$, $\operatorname{diag}(D),u,v \in\mathbb{F}^{n-1}$, and $\alpha\in\mathbb{F}$, or any symmetric permutation of such a matrix. The {\em diagonal-plus-rank-one matrix} (DPR1) is a matrix of the form $$A=\Delta+x \rho y^\star\equiv \operatorname{DPR1}(\Delta,x,y,\rho),$$ where $\Delta$ is a diagonal matrix with diagonal elements $\delta_i=\Delta_{ii}$, $\operatorname{diag}(\Delta),x,y \in\mathbb{F}^{n}$, and $\rho \in \mathbb{F}$. \section{Matrix-vector multiplication} \begin{lemma} \label{lemma0} Let $A=\operatorname{Arrow}(D,u,v,\alpha)$ be an arrowhead matrix with the tip at position $A_{ii}=\alpha$, and let $z$ be a vector.
Then $w=Az$, where $$ \begin{aligned} w_j&=d_jz_j+u_jz_i, \quad j=1,2,\cdots,i-1,\\ w_i&=v_{1:i-1}^\star z_{1:i-1} +\alpha z_i + v_{i:n-1}^\star z_{i+1:n}, \\ w_j&=u_{j-1}z_i+d_{j-1}z_j,\quad j=i+1,i+2,\cdots,n. \end{aligned}$$ Further, let $A=\operatorname{DPR1}(\Delta,x,y,\rho)$ be a DPR1 matrix and let $\beta=\rho(y^\star z) \equiv \rho (y\cdot z)$. Then $w=Az$, where $$ w_i=\delta_i z_i+x_i\beta,\quad i=1,2,\cdots,n.$$ \end{lemma} \proof The formulas follow by direct multiplication. \qed \section{Determinants} Determinants are computed using two basic facts: the determinant of a triangular matrix is the product of its diagonal elements taken in order from the first to the last, and the determinant of a product is the product of the determinants. If $\mathbb{F}\in \{\mathbb{R},\mathbb{C}\}$, we have the following lemmas. \begin{lemma} \label{lemma1} Let $A=\operatorname{Arrow}(D,u,v,\alpha)$ be an arrowhead matrix. If all $d_i\neq 0$, the determinant of $A$ is equal to \begin{equation} \det(A)= (\prod_i d_i)(\alpha-v^\star D^{-1}u). \label{da1} \end{equation} If $d_i=0$ for some $i$, then \begin{equation} \label{da2} \det(A)=-\bigg(\prod_{j=1}^{i-1}d_j\bigg)\cdot v_i^\star \cdot \bigg(\prod_{j=i+1}^{n-1}d_j\bigg)\cdot u_i. \end{equation} \end{lemma} \proof The proof is modeled after Proposition 2.8.3, Fact 2.14.2 and Fact 2.16.2 from \cite{Ber09}. The formula (\ref{da1}) follows from the factorization $$ \begin{bmatrix} D & u \\ v^\star & \alpha \end{bmatrix} =\begin{bmatrix} D & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} I & 0 \\ v^\star & 1 \end{bmatrix} \begin{bmatrix} I & D^{-1} u \\ 0 & \alpha - v^\star D^{-1} u \end{bmatrix}. $$ The formula (\ref{da2}) is the only non-zero term in the standard expansion of the determinant of the matrix $$ \begin{bmatrix}D_1 & 0 & 0 & u_1 \\ 0 & 0 & 0 & u_i \\ 0 & 0 & D_2 & u_2 \\ v_1^\star & v_i^\star & v_2^\star & \alpha \end{bmatrix}.$$ \qed \begin{lemma} \label{lemma2} Let $A=\operatorname{DPR1}(\Delta,x,y,\rho)$ be a DPR1 matrix. If all $\delta_i\neq 0$, the determinant of $A$ is equal to \begin{equation} \label{dd1} \det(A)=(\prod_i \delta_i)(1+y^\star \Delta^{-1}x\rho). \end{equation} If $\delta_i=0$ for some $i$, then \begin{equation} \label{dd2} \det(A)=(\prod_{j=1}^{i-1} \delta_j) y_i^\star (\prod_{j=i+1}^n \delta_j)x_i \rho. \end{equation} \end{lemma} \proof The proof is modeled after Fact 12.16.3 and Fact 12.16.4 from \cite{Ber09}. The formula (\ref{dd1}) follows from the factorizations $$ \Delta+x\rho y^\star=\Delta (I+\Delta^{-1}x\rho y^\star)$$ and $$ \begin{bmatrix} I & 0\\ d^\star & 1 \end{bmatrix} \begin{bmatrix} I+cd^\star & c \\ 0 & 1\end{bmatrix} \begin{bmatrix} I & 0 \\ -d^\star & 1\end{bmatrix}= \begin{bmatrix} I & c \\ 0 & 1+d^\star c\end{bmatrix}.$$ The formula (\ref{dd2}) follows from Fact 2.14.2 from \cite{Ber09}. \qed \subsection{Matrices of quaternions} The determinant of a matrix of quaternions can be defined using the determinant of its corresponding homomorphic complex matrix. For matrices of quaternions, the determinant itself is not well defined due to non-commutativity; however, the Study determinant, $|\det(A)|$, is well defined (see \cite{Asl96}). Therefore, after computing the respective determinant using the formulas from Lemma \ref{lemma1} and Lemma \ref{lemma2}, it suffices to compute the absolute value.
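The following NumPy sketch illustrates the formulas above in the real, scalar case with the tip at the last position (so the permutation in the inverse formula of the next section reduces to the identity); it assumes all $d_i\neq0$ and is only an illustration. The authors' reference implementation is the Julia code cited in Section 6.

\begin{verbatim}
import numpy as np

def arrow_matvec(d, u, v, alpha, z):
    """w = A z for A = [[diag(d), u], [v^T, alpha]] (multiplication lemma)."""
    w = np.empty_like(z)
    w[:-1] = d * z[:-1] + u * z[-1]
    w[-1] = v @ z[:-1] + alpha * z[-1]
    return w

def arrow_det(d, u, v, alpha):
    """det(A) = (prod d_i) (alpha - v^T D^{-1} u), all d_i nonzero."""
    return np.prod(d) * (alpha - v @ (u / d))

def arrow_inv_as_dpr1(d, u, v, alpha):
    """A^{-1} = Delta + x rho y^T, the DPR1 form of the inverse
    (derived in the next section)."""
    Delta = np.diag(np.append(1.0 / d, 0.0))
    x = np.append(u / d, -1.0)
    y = np.append(v / d, -1.0)
    rho = 1.0 / (alpha - v @ (u / d))
    return Delta + rho * np.outer(x, y)

rng = np.random.default_rng(1)
n = 6
d, u, v = rng.standard_normal((3, n - 1))
alpha, z = rng.standard_normal(), rng.standard_normal(n)

A = np.zeros((n, n))          # dense A, built only for verification
A[:-1, :-1] = np.diag(d)
A[:-1, -1] = u
A[-1, :-1] = v
A[-1, -1] = alpha

assert np.allclose(arrow_matvec(d, u, v, alpha, z), A @ z)
assert np.isclose(arrow_det(d, u, v, alpha), np.linalg.det(A))
assert np.allclose(arrow_inv_as_dpr1(d, u, v, alpha), np.linalg.inv(A))
\end{verbatim}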
\section{Inverses} \begin{Lemma} \label{lemma3} Let $A=\operatorname{Arrow}(D,u,v,\alpha)$ be a non-singular arrowhead matrix with the tip at position $A_{ii}=\alpha$ and let $P$ be the permutation matrix of the permutation $p=(1,2,\cdots,i-1,n,i,i+1,\cdots,n-1)$. If all $d_j\neq 0$, the inverse of $A$ is a DPR1 matrix \begin{equation} \label{ia1} A^{-1} =\operatorname{DPR1}(\Delta,x,y,\rho)=\Delta+x\rho y^\star, \end{equation} where $$ \Delta=P\begin{bmatrix}D^{-1} & 0\\ 0 & 0\end{bmatrix}P^T, \ x=P\begin{bmatrix}D^{-1}u \\-1\end{bmatrix},\ y=P\begin{bmatrix}D^{-\star}v \\-1\end{bmatrix},\ \rho=(\alpha-v^\star D^{-1} u)^{-1}. $$ If $d_j=0$ for some $j$, the inverse of $A$ is an arrowhead matrix with the tip of the arrow at position $(j,j)$ and zero at position $A_{ii}$ (the tip and the zero on the shaft change places). In particular, let $\hat P$ be the permutation matrix of the permutation $\hat p=(1,2,\cdots,j-1,n,j,j+1,\cdots,n-1)$. Partition $D$, $u$ and $v$ as $$ D=\begin{bmatrix}D_1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & D_2\end{bmatrix},\quad u=\begin{bmatrix} u_1 \\ u_j \\u_2\end{bmatrix},\quad v=\begin{bmatrix} v_1 \\ v_j \\v_2\end{bmatrix}. $$ Then \begin{equation} \label{ia2} A^{-1}=\hat P\begin{bmatrix} \hat D & \hat u\\ \hat v^\star & \hat \alpha \end{bmatrix}\hat P^T, \end{equation} where \begin{align*} \hat D&=\begin{bmatrix}D_1^{-1} & 0 & 0 \\ 0 & D_2^{-1} & 0 \\ 0 & 0 & 0\end{bmatrix},\quad \hat u= \begin{bmatrix}-D_1^{-1}u_1 \\ -D_2^{-1}u_2\\ 1 \end{bmatrix} u_j^{-1},\quad \hat v= \begin{bmatrix}-D_1^{-\star}v_1 \\ -D_2^{-\star}v_2\\ 1\end{bmatrix}v_j^{-1},\\ \hat \alpha&=v_j^{-\star}\left(-\alpha +v_1^\star D_1^{-1} u_1+v_2^\star D_2^{-1}u_2\right) u_j^{-1}. \end{align*} \end{Lemma} \proof The formula (\ref{ia1}) follows by multiplication. The formula (\ref{ia2}) is similar to the one from Section 2 of \cite{JSB15}. \qed \begin{Lemma} \label{lemma4} Let $A=\operatorname{DPR1}(\Delta,x,y,\rho)$ be a non-singular DPR1 matrix. If all $\delta_j\neq 0$, the inverse of $A$ is a DPR1 matrix \begin{equation} \label{id1} A^{-1} =\operatorname{DPR1}(\hat \Delta,\hat x,\hat y,\hat \rho)=\hat\Delta+\hat x\hat \rho \hat y^\star, \end{equation} where $$ \hat \Delta=\Delta^{-1},\quad \hat x=\Delta^{-1}x,\quad \hat y=\Delta^{-\star}y,\quad \hat \rho=-\rho(I+y^\star \Delta^{-1} x\rho)^{-1}. $$ If $\delta_j=0$ for some $j$, the inverse of $A$ is an arrowhead matrix with the tip of the arrow at position $(j,j)$. In particular, let $P$ be the permutation matrix of the permutation $p=(1,2,\cdots,j-1,n,j,j+1,\cdots,n-1)$. Partition $\Delta$, $x$ and $y$ as $$ \Delta=\begin{bmatrix}\Delta_1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \Delta_2\end{bmatrix},\quad x=\begin{bmatrix} x_1 \\ x_j \\x_2\end{bmatrix},\quad y=\begin{bmatrix} y_1 \\ y_j \\y_2\end{bmatrix}. $$ Then, \begin{equation} \label{id2} A^{-1} =P\begin{bmatrix} D & u \\ v^\star & \alpha \end{bmatrix}P^T, \end{equation} where \begin{align*} D&=\begin{bmatrix} \Delta_1^{-1} & 0\\ 0 &\Delta_2^{-1}\end{bmatrix},\quad u= \begin{bmatrix}-\Delta_1^{-1}x_1 \\ -\Delta_2^{-1}x_2\end{bmatrix} x_j^{-1},\quad v= \begin{bmatrix}-\Delta_1^{-\star}y_1 \\ -\Delta_2^{-\star}y_2\end{bmatrix}y_j^{-1},\\ \alpha&=y_j^{-\star}\left(\rho^{-1} +y_1^\star \Delta_1^{-1} x_1+y_2^\star \Delta_2^{-1}x_2\right) x_j^{-1}. \end{align*} \end{Lemma} \proof The formula (\ref{id1}) follows from Fact 2.16.3 of \cite{Ber09}. The formula (\ref{id2}) is similar to the one from Section 2 of \cite{JSB15b}.
\qed \section{Block matrices} If the elements of the matrices are themselves matrices in $\mathbb{F}^{k\times k}$ where $\mathbb{F}\in\{\mathbb{R},\mathbb{C},\mathbb{H}\}$, the above formulas can be used as follows. If, in Lemma \ref{lemma0}, the elements of the vector $z$ satisfy $z_i\in\mathbb{F}^{k\times k}$, $i=1,\ldots,n$, the formulas hold directly and yield a block vector $w$. The formulas (\ref{da1})-(\ref{dd2}) from Lemmas \ref{lemma1} and \ref{lemma2} return a matrix in $\mathbb{F}^{k\times k}$, so one more step is required -- computing the determinant of this matrix. The formulas for inverses (\ref{ia1})-(\ref{id2}) from Lemmas \ref{lemma3} and \ref{lemma4} return the corresponding block matrices, provided all inverses within the formulas are well defined. It is clearly possible that a block arrowhead or DPR1 matrix is non-singular even if some of the blocks involved in the formulas (\ref{ia1})-(\ref{id2}) are singular. However, in such cases, the respective inverses do not have the structure required by the formulas, so the formulas cannot be applied. \section{Discussion and conclusion} We have derived formulas for basic matrix functions of arrowhead and diagonal-plus-rank-one matrices whose elements are real numbers, complex numbers, or quaternions. Each formula requires $O(n)$ arithmetic operations, so they are optimal. Each formula is unified in the sense that the same formula is used for any type of matrix element, including block matrices. When the formulas are applied to matrices of quaternions or block matrices, due to non-commutativity, the operations must be executed exactly in the order specified. Our results complement and extend the existing results in the literature, like \cite{NEG14, Ber09, GV96, Asl96}. The code, written in the programming language Julia \cite{Julia}, along with examples, is available on GitHub \cite{Inverses}. The code relies on Julia's polymorphism (or multiple dispatch) feature. \section*{Funding} This work has been fully supported by the Croatian Science Foundation under the project IP-2020-02-2240. \section{Introduction} \file{elsarticle.cls} is a thoroughly re-written document class for formatting \LaTeX{} submissions to Elsevier journals. The class uses the environments and commands defined in the \LaTeX{} kernel without any change in the signature so that clashes with other contributed \LaTeX{} packages such as \file{hyperref.sty}, \file{preview-latex.sty}, etc., will be minimal. \file{elsarticle.cls} is primarily built upon the default \file{article.cls}. This class depends on the following packages for its proper functioning: \begin{enumerate} \item \file{natbib.sty} for citation processing; \item \file{geometry.sty} for margin settings; \item \file{fleqn.clo} for left aligned equations; \item \file{graphicx.sty} for graphics inclusion; \item \file{txfonts.sty} optional font package, if the document is to be formatted with Times and compatible math fonts; \item \file{hyperref.sty} optional package if hyperlinking is required in the document; \item \file{endfloat.sty} optional package if floats are to be placed at the end of the PDF. \end{enumerate} All the above packages (except some optional packages) are part of any standard \LaTeX{} installation. Therefore, users need not be bothered about downloading any extra packages. Furthermore, users are free to make use of \textsc{ams} math packages such as \file{amsmath.sty}, \file{amsthm.sty}, \file{amssymb.sty}, \file{amsfonts.sty}, etc., if they want to.
All these packages work in tandem with \file{elsarticle.cls} without any problems. \section{Major Differences} The following are the major differences between \file{elsarticle.cls} and its predecessor package, \file{elsart.cls}: \begin{enumerate}[\textbullet] \item \file{elsarticle.cls} is built upon \file{article.cls} while \file{elsart.cls} is not. \file{elsart.cls} redefines many of the commands in the \LaTeX{} classes/kernel, which can possibly cause surprising clashes with other contributed \LaTeX{} packages; \item it provides preprint document formatting by default, and optionally formats the document as per the final style of models $1+$, $3+$ and $5+$ of Elsevier journals; \item some easier ways of formatting \verb+list+ and \verb+theorem+ environments are provided, while people can still use the \file{amsthm.sty} package; \item \file{natbib.sty} is the main citation processing package, which can comprehensively handle all kinds of citations and works perfectly with \file{hyperref.sty} in combination with \file{hypernat.sty}; \item long title pages are processed correctly in preprint and final formats. \end{enumerate} \section{Installation} The package is available at the author resources page at Elsevier (\url{http://www.elsevier.com/locate/latex}). It can also be found in any of the nodes of the Comprehensive \TeX{} Archive Network (\textsc{ctan}), one of the primary nodes being \url{http://tug.ctan.org/tex-archive/macros/latex/contrib/elsarticle/}. Please download \file{elsarticle.dtx}, which is a composite class with documentation, and \file{elsarticle.ins}, which is the \LaTeX{} installer file. Compiling \file{elsarticle.ins} with \LaTeX{} produces the class file, \file{elsarticle.cls}, by stripping off all the documentation from the \verb+*.dtx+ file. The class may be moved or copied to a place, usually \verb+$TEXMF/tex/latex/elsevier/+, or a folder which will be read by \LaTeX{} during document compilation. The \TeX{} file database needs to be updated after moving/copying the class file, usually with commands like \verb+mktexlsr+ or \verb+texhash+, depending upon the distribution and operating system. \section{Usage}\label{sec:usage} The class should be loaded with the command: \begin{vquote} \documentclass[<options>]{elsarticle} \end{vquote} \noindent where the \verb+options+ can be the following: \begin{description} \item [{\tt\color{verbcolor} preprint}] the default option, which formats the document for submission to Elsevier journals. \item [{\tt\color{verbcolor} review}] similar to the \verb+preprint+ option, but increases the baselineskip to facilitate an easier review process. \item [{\tt\color{verbcolor} 1p}] formats the article to the look and feel of the final format of model 1+ journals. This is always single column style. \item [{\tt\color{verbcolor} 3p}] formats the article to the look and feel of the final format of model 3+ journals. If the journal is a two column model, use the \verb+twocolumn+ option in combination. \item [{\tt\color{verbcolor} 5p}] formats for model 5+ journals. This is always two column style. \item [{\tt\color{verbcolor} authoryear}] author-year citation style of \file{natbib.sty}. If you want to add extra options of \file{natbib.sty}, you may use the options as comma delimited strings as arguments to the \verb+\biboptions+ command. An example would be: \end{description} \begin{vquote} \biboptions{longnamesfirst,angle,semicolon} \end{vquote} \begin{description} \item [{\tt\color{verbcolor} number}] numbered citation style.
Extra options can be loaded with the\linebreak \verb+\biboptions+ command. \item [{\tt\color{verbcolor} sort\&compress}] sorts and compresses the numbered citations. For example, citation [1,2,3] will become [1--3]. \item [{\tt\color{verbcolor} longtitle}] if the front matter is unusually long, use this option to split the title page across pages with the correct placement of title and author footnotes on the first page. \item [{\tt\color{verbcolor} times}] loads \file{txfonts.sty}, if available in the system, to use Times and compatible math fonts. \item [{\tt\color{verbcolor} reversenotenum}] Use alphabets as author--affiliation linking labels and numbers for author footnotes. By default, numbers will be used as author--affiliation linking labels and alphabets for author footnotes. \item [{\tt\color{verbcolor} lefttitle}] To move the title and author/affiliation block to flushleft. \verb+centertitle+ is the default option, which produces center alignment. \item [{\tt\color{verbcolor} endfloat}] To place all floats at the end of the document. \item [{\tt\color{verbcolor} nonatbib}] To unload \file{natbib.sty}. \item [{\tt\color{verbcolor} doubleblind}] To hide the author name, affiliation, email address etc. for double blind refereeing purposes. \item[] All options of \file{article.cls} can be used with this document class. \item[] The default options loaded are \verb+a4paper+, \verb+10pt+, \verb+oneside+, \verb+onecolumn+ and \verb+preprint+. \end{description} \section{Frontmatter} There are two types of frontmatter coding: \begin{enumerate}[(1)] \item each author is connected to an affiliation with a footnote marker; hence all authors are grouped together and affiliations follow; \pagebreak \item authors of the same affiliation are grouped together and the relevant affiliation follows this group. \end{enumerate} An example of coding the first type is provided below. \begin{vquote} \title{This is a specimen title\tnoteref{t1,t2}} \tnotetext[t1]{This document is the result of the research project funded by the National Science Foundation.} \tnotetext[t2]{The second title footnote which is a longer text matter to fill through the whole text width and overflow into another line in the footnotes area of the first page.} \end{vquote} \begin{vquote} \author[1]{Jos Migchielsen\corref{cor1}% \fnref{fn1}} \ead{[email protected]} \author[2]{CV Radhakrishnan\fnref{fn2}} \ead{[email protected]} \author[3]{CV Rajagopal\fnref{fn1,fn3}} \ead[url]{www.stmdocs.in} \cortext[cor1]{Corresponding author} \fntext[fn1]{This is the first author footnote.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. But this footnote is not yet sufficiently long enough to make two lines of footnote text.} \fntext[fn3]{Yet another author footnote.} \affiliation[1]{organization={Elsevier B.V.}, addressline={Radarweg 29}, postcode={1043 NX}, city={Amsterdam}, country={The Netherlands}} \affiliation[2]{organization={Sayahna Foundation}, addressline={JWRA 34, Jagathy}, city={Trivandrum}, postcode={695014}, country={India}} \end{vquote} \begin{vquote} \affiliation[3]{organization={STM Document Engineering Pvt Ltd.}, addressline={Mepukada, Malayinkil}, city={Trivandrum}, postcode={695571}, country={India}} \end{vquote} The output of the above \TeX{} source is given in Clips~\ref{clip1} and \ref{clip2}. The header portion or title area is given in Clip~\ref{clip1} and the footer area is given in Clip~\ref{clip2}.
\deforange{blue!70} \src{Header of the title page.} \includeclip{1}{130 612 477 707}{1psingleauthorgroup.pdf} \deforange{orange} \deforange{blue!70} \src{Footer of the title page.} \includeclip{1}{93 135 499 255}{1pseperateaug.pdf} \deforange{orange} Most of the commands such as \verb+\title+, \verb+\author+, \verb+\affiliation+ are self explanatory. Various components are linked to each other by a label--reference mechanism; for instance, the title footnote is linked to the title with a footnote mark generated by referring to the \verb+\label+ string of the \verb=\tnotetext=. We have used similar commands such as \verb=\tnoteref= (to link a title note to the title); \verb=\corref= (to link the corresponding author text to the corresponding author); \verb=\fnref= (to link footnote text to the relevant author names). \TeX{} needs two compilations to resolve the footnote marks in the preamble part. Given below is the syntax of the various note marks and note texts. \begin{vquote} \tnoteref{<label(s)>} \corref{<label(s)>} \fnref{<label(s)>} \tnotetext[<label>]{<title note text>} \cortext[<label>]{<corresponding author note text>} \fntext[<label>]{<author footnote text>} \end{vquote} \noindent where \verb=<label(s)>= can be either one or more comma delimited label strings. The optional arguments to the \verb=\author= command hold the ref label(s) of the address(es) to which the author is affiliated, while each \verb=\affiliation= command can have an optional argument of a label. In the same manner, \verb=\tnotetext=, \verb=\fntext=, \verb=\cortext= will have optional arguments as their respective labels and the note text as their mandatory argument. The following example code provides the markup of the second type of author-affiliation. \begin{vquote} \author{Jos Migchielsen\corref{cor1}% \fnref{fn1}} \ead{[email protected]} \affiliation[1]{organization={Elsevier B.V.}, addressline={Radarweg 29}, postcode={1043 NX}, city={Amsterdam}, country={The Netherlands}} \author{CV Radhakrishnan\fnref{fn2}} \ead{[email protected]} \affiliation[2]{organization={Sayahna Foundation}, addressline={JWRA 34, Jagathy}, city={Trivandrum}, postcode={695014}, country={India}} \author{CV Rajagopal\fnref{fn1,fn3}} \ead[url]{www.stmdocs.in} \affiliation[3]{organization={STM Document Engineering Pvt Ltd.}, addressline={Mepukada, Malayinkil}, city={Trivandrum}, postcode={695571}, country={India}} \end{vquote} \vspace*{-.5pc} \begin{vquote} \cortext[cor1]{Corresponding author} \fntext[fn1]{This is the first author footnote.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. But this footnote is not yet sufficiently long enough to make two lines of footnote text.} \end{vquote} The output of the above \TeX{} source is given in Clip~\ref{clip3}. \deforange{blue!70} \src{Header of the title page.} \includeclip{1}{119 563 468 709}{1pseperateaug.pdf} \deforange{orange} Clip~\ref{clip4} shows the output after giving the \verb+doubleblind+ class option. \deforange{blue!70} \src{Double blind article} \includeclip{1}{124 567 477 670}{elstest-1pdoubleblind.pdf} \deforange{orange} \vspace*{-.5pc} The frontmatter part has further environments such as abstracts and keywords. These can be marked up in the following manner: \begin{vquote} \begin{abstract} In this work we demonstrate the formation of a new type of polariton on the interface between a ....
\end{abstract} \end{vquote} \vspace*{-.5pc} \begin{vquote} \begin{keyword} quadruple exiton \sep polariton \sep WGM \end{keyword} \end{vquote} \noindent Each keyword shall be separated by a \verb+\sep+ command. \textsc{msc} classifications shall be provided in the keyword environment with the command \verb+\MSC+. \verb+\MSC+ accepts an optional argument to accommodate future revisions, e.g., \verb=\MSC[2008]=. The default is 2000.\looseness=-1 \subsection{New page} Sometimes you may need to give a page-break and start a new page after the title, author or abstract. The following commands can be used for this purpose. \begin{vquote} \newpageafter{title} \newpageafter{author} \newpageafter{abstract} \end{vquote} \begin{itemize} \leftskip-2pc \item [] {\tt\color{verbcolor} \verb+\newpageafter{title}+} typesets the title alone on one page. \item [] {\tt\color{verbcolor} \verb+\newpageafter{author}+} typesets the title and author details on one page. \item [] {\tt\color{verbcolor} \verb+\newpageafter{abstract}+} typesets the title, author details and abstract \& keywords on one page. \end{itemize} \section{Floats} {Figures} may be included using the command \verb+\includegraphics+, in combination with or without its several options to further control the graphic. \verb+\includegraphics+ is provided by \file{graphic[s,x].sty}, which is part of any standard \LaTeX{} distribution. \file{graphicx.sty} is loaded by default. \LaTeX{} accepts figures in the postscript format while pdf\LaTeX{} accepts \file{*.pdf}, \file{*.mps} (metapost), \file{*.jpg} and \file{*.png} formats. pdf\LaTeX{} does not accept graphic files in the postscript format. The \verb+table+ environment is handy for marking up tabular material. If users want to use \file{multirow.sty}, \file{array.sty}, etc., to fine-tune or enhance the tables, they are welcome to load any package of their choice and \file{elsarticle.cls} will work in combination with all loaded packages. \section[Theorem and ...]{Theorem and theorem like environments} \file{elsarticle.cls} provides a few shortcuts to format theorems and theorem-like environments with ease. In all commands, the options that are used with the \verb+\newtheorem+ command will work exactly in the same manner. \file{elsarticle.cls} provides three commands to format theorem or theorem-like environments: \begin{vquote} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newdefinition{rmk}{Remark} \newproof{pf}{Proof} \newproof{pot}{Proof of Theorem \ref{thm2}} \end{vquote} The \verb+\newtheorem+ command formats a theorem in \LaTeX's default style with italicized font, bold font for the theorem heading and the theorem number at the right hand side of the theorem heading. It also optionally accepts an argument which will be printed as an extra heading in parentheses. \begin{vquote} \begin{thm} For system (8), consensus can be achieved with $\|T_{\omega z}$ ... \begin{eqnarray}\label{10} .... \end{eqnarray} \end{thm} \end{vquote} Clip~\ref{clip5} shows how some text enclosed in the above code\goodbreak \noindent looks: \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newtheorem}} \includeclip{2}{1 1 453 120}{jfigs.pdf} \deforange{orange} The \verb+\newdefinition+ command is the same in all respects as its\linebreak \verb+\newtheorem+ counterpart except that the font shape is roman instead of italic. Both \verb+\newdefinition+ and \verb+\newtheorem+ commands automatically define counters for the environments defined.
\vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newdefinition}} \includeclip{1}{1 1 453 105}{jfigs.pdf} \deforange{orange} The \verb+\newproof+ command defines proof environments with an upright font shape. No counters are defined. \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newproof}} \includeclip{3}{1 1 453 65}{jfigs.pdf} \deforange{orange} Users can also make use of \verb+amsthm.sty+, which will override all the default definitions described above. \section[Enumerated ...]{Enumerated and Itemized Lists} \file{elsarticle.cls} provides extended list processing macros which make the usage a bit more user friendly than the default \LaTeX{} list macros. With an optional argument to the \verb+\begin{enumerate}+ command, you can change the list counter type and its attributes. \begin{vquote} \begin{enumerate}[1.] \item The enumerate environment starts with an optional argument `1.', so that the item counter will be suffixed by a period. \item You can use `a)' for an alphabetical counter and `(i)' for a roman counter. \begin{enumerate}[a)] \item Another level of list with alphabetical counter. \item One more item before we start another. \end{vquote} \deforange{blue!70} \src{List -- Enumerate} \includeclip{4}{1 1 453 185}{jfigs.pdf} \deforange{orange} Further, the enhanced list environment allows one to prefix a string like `step' to all the item numbers. \begin{vquote} \begin{enumerate}[Step 1.] \item This is the first step of the example list. \item Obviously this is the second step. \item The final step to wind up this example. \end{enumerate} \end{vquote} \deforange{blue!70} \src{List -- enhanced} \includeclip{5}{1 1 313 83}{jfigs.pdf} \deforange{orange} \section{Cross-references} In electronic publications, articles may be internally hyperlinked. Hyperlinks are generated from proper cross-references in the article. For example, the words \textcolor{black!80}{Fig.~1} will never be more than simple text, whereas the proper cross-reference \verb+\ref{tiger}+ may be turned into a hyperlink to the figure itself: \textcolor{blue}{Fig.~1}. In the same way, the words \textcolor{blue}{Ref.~[1]} will fail to turn into a hyperlink; the proper cross-reference is \verb+\cite{Knuth96}+. Cross-referencing is possible in \LaTeX{} for sections, subsections, formulae, figures, tables, and literature references. \section[Mathematical ...]{Mathematical symbols and formulae} Many physical/mathematical sciences authors require more mathematical symbols than the few that are provided in standard \LaTeX. A useful package for additional symbols is the \file{amssymb} package, developed by the American Mathematical Society. This package includes such oft-used symbols as $\lesssim$ (\verb+\lesssim+), $\gtrsim$ (\verb+\gtrsim+) or $\hbar$ (\verb+\hbar+). Note that your \TeX{} system should have the \file{msam} and \file{msbm} fonts installed. If you need only a few symbols, such as $\Box$ (\verb+\Box+), you might try the package \file{latexsym}. Another point which would require authors' attention is the breaking up of long equations. When you use \file{elsarticle.cls} for formatting your submissions in the \verb+preprint+ mode, the document is formatted in single column style with a text width of 384pt or 5.3in. When this document is formatted for final print and if the journal happens to be a double column journal, the text width will be reduced to about 224pt for 3+ double column and 5+ journals.
All the nifty fine-tuning in equation breaking done by the author goes to waste in such cases. Therefore, authors are requested to check this problem by typesetting their submissions in the final format as well, just to see if their equations are broken at appropriate places, by changing the appropriate options in the document class loading command, which is explained in section~\ref{sec:usage}, \nameref{sec:usage}. This allows authors to fix any equation breaking problem before submission for publication. \file{elsarticle.cls} supports formatting the author submission in different types of final format. This is further discussed in section \ref{sec:final}, \nameref{sec:final}. \enlargethispage*{\baselineskip} \subsection*{Displayed equations and double column journals} Many Elsevier journals print their text in two columns. Since the preprint layout uses a larger line width than such columns, the formulae are too wide for the line width in print. Here is an example of an equation (see equation 6) which is perfect in a single column preprint format: In the normal course, articles are prepared and submitted in single column format even if the final printed article will appear in a double column format journal. The problem is that when the article is typeset by the typesetters for pagination and to fit within the single column width, they have to break the lengthy equations and align them properly. Even though most of the tasks in preparing the proof are automated, equation breaking and aligning require manual judgement. Wherever a manual operation is involved, the area is error prone, and authors need to check such equations carefully. However, if authors themselves break the equations to the single column width, typesetters need not touch these areas, and the proofs the authors receive will be free of such errors. \setlength\Sep{6pt} \src{See equation (6)} \deforange{blue!70} \includeclip{4}{105 500 500 700}{1psingleauthorgroup.pdf} \deforange{orange} \noindent When this document is typeset for publication in a model 3+ journal with double columns, the equation will overlap the second column text matter if the equation is not broken at the appropriate location. \vspace*{6pt} \deforange{blue!70} \src{See equation (6) overprints into second column} \includeclip{3}{59 421 532 635}{elstest-3pd.pdf} \deforange{orange} \vspace*{6pt} \noindent The typesetter will try to break the equation, which need not necessarily be to the liking of the author, or, as it happens, the typesetter's break point may be semantically incorrect. Therefore, authors may check their submissions for the incidence of such long equations and break the equations at the correct places so that the final typeset copy will be as they wish. \section{Bibliography} Three bibliographic style files (\verb+*.bst+) are provided --- \file{elsarticle-num.bst}, \file{elsarticle-num-names.bst} and \file{elsarticle-harv.bst} --- the first one can be used for the numbered scheme, the second one for the numbered scheme with new options of \file{natbib.sty}, and the third one for the author-year scheme. In the \LaTeX{} literature, references are listed in the \verb+thebibliography+ environment. Each reference is a \verb+\bibitem+ and each \verb+\bibitem+ is identified by a label, by which it can be cited in the text: \verb+\bibitem[Elson et al.(1996)]{ESG96}+ is cited as \verb+\citet{ESG96}+. \noindent In connection with cross-referencing and possible future hyperlinking it is not a good idea to collect more than one literature item in one \verb+\bibitem+.
The so-called Harvard or author-year style of referencing is enabled by the \LaTeX{} package \file{natbib}. With this package the literature can be cited as follows: \begin{enumerate}[\textbullet] \item Parenthetical: \verb+\citep{WB96}+ produces (Wettig \& Brown, 1996). \item Textual: \verb+\citet{ESG96}+ produces Elson et al. (1996). \item An affix and part of a reference: \verb+\citep[e.g.][Ch. 2]{Gea97}+ produces (e.g. Governato et al., 1997, Ch. 2). \end{enumerate} In the numbered scheme of citation, \verb+\cite{<label>}+ is used, since \verb+\citep+ or \verb+\citet+ has no relevance in the numbered scheme. The \file{natbib} package is loaded by \file{elsarticle} with \verb+numbers+ as the default option. You can change this to the author-year or Harvard scheme by adding the option \verb+authoryear+ in the class loading command. If you want to use more options of the \file{natbib} package, you can do so with the \verb+\biboptions+ command, which is described in section \ref{sec:usage}, \nameref{sec:usage}. For details of the various options of the \file{natbib} package, please take a look at the \file{natbib} documentation, which is part of any standard \LaTeX{} installation. In addition to the above standard \verb+.bst+ files, there are 10 journal-specific \verb+.bst+ files also available. Instructions for using these \verb+.bst+ files can be found at \href{http://support.stmdocs.in/wiki/index.php?title=Model-wise_bibliographic_style_files} {http://support.stmdocs.in} \section[Graphical ...]{Graphical abstract and highlights} A template for adding a graphical abstract and highlights is now available. These will appear as the first two pages of the PDF before the article content begins. \pagebreak Please refer below to see how to code them. \begin{vquote} .... .... \end{abstract} \begin{graphicalabstract} \end{graphicalabstract} \begin{highlights} \item Research highlight 1 \item Research highlight 2 \end{highlights} \begin{keyword} .... .... \end{vquote} \section{Final print}\label{sec:final} The authors can format their submission to the page size and margins of their preferred journal. \file{elsarticle} provides four class options for this. But this does not mean that using these options you can emulate the exact page layout of the final print copy. \lmrgn=3em \begin{description} \item [\texttt{1p}:] $1+$ journals with a text area of 384pt $\times$ 562pt or 13.5cm $\times$ 19.75cm or 5.3in $\times$ 7.78in, single column style only. \item [\texttt{3p}:] $3+$ journals with a text area of 468pt $\times$ 622pt or 16.45cm $\times$ 21.9cm or 6.5in $\times$ 8.6in, single column style. \item [\texttt{twocolumn}:] should be used along with the 3p option if the journal is $3+$ with the same text area as above, but double column style. \item [\texttt{5p}:] $5+$ with a text area of 522pt $\times$ 682pt or 18.35cm $\times$ 24cm or 7.22in $\times$ 9.45in, double column style only. \end{description} The following pages have the clippings of different parts of the title page of different journal models typeset in final format. Model $1+$ and $3+$ will have the same look and feel in the typeset copy when presented in this document. That is also the case with the double column $3+$ and $5+$ journal article pages. The only difference will be the wider text width of higher models. Here are the specimen single and double column journal pages.
\begin{comment} \begin{center} \hypertarget{bsc}{} \hyperlink{sc}{ {\bf [Specimen single column article -- Click here]} } \hypertarget{bsc}{} \hyperlink{dc}{ {\bf [Specimen double column article -- Click here]} } \end{center} \end{comment} \vspace*{-.5pc} \enlargethispage*{\baselineskip} \src{}\hypertarget{sc}{} \deforange{blue!70} \hyperlink{bsc}{\includeclip{1}{88 120 514 724}{elstest-1p.pdf}} \deforange{orange} \src{}\hypertarget{dc}{} \deforange{blue!70} \hyperlink{bsc}{\includeclip{1}{27 61 562 758}{elstest-5p.pdf}} \deforange{orange} ~\hfill $\Box$ \end{document}
\section{Introduction} In the pioneering work \cite{DHKK}, Dimitrov--Haiden--Katzarkov--Kontsevich introduced the notion of the categorical entropy of an endofunctor on a triangulated category with a split generator. A typical example of such a triangulated category is given by the bounded derived category $\mathcal{D}^b(X)$ of coherent sheaves on a variety $X$. When $X$ is a smooth projective variety with ample (anti-)canonical bundle, the group of autoequivalences on $\mathcal{D}^b(X)$ is well-understood \cite{BO}. It is generated by tensoring line bundles, automorphisms of the variety, and degree shifts. The categorical entropy in this case was computed by Kikuta and Takahashi \cite{KT}. On the other hand, when the variety is Calabi--Yau, there are many more autoequivalences on $\mathcal{D}^b(X)$ because of the presence of spherical objects. In this article, we show that the composite of the simplest spherical twist $\mathrm{T}_{\mathcal{O}_X}$ with $-\otimes\mathcal{O}(-H)$ already gives an interesting categorical entropy. \begin{Thm}[= Theorem \ref{Thm:A}] Let $X$ be a strict Calabi--Yau manifold over $\mathbb{C}$ of dimension $d\geq 3$. Consider the autoequivalence $\Phi:=\mathrm{T}_{\mathcal{O}_X}\circ(-\otimes\mathcal{O}(-H))$ on $\mathcal{D}^b(X)$. The categorical entropy $h_t(\Phi)$ is a positive function in $t\in\mathbb{R}$. Moreover, for any $t\in\mathbb{R}$, $h_t(\Phi)$ is the unique $\lambda>0$ satisfying $$ \sum_{k\geq 1}\frac{\chi(\mathcal{O}(kH))}{e^{k\lambda}}=e^{(d-1)t}. $$ \end{Thm} A simple argument on the Hilbert polynomial shows that this equation defines an algebraic curve over $\mathbb{Q}$ in the coordinates $(e^t,e^{\lambda})$; a short derivation is spelled out below. Thus the algebraicity conjecture in \cite[Question 4.1]{DHKK} holds in this case. The notion of categorical entropy is a categorical analogue of the topological entropy of a continuous surjective self-map on a compact metric space. There is a fundamental theorem on topological entropy due to Gromov and Yomdin. \begin{Thm}[\cite{Gro1,Gro2,Yom}]\label{Thm:GY} Let $M$ be a compact K\"ahler manifold and let $f:M\rightarrow M$ be a surjective holomorphic map. Then $$ h_{\mathrm{top}}(f)=\log\rho(f^*), $$ where $h_{\mathrm{top}}(f)$ is the topological entropy of $f$, and $\rho(f^*)$ is the spectral radius of $f^*:H^*(M;\mathbb{C})\rightarrow H^*(M;\mathbb{C})$. \end{Thm} Kikuta and Takahashi proposed the following analogous conjecture on categorical entropy. \begin{Conj}[\cite{KT}]\label{Conj:KT} Let $X$ be a smooth proper variety over $\mathbb{C}$ and let $\Phi$ be an autoequivalence on $\mathcal{D}^b(X)$. Then $$ h_0(\Phi)=\log\rho(\mathrm{HH}_{\bullet}(\Phi)), $$ where $\mathrm{HH}_{\bullet}(\Phi):\mathrm{HH}_{\bullet}(X)\rightarrow\mathrm{HH}_{\bullet}(X)$ is the induced $\mathbb{C}$-linear isomorphism on the Hochschild homology group of $X$, and $\rho(\mathrm{HH}_{\bullet}(\Phi))$ is its spectral radius. \end{Conj} Note that one can replace $\mathrm{HH}_{\bullet}(\Phi)$ by the induced Fourier-Mukai type action on the cohomology $\Phi_{H^*}:H^*(X;\mathbb{C})\rightarrow H^*(X;\mathbb{C})$, because there is a commutative diagram \begin{equation*} \xymatrix{\mathrm{HH}_{\bullet}(X)\ar[rr]^{\mathrm{HH}_{\bullet}(\Phi)}\ar[d]_{I_K^{X}}&&\mathrm{HH}_{\bullet}(X)\ar[d]^{I_K^{X}}\\ H^*(X;\mathbb{C})\ar[rr]^{\Phi_{H^*}}&&H^*(X;\mathbb{C}),} \end{equation*} where $I_K^{X}$ is the modified Hochschild--Kostant--Rosenberg isomorphism \cite[Theorem 1.2]{MSM}. Hence $\rho(\mathrm{HH}_{\bullet}(\Phi))=\rho(\Phi_{H^*})$.
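To spell out the algebraicity remark made after Theorem \ref{Thm:A} (a standard computation which we include only for the reader's convenience): write $x=e^{-\lambda}$ and let $\chi(\mathcal{O}(kH))=P(k)=\sum_{m=0}^{d}c_mk^m$ with $c_m\in\mathbb{Q}$ be the Hilbert polynomial. Then $$ \sum_{k\geq 1}P(k)\,x^k=\sum_{m=0}^{d}c_m\Big(x\frac{\partial}{\partial x}\Big)^{m}\frac{x}{1-x}=\frac{Q(x)}{(1-x)^{d+1}} $$ for some polynomial $Q$ with rational coefficients, so the equation in Theorem \ref{Thm:A} becomes $Q(e^{-\lambda})=e^{(d-1)t}(1-e^{-\lambda})^{d+1}$, which, after clearing powers of $e^{-\lambda}$, is a polynomial relation over $\mathbb{Q}$ in the coordinates $(e^{t},e^{\lambda})$.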
Conjecture \ref{Conj:KT} has been proved in several cases. See \cite{K,KST,KT,Yos}. Now we explain the motivation of the present work, namely why we should not expect Conjecture \ref{Conj:KT} to hold in general. We first note that Theorem \ref{Thm:GY} does not hold if $f$ is not holomorphic. For example, there is a construction by Thurston \cite{Thur} of pseudo-Anosov maps that act trivially on the cohomology, so that the logarithm of the spectral radius of the induced action is zero. Moreover, Dimitrov--Haiden--Katzarkov--Kontsevich \cite[Theorem 2.18]{DHKK} showed that the categorical entropy of the induced autoequivalence on the derived Fukaya category is equal to $\log\lambda$, where $\lambda>1$ is the stretch factor of the pseudo-Anosov map. Hence the analogous statement of Conjecture \ref{Conj:KT} is not true if $\mathcal{D}^b(X)$ is replaced by the derived Fukaya categories of the symplectic manifolds considered in \cite{DHKK,Thur}. Motivated by homological mirror symmetry, one may expect to find counterexamples to Conjecture \ref{Conj:KT} on the derived categories of coherent sheaves on Calabi--Yau manifolds. In other words, the discrepancy between complex and symplectic geometry should lead to the discrepancy between Theorem \ref{Thm:GY} and Conjecture \ref{Conj:KT}. Using Theorem \ref{Thm:A}, we construct counterexamples to Conjecture \ref{Conj:KT}. \begin{Prop}[= Proposition \ref{Prop:B}] For any even integer $d\geq4$, let $X$ be a Calabi--Yau hypersurface in $\mathbb{C}\mathbb{P}^{d+1}$ of degree $(d+2)$ and $\Phi=\mathrm{T}_{\mathcal{O}_X}\circ(-\otimes\mathcal{O}(-1))$. Then $$ h_0(\Phi)> 0=\log\rho(\Phi_{H^*}). $$ In particular, Conjecture \ref{Conj:KT} fails in this case. \end{Prop} Interestingly, as pointed out to the author by Genki Ouchi, the same autoequivalence does not produce counterexamples to Conjecture \ref{Conj:KT} if $X$ is an \emph{odd} dimensional Calabi--Yau manifold (see Remark \ref{Rmk:Genki}). \section{Preliminaries} \subsection{Categorical entropy} We recall the notion of categorical entropy introduced by Dimitrov--Haiden--Katzarkov--Kontsevich \cite{DHKK}. Let $\mathcal{D}$ be a triangulated category. A triangulated subcategory is called \emph{thick} if it is closed under taking direct summands. The \emph{split closure} of an object $E\in\mathcal{D}$ is the smallest thick triangulated subcategory containing $E$. An object $G\in\mathcal{D}$ is called a \emph{split generator} if its split closure is $\mathcal{D}$. \begin{Def}[\cite{DHKK}] Let $E$ and $F$ be non-zero objects in $\mathcal{D}$. If $F$ is in the split closure of $E$, then the \emph{complexity} of $F$ relative to $E$ is defined to be the function $$ \delta_t(E,F):= \inf\left\{ \displaystyle\sum_{i=1}^k e^{n_i t} \,\middle|\, \begin{xy} (0,5) *{0}="0", (20,5)*{A_{1}}="1", (30,5)*{\dots}, (40,5)*{A_{k-1}}="k-1", (60,5)*{F\oplus F'}="k", (10,-5)*{E[n_{1}]}="n1", (30,-5)*{\dots}, (50,-5)*{E[n_{k}]}="nk", \ar "0"; "1" \ar "1"; "n1" \ar@{-->} "n1";"0" \ar "k-1"; "k" \ar "k"; "nk" \ar@{-->} "nk";"k-1" \end{xy} \, \right\}, $$ where $t$ is a real parameter that keeps track of the shifts. \end{Def} \begin{Def}[\cite{DHKK}] Let $\mathcal{D}$ be a triangulated category with a split generator $G$ and let $\Phi:\mathcal{D}\rightarrow\mathcal{D}$ be an endofunctor. The \emph{categorical entropy} of $\Phi$ is defined to be the function $h_t(\Phi):\mathbb{R}\rightarrow[-\infty,\infty)$ given by $$ h_t(\Phi):=\lim_{n\rightarrow\infty}\frac{1}{n}\log\delta_t(G,\Phi^nG). $$ \end{Def} \begin{Lemma}[\cite{DHKK}] The limit $\lim_{n\rightarrow\infty}\frac{1}{n}\log\delta_t(G,\Phi^nG)$ exists in $[-\infty,\infty)$ for every $t\in\mathbb{R}$, and is independent of the choice of the split generator $G$. \end{Lemma} We will use the following proposition to compute categorical entropy. \begin{Prop}[\cite{DHKK,KT}]\label{Prop:DHKK} Let $G$ and $G'$ be split generators of $\mathcal{D}^b(X)$ and let $\Phi$ be an autoequivalence on $\mathcal{D}^b(X)$. Then the categorical entropy is given by $$ h_t(\Phi)=\lim_{n\rightarrow\infty}\frac{1}{n}\log\sum_{l\in\mathbb{Z}}\dim\mathrm{Hom}^l_{\mathcal{D}^b(X)}(G,\Phi^nG')e^{-lt}. $$ \end{Prop} \subsection{Spherical objects and spherical twists} We recall the notions of spherical objects and spherical twists introduced by Seidel--Thomas \cite{ST}. They are categorical analogues of Lagrangian spheres in symplectic manifolds and of the Dehn twists along them. \begin{Def}[\cite{ST}] An object $S\in\mathcal{D}^b(X)$ is called \emph{spherical} if $S\otimes\omega_X\cong S$ and $\mathrm{Hom}^{\bullet}_{\mathcal{D}^b(X)}(S,S)=\mathbb{C}\oplus\mathbb{C}[-\dim X]$. \end{Def} \begin{Def}[\cite{ST}] The \emph{spherical twist} $\mathrm{T}_S$ with respect to a spherical object $S$ is an autoequivalence on $\mathcal{D}^b(X)$ given by $$ E\mapsto\mathrm{T}_S(E):=\mathrm{Cone}(\mathrm{Hom}^{\bullet}_{\mathcal{D}^b(X)}(S,E)\otimes S\rightarrow E). $$ \end{Def} We also recall the definition of strict Calabi--Yau manifolds. \begin{Def}\label{Def:Strict CY} A smooth projective variety $X$ is called \emph{strict Calabi--Yau} if $\omega_X\cong\mathcal{O}_X$ and $H^i(X,\mathcal{O}_X)=0$ for all $0<i<\dim X$. This is equivalent to the condition that $\mathcal{O}_X$ is a spherical object. \end{Def} \section{Computation of categorical entropy} We fix the notations and assumptions that will be used throughout this section. \subsection*{Notations} We work over the field of complex numbers $\mathbb{C}$. \begin{itemize} \item $X$ is a strict Calabi--Yau manifold (Definition \ref{Def:Strict CY}) with a very ample divisor $H$. \item $d:=\mathrm{dim}_{\mathbb{C}}X\geq3$. \item $\mathcal{O}:=\mathcal{O}_X$ and $\mathcal{O}(k):=\mathcal{O}_X(kH)$. \item $a_k:=h^0(\mathcal{O}(k))=\chi(\mathcal{O}(k))$ for $k>0$. \item $G:=\oplus_{i=1}^{d+1}\mathcal{O}(i)$ and $G':=\oplus_{i=1}^{d+1}\mathcal{O}(-i)$. By a result of Orlov \cite{Orlov}, both $G$ and $G'$ are split generators of $\mathcal{D}^b(X)$. \end{itemize} The goal of this section is to prove the following theorem. \begin{Thm}\label{Thm:A} Let $X$ be a strict Calabi--Yau manifold over $\mathbb{C}$ of dimension $d\geq 3$. Consider the autoequivalence $\Phi:=\mathrm{T}_{\mathcal{O}}\circ(-\otimes\mathcal{O}(-1))$ on $\mathcal{D}^b(X)$. The categorical entropy $h_t(\Phi)$ is a positive function in $t\in\mathbb{R}$. Moreover, for any $t\in\mathbb{R}$, $h_t(\Phi)$ is the unique $\lambda>0$ satisfying $$ \sum_{k\geq 1}\frac{\chi(\mathcal{O}(k))}{e^{k\lambda}}=e^{(d-1)t}. $$ \end{Thm} We begin with a lemma that will be crucial in the computation of the categorical entropy. \begin{Lemma}\label{Lemma:degree} For any integers $n\geq 0$ and $k>0$, $\mathrm{Hom}^{l}(\mathcal{O},\Phi^n(G')\otimes\mathcal{O}(-k))$ is zero except for $l=d,d+(d-1),\ldots,d+n(d-1)$. \end{Lemma} \begin{proof} We argue by induction on $n$. The statement is true when $n=0$ by the Kodaira vanishing theorem and Serre duality.
By the definition of $\Phi$, there is an exact triangle $$ \Phi^{n-1}(G')\otimes\mathcal{O}(-1)\rightarrow\Phi^n(G')\rightarrow \mathrm{Hom}^{\bullet}(\mathcal{O},\Phi^{n-1}(G')\otimes\mathcal{O}(-1))\otimes\mathcal{O}[1]\xrightarrow{+1}. $$ By tensoring it with $\mathcal{O}(-k)$ and applying $\mathrm{Hom}^{\bullet}(\mathcal{O},-)$, we get a long exact sequence: $\mathrm{Hom}^{\bullet}(\mathcal{O},\Phi^{n-1}(G')\otimes\mathcal{O}(-k-1))\rightarrow\mathrm{Hom}^{\bullet}(\mathcal{O},\Phi^n(G')\otimes\mathcal{O}(-k))\rightarrow$ \\ $\mathrm{Hom}^{\bullet}(\mathcal{O},\Phi^{n-1}(G')\otimes\mathcal{O}(-1))\otimes\mathrm{Hom}^{\bullet}(\mathcal{O},\mathcal{O}(-k))[1] \xrightarrow{+1}.$ Suppose the statement is true for $n-1$. Then the first complex in the long exact sequence is non-zero only at degree $d,d+(d-1),\ldots,d+(n-1)(d-1)$, and the third complex is non-zero only at degree $d+(d-1),d+2(d-1),\ldots,d+n(d-1)$. Since we assume the dimension $d\geq 3$, the long exact sequence splits into short exact sequences, and the proof follows. \end{proof} For $n\geq0$ and $k>0$, define $$ B_{n,k}:=\sum_{m=0}^n\dim\mathrm{Hom}^{d+m(d-1)}(\mathcal{O},\Phi^n(G')\otimes\mathcal{O}(-k)) \cdot e^{-m(d-1)t}. $$ In particular, $B_{0,k}=\dim\mathrm{Hom}^d(\mathcal{O},\mathcal{O}(-k))=a_k$. The proof of Lemma \ref{Lemma:degree} also gives a recursive relation among $B_{n,k}$'s: \begin{equation}\label{Recursion} B_{n,k}=B_{n-1,k+1}+a_ke^{-(d-1)t}B_{n-1,1}. \tag{*} \end{equation} \begin{Lemma}\label{Lemma:recursion} Define $$ P_{s,k}:=\sum_{\substack{i_1+\cdots+i_q=s\\i_1\geq k}} a_{i_1}a_{i_2}\cdots a_{i_q} e^{-q(d-1)t}, $$ where the summation runs over all ordered partitions of $s$ with the first piece no less than $k$. Then we have $$ B_{n,k}=a_{n+k}+\sum_{s=1}^n a_s P_{n+k-s,k}. $$ \end{Lemma} \begin{proof} Induction on $n$ and use the recursive relation (\ref{Recursion}). \end{proof} Now we are ready to prove Theorem \ref{Thm:A}. \begin{proof}[Proof of Theorem \ref{Thm:A}] By Proposition \ref{Prop:DHKK}, the categorical entropy can be written as \begin{equation*} \begin{split} h_t(\Phi) & = \lim_{n\rightarrow\infty}\frac{1}{n}\log\sum_l \dim\mathrm{Hom}^l (G,\Phi^n(G'))e^{-lt} \\ & = \lim_{n\rightarrow\infty}\frac{1}{n}\log\sum_{k=1}^{d+1}\sum_l \dim\mathrm{Hom}^l(\mathcal{O},\Phi^n(G')\otimes\mathcal{O}(-k))e^{-lt} \\ & = \lim_{n\rightarrow\infty}\frac{1}{n}\log\sum_{k=1}^{d+1} B_{n,k}. \end{split} \end{equation*} By Lemma \ref{Lemma:recursion} and the fact that $\{a_k\}$ is an increasing sequence, we have $B_{n,1}\leq B_{n,k+1}\leq B_{n+k,1}$. Hence $$ h_t(\Phi)= \lim_{n\rightarrow\infty}\frac{1}{n}\log B_{n,1}. $$ Define $C_n:=B_{n,1}e^{-(d-1)t}$ and $a'_k:=a_ke^{-(d-1)t}$. Then again by Lemma \ref{Lemma:recursion}, \begin{equation*} \begin{split} C_n & = \Big(a_{n+1}+\sum_{s=1}^na_sP_{n+1-s,1}\Big)e^{-(d-1)t} \\ & =\sum_{i_1+\cdots+i_q=n+1}a'_{i_1}a'_{i_2}\cdots a'_{i_q}. \end{split} \end{equation*} Thus \begin{equation}\label{Cn} C_n=a'_1C_{n-1}+a'_2C_{n-2}\cdots+a'_nC_0+a'_{n+1}. \tag{**} \end{equation} Notice that $h_t(\Phi)= \lim_{n\rightarrow\infty}\frac{1}{n}\log B_{n,1}=\lim_{n\rightarrow\infty}\frac{1}{n}\log C_n$ is a positive real number for any $t\in\mathbb{R}$, because there exists some $k>0$ (depending on $d$ and $t$) such that $a'_k=a_ke^{-(d-1)t}>1$, hence $$ \lim_{n\rightarrow\infty}\frac{1}{n}\log C_n \geq\lim_{n\rightarrow\infty}\frac{1}{n}\log (a'_k)^ {\lfloor\frac{n+1}{k}\rfloor} =\frac{1}{k}\log a'_k >0. 
$$ By dividing both sides of (\ref{Cn}) by $C_n$ and taking $n\rightarrow\infty$, we can show that the categorical entropy $h_t(\Phi)$ is the unique $\lambda>0$ satisfying $$ \sum_{k\geq1}\frac{a'_k}{e^{k\lambda}}=1. $$ Equivalently, $$ \sum_{k\geq 1}\frac{\chi(\mathcal{O}(k))}{e^{k\lambda}}=e^{(d-1)t}. $$ \end{proof} \section{Counterexample to the conjecture of Kikuta-Takahashi} \begin{Prop}\label{Prop:B} For any even integer $d\geq4$, let $X$ be a Calabi--Yau hypersurface in $\mathbb{C}\mathbb{P}^{d+1}$ of degree $(d+2)$ and $\Phi=\mathrm{T}_{\mathcal{O}}\circ(-\otimes\mathcal{O}(-1))$. Then $$ h_0(\Phi)> 0=\log\rho(\Phi_{H^*}). $$ In particular, Conjecture \ref{Conj:KT} fails in this case. \end{Prop} \begin{proof} By Theorem \ref{Thm:A}, it suffices to show that the spectral radius $\rho(\Phi_{H^*})$ equals $1$. Consider another autoequivalence $\Phi':=\mathrm{T}_{\mathcal{O}}\circ(-\otimes\mathcal{O}(1))$ on $\mathcal{D}^b(X)$. By \cite[Proposition 5.8]{BFK}, there is a commutative diagram \begin{equation*} \xymatrix{\mathcal{D}^b(X)\ar[rr]^{\Phi'}\ar[d]_{\Psi}&&\mathcal{D}^b(X)\ar[d]^{\Psi}\\ \mathrm{HMF^{\mathrm{gr}}}(W)\ar[rr]^{\tau}&&\mathrm{HMF^{\mathrm{gr}}}(W).} \end{equation*} Here $W$ is the defining polynomial of $X$, $\mathrm{HMF^{\mathrm{gr}}}(W)$ is the associated graded matrix factorization category, $\Psi$ is an equivalence introduced by Orlov \cite{Orl2}, and $\tau$ is the grade shift functor on $\mathrm{HMF^{\mathrm{gr}}}(W)$ which satisfies $\tau^{d+2}=[2]$. Hence we have $(\Phi')^{d+2}=[2]$ and $(\Phi')_{H^*}^{d+2}=\mathrm{id}_{H^*}$. On the other hand, $(\mathrm{T}_{\mathcal{O}})_{H^*}$ is an involution on $H^*(X;\mathbb{C})$ when $X$ is an even dimensional strict Calabi--Yau manifold (\cite[Corollary 8.13]{Huy}). Thus $(\mathrm{T}_{\mathcal{O}})_{H^*}=(\mathrm{T}_{\mathcal{O}})_{H^*}^{-1}$, and $\Phi_{H^*}^{-1}=(-\otimes\mathcal{O}(1))_{H^*}\circ(\mathrm{T}_{\mathcal{O}})_{H^*}$ is therefore conjugate to $(\Phi')_{H^*}$. Hence we also have $\Phi_{H^*}^{d+2}=\mathrm{id}_{H^*}$, which implies that $\rho(\Phi_{H^*})=1$. \end{proof} \begin{Rmk} The autoequivalence $\Phi'$ that we considered in the proof is the one that corresponds to the monodromy around the Gepner point ($\mathbb{Z}_{d+2}$-orbifold point) in the K\"ahler moduli of $X$. \end{Rmk} \begin{Rmk}[Genki Ouchi]\label{Rmk:Genki} The functor $\Phi=\mathrm{T}_{\mathcal{O}}\circ(-\otimes\mathcal{O}(-1))$ does not produce counterexamples to Conjecture \ref{Conj:KT} if $X$ is an \emph{odd} dimensional Calabi--Yau manifold. When $X$ is of odd dimension, Lemma \ref{Lemma:degree} implies $$ h_0(\Phi)=\lim_{n\rightarrow\infty}\frac{1}{n}\log\chi(G,\Phi^n(G'))\leq\log\rho([\Phi]). $$ On the other hand, we have $h_0(\Phi)\geq\log\rho([\Phi])$ by Kikuta-Shiraishi-Takahashi \cite[Theorem 2.13]{KST}. \end{Rmk} \subsection*{Acknowledgement} I would like to thank Fabian Haiden for suggesting this problem to me, and Genki Ouchi for reading and pointing out Remark \ref{Rmk:Genki}. I would also like to thank Philip Engel, Hansol Hong, Atsushi Kanazawa, Koji Shimizu, Yukinobu Toda and Cheng-Chiang Tsai for helpful conversations and correspondence. Finally, I would like to thank Shing-Tung Yau and the Harvard University Math Department for warm support.
\section{Introduction} There has been a lot of interest in the effects of thermal fluctuations on the behaviour of flux-lines ever since the discovery of the high-$T_c$ superconductors. The high temperatures and fields at which these new materials remain in the mixed state, and their small coherence lengths, mean that these fluctuations may be strong enough to totally alter the nature of the vortex state from the mean-field solution of an Abrikosov lattice. Debate and controversy have continued on the possibilities of different vortex phases above and below the irreversibility line (at which the vortices become depinned). We believe that the results of the calculations in this paper shed some light on certain questions about the vortex state. Our primary conclusion is that there is very little entanglement near and below the irreversibility line. In this work, we start with the triangular vortex-lattice, and consider the free energy costs of topological defects within the lattice. Because of thermal fluctuations, the spontaneous formation of defects of finite free energy cost is important in the description of the vortex state. These defects may include `crossings' where two or more lines may pass through the same point, and `braids' where a collection of lines twist around each other as one moves along the field direction. Our approach is always to begin with the Abrikosov lattice of vortices that corresponds to the mean-field solution for the Ginzburg-Landau free energy functional within the lowest Landau level approximation\cite{Eilenberger,Fetter}. We realize of course that the LLL approximation only holds strictly in the high field regime of the superconducting phase diagram, just below $H_{c2}$, and that even the existence of this lattice may be questioned. Having set up the perfect lattice of vortices, we then allow a restricted number of the vortex-lines to move subject to certain boundary conditions that define a topological defect, and then find the minimum free energy cost of such a defect. The free energy cost of a crossing configuration provides an estimate of the energy barrier for vortex lines to cross through each other, or to cut and reconnect. Accurate estimates of the crossing energy barrier in the different regions of the $H$-$T$ phase diagram are desired because of their relevance to the vortex dynamics of high-$T_c$ superconductors. The energy barrier, $U_{\times}$, to crossing/cutting of vortex lines is a fundamental parameter of the entangled state \cite{Cates,Marchetti}. This has encouraged calculations of this energy in different approximations \cite{Wilkin,Carraro}. The calculation in this paper is the first to consider the effects on $U_{\times}$ of the surrounding triangular lattice. The braid defects are examples of the sort of configurations one expects to find in the entangled vortex state. As well as affecting the response to pinning centers, they will have interactions with any component of current parallel to the applied field. The growth of braid defects may be a source of dissipation of such longitudinal currents. An important result of our calculation is that there are no stable two-line or three-line braid defects, at least within our approximations. The smallest braid defect that is stable, but which has a relatively high free energy cost, is the six-line braid.
The non-existence of stable braid configurations with small free energies has the consequence that in a sample of high-$T_c$ superconductor of typical size there is a large region of the \hbox{$H$-$T$ phase} diagram in which there will be no braid defects present in thermal equilibrium. We estimate that there will be no braids for values of the reduced temperature far above the irreversibility line and quite close to the $H_{c2}$ line (see Section~\ref{sec:noent}). If there are no braid defects present in a given sample then the vortex system is {\em disentangled}. This is contrary to the picture of an entangled vortex liquid suggested by Nelson \cite{Nelson}. In Section~\ref{sec:method} we describe the formulation of our calculations. Section~\ref{sec:small} contains our calculations and results for crossing and braid defects involving two, three, six and twelve lines. In Section~\ref{sec:large} we extend our method to large scale defects of the Abrikosov lattice. The free energy cost of an infinite straight screw dislocation is calculated, and the result is used to find a limiting form for the free energy cost of very large braids. This free energy is found to depend on the area enclosed by the braid as well as the perimeter length. In Section~\ref{sec:appl} it is shown how our results for small braids lead to the conclusion that entanglements are not important in a large region above and below the irreversibility line of high-$T_c$ superconductors of typical sizes. Other applications of the calculations are also suggested in Section~\ref{sec:appl}. \section{Method of the Calculations} \label{sec:method} For the sake of clarity, and to set the notation used, the general method behind the calculations is now described. First, the Ginzburg-Landau theory is formulated and the Lowest Landau Level is defined. Then the ground state corresponding to an Abrikosov triangular lattice is presented, and it is shown how to construct deviations from the perfect lattice which remain in the LLL. The thermodynamics of flux-lines within a type-II superconductor is well described for fields $H$ near $H_{c2}(T)$ by Ginzburg-Landau theory. Within this theory, the free energy density is expanded in terms of the superconducting order parameter and its derivatives, \begin{equation} {\cal F}\{\psi\}=\sum_{\mu=1}^3\frac{1}{2m_{\mu}} {|(-i\hbar\partial_\mu -2eA_\mu)\psi|}^2 +\alpha {|\psi|}^2 + \frac{\beta}{2}{|\psi|}^4 + \frac{b^2}{2\mu_0} +const,\\ \end{equation} where $\hbox{\bf b}=\hbox{\boldmath $\nabla$} \times \hbox{\bf A}$ is the microscopic flux density. This expression allows for general anisotropy but applies only to a homogeneous system (layering effects are ignored). For relevance to layered superconductors, we consider a uniaxial anisotropy with $m_x=m_y=m_{ab}$ and $m_z=m_c$. For a uniform external field $\hbox{\bf H}_0$ parallel to the z-axis, it is the Gibbs free energy, ${\cal G}\{\psi\} =F_0+{\cal F}\{\psi\} - \hbox{\bf b}\cdot\hbox{\bf H}_0$, that controls the properties of the system \cite{Ruggeri}. Our main approximation (valid for high fields) is to restrict the order parameter to the `Lowest Landau Level' (LLL) subspace, defined by \begin{equation} \label{lllcondition} \Pi \psi =0, \end{equation} where $\Pi =\Pi_x +i\Pi_y$, $\Pi_x=-i\hbar (\partial/\partial x)-2eA_x$, and $\Pi_y=-i\hbar (\partial/\partial y) -2eA_y$.
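The LLL condition (\ref{lllcondition}) is easy to verify symbolically for order parameters of the general form $f(z)e^{-\pi y^2/\eta}$ that appear below. The following minimal sketch (Python with sympy) carries out this check; the Landau gauge $\hbox{\bf A}=(-By,0)$ used here is our own choice, as no gauge is fixed in the text:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y', real=True)
e, B, hbar = sp.symbols('e B hbar', positive=True)

f = sp.Function('f')                 # arbitrary analytic function of z = x + i y
z = x + sp.I*y
eta = sp.pi*hbar/(e*B)               # eta = Phi_0/B, with Phi_0 = h/2e = pi*hbar/e
psi = f(z)*sp.exp(-sp.pi*y**2/eta)   # general LLL form f(z) exp(-pi y^2/eta)

# Landau gauge A = (-B y, 0), so that
# Pi = Pi_x + i*Pi_y = -i*hbar*d/dx + hbar*d/dy + 2*e*B*y
Pi_psi = -sp.I*hbar*sp.diff(psi, x) + hbar*sp.diff(psi, y) + 2*e*B*y*psi
print(sp.simplify(Pi_psi))           # prints 0: psi satisfies Pi psi = 0
\end{verbatim}
The check succeeds for any analytic prefactor $f(z)$, which is why the vortex positions (the zeros of $f$) can be moved freely without leaving the LLL subspace.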
Within this restriction, substitution into the Gibbs free energy leads to an effective free energy density of \cite{Ruggeri} \begin{equation} \label{free} {\cal F}\{\psi\}= \alpha_H {|\psi|}^2 +\frac{\beta_K}{2} {|\psi|}^4+ \frac{\hbar^2}{2m_c} {\left|\frac{\partial\psi}{\partial z}\right|}^2, \end{equation} where $\alpha_H =\alpha + (e\hbar/m_{ab})\mu_0 H_0$, and $\beta_K=\beta-\mu_0 (e\hbar/m_{ab})^2$. Here, $\alpha_H$ is our reduced temperature variable, which is negative below the $H_{c2}$ line and positive above. It is useful to make the transformations \begin{eqnarray} \label{transform} \psi\rightarrow \tilde{\psi}&=& {\left(\frac{\beta_K}{|\alpha_H|}\right)} ^{\frac{1}{2}}\psi\nonumber\\ z\rightarrow h&=& {\left(\frac{2m_c|\alpha_H|}{\hbar^2}\right)}^{\frac{1}{2}}z. \end{eqnarray} Thus, $h$ is the distance along the $c$-axis in units of the mean field correlation length in this direction, $\xi_c=\sqrt{\hbar^2/2m_c|\alpha_H|}$. Substituting (\ref{transform}) into (\ref{free}) and integrating over all space (for an infinite bulk superconductor) gives a total free energy \begin{equation} \label{totalfree} F= {\left(\frac{\hbar^2}{2m_c}\right)}^{\frac{1}{2}} \frac{{|\alpha_H|}^ {\frac{3}{2}}}{\beta_K} \int dh \int d^2r \left\{ -{|\tilde{\psi}|}^2 +\frac{1}{2} {|\tilde{\psi}|}^4 + {\left|\frac{\partial\tilde{\psi}}{\partial h}\right|}^2 \right\}. \end{equation} A consequence of using the LLL approximation is that solutions of (\ref{lllcondition}) have the general form in the plane perpendicular to the magnetic field \cite{Eilenberger} \begin{equation} \psi_0 (x,y)=f(z)e^{-{\pi y^2}/{\eta}}, \end{equation} where $z$ is not the third dimension in space, but the complex variable, $z=x+iy$, and $\eta=\Phi_0/B$, ($B\equiv \langle b\rangle$ is the mean magnetic flux density, and $\Phi_0=h/2e$ is the flux quantum). $f(z)$ is an analytic function of $z$. Any function of the above form is a function within the LLL subspace. $f(z)$ may be written quite generally in a product form \begin{equation} \label{product} f(z)\propto \prod_i(z-z_i), \end{equation} where the zeros in $f(z)$, $z=z_i$, correspond to the positions of vortices within the superconductor. Of course it is well known that the order parameter that minimizes the free energy (\ref{free}) in the mixed state is a triangular periodic lattice of flux-lines \cite{Fetter}. The corresponding function $f(z)$ for such a lattice is a Jacobi theta function \cite{Eilenberger} \begin{equation} \label{jacobi} \psi_0 (x,y)=\phi(\hbox{\bf r}|\hbox{\bf 0})= Ce^{-\frac{\pi y^2}{\eta}}\vartheta_3\left(\frac{\pi z}{l}, \frac{\pi\tau}{l}\right). \end{equation} Properties of $\phi(\hbox{\bf r}|\hbox{\bf 0})$ and related functions $\phi(\hbox{\bf r}|\hbox{\bf r}_0)$ that span the LLL subspace are given in \cite{Eilenberger}. The function ${\left|\phi(\hbox{\bf r}|\hbox{\bf 0}) \right|}^2$ has the periodicity $\hbox{\bf r}_I$ and $\hbox{\bf r}_{II}$ where $\hbox{\bf r}_I=l(1,0)$, $\hbox{\bf r}_{II}=l(1/2,\sqrt{3}/2)$, and $\tau=x_{II} +iy_{II}={l}/{2}+il\sqrt{3}/2$. $l$ is the distance between neighboring zeros (vortices) in $\phi(\hbox{\bf r}|\hbox{\bf r}_0)$. The area of the unit cell must contain one quantum of flux, so that $\eta=l^2\sqrt{3}/2$. The prefactor in (\ref{jacobi}) is found to be $C=3^{1/8}$, if we use the normalization condition \cite{Eilenberger} \begin{equation} \label{normal} \langle {|\psi_0|}^2\rangle =\int_{\mbox{\scriptsize \em unit cell}} \frac{d^2r}{\eta} {\left| \phi(\hbox{\bf r}|\hbox{\bf 0}) \right|}^2=1. 
\end{equation} To find the free energy of the Abrikosov lattice, we substitute $\tilde{\psi}(x,y,h)=K\psi_0(x,y)$ into (\ref{totalfree}). As ${\left|\psi_0\right|}^2$ is periodic, we can integrate over one unit cell to find the average free energy density \begin{equation} f= {\left(\frac{\hbar^2}{2m_c}\right)}^{\frac{1}{2}} \frac{{|\alpha_H|}^ {\frac{3}{2}}}{\beta_K} \int_{\mbox{\scriptsize \em unit cell}} \frac{d^2r}{\eta} \left\{ -K^2{\left|\psi_0\right|}^2 +K^4{\left|\psi_0\right|}^4 \right\}. \end{equation} This is minimized with respect to $K$ (using (\ref{normal})) by the condition $K^2=1/\beta_A$, where $\beta_A$ is the Abrikosov parameter $\beta_A=\langle {\left|{\psi_0}\right|}^4\rangle / {\langle {\left|{\psi_0}\right|}^2\rangle}^2 \simeq 1\cdot 1596$. The free energy per flux-line per unit length for the Abrikosov lattice is therefore given by $f_L=-{{\alpha_H}^2}/{2\beta_K\beta_A}$. As any LLL function has the general form (\ref{product}), including $f(z)= \vartheta_3\left(\pi z/l,\pi\tau/l\right)$ (see Appendix~\ref{ap:theta}), we can construct a function representing the order parameter for a triangular lattice with one flux line displaced from $z=z_0$ to $z=\zeta_0$, by \begin{equation} \label{displaced} \psi_{\mbox{\scriptsize \em displaced}}(\hbox{\bf r})=\psi_0(\hbox{\bf r}) \frac{(z-\zeta_0)}{(z-z_0)}. \end{equation} This cancels out the first order zero in $\psi_0$ at $z=z_0$ and replaces it with a first order zero at $z=\zeta_0$. Because this new function has the form of (\ref{product}), it is still within the LLL subspace, even though it is obviously not the configuration of lowest free energy (which is $\psi_0$). (A similar method to this was used by Brandt\cite{Brandt}). Equation (\ref{displaced}) can be generalized to any number of displaced lines, to form order parameters for various defects of the perfect lattice within the LLL subspace. The free energy cost of any such defect is given by integrating over all space the difference in the free energy densities of the defect and the ground state: \begin{equation} \label{defect} \Delta F_{\mbox{\scriptsize \em defect}}= {\left(\frac{\hbar^2}{2m_c}\right)}^{\frac{1}{2}} \frac{{|\alpha_H|}^ {\frac{3}{2}}}{\beta_A\beta_K} \int dh \int d^2r \left\{ \cal{F}\left(\psi_{\mbox{\scriptsize \em defect}} \right) - \cal{F}\left(\psi_0\right) \right\}, \end{equation} where \begin{equation} \label{freeint} \cal{F}(\psi)= -{|{\psi}|}^2 +\frac{1}{2\beta_A} {|{\psi}|}^4 +{\left|\frac{\partial{\psi}}{\partial h}\right|}^2. \end{equation} In all the problems described in this paper, the procedure is to define a defect in the perfect lattice by allowing the positions of some of the vortices to vary with $h$, subject to certain boundary conditions, while the rest of the vortices remain fixed at their positions in the triangular lattice. The total free energy of the defect is then minimized with respect to the positions of the chosen vortices, subject to these boundary conditions. This is done by expanding the free energy density as a polynomial in a set of variables that describe the coordinates in the $x$-$y$ plane of the chosen lines, say $\{\zeta_i(h)\}$ for the vortices labelled by $i$. The coefficients in the expansion will be functions of $x$ and $y$ only (as there is no $h$ dependence of $\psi_0$). These coefficients must be integrated over the $x$-$y$ plane to give the free energy per unit length along $h$, which depends on $\{\zeta_i(h)\}$ and their first derivatives.
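All of the ingredients above are straightforward to evaluate numerically, which is useful for checking the integrals that follow. A minimal sketch (Python with mpmath) is given below; we read the second argument in (\ref{jacobi}) as specifying the nome $q=e^{i\pi\tau/l}$, a convention that reproduces $C=3^{1/8}$ but should be checked against \cite{Eilenberger}:
\begin{verbatim}
import mpmath as mp

l = 1.0                                  # vortex spacing (the unit of length)
tau = l/2 + 1j*l*mp.sqrt(3)/2            # tau = l/2 + i l sqrt(3)/2
eta = l**2*mp.sqrt(3)/2                  # unit-cell area, one flux quantum
q = mp.exp(1j*mp.pi*tau/l)               # nome, |q| = exp(-pi sqrt(3)/2)
C = 3**0.125                             # prefactor C = 3^{1/8}

def psi0(x, y):
    """Abrikosov ground state, Eq. (jacobi)."""
    zz = x + 1j*y
    return C*mp.exp(-mp.pi*y**2/eta)*mp.jtheta(3, mp.pi*zz/l, q)

def psi_displaced(x, y, z0, zeta0):
    """One line moved from z0 to zeta0, Eq. (displaced)."""
    zz = x + 1j*y
    return psi0(x, y)*(zz - zeta0)/(zz - z0)

# numerical check of the normalization <|psi0|^2> = 1, Eq. (normal);
# the cell is parameterized by r = (u + v/2, v sqrt(3)/2), u,v in [0,l],
# and the Jacobian sqrt(3)/2 cancels against eta, leaving 1/l^2
avg = mp.quad(lambda u: mp.quad(
          lambda v: abs(psi0(u + v/2, v*mp.sqrt(3)/2))**2, [0, l]),
      [0, l])/l**2
print(avg)                               # -> 1.0 to numerical accuracy
\end{verbatim}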
The form of the variables $\{\zeta_i(h)\}$ that minimizes the total free energy will be solutions to Euler-Lagrange non-linear differential equations. By solving these the lowest free energy cost of a given type of defect can be found. One of our main approximations is that we do not allow the lines not directly involved in the defect to move in response to the presence of the defect (i.e.\ the relaxation of the surrounding lattice). This may seem like a poor approximation that will seriously overestimate the free energies of the real defects. However, we have found that this is not the case when we allow the nearest neighboring lines to move (as in Section~\ref{sec:relax}), and we believe that relaxation of the lattice will only slightly reduce our values of defect free energy costs. The reason for this is that we are considering a lattice of vortex {\em lines}, which cost energy to tilt with respect to the field direction, so if we consider a localized defect taking place over a small length scale, $L$, then the surrounding lines would have to tilt considerably over the distance $L$ if they are to reduce their interaction energy with the defect lines, and this would cost too much tilt energy. An important general result of our method is that for any defect of the ground state made from changing the positions of $n$ of the vortices, the free energy cost will diverge logarithmically with the total size of the system, unless the $n$ lines {\em move symmetrically} about their `mean midpoint'\cite{Brandt}. That is, if $n$ lines move from their ground state values $\{z_i\}$ to the new positions $\{\zeta_i(h)\}$, then the condition for a non-divergent free energy is: \begin{equation} \label{symcond} \sum_{i=1}^n \zeta_i(h) = \sum_{i=1}^n z_i, \end{equation} i.e.\ if we define the `center' of the defect as the vector sum of the ground state lattice coordinates, $z_i$, then this condition says that the coordinates of the defect lines must always sum to this `center'. An important point to note is the restriction in this derivation to a defect of $n$ lines. If one wanted to consider a defect that by definition could not satisfy the symmetry relation (\ref{symcond}), then a finite free energy could still be obtained, but only by allowing the `relaxation' of other lines in the system. The motion of these extra lines would be such as to restore the symmetry condition for the full set of moving lines, and through this cancel out the logarithmic divergence. However, if it is possible for a simple $n$-line defect to satisfy (\ref{symcond}), then the free energy cost will depend only on the {\em locally} surrounding lattice. We take advantage of this result in the following calculations to reduce the number of free parameters used to describe a given defect. \section{Calculations for Small Defects} \label{sec:small} \subsection{Two Lines Crossing} The first defect to which the above method is applied is the case of two neighboring lines within the Abrikosov lattice moving together to meet at a point. The importance of this defect is that it is the configuration providing an energy barrier to the `cutting' or `reconnecting' of two vortices. The simplest way to estimate the energy of two lines crossing is to keep all the surrounding lines in their original positions in the lattice, and to assume that the two lines move symmetrically towards each other (see Fig~\ref{fig:1}).
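The origin of the symmetry condition can be seen directly in the far-field behaviour of the polynomial ratio that defines a defect: the leading correction to the ratio is $(\sum_i z_i-\sum_i\zeta_i)/z$, and after angular averaging it is the square of this term that leaves a $1/|z|^2$ tail in the free energy density, whose integral diverges logarithmically. A short symbolic sketch for the two-line case (Python with sympy; the symbols are generic placeholders):
\begin{verbatim}
import sympy as sp

z, w = sp.symbols('z w')
z1, z2, t1, t2 = sp.symbols('z1 z2 zeta1 zeta2')

# far-field expansion, w = 1/z, of the two-line polynomial ratio
ratio = (z - t1)*(z - t2)/((z - z1)*(z - z2))
series = sp.series(ratio.subs(z, 1/w), w, 0, 2).removeO()
print(sp.expand(series))
# -> 1 + (z1 + z2 - zeta1 - zeta2)*w; the 1/z term, and with it the
#    logarithmic divergence, cancels exactly when the coordinate sums
#    agree, which is the condition of Eq. (symcond)
\end{verbatim}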
This symmetric motion avoids a divergence in the free energy cost, and results in an order parameter depending on one parameter only. \begin{figure}[htbp] \epsfxsize=15cm \begin{center} \leavevmode\epsfbox{matty1.eps} \\ \caption{ (i) Cross-section of the vortex lattice on the $x$-$y$ plane, showing the path along which two neighboring lines may cross. The ground state positions of vortices are marked by the small circles. (ii) Schematic side-view of the vortex lattice with two lines crossing. The solid lines are in the plane of the paper and the dotted lines represent vortices just behind/in front of this plane. \label{fig:1}} \end{center} \end{figure} The calculation for this simple defect is described in more detail than the other problems in order to demonstrate the general procedure of these calculations. If we take the origin, $O$, of the complex $z$-plane to be at the midpoint of the two lines, then we have two vortices (= zeros in the order parameter) which move from $z=\pm {l}/{2}$ to $z=\pm a(h)$, with the boundary conditions $a(\pm\infty)=\pm{l}/{2}$. Using (\ref{displaced}), the LLL order parameter for two lines crossing may be written \begin{equation} \label{abriko} \psi_{2}(x,y,h)=\psi_0(x,y)\frac{(z+a(h))(z-a(h))}{(z+\frac{l}{2}) (z-\frac{l}{2})}. \end{equation} The subscript, $2$, indicates the number of lines allowed to deviate from their positions in the ground state triangular lattice. It is now necessary to find an expression for the free energy change of the displaced lines compared to the undistorted lattice as a functional of $a(h)$, and then to minimize this functional to find the correct shape of the crossing lines (i.e.\ the configuration of lowest free energy). Looking at (\ref{defect}) and (\ref{freeint}) one sees that we need to expand the differences ${|\psi_{2}|}^2 -{|\psi_0|}^2$, and ${|\psi_{2}|}^4 -{|\psi_0|}^4$, as well as ${\left|{\partial\psi_2}/{\partial h}\right|}^2 $ in terms of $a$. The coefficients of these expressions are then integrated over the $x$-$y$ plane, which leads to a free energy per unit length along the $h$-direction of the form: \begin{eqnarray} \label{poly} f_2\left\{ a(h)\right\} &\equiv& \int \frac{d^2r}{l^2} \left\{ \cal{F}(\psi_2) - \cal{F}(\psi_{0}) \right\}\nonumber\\ &=&\sum_{i=0}^4{c^{(2)}_ia^{2i}} +c^{(2)}a^2{\left(\frac{da}{dh}\right)}^2, \end{eqnarray} with the coefficients $c^{(2)}_i$ equal to the $c^{(2)}_{i0}$ given in Table~\ref{tab:c2}, and $c^{(2)}\simeq 94.20/l^4$. For example, the coefficient $c^{(2)}_1$ was found by calculating: \begin{eqnarray} c^{(2)}_1&=&2\int \frac{d^2r}{l^2}\, \frac{{|\psi_0|}^2 Re\left( z^2\right)}{{|z^2-(\frac{l}{2})^2|}^2}\: -\frac{2}{\beta_A}\int \frac{d^2r}{l^2}\, \frac{{|\psi_0|}^4 Re\left( z^2\right){|z|}^2}{{|z^2-(\frac{l}{2})^2|}^4}\\ &\simeq& -5\cdot 201 /l^2. \end{eqnarray} Note that all of the integrals that make up these coefficients must be convergent, as the denominator of each integrand grows at large $z$ at least two powers of $|z|$ faster than the numerator (there is no logarithmically divergent coefficient, as the symmetry condition (\ref{symcond}) is satisfied). \begin{figure}[tbp] \epsfxsize=8cm \begin{center} \leavevmode\epsfbox{gr1.eps} \caption{ The potential term, $\sum_i c_i^{(2)}a^{2i}$, for two lines crossing. \label{gr:1}} \end{center} \end{figure} The first term in (\ref{poly}) represents the potential energy cost per unit length with the two lines displaced but still straight. The form of this term is shown in Fig~\ref{gr:1}.
The second term is the tilt energy of the two lines. It increases with the distance $a$, which is consistent with an attractive force between the anti-parallel components of the vortex segments in the two lines. One could think of the equation (\ref{poly}) in analogy with a Lagrangian density in classical mechanics, of the general form $\cal{L}=T-V$ for a body moving in one dimension, $a$, and with `time' equivalent to the $h$-direction. The total free energy is proportional to the integral over all $h$ of (\ref{poly}). Applying the Euler-Lagrange equation for stationary values of a functional, and integrating once, one arrives at the equation for the form of $a(h)$ that minimizes the free energy change, $f_2-a'({\partial f_2}/{\partial a'})=\mbox{\em const}$. Substituting (\ref{poly}) and applying the boundary conditions gives \begin{equation} c^{(2)}a^2{\left(\frac{da}{dh}\right)}^2= \sum_{i=0}^4{c^{(2)}_ia^{2i}}. \end{equation} This can now be integrated up from the point of crossing, $a=0$ and (say) $h=0$, to give the correct form of $a(h)$. As $h\rightarrow 0$ this will tend to $a(h)\simeq (4c_0^{(2)}/c^{(2)})^{1/4} \sqrt{h}$ (this form at small $a$ has been found before\cite{Wilkin}). The result is shown in Fig~\ref{gr:2}. The stationary form of $a(h)$ can now be substituted into (\ref{poly}) and (\ref{defect}), and the integral over $h$ performed to find the total free energy change for crossing. \begin{figure}[htbp] \epsfxsize=8cm \begin{center} \leavevmode\epsfbox{grah.eps} \caption{ The form of $a(h)$ that minimizes the free energy subject to the crossing boundary condition that $a(0)=0$. The expected form as $h\rightarrow 0$ is shown.\label{gr:2} } \end{center} \end{figure} We find $\int dh\; f\!\left\{ a(h)\right\}\simeq 2\cdot 32$, which gives \begin{equation} \label{change} \Delta F_{2\times} = {\left(\frac{\hbar^2}{2m_c}\right)}^{\frac{1}{2}} \frac{{|\alpha_H|}^ {\frac{3}{2}}}{\beta_K\beta_A } \frac{2\Phi_0}{\sqrt{3}B} \times 2\cdot 32. \end{equation} To put this in a simpler form, we use the dimensionless factor $\alpha_T$ defined by \begin{equation} \label{alphat} \alpha_H={\left( \frac{\beta_Ke\mu_0Hk_BT\sqrt{2m_c}}{4\pi\hbar^2}\right) } ^{\frac{2}{3}}\alpha_T. \end{equation} Substituting (\ref{alphat}) into (\ref{change}) gives \begin{equation} \label{result} \Delta F_{2\times}=0\cdot 58\;k_BT{|\alpha_T|}^{\frac{3}{2}}. \end{equation} All of the following free energies will also be quoted in this form. However, there is at present a variety of different units used in the literature. For comparison, we can also write $\alpha_T$ in terms of the temperature, the field, and the Ginzburg number $Gi$. (The Ginzburg number is a useful parameter that describes the strength of thermal fluctuations of the superconducting order parameter \cite{Blatter}). We use the definition of $Gi$ given by \cite{Blatter}: \begin{equation} Gi=\frac{1}{2{(8\pi)}^2}{\left( \frac{2k_BT_c} {\mu_0{H_c}^2(0)\xi_{ab}^2(0)\xi_c(0)}\right)}^2. \end{equation} The factor $(1/8\pi)^2$ enters as we change the units of magnetic energy from c.g.s.\ to S.I. Using standard relations for the coherence length and critical fields we find that: \begin{equation} {|\alpha_T|}^\frac{3}{2}=\frac{\sqrt{2}{(1-t+\hbox{h})}^\frac{3}{2}} {{Gi}^{1/2}\hbox{h}t}. \end{equation} $t$ and $\hbox{h}$ are the dimensionless temperature and field $t=T/T_c$ and $\hbox{h}=H/H_{c2}(0)$ where $H_{c2}(0)$ denotes the straight line extrapolation to zero temperature of the $H_{c2}$ line when the field is applied along the $c$-axis. 
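To get a feeling for the size of this barrier, (\ref{result}) is easily evaluated from $t$, $\hbox{h}$ and $Gi$. A minimal sketch follows; the parameter values are purely illustrative (a Ginzburg number of order $10^{-2}$ is representative of a strongly fluctuating high-$T_c$ material):
\begin{verbatim}
import math

def alpha_T_32(t, h, Gi):
    """|alpha_T|^{3/2} = sqrt(2) (1 - t + h)^{3/2} / (Gi^{1/2} h t)."""
    return math.sqrt(2)*(1 - t + h)**1.5/(math.sqrt(Gi)*h*t)

def crossing_barrier(t, h, Gi):
    """Two-line crossing barrier in units of k_B T, Eq. (result)."""
    return 0.58*alpha_T_32(t, h, Gi)

# illustrative values: Gi = 1e-2, t = T/Tc = 0.8, h = H/Hc2(0) = 0.1
print(crossing_barrier(0.8, 0.1, 1e-2))   # ~17: a barrier of order 10 k_B T
\end{verbatim}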
\subsection{Two Lines Crossing with Relaxation of Nearest Lines} \label{sec:relax} The result in (\ref{result}) is only an upper bound on the actual energy for the two lines to cross. This is because we have ignored any relaxation that the remaining lines in the lattice may undergo in reaction to the crossing so as to decrease the free energy cost of the defect. In order to test how good an estimate this result is, we now investigate the extent to which allowing the surrounding vortices to move may change the free energy. It seems likely that the lines closest to the crossing region will move the most in this defect, and will therefore have the largest effect on the crossing energy. As a second approximation to (\ref{result}), a calculation was performed where the two lines closest to the crossing center, above and below, were allowed to move symmetrically in response to the crossing (see Fig~\ref{fig:2}). The new order parameter with this allowed motion of the four lines is: \begin{equation} \psi_{4}(x,y,h)=\psi_0(x,y)\frac{(z^2-{a(h)}^2)(z^2+{b(h)}^2)} {(z^2-{(\frac{l}{2})}^2) (z^2+{(\frac{\sqrt{3}l}{2})}^2)}. \end{equation} \begin{figure}[tbp] \epsfxsize=10cm \begin{center} \leavevmode\epsfbox{matty2.eps} \caption{ Cross-sectional view of the lattice when two lines are allowed to cross, and the two nearest neighbors move in response. \label{fig:2}} \end{center} \end{figure} The boundary conditions of this problem are that $a(\pm \infty )=l/2$, $b(\pm \infty )=l\sqrt{3}/2$, and $a(0)=0$. Following the general procedure of the first calculation, the free energy per unit length as a function of $a(h)$ and $b(h)$ was found to be: \begin{eqnarray} \label{freeex} f_4\{ a(h),b(h)\} &=& \sum_{i,j=0}^4 c^{(4)}_{ij}a^{2i}b^{2j} +a^2{\left( \frac{da}{dh}\right) }^2\sum_{i=0}^2u_ib^{2i}\nonumber\\ &&+b^2{\left( \frac{db}{dh}\right) }^2\sum_{i=0}^2v_ia^{2i} +ab\frac{da}{dh}\frac{db}{dh}\sum_{i,j=0}^1w_{ij}a^{2i}b^{2j}. \end{eqnarray} The coefficients $c^{(4)}_{ij}$, $u_i$, $v_i$, and $w_{ij}$ are given in Table~\ref{tab:c4}. Again the first term can be viewed as a potential term due to the interaction between straight lines. Its form in the $a$-$b$ plane is shown in Fig~\ref{gr:3}. The other terms describe the tilt energies of the lines, with a fairly complex dependence on the positions of the lines. Looking at the form of the potential term $C_4(a,b)=\sum_{i,j=0}^4 c^{(4)}_{ij}a^{2i}b^{2j}$ in Fig~\ref{gr:3}, we find that the interaction of straight lines in the LLL Abrikosov lattice is not at all as we might expect. One would think that parallel vortices always have a repulsive force between each other, but instead Fig~\ref{gr:3} shows the surprising feature that if the two crossing lines are placed at the midpoint, the two nearest neighboring lines are actually pulled towards the midpoint! (i.e.\ the free energy is reduced by moving towards the center.) It is impossible to explain this effect by any simple two-body interaction between vortex-lines. The minimum value of $C_4(a,b)$ when $a=0$ is $C_4(0,0.73)\simeq 0.54$, which is considerably less than the value when the two extra lines remain at their original positions, $C_4(0,\sqrt{3}/2)\simeq 0.79$. Therefore, allowing nearby lines to move will make a large difference to the potential terms for two lines crossing. However, one must also take into account the bending terms of all the lines involved in the full solution.
\begin{figure}[tbp] \epsfxsize=8cm \begin{center} \leavevmode\epsfbox{ grex.eps} \caption{ The potential term, $C_4(a,b)=\sum_{i,j} c_{ij}^{(4)}a^{2i}b^{2j}$, for two lines crossing with the two nearest other lines also allowed to move. The dashed line shows the actual path taken in the full solution. \label{gr:3}} \end{center} \end{figure} Now, to find the configuration of $a(h)$ and $b(h)$ that minimizes the free energy cost of two lines crossing, we have to solve two coupled Euler-Lagrange equations of the form: \begin{equation}\label{euler} \frac{\partial f_4}{\partial x_i}-\frac{d}{dh}\left( \frac{\partial f_4} {\partial x_i'} \right) =0. \end{equation} Substituting (\ref{freeex}) into (\ref{euler}) leads to two coupled equations that were solved numerically with a relaxation method, using a software package. The resulting configuration from the solution to (\ref{euler}) is only slightly different from the configuration when the nearby lines are kept fixed. The two nearest lines move in by $\sim 0.03\, l$ at $h=0$ (the actual path is shown in Fig~\ref{gr:3}). The difference in energy is very small, giving the same answer as (\ref{result}) to the accuracy quoted. As we had expected the movement of the lines nearest to those crossing to affect the free energy change the most, it seems that the initial approximation of keeping all the surrounding lines fixed is quite good, and we are fairly confident that the result (\ref{result}) is close to the exact answer. The reason that the relaxation of surrounding lines has so little effect on the crossing energy is that it costs too much free energy to bend these extra lines over the short distance along $h$ within which crossing takes place, so that the minimum in the potential term is never reached. This is a consequence of having a lattice of lines rather than pancake vortices. \subsection{Search for a Two Line Braid} The other extension for the two-line problem is not to restrict the lines to moving in one direction only but to allow for the possibility that the two lines may `braid' around each other (by braid it is meant that the pair of lines twist around each other by half a rotation as we move along the direction of the lines). One of the original motivations for calculating the crossing energy was that it was thought to be an energy barrier between two topologically distinct configurations. For example, if one considers the two distinct braid configurations consisting of either a twist by $+\pi$ or by $-\pi$, then to go between the two states one has to pass through a configuration with two lines passing through each other. In fact, we show that there are no stable two-line braid defects of the LLL Abrikosov solution (although the two-line crossing still remains an important energy barrier between larger entanglements). We again allow the two neighboring lines to move symmetrically with respect to each other, but now anywhere in the $x$-$y$ plane. The positions of the two lines are given by $\left( X(h),Y(h)\right)$ and $\left( -X(h),-Y(h)\right)$ respectively (see Fig~\ref{fig:3}). The new order parameter is given by \begin{equation} \psi_{2}(x,y,h)=\psi_0(x,y)\frac{(z^2-{\left( X+iY\right)}^2)} {(z^2-{(\frac{l}{2})}^2)}. \end{equation} \begin{figure}[htbp] \epsfxsize=10cm \begin{center} \leavevmode\epsfbox{matty3.eps} \caption{ Cross-section of the vortex-lattice with two lines allowed to braid around each other.
\label{fig:3}} \end{center} \end{figure} The boundary conditions for a braid are that $X(\infty )=l/2$, $X(-\infty )=-l/2$, $Y(\pm\infty)=0$. With a calculation similar to the one-parameter case, the free energy as a function of $X$ and $Y$ can be derived: \begin{equation} \label{freebr} f_2\{ X(h),Y(h)\} =\sum_{i,j=0}^4 c^{(2)}_{ij}X^{2i}Y^{2j} +c^{(2)}\left( X^2+Y^2\right) \left( {\left( \frac{dX}{dh}\right) }^2 + {\left(\frac{dY}{dh}\right)}^2 \right), \end{equation} where $c^{(2)}_{ij}$ are given in Table~\ref{tab:c2} (setting $Y(h)=0$, $X(h)=a(h)$ reduces (\ref{freebr}) to (\ref{poly})). The first term $C_2(X,Y)=\sum_{i,j=0}^4 c^{(2)}_{ij}X^{2i}Y^{2j}$ is the interaction term without bending of the lines, shown in Fig~\ref{gr:4}. We could now go on as before and try to solve the Euler-Lagrange equations to minimize the integral of (\ref{freebr}) subject to the braid boundary conditions. Inspection of Fig~\ref{gr:4} reveals that there is no point in this-- there can be no braid solution for this case! This is because the potential part of the free energy is always increasing as we increase $Y$ away from the midpoint (the midpoint is a saddle-point of $C_2(X,Y)$). As the bending term will increase with $(X^2+Y^2)$ for any given tilt, it is clear that the free energy cost of any two-line braid will increase continuously as the path of the braid moves away from the midpoint; i.e.\ there are no stationary configurations for the braid boundary conditions other than the direct crossing already discussed. \begin{figure}[htbp] \epsfxsize=8cm \begin{center} \leavevmode\epsfbox{ grxy.eps} \caption{ The potential term, $C_2=\sum_{i,j} c_{ij}^{(2)}X^{2i}Y^{2j}$, for two lines allowed to braid or cross. A braid will correspond to going from the minimum at $(0.5,0)$ around to the minimum at $(-0.5,0)$. The path with lowest potential will be along the $X$-axis, which corresponds to a crossing defect. \label{gr:4}} \end{center} \end{figure} One might find it extremely surprising that two straight vortices placed at the same point will not repel each other along the $Y$ direction as well as the $X$ direction, even in the presence of the surrounding lattice. This seems to be another non-intuitive feature of the way vortices interact in the LLL. It is possible that the absence of a braid solution is an artefact of our approximation of not allowing the surrounding vortices to move. However, the result of Section~\ref{sec:relax} suggests that these surrounding lines should not have a great effect on the two-line defect. Also, a calculation similar to that in Section~\ref{sec:relax} was performed where the two nearest neighboring lines were allowed to move in response to the motion of the two lines along $Y$. This still gave the same result that the potential term always increases as $Y$ increases. \subsection{Three- and Six-Line Defects} Due to the symmetries of the triangular lattice, there are two other forms of braid defects which can be handled in the same way as the two-line defect: allowing motion of the three lines that form the smallest triangle (see Fig~\ref{fig:4}), or of the six lines that form a hexagon surrounding one central line (Fig~\ref{fig:5}). \begin{figure}[htbp] \epsfxsize=10cm \begin{center} \leavevmode\epsfbox{matty4.eps} \vspace{.3in} \caption{ Cross-section of the vortex-lattice with three lines allowed to braid around each other. The center of the defect is labelled $z_t$.
\label{fig:4}} \end{center} \end{figure} \begin{figure}[htbp] \epsfxsize=10cm \begin{center} \leavevmode\epsfbox{matty5.eps} \caption{ Cross-section of the vortex-lattice with six lines allowed to braid around each other. The position of the central vortex is $z_v$. \label{fig:5}} \end{center} \end{figure} If we assume that in the lowest energy configurations the lines in these defects move symmetrically about their center, we can write the order parameter of the defects as: \begin{eqnarray} \psi_{3}(x,y,h)&=&\psi_0(x,y)\frac{({(z-z_t)}^3+i{R_3}^3e^{i3\theta_3}) } {({(z-z_t)}^3+i{(\frac{l}{\sqrt{3}})}^3)},\\ \psi_{6}(x,y,h)&=&\psi_0(x,y)\frac{({(z-z_v)}^6-{R_6}^6e^{i6\theta_6})} {({(z-z_v)}^6-l^6)}. \end{eqnarray} The free energy costs per unit length are of the form: \begin{equation} \label{freen} f_n\{{R_n}(h),\theta_n(h)\} =\sum_{i,j,k} c^{(n)}_{ijk}{R_n}^{3i} \cos^jn\theta_n \sin^kn\theta_n +c^{(n)}\left( {R_n}^{2n-2}{\left( \frac{d{R_n}}{dh}\right) }^2 + {R_n}^{2n}{\left(\frac{d\theta_n}{dh}\right)}^2 \right), \end{equation} with $n=3,6$. The constants $c^{(3)}_{ijk}$ and $c^{(6)}_{ijk}$ are given in Table~\ref{tab:c3}. $c^{(3)}\simeq 317.9/l^6$, and $c^{(6)}\simeq 117.7/l^{12}$. The conditions for a braid are $\theta_n(\infty)=0$, $\theta_n(-\infty)=2\pi/n$ and ${R_n}(\pm\infty )$ equal to the ground state value. Again, to minimize the total free energy of given defects, two coupled E-L equations must be satisfied. \begin{figure}[tbp] \epsfxsize=10cm \begin{center} \leavevmode\epsfbox{matty6.eps} \vspace{0.5in} \caption{ Cross-section of the vortex-lattice showing the path taken in a three-line crossing defect. \label{fig:6}} \end{center} \end{figure} \begin{figure}[tbp] \epsfxsize=10cm \begin{center} \leavevmode\epsfbox{matty7.eps} \caption{ Cross-section of the vortex-lattice showing the path taken in a six-line crossing defect. \label{fig:7}} \end{center} \end{figure} The results of the numerical solutions to the E-L equations are: (i) There is no braid solution for the three-line defect, other than where the three lines meet at the center of the triangle (as in Fig~\ref{fig:6}). (ii) The free energy cost of the three lines `crossing' is {\em lower} than the cost of two lines crossing, the result being: \begin{equation} \Delta F_{3\times}=0\cdot 51\;k_BT{|\alpha_T|}^{\frac{3}{2}}. \end{equation} (iii) There is a solution for a six-line braid with an energy cost of: \begin{equation} \label{res6br} \Delta F_{6\mbox{\scriptsize\em br}}=2\cdot 19\;k_BT{|\alpha_T|}^{\frac{3}{2}}. \end{equation} (iv) There is also a solution for the six-line defect where the lines all meet at a point on the center line (see Fig~\ref{fig:7}), and this crossing defect has a lower free energy cost than the six-line braid: \begin{equation} \Delta F_{6\times}=1\cdot 38\;k_BT{|\alpha_T|}^{\frac{3}{2}}. \end{equation} It may seem surprising that we have found the energy for three lines to meet at a point to be lower than the energy for two lines. This is due to the potential term when three lines meet, $c_{000}^{(3)}\simeq 0.54$, being lower than the potential term for two lines at their midpoint, $c_{00}^{(2)}\simeq 0.79$. Note that both of these values are for when the surrounding lattice remains fixed, so it is quite possible that relaxation of the surrounding lattice will reverse this inequality in the potentials. As was shown in Section~\ref{sec:relax}, allowing the surrounding lines to move may decrease the potential at the crossing point significantly.
However, we also showed in Section~\ref{sec:relax} that the full solution including the bending terms in the free energy does not allow the surrounding lines to relax much, and so we believe that this result of $\Delta F_{3\times}<\Delta F_{2\times}$ will hold even when all surrounding lines are taken into account. \subsection{Discussion on Two-, Three- and Six-Line Braid Solutions} \label{sec:disc} To compare the calculations for the above defects, and to understand what is required for braid solutions to exist, it is useful to cast the free energies (\ref{freebr}, \ref{freen}) into the same form. We do this by making the transformations $\xi ={(X+iY)}^2$, $\xi= {R_3}^3\exp (i3\theta_3)$, or $\xi= {R_6}^6\exp (i6\theta_6)$ respectively. This gives a free energy per unit length in terms of the complex variable $\xi$: \begin{eqnarray} \label{freexi} f_n\{\xi (h)\}&=& \sum_{ij=0}^4 d^{(n)}_{ij} {\left[Re(\xi )\right]}^i {\left[Im(\xi )\right]}^j +d^{(n)}\left( {\left( \frac{d \, Re(\xi)}{dh}\right) }^2 + {\left( \frac{d \, Im(\xi)}{dh}\right) }^2 \right)\\ &=&D_n(\xi) +d^{(n)}{\left| \frac{d \xi}{dh}\right| }^2 ,\label{freexi2} \end{eqnarray} for $n=2,3,6$. With this substitution, the two-, three- and six-line braids all have the same boundary conditions: $\xi(\pm \infty)=x_n$ where $x_n$ is real, and $\xi$ goes around the origin of the complex plane once as $h$ goes from $-\infty$ to $+\infty$. $x_2=(l/2)^2$, $x_3=(\sqrt{3}l/2)^3$, and $x_6=l^6$. In each case, $\xi=x_n$ corresponds to the only minimum of $D_n(\xi)$ (the ground state configuration). In the analogy with classical dynamics, the solutions to these boundary conditions which minimize the integral over $h$ of (\ref{freexi2}) correspond to the motion of a particle of mass $m=2d^{(n)}$ in a two-dimensional potential $V(x,y)=-D_n(x+iy)$. \begin{figure}[htbp] \epsfxsize=7cm \begin{center} \leavevmode\epsfbox{ grxy2.eps} \caption{ The potential term, $D_2(\xi )=\sum_{i,j} d_{ij}^{(2)} {\left[ Re(\xi )\right]}^{i}{\left[ Im(\xi )\right]}^{j}$, where $\xi=(X+iY)^2$. A two-line braid corresponds to starting and ending at the minimum at $(0.25,0)$ while going around the origin once. The path with the lowest potential will correspond to a crossing defect. \label{gr:5}} \end{center} \end{figure} \begin{figure}[htbp] \epsfxsize=7cm \begin{center} \leavevmode\epsfbox{ grtri.eps} \caption{ The potential term, $D_3(\xi )=\sum_{i,j} d_{ij}^{(3)} {\left[ Re(\xi )\right]}^{i}{\left[ Im(\xi )\right]}^{j}$, where $\xi={R_3}^3\exp{(i3\theta )}$. Again, the three-line braid path with the lowest potential will correspond to a crossing defect. \label{gr:6}} \end{center} \end{figure} \begin{figure}[htbp] \epsfxsize=7cm \begin{center} \leavevmode\epsfbox{ grhex.eps} \caption{ The potential term, $D_6(\xi )=\sum_{i,j} d_{ij}^{(6)} {\left[ Re(\xi )\right]}^{i}{\left[ Im(\xi )\right]}^{j}$, where $\xi={R_6}^6\exp{(i6\theta )}$. Now there is a `valley path' in the potential that corresponds to a braid without the lines meeting at the center. \label{gr:7}} \end{center} \end{figure} It is clear that a braid solution where the lines in the defect do not meet at their center will only exist if there is a `valley path' in the $\xi$-plane for the function $D_n(\xi)$ which goes around the origin, starting and finishing at $\xi=x_n$.
Otherwise any braid configuration will be able to continuously lower its total free energy by allowing $\xi$ to move nearer the origin as it goes around, and the only solution will be a crossing defect (where $\xi$ goes to the origin and back along the real axis). By looking at the forms of $D_n(\xi)$ in Figs~\ref{gr:5},~\ref{gr:6}~and~\ref{gr:7} one can see why there is no braid solution for $n=2,3$, as there are no valley paths. For $n=6$ the presence of a braid solution is explained by the almost circular valley path in $D_6$. The presence of a valley path is a {\em minimum} requirement for a braid solution to exist. It does not guarantee that this braid will have a lower energy cost than the corresponding crossing defect, as our results for $n=6$ show. Even though the potential term $D_6(\xi )$ is higher at the origin than at any point on the braid path, the six-line defect can still achieve a lower total free energy by sending $\xi$ to the origin and back than by taking $\xi$ around the valley, as the bending term in (\ref{freexi2}) will be lower when there is less of a change in $|\xi|$. For the braid boundary conditions, this means that for a defect of given length along $h$, the bending term will be lower when the length of the path in the $\xi$ plane is smaller. Thus there is a balance here between the potential term, which favours the braid defect around the valley, and the bending term, which favours the crossing defect. In this case the balance gives the lower total free energy to the crossing defect, although stationary solutions exist for both defects, with a free energy barrier between the two. \subsection{The Twelve-Line Braid Defect} So far in this paper, the symmetry of the configurations has allowed us to reduce the problems to just two coupled equations in two parameters, describing the two-dimensional motion of a single `particle'. In order to consider larger braids than the 2-, 3- and 6-line defects, one now has to include more degrees of freedom. The simplest problem we can think of with more than six lines involved is the `twelve-line hexagonal defect' (see Fig~\ref{fig:8}). We can still take advantage of the symmetry of this configuration by assuming that it always costs the least free energy when the six lines on the vertices of the hexagon move symmetrically about the hexagon's center, and the six lines on the sides of the hexagon also move symmetrically. This allows us to reduce the number of degrees of freedom in this defect to four (corresponding to the two-dimensional motion of two particles in the E-L equations). \begin{figure}[htbp] \epsfxsize=10cm \begin{center} \leavevmode\epsfbox{matty8.eps} \vspace{0.5in}\caption{ Cross-section of the vortex-lattice with twelve lines allowed to move within a defect. The dashed line shows a possible braid path. The dotted line shows the path of a crossing defect. \label{fig:8}} \end{center} \end{figure} In the ground state, the complex positions of the `edge lines' are given by the roots of ${(z-z_v)}^6-z_1^6=0$, and the `corner lines' by ${(z-z_v)}^6-z_2^6=0$, where $z_1=2l$ and $z_2=\sqrt{3}l\exp (i\pi /6)$. If the complex positions relative to $z_v$ move from $(z_1,z_2)$ to the new positions $(\zeta_1,\zeta_2)$, then the order parameter of the system can be written: \begin{equation} \psi_{12}(x,y,h)=\psi_0(x,y)\frac{({(z-z_v)}^6-{\zeta_1}^6) ({(z-z_v)}^6-{\zeta_2}^6)} {({(z-z_v)}^6-{z_1}^6)({(z-z_v)}^6-{z_2}^6)}.
\end{equation} To find the free energy of twelve-line defects, the same procedure is followed where we substitute into (\ref{freeint}), then integrate over the $x$-$y$ plane, to get a free energy per unit length as a function of the complex $\zeta_1(h)$ and $\zeta_2(h)$. We omit the details of the calculation, which requires a lot of space but results in four coupled non-linear second order equations in the real and imaginary parts of ${\zeta_1}^6$ and ${\zeta_2}^6$. This system of four E-L equations was numerically solved subject to the braid boundary conditions. The actual path of the braid of lowest free energy was close to the hexagonal perimeter, and this solution gave a total free energy cost of: \begin{equation} \label{res12br} \Delta F_{12br}\simeq 6\cdot 5\;k_BT{|\alpha_T|}^{\frac{3}{2}}. \end{equation} A crossing solution was also looked for, where all twelve lines meet at a point on the central line (at $z=z_v$), moving along the dotted line in Fig~\ref{fig:8}. For the six line case this form of defect has a lower energy cost than a braid. For twelve lines crossing the result is \begin{equation} \label{res12x} \Delta F_{12\times}\simeq 8\cdot 6\;k_BT{|\alpha_T|}^{\frac{3}{2}}. \end{equation} This is higher than the twelve line braid free energy. We expect that all larger braids will have lower energies than the corresponding crossing defects. \section{Calculations for Large Defects} \label{sec:large} In this section, the energy cost is calculated for an infinite straight screw dislocation (see Fig~\ref{fig:9}), and also for two opposite screw dislocations a large distance apart. The results of these are used to give the energy cost for very large braids. The single screw dislocation may be a first step towards calculating the energy costs of the screw-edge dislocation loops that are thought to be important in describing the dynamics of the Abrikosov crystal\cite{Labusch,Nelson2}. The creation/growth of large braid defects has been proposed as a mechanism for the longitudinal resistivity in type-II superconductors \cite{Fisher,Feigel}. \subsection{Infinite Screw Dislocation} \label{sec:screw} The order parameter of the infinite screw defect in Fig~\ref{fig:9} will depend on only one free parameter, $s(h)$: \begin{equation} \psi_{s}(x,y,h)=\psi_0(x,y)\frac{\sin{\pi (z-s(h))}}{\sin{\pi z}}, \end{equation} where the origin of the $x$-$y$ plane is now taken to be at the ground state position of one of the lines involved in this defect. This order parameter replaces the first order zeros of $\psi_0$ at $z=nl$ with first order zeros at $z=nl+s(h)$. \begin{figure}[htbp] \epsfxsize=15cm \begin{center} \leavevmode\epsfbox{matty9.eps} \caption{ (i) Cross-section of the lattice in the $x$-$y$ plane showing the path of an infinite straight screw defect. (ii) Schematic side-view of the same defect. \label{fig:9}} \end{center} \end{figure} The usual procedure is now followed of integrating the difference of the free energy density over the $x$-$y$ plane. All of the integrands are periodic in $x$, giving terms which all diverge linearly with the system size in the $x$-direction, $L_x$. Also, the bending term contains an integral which tends to a constant for large $y$, while all other terms are convergent when integrated over $y$. 
This leads to the result for the free energy per unit length as a function of $s(h)$: \begin{equation} \label{freescr} f_s\{ s(h)\} =\sum_{i=1}^2 \frac{L_x}{l} c_i^{(s)} \sin^{2i}\left(\frac{\pi s}{l}\right) +\frac{L_xL_y}{l^2} c^{(s)} {\left( \frac{ds}{dh}\right)}^2, \end{equation} with $c_1^{(s)}\simeq 0.2883$, $c_2^{(s)}\simeq 2.288\times 10^{-4}$ and $c^{(s)}=\pi^2 \langle {\left|\psi_0\right|}^2\rangle \simeq 9.8696$. To find the general form of $s(h)$ that minimizes the integral over $h$ of (\ref{freescr}), for any given system size, we make the substitution $h\rightarrow \tilde{h}=h/\xi_s$ with $\xi_s=\sqrt{L_y/l}$. This gives a free energy density as a function of $s(\tilde{h})$ with no dependence on $L_y$ and a linear dependence on $L_x$. The resulting E-L equation in $s(\tilde{h})$ is independent of $L_x$ and $L_y$, so increasing $L_y$ increases the scale of the screw defect along the $h$-direction, $\xi_s$, and the resulting free energy cost (found by integrating $f_s\{s(\tilde{h})\}$ over all $\tilde{h}$ with $s(\tilde{h})$ the solution to the E-L equation) is proportional to $L_x\sqrt{L_y}$: \begin{equation} \Delta F_{s}\simeq 0\cdot 54 \frac{L_x\sqrt{L_y}}{l^{3/2}} \;k_BT{|\alpha_T|}^{\frac{3}{2}}. \end{equation} That is, the free energy cost increases linearly with the number of lines involved in the defect ($n\propto L_x$) as one would expect, but it also has a divergence as the size of the system increases in the direction perpendicular to the defect. This is yet another result that is a special feature of the long-range vortex-vortex interactions in the lowest Landau level, and one that would not occur with short-range two-body interactions as in the London limit. The long-range dependence of the bending term in the screw defect has important implications for the free energy cost of large braids, which are developed in the next section. \subsection{Two Opposite Screws} A similar calculation, which removes this extra divergence of the energy with $L_y$, is that of two `opposite' screws a large distance $W_y=n\sqrt{3}l$ apart. This defect has the order parameter: \begin{equation} \psi_{2s}(x,y,h)=\psi_0(x,y)\frac{\sin{\pi (z-\frac{nl\sqrt{3}}{2}i-s(h))} \sin{\pi (z+\frac{nl\sqrt{3}}{2}i+s(h))}} {\sin{\pi (z-\frac{nl\sqrt{3}}{2}i)}\sin{\pi (z+\frac{nl\sqrt{3}}{2}i)}}, \end{equation} where the origin in $(x,y)$ is now at a vortex half-way between the two opposite screws. For large enough $n$, the potential terms in the free energy cost for a given $s(h)$ become just twice the corresponding terms for the single screw. The bending term now contains an integral that diverges linearly with $n$ rather than $L_y$. The actual result is \begin{equation} f_{2s}\{ s(h)\} =\sum_{i=1}^2 2\frac{L_x}{l} c_i^{(s)} \sin^{2i}\left(\frac{\pi s}{l}\right) +4n\sqrt{3}\frac{L_x}{l} c^{(s)} {\left( \frac{ds}{dh}\right)}^2. \end{equation} To solve the resulting E-L equation we make a similar transformation to before: $h\rightarrow \tilde{h}=h/{(2n\sqrt{3})}^{1/2}$. This leads to the same E-L equation as for a single screw, and the free energy cost of this `double screw' is: \begin{equation} \Delta F_{2s}\simeq 1\cdot 5 \frac{L_x\sqrt{W_y}}{l^{3/2}} \;k_BT{|\alpha_T|}^{\frac{3}{2}} . \end{equation} As might be deduced from the result for the single screw, the energy of a large double screw is proportional to the square root of the distance between the two screws.
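Both numerical prefactors can be reproduced from the first integrals alone: on the stationary solution the tilt term equals the potential term, so the total cost reduces to a single quadrature over $s$. The sketch below does this; the overall conversion factor of $0\cdot 25\,k_BT{|\alpha_T|}^{3/2}$ per dimensionless unit is read off by comparing (\ref{change}) with (\ref{result}), and the same quadrature anticipates the large-braid coefficient of (\ref{reslarge}) below:
\begin{verbatim}
from math import sin, pi, sqrt

# constants quoted in the text
c1, c2 = 0.2883, 2.288e-4           # c_1^(s) and c_2^(s)
cs = 9.8696                         # c^(s) = pi^2 <|psi_0|^2>
unit = 0.58/2.32                    # = 0.25, from Eqs. (change) and (result)

# on the stationary solution tilt = potential, so the cost reduces to
# I0 = int_0^1 sqrt(c^(s) V(u)) du, with u = s/l (midpoint rule below)
N = 100000
I0 = sum(sqrt(cs*(c1*sin(pi*(k + 0.5)/N)**2
                  + c2*sin(pi*(k + 0.5)/N)**4)) for k in range(N))/N

print(unit*2*I0)          # single screw: ~0.54 (times Lx sqrt(Ly)/l^{3/2})
print(unit*4*I0*sqrt(2))  # double screw: ~1.5  (times Lx sqrt(Wy)/l^{3/2})
print(unit*4*I0)          # large braid, Eq. (reslarge) below: ~1.1
\end{verbatim}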
The result for the double screw also suggests that, at least where the LLL approximation is valid, the energy cost of large braids will not simply be proportional to the circumference of the braid (or equivalently to its radius, or to the number of lines in it), as has often been suggested. This is shown in the next section. \subsection{A Limiting Form For Large Braids} We now show that as a braid of general shape becomes very large, the free energy cost has a simple form depending only on the area enclosed by the braid and the length of the perimeter of the defect. We consider a large braid involving $n$ vortices (an example would be the braid in Fig~\ref{fig:11}), where the lines move from $z=z_i$ to $z=z_{i+1}$ as $h$ increases between $\pm\infty$, with $i=1,\ldots,n$ and $z_{n+1}=z_1\exp{(2i\pi)}$. \begin{figure}[htbp] \epsfxsize=10cm \begin{center} \leavevmode\epsfbox{matty11.eps} \caption{ Example of the shape of a very large braid defect in the $x$-$y$ plane where the vortices move from $z_i$ to $z_{i+1}$ with increasing $h$. \label{fig:11}} \end{center} \end{figure} Three assumptions are required to make the necessary approximations: (i) The regions over which the braid is straight are much larger than the regions where the effects of the `corners' of the braid on the potential term are important. (ii) All lines in the braid move by the same distance $s(h)$ towards their nearest neighbor, i.e.\ $\zeta_i(h)=z_i+ (z_{i+1}-z_i)(s(h)/l)$. (iii) The braid is everywhere a large distance from the `center' of the braid. The center, which is where we place the origin, $O$, is defined by $O=(1/n)\sum z_i$. Therefore this condition means that $|z_i|\gg l$. These assumptions allow us to approximate the potential term in the free energy cost using the result for a straight screw (which is proportional to the length of the screw) and to calculate the bending term in a simple form as follows. The order parameter of this general $n$-line braid is given by \begin{equation} \psi_{n\, br}(x,y,h)=\psi_0(x,y)\frac{\prod_{i=1}^n\left( z-z_i-(z_{i+1}-z_i)(s(h)/l)\right)} {\prod_{i=1}^n\left( z-z_i\right)}. \end{equation} To find the bending term we need the partial derivative of the order parameter with respect to $s$: \begin{eqnarray} \frac{\partial\psi_{n\, br}}{\partial s}&=&\psi_0 \frac{\sum_{j=1}^n\left\{ -(z_{j+1}-z_j)(1/l) \prod_{i=1,i\neq j}^n\left( z-z_i-(z_{i+1}-z_i)(s(h)/l)\right)\right\}} {\prod_{i=1}^n(z-z_i)}\\ &\simeq& \psi_0 \sum_{j=1}^n\frac{ -(z_{j+1}-z_j)/l} {z-z_j}, \end{eqnarray} where the second line uses condition (iii). We now consider this sum when the complex coordinates $z=x+iy$ are far from the perimeter of the braid. That is, when $|z-z_j| \gg l$ for all $j$, the sum becomes \begin{eqnarray} \frac{\partial\psi_{n\, br}}{\partial s}&=&\sum_{j=1}^n \psi_0 \ln{\left( \frac{z-z_{j+1}}{z-z_j}\right)}\\ &=&\left\{ \begin{array}{lr} 2i\pi\psi_0 &\mbox{if $z$ inside braid.}\\ 0&\mbox{ if $z$ outside braid.}\end{array}\right. \end{eqnarray} That is, the partial derivative of the order parameter is periodic within the braid and zero outside. Although this result does not hold when $z$ is near the perimeter, when we integrate over the $x$-$y$ plane, for large braids the integral will be dominated by the region where it does hold. The limiting result for the bending term is: \begin{equation} \int \frac{d^2r}{l^2} {\left|\frac{\partial\psi_n}{\partial h}\right|}^2 =\frac{A}{l^2}\pi^2\langle {\left|\psi_0\right|}^2\rangle {\left(\frac{ds}{dh}\right)}^2, \end{equation} where ${ A}$ is the area enclosed by the braid.
Adding the potential term taken from the infinite screw calculation gives a free energy cost for the large braid as a function of $s(h)$ as: \begin{equation} f_{n\, br}\{ s(h)\} =\sum_{i=1}^2 \frac{P}{l} c_i^{(s)} \sin^{2i}\left(\frac{\pi s}{l}\right) + 4\frac{{A}}{l^2}c^{(s)}{\left( \frac{ds}{dh}\right)}^2, \end{equation} with ${P}$ the length of the perimeter of the braid. Making the correct transformation in the length scale of the defect, $h\rightarrow \tilde{h}=h\sqrt{Pl/4A}$, allows us to use the results from Section~\ref{sec:screw} to find a total free energy cost for a large braid of: \begin{equation} \Delta F_{n\, br}\simeq 1.1 \frac{\sqrt{PA}}{l^\frac{3}{2}} \;k_BT{|\alpha_T|}^{\frac{3}{2}}. \label{reslarge} \end{equation} An important caveat for the large defects in this section is that our results will only be reliable for defects below a certain size. As in all of our calculations we have used the LLL approximation, we have assumed that we may ignore fluctuations in the microscopic flux density. This will lead to good approximations of the energy costs only when the size of the defects is less than the distance over which the flux density can change significantly, which is the magnetic penetration depth. In the $x$-$y$ plane, the dimensions of the defect must be less than $\lambda_{ab}$. The extent of the defect in the field direction must be less than $\lambda_c$. \section{Applications} \label{sec:appl} \subsection{The nature of the vortex state}\label{sec:noent} The consequences of finding no stable two-line (or three-line) braids, and of a relatively high free energy cost for the six-line braid, greatly affect our picture of the vortex state in thermal equilibrium at and above the irreversibility line. If stable two-line braids of low free energy existed, they would be created in large numbers by thermal fluctuations at relatively low temperatures, and give rise to a system with vortices twisting around each other in the lattice, i.e.\ an entangled state. However, the absence of such low energy braids means that extensive entanglement will not occur over a large region of the $H$-$T$ phase diagram that includes the irreversibility line, as we now show. We deduce from our results that the braid defect of lowest free energy is the six-line hexagonal braid, with an energy cost given in Eq.~(\ref{res6br}). This is because we have shown that smaller braids have no saddle point solutions and the lines can just pull through each other while continuously lowering their free energy. We use the result (\ref{res6br}) to make an estimate of how many of these braids will be present in a sample of superconducting material of dimensions typical of those available. The stable six-line braid defect extends along the direction of the vortices a distance $L_{6\,br}\sim 50\,\xi_c$, where $\xi_c$ is the superconducting coherence length along the field direction. If we consider a sample with dimensions along the field axis of $L_c$, then in thermal equilibrium, the average number of braids along any six lines that surround a given line will be $N_{6\,br}=(L_{c}/L_{6\,br})\exp{(-\Delta F_{6\, br}/k_BT)}$. The number of six-line braids that any one vortex line is involved in will be $6N_{6\,br}$. If we take an estimate of $\alpha_T$ on the melting line \cite{Wilkin2} as $\alpha_T\sim -8$, we find that the exponential gives a factor of $\exp{(-\Delta F_{6\, br}/k_BT)}\sim e^{-50}\sim 10^{-22}$. 
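As a quick order-of-magnitude check, the following Python sketch (illustrative only) evaluates this Boltzmann factor together with the sample-thickness estimate quoted below; the inputs $\Delta F_{6\,br}/k_BT\sim 50$, $L_{6\,br}\sim 50\,\xi_c$ and $\xi_c\sim 10^{-10}\,$m are the estimates used in the text:
\begin{verbatim}
import math

# Equilibrium braid-count estimate.  Inputs are the estimates quoted
# in the text (alpha_T ~ -8 on the melting line gives dF/k_BT ~ 50;
# xi_c ~ 1e-10 m is an extreme lower limit on the coherence length).
dF_over_kT = 50.0
boltzmann = math.exp(-dF_over_kT)        # ~ 1.9e-22, i.e. "10^-22"
xi_c = 1e-10                             # coherence length (m)
L_6br = 50 * xi_c                        # extent of a six-line braid (m)
L_c = L_6br / boltzmann                  # thickness giving N_6br = 1
print(f"exp(-dF/kT) = {boltzmann:.2e}")          # 1.93e-22
print(f"required thickness L_c = {L_c:.1e} m")   # ~ 2.6e13 m
\end{verbatim}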
Taking an extreme lower limit on the coherence length of high-$T_c$ superconductors of $\xi_c\sim10^{-10}\hbox{m}$, we would need a sample size of $L_c\simeq10^{13}\hbox{m}$ for there to be an average of one braid defect on any given line! Alternatively, if we consider the number of six-line braids in the whole sample, then at a field of 10 T there will be approximately one braid defect in $10^5\, \hbox{mm}^3$ at the melting line. (Decreasing the external field reduces the number of defects if we remain on the same $\alpha_T=\mathrm{const}$ line.) We now consider in what regimes the braid defects will proliferate. Using the above estimates we find that, for a sample with $L_c\sim 1\,\hbox{mm}$, there will be an average of one braid defect per line at $\alpha_T\sim -3$. Above this line in the $H$-$T$ phase diagram, braid defects will be plentiful and the vortex state will be strongly entangled. Below this line, however, the average line will not be involved in such defects and the vortices can be well identified between the top and the bottom of the sample. These arguments make it clear that below the melting line in the crystalline phase, and for a large region above it in the flux-liquid phase, the vortices in type-II superconductors of sizes currently available will not be in a strongly entangled state when at thermal equilibrium. This is contrary to the picture previously presented in \cite{Nelson}, where entanglement has been claimed to play a major role in these systems near the irreversibility line. Instead, the lines maintain a correlation between their positions at the top and their positions at the bottom of the sample (for a recent experimental realization, see \cite{Yao}). Note that this does not mean the lines are not moving, either by fluctuations transverse to their length, or as a whole when in the liquid state. It simply means that the lines do not twist around each other to form long-lived topologically distinct entangled states. Of course, our results are only strictly valid in the high field regime where the LLL approximation holds. However, if braid defects have such high energy costs here, it is hard to see how the Boltzmann factors will come down as we {\em lower} the temperature or field. Also, our calculations are within the low-temperature crystalline state, and it may be questioned whether we can extend the results to the liquid state. These defects depend only locally on the surrounding lattice, and we expect the liquid state to retain crystalline order over such length scales for a large region above melting, in which case our results may still apply. It is also important to note that the absence of entanglement is a size-dependent phenomenon. In a big enough sample, entanglements will arise in principle, but, at least near the melting line, they will be completely absent in practice. The above arguments are only for a system in thermal equilibrium, and if the system has been rapidly cooled from a high temperature to near the melting line it is still possible for entanglements to be present for some time. The absence of any long-lived braid defects within the system of vortices can be tested experimentally. Neutron scattering, or possibly muon spin resonance, could be used to investigate the magnetic fields associated with the braids which are transverse to the externally applied field. 
There should be a qualitative difference between the resulting spectra due to fluctuations in the transverse displacement of a vortex along its length, and those from the local transverse field of a stable braid defect. \subsection{Longitudinal Resistivity due to Braid Defects} Experiments have been performed where a current is applied between terminals on the top and bottom faces of a superconductor, with the external magnetic field applied parallel to this current \cite{Busch,Safar,Cruz}, allowing measurement of the longitudinal resistivity. It has been known for some time that the spontaneous production of finite vortex loops at non-zero temperatures can be a source of non-linear resistivity in the Meissner phase of any superconductor \cite{Langer}. This idea has been extended to the problem of the resistivity along the field direction in the mixed state of a type-II superconductor \cite{Feigel}. If one projects the vortex lines onto the $x$-$y$ plane, then entanglements of the vortices may be identified as `planar loops' perpendicular to the field direction. For any current, $J_z$, flowing in this direction, these planar loops will have an interaction with the current analogous to that of the vortex loops in the Meissner phase. The braid defects that we have calculated are precisely the sort of defect that projects to a planar loop. It is important to distinguish between two different limiting cases. The case we consider is where the vortex lattice contains `weak entanglement' only, i.e.\ entanglements produced by the presence of the current itself. This should be distinguished from `strong entanglement', where planar-loop-type entanglements exist at equilibrium on all length scales, which gives rise to a linear longitudinal resistivity \cite{Feigel}. However, we have shown that in a large region of the phase diagram there are very few entanglements present in samples of the size used in experiments, so that the systems are always in the weak entanglement limit. The fact that the resistivity has been observed to become non-linear below a line much higher than the irreversibility line in YBCO \cite{Cruz} supports our claims in Section~\ref{sec:noent} concerning the absence of entanglements. Following the arguments of Feigelman et al.~\cite{Feigel}, but using our results for the free energy costs of the relevant defects, we find that in the presence of a current density $J_z$, a large braid has a total free energy cost of: \begin{equation} \Delta F({ A}, J_z)= \epsilon_A { A}^{\frac{3}{4}} -\phi_0 J_z { A}, \end{equation} with $\epsilon_A\simeq 2.1 \, l^{-3/2}k_BT{|\alpha_T|}^{3/2} =2.3 \, (B/\phi_0)^{3/4}k_BT{|\alpha_T|}^{3/2} $. Therefore there will be a critical area, ${ A}_c(J_z)$, above which the braid will be driven to further growth by the magnetic interaction with the current: \begin{equation} { A}_c \simeq 8.75\, \frac{ (k_BT)^4 {\alpha_T}^6 B^3}{{\phi_0}^7 {J_z}^4}. \end{equation} This will cause dissipation, which will be proportional to the probability of nucleating large enough braids by thermal fluctuations. The nucleation of large enough braids requires surmounting a free energy barrier, which leads to a longitudinal resistivity of the form: \begin{equation} \label{weakres} \rho_{zz}=\frac{E_z}{J_z}\propto e^{-{\left( \frac{J_T}{J_z}\right) }^3}, \end{equation} with $J_T\simeq 1.43 \, (B/{\phi_0}^2) k_BT{\alpha_T}^2$. 
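The algebra behind ${A}_c$ and the exponent in (\ref{weakres}) is easy to check symbolically. The following minimal sympy sketch (illustrative only; the symbols and the numerical prefactor of $\epsilon_A$ are taken from the expressions above) maximizes $\Delta F({A},J_z)$ over ${A}$:
\begin{verbatim}
import sympy as sp

# Symbolic check of the braid-nucleation barrier.
A, eps, phi0, Jz = sp.symbols('A epsilon phi0 J_z', positive=True)
dF = eps * A**sp.Rational(3, 4) - phi0 * Jz * A   # Delta F(A, J_z)
A_c = sp.solve(sp.Eq(sp.diff(dF, A), 0), A)[0]    # critical area
barrier = sp.simplify(dF.subs(A, A_c))            # barrier height
print(A_c)       # 81*epsilon**4/(256*J_z**4*phi0**4)
print(barrier)   # 27*epsilon**4/(256*J_z**3*phi0**3)
# With eps = 2.3 (B/phi0)^(3/4) k_B T |alpha_T|^(3/2), the barrier
# gives exp(-(J_T/J_z)^3) with the prefactor of J_T:
print(sp.N((sp.Rational(27, 256) * sp.Rational(23, 10)**4)
           ** sp.Rational(1, 3)))                 # ~ 1.43
\end{verbatim}
The barrier scales as $\epsilon_A^4/(\phi_0 J_z)^3$, which is the origin of the $\exp(-(J_T/J_z)^3)$ form, and the numerical prefactor reproduces $J_T\simeq 1.43\,(B/\phi_0^2)k_BT\alpha_T^2$.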
This form is quite different to that given previously \cite{Feigel}, where it was assumed that the free energy cost of planar loops had a linear dependence on their radius. Unfortunately, the applicability of the above results to any experiments that measure this longitudinal resistivity is rather doubtful. This is because in the regions of the $H$-$T$ phase diagram of interest, and for all reasonable current densities, the critical radius of a braid that needs to be created is far larger than the magnetic penetration depth $\lambda_{ab}$ (i.e.\ the length scale in the $ab$ plane over which the magnetic field may vary significantly). Therefore, the important braid defects will be outside the scale where the LLL approximation may hold. \subsection{Crossing Defects as Energy Barriers to Disentanglement} The energy barrier $U_{\times}$ to the crossing of two vortices, or to a process of cutting and reconnection, is the energy cost of the configuration with the highest free energy that must exist during these processes. This configuration is assumed to be where the two lines meet each other at a point, as in the crossing defects that we have calculated. Therefore, our result for $\Delta F_{2\times}$ for the free energy cost of a two-line crossing defect may be used as an estimate of $U_{\times}$. The value of $U_{\times}$ has been shown to be an important parameter of the entangled vortex state \cite{Marchetti,Cates}. The relaxation time of the entangled vortices grows exponentially with $U_{\times}/k_BT$. A form for the longitudinal resistivity different from (\ref{weakres}), which applies when there is strong entanglement \cite{Feigel} (i.e.\ braids exist on all length scales), is proportional to $\exp (-U_{\times}/k_BT)$ (as crossing defects will be the basic energy barrier to the growth of braids). It is doubtful whether this applies, though, as we have shown above that the vortex state is not entangled over a large region of the $H$-$T$ phase diagram. Estimates of $U_{\times}$ have also been used \cite{Wilkin} to explain the transition from local to non-local resistivity seen in the DC flux transformer experiments on YBCO \cite{Safar,Cruz}. In this interpretation, the average length over which crossing defects take place is associated with a correlation length of vortex `identity' \cite{Wilkin}. When this length scale becomes smaller than the system size, the vortices will be uncorrelated between the top and bottom of the sample, and non-local effects will be of less importance in the resistivity. This argument does not depend on the entanglement of the vortices. \section{Summary} To summarize, we have calculated the free energy costs of an assortment of topological defects of the Abrikosov vortex lattice that is the mean-field solution of the Ginzburg-Landau free energy functional for a perfect bulk type-II superconductor. All of our calculations are made within the lowest Landau level, which is a good approximation at high fields just below the $H_{c2}$ line. Although we have made a large approximation in not allowing the surrounding lattice to move in response to these defects, we believe that our answers are still very close to the full solutions with relaxation included. This is a consequence of the elastic line nature of the lattice, whereby the vortices' resistance to tilting holds them near to their ground state positions. 
The free energy costs calculated are for crossing defects of 2, 3, 6, and 12 lines; braid defects of 6 and 12 lines (see Table~\ref{tab:res}); infinite screw defects; and a limiting form for very large braids. Within the LLL, one cannot truly view the vortex system in terms of discrete lines with simple two-body interactions between them (as is the case in the opposite extreme of the London limit). This non-trivial nature of the vortex-vortex interactions has led to a few unexpected results in our calculations. Firstly, there were no braid solutions for two- or three-line defects -- if any braid configuration of two/three lines is imposed, it may continuously lower its energy by the lines meeting each other at a point (and thus forming a crossing defect)! Secondly, we have found a lower free energy cost for three lines crossing than for a two-line crossing. One assumes that this is a consequence of the lattice symmetry, together with the effect stated above that the surrounding lines in the lattice do not relax much in response to a localized defect. Thirdly, it costs less free energy for a six-line crossing defect than for a six-line braid, even though the potential terms are higher for the crossing defect (the tilt terms decrease when the six lines move nearer each other). This odd result was not carried through to the twelve-line defects, and we believe that all larger braids cost less energy than the associated crossings. Finally, we found that for large screw defects, the free energy cost diverges with the system size perpendicular to the defect. This effect (a long-range dependence in the tilt energy) leads to a result for the free energy cost of large braids which depends on the area enclosed by the braid in a new way. We think that our results will be useful in interpreting experiments on type-II superconductors where the dynamics of the vortices play an important role, at least in the regimes of high field, and where any layered structure of the superconductors may be ignored (i.e.\ as long as the coherence length $\xi_c$ is greater than the inter-layer thickness). Some particular areas of application were outlined in Section~\ref{sec:appl}, but there are still some remaining questions to be cleared up. For instance, what, if any, are the consequences of a lower energy cost of crossing for three lines than for two lines? We have not yet been able to use the energy costs calculated to make quantitative predictions on transport properties such as the resistivity. However, we have shown that the equilibrium vortex state will not be entangled over a significant region above and below the irreversibility line of a high-$T_c$ superconductor. This prediction is open to experimental test, possibly with a magnetic probe like neutron scattering. \begin{center} {\bf ACKNOWLEDGEMENTS} \end{center} \bigskip One of us (MJWD) would like to thank the EPSRC for funding this research.
\section{Introduction} \label{sec:typesetting-summary} Integer programs (IPs) whose constraint matrix has a special block structure have received considerable attention in recent years. As an important subclass of the general IP, they find applications in a variety of optimization problems including scheduling~\cite{chen2018covering,jansen2018empowering,knop2019combinatorial}, routing~\cite{chen2018covering}, stochastic integer multi-commodity flows~\cite{hemmecke2010polynomial}, and stochastic programming with second-order dominance constraints~\cite{gollmer2011note}. 
In this paper, we consider a specific class of block-structured IP as follows: \begin{equation}\label{ILP:2} ({\rm IP})_{n,\veb,\vel,\veu,f}: \quad \min\{f(\vex): {H}_{\textnormal{com}} \vex=\veb, \, \vel\le \vex\le \veu,\, \vex\in \ensuremath{\mathbb{Z}}^{t_B + nt_A} \}, \end{equation} where $f: \mathbb{R}^{t_B+nt_A}\rightarrow \mathbb{R}$ is a separable convex function, and ${H}_{\textnormal{com}}$ consists of small submatrices $A_i$, $B$, $C$ and $D_i$ as follows: \begin{eqnarray}\label{eq:matrix} {H_{\textnormal{com}}}:= \begin{pmatrix} C & D_1 & D_2 & \cdots & D_n \\ B & A_1 & 0 & & 0 \\ B & 0 & A_2 & & 0 \\ \vdots & & & \ddots & \\ B & 0 & 0 & & A_n \end{pmatrix} \hspace{15mm} {H}:= \begin{pmatrix} C & D_1 & D_2 & \cdots & D_n \\ B_1 & A_1 & 0 & & 0 \\ B_2 & 0 & A_2 & & 0 \\ \vdots & & & \ddots & \\ B_n & 0 & 0 & & A_n \end{pmatrix} \enspace . \end{eqnarray} Here, $A_i$'s (or $B$ or $C$ or $D_i$'s, resp.) are $s_A\times t_A$ (or $s_B\times t_B$ or $s_C\times t_C$ or $s_D\times t_D$, resp.) matrices, and furthermore, $s_B=s_A=1$. Let $\Delta$ be the largest absolute value among all the entries of $A_i,B,C$ and $D_i$. We call IP~\eqref{ILP:2} {\it combinatorial 4-block $n$-fold} IP (and $H_{\textnormal{com}}$ a combinatorial 4-block $n$-fold matrix), as it generalizes the combinatorial $n$-fold IP studied in~\cite{knop2019combinatorial} (combinatorial $n$-fold IP can be viewed as the special case where $C=B=0$ and all the entries of the $A_i$'s are 1). Meanwhile, IP~\eqref{ILP:2} is a special case of the generalized 4-block $n$-fold IP~\cite{hemmecke2014graver}, where the constraint matrix $H$ consists of submatrices $A_i$, $B_i$, $C$ and $D_i$ as in Eq~\eqref{eq:matrix}. It is worth mentioning that the overall structure of $H$ implies that $s_C=s_D$, $s_A=s_B$, $t_B=t_C$ and $t_A=t_D$. We see that the combinatorial 4-block $n$-fold IP studied in this paper restricts the generalized 4-block $n$-fold IP in two ways -- it requires $B_i=B$ and also $s_B=1$. The goal of this paper is to study FPT algorithms for combinatorial 4-block $n$-fold IP by taking $\Delta$, $s_A,s_B,s_C,s_D$ and $t_A,t_B,t_C,t_D$ as parameters, i.e., we aim for an algorithm that runs polynomially in $n$. There are two facts that make combinatorial 4-block $n$-fold IP an interesting subclass of the general block-structured IP. From a theoretical point of view, combinatorial 4-block $n$-fold IP exhibits an interesting \lq\lq intermediate\rq\rq\, phenomenon in its Graver basis (see Section~\ref{sec:pre} for the definition). As we detail in the related work below, FPT algorithms have so far been developed for several special cases of the generalized 4-block $n$-fold IP (see, e.g.,~\cite{aschenbrenner2007finiteness,cslovjecsek2021block,hemmecke2013n,jansen2018empowering,koutecky2018parameterized}). All of these algorithms rely on the fact that the $\ell_{\infty}$-norm (or even $\ell_{1}$-norm) of the Graver basis elements for these special cases is bounded by some FPT-value. Unfortunately, Chen et al.~\cite{chen2020new} showed very recently that the $\ell_{\infty}$-norm of Graver basis elements for 4-block $n$-fold IP is $\Omega(n)$. It thus becomes a challenging problem: without boundedness of the $\ell_{\infty}$-norm, what other properties of the Graver basis elements can we expect that may lead to an FPT algorithm? 
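Before proceeding, we note that the block structure of $H_{\textnormal{com}}$ in Eq~\eqref{eq:matrix} is straightforward to materialize. The following minimal Python sketch (illustrative only; the toy block values are arbitrary, with $s_A=s_B=s_C=s_D=1$) assembles the matrix:
\begin{verbatim}
import numpy as np

# Assemble the combinatorial 4-block n-fold matrix from its blocks
# (toy 1-row blocks; values are arbitrary and for demonstration only).
def h_com(C, B, Ds, As):
    n = len(As)
    top = np.hstack([C] + Ds)                        # (C, D_1, ..., D_n)
    rows = [np.hstack([B] + [As[j] if j == i else np.zeros_like(As[j])
                             for j in range(n)])     # (B, 0,..., A_i,..., 0)
            for i in range(n)]
    return np.vstack([top] + rows)

C = np.array([[-1, -1, -1]]); B = np.array([[0, -1, 1]])
Ds = [np.array([[5, 3]])] * 3                        # D_1 = D_2 = D_3
As = [np.array([[3, 4]])] * 3                        # A_1 = A_2 = A_3
print(h_com(C, B, Ds, As))
\end{verbatim}
The same assembly applies verbatim to the generalized matrix $H$ by letting the $B$ blocks vary with $i$.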
In this paper, we observe an interesting phenomenon: On the one hand, the $\ell_{\infty}$-norm of the Graver basis elements for combinatorial 4-block $n$-fold IP is still $\Omega(n)$ even if $s_D=1$ (see Theorem~\ref{thm:lower-bound}). On the other hand, Graver basis elements whose $\ell_{\infty}$-norm is bounded by some FPT-value seem to be strong enough for the purpose of decomposition. More precisely, we have the following Theorem~\ref{11-n}, which states that for some fixed $\lambda$ and any $\veg \in\ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$, $\lambda \veg$ can always be decomposed into a summation of sign-compatible kernel elements with $\ell_{\infty}$-norm bounded by some FPT-value. Interestingly, this $\lambda$ only depends on $t_B$ and $\Delta$. \begin{theorem}\label{11-n} Let $H_{\textnormal{com}}$ be a combinatorial $4$-block $n$-fold matrix. Then there exists a positive integer $\lambda\le 2^{2^{2^{{\mathcal{O}}(t_B^2\log (t_B\Delta))}}}$ (which is only dependent on $t_B$ and $\Delta$) such that for any $\ve g\in\ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$, we have $\lambda\veg=\veg_1+\veg_2+\cdots+\veg_p$ for some $p\in \mathbb{Z}_{>0}$ and $\veg_j\in \ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$, and furthermore, $\veg_j\sqsubseteq \lambda\veg$ and $\|\veg_j\|_\infty\le 2^{2^{{\mathcal{O}}(t_A\log\Delta+s_Dt_D\log\Delta)}\cdot 2^{2^{{\mathcal{O}}(t_B^2\log\Delta)}}}$. \end{theorem} Here the upper bounds for $\lambda$ and the $\|\veg_j\|_{\infty}$'s are triply exponential in the parameters. Utilizing Theorem~\ref{11-n}, we are able to show that $\|\veg\|_{\infty}= {\mathcal{O}}_{FPT}(n)$ for any Graver basis element $\veg$, and develop an algorithm of running time ${\mathcal{O}}_{FPT}(n^{3+o(1)})$ for combinatorial 4-block $n$-fold IP, where ${\mathcal{O}}_{FPT}$ hides a multiplicative factor that only depends on $\Delta,s_A,s_B,s_C,s_D,t_A,t_B,t_C,t_D$. The special feature implied by Theorem~\ref{11-n}, as well as our techniques, may be of independent interest for a broader class of IPs. It is worth mentioning that 4-block $n$-fold IP has been studied before, mainly by Hemmecke et al.~\cite{hemmecke2014graver} and Chen et al.~\cite{chen2020new}, whose results are parallel to ours. In particular, Chen et al.~\cite{chen2020new} considered the 4-block $n$-fold IP (see matrix $H$ in Eq~\eqref{eq:matrix}) where $A_i=A$, $B_i=B$ and $D_i=D$, showed that the infinity norm of Graver basis elements for such 4-block $n$-fold IP is bounded by $\min\{n^{{\mathcal{O}}(t_A^2)},n^{{\mathcal{O}}(s_D)}\}$, and developed an algorithm of running time $\min\{n^{{\mathcal{O}}(t_A^2t_B)},n^{{\mathcal{O}}(s_Dt_B)}\}$. Consequently, their results do not yield FPT algorithms for combinatorial 4-block $n$-fold IP. From an application point of view, combinatorial 4-block $n$-fold IP generalizes combinatorial $n$-fold IP and thus offers a stronger tool for optimization problems. In particular, Knop and Kouteck{\`y}~\cite{knop2018scheduling} modeled the parallel machine scheduling problems $R||C_{\max}$ and $R||\sum_{\ell}w_\ell C_\ell$ as $n$-fold IPs and developed FPT algorithms (parameters include the largest job processing time and the numbers of different machine and job types). Utilizing combinatorial $4$-block $n$-fold IP, we are able to model a broader class of scheduling problems and derive similar FPT algorithms. In particular, we consider two generalizations of the classical scheduling model. 
One is the bi-objective scheduling problem $R||\theta C_{\max}+\sum_{\ell}w_\ell C_\ell$, which considers the combination of two common scheduling objectives. The other is the scheduling problem with job rejection $R||C_{\max}+E$, where jobs can be rejected at a certain cost and the goal is to minimize the scheduling cost plus the total rejection cost. The reader may refer to Section~\ref{appli} for the precise definitions of the two problems and the corresponding FPT algorithms. \begin{remark*} Theorem~\ref{11-n} as well as our FPT algorithm for combinatorial $4$-block $n$-fold IP can be extended to the case when $B$ is an $s_B\times t_B$ matrix of rank 1. For such a matrix $B$, we can always transform it into $\bar{B}$, in which the first row is $\ver_1^{\top}$ and all the other rows are $\ve0$. This implies that when $\text{rank}(B)=1$, it is sufficient to consider the case $B=(\ver_1,\ve 0,\ldots,\ve 0)^{\top}$, where $\ver_1\neq \ve 0$. Such a generalization allows the submatrices $A_i$ to contain multiple rows, provided these rows are \lq\lq local constraints\rq\rq. Hence, our result also generalizes $n$-fold IP. It is, however, not clear whether Theorem~\ref{11-n} still holds if we allow the $n$ submatrices $B\in\ensuremath{\mathbb{Z}}^{1\times t_B}$ to be different. \end{remark*} 
\subparagraph*{Related work.} The existence of FPT algorithms for the generalized 4-block $n$-fold IP (where the constraint matrix is given by $H$ in Eq~\eqref{eq:matrix}) remains a major open problem in the area of integer programming. However, important progress has been achieved in recent years on its special cases. In particular, extensive research has been carried out on two fundamental subclasses -- the generalized $n$-fold IP and the generalized two-stage stochastic IP. In the generalized $n$-fold IP, $C=B_i=0$ for all $i$ and the constraint matrix is denoted as ${H}^{\textnormal{n-fold}}$. In the generalized two-stage stochastic IP, $C=D_i=0$ for all $i$ and the constraint matrix is denoted as ${H}^{\textnormal{two-stage}}$. Here by saying \lq\lq generalized\rq\rq\, we mean that the submatrices in ${H}^{\textnormal{n-fold}}$ and ${H}^{\textnormal{two-stage}}$ are not necessarily the same. When $D_i=D$ and $A_i=A$ for all $i$ in ${H}^{\textnormal{n-fold}}$, we call it $n$-fold IP. When $B_i=B$ and $A_i=A$ for all $i$ in ${H}^{\textnormal{two-stage}}$, we call it two-stage stochastic IP. The two constraint matrices are displayed below. 
\begin{eqnarray} {H}^{\textnormal{n-fold}}:= \begin{pmatrix} D_1 & D_2 & \cdots & D_n \\ A_1 & 0 & & 0 \\ 0 & A_2 & & 0 \\ \vdots & & \ddots & \\ 0 & 0 & & A_n \end{pmatrix}\hspace{15mm} {H}^{\textnormal{two-stage}}:= \begin{pmatrix} B_1 & A_1 & 0 & & 0 \\ B_2 & 0 & A_2 & & 0 \\ \vdots & & & \ddots & \\ B_n & 0 & 0 & & A_n \end{pmatrix}. \label{matrix:1} \end{eqnarray} $N$-fold IP was studied by De Loera et al.~\cite{de2008n}. In 2013, Hemmecke et al.~\cite{hemmecke2013n} developed the first FPT algorithm. Later on, a series of studies has been carried out to further improve its running time and also to extend the algorithm to the generalized $n$-fold IP~\cite{altmanova2019evaluating,cslovjecsek2021block,eisenbrand2018fastera,eisenbrand2019algorithmic,jansen2018empowering,jansen2019near}. Most recently, Cslovjecsek et al.~\cite{cslovjecsek2021block} presented an algorithm of running time $2^{{\mathcal{O}}(s^2_As_D)}(s_Ds_A\Delta)^{{\mathcal{O}}(s_A^2+s_As_D^2)} (nt_A)^{1+o(1)}$ for the generalized $n$-fold IP. Two-stage stochastic IP was first studied by Hemmecke and Schultz~\cite{hemmecke2003decomposition}, who presented an algorithm of running time $\mathrm{poly}(n)\cdot f(s_A,s_B,t_A,t_B,\Delta)$ for some computable function $f$, and then studied by Aschenbrenner and Hemmecke~\cite{aschenbrenner2007finiteness}. Their result was improved in a series of subsequent papers~\cite{eisenbrand2019algorithmic,jansen2021double,klein2020complexity,koutecky2018parameterized}. The current best-known algorithm for the generalized two-stage stochastic IP, due to Klein~\cite{klein2020complexity}, runs in time doubly exponential in the parameters $\Delta, s_A, t_B$. It is worth mentioning that Chen et al.~\cite{chen2020blockstructured} proved that the generalized 4-block $n$-fold IP is NP-hard if $A_i\in \ensuremath{\mathbb{Z}}^{1\times 2}$ and we take $\log\Delta$ and the small matrices as parameters. Their results in~\cite{chen2020blockstructured} are thus incomparable to ours, which take $\Delta$ and the small matrices as parameters. \section{Notations and Preliminaries}\label{sec:pre} \subparagraph*{Notations.} We write column vectors in boldface, e.g., $\vex, \vey$, and their entries in normal font, e.g., $x_i, y_i$. If $\vex\in \ensuremath{\mathbb{Z}}^{d_1}$ and $\vey\in \ensuremath{\mathbb{Z}}^{d_2}$, then we abuse the notation by using $(\vex,\vey)$ to denote a column vector in $\ensuremath{\mathbb{Z}}^{d_1+d_2}$. Recall that a solution $\vex$ for $4$-block $n$-fold IP is a $(t_B+nt_A)$-dimensional column vector, and we write it into $n+1$ \emph{bricks}, such that $\vex=(\vex^0,\vex^1,\cdots,\vex^n)$ where $\vex^0 \in \ensuremath{\mathbb{Z}}^{t_B}$ and each $\vex^i \in \ensuremath{\mathbb{Z}}^{t_A}$, $1\le i\le n$. We call $\vex^i$ the \emph{$i$-th brick} for $0\le i\le n$. For a vector or a matrix, we write $\|\cdot\|_{\infty}$ to denote the maximal absolute value of its elements. For two vectors $\vex,\vey$ of the same dimension, $\vex\cdot\vey$ denotes their inner product. We use $[i]$ to represent the set of integers $\{1,2,\cdots,i\}$, and $[i:j]$ for $\{i,i+1,\cdots,j\}$ where $i<j$. Two vectors $\vex$ and $\vey$ are called {\it sign-compatible} if $x_i\cdot y_i\ge 0$ holds for every pair of coordinates $(x_i,y_i)$. Recall the matrix $H_{\textnormal{com}}$ in Eq~\eqref{eq:matrix}. 
We denote by $H_{\textnormal{com}}^{\textnormal{n-fold}}$ the submatrix obtained from $H_{\textnormal{com}}$ by removing the first column $(C,B,B,\cdots,B)^{\top}$, and $H_{\textnormal{com}}^{\textnormal{two-stage}}$ the submatrix obtained by removing the first row $(C,D_1,\cdots,D_n)$. Throughout this paper, we use ${\mathcal{O}}_{FPT}(1)$ to represent a quantity that depends only on $\Delta,s_A,s_B,s_C,s_D,t_A,t_B,t_C,t_D$, where $\Delta$ is the maximal absolute value among all the entries of $A_i,B_i,C,D_i$. In other words, ${\mathcal{O}}_{FPT}(1)$ is only dependent on the small matrices $A_i,B_i,C,D_i$ and is independent of $n$. For any computable function $g(x)$, we write ${\mathcal{O}}_{FPT}(g)$ to represent a computable function $g'(x)$ such that $|g'(x)|\le {\mathcal{O}}_{FPT}(1) \cdot |g(x)|$. \subparagraph*{Graver basis.} We define $\sqsubseteq$ to be the \emph{conformal order} in $\mathbb{R}^d$ such that $\vex\sqsubseteq\vey$ if $\vex$ and $\vey$ are sign-compatible and $|x_i|\le |y_i|$ for each $i=1,...,d$. Given any subset $X\subseteq \mathbb{R}^d$, we say $\vex$ is a $\sqsubseteq$-\emph{minimal} element of $X$ if $\vex \in X$ and there does not exist $\vey \in X$, $\vey\neq \vex$, such that $\vey\sqsubseteq\vex$. It is known that every subset of $\ensuremath{\mathbb{Z}}^d$ has finitely many $\sqsubseteq$-minimal elements. The \emph{Graver basis}~\cite{graver1975foundations} of an integer matrix $\ensuremath{\mathsf{A}}\xspace$ is then defined as the finite set $\ensuremath{\mathcal{G}}(\ensuremath{\mathsf{A}}\xspace)$ consisting of all $\sqsubseteq$-minimal elements of $\text{ker}_{\ensuremath{\mathbb{Z}}}(\ensuremath{\mathsf{A}}\xspace)\backslash \{\ve0\}$, where $\text{ker}_{\ensuremath{\mathbb{Z}}}(\ensuremath{\mathsf{A}}\xspace)$ denotes the set of all integral vectors $\vex$ with $\ensuremath{\mathsf{A}}\xspace\vex=\ve0$. We will use the following lemmas, where Lemma~\ref{lemma:merging-lemma} follows from the Steinitz Lemma~\cite{grinberg1980value,steinitz1913bedingt}. \begin{theorem}[\cite{klein2020complexity}, Theorem 2] \label{lemma23} Let $\veg$ be a Graver basis element of a generalized two-stage stochastic IP with constraint matrix ${H}^{\textnormal{two-stage}}$. Then $\|\veg\|_\infty\le g_\infty({H}^{\textnormal{two-stage}})$, where $g_\infty({H}^{\textnormal{two-stage}})$ only depends on $s_B,t_B,\Delta$ and $g_\infty({H}^{\textnormal{two-stage}})\le 2^{2^{{\mathcal{O}}(s_Bt_B^2\log (s_B\Delta))}}$. \end{theorem} \begin{lemma}[\cite{chen2020new}] \label{lemma:merging-lemma} Let $\vex_1,\vex_2,\cdots,\vex_m$ be a sequence of vectors in $\mathbb{Z}^d$ such that $\vex=\sum_{i=1}^m\vex_i$ and $\|\vex_i\|_{\infty}\le \zeta$. Then the set $[m]$ can be partitioned into $m'$ subsets $T_1,T_2,\cdots,T_{m'}$ satisfying that $\cup_{j=1}^{m'}T_j=[m]$, and for every $1\le j\le m'$ it holds that $\sum_{i\in T_j} \vex_i\sqsubseteq \vex$ and $|T_j|\le (c\zeta)^{d^2}$ for some absolute constant $c$. In particular, if $d=1$, then $|T_j|\le 6\zeta+2$ for all $j$. \end{lemma} Additional preliminaries are given in Appendix~\ref{appsec:pre}. 
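For intuition, the Graver basis of a tiny matrix can be computed by brute force directly from the definition. The following minimal Python sketch (illustrative only; exponential enumeration over a small box, not part of any algorithm in this paper) lists the $\sqsubseteq$-minimal nonzero kernel elements of a $1\times 3$ matrix:
\begin{verbatim}
from itertools import product

A = [(1, 2, -1)]                  # a 1x3 integer matrix
box = range(-3, 4)                # search window (large enough here)

def in_kernel(x):
    return all(sum(a * v for a, v in zip(row, x)) == 0 for row in A)

def conformal_leq(y, x):          # y is conformally <= x
    return all(u * v >= 0 and abs(u) <= abs(v) for u, v in zip(y, x))

kernel = [x for x in product(box, repeat=3) if in_kernel(x) and any(x)]
graver = [x for x in kernel
          if not any(y != x and conformal_leq(y, x) for y in kernel)]
print(sorted(graver))
\end{verbatim}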
\section{Structural results for Combinatorial 4-block $n$-fold}\label{sec:structure} The goal of this section is to prove Theorem~\ref{11-n}, based on which in Section~\ref{sec:alg} we will be able to bound the $\ell_{\infty}$-norm of the Graver basis elements of combinatorial 4-block $n$-fold IP, and design an FPT algorithm using the iterative augmentation framework developed in a series of prior works (see Graver-best augmentation in Appendix~\ref{appsec:pre}). Towards the proof of Theorem~\ref{11-n}, we first give an example. \subparagraph*{Example.} Let $H_0$ be a 4-block $n$-fold matrix where $C=(-1,-1,-1)$, $D_i=D=(5,3)$, $B=(0,-1,1)$ and $A_i=A=(3,4)$ for all $i$. Let $\veg=(\veg^0,\veg^1,\cdots,\veg^n)$ such that $\veg^0=(1,n-1,n)$ and $\veg^i=(1,-1)$ for all $i$ (see the left-hand side of Eq~\eqref{eq:11decompose} where $\veg$ is written explicitly). It is not difficult to verify that $H_0\veg =\ve 0$. Moreover, we are able to prove that $\veg$ is a Graver basis element, thus proving Theorem~\ref{thm:lower-bound} (see Appendix~\ref{appsec:lower-bound} for the omitted proof). \begin{theorem}\label{thm:lower-bound} There exists a 4-block $n$-fold IP where $s_B=s_D=1$ such that $\|\veg\|_{\infty}=\Omega(n)$ for some Graver basis element $\veg$. \end{theorem} On the other hand, we observe that while $\veg$ cannot be decomposed into ``thin'' kernel elements in the same orthant, by multiplying $\veg$ by some small value (here $11$), such a decomposition exists: \begin{eqnarray}\label{eq:11decompose} && 11\times \begin{pmatrix} 1 \\ n-1 \\ n\\ 1 \\ -1 \\ 1 \\ -1 \\ \vdots\\ 1 \\ -1 \\ 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 11\\ 3 \\ -5 \\ 3 \\ -5 \\ \vdots\\ 3 \\ -5 \\ 7 \\ -8 \end{pmatrix} +\begin{pmatrix} 0 \\ 11 \\ 11\\ 8 \\ -6 \\ 0 \\ 0 \\ \vdots\\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} +\begin{pmatrix} 0 \\ 11 \\ 11\\ 0 \\ 0 \\ 8 \\ -6 \\ 0 \\ 0 \\ \vdots\\ 0 \\ 0 \end{pmatrix} +\cdots+ \begin{pmatrix} 0 \\ 11 \\ 11\\ 0 \\ 0 \\ \vdots\\ 0 \\ 0 \\ 8 \\ -6 \\ 0 \\ 0 \end{pmatrix} + \begin{pmatrix} 11 \\ 0 \\ 0\\ 0 \\ 0 \\ 0 \\ 0 \\ \vdots\\ 0 \\ 0 \\ 4 \\ -3 \end{pmatrix}. \label{eq50} \end{eqnarray} Notice that there are in total $n+1$ vectors on the right-hand side of Eq~\eqref{eq50}; let them be $\veg_1,\veg_2,\cdots,\veg_{n+1}$. The first vector $\veg_1=(0,0,11,3,-5,\cdots,3,-5,7,-8)$ consists of $\veg_1^0=(0,0,11)$, $n-1$ copies of $\veg_1^i=(3,-5)$ and one copy of $\veg_1^n=(7,-8)$. Each of $\veg_2$ to $\veg_n$ consists of $(0,11,11)$, one copy of $(8,-6)$ and $0$'s. The last vector $\veg_{n+1}$ consists of $(11,0,0)$, one copy of $(4,-3)$ and $0$'s. It is easy to verify that $\veg_j\sqsubseteq 11\veg$ and $H_0\veg_j=\ve 0$. It is also easy to see that $\veg_j\not\sqsubseteq \veg$. 
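For concreteness, the decomposition above can be verified mechanically. The following minimal Python sketch (illustrative only; it fixes a small $n$, though any $n\ge 2$ works the same way) checks that $H_0\veg=\ve 0$, that each $\veg_j$ lies in $\ker_{\ensuremath{\mathbb{Z}}}(H_0)$, that $\sum_j\veg_j=11\veg$, and that $\veg_j\sqsubseteq 11\veg$:
\begin{verbatim}
# Verify the decomposition 11*g = g_1 + ... + g_{n+1} displayed above.
n = 5
C, D = [-1, -1, -1], [5, 3]
B, A = [0, -1, 1], [3, 4]

def apply_H0(x):
    """Apply H_0 to x = (x^0, x^1, ..., x^n); returns all row values."""
    x0 = x[:3]
    bricks = [x[3 + 2 * i: 5 + 2 * i] for i in range(n)]
    top = sum(c * v for c, v in zip(C, x0)) + sum(
        sum(d * v for d, v in zip(D, xi)) for xi in bricks)
    rows = [sum(b * v for b, v in zip(B, x0)) +
            sum(a * v for a, v in zip(A, xi)) for xi in bricks]
    return [top] + rows

g = [1, n - 1, n] + [1, -1] * n
gs = [[0, 0, 11] + [3, -5] * (n - 1) + [7, -8]]             # g_1
gs += [[0, 11, 11] + [0, 0] * i + [8, -6] + [0, 0] * (n - 1 - i)
       for i in range(n - 1)]                               # g_2 ... g_n
gs += [[11, 0, 0] + [0, 0] * (n - 1) + [4, -3]]             # g_{n+1}

eleven_g = [11 * v for v in g]
assert all(v == 0 for v in apply_H0(g))                     # H_0 g = 0
assert all(all(v == 0 for v in apply_H0(gj)) for gj in gs)  # H_0 g_j = 0
assert [sum(col) for col in zip(*gs)] == eleven_g           # sum g_j = 11g
conf = lambda a, b: all(x * y >= 0 and abs(x) <= abs(y)
                        for x, y in zip(a, b))
assert all(conf(gj, eleven_g) for gj in gs)                 # g_j conformal
print("decomposition verified for n =", n)
\end{verbatim}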
\subparagraph*{Overview.} Now we give a high-level overview of the proof of Theorem~\ref{11-n}. Recall that $H_{\textnormal{com}}$ is a combination of two submatrices, the first row $(C,D_1,\cdots,D_n)$ and a two-stage stochastic matrix $H_{\textnormal{com}}^{\textnormal{two-stage}}$. Therefore, any $\veg\in \ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$ also satisfies that $\veg\in \ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}}^{\textnormal{two-stage}})$, and by Theorem~\ref{lemma23}, for any $\lambda\in\ensuremath{\mathbb{Z}}_{>0}$ we have $\lambda \veg = \sum_{j=1}^L\ve\xi_j$ where $\|\ve\xi_j\|_{\infty}={\mathcal{O}}_{FPT}(1)$, $\ve\xi_j\sqsubseteq \lambda\veg$ and $\ve\xi_j\in \ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}}^{\textnormal{two-stage}})$. Note that $(C,D_1,\cdots,D_n)\ve\xi_j$ is not necessarily $\ve0$ and hence the $\ve\xi_j$'s may not belong to $\ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$. To show that $\lambda\veg$ can be decomposed into sign-compatible elements of $\ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$ with bounded $\ell_{\infty}$-norm, it suffices to show that if $L$ (and consequently $\|\lambda\veg\|_{\infty}$) is too large, then there exists some $\veeta\in \ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$ such that $\veeta\sqsubseteq \lambda \veg$ and $\|\veeta\|_{\infty}={\mathcal{O}}_{FPT}(1)$. Afterwards, we can proceed to decompose $\lambda\veg-\veeta$ in the same way. A natural idea to construct such an $\veeta$ is to select a subset $S$ with an ${\mathcal{O}}_{FPT}(1)$ number of $\ve\xi_j$'s such that $(C,D_1,D_2,\cdots,D_n)\sum_{j\in S}\ve\xi_j=\ve 0$. Unfortunately, this idea has been used by Chen et al.~\cite{chen2020new} before, and it has been observed that to make $(C,D_1,D_2,\cdots,D_n)\sum_{j\in S}\ve\xi_j=\ve 0$, the cardinality of $S$ needs to be $\Omega(n)$. To bypass this obstacle, $\ve\eta$ needs to be constructed in a more \lq\lq flexible\rq\rq\, way than a direct summation of $\ve\xi_j$'s. Thus, we try to enable a \lq\lq cross-position\rq\rq\, construction, that is, we will allow each brick $\veeta^i$ to consist of bricks from different positions of the $\ve\xi_j$'s, e.g., $\veeta^i=\ve\xi_{j_1}^{i_1}+\ve\xi_{j_2}^{i_2}$ where $i_1,i_2$ may be different from $i$. This causes a critical problem. Suppose $\veeta^i=\ve\xi_{j_1}^{i_1}+\ve\xi_{j_2}^{i_2}$ and $\veeta^{i'}=\ve\xi_{j_3}^{i_3}+\ve\xi_{j_4}^{i_4}$; then how should we set the value of $\veeta^0$ to ensure that $B\veeta^0+A_i\veeta^i=\ve 0$? 
We observe that, if the decomposition $\lambda \veg = \sum_{j=1}^L\ve\xi_j$ satisfies that $B\ve\xi_{j}^0$ equals the same value for all $j$ (called \emph{the uniform condition}), and additionally if it holds that $A_i=A_{i_1}=A_{i_2}$ and $A_{i'}=A_{i_3}=A_{i_4}$, then by setting $\veeta^0=\ve\xi_{j_1}^0+\ve\xi_{j_2}^0$ (or equivalently, $\veeta^0=\ve\xi_{j_3}^0+\ve\xi_{j_4}^0$) we have $B\ve\eta^0+A_i\veeta^i=B\ve\xi^0_{j_1}+A_{i_1}\ve\xi_{j_1}^{i_1}+B\ve\xi^0_{j_2}+A_{i_2}\ve\xi_{j_2}^{i_2}=\ve 0$, and similarly $B\ve\eta^0+A_{i'}\veeta^{i'}=\ve 0$. That means the \lq\lq cross-position\rq\rq\, construction is possible if the uniform condition is met. Unfortunately, the uniform condition is not necessarily true. Only for combinatorial 4-block $n$-fold IP and some suitably chosen $\lambda={\mathcal{O}}_{FPT}(1)$ can we guarantee the uniform condition (nevertheless, our proof can be extended to go a bit beyond combinatorial 4-block $n$-fold IP, as we discuss at the end of Section~\ref{subsec:thm-proof}). With the uniform condition, the construction of $\veeta$ still faces two major challenges. One is that $\veeta$ must satisfy $(C,D_1,\cdots,D_n)\veeta=\ve 0$. We will generalize the Steinitz Lemma to a \lq\lq colorful\rq\rq\, variant to handle it (see Lemma~\ref{lemma:color}). The other challenge is more fundamental and is due to the \lq\lq cross-position\rq\rq\, construction itself. Say, e.g., $\veeta^i=\ve\xi_{j_1}^{i_1}+\ve\xi_{j_2}^{i_2}$. While we know $\ve\xi_{j_1}^{i_1}\sqsubseteq \lambda\veg^{i_1}$ given that $\ve\xi_{j_1}\sqsubseteq \lambda\veg$, it is not necessarily true that $\ve\xi_{j_1}^{i_1}\sqsubseteq \lambda\veg^{i}$. How can we select the right bricks so that $\veeta^i\sqsubseteq \lambda\veg^i$ for all $i$? Indeed, is it even possible? Towards this, our rough idea is as follows: we consider every coordinate of $\lambda\veg^i$. If it is sufficiently large (larger than some threshold $\sigma={\mathcal{O}}_{FPT}(1)$), then the summation of any ${\mathcal{O}}_{FPT}(1)$ many bricks of the $\ve\xi_j$'s can never violate it. Otherwise, it may be violated, and we call such a coordinate critical. We will introduce a hierarchy over the $\lambda\veg^i$'s depending on whether each of their coordinates is critical or not, and the \lq\lq cross-position\rq\rq\, construction will only be carried out for positions (e.g., $i_1$ and $i_2$ in $\veeta^i=\ve\xi_{j_1}^{i_1}+\ve\xi_{j_2}^{i_2}$) in the same level of the hierarchy. We will show that, by doing so, if $\|\lambda\veg\|_{\infty}$ is sufficiently large, then $\veeta\sqsubseteq \lambda\veg$ can be guaranteed through a counting argument. The remainder of this section is devoted to the proof of Theorem~\ref{11-n}. Towards this, we first introduce some concepts. Consider the generalized 4-block $n$-fold IP with constraint matrix $H$ and let ${\veg}\in \ker_{\ensuremath{\mathbb{Z}}}(H)$ be an arbitrary kernel element. A decomposition $\veg=\sum_{j=1}^N\veeta_j$ is called \emph{uniform} if for all $j$ it holds that $\veeta_j\sqsubseteq \veg$, $H^{\textnormal{two-stage}}\veeta_j=\ve 0$, and moreover, there is some fixed $\veq\in\ensuremath{\mathbb{Z}}^{s_B}$, $\veq\neq \ve0$, such that for any $j\in [N]$, \begin{eqnarray} &&B_i\veeta_j^0=\ve 0, \,\forall i\in [n] \qquad\textnormal{or} \qquad B_i\veeta_j^0=\veq, \,\forall i\in [n]. \label{eq5-extra} \end{eqnarray} That is, for all $i$ and $j$, $B_i\veeta_j^0$ may only take two possible values, and for each $j$, $B_i\veeta_j^0$ must be the same for all $i$. 
We say $\veeta_j$ is {\it tier-0} if $B_i\veeta_j^0=\ve 0$, and $\veeta_j$ is {\it tier-1} if $B_i\veeta_j^0=\veq$. Consequently, $A_i\veeta_j^i=\ve 0$ for all $i$ or $A_i\veeta_j^i=-\veq$ for all $i$. In the case of combinatorial 4-block $n$-fold IP, $B_i=B$, and hence Eq~\eqref{eq5-extra} simplifies so that $B\veeta_j^0$ is either $0$ or $q$ for all $j$. Consider an arbitrary $\bar{\veg}\in \ker_{\ensuremath{\mathbb{Z}}}(H)$ that admits a uniform decomposition $\bar{\veg}=\sum_{j=1}^{\bar{N}}\bar{\veeta}_j$ such that $\|\bar{\veeta}_j\|_{\infty}\le \bar{\eta}_{\max}$. As each $\bar{\veeta}_j$ is either tier-0 or tier-1, we denote by ${\bar{N}}_0$ (or ${\bar{N}}_1$) the number of tier-0 (or tier-1) vectors among $\bar{\veeta}_1$ to $\bar{\veeta}_{\bar{N}}$. We say that the decomposition is $\omega$-balanced if ${\bar{N}}_0\le \omega {\bar{N}}_1$, and exactly $\omega$-balanced if equality holds. In particular, we define that $\ve0$ admits an $\omega$-balanced uniform decomposition. \begin{lemma}\label{lemma:balance} For any $\bar{\veg}\in \ker_{\ensuremath{\mathbb{Z}}}(H)$, if $\bar{\veg}$ admits a uniform decomposition $\bar{\veg}=\sum_{j=1}^{\bar{N}}\bar{\veeta}_j$ where $\|\bar{\veeta}_j\|_{\infty}\le \bar{\eta}_{\max}$, then there exists ${\veg}\in \ker_{\ensuremath{\mathbb{Z}}}(H)$ such that ${\veg}\sqsubseteq \bar{\veg}$, $B_i(\bar{\veg}^0-\veg^0)=\ve 0$ for all $i\in [n]$, and ${\veg}$ admits an $\omega$-balanced uniform decomposition for $\omega\le (\Delta t_D \bar{\eta}_{\max})^{{\mathcal{O}}(s_D^2)}$. Moreover, if $\bar{\veg}-\veg\neq \ve0$, then we have $\bar{\veg}-\veg=\veg_1+\veg_2+\cdots+\veg_p$ for some $p\in \mathbb{Z}_{>0}$ and $\veg_j\in \ker_{\ensuremath{\mathbb{Z}}}(H)$, and furthermore, $\veg_j\sqsubseteq \bar{\veg}-\veg$ and $\|\veg_j\|_\infty\le (\Delta t_D \bar{\eta}_{\max})^{{\mathcal{O}}(s_D^2)}$. \end{lemma} \begin{remark*} If $\bar{\veg}^0=\ve0$, then Lemma~\ref{lemma:balance} holds for $\veg=\ve0$. \end{remark*} It suffices to focus on a balanced uniform decomposition. Further notice that if $\bar{\veeta}_{j_1}$ is tier-1 and $\bar{\veeta}_{j_2}$ is tier-0, then $\bar{\veeta}_{j_1}+\bar{\veeta}_{j_2}$ is tier-1. Hence, we have the following. \begin{lemma}\label{lemma:all-balance} If ${\veg}=\sum_{j=1}^{\bar{N}}\bar{\veeta}_j$ is an $\omega$-balanced uniform decomposition where $\bar{\eta}_{\max}=\max_{j\in[\bar{N}]}\|\bar{\veeta}_j\|_{\infty}$, then ${\veg}$ admits a uniform decomposition ${\veg}=\sum_{j=1}^{N}{\veeta}_j$ such that every $\veeta_j$ is tier-1, and ${\eta}_{\max}=\max_{j\in[{N}]}\|{\veeta}_j\|_{\infty}\le \omega\bar{\eta}_{\max}$. \end{lemma} We will prove the following Lemma~\ref{lemma:sub} in Section~\ref{subsec:lemma}. Then we show the existence of such a decomposition for combinatorial 4-block $n$-fold IP, thus concluding Theorem~\ref{11-n}, in Section~\ref{subsec:thm-proof}. \begin{lemma}\label{lemma:sub} Suppose ${\veg}\in \ker_{\ensuremath{\mathbb{Z}}}(H)$ admits a uniform decomposition ${\veg}=\sum_{j=1}^{N}{\veeta}_j$ such that $\|\veeta_j\|_{\infty}\le \eta_{\max}$, and every $\veeta_j$ is tier-1. There exists $\tau=(\Delta\eta_{\max})^{\Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}}$ such that if $\|\veg\|_{\infty}> \tau$, then there exists $\veeta\in \ker_{\ensuremath{\mathbb{Z}}}(H)$ such that $\veeta\sqsubseteq {\veg}$ and $\|\veeta\|_\infty\le \tau$, and furthermore, $\veeta^{0}=\sum_{j\in S} \veeta_j^0$ for some $S\subseteq [N]$. 
\end{lemma} \subsection{Proof of Lemma~\ref{lemma:sub}}\label{subsec:lemma} \subsubsection{A hierarchical structure over the bricks of ${\veg}$}\label{subsec:notion} As described in the overview, we will construct the $\veeta$ of Lemma~\ref{lemma:sub} from the bricks $\veeta_j^i$ via the \lq\lq cross-position\rq\rq\, construction. For each $\veeta^i$, there will be some restrictions regarding which brick $\veeta_j^{i'}$ can be used, indicated by the hierarchical structure we introduce in the following. We first observe that as $A_i$'s and $D_i$'s are small submatrices with the largest coefficient bounded by $\Delta$, there are in total at most $\Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}$ different kinds of $A_i$'s and $D_i$'s, and hence $\varphi\le \Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}={\mathcal{O}}_{FPT}(1)$ different pairs of $(A_i,D_i)$. By re-indexing, we may divide $[n]$ into $\varphi$ subsets as $[n]=\bigcup_{k=1}^{\varphi} [r_{k-1}+1:r_k]$ where $0=r_0<r_1<r_2<\cdots<r_{\varphi}=n$ such that $(A_i,D_i)$'s are identical for every $i\in [r_{k-1}+1:r_k]$. Let $I_0=\{0\}$ and $I_k=[r_{k-1}+1:r_k]$. For simplicity we let $D_{r_0}=D_0=C$; then $(C,D_1,D_2,\cdots,D_n)\veeta_j=\sum_{k=0}^{\varphi}\sum_{i\in I_k}D_{r_k}\veeta_j^i$. We define the {\it type} and {\it subtype} of integer vectors. Let $\sigma$ be some sufficiently large value (it suffices to take $\sigma= (\Delta \eta_{\max})^{\Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}}$, as we will explain later). We classify each integer $x$ into 5 {\it types}: (i) $0$, if $x=0$, (ii) close-positive, if $1\le x\le \sigma$, (iii) faraway-positive, if $x>\sigma$, (iv) close-negative, if $-\sigma\le x\le -1$, and (v) faraway-negative, if $x<-\sigma$. We can further classify all integers into $2\sigma+3$ {\it subtypes} by sub-dividing the types close-positive and close-negative into $\sigma$ categories each; that is, $x$ is of subtype-$x$ if $-\sigma\le x\le \sigma$, which together with the two faraway types gives $2\sigma+3$ subtypes. We now extend the definition of types and subtypes to vectors. All $d$-dimensional vectors can be classified into $5^d$ types (or $(2\sigma+3)^d$ subtypes) such that two vectors $\vex$ and $\vey$ belong to the same type (or subtype) as a vector if and only if for every $1\le \ell\le d$, the $\ell$-th coordinates of $\vex$ and $\vey$ have the same type (or subtype) as integers. Now we classify the indices $0\le i\le n$ based on ${\veg}$ as follows: \begin{itemize} \item Megazone. Each $I_k$, $0\le k\le \varphi$, is called a megazone. There are $\varphi+1=\Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}$ megazones. \item Zone. 
A megazone is sub-divided into zones such that indices $i,i'$ belong to the same zone if and only if they belong to the same megazone and ${\veg}^i$ and ${\veg}^{i'}$ have the same type. There are at most $1+5^{t_A}\varphi=\Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}$ different zones. For $0\le \nu\le 5^{t_A}\varphi$, let $\beta_\nu\in\ensuremath{\mathbb{Z}}_{\ge 0}$ be the number of indices belonging to zone-$\nu$. \item Subzone. A zone is sub-divided into subzones so that indices $i,i'$ belong to the same subzone if and only if they belong to the same zone and ${\veg}^i$ and ${\veg}^{i'}$ have the same subtype. There are at most $(2\sigma+3)^{t_A}\cdot (1+5^{t_A}\varphi)$ subzones. For $0\le \iota\le (2\sigma+3)^{t_A} (1+5^{t_A}\varphi)-1$, let $\gamma_{\iota}\in\ensuremath{\mathbb{Z}}_{\ge 0}$ be the number of indices belonging to subzone-$\iota$. \item Slot. Every index $0\le i\le n$ is called a slot. There are $n+1$ slots. \end{itemize} Figure~\ref{fig1} in Appendix~\ref{figure} illustrates the relationships among megazones, zones and subzones. It is worth noting that the number of zones, $1+5^{t_A}\varphi$, is independent of the value of $\sigma$; the parameter $\sigma$ only comes into play at the subzone level, which is crucial to our proof. Further, note that megazone-0 contains only one zone, this zone contains only one subzone, and this subzone contains only one slot, which is slot-0. For simplicity, we let slot-0 be in subzone-0 and zone-0. For ease of description, we will adopt the viewpoint of a {\it scheduling} problem. We view each brick $\veeta^i_j$ as a job, so there are $N(n+1)$ jobs. We assume there are $n+1$ machines (from machine~0 to machine~$n$), and think of each job $\veeta_j^i$ as a job originally scheduled on machine~$i$. Machines can be divided into megazones, zones and subzones based on their indices. A job (brick) that is originally scheduled on a machine in megazone-$k$ (or zone-$\nu$ or subzone-$\iota$, resp.) is called a megazone-$k$ (or zone-$\nu$ or subzone-$\iota$, resp.) job (brick). We add up jobs on each machine just like adding up vectors, so the load of machine~$i$ in the original schedule is ${\veg}^i$. Constructing a new vector $\veeta\sqsubseteq {\veg}$ is like rescheduling jobs. That is, we remove jobs from machines in the original schedule, and then select and re-assign a subset of suitable jobs to machines. By doing so, we obtain a partial schedule. The load of machine~$i$ in the partial schedule, which is the summation of jobs assigned to it, will be $\veeta^i$. We will take ${\veg}^i$ as the {\it capacity} of machine~$i$. If the summation of several jobs equals $\vex\sqsubseteq{\veg}^i$, then we say the jobs fit machine~$i$. To prove Lemma~\ref{lemma:sub}, we need to construct a partial schedule $\ve\eta$ such that (i) $H^{\textnormal{two-stage}}\ve\eta=\ve 0$, (ii) $(C,D_1,\cdots,D_n)\ve\eta=\ve 0$ and (iii) $\ve\eta\sqsubseteq \veg$. In Subsections~\ref{subsec:prop-1}, \ref{subsec:prop-2} and \ref{subsec:prop-3} we identify the conditions for the partial schedule to satisfy each property, respectively, and we finalize the proof of Lemma~\ref{lemma:sub} in Subsection~\ref{subsec:final-lemma}. \subsubsection{Selecting jobs to satisfy property (i) - $H^{\textnormal{two-stage}}\ve\eta=\ve 0$}\label{subsec:prop-1} Recall that the $A_i$'s are the same for $i$ in each megazone (and hence in each zone). For $\nu\ge 1$, let machine~$i$ be an arbitrary zone-$\nu$ machine and $\veeta_{j_1}^{i'}$ be an arbitrary zone-$\nu$ job. 
\subsubsection{Selecting jobs to satisfy property (i) - $H^{\textnormal{two-stage}}\ve\eta=\ve 0$}\label{subsec:prop-1} Recall that the $A_i$'s are the same for $i$ in each megazone (and hence in each zone). For $\nu\ge 1$, let machine~$i$ be an arbitrary zone-$\nu$ machine and $\veeta_{j_1}^{i'}$ be an arbitrary zone-$\nu$ job. Then $A_i\veeta_{j_1}^{i'}=-\veq$ by the definition in Eq~\eqref{eq5-extra}. If we put one zone-$\nu$ job $\veeta_{j_1}^{i'}$ on machine~$i$ and meanwhile put one zone-$0$ job $\veeta_{j_2}^0$ on machine~$0$, then it holds that $B\veeta_{j_2}^0+A_i\veeta_{j_1}^{i'}=\ve 0$. Hence, we have the following observation. \begin{observation}\label{obs:prop-i} Let $h$ be an arbitrary non-negative integer. Let $\ve\eta=(\veeta^0,\veeta^1,\cdots,\veeta^n)$ be a partial schedule where we assign an arbitrary set of $h$ zone-$\nu$ jobs to each zone-$\nu$ machine (i.e., for every $i$ in zone-$\nu$, $\veeta^i$ is the summation of $h$ zone-$\nu$ jobs). Then $H^{\textnormal{two-stage}}\ve\eta=\ve 0$. \end{observation} \subsubsection{Selecting jobs to satisfy property (ii) - $(C,D_1,\cdots,D_n)\ve\eta=\ve 0$}\label{subsec:prop-2} Recall that $D_0=C$ and $(D_0,D_1,\cdots,D_n)\sum_{j=1}^N\veeta_j=\ve 0$, which is a long sum consisting of $(n+1)N$ summands. We are interested in a subsequence whose sum is $\ve 0$ and which meanwhile respects Observation~\ref{obs:prop-i}; that is, we want to select exactly $h\beta_{\nu}$ jobs from zone-$\nu$ such that their sum (after multiplication by the corresponding $D_{i}$'s) is $\ve 0$ (recall that there are exactly $\beta_{\nu}$ zone-$\nu$ machines). Towards this, we first prove the following lemma, which gives a \lq\lq colorful\rq\rq\, version of the Steinitz Lemma. \begin{lemma}\label{lemma:color} Let $\vex_1,\ldots,\vex_{M}\in \mathbb{Z}^d$ be a sequence of vectors such that $\|\vex_i\|_{\infty}\le \zeta$ for some $\zeta\ge 1$ and every $i=1,\ldots,M$. Furthermore, there are $\mu$ colors, and each vector $\vex_i$ is associated with one color. There are in total $\alpha_j\overline{m}$ vectors of color $j$, where $\alpha_j,\overline{m}\in\ensuremath{\mathbb{Z}}_{>0}$ and $\sum_{j=1}^{\mu}\alpha_j=\alpha$, $M=\alpha \overline{m}$. Suppose that $\sum_{i=1}^{M}\vex_i=\ve0$ and $M$ is sufficiently large (i.e., $M>(2d\zeta+2\mu\zeta+1)^{d+\mu}\alpha+\alpha+d+\mu$); then among $\vex_1,\cdots,\vex_{M}$ we can find $\alpha_jm$ vectors of each color $j$ such that their summation is $\ve0$, and $m\le (2d\zeta+2\mu\zeta+1)^{d+\mu}$. \end{lemma} By the Steinitz Lemma, it is easy to see the existence of a subset of vectors that add up to $\ve0$. Lemma~\ref{lemma:color} further indicates that the number of vectors of each color in this subset is proportional to their number in the whole set of $M$ vectors. Notice that $\overline{m}$ and $m$ are independent of each other: $\overline{m}$ may be very large, while $m$ can be bounded by an FPT-value. See the proof in Appendix~\ref{appsec:color}.
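The following brute-force computation illustrates the statement of Lemma~\ref{lemma:color} on a toy instance; the numbers are ours and purely illustrative:

\begin{verbatim}
from itertools import combinations

# Toy instance with d = 1 and mu = 2 colors, alpha_1 = 1, alpha_2 = 2,
# mbar = 3 (so M = 9); the whole sequence sums to 0.
color1 = [2, 2, -4]             # alpha_1 * mbar = 3 vectors
color2 = [1, 1, -1, -1, 2, -2]  # alpha_2 * mbar = 6 vectors
assert sum(color1) + sum(color2) == 0

def colorful_zero_sum(c1, c2, m):
    """Search for m color-1 vectors and 2*m color-2 vectors summing to 0."""
    for s1 in combinations(c1, m):
        for s2 in combinations(c2, 2 * m):
            if sum(s1) + sum(s2) == 0:
                return s1, s2
    return None

print(colorful_zero_sum(color1, color2, 1))  # e.g. ((2,), (-1, -1))
\end{verbatim}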
Now we apply Lemma~\ref{lemma:color} to the equation $(D_0,D_1,\cdots,D_n)\sum_{j=1}^N\veeta_j=\sum_{i,j,\ell} D_\ell\veeta_j^i=\ve 0$ as follows. If $i$ belongs to some zone-$\nu$ (which further belongs to some megazone-$k$), then we take each summand $D_{\ell}\veeta_j^i$ as a vector in $\ensuremath{\mathbb{Z}}^{s_D}$ of color $\nu$. Consequently, we have in total $1+5^{t_A}\varphi=\Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}$ different colors, and $M=(n+1)N$ vectors where the number of vectors in each color $\nu$ is $N\beta_\nu$. Further notice that $\|D_{\ell}\veeta_j^i\|_{\infty}\le t_D\Delta\eta_{\max}$. Hence, as long as $M=(n+1)N>\rho (n+1)$ for $\rho=(\Delta \eta_{\max})^{\Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}}$, we can always find $m\beta_\nu$ summands in color $\nu$ (corresponding to $m\beta_\nu$ jobs in zone-$\nu$) such that $m\le(\Delta \eta_{\max})^{\Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}}$, and they sum up to $\ve0$. Moreover, Lemma~\ref{lemma:color} can be applied iteratively until there are fewer than $\rho(n+1)$ jobs left. Our argument above implies the following. \begin{lemma}\label{obs:prop-ii} There exist some $m,\rho=(\Delta \eta_{\max})^{\Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}}$ such that if $N>\rho$, then all the $(n+1)N$ jobs (bricks) can be divided into $\lfloor \frac{N-\rho}{m}\rfloor +1 =:\psi+1$ groups such that \begin{compactitem} \item Except for the last group, each group consists of $\beta_\nu m$ zone-$\nu$ jobs for all $\nu$. \item The last group consists of $\beta_\nu m'$ zone-$\nu$ jobs where ${m}'\le \rho+m$. \item If we evenly distribute the jobs in every group to machines such that a zone-$\nu$ machine is assigned $m$ jobs (or $m'$ jobs if it is the last group), then the partial schedule $\veeta$ satisfies $(C,D_1,\cdots,D_n)\veeta=\ve 0$. \end{compactitem} \end{lemma} \begin{remark*} Note that the number of zones, and thus $m,\rho,\psi$, are all independent of $\sigma$. We pick $\sigma\ge (\rho+m)\eta_{\max}$, which guarantees that when we evenly distribute the jobs in each group to machines, the infinity norm of their sum never exceeds $\sigma$. \end{remark*} Notice that since we assign the same number of zone-$\nu$ jobs to zone-$\nu$ machines, by Observation~\ref{obs:prop-i} the partial schedule $\veeta$ in Lemma~\ref{obs:prop-ii} also satisfies $H^{\textnormal{two-stage}}\veeta=\ve0$, and hence $H\veeta=\ve0$. \subsubsection{Selecting jobs to satisfy property (iii) - $\ve\eta\sqsubseteq {\veg}$}\label{subsec:prop-3} According to Lemma~\ref{obs:prop-ii}, by evenly distributing jobs to machines in each zone, every group of jobs induces a partial schedule $\veeta$. We show in this subsection that if there are sufficiently many groups, then there must be a group which induces $\veeta\sqsubseteq {\veg}$. For simplicity we ignore the last group and focus on the remaining groups. We first briefly argue why evenly distributing jobs to machines in each zone in an arbitrary way may generate a partial schedule that is $\not\sqsubseteq{\veg}$. Note that when we apply Lemma~\ref{lemma:color} to divide jobs into groups, we can only guarantee that there are $\beta_\nu m$ jobs from each zone-$\nu$ (and hence every machine in zone-$\nu$ can get exactly $m$ zone-$\nu$ jobs), but we cannot guarantee that there are $\gamma_{\iota} m$ jobs from each subzone-$\iota$. Hence, when we evenly distribute jobs, some machine in subzone-$\iota_1$ may get jobs from subzone-$\iota_2$. As the subtypes of the capacities in subzone-$\iota_1$ and subzone-$\iota_2$ are different, a job that fits a subzone-$\iota_2$ machine does not necessarily fit a subzone-$\iota_1$ machine. Note that megazone-$0$ only contains one zone (and one subzone). Thus all megazone-$0$ jobs (and in particular the megazone-$0$ jobs in each group) fit machine~$0$. From now on we only consider machine~$1$ to machine~$n$, and only consider groups of jobs which are not the last group. Consider machines and jobs in each zone-$\nu$. Since within a zone the ${\veg}^i$'s have the same type, we know that if some coordinate, say the $h$-th coordinate, of ${\veg}^i$ is $0$, then the $h$-th coordinate of any zone-$\nu$ job is also $0$.
Recall that we have set $\sigma\ge (m+\rho)\eta_{\max}$ to be sufficiently large such that if we add up any $m$ jobs, the absolute value of each coordinate of the sum is no more than $\sigma$. Hence, when we distribute jobs to machines in each zone-$\nu$, if the sum of $m$ jobs does not fit machine~$i$ (i.e., is $\not\sqsubseteq{\veg}^i$), then the violation must occur at some coordinate of ${\veg}^i$ which is close-positive or close-negative (i.e., with a value in $[1,\sigma]\cup [-\sigma,-1]$). We call all close-positive or close-negative coordinates of each ${\veg}^i$ critical coordinates. Recall that the $\veg^i$'s in the same zone share the same type, and hence the same critical coordinates. Let $CI_\nu=\{h_1^\nu,h_2^\nu,\cdots,h^\nu_{|CI_\nu|}\}$ be the set of critical coordinates for zone-$\nu$; that is, for any $i$ in zone-$\nu$, the $h_\ell^\nu$-th coordinate of ${\veg}^i$ falls in $[1,\sigma]\cup [-\sigma,-1]$. We consider the $h_\ell^\nu$-th coordinate of every job in zone-$\nu$. We say a job is {\it good} if its $h_\ell^\nu$-th coordinate is $0$ for {\it all} $1\le \ell\le |CI_\nu|$, and {\it bad} otherwise (i.e., its $h_\ell^\nu$-th coordinate is nonzero for some $\ell$). It is clear that good jobs never cause trouble, in the sense that any $m$ good jobs in zone-$\nu$ fit a zone-$\nu$ machine. It thus suffices to consider the scheduling of bad jobs. Recall that there are $\gamma_{\iota}$ slots (and hence $\gamma_{\iota}$ machines) in each subzone-$\iota$. We say a group is {\it bad} in subzone-$\iota$ if it contains more than $\gamma_{\iota}$ bad jobs in subzone-$\iota$, and is {\it good} if it is not a bad group in {\it any} subzone. We have the following lemmas regarding good and bad groups. \begin{lemma}\label{lemma:good} If a group is good and is not the last group in Lemma~\ref{obs:prop-ii}, then there is an assignment of jobs to machines such that the partial schedule $\veeta$ satisfies $H\veeta=\ve0$, $\|\veeta\|_{\infty}\le m\eta_{\max}$ and $\veeta\sqsubseteq {\veg}$. \end{lemma} \begin{proof} Notice that a good group does not necessarily contain exactly $m\gamma_\iota$ jobs in each subzone-$\iota$, but it contains no more than $\gamma_{\iota}$ bad jobs in each subzone-$\iota$. Hence, we reschedule jobs to obtain a partial schedule in which every machine in subzone-$\iota$ is assigned at most one bad job in subzone-$\iota$, together with $m-1$ or $m$ good jobs in zone-$\nu$ (the zone that contains subzone-$\iota$). We claim that this partial schedule $\veeta$ satisfies Lemma~\ref{lemma:good}. First, by Lemma~\ref{obs:prop-ii} we know that the jobs in every zone-$\nu$ are evenly distributed among the machines in zone-$\nu$, hence $H\veeta=\ve0$. Next, by definition a subzone-$\iota$ job is originally scheduled on a subzone-$\iota$ machine, hence in the rescheduling it either stays at the original machine or moves to another subzone-$\iota$ machine. By the definition of a subzone, all machines in subzone-$\iota$ share the same values on critical coordinates. This means that a single bad job in subzone-$\iota$ fits any machine in subzone-$\iota$. Recall that the critical coordinates of a good job always have value $0$, so $m$ good jobs, or a bad job together with $m-1$ good jobs, fit any machine in subzone-$\iota$. Hence, $\veeta\sqsubseteq {\veg}$. \end{proof} In the meantime, there are not too many bad groups, as implied by the following lemma. \begin{lemma}\label{lemma:bad} The total number of bad groups is bounded by $(2\sigma+3)^{t_A}(1+5^{t_A}\varphi)\sigma t_A$.
\end{lemma} \begin{proof} Consider any slot $i$ in a subzone-$\iota$; there are $|CI_\nu|$ critical coordinates. Let ${\veg}^i=({\veg}^i[1],{\veg}^i[2],\cdots,{\veg}^i[t_A])$. Recall that there are $\gamma_\iota$ slots (indices) in subzone-$\iota$. Considering the sum of absolute values over the critical coordinates of the ${\veg}^i$'s in each subzone-$\iota$, we have $$\sum_{i\in \textnormal{subzone}-\iota}\sum_{h\in CI_\nu}|{\veg}^i[h]|\le |CI_\nu|\sigma\gamma_{\iota}\le \sigma\gamma_{\iota} t_A.$$ Note that every bad job in subzone-$\iota$ lies in the same orthant as the capacities and has a nonzero value at some critical coordinate, and must thus contribute at least $1$ to the above sum. Recall that a bad group must be bad in at least one subzone, and any bad group in subzone-$\iota$ contains more than $\gamma_{\iota}$ bad jobs in subzone-$\iota$. Hence, a bad group in subzone-$\iota$ contributes at least $\gamma_{\iota}$ in total, which implies that there can be at most $\sigma t_A$ bad groups in subzone-$\iota$. Given that there are $(2\sigma+3)^{t_A}(1+5^{t_A}\varphi)$ subzones, there can be at most $(2\sigma+3)^{t_A}(1+5^{t_A}\varphi)\sigma t_A$ bad groups, and Lemma~\ref{lemma:bad} is proved. \end{proof} \subsubsection{Finalizing the proof of Lemma~\ref{lemma:sub}}\label{subsec:final-lemma} By Lemma~\ref{obs:prop-ii}, except for the last group, there are $\psi=\lfloor\frac{N-\rho}{m}\rfloor$ groups, each of which is either bad or good. By Lemma~\ref{lemma:bad} there are at most $(2\sigma+3)^{t_A}(1+5^{t_A}\varphi)\sigma t_A=(\Delta\eta_{\max})^{\Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}}$ bad groups; hence if $\frac{N-\rho}{m}\ge (2\sigma+3)^{t_A}(1+5^{t_A}\varphi)\sigma t_A+1$, there will be at least one good group, and by Lemma~\ref{lemma:good} it induces some $\veeta$ such that ${\veeta}\sqsubseteq {\veg}$, $\|\veeta\|_{\infty}\le m\eta_{\max}\le \tau$ and $H\veeta=\ve0$. Further notice that only zone-0 jobs will be put on machine~$0$, hence $\veeta^0$ is the summation of some $\veeta_j^0$'s. Thus Lemma~\ref{lemma:sub} is proved. \subsection{Proof of Theorem~\ref{11-n}}\label{subsec:thm-proof} Now we are ready to prove Theorem~\ref{11-n}. Consider an arbitrary $\veg\in \ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$. As $\veg\in\ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}}^{\textnormal{two-stage}})$, we know there exists a decomposition $\veg=\sum_j\ve\xi_j$ where $\ve\xi_j\in\ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}}^{\textnormal{two-stage}})$, $\|\ve\xi_j\|_{\infty}={\mathcal{O}}_{FPT}(1)$ and $\ve\xi_j\sqsubseteq \veg$. But when can we guarantee that this leads to a uniform decomposition? We observe that the $B\ve\xi_j^0$'s are integers when $s_B=1$, and $B\ve\xi_j^0+A_i\ve\xi_j^i= 0$. If we aim for a uniform decomposition by merging $\ve\xi_j$'s, then the question becomes whether we can partition the $\ve\xi_j$'s into different groups such that the $B\ve\xi_j^0$'s within each group sum up to the same value (bounded by ${\mathcal{O}}_{FPT}(1)$). An even partition need not exist, but we have the following sufficient condition. \begin{lemma}\label{lemma:number} Let $x_1,x_2,\cdots,x_m\in \ensuremath{\mathbb{Z}}$ and $\zeta\in\ensuremath{\mathbb{Z}}_{>0}$ be integers such that $|x_i|\le \zeta$ for $i\in [m]$ and $\sum_{i=1}^m x_i=x$.
If $x$ is a multiple of $(6\zeta^2+2\zeta+1)!$, then the $m$ integers can be partitioned into $m'$ subsets $T_1,T_2,\cdots,T_{m'}$ such that $\bigcup_{k=1}^{m'}T_k=[m]$, and for all $k\in [m']$ it holds that $|T_k|\le 2^{{\mathcal{O}}(\zeta^2\log\zeta)}$ and $\sum_{i\in T_k}x_i\in \{0,\mathrm{sgn}(x)\cdot (6\zeta^2+2\zeta+1)!\}$, where $\mathrm{sgn}$ denotes the standard sign function, i.e., $\mathrm{sgn}(x)=1$ if $x>0$, $\mathrm{sgn}(x)=-1$ if $x<0$, and $\mathrm{sgn}(x)=0$ if $x=0$. \end{lemma} With Lemma~\ref{lemma:number}, we are able to prove the following. \begin{lemma}\label{lemma:dec-1} Let $\veg\in \ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$. Let \begin{eqnarray*} \lambda=(6\lambda_0^2+2\lambda_0+1)!=2^{2^{2^{{\mathcal{O}}(t_B^2\log \Delta)}}}, \textnormal{ where } \lambda_0:=\Delta t_B g_{\infty}(H_{\textnormal{com}}^{\textnormal{two-stage}})=2^{2^{{\mathcal{O}}(t_B^2\log \Delta)}}. \end{eqnarray*} If $B\veg^0$ is a multiple of $\lambda$, then $\veg$ admits a uniform decomposition $\veg=\sum_{j=1}^N\veeta_j$ such that $\|\veeta_j\|_{\infty}\le 2^{2^{2^{{\mathcal{O}}(t_B^2\log \Delta)}}}$. Furthermore, $B\veeta_j^0$ is a multiple of $\lambda$ for all $j$. \end{lemma} Now we are ready to prove our main theorem. \begin{proof}[Proof of Theorem~\ref{11-n}] Consider any $\veg\in \ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$. Clearly $B(\lambda\veg^0)$ is a multiple of $\lambda$, thus by Lemma~\ref{lemma:dec-1}, $\lambda\veg$ admits a uniform decomposition $\lambda\veg = \sum_{j=1}^N\veeta_j$ such that $\|\veeta_j\|_{\infty}\le \eta_{\max}=2^{2^{2^{{\mathcal{O}}(t_B^2\log \Delta)}}}$ and every $B\veeta_j^0$ is a multiple of $\lambda$. If this decomposition is not $\omega$-balanced for $\omega\le (\Delta t_D\eta_{\max})^{{\mathcal{O}}(s_D^2)}$, then by Lemma~\ref{lemma:balance} we obtain $\veeta\sqsubseteq \lambda\veg$ with $\|\veeta\|_{\infty}\le (\Delta t_D\eta_{\max})^{{\mathcal{O}}(s_D^2)}$, $\veeta\in \ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$ and $B\veeta^0=\ve 0$, which is a multiple of $\lambda$. Otherwise this decomposition is $\omega$-balanced. By Lemma~\ref{lemma:all-balance}, we can obtain a uniform decomposition $\lambda\veg = \sum_{j=1}^{N'}{\veeta}_j'$ such that $\max_j\|{\veeta}_j'\|_{\infty}\le \omega\eta_{\max}$ and all the $\veeta_j'$'s are tier-1. According to Lemma~\ref{lemma:sub}, if $\lambda\|\veg\|_{\infty}>\tau$ for $\tau=(\omega\Delta\eta_{\max})^{\Delta^{{\mathcal{O}}(s_At_A+s_Dt_D)}}=2^{2^{{\mathcal{O}}(s_At_A\log\Delta+s_Dt_D\log\Delta)}\cdot 2^{2^{{\mathcal{O}}(t_B^2\log\Delta)}}}$, then we are able to find some $\veeta\sqsubseteq \lambda\veg$ such that $H_{\textnormal{com}}\veeta=\ve 0$, $\|\veeta\|_\infty={\mathcal{O}}_{FPT}(1)$ and $\veeta^0=\sum_{j\in S}\veeta_j^0$ for some $S\subseteq [N]$. As every $B\veeta_j^0$ is a multiple of $\lambda$, $B\veeta^0$ is also a multiple of $\lambda$. In both cases, we find $\veeta\sqsubseteq \lambda\veg$ such that $B\veeta^0$ is a multiple of $\lambda$. Now consider $\lambda\veg-\veeta$. Obviously $\lambda\veg-\veeta\in \ker_{\ensuremath{\mathbb{Z}}}(H_{\textnormal{com}})$, and it is easy to see that $B(\lambda\veg^0-\veeta^0)$ is a multiple of $\lambda$. Thus, if $\|\lambda\veg-\veeta\|_{\infty}> \tau$ we can continue to decompose $\lambda\veg-\veeta$ using the argument above. Observing that $s_A=1$, Theorem~\ref{11-n} is proved. \end{proof} \begin{remark*} Theorem~\ref{11-n} is also true for a slight generalization of combinatorial 4-block $n$-fold IP.
In $H_{\textnormal{com}}$, suppose $B$ is not a $1\times t_B$ matrix, but rather an $s_B\times t_B$ matrix of the form $(\ver_1,\ve0,\cdots,\ve0)^{\top}$, that is, except for the first row $\ver_1^{\top}$, all the other rows of $B$ are $\ve0$. We call such an IP an \emph{almost combinatorial 4-block $n$-fold IP}. For such $B$, we observe that for any $\vex\in\ensuremath{\mathbb{Z}}^{t_B}$, $B\vex=(\ver_1\cdot\vex,\ve0,\cdots,\ve0)$. Hence, our argument in the proof above applies directly, i.e., Theorem~\ref{11-n} holds for almost combinatorial 4-block $n$-fold IP (see Appendix~\ref{appsec:almost} for a formal proof). In other words, Theorem~\ref{11-n} as well as our FPT algorithm for combinatorial 4-block $n$-fold IP can be extended to the case where $\text{rank}(B)=1$. Such a generalization allows the submatrices $A_i$ to contain multiple rows, provided that the additional rows encode only ``local constraints''. \end{remark*} \section{Algorithms}\label{sec:alg} Using Theorem~\ref{11-n}, we are able to bound the $\ell_\infty$-norm of the Graver basis elements: \begin{theorem}\label{coro:graver} Let $\veg\in\ensuremath{\mathcal{G}}(H_{\textnormal{com}})$ be a Graver basis element; then $\|\ve g\|_\infty\le g_{\infty}(H_{\textnormal{com}})$ where $g_{\infty}(H_{\textnormal{com}})\le 2^{2^{{\mathcal{O}}(t_A\log\Delta+s_Dt_D\log\Delta)}\cdot 2^{2^{{\mathcal{O}}(t_B^2\log\Delta)}}}\cdot n={\mathcal{O}}_{FPT}(n).$ \end{theorem} Utilizing Theorem~\ref{coro:graver} and the iterative augmentation framework (see Appendix~\ref{appsec:pre}), we are able to prove the following theorem. \begin{theorem}\label{thm20} Consider combinatorial 4-block $n$-fold IP with a separable convex objective function $f$ mapping $\ensuremath{\mathbb{Z}}^{t_B+nt_A}$ to $\ensuremath{\mathbb{Z}}$. Let $P$ be the set of feasible integral points, and let $\hat{f}:=\max_{x,y\in P}(f(x)-f(y))$. Then it can be solved in $2^{2^{{\mathcal{O}}(t_A\log\Delta+s_Dt_D\log\Delta)}\cdot 2^{2^{{\mathcal{O}}(t_B^2\log\Delta)}}}\cdot n^{3}\hat{L}^2\log(\hat{f})={\mathcal{O}}_{FPT}(n^3\hat{L}^2 \log(\hat{f}))$ time, where $\hat{L}$ denotes the logarithm of the largest number occurring in the input. \end{theorem} The running time can be improved if the objective function is linear.
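For intuition, the augmentation framework behind Theorem~\ref{thm20} can be sketched as follows. This is a minimal sketch, not the full algorithm; \texttt{graver\_best} stands for a hypothetical oracle returning an improving step (a Graver basis element times a step length) that preserves feasibility, and the norm bound of Theorem~\ref{coro:graver} is what makes such an oracle efficiently realizable:

\begin{verbatim}
def augment(x, graver_best):
    """Greedy Graver-best augmentation for a feasible point x."""
    while True:
        g = graver_best(x)      # improving step, or None at optimality
        if g is None:
            return x            # no augmenting step: x is optimal
        x = [xi + gi for xi, gi in zip(x, g)]
\end{verbatim}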
\section{Applications in Scheduling with High Multiplicity}\label{appli} It has been shown by Knop and Kouteck{\`y}~\cite{knop2018scheduling} that the classical scheduling problems $R||C_{\max}$ and $R||\sum_{\ell}w_\ell C_\ell$ can be modeled as $n$-fold IPs, based on which FPT algorithms can be developed. However, when we try to model more sophisticated scheduling problems, especially scheduling with rejection $R||C_{\max}+E$ or bicriteria scheduling $R||\theta C_{\max}+\sum_{\ell}w_\ell C_\ell$, we run into 4-block $n$-fold IP. This is because for these problems $C_{\max}$ needs to be taken as a variable in the IP, while for $R||C_{\max}$ we can use binary search on $C_{\max}$, and hence $n$-fold IP is sufficient. We formally describe the scheduling problem. Given are $m$ machines and $k$ different types of jobs, with $N_j$ jobs of type $j$. A job of type $j$ has a processing time of $p^i_j\in\ensuremath{\mathbb{Z}}_{\ge 0}$ if it is processed by machine~$i$. For scheduling with rejection $R||C_{\max}+E$, every job of type $j$ also has a rejection cost $u_j$. A job is either processed on one of the machines, or is rejected. The goal is to minimize the makespan $C_{\max}$ plus the total rejection cost $E$ (a toy instance is worked out in the sketch below). \begin{theorem}\label{thmmm} $R||C_{\max}+E$ can be solved in $m^{3+o(1)} 2^{2^{{\mathcal{O}}(k^2\log p_{\max})}\cdot 2^{2^{{\mathcal{O}}(\log p_{\max})}}}|I|$ time, where $|I|$ denotes the length of the input. \end{theorem} More precisely, $|I|$ is bounded by ${\mathcal{O}}(kp_{\max} (\max\{\log N_{\max}, \log u_{\max}\}))$ where $N_{\max}=\max_j N_j$ and $u_{\max}=\max_j u_j$. One may suspect that the problem can be solved through generalized $n$-fold IP by guessing the value of $C_{\max}$; however, this would require $p_{\max}\cdot \max_jN_j$ enumerations. See Appendix~\ref{sche1} for a detailed proof of Theorem~\ref{thmmm}. For bicriteria scheduling $R||\theta C_{\max}+\sum_{\ell}w_\ell C_\ell$, each job $\ell$ of type $j$ has a weight $w_j$, and the goal is to find an assignment of jobs to machines such that $\theta C_{\max}+\sum_{\ell}w_\ell C_\ell$ is minimized, where $C_\ell$ is the completion time of job $\ell$, and $\theta$ is a fixed input value. \begin{theorem}\label{them18} $R||\theta C_{\max}+\sum_{\ell}w_\ell C_\ell$ can be solved in $m^3 2^{2^{{\mathcal{O}}(k^2\log p_{\max})}\cdot 2^{2^{{\mathcal{O}}(\log p_{\max})}}}|I|^3$ time, where $|I|$ denotes the length of the input. \end{theorem} More precisely, $|I|$ is bounded by ${\mathcal{O}}(kp_{\max} (\max\{\log N_{\max},\log w_{\max}\}))$ where $N_{\max}=\max_j N_j$, $w_{\max}=\max_j w_j$. See Appendix~\ref{sche2} for a detailed proof of Theorem~\ref{them18}.
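To make the objective concrete, the following brute force solves a tiny, hypothetical instance of $R||C_{\max}+E$; it is a reference computation only, not the FPT algorithm behind Theorem~\ref{thmmm}:

\begin{verbatim}
from itertools import product

p = [[2, 3], [1, 4], [3, 1]]  # p[j][i]: time of job j on machine i
u = [2, 5, 4]                 # u[j]: rejection cost of job j
m = 2

best = float("inf")
# each job is assigned a machine index, or m meaning "rejected"
for choice in product(range(m + 1), repeat=len(p)):
    load, E = [0] * m, 0
    for j, c in enumerate(choice):
        if c == m:
            E += u[j]
        else:
            load[c] += p[j][c]
    best = min(best, max(load) + E)
print(best)  # -> 3 for this instance
\end{verbatim}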
For identical machines, $k\le p_{\max}$ and we obtain FPT algorithms parameterized by $p_{\max}$. \clearpage
\section{Introduction} It has been known for some time now that Supersymmetric [SUSY] Grand Unified Theories [GUTs]\cite{DRW} $\bullet$ suppress the non-SUSY GUT prediction for proton decay, via heavy gauge boson exchange, by increasing the GUT scale, and $\bullet$ lead to a 10\% increase (for one pair of Higgs doublets) in the GUT prediction for the weak mixing angle, $\sin^2\theta_W$, compared to non-SUSY GUTs. Now, since $\sin^2\theta_W$ is known to better than 1\% accuracy, one naturally uses the values of the two low energy parameters, the fine structure constant, $\alpha$, and $\sin^2\theta_W$, as input to determine the GUT coupling, $\alpha_G$, and the GUT scale, $M_G$, and predicts the value of the strong coupling, $\alpha_s$. [Note, all low energy parameters are typically evaluated at the scale $M_Z$.] This prediction is in excellent agreement with the low energy data\cite{webber,el,lp}. The global fit to $\alpha_s$ from measurements at all energies\cite{webber} gives $\alpha_s = 0.117 \pm 0.005$ with low (high) energy measurements favoring values of $\alpha_s \le (\ge)\; 0.12$. A global fit to all electroweak data\cite{el} gives $\alpha_s = 0.127 \pm 0.005 \pm 0.002 \pm 0.001$. In comparison, SUSY GUTs predict\cite{lp} $\alpha_s = 0.127 \pm 0.005 \pm 0.002$, where the first error is the uncertainty in the low energy sparticle spectrum, and the second, the top and Higgs masses. Note that the non-supersymmetric GUT prediction\cite{langacker} $\alpha_s = 0.073 \pm 0.001 \pm 0.001$ is clearly ruled out. In addition, SUSY GUTs contain new sources of proton decay (dimension 5 operators) mediated by color triplet Higgs fermions\cite{wsy,PDECAY}. They typically lead to the dominant decay mode $p \rightarrow K^+ \bar{\nu}$ with the lifetime $\tau_p \propto \tilde{M}_t^2$, where $\tilde{M}_t$ is an effective color triplet mass. These results are valid for any grand unified symmetry group, as long as SU(3)$\times$ SU(2)$\times$ U(1) is embedded in an unbroken simple subgroup above the GUT scale. Consideration of fermion masses, however, leads us to the special grand unified symmetry group SO(10)\cite{SO(10)}. It is the unique group which combines all the fermions of one family \{u, d, e, $\nu$\} into one irreducible 16 dimensional representation with the addition of only one electroweak singlet state, $\bar{\nu}$, the so-called right-handed neutrino. Consequently, SO(10) Clebschs can be used to relate all charged fermion mass matrices. In this context, a paper by Anderson et al.\cite{ADHRS} [Paper I] showed how all charged fermion masses and mixing angles can be reasonably described in terms of four dominant effective mass operators at $M_G$. Hall and one of us (S.R.)\cite{hr} [Paper II] then showed how to extend the effective theory at $M_G$ to a complete SO(10) SUSY GUT valid up to the Planck or string scales, $M$. That theory included several sectors necessary for -- (1) GUT symmetry breaking, (2) doublet-triplet splitting (Higgs sector), (3) charged fermion masses (fermion mass sector) and also (4) giving a large mass to the electroweak singlet neutrinos (neutrino mass sector). It contained sufficient symmetry to be ``natural."
Hence, it included all operators consistent with the symmetries and only contained the SO(10) states\footnote{These two features are consistent with a stringy origin to such a model\cite{CCL}.} --- \begin{equation}(n_{16} + 3)\;{\bf 16} + n_{16} \; \bar{\bf 16} + n_{10} \; {\bf 10} + n_{45} \; {\bf 45} + n_{54} \; {\bf 54} + n_1 \; {\bf 1} \label{eq:states} \end{equation} with $n_{16}, n_{10}, n_{45}, n_{54}, n_1$ specific integers. Finally, the effective mass operators of paper I were recovered upon integrating out states with mass greater than $M_G$. In any complete GUT, the prediction for $\alpha_s$ receives corrections at one loop from thresholds at the weak scale, including the sparticle spectrum, and at the GUT scale from states with mass of order $M_G$. The dominant contribution at $M_G$ comes from states in the symmetry breaking and Higgs sectors. The Higgs sector, moreover, also affects the proton decay rate: the effective color triplet Higgs fermion mass, $\tilde{M}_t$, must be significantly larger than $M_G$ in order to increase the proton lifetime, while the Higgs doublets must remain massless at the GUT scale. In this paper, $\bullet$ we show that the threshold corrections due to doublet-triplet splitting, in the simplest models, increase the predicted value for $\alpha_s(M_Z)$. Thus, while the GUT predictions for $\alpha_s$ and $\tau_p$ are unrelated at the tree level, at one loop the two effects are coupled. One cannot suppress proton decay without at the same time increasing the prediction for $\alpha_s$. We then show that the model of paper II is ruled out by this effect. $\bullet$ We also present a new complete SUSY GUT which appears to be consistent with all low energy data at the $1 \sigma$ level\cite{bcrw}. This model in fact has a much simpler GUT symmetry breaking sector, with fewer states, than that of paper II. We present some typical results for proton decay rates in this model. In a future paper\cite{lr}, we will present a more detailed study. \section{One loop threshold corrections at $M_G$} One loop threshold corrections at $M_G$ for gauge couplings are given by\cite{threshold} \begin{equation} \alpha_i^{-1}(M_G)=\alpha_{G}^{-1}-\Delta_i \end{equation} where $\Delta_i$ is the leading log threshold correction to $\alpha_i$. Furthermore, \begin{equation} \label{eq:1} \Delta_i={1 \over 2 \pi} \sum_\zeta b_i^\zeta \log \abs{M_\zeta \over M_G} \end{equation} where the sum is over all superheavy particles and $b_i^\zeta$ is the contribution the superheavy particle $\zeta$ would make to the beta function coefficient $b_i$ if the particle were not integrated out at $M_G$. At one loop the definition of the GUT scale is somewhat arbitrary. In order to avoid large logarithms it should certainly be near the geometric mean of the heavy masses; beyond that, we are free to choose its value. A particularly convenient choice is to define $M_G$ as the scale where the two gauge couplings, $\alpha_i, \;\; i = 1,2$, meet. At this point $\Delta_1 = \Delta_2$ and we define \begin{equation}\tilde\alpha_G \equiv \alpha_1(M_G) = \alpha_2(M_G).\end{equation} Then define \begin{equation}\epsilon_3 \equiv (\alpha_3(M_G) - \tilde\alpha_G)/\tilde\alpha_G, \end{equation} i.e. the relative shift in $\alpha_3$ at $M_G$. In general, a value of $\epsilon_3 \sim - (2 - 3 \%)$ is needed to obtain $\alpha_s \sim \; 0.12$ (see the illustrative sketch below).
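To get a feeling for the size of the effect, the following one loop sketch propagates a given $\epsilon_3$ to $\alpha_s(M_Z)$. The MSSM coefficient $b_3=-3$ is standard; the numerical inputs for $\tilde\alpha_G$ and $M_G$ are illustrative assumptions rather than fitted values:

\begin{verbatim}
import math

def alpha_s_MZ(eps3, alphaG=1/23.6, MG=2.0e16, MZ=91.19, b3=-3.0):
    # alpha_3(MG) = alphaG * (1 + eps3); run down at one loop
    inv_a3 = 1.0 / (alphaG * (1.0 + eps3))
    return 1.0 / (inv_a3 + (b3 / (2 * math.pi)) * math.log(MG / MZ))

print(alpha_s_MZ(0.0))    # ~0.128: too large without threshold corrections
print(alpha_s_MZ(-0.025)) # ~0.118: a -2.5% shift brings alpha_s near 0.12
\end{verbatim}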
\subsection{Formula for $\epsilon_3$ in a general SO(10) theory} The most general SO(10) SUSY GUT we consider includes the complete multiplets listed in equation (\ref{eq:states}) with arbitrary values of $n_{16}, n_{10}, n_{45}, n_{54}, n_1$. These states can be decomposed into their SU(3) $\times$ SU(2) $\times$ U(1) content by first considering the decomposition to SU(5) [see table ~\ref{t:su5decomp}] and then the rest of the way \cite{slansky}. We have derived a general formula for $\epsilon_3$ in this case (see appendix 1) given by the expression (valid to lowest order in $\tilde\alpha_G$) \begin{eqnarray} \epsilon_3 & = & {\tilde\alpha_G \over 2 \pi} \sum_\gamma[(b_3^\gamma - b_2^\gamma)- {1 \over 2}(b_2^\gamma - b_1^\gamma)] \log \abs{{\det}' {M_\gamma \over M_G}} \label{eq:epsilon} \end{eqnarray} where \begin{equation} {\det}' M_\gamma = \left\{ \matrix{\det M_\gamma \hfill & {\rm if} & \gamma=t,g,w,s,\sigma \hfill \cr \det M_\gamma \over {(M_{gaugino}^\gamma)^4} \hfill & {\rm if} & \gamma=q,u,e,x \hfill \cr \det M'_d & {\rm if} & \gamma = d \cr} \right. \label{eq:effdet} \end{equation} where $M'_d$ is defined as the doublet mass matrix $M_d$ with the massless Higgs doublets projected out.\footnote{Our notation for the states discussed above is found in table ~\ref{t:states}.} Note, $\epsilon_3$ only depends on the number of states in the theory through the mass matrices $M_\gamma$ in each charge sector $\gamma$. Moreover, the effective determinants, defined above, explicitly take into account the special contributions of vector multiplets to the threshold corrections. Putting the values of $b_i^\gamma$ for $i = 1,2,3$ into eqn. (\ref{eq:epsilon}), we find $$\epsilon_3={\tilde\alpha_G \over \pi} \biggl( {3\over 2}\log\abs{{\det}'{M_g \over M_G}} - {3\over2}\log\abs{{\det}'{M_w \over M_G}} \biggr.$$ $$ +{33\over10}\log\abs{{\det}'{M_s \over M_G}} -{21\over10}\log\abs{{\det}'{M_\sigma \over M_G}} + {9\over10}\log\abs{{\det}'{M_u \over M_G}} +{3 \over 10}\log\abs{{\det}'{M_e \over M_G}} - {6\over5}\log\abs{{\det}'{M_q \over M_G}}$$ \begin{equation} \biggl. -{3 \over 5}\log\abs{{\det}'{M_d \over M_G}} + {3\over5}\log\abs{{\det}'{M_t \over M_G}} \biggr) . \label{eq:grandexpression} \end{equation} The dominant contribution to $\epsilon_3$ comes from the GUT symmetry breaking and electroweak Higgs sectors with superspace potential $W_{sym\, breaking}$ and $W_{Higgs}$, respectively. An additional small contribution comes from the fermion mass sector. These sectors are, for reasons of ``naturalness," necessarily invariant under several U(1) symmetries (including an R symmetry). In appendix 2 we show that these symmetries impose stringent constraints on the form of $\epsilon_3$. In brief, $\epsilon_3$ is implicitly a function of the vacuum expectation values [vevs] of fields in the symmetry breaking sector. These vevs transform under the U(1) and R symmetries. As a consequence of the invariance, we find \begin{equation} {\epsilon}_3=f(\zeta_1,\ldots,\zeta_m) + {3 \tilde\alpha_G \over 5\pi}\log \abs{{{\det}\bar M_t \over M_G {\det} \bar M'_d}} + \cdots \label{eq:ep3} \end{equation} where the first term represents the contributions from $W_{sym\, breaking}$. It is only a function of U(1) and R invariant products of powers of vevs \{$\zeta_i$\}. The second term, coming from the Higgs sector, is discussed further in the next section and the ellipsis refers to the small additional contribution arising from the fermion mass sector.
Note, $\bar M_t,\; \bar M'_d$ only include those states, from $5$s and $\bar 5$s of SU(5) contained in $W_{sym\, breaking}$ and $W_{Higgs}$, {\em which mix with the Higgs sector} (see the next section). \subsection{The dependence of $\epsilon_3$ on the Higgs sector} We assume the Higgs sector includes any number of {\bf 10}s with, for simplicity, only one of these, say ${\bf 10_1}$, coupling to light fermions. We also include the doublet-triplet splitting mechanism introduced by Dimopoulos-Wilczek~\cite{DW}. Accordingly, the terms in the superspace potential relevant for doublet - triplet splitting are of the form \begin{equation} W_{d-t}= \bar{5}_2\; [a_1 \,{3 \over 2} \;(B~-~L)] \;5_1 - \bar{5}_1 \;[a_1 \,{3 \over 2}\;(B~-~L)]\; 5_2+ \sum_{i, j \ge 2}M_{ij} \;\bar{5}_i\; 5_j \label{eq:Wdt} \end{equation} where $a_1 \,{3 \over 2}\; (B~-~L)$ is the vev of the field $A_1$ in the 45 dimensional representation and $(B~-~L)$ is the [Baryon - Lepton number] charge matrix (see eqn. (\ref{eq:a1})). Thus, since the doublets in $5_1, \bar{5}_1 \subset 10_1$ have zero $B~-~L$, they remain massless. [Note, some of the $\bar{5}_i \,,5_j$ states may come from $16$ and $\overline{16}$ representations, respectively, in $W_{sym\, breaking}$. We include, however, only those states which mix with $10_1$.] With this superpotential, we have $ \bar M'_d \equiv M[d]$, and the triplet mass matrix, $\bar M_t$, with non-vanishing determinant, includes the terms which mix states in $10_1$ with $\bar{5}_i ,\;5_j,\; {\rm for} \;i,j \ge 2$ and the sub-matrix $M[t]$, where the matrix $M[d],\; (M[t])$ is given by $M$ with Clebschs appropriate for doublets (triplets). Proton decay in this model is mediated by the Higgses in $10_1$. Hence, the coefficient of the resulting effective dimension 5 baryon violating operators\cite{wsy,PDECAY} is given by the inverse of the triplet mass matrix in the 11 direction, i.e. $$(M^{-1}_t)_{11} \equiv {{\det} M[t] \over {\det} \bar M_t} = {{\det} \bar M'_d \over {\det} \bar M_t} \times g $$ where $g \equiv { {\det} M[t] \over {\det} M[d]}$. We show, in appendix 2, that $ g = g(\zeta_1, \cdots, \zeta_m) $ is a holomorphic function of the set of U(1) and R invariant products of powers of vevs. If we now define the effective triplet mass by $\tilde{M_t}^{-1} \equiv (M^{-1}_t)_{11}$, we obtain the final form for the factor in eqn. (\ref{eq:ep3}) -- ${{\det} \bar M_t \over {\det} \bar M'_d} = \tilde M_t \; g(\zeta_1, \cdots, \zeta_m)$. As a consequence, eqn. ~(\ref{eq:ep3}) becomes \begin{equation} {\epsilon}_3=F(\zeta_1,\ldots,\zeta_m) + {3 \tilde\alpha_G \over 5\pi}\log \abs{{ \tilde{M_t} \over M_G}} + \cdots \end{equation} where $f$ and $g$ are absorbed in the function $F$. We thus find that suppressing the proton decay rate by increasing the ratio ${\tilde{M_t} \over M_G}$ has the effect of increasing the value of $\epsilon_3$ and consequently increasing the value of $\alpha_s$.\footnote{This observation in the context of minimal SU(5), where in that case $\tilde M_t$ is the color triplet Higgs mass, was discussed earlier by Hisano et al.\cite{PDECAY}.} \subsection{An Example : $\epsilon_3$ in the model of paper II} As an example, for the particular model of paper II, there is only one independent invariant ratio of the GUT scale vevs given by $\zeta= {{v \overline v}\over a_1 a_2}$. In principle, $\epsilon_3$ could depend on an arbitrary function $f(\zeta)$. 
However, when we put the effective determinants for this model in equation ~(\ref{eq:grandexpression}) we find \begin{equation} \epsilon_3= {3\, \tilde\alpha_G \over 5 \pi} \biggl\{21 \,\log(2) + \log \abs{{\tilde{M_t} \over M_G}}\biggr\}. \label{eq:modelII} \end{equation} Remarkably, all dependence of $\epsilon_3$ on the GUT scale vevs has dropped out except for the dependence on $\tilde{M_t}$. Unfortunately, the large positive constant that appears in the expression means that in order to make $\epsilon_3$ negative, $\tilde{M_t}$ would have to be so small that the proton lifetime would lie many orders of magnitude below the experimental bound. In addition, changing the Yukawa coupling coefficients of the terms of $W_{sym\, breaking}$ cannot cure this problem. For the most part, changing the Yukawa coupling constants of one of the terms of $W_{sym\, breaking}$ has the result of multiplying the effective determinants of the mass matrices $M_{\gamma}$ in eqn. (\ref{eq:epsilon}), for states $\gamma$ in a complete SU(5) multiplet, all by the same amount. Thus, this multiplicative factor has no effect on $\epsilon_3$. Hence, as a consequence of (\ref{eq:modelII}), we find that the symmetry breaking sector of the model of paper II is ruled out by the dual constraints coming from the low energy measurement of $\alpha_s$ and the proton lifetime. Is it possible to find a symmetry breaking sector which has all the U(1) symmetries required for the naturalness of the theory and is consistent with the constraints of $\alpha_s$ and $\tau_p$? In the next section we describe a new SO(10) SUSY GUT which satisfies all our criteria. In fact it may contain the minimal symmetry breaking sector consistent with the requirements of (1) obtaining the effective fermion mass operators of paper I, (2) ``naturalness," and (3) retaining below $M_G$ only those states which either have trivial SM charge or are contained in the minimal supersymmetric standard model [MSSM]. \section{A Complete SO(10) SUSY GUT} \subsection{The GUT symmetry breaking and Higgs sectors} For this model, \begin{eqnarray} W_{sym\, breaking}=& {1\over M} A_1' (A_1^3+{\cal S}_3 S A_1+{\cal S}_4 A_1 A_2) \\ & + A_2 (\psi \overline{\psi} + {\cal S}_1 \tilde A) + S \tilde{A}^2 \nn \\ & + S' (S {\cal S}_2+A_1 \tilde A) + {\cal S}_3 S'^2 \nn \end{eqnarray} where the fields \{$A_1,\; A_2,\; \tilde A, \; A'_1$\}, \{$S, \; S'$\}, $\psi, \; \bar\psi$, \{${\cal S}_1, \cdots , {\cal S}_4$\} are in the 45, 54, 16, $\bar{16}$ and 1 representations, respectively. [Note, traces and contractions of indices are implicit.] The supersymmetric minimization condition ${\partial W \over \partial A_1'}=0$ gives four discrete choices for the direction of the vev of $A_1$. We will assume that nature chooses the B~-~L direction. The second term gives $\tilde A$ a vev in the X direction, $A_2$ a vev in the Y direction, and $\psi$ and $\overline\psi$ vevs in the right-handed neutrino-like direction (see eqn. (\ref{eq:a1}) below). The third and fourth terms, and the ${\cal S}_4$ subterm of the first term, were added to give mass to all non-MSSM fields which are not in a singlet representation of the standard model gauge group. The last term was added in accordance with our ``naturalness" criterion, namely that the theory should be the most general one consistent with the symmetries.
The term ${\cal S}_3 S'^2$ is consistent with the U(1) and R symmetries of the symmetry breaking sector of the theory that will be discussed below, and we are aware of no other additional symmetry that might exclude this term. Therefore the term must be included. The above vevs are given by \begin{equation} \langle A_1 \rangle = a_1 \left(\begin{array}{ccccc} 1 & & & & \\ & 1 & & & \\ & & 1 & & \\ & & & 0 & \\ & & & & 0 \end{array}\right) \otimes \eta \equiv a_1 {3 \over 2} (B - L) \label{eq:a1} \end{equation} $$ \langle A_2 \rangle = a_2 \left(\begin{array}{ccccc} 1 & & & & \\ & 1 & & & \\ & & 1 & & \\ & & & -3/2 & \\ & & & & -3/2 \end{array}\right) \otimes \eta \equiv a_2 {3 \over 2} Y $$ $$ \langle \tilde{A} \rangle = \tilde{a} \left(\begin{array}{ccccc} 1 & & & &\\ & 1 & & & \\ & & 1 & & \\ & & & 1 & \\ & & & & 1 \end{array}\right) \otimes \eta \equiv \tilde{a} {1 \over 2} X $$ $$ \langle S \rangle = s \left(\begin{array}{ccccc} 1 & & & & \\ & 1 & & & \\ & & 1 & & \\ & & & -3/2 & \\ & & & & -3/2 \end{array}\right) \otimes {\bf 1} $$ $$ \langle \psi \rangle = v \, \, |{\rm SU(5) \, \, singlet} \rangle$$ $$\langle \overline{\psi} \rangle = \overline{v} \, \, |{\rm SU(5) \, \, singlet} \rangle$$ where $$ \eta = \left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right) \; , {\bf 1} = \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) \; .$$ The vacuum minimization conditions are explicitly \begin{equation} a_1^2=-s {\cal S}_3, \qquad {\cal S}_1 \tilde a = {1\over4}v \overline v \end{equation} $$s {\cal S}_2+{2\over5}a_1 \tilde a=0, \qquad a_2 {\cal S}_1+2 s \tilde a=0$$ Using these equations, the set \{$a_1, a_2, \tilde a, v, \overline v, {\cal S}_4$\} forms a complete set of independent variables. Two caveats -- $\bullet$ Note that at tree level the GUT symmetry breaking vevs are undetermined, since the potential in these directions is both supersymmetric and flat. We will not discuss the process for fixing these vevs in this paper. That analysis must necessarily include supersymmetry breaking effects as well as supergravity and radiative corrections. $\bullet$ We do not describe the source of supersymmetry breaking in this paper. Soft SUSY breaking operators are included at the GUT scale and renormalized down to the weak scale when making any comparison with the low energy data. The same doublet-triplet splitting mechanism that was used in paper II can be used in this model. Accordingly, the Higgs sector of the Lagrangian is given by \begin{equation} L_{Higgs}= [10_1 A_1 10_2 + {\cal S}_5 10^2_2 ]_F + {1 \over M} [z^* 10_1^2 ]_D \end{equation} where the first term is $W_{Higgs}$ and the second generates a $\mu$ term when the F component of the hidden sector field $z$ gets a vev of order $10^{10} GeV$. We then obtain $\mu = \langle F_z \rangle/M$. \subsubsection{U(1) and R symmetries of $W_{sym\,breaking}$ and $\epsilon_3$} $W_{sym\,breaking}$ has a [U(1)]$^4 \times$ R symmetry as is summarized in table \ref{t:u1}. Up to arbitrary Yukawa coupling coefficients assumed to be of $O(1)$, $W_{sym\,breaking}$ is the most general superspace potential consistent with these symmetries. Using $a_1, a_2, \tilde a, v, \overline v$, and ${\cal S}_4$ as independent variables, the only invariant under a [U(1)]$^4 \times$R rotation of the vevs is $\zeta={a_1^4\over a_2^2 {\cal S}_4^2}$.
After evaluating $\epsilon_3$ explicitly using equation (\ref{eq:grandexpression}) we obtain \begin{equation} {\epsilon}_3={3 \tilde\alpha_G \over 10 \pi} \biggl\{ 2 \; \log{256\over 243} - \log\abs{(1-25 \zeta)^4 \over (1-\zeta)}+2 \log \abs{{\tilde{M_t} \over M_G}} \biggr\}. \end{equation} Before we can check whether, in this model, the experimental measurement of $\alpha_s(M_Z)$ is consistent with the non-observation of proton decay, we must discuss the fermion mass sector of the theory. The proton lifetime and branching ratios depend crucially on the couplings of the color triplet Higgses to fermions. In a theory of fermion masses, these couplings are related to the Yukawa couplings of fermions to Higgs doublets, {\em but they are not identical}. For example, the dimension 5 operators responsible for proton decay are given by \begin{eqnarray} & {1 \over \tilde M_t} \; {\bf Q} {1 \over 2} C_{QQ} {\bf Q} \; {\bf Q} C_{QL} {\bf L} & \\ & {1 \over \tilde M_t} \; {\bf \bar u} C_{ue} {\bf \bar e} \; {\bf \bar u} C_{ud} {\bf \bar d} & \nonumber \end{eqnarray} where $C_{QQ}, C_{QL}, C_{ue}$ and $C_{ud}$ are 3$\times$3 complex matrices. The matrices $C_{QQ}$ and $g_u$, the up quark Yukawa matrix, contain the same independent parameters but the SO(10) Clebschs are different. An example is presented in the next section. \subsection{Fermion mass sector} We take the flavor sector of our model to be that of model 4 in paper I. Preliminary results of Blazek, Carena, Wagner and one of us (S.R.)\cite{bcrw} suggest that this model provides the best fit to the low energy data. The flavor sector is specified by a particular set of four operators $\{ O_{33}, O_{23}$, $O_{22}, O_{12} \}$. Three of these operators -- $O_{33}, O_{23}$, and $O_{12}$ -- are uniquely specified by choosing model 4. On the other hand there are 6 choices for operator $O_{22}$, labelled $(a, \cdots, f)$, as all give identical entries in the charged fermion Yukawa matrices. As a result we can construct 6 possible models $4 (a, \cdots, f)$. The four effective fermion mass operators for model $4c$ are given by \begin{eqnarray} O_{33} = & 16_3\ 10_1 \ 16_3 & \label{eq:4coper}\\ O_{23} = & 16_2 \ {A_2 \over {\tilde A}} \ 10_1 \ {A_1 \over {\tilde A}} 16_3 & \nonumber \\ O_{22} = & 16_2 \ {{\tilde A} \over {\cal S}_M} \ 10_1 \ {A_1 \over {\cal S}_M} \ 16_2 & \nonumber \\ O_{12} = & 16_1 \left( {{\tilde A}\over {\cal S}_M}\right)^3 \ 10_1 \left( {{\tilde A} \over {\cal S}_M} \right)^3 16_2 & \nonumber \end{eqnarray} The superspace potential for the complete theory above the GUT scale which reproduces model $4c$ is given by (see fig. 1) \begin{eqnarray} W_{fermion} = & & \\ &16_3 10_1 16_3 + {\bar \Psi}_1 A_1 16_3 + & {\bar \Psi}_1 {\tilde A} \Psi_1 + \Psi_1 10_1 \Psi_2 \nonumber \\ & + {\bar \Psi}_2 {\tilde A} \Psi_2 + {\bar \Psi}_2 A_2 16_2 + & {\bar \Psi}_3 A_1 16_2 \nonumber \\ & + \Psi_3 10_1 \Psi_4 + & {\cal S}_M \sum_{a=3}^9 ( {\bar \Psi}_a \Psi_a ) \nonumber \\ & + {\bar \Psi}_4 {\tilde A} 16_2 + {\bar \Psi}_5 {\tilde A} \Psi_4 + & {\bar \Psi}_6 {\tilde A} \Psi_5 \nonumber \\ & + \Psi_6 10_1 \Psi_7 + {\bar \Psi}_7 {\tilde A} \Psi_8 + & {\bar\Psi}_8 {\tilde A} \Psi_9 + {\bar \Psi}_9 {\tilde A} 16_1 \nonumber \end{eqnarray} This superpotential is consistent with the symmetries discussed previously with the addition of one new U(1) given in table ~\ref{t:u1}. However it is not the most general fermion sector consistent with these symmetries.
In fact {\em one and only one} new operator must be added \begin{equation} {\bar \Psi}_6 A_2 16_3 . \end{equation} It is easy to see that this operator leads to one new effective operator at the GUT scale when heavy states are integrated out (see fig. 2). The new operator is \begin{eqnarray} O_{13} = & 16_1 \left( {{\tilde A}\over {\cal S}_M}\right)^3 \ 10_1 \left( { A_2 \over {\cal S}_M} \right) 16_3 & .\label{eq:13oper} \end{eqnarray} The complete model 4c is thus defined with this new operator and includes the operators in eqn. (\ref{eq:4coper}) and (\ref{eq:13oper}). A fit of model 4c to the low energy data\cite{bcrw} for certain ranges of soft SUSY breaking parameters agrees to better than $1 \sigma$ for all observables. A global $\chi^2$ analysis of models 4, 6 (a - f) (paper I) for all regions of parameter space is now underway\cite{bcrw}. It is already clear, however, that an additional operator such as the 13 operator in eqn. (\ref{eq:13oper}) is absolutely necessary to fit the data. Whether this model fits the data better than any other remains to be seen. For example, a different choice of 22 operator results in a different U(1) symmetry and thus by ``naturalness," a distinct theory. In particular, we have checked that for model 4b there are no new effective fermion mass operators generated, while for model 4a the new 13 operator \begin{eqnarray} O_{13} = & 16_1 \left( {{\tilde A}\over {\cal S}_M}\right)^3 \ 10_1 \left( {\tilde A A_2 \over {\cal S}^2_M} \right) 16_3 & \end{eqnarray} is needed. Finally, consider the matrices relevant for proton decay for the case of model 4c. In particular, the matrix $C_{QQ}$ is given by \begin{eqnarray} C_{QQ} & = \left(\begin{array}{ccc} 0 & C & {1 \over 3} D e^{i\delta} \\ C & -{1 \over 2} E e^{i \phi} & { 1 \over 3} B \\ {1 \over 3} D e^{i\delta} & { 1 \over 3} B & A \end{array} \right) &. \end{eqnarray} This should be compared with the up quark Yukawa matrix in the same model given by \begin{eqnarray} g_u & = \left(\begin{array}{ccc} 0 & C & {1 \over 3} D e^{i\delta} \\ C & 0 & -{4 \over 3} B \\ -{4 \over 3} D e^{i\delta} & -{ 1 \over 3} B & A \end{array} \right) &. \end{eqnarray} where the 7 Yukawa parameters $A,\;B,\;C,\;D,\;E, \; \phi$ and $\delta$ are obtained by fitting to quark and lepton masses and the CKM mixing angles\cite{bcrw}. Note that the Clebschs in the two matrices differ by as much as a factor of 4. These Clebschs affect proton decay branching ratios. It is thus important to calculate the branching ratios in models which are consistent with the observed fermion masses and mixing angles. \subsubsection{Additional threshold corrections at $M_G$} The dominant effective operators at the scale $M_G$ are obtained by integrating out states with mass greater than $M_G$. Higher order corrections to these operators are also obtained. These typically lead to $O(10\%)$ corrections to the leading terms in the Yukawa matrices. Of course the terms in the Yukawa matrices will also receive corrections at one loop. We have neglected these corrections in the following analysis. \subsubsection{Neutrino masses} For completeness we include a minimal neutrino mass sector \begin{equation} W_{neutrino} = \bar{\psi} \; \sum_{i= 1}^3 \;16_i \; N_i \end{equation} where the states $N_i, \; i=1, \cdots, 3$ are SO(10) singlets. This term has the effect of giving GUT scale Dirac masses to the right-handed neutrinos in the 16's. Thus, the SM left-handed neutrinos are absolutely massless in this theory. 
If necessary, left-handed neutrinos can be given masses as described in paper II\cite{hr}. However, in order to do so we must either break one linear combination of the U(1)s or introduce additional SO(10) singlets. In either case, we must then check for ``naturalness" and add any operator allowed by the symmetries of the theory. \subsection{Symmetries} This theory has 5 global U(1) symmetries and a global continuous R symmetry. The charges of most of the states are given in table \ref{t:u1}. The charges of the other states can easily be derived. We have checked our theory for ``naturalness." We find that only 3 additional operators need to be added to the superpotential --$$\bar\psi_2 A'_1 \psi_1 {\cal S}_2, \;\; \bar\psi_2 A'_1 16_3 {\cal S}_3, \;\; \bar\psi_5 A'_1 \psi_3 {\cal S}_3.$$ These three operators have no direct effect on any observable properties since the vev of $A'_1$ vanishes. With the inclusion of these three operators the total superspace potential for model 4c is ``natural" (i.e. no additional operators consistent with these symmetries are allowed) {\em for all possible powers of the fields}. In addition the theory has a matter reflection symmetry (see Dimopoulos and Georgi, ref.~\cite{DRW}) which forbids dimension 4 baryon or lepton number violating operators. Some problems, however, remain to be solved in our model. Given the states and symmetries, we find that $f_{\alpha \beta}$, the coefficient of the general gauge kinetic term, is trivial in this model. Thus we have not explicitly included the sector of the theory which generates gaugino masses once SUSY is broken. In addition, the U(1) symmetries of the theory are not sufficient to significantly constrain the Kahler potential. For example, terms such as $${ 1 \over M^2} \; \psi_4^* \;{\cal S}_M^* \; \tilde A \; 16_2$$ are allowed which mix light generations with heavy states. This term (and others like it) is allowed since it already appears in the structure of the Feynman diagrams of fig. 1. These off diagonal terms in the Kahler potential can affect fermion mass operators as well as introduce flavor changing neutral current processes at low energies\cite{hkr}. When deriving the effective theory at $M_G$ we have implicitly assumed that the Kahler potential is universal for all 16s in the theory. \section{Results for $\alpha_s(M_Z)$ vs. Proton Decay in model 4c} We now check whether the new model is consistent with $\alpha_s$ and the proton lifetime. We have calculated the proton decay rate for the three dominant modes --- $K^+ \bar{\nu},$ $\pi^+ \bar{\nu},$ $ K^0 \mu^+$ where for neutrino modes we sum over the three neutrino species. We have included both gluino and chargino loops, as well as LLLL, LLRR and RRRR operators in our analysis, where L(R) refers to left-handed (right-handed) fermion fields. We used values for the dimensionless Yukawa parameters at $M_G$ which give predictions for fermion masses and mixing angles in excellent agreement with the data\cite{bcrw}. The values for soft SUSY breaking parameters are also consistent with electroweak symmetry breaking and the experimental measurement of $b \rightarrow s + \gamma$. We renormalized the dimensionless (dimensionful) parameters at two (one) loops to low energies in order to make contact with the data. A detailed report on this work is in preparation\cite{lr}. 
For the calculation presented here we take the effective Higgs triplet mass, $\tilde{M_t} = a_1^2/{\cal S}_5 = 4 \times 10^{19} GeV$ with $a_1 = M_G \approx 2\times10^{16} GeV$ and ${\cal S}_5 = 10^{13} GeV$. This corresponds to light Higgs doublets with mass $10^{13} GeV$. Is it natural to have such light Higgs doublets? Are we populating the GUT desert? To address this question we should compare the Higgs doublet mass with the spectrum of masses for states in the symmetry breaking sector of the theory. These in fact range from $10^{13} - 10^{20} GeV$. Thus we have taken the doublet mass to lie at the lower bound of this GUT scale spectrum. This seems to be the only natural criterion for setting an upper bound on $\tilde{M_t}$. The results for proton decay are given in table \ref{t:pdecay} for two different values of soft SUSY breaking parameters and dimensionless couplings\cite{bcrw}. For comparison we also present the branching ratios obtained in a generic minimal SU(5) SUSY GUT from Hisano et al.\cite{PDECAY}. The soft SUSY breaking parameters at the GUT scale are for (case A) $M_{1/2} = 250, m_0 = 750, \mu = 64, m_{H_u} = 904, m_{H_d} = 1200 $ GeV and (case B) $M_{1/2} = 100, m_0 = 3000, \mu = 322, m_{H_u} = 4200, m_{H_d} = 3150 $ GeV with, for both cases, $A_0 =0$ and $\tan\beta = 53$. The experimental lower bound on the proton lifetime into the mode $K^+ \bar{\nu}$ is $10^{32}$ years\cite{kam}. Thus these values are consistent with the non-observation of proton decay to date. However, these values are to be considered as {\em upper limits} on the proton lifetime. In particular, $\tau_p\; Br^{-1}(p \rightarrow K^+ \bar{\nu})$ scales as $({\tilde{M_t} \over 10^{19} GeV}{0.003 ~GeV^3\over \beta})^2$, where $\beta$, with values in the accepted range $\beta = (0.003 - 0.03) GeV^3$\cite{beta}, is a measure of the matrix element of a 3 quark operator between the proton state and the vacuum. Are these upper bounds on proton decay consistent with the experimental measurement of $\alpha_s(M_Z)$? We have evaluated the expression for $\epsilon_3$ including the additional states with GUT scale masses contained in the fermion mass sector of the theory. These typically shift the value of $\epsilon_3$ by a small amount in the positive direction. We find that for typical values of $a_1, a_2,$ and ${\cal S}_4$ around the GUT scale, we can obtain $\epsilon_3$ negative for $\tilde{M_t}$ of order $10^{19}$ GeV. For example, with $a_1 = 2 a_2 = 2 {\cal S}_4 = {\tilde a \over 3} = M_G $ we have $\zeta = 16$ and $\epsilon_3 \approx -0.030$, which includes a positive contribution of $0.005$ from the fermion mass sector. We have also checked that for these values of the parameters the gauge coupling satisfies one loop perturbativity up to $M = 10^{18}$ GeV with $\alpha_G(M) = 0.39$. Note that above the GUT scale we use the threshold boundary condition $$\alpha_G^{-1}(M_G) = \tilde\alpha_G^{-1} + \Delta_2(M_G).$$
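As an illustrative numerical cross-check (ours), the symmetry breaking contribution to $\epsilon_3$ can be evaluated directly from the expression given in the previous section; $\tilde\alpha_G$ is an assumed input, and the quoted $\epsilon_3 \approx -0.030$ also includes the $+0.005$ fermion mass sector shift:

\begin{verbatim}
import math

def eps3_sym_breaking(zeta, Mt_over_MG, alphaG):
    bracket = (2 * math.log(256 / 243)
               - math.log(abs((1 - 25 * zeta) ** 4 / (1 - zeta)))
               + 2 * math.log(Mt_over_MG))
    return 3 * alphaG / (10 * math.pi) * bracket

# zeta = a1^4/(a2^2 S4^2) = 16 for a1 = 2 a2 = 2 S4;
# Mt/MG = 4e19/2e16 = 2000.  Larger alphaG makes eps3 more negative.
for alphaG in (1 / 24.0, 1 / 16.0):
    print(alphaG, eps3_sym_breaking(16, 2.0e3, alphaG))
\end{verbatim}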
The one loop threshold corrections relate the prediction for $\alpha_s(M_Z)$ to the proton lifetime. Finally, we have calculated one loop GUT scale threshold corrections to gauge couplings and proton decay rates to different final states in the new model. The results are consistent with the low energy measurement of $\alpha_s(M_Z)$. The proton decay branching ratios provide a powerful test for theories of fermion masses. There is reasonable hope that new results from SuperKamiokande or Icarus could confirm or rule out these theories. It would be presumptuous of us to conclude without discussing some of the open questions. We have not discussed the origin of SUSY breaking nor how it feeds into the visible sector, with the one exception of the $\mu$ term, which we have nominally considered. We also do not discuss what determines the GUT vevs. At tree level, in the globally supersymmetric theory considered here and neglecting SUSY breaking, these are flat directions of the potential. Finally, and perhaps most seriously, the symmetries discussed in this paper do not significantly constrain the Kahler potential. Flavor mixing in the Kahler potential could lead to dangerous flavor changing neutral current processes. We have {\em assumed} the trivial universal Kahler potential in our analysis. \acknowledgments We would like to thank Tom\'{a}\v{s} Bla\v{z}ek, Marcela Carena and Carlos Wagner for letting us use the results of work in progress on a general $\chi^2$ analysis of fermion masses. This research was supported in part by the U.S. Department of Energy contract DE-ER-01545-640. \newpage \begin{center} {\bf APPENDIX 1} \end{center} {\bf Proof of equation (\ref{eq:epsilon})} We define the GUT scale $M_G$ as the point where $\tilde\alpha_G \equiv \alpha_1(M_G)=\alpha_2(M_G)$. This means that $\Delta_1(M_G)=\Delta_2(M_G)$. We then define the relative shift in $\alpha_3(M_G)$ by \begin{eqnarray} \epsilon_3 & \equiv & (\alpha_3(M_G)- \tilde\alpha_G)/\tilde\alpha_G \\ & = & \alpha_3(M_G) \,(\Delta_3-\Delta_1 |_{M_G}) \nonumber \end{eqnarray} We then have \begin{equation} \Delta_3-\Delta_1|_{M_G} = {1 \over 2 \pi} (\sum_\gamma (b_3^\gamma-b_1^\gamma) \log \abs{{\det}' M_\gamma} - \sum_\gamma (b_3^\gamma-b_1^\gamma) n_\gamma \log M_G) \end{equation} where $n_\gamma=\hbox{the mass dimension of }{\det}' M_\gamma$ and ${\det}' M_\gamma$ is defined in eqn. (\ref{eq:effdet}), except for ${\det}' M_d$ where it will be convenient to define $\tilde M_d$ by $\det \tilde M_d=M_G \,{\det} M'_d$, and ${\det}'M_d = {\det} \tilde M_d$. This redefinition does not affect eqn. (\ref{eq:epsilon}). Note, the matrix $\tilde M_d$ is defined such that $n_t = n_d$. In addition, $$\Delta_1 |_{M_G}=\Delta_2 |_{M_G}$$ which implies \begin{equation} \sum_\gamma (b_1^\gamma-b_2^\gamma) \log\abs{{\det}' M_\gamma} = (\sum_\gamma (b_1^\gamma-b_2^\gamma) n_\gamma) \log M_G \end{equation} Substituting for $\log M_G$ we obtain \begin{eqnarray} \epsilon_3 & \approx & {\tilde\alpha_G \over2\pi} \bigl\{ \sum_\gamma (b_3^\gamma-b_1^\gamma) \log \abs{{\det}' M_\gamma} - {{\sum_\gamma (b_3^\gamma-b_1^\gamma) n_\gamma} \over {\sum_\gamma (b_1^\gamma-b_2^\gamma) n_\gamma}} \sum_\gamma (b_1^\gamma-b_2^\gamma) \log \abs{{\det}' M_\gamma} \bigr\} \nonumber\\ & = & {1 \over 2 \pi} {1 \over c_{12}} \sum_\gamma(b_1^\gamma c_{23}+b_2^\gamma c_{31}+b_3^\gamma c_{12}) \log \abs{{\det}' M_\gamma} \label{eq:epsint} \end{eqnarray} where $c_{ij}=\sum_\gamma (b_i^\gamma-b_j^\gamma) n_\gamma$.
To evaluate the $c_{ij}$s, define $n_{54}$, $n_{45}$ and $n_{10}$ to be the number of 54, 45, and 10 representations in the theory, respectively, and $n_{16}$ and $n_{\overline{16}}$ to be the number of supermassive 16 and $\overline{16}$ representations, respectively. For any SO(10) model built with only 1, 10, 16, $\overline{16}$, 45, and 54 representations and one pair of light Higgs doublets, we have $$\begin{array}{rcl} n_{16} &=& n_{\overline{16}} \\ n_g = n_w &=& n_{45}+n_{54} \\ n_x &=& n_{45}+n_{54}-3 \\ n_u = n_e &=& n_{45}+n_{16}-3 \\ n_s = n_\sigma &=& n_{54} \\ n_q &=& n_{45}+n_{54}+n_{16}-3 \\ n_d = n_t &=& n_{10}+n_{16} \end{array} $$ Evaluating $c_{23}$ explicitly, \begin{equation} \begin{array}{rcl} c_{23}&=&(4 n_s+3 n_q+2 n_w+3 n_x+n_d)-(5 n_s+2 n_q+n_u+3 n_g+2 n_x+n_t) \\ &=& \{4 (n_{54})+3 (n_{54}+n_{45}+n_{16}-3)+2(n_{54}+n_{45})+\\ && 3(n_{54}+n_{45}-3)+(n_{10}+n_{16}) \\ && - [5 (n_{54})+2(n_{54}+n_{45}+n_{16}-3)+(n_{45}+n_{16}-3)+3(n_{54}+n_{45})+\\ &&2(n_{54}+n_{45}-3)+(n_{10}+n_{16})]\} \\ &=& -3 \end{array} \label{eq:c23} \end{equation} Similarly \begin{eqnarray} c_{31}=9 \label{eq:c31} \\ c_{12}=-6 \nn \end{eqnarray} Thus, the $c_{ij}$s are completely independent of the number of fields in any theory built with 1s, 10s, 16s, $\overline{16}$s, 45s, and 54s. Plugging eqns. (\ref{eq:c23}) and (\ref{eq:c31}) into eqn. (\ref{eq:epsint}), eqn. (\ref{eq:epsilon}) readily follows. \begin{center} {\bf APPENDIX 2} \end{center} {\bf U(1) symmetries and the dependence of $\epsilon_3$ on GUT scale vevs} In this appendix we prove that the contribution to $\epsilon_3$ from the GUT symmetry breaking sector is only a function of U(1) and R invariant products of powers of vevs. The proof relies on two facts: \begin{enumerate} \item the effective determinants of mass matrices are holomorphic functions of the symmetry breaking vevs, and \item the effective determinants have simple phase rotations under U(1) and R symmetry transformations. \end{enumerate} Note, since the effective determinants are independent of the conjugates of vevs, the U(1) and R invariance of $\epsilon_3$ is very restrictive. We first discuss the case for mass matrices which do {\em not} include vector multiplets, followed immediately by the case for mass matrices including vector multiplets. Note, in the first case the mass matrices themselves are, by construction, holomorphic functions of vevs. This is, however, not true for the latter case, which is why it requires a separate discussion. Consider a general superspace potential $W(\Phi_1, \Phi_2, \ldots, \Phi_N)$ whose superfields rotate under a U(1) symmetry as $$\Phi_j \mathrel{{\mathop\to^\theta}} e^{i Q_j \theta} \Phi_j$$ By defining the shifted fields $\hat \Phi_i \equiv \Phi_i-\langle\Phi_i\rangle$ and expanding the superspace potential about $\langle\Phi\rangle$ we can find the fermion mass matrices. \begin{equation} W(\hat\Phi_1+\langle\Phi_1\rangle,\ldots)= \ldots+\sum_\gamma \psi_\gamma m_\gamma(\langle\Phi_1\rangle,\langle\Phi_2\rangle,\ldots) \psi_\gamma + \ldots \end{equation} Now consider what would happen if the superfields received a different vacuum expectation value, $$\langle \Phi_j \rangle^{\hbox{\tiny new}} = e^{i Q_j \theta} \langle \Phi_j \rangle^{\hbox{\tiny old}}$$ Under this change, the mass matrices would change.
\begin{equation} W(\hat\Phi_1+e^{i Q_1 \theta} \langle \Phi_1 \rangle,\ldots)=\ldots+\sum_\gamma \psi_\gamma m_\gamma^\theta \psi_\gamma+\ldots \end{equation} where $$m_\gamma^\theta \equiv m_\gamma(e^{i Q_1 \theta} \langle \Phi_1 \rangle, e^{i Q_2 \theta} \langle \Phi_2 \rangle, \ldots)$$ However, if we rotate the shifted fields $\hat\Phi_j$ by $e^{i Q_j \theta}$, the superspace potential will be invariant under the combined rotation of $\hat\Phi$ and $\langle \Phi \rangle$. \begin{equation} \begin{array}{l} W(e^{i Q_1 \theta} \hat\Phi_1+e^{i Q_1 \theta} \langle \Phi_1 \rangle, \ldots, e^{i Q_N \theta} \hat\Phi_N+e^{i Q_N \theta} \langle \Phi_N \rangle) \\ \\ = \ldots+\sum_\gamma \psi_\gamma \pmatrix{e^{i Q_1 \theta} \cr & e^{i Q_2 \theta} \cr && \ddots} m_\gamma^\theta \pmatrix{e^{i Q_1 \theta} \cr & e^{i Q_2 \theta} \cr && \ddots} \psi_\gamma +\ldots \\ \\ = W(\hat\Phi_1+\langle \Phi_1 \rangle, \ldots, \hat\Phi_N+\langle \Phi_N \rangle) \\ = \ldots+\sum_\gamma \psi_\gamma \ m_\gamma \ \psi_\gamma+\ldots \end{array} \end{equation} Therefore, $m_\gamma^\theta$ rotates in a very simple way. \begin{equation} m_\gamma^\theta=\pmatrix{e^{-i Q_1 \theta} \cr & e^{-i Q_2 \theta} \cr && \ddots} m_\gamma \pmatrix{e^{-i Q_1 \theta} \cr & e^{-i Q_2 \theta} \cr && \ddots} \end{equation} Thus, \begin{equation} \det m_\gamma^\theta=e^{-2 i \theta \sum Q} \det m_\gamma \label{eq:u1charge} \end{equation} where the sum is over all fields that have columns in the mass matrix. Similar arguments can show that under an R symmetry rotation, $\langle \Phi \rangle \to e^{i Q \theta} \langle \Phi \rangle$, $m_\gamma^\theta$ is equal to \begin{equation} e^{i Q_W \theta} \pmatrix{e^{-i Q_1 \theta} \cr & e^{-i Q_2 \theta} \cr && \ddots} m_\gamma \pmatrix{e^{-i Q_1 \theta} \cr & e^{-i Q_2 \theta} \cr && \ddots} \end{equation} where $Q_W$ is the charge of the superspace potential under the R symmetry. Therefore, \begin{equation} \det m_\gamma^\theta= e^{i Q_W N \theta-2 i \theta \sum Q} \det m_\gamma \label{eq:rcharge} \end{equation} where $N=\dim m_\gamma$. The situation is a bit more complicated for mass matrices which receive contributions from vector multiplets. The proof that the determinants and hence the effective determinants have simple phase rotations under the U(1) and R symmetry transformations can readily be extended to these mass matrices. However, since the entries in the gaugino-chiral fermion mixing rows are actually the complex conjugates of vevs, the determinants of these matrices are not holomorphic. However, in the following we prove that the {\em effective determinants} of these mass matrices, which include vector multiplets, {\em are} in fact holomorphic functions of the vevs. Since the would-be Goldstone fermion states are perpendicular to the massive chiral fermion states, any mass matrix containing gaugino-chiral fermion mixing can be written in the form \begin{equation} \pmatrix{ 0 & x_1^*&x_2^*&x_3^*&\cdots&x_N^* \cr \overline x_1^* & c_{11}&c_{12}&c_{13}&\cdots&c_{1N} \cr \overline x_2^* & c_{21}&c_{22}&c_{23}&\cdots&c_{2N} \cr \overline x_3^* & c_{31}&c_{32}&c_{33}&\cdots&c_{3N} \cr \vdots &\vdots&\vdots&\vdots& \ddots & \vdots \cr \overline x_N^* & c_{N1}&c_{N2}&c_{N3}&\cdots&c_{NN} \cr} \end{equation} with $\sum_j c_{ij} x_j=0 \hbox{ for all }i$, $\sum_i c_{ij} \bar{x}_i=0 \hbox{ for all }j,$ and the $c_{ij}$s, $x$s, and $\overline x$s are functions of the vevs but not of their conjugates.
The rows and columns of the mass matrix can be rearranged so that $x_N$ and $\overline x_N$ are not zero. However, the determinant of this matrix can be reduced by elementary row and column operations. Namely, by adding to the last column the second column multiplied by $x_1\over x_N$ plus the third column multiplied by $x_2\over x_N$, and so forth, the determinant of the mass matrix becomes $$\left| \matrix{ 0 & x_1^*&x_2^*&\cdots&x_{N-1}^*&x_N^*+{x_1^* x_1\over x_N}+\ldots+{x_{N-1}^* x_{N-1}\over x_N} \cr \overline x_1^* & c_{11}&c_{12}&\cdots&c_{1,N-1}&0 \cr \overline x_2^* & c_{21}&c_{22}&\cdots&c_{2,N-1}&0 \cr \overline x_3^* & c_{31}&c_{32}&\cdots&c_{3,N-1}&0 \cr \vdots &\vdots&\vdots&\ddots & \vdots&\vdots \cr \overline x_N^* & c_{N1}&c_{N2}&\cdots&c_{N,N-1}&0 \cr } \right|$$ Doing the analogous operation on the rows, the determinant becomes $$\left| \matrix{ 0 & x_1^*&x_2^*&\cdots&x_{N-1}^*&{1\over x_N}\sum_k^N x_k^* x_k \cr \overline x_1^* & c_{11}&c_{12}&\cdots&c_{1,N-1}&0 \cr \overline x_2^* & c_{21}&c_{22}&\cdots&c_{2,N-1}&0 \cr \vdots &\vdots&\vdots&\ddots&\vdots & \vdots \cr \overline x_{N-1}^* & c_{N-1,1}&c_{N-1,2}&\cdots&c_{N-1,N-1}&0 \cr {1\over\overline x_N}\sum_k^N \overline x_k^* \overline x_k & 0& 0& \cdots&0&0 \cr } \right|$$ Thus, the determinant is equal to \begin{eqnarray} {\sum_k^N x_k^* x_k \over x_N}{\sum_k^N \overline x_k^* \overline x_k \over \overline x_N} \left| \matrix{ c_{11} & \cdots & c_{1,N-1} \cr \vdots & \ddots & \vdots \cr c_{N-1,1} & \cdots & c_{N-1,N-1} } \right| \\ \nn \\ = (\sum_k^N x_k^* x_k) (\sum_k^N \overline x_k^* \overline x_k) f({\rm vevs}) \nn \end{eqnarray} where $f$ is a holomorphic function of the vevs. By setting $\overline v=v$, $\overline x_i$ will equal $x_i \hbox{ for all }i$ and the mass of the vector multiplet is ${\sqrt{\sum_k^N x_k^* x_k}}$. Therefore, the determinant is equal to $M_{vector\,multiplet}^4$ times $f$ and the effective determinant is just the function $f$. Note, $M_{vector\,multiplet}$ is thus always canceled from the denominator of the effective determinant (eqn. \ref{eq:epsilon}) and no conjugates of vevs can appear in the effective determinant. Thus, the effective determinants for mass matrices containing gauginos are holomorphic functions of the vevs, just as those discussed earlier for mass matrices which do not have gaugino-chiral fermion mixing entries. These simple transformation properties of the mass matrices, under U(1) rotations of the vevs, have significant consequences for the form of ${\epsilon}_3$. Consider the following expression entering ${\epsilon}_3$ (see eqn. \ref{grandexpression}) --- \begin{equation} {3\over 2}\log{{\det}'{M_g \over M_G}} - {3\over2}\log{{\det}'{M_w \over M_G}} +{33\over10}\log{{\det}'{M_s \over M_G}} -{21\over10}\log{{\det}'{M_\sigma \over M_G}} \label{grandexpression} \end{equation} $$+{9\over10}\log{{\det}'{M_u \over M_G}} +{3 \over 10}\log{{\det}'{M_e \over M_G}} - {6\over5}\log{{\det}'{M_q \over M_G}}$$ It is now easy to show that it is invariant under the U(1) and R symmetries. Namely, the U(1) charge of the determinant of $M_w$ (eqn. \ref{eq:u1charge}) is equal to the charge of the determinant of $M_g$, which is equal to $-2$ times the sum of U(1) charges of all fields in the 24 representation of SU(5). Therefore, the U(1) rotation of ${\det} \,M_g$ will cancel the rotation of ${\det}\, M_w$ in expression, eqn. (\ref{grandexpression}).
In addition, we note that the U(1) charges of the effective determinants of \{$M_s$, $M_\sigma$\}, \{$M_u$, $M_e$\}, and $M_q$ equal $-1$ times the sum of U(1) charges of all fields in the \{15 and $\overline{15}$\}; \{10 and $\overline{10}$\}; and \{10, 15, $\overline{10}$ and $\overline{15}$\} representations of SU(5), respectively. Therefore, the U(1) rotation of ${\det}' M_q$ is canceled by the rotations of the effective determinants of $M_u, M_e, M_s,$ and $M_\sigma$ in expression, eqn. (\ref{grandexpression}). Similar arguments show that the expression in eqn. (\ref{grandexpression}) is invariant under an R symmetry rotation. Thus we finally arrive at the conclusion that the expression in eqn. (\ref{grandexpression}) is invariant under the U(1) and R symmetries of $W_{sym.\,breaking}$ for any superspace potential built with 1, 10, 16, $\overline{16}$, 45, and 54 representations of SO(10). Moreover, since the expression in eqn. (\ref{grandexpression}) is holomorphic, the contribution of the GUT symmetry breaking sector to $\epsilon_3$ is only a function of U(1) and R invariant products of powers of vevs. The same is not true for the contributions from either the Higgs or fermion mass sectors. This is because both the Higgs and fermion mass sectors contain massless states that must be projected out of the mass matrices before the effective determinants are taken. After this projection, the determinants of the resulting mass matrices are no longer holomorphic functions of the vevs. Nevertheless, for the Higgs sector we can prove a similar but limited result, namely, that $g \equiv {{\det} M[t] \over {\det} M[d]} = g(\zeta_1, \cdots, \zeta_m)$; i.e., $g$ is a function of U(1) invariants only. By eqns. (\ref{eq:u1charge}) and (\ref{eq:rcharge}), we see that ${\det} M[t]$ and ${\det} M[d]$ transform in the same way under the U(1) and R symmetries; therefore the ratio is invariant.
\section{Introduction} With rapid advances, quantum computing is becoming a critical technology attracting significant investment at the global level. In terms of quantum hardware development, IBM is one of the leading companies, having released the world's most powerful 127-qubit quantum computer, based on superconducting technologies, in 2021 \cite{ibm127qubit}. They also have a promising roadmap to develop a quantum computer with 1,121 qubits by 2023 \cite{ibmroadmap}. Apart from IBM, many other major companies, such as Microsoft, Google, DWave, Rigetti, IonQ, and several research groups worldwide, are also working towards building a large-scale quantum computer with fault-tolerant error correction capabilities \cite{quantumsurvey-raj}. They strive to make quantum computing trustworthy enough to tackle tasks that are computationally intractable for classical supercomputers. Therefore, these rapid advancements in quantum hardware trigger more investments in quantum software engineering and quantum algorithm development to maximize the practical use of quantum computers. There is now substantial evidence that quantum computers can solve many complex problems that are challenging to tackle with classical supercomputers, ranging from chemistry problems \cite{Kandala2017Hardware-efficientMagnets} to machine learning \cite{Biamonte2017QuantumLearning}, cryptography \cite{Quan2021AVerification}, and finances \cite{Griffin2021QuantumFinance}. Some notable algorithms have been proposed in the last few decades, such as Deutsch-Jozsa's \cite{djalgo}, Shor's \cite{shoralgo}, and Grover's \cite{groveralgo}. We have also witnessed highly sophisticated quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) \cite{qaoa} and the Variational Quantum Eigensolver (VQE) \cite{vqe} in recent years. These algorithms have been directly applied to problems of practical relevance, albeit at the proof-of-concept level due to hardware limitations. In terms of quantum software engineering, the number of new quantum programming languages, software development kits (SDKs), and platforms has been growing rapidly. Currently, a user can develop quantum applications using popular SDKs and languages like Qiskit \cite{qiskit}, Cirq \cite{cirq}, Q\# \cite{qsharp}, and Braket \cite{amz-braket-sdk}. Afterward, those quantum applications can be compiled and run on a quantum simulator or sent to a physical quantum computer via cloud-based services such as IBM Quantum \cite{ibmq} and Amazon Braket \cite{amazonbraket}. However, quantum software engineering and quantum cloud computing are still confronting many challenges, and some of these are discussed in the next sections. \subsection{Challenges for quantum software engineering} Quantum software engineering is a rapidly developing emerging area, and there are many open challenges. First, the development of quantum applications is time-consuming for software engineers, mainly because of the requirement of prior quantum knowledge. Quantum programming is underpinned by the principles of quantum mechanics, which are quite different from those of traditional computing models. Therefore, a quantum programmer must overcome the hurdle of learning quantum mechanics to develop applications. A basic example is the difference in the fundamental unit: a classical bit has two states, 0 and 1, whereas a quantum bit (qubit) could also be placed in a \textit{"superposition"} state, i.e., a combination of 0 and 1 simultaneously \cite{qbook-nielsenchuang}.
Second, quantum computing is a promising way to solve several tasks that are intractable even for a classical supercomputer, but it may not entirely replace classical computers. In other words, there are many tasks in which both quantum and classical approaches could have the same performance, such as performing a simple calculation. While classical solutions are still dominant in today's industry, it is challenging to decide which approach is more suitable when transferring existing classical systems to quantum: whether to replace the entire system with quantum or integrate quantum into the already well-established classical system \cite{Grossi2021AComputing}. Besides, the variety of current quantum SDKs and the heterogeneous quantum technologies could confuse software engineers in picking an appropriate technique for their software. Each SDK and language has different environment configuration requirements, syntax, and methods to connect with the quantum simulator or quantum computer. Additionally, there is no well-known standardization or life cycle in quantum software engineering similar to practices like Agile and DevOps in the traditional realm \cite{Weder2021QuantumLifecycle}. Recently, numerous solutions have been proposed to eliminate these burdens and accelerate quantum software development \cite{quantumsurvey-raj}. Many studies have focused on developing new quantum SDKs \cite{qiskit, cirq, Rigetti2021ForestFramework}, quantum programming languages \cite{qsharp, Fu2021Quingo:Features, Mccaskey2021ExtendingComputing}, and platforms \cite{quantumpath}. However, few studies \cite{Dreher2019PrototypeDevelopment} have considered the potential approach of leveraging modern classical techniques and computation models in the quantum realm, which has motivated us to contribute to this research. \subsection{Challenges for quantum cloud computing} The most widely adopted way to access today’s quantum computers is through a cloud service from external vendors, such as IBM Quantum \cite{ibmq}, Amazon Braket \cite{amazonbraket}, Azure Quantum \cite{azurequantum}, and Google Quantum Computing Service \cite{googleqcs}. However, the difference between quantum and classical approaches poses many challenges for the quantum cloud computing paradigm. First, we cannot permanently deploy a quantum application on a cloud-based quantum computer as a service that end-users invoke many times, as we can with traditional applications. Instead, we need to build a suitable quantum circuit for our application, deploy it to a quantum computer, and wait for the execution result \cite{Garcia-Alonso2022QuantumGateway}. Additionally, quantum cloud providers and customers need to establish a win-win paradigm to maximize quantum advantages while optimizing the budget and the resources. The current pay-per-use pricing model offered by cloud vendors such as Amazon Braket \cite{amazonbraket} needs to go along with a computing model like serverless, a trending model for the classical cloud, to balance the benefits of both parties. An example of this approach is the concept of Quantum Serverless \cite{Johnson2021QuantumServerless}, proposed by IBM at their Quantum Summit 2021, alongside the development of Qiskit Runtime \cite{qiskitruntime}. However, by sticking to a specific quantum vendor and technology, we could encounter another well-known challenge: the \textit{"vendor lock-in"} problem \cite{Hassan2021SurveyComputing}.
Therefore, an effort to make a universal quantum serverless platform, working with multiple quantum SDKs and providers, is another pivotal inspiration for our proposed work. \subsection{Contributions} In order to address and mitigate the challenges highlighted above, we propose QFaaS \textit{(\textbf{Q}uantum \textbf{F}unction-\textbf{a}s-\textbf{a}-\textbf{S}ervice)}, a universal quantum serverless framework, which offers the function-as-a-service deployment model for quantum computing. Our framework could ease the quantum software development process, enabling traditional software engineers to quickly adapt to the quantum transition while continuing to utilize their familiar models and techniques. The key contributions of our proposed research are as follows: \begin{itemize} \item We design a novel framework for developing quantum functions as a service, supporting popular quantum SDKs and languages, including Qiskit, Cirq, Q\#, and Braket, to perform the computation on classical computers, quantum simulators, and quantum computers provided by multiple vendors (IBM Quantum and Amazon Braket). \item We evaluate the suitability, conduct empirical investigations, and apply state-of-the-art classical technologies and models, such as containerization, GitOps, and function-as-a-service, to quantum software engineering. By leveraging Docker containers with Kubernetes as the underlying technology, our framework is portable and scalable for further migration or expansion to a large-scale system. \item We utilize DevOps techniques in operating QFaaS, including continuous integration and continuous deployment, which help automate the quantum software development cycle, from quantum environment setup to hybrid quantum-classical function deployment. \item We introduce a unified 6-stage life cycle for a quantum function, comprising function development, deployment, pre-processing, backend selection, quantum execution, and post-processing. This life cycle provides a baseline for a quantum software engineer to plan and organize their software development process. \item We propose two operation workflows for both kinds of users, quantum software engineers and end-users, to utilize our framework for hybrid quantum-classical applications. The end-users can access the deployed function as a service through the QFaaS API (Application Programming Interface) gateway. Our framework also provides multiple ways for users to interact with the core components, including the QFaaS Dashboard (a modern web-based application), the QFaaS CLI (an interactive command-line tool), and the QFaaS Core APIs. \item We have implemented two application use cases with QFaaS to validate our proposed design and demonstrate how our framework can facilitate quantum software development in practice. We also conduct a set of benchmark tests to evaluate the performance of our framework and offer insight into the current status of today's quantum computers and simulators. This paper proposes the framework and its essential implementation, but we have a viable plan to add further functionality to the QFaaS platform. Ultimately, our framework is expected to be a universal environment for designing advanced practical quantum-classical applications. \end{itemize} The rest of the paper is organized as follows: After introducing the fundamentals of quantum computing, Section 2 presents the current state of quantum software development, the quantum computing as a service (QCaaS) model, and serverless quantum computing.
Section 3 discusses the related work and briefly compares our framework's benefits with existing work. Section 4 introduces the details of the QFaaS framework, including the design principles, principal components, the structure and life cycle of a quantum function, and the operation workflow of our framework. Section 5 describes the design and implementation of QFaaS core components and functions. Then, Section 6 demonstrates the operation of QFaaS in two use cases and its performance. Following the discussion of the advantages of our framework for software engineering in Section 7, we conclude and present our plan for future work in Section 8. \section{Background} \subsection{The Fundamentals of Quantum Computing} This section briefly summarizes several essential characteristics and building blocks of gate-based quantum computing before diving into the state-of-the-art development of quantum software engineering and serverless quantum computing. \subsubsection{Qubits, Superposition and Entanglement} Quantum computing is based on the theory of quantum mechanics and, therefore, is fundamentally different from classical computing \cite{qbook-nielsenchuang}. The basic units of classical and quantum computing are strikingly different at the fundamental level: a classical bit and a quantum bit (or qubit). A bit has two states for computation, either 0 or 1. Besides these classical states, a qubit can have a \textit{superposition} state, i.e., a combination of states 0 and 1 simultaneously. Quantum algorithms can often achieve exponential speed-ups over classical solutions by leveraging this characteristic. We can describe the general state of a qubit $|\psi\rangle$ as follows: \begin{displaymath} |\psi\rangle = \alpha|0\rangle + \beta|1\rangle \end{displaymath} where $\alpha, \beta \in \mathbb{C}$ are complex numbers. However, whenever we measure a qubit, its superposition state collapses to one of the classical states (i.e., 0 or 1), with probabilities fixed by the normalization condition \begin{displaymath} ||\alpha||^2 + ||\beta||^2 = 1 \end{displaymath} where $||\alpha||^2$ and $||\beta||^2$ are the probabilities of obtaining 0 and 1, respectively, after measuring qubit $|\psi\rangle$. Hence, it is not straightforward to design a useful quantum algorithm by only utilizing the superposition attribute. Another critical characteristic of qubits that could be leveraged to design quantum algorithms is \textit{entanglement}. Entanglement is a robust correlation between two qubits, i.e., measuring one party immediately determines the state of the other, even if they are very far apart. In other words, if a pure state $|\psi\rangle_{AB}$ on two systems A and B cannot be written as $|\psi\rangle_A \otimes|\phi\rangle_B$, we call it \textit{entangled} \cite{IBMQuantum2022QiskitOnline}. \subsubsection{Quantum Gates} To perform quantum operations on qubits, we apply quantum gates, conceptually similar to how we apply classical gates, such as AND, OR, XOR, and NOT, on classical bits to perform classical computation. Quantum gates are always reversible, i.e., every gate has an inverse that undoes its action; many common gates are, moreover, their own inverses, so a qubit returns to its original state if we apply such a gate twice between initialization and measurement. For example, we could use the Hadamard (H) gate on qubit $|0\rangle$ to create an equal superposition $|+\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$. If we apply the H gate again, the $|+\rangle$ state will be reversed to the original $|0\rangle$ state.
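The following minimal Qiskit sketch (our own illustration, using the \texttt{qiskit.quantum\_info} module available in current Qiskit releases; the variable names are ours) makes the Hadamard example above concrete:
\begin{verbatim}
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(1)
qc.h(0)  # H|0> = |+>
print(Statevector.from_instruction(qc).probabilities_dict())
# approximately {'0': 0.5, '1': 0.5} -- an equal superposition

qc.h(0)  # the second H undoes the first
print(Statevector.from_instruction(qc).probabilities_dict())
# approximately {'0': 1.0} -- the qubit is back in |0>
\end{verbatim}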
A quantum gate U could be represented by a unitary matrix such that $U^\dagger U = \mathbb{I}$, where $\mathbb{I}$ is the identity matrix. The general representation of a single-qubit gate U is as follows: \begin{displaymath} U(\theta, \phi, \lambda) = \begin{bmatrix} \cos(\theta/2) & -e^{i\lambda}\sin(\theta/2) \\ e^{i\phi}\sin(\theta/2) & e^{i\lambda+i\phi}\cos(\theta/2) \end{bmatrix} \end{displaymath} where $\theta, \phi, \lambda$ are different parameters for each specific gate \cite{IBMQuantum2022QiskitOnline}. We can categorize quantum gates into two main types: single-qubit gates and multiple-qubit gates. Some popular single-qubit gates are the Pauli gates (Pauli-X, Pauli-Y, Pauli-Z), the Hadamard (H) gate, and the Phase (P) gate. For example, the Hadamard (H) gate could be represented as follows (with $\theta, \phi, \lambda$ = $\tfrac{\pi}{2}, 0, \pi$, respectively): \begin{displaymath} H = U(\tfrac{\pi}{2}, 0, \pi) = \tfrac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \end{displaymath} We can also apply quantum gates to multiple qubits simultaneously by using multi-qubit gates, such as the Controlled-NOT (CNOT) gate and the Toffoli gate. The CNOT gate, for instance, is a controlled two-qubit gate, i.e., the target qubit changes its state if the control qubit is $|1\rangle$ and does not change its state if the control qubit is $|0\rangle$ \cite{IBMQuantum2022QiskitOnline}. The matrix representation of the CNOT gate is as follows: \begin{displaymath} \text{CNOT} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ \end{bmatrix} \end{displaymath} We can create an \textit{entangled} state (a Bell state) by applying the CNOT gate to the $|0+\rangle$ state: \begin{displaymath} \text{CNOT}|0{+}\rangle = \text{CNOT}( \tfrac{1}{\sqrt{2}}(|00\rangle + |01\rangle)) = \tfrac{1}{\sqrt{2}}(|00\rangle + |11\rangle) \end{displaymath} \subsubsection{Quantum Circuits and Quantum Algorithms} When implementing a quantum algorithm using the gate-based approach, we need to connect an appropriate combination of quantum gates to build a quantum circuit. A quantum circuit's general operation includes three main stages: 1) initializing the qubits, 2) applying the quantum gates, and 3) performing the measurement. For example, the quantum circuit shown in Figure \ref{fig:djalgo} implements the Deutsch-Jozsa algorithm \cite{djalgo}. The main objective of this algorithm is to determine whether the oracle's hidden function is constant (i.e., it returns the same value, 0 or 1, for every input) or balanced (i.e., it returns 0 for half of the inputs and 1 for the other half). The oracle in this circuit is a \textit{"black box"} whose internal function we do not know. However, when we query it with arbitrary input data, it returns a binary answer, either 0 or 1. With the classical deterministic approach, we need to interact with the oracle at least two times and at most $2^{n-1}+1$ times, where $n$ is the number of input bits. Using the Deutsch-Jozsa algorithm, we need to query the oracle only once to get the final result. If the measurement outcomes of all the qubits are 0, we can determine that the oracle is constant; otherwise, it is balanced. Proposed in 1992, this algorithm was also the first to demonstrate that quantum computers could outperform classical computers \cite{IBMQuantum2022QiskitOnline}.
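To make this construction concrete, the following Qiskit sketch (our own minimal version; the oracle shown is just one arbitrary balanced choice) assembles a Deutsch-Jozsa circuit along the lines of Figure \ref{fig:djalgo}:
\begin{verbatim}
from qiskit import QuantumCircuit

n = 3  # number of input qubits

# One arbitrary balanced oracle: f(x) = x0 XOR x1 XOR x2,
# implemented with a CNOT from each input qubit to the ancilla
oracle = QuantumCircuit(n + 1)
for q in range(n):
    oracle.cx(q, n)

# Stage 1: initialization (ancilla in |1>),
# stage 2: gates, stage 3: measurement
dj = QuantumCircuit(n + 1, n)
dj.x(n)
dj.h(range(n + 1))
dj.compose(oracle, inplace=True)
dj.h(range(n))
dj.measure(range(n), range(n))
# Running this once on a simulator or device: the all-zero outcome
# indicates a constant oracle; any other outcome indicates balanced.
\end{verbatim}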
\begin{figure}[htbp] \centering \includegraphics[scale=0.26]{images/djalgo-qiskit.png} \caption{An example quantum circuit for the Deutsch-Jozsa Algorithm (generated by using Qiskit)} \label{fig:djalgo} \end{figure} \subsection{Quantum Software Engineering} \subsubsection{Quantum SDKs and languages} At the moment, when starting to develop a new quantum application, we do not necessarily have to learn a new programming language. Instead, we can continue utilizing familiar languages, such as Python \cite{qiskit, cirq} or C++ \cite{Mccaskey2021ExtendingComputing}. Fortunately, we have a wealth of quantum software development kits (SDKs) and programming languages to choose from, thanks to the productive work in this field by both the quantum industry and the research community. Some popular SDKs and languages that originated from well-known companies are: \begin{itemize} \item \textbf{Qiskit} \cite{qiskit} is well-known and probably the most popular (as per the Github repository’s star count) Python-based open-source SDK for developing gate-based quantum programs, initially developed by IBM. Qiskit has a wide range of additional libraries and support tools and is best suited to the IBM Quantum Cloud platform \cite{ibmq}. We can create a Qiskit program in multiple ways: by writing a Python script, using a Jupyter notebook, or using the online Quantum Composer provided by IBM Quantum. Then, we can run that program using a built-in simulator (such as the Aer or QASM simulators) or send it to execute on an IBM quantum computer (a minimal example of this workflow is sketched after this list). \item \textbf{Cirq} \cite{cirq} is another prevalent open-source Python software library, introduced by Google, for quantum computing. This SDK supports writing, manipulating, and optimizing quantum gate-based circuits. Cirq programs can run on built-in simulators (wave functions and density matrices) and Google’s quantum processors \cite{googleqcs}. It is also the underlying SDK for TensorFlow Quantum \cite{tensorflowquantum}, a high-level library for performing hybrid quantum-classical machine learning tasks. \item \textbf{Q\#} \cite{qsharp} is a new programming language from Microsoft for developing and executing quantum algorithms. It comes along with Microsoft’s Quantum Development Kit, which includes a set of toolkits and libraries for quantum software development. Q\# can also be used in many ways: in standalone programs, in Jupyter notebooks, from the command line, or integrated with host languages such as C\# or Python. \item \textbf{Braket} \cite{amz-braket-sdk} is an emerging Python-based SDK from Amazon for interacting with their quantum computing service, named Amazon Braket \cite{amazonbraket}. This SDK provides multiple ways to prototype and develop hybrid quantum applications, then run them on simulators (fully managed and local simulators) or quantum computers (provided by the third-party hardware companies D-Wave, Rigetti, and IonQ). \end{itemize} Besides, there are numerous quantum languages and SDKs proposed by research groups and other companies over the world, such as \textit{Forest} and \textit{pyQuil} by Rigetti \cite{Rigetti2021ForestFramework}; \textit{Strawberry Fields} \cite{Killoran2019StrawberryComputing} and \textit{PennyLane} \cite{Bergholm2020PennyLane:Computations} by Xanadu; \textit{Quingo} \cite{Fu2021Quingo:Features}; \textit{QIRO} \cite{Ittah2022QIRO:Optimization}; and \textit{qcor} \cite{qcor}.
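As a hedged illustration of the build-and-run workflow mentioned in the Qiskit item above (assuming the \texttt{qiskit} and \texttt{qiskit-aer} packages are installed; we reuse the Bell-state circuit from Section 2.1), a program might look as follows:
\begin{verbatim}
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# Build a two-qubit Bell-state circuit: H, then CNOT, then measure
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

# Run 1024 shots on the local Aer simulator
backend = AerSimulator()
job = backend.run(transpile(qc, backend), shots=1024)
print(job.result().get_counts())
# e.g. approximately {'00': 515, '11': 509} -- only entangled outcomes
\end{verbatim}
Sending the same circuit to a physical IBM machine essentially amounts to swapping the backend object for one obtained from the IBM Quantum provider.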
\subsubsection{NISQ era and the Hybrid Quantum-Classical model} John Preskill proposed the \textit{“Noisy Intermediate-Scale Quantum (NISQ)”} term in 2018 \cite{Preskill2018QuantumBeyond} to describe the current state of quantum computers. This term indicates two characteristics of today’s quantum devices: \textit{“noisy,”} i.e., the quantum state is unstable and error-prone due to the effects of various environmental interactions, and \textit{“intermediate scale,”} i.e., the quantum volume is at a medium level, with a few tens of qubits \cite{Weder2020TheLifecycle}. Even the most powerful quantum computer, the 127-qubit Eagle (released by IBM in 2021 \cite{ibm127qubit}), is still categorized as a NISQ computer. These limitations of current quantum devices pose many challenges for quantum software engineers in developing, executing, and optimizing quantum applications. Due to the NISQ nature, the typical pattern for developing today’s quantum programs combines quantum and classical parts \cite{Leymann2021HybridPerspective}. In this hybrid model, the classical components are mainly used for pre-processing and post-processing the data, while the remaining part is sent to quantum computers for computation. The quantum execution is repeated many times, and the averaged measurement outcomes are used to mitigate the error caused by the noisy quantum environment. An example of the hybrid quantum-classical model is Shor’s algorithm \cite{shoralgo} for finding prime factors of integer numbers. In this algorithm, the period-finding part, which leverages the Quantum Fourier Transform, is executed on a quantum computer, and classical post-processing then recovers the prime factors from the outcome of the quantum computation. Other hybrid computation examples are the Quantum Approximate Optimization Algorithm (QAOA) \cite{qaoa} and the Variational Quantum Eigensolver (VQE) \cite{vqe}. \subsubsection{Variation of Quantum Software Development Lifecycle} In traditional software development, a lifecycle is the overall procedure of developing and operating an application, involving many steps from designing, executing, and maintaining to investigating and adapting software \cite{Weder2020TheLifecycle}. Standardizing a lifecycle for quantum software development is also inevitably essential to ensure stability and scalability for the long term. Several studies have proposed various software lifecycles for quantum computing recently. In recent years, DevOps (Development and Operations) has been a trending model adopted by numerous companies to accelerate the software development process and increase revenue faster. DevOps is a union of people, processes, and products whose primary goal is to deliver value to end-users continuously \cite{devops}. Gheorghe-Pop et al. \cite{Gheorghe-Pop2020QuantumComputing} proposed the Quantum DevOps workflow for extending traditional DevOps phases into quantum software engineering. This workflow includes six continuous steps in each Dev and Ops phase: 1) Plan $ \rightarrow $ 2) Code $ \rightarrow $ 3) Build $ \rightarrow $ 4) Test $ \rightarrow $ 5) Release $ \rightarrow $ 6) Feedback. Weder et al. \cite{Weder2020TheLifecycle} proposed an overall 10-step quantum software lifecycle.
These steps include: 1) Quantum-Classical Splitting $ \rightarrow $ 2) Hardware-Independent Implementation $ \rightarrow $ 3) Quantum Circuit Enrichment $ \rightarrow $ 4) Hardware-Independent Optimization $ \rightarrow $ 5) Quantum Hardware Selection $ \rightarrow $ 6) Readout-Error Mitigation Preparation $ \rightarrow $ 7) Compilation and Hardware-Dependent Optimization $ \rightarrow $ 8) Integration $ \rightarrow $ 9) Execution $ \rightarrow $ 10) Result Analysis. Then, they proposed an altered lifecycle \cite{Weder2021QuantumLifecycle} with eight steps: 1) Requirement Analysis $ \rightarrow $ 2) Quantum-Classical Splitting $ \rightarrow $ 3) Architecture and Design $ \rightarrow $ 4) Implementation $ \rightarrow $ 5) Testing $ \rightarrow $ 6) Deployment $ \rightarrow $ 7) Observability $ \rightarrow $ 8) Analysis. Along with the Quingo framework proposed in \cite{Fu2021Quingo:Features}, the authors also suggested a six-phase life cycle for a quantum program: 1) Editing $ \rightarrow $ 2) Classical compiling $ \rightarrow $ 3) Pre-executing $ \rightarrow $ 4) Quantum compiling $ \rightarrow $ 5) Quantum executing $ \rightarrow $ 6) Classical post-executing. Due to the immature development and lack of standardization in this area, more effort is still needed to advance the field and adapt to quantum hardware's continuous growth. From a practical point of view, in the design and implementation of the QFaaS framework, we customized and proposed a sample 6-stage lifecycle for quantum function development, which will be described in detail in Section \ref{qfaaslifecycle}. \subsection{Quantum Computing as a Service (QCaaS)} Today's quantum computers are made available to the industry and research community as cloud services by quantum cloud providers \cite{Garcia-Alonso2022QuantumGateway}. This scheme is well known as Quantum Computing as a Service (QCaaS or QaaS), which corresponds to well-known paradigms in cloud computing such as Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) \cite{cloud-raj}. With QCaaS, software engineers can develop quantum programs and send them to quantum cloud providers for execution on appropriate hardware. After the computation finishes, the users only need to pay for the actual execution time of the quantum program (the pay-per-use model). In this way, QCaaS optimizes both the user's budget for using quantum computing services and the provider's resources. Many popular cloud providers nowadays offer quantum computing services using their own quantum hardware, such as IBM Quantum \cite{ibmq}, which is publicly accessible to everyone in this early phase. Besides, other quantum computing services (such as Amazon Braket \cite{amazonbraket} and Azure Quantum \cite{azurequantum}) collaborate with hardware companies such as D-Wave, Rigetti, and IonQ to provide commercial services. For example, Amazon Braket, a quantum computing service of Amazon Web Services (AWS), currently offers the pay-per-use pricing model shown in Table \ref{tab:braket}: \begin{table}[h!]
\caption{Amazon Braket Pricing for using Quantum Computers (April 2022) \cite{amazonbraket}} \label{tab:braket} \begin{tabular}{cccl} \toprule \textbf{Hardware Provider} & \textbf{QPU Family} & \textbf{Per-task price} (\$) & \textbf{Per-shot price} (\$) \\ \midrule D-Wave & 2000Q & 0.3 & 0.00019 \\ D-Wave & Advantage & 0.3 & 0.00019 \\ IonQ & IonQ device & 0.3 & 0.01 \\ OQC & Lucy & 0.3 & 0.00035 \\ Rigetti & Aspen-11 & 0.3 & 0.00035 \\ Rigetti & M-1 & 0.3 & 0.00035 \\ \bottomrule \end{tabular} \end{table} For running a gate-based quantum application on the Rigetti M-1 quantum computer with 10,000 shots (i.e., iterated executions) at Amazon Braket, we need to pay: \textit{Total charges = Task charge + Shots charge} \textit{= the number of tasks * per-task price + the number of shots * per-shot price} = 0.3*1 + 10,000 * 0.00035 = \$3.80 \cite{amazonbraket}. However, this paradigm still faces many challenges in solving real-world applications due to the limitations of today’s NISQ computers \cite{Preskill2018QuantumBeyond}. These devices have a small number of qubits that are error-prone and limited in capability. Therefore, improving the quality and quantity of qubits in quantum computers will accelerate the adoption of the QCaaS model and quantum software development. \subsection{Serverless Quantum Computing} \subsubsection{Serverless Computing and Function as a Service (FaaS)} In the classical computing domain, serverless is an emerging model and could be considered a second phase of traditional cloud computing \cite{Schleier-Smith2021WhatBecome}. Serverless does not mean the absence of physical servers; it refers to an execution model that simplifies application development by relieving developers from setting up the underlying system infrastructure. In other words, serverless implies that the existence of the servers is abstracted away from the software engineers. This computing model fits with modern software architecture, especially microservice applications, where the overall application is decomposed into multiple small and independent modules \cite{Eismann2021ServerlessHow}. The serverless computing concept generally incorporates both the Function-as-a-Service (FaaS) and Backend-as-a-Service (BaaS) models \cite{Taibi2021ServerlessHeading}. FaaS refers to the stateless ephemeral function model, where a function is a small, single-purpose artifact with a few lines of programming code. BaaS describes serverless-based file storage, database, streaming, and authentication services. As FaaS is a subset of the serverless model, its main objective is to provide a concrete and straightforward way to implement software compared with the traditional monolithic architecture. FaaS allows the software engineer to focus only on coding rather than environmental setup and infrastructure deployment. A function can be triggered by a database or object storage event, or deployed as a REST API and accessed via an HTTP connection. Functions also need to be scalable, i.e., automatically scaling down when idle and scaling up when the request demand increases. In this way, a FaaS platform could be an efficient way to optimize resources for providers and reduce costs for customers. There are numerous open-source FaaS platforms in the traditional cloud-native landscape, such as OpenFaaS, OpenWhisk, Kubeless, and Knative, as well as many commercial serverless platforms such as AWS Lambda, Azure Functions, and Google Cloud Functions \cite{Hassan2021SurveyComputing}.
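To connect the FaaS model above with the hybrid quantum-classical pattern of Section 2, the following sketch shows what a quantum function might look like in this style; the \texttt{handle} entry point, the JSON input schema, and the random-bit use case are our illustrative assumptions rather than a fixed interface:
\begin{verbatim}
import json
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

def handle(event: str) -> str:
    """Hypothetical FaaS entry point: a quantum random-bit generator."""
    # Classical pre-processing: parse and validate the request payload
    payload = json.loads(event)
    n_bits = int(payload.get("bits", 4))

    # Quantum part: put every qubit into superposition and measure once
    qc = QuantumCircuit(n_bits, n_bits)
    qc.h(range(n_bits))
    qc.measure(range(n_bits), range(n_bits))
    backend = AerSimulator()  # local simulator; a cloud QPU could stand in
    counts = backend.run(transpile(qc, backend),
                         shots=1).result().get_counts()

    # Classical post-processing: return the sampled bitstring as JSON
    return json.dumps({"random_bits": next(iter(counts))})
\end{verbatim}
An end-user would then invoke such a function through an HTTP request with a body like \texttt{\{"bits": 8\}}, while the platform takes care of scaling the underlying containers.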
\subsubsection{Serverless Quantum Computing} In November 2021, along with the 127-qubit quantum computer, IBM also introduced the concept of Quantum Serverless \cite{Johnson2021QuantumServerless}, an adapted execution model for combining quantum and classical resources together. This model follows the principles of the traditional serverless platform, which embodies four key characteristics: 1) The only job of software engineers is to focus on their coding, without any concern about infrastructure management; 2) All components are cloud-based services; 3) The services are scalable; and 4) It fits with the pay-per-use pricing model. IBM also introduced Qiskit Runtime \cite{qiskitruntime} in 2021, which allows users to execute circuits pre-built by their developer teams, and it could be an early example of their proposed concept. A serverless quantum computing model could also be a viable solution for utilizing today’s quantum computers effectively. Indeed, by decomposing a monolithic application into multiple single-purpose functions, we could distribute them to various backend devices. This approach is well-suited to the current state of NISQ devices, in which each device has limited resources and can be accessed anywhere through the quantum cloud. Besides, we could implement a hybrid quantum-classical model by combining quantum functions and classical functions in a unified application. This approach could leverage the power of existing quantum computers to facilitate promising new techniques, such as hybrid quantum-classical machine learning \cite{Biamonte2017QuantumLearning}. Although we can adopt the serverless model for quantum computing, the ways a traditional service and a quantum service are deployed and executed differ. We can deploy a service directly and permanently to a classical server once, and it can be invoked many times by the end-users later on. However, we cannot do the same thing with quantum computers, i.e., a quantum program cannot be deployed persistently on a specific quantum computer \cite{Garcia-Alonso2022QuantumGateway}. With today’s quantum computers, an appropriate quantum circuit (based on user inputs) needs to be built every time we execute a specific task. Then, that circuit is transpiled to a corresponding quantum system-level language (such as QASM \cite{qasm}) before being sent to a quantum cloud service for execution. Therefore, an adaptable serverless model for executing quantum tasks is needed to address this challenge. By leveraging the ideas of the serverless model and combining quantum and classical parts in a single service, we can adapt to the current nature of quantum cloud services, accelerate the software development process, and optimize quantum resource consumption. This kind of computing model could be a potential approach to enable quantum software engineers and end-users to realize the actual advantages of quantum computing and explore more complicated quantum computation in the future. \section{Related Work} This section discusses the related work in the context of frameworks for developing quantum applications. Table \ref{tab:relatedwork} summarizes the difference between QFaaS and related platforms in the context of the various capabilities offered by them.
\begin{table*}[htbp] \caption{Feature comparison of QFaaS and Related Work for Quantum Software Development \textit{(N/A: No information available)}} \label{tab:relatedwork} \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{p{0.18\textwidth}p{0.08\textwidth}p{0.1\textwidth}p{0.12\textwidth}p{0.08\textwidth}p{0.06\textwidth}p{0.08\textwidth}p{0.09\textwidth}} \toprule \textbf{Features} & \textbf{algo2qpu} \cite{Sim2018AComputers} & \textbf{SQC} \cite{Strangeworks2022StrangeworksPlatform} & \textbf{QuantumPath} \cite{quantumpath} & \textbf{SCIQC} \cite{Grossi2021AComputing} & \textbf{QAPI} \cite{Garcia-Alonso2022QuantumGateway} & \textbf{Quingo} \cite{Fu2021Quingo:Features} & \textbf{QFaaS} \textit{(Proposed)} \\ \midrule Quantum SDKs and languages & Forest & Qiskit, Q\#, Cirq, Forest, ProjectQ, ... & Qiskit, Q\#, Cirq, Ocean, pyQuil & Qiskit & N/A & Quingo & Qiskit, Q\#, Cirq, Braket \\ Code Development Environment & N/A & Web IDE & Visual Editor & N/A & $\times$ & N/A & Web IDE, Local IDE\\ Templates library & \checkmark & \checkmark & \checkmark & $\times$ & $\times$ & $\times$ & \checkmark \\ Quantum + Classical integration & \checkmark & \checkmark & \checkmark & \checkmark & $\times$ & \checkmark & \checkmark \\ API Gateway & $\times$ & $\times$ & \checkmark & $\times$ & \checkmark & $\times$ & \checkmark \\ Built-in REST API & $\times$ & \checkmark & $\times$ & $\times$ & $\times$ & $\times$ & \checkmark \\ Serverless FaaS & $\times$ & $\times$ & $\times$ & \checkmark & $\times$ & $\times$ & \checkmark\\ Containerization & $\times$ & $\times$ & $\times$ & \checkmark & $\times$ & $\times$ & \checkmark \\ DevOps \textit{(CI/CD)} & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & \checkmark \\ UI Dashboard & $\times$ & \checkmark & \checkmark & \checkmark & $\times$ & $\times$ & \checkmark \\ CLI tool & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & \checkmark \\ Implementation \textit{(with practical use cases)} & \checkmark & \checkmark & \checkmark & $\times$ & \checkmark & \checkmark & \checkmark \\ Job monitoring & $\times$ & $\times$ & $\times$ & \checkmark & $\times$ & $\times$ & \checkmark\\ Scalability & $\times$ & $\times$ & $\times$ & N/A & $\times$ & $\times$ & \checkmark \\ Quantum Simulators & N/A & \checkmark & \checkmark & \checkmark & N/A & \checkmark & \checkmark \\ Quantum Providers \textit{(S: single, M: multiple)} & \checkmark (S) & \checkmark (M) & \checkmark (M) & \checkmark (S) & \checkmark (S) & N/A & \checkmark (M) \\ \bottomrule \end{tabular} \end{table*} To the best of our knowledge, existing quantum computing platforms lack various capabilities to provide a universal environment for developing service-based quantum applications. In 2018, Sim et al. \cite{Sim2018AComputers} proposed \textit{algo2qpu}, a hardware and software agnostic framework that supports designing and testing adaptive hybrid quantum-classical algorithms on the Rigetti cloud-based quantum computer. They implemented two applications in quantum chemistry and quantum machine learning using their proposed framework. This work motivated us to develop a more efficient and flexible framework for hybrid quantum-classical software development. Strangeworks Quantum Computing (SQC) \cite{Strangeworks2022StrangeworksPlatform} is an online platform that allows us to use various templates, such as Qiskit, Q\#, Cirq, ProjectQ, Ocean, Pennylane, and Forest, to develop quantum programs (using Jupyter Notebook and Python script). 
These programs can run on IBM Quantum machines and other enterprise quantum hardware. SQC could be a convenient tool for quantum experiments and visualization. However, the running models of SQC and QFaaS are different. SQC provides a \textit{“code-and-run”} environment, where we can write a standalone program and run it directly. In contrast, QFaaS provides a function-as-a-service model, where each quantum function is deployed as a service for end-users through an API gateway. This way, we can easily integrate these quantum services into an existing application, following the microservice application scheme. Another related work applying the serverless model to quantum computing was proposed by Grossi et al. \cite{Grossi2021AComputing}. In this work, they proposed an architecture design to integrate a Minimum Viable Product (MVP) solution for merging quantum-classical applications with reusable code. This framework leverages multiple cloud-native technologies provided by IBM Cloud, such as IBM Cloud Functions and IBM Containers, to implement Qiskit programs on IBM Quantum. However, the implementation of this proposed system is not available, and it depends solely on IBM-based platforms, which could lead to the vendor lock-in problem of serverless computing. Garcia-Alonso et al. \cite{Garcia-Alonso2022QuantumGateway} proposed a proof of concept of a Quantum API Gateway and provided a simple validation using Python and the Flask framework on the Amazon Braket platform. QuantumPath, proposed by Hevia et al. \cite{quantumpath}, is a quantum software development platform aiming to support multiple quantum SDKs and languages, both gate-based and annealing quantum applications, and multiple design tools for creating quantum algorithms. QuantumPath does not support serverless and scalability features for further expansion, and it is still in a preliminary phase without a performance evaluation to validate the proposed design. Fu et al. \cite{Fu2021Quingo:Features} proposed an overall framework for developing heterogeneous quantum-classical applications, adapted to NISQ devices. Instead of working with popular quantum languages and SDKs, they also proposed a new programming language, called Quingo, to describe the quantum kernel. Although this is an exciting direction for further quantum framework development, developing a new programming language faces many challenges compared to improving well-known languages, such as building support among large developer communities, security testing for potential vulnerabilities, and covering all aspects of quantum and classical computation. Considering the limitations of the existing platforms, we propose a unified framework for quantum computing that bridges the existing gaps. We focus on building the QFaaS framework by bringing the advantages of traditional software engineering to quantum computing to alleviate the challenges of developing hybrid quantum-classical functions as a service. Our framework adapts to the NISQ era and is ready for a more stable quantum generation in the near future. \section{Overview of QFaaS Framework} This section presents the design principles, main components, function development life cycle, and operational workflows of QFaaS for developing and using hybrid quantum-classical functions.
\subsection{Design Principles} We design the QFaaS framework based on the following main principles: \begin{itemize} \item \textbf{Modularity}: The whole framework is built as a combination of multiple modules, where each module manages a specific functionality, thus simplifying further expansion and maintenance. This principle is inspired by the microservice software architecture and the \textit{“everything-as-a-service”} (XaaS) paradigm \cite{xaas}. Therefore, we could easily integrate new functionalities into the current framework without affecting other modules. \item \textbf{Serverless}: The quantum software engineers only need to focus on their programming, and the framework automatically carries out the rest of the deployment and execution procedures. Once deployed, the end-users can access the deployed services through the cloud-based API gateway in multiple ways. If a pricing strategy is established, they only need to pay for their actual resource usage. \item \textbf{Flexibility}: The framework allows users to choose their preferred quantum languages, libraries, and quantum providers to avoid potential vendor lock-in situations. The proposed framework supports the current NISQ computers and multiple quantum simulators. Its architecture provides the flexibility to implement possible extensions to support various quantum technologies in the future. \item \textbf{Seamlessness}: The framework supports continuous integration and continuous deployment (CI/CD), two of the most essential characteristics of the DevOps life cycle, to continuously deliver value to end-users without any interruption. Utilizing this model boosts application development and makes it more reliable compared with the traditional paradigm \cite{Gheorghe-Pop2020QuantumComputing}. \item \textbf{Reliability}: The framework architecture is implemented using state-of-the-art technologies to ensure high availability, security, fault tolerance, and trustworthiness of the overall system. \item \textbf{Scalability}: As one of the critical characteristics of the serverless model, the framework is scalable and adapts to the actual user requests to optimize both the performance and the resource consumption, eventually providing the optimum cost for end-users. \item \textbf{Transparency}: The operation workflow of the framework is transparent to both the quantum software engineers and end-users. It also provides information for diagnosing, troubleshooting, logging, and monitoring for further investigation. \item \textbf{Security}: The interactions among different components need to be secure. The framework has built-in identity and access management features to guarantee that users have sufficient privileges before performing each operation in the system. \end{itemize} \subsection{QFaaS Architectural Components} The architectural design of QFaaS comprises a set of principal components, which are pluggable and extendable. We break down QFaaS into six components: the QFaaS Core APIs and API Gateway, the Application Deployment Layer, the Classical Cloud Layer, the Quantum Cloud Layer, the Monitoring Layer, and the User Interface. Figure \ref{fig:qfass-design} illustrates the overall design, including the architecture and principal components of our framework.
\begin{figure}[htbp] \centering \includegraphics[scale=0.8]{images/qfaas-design.pdf} \caption{Overview of QFaaS Components and Architecture Design} \label{fig:qfass-design} \end{figure} \subsubsection{QFaaS APIs and API Gateway} QFaaS exposes two kinds of APIs to the appropriate users: Service APIs and QFaaS Core APIs. \begin{itemize} \item \textbf{Service APIs} are the set of APIs corresponding to the deployed functions. Each function running on the Kubernetes cluster has a unique API URL accessible to authorized end-users. \item \textbf{QFaaS Core APIs} form one of the most important components of the QFaaS framework. They comprise a set of secure REST APIs that provide the primary operations among all components of the system. These APIs simplify function development, invocation, management, and monitoring, and they also power the main functionalities of the QFaaS Dashboard and the QFaaS CLI tool. We explain the detailed design and implementation of the QFaaS Core APIs in Section \ref{section-coreapi}. \end{itemize} The API gateway serves as the entrance through which users interact with the other components. This gateway routes users' requests to the suitable components and delivers the results back to the users once processing completes. \subsubsection{Application Deployment Layer} This layer serves as a bridge between quantum software engineers and the cloud layers to deploy and publish each function as a service for end-users. It takes the principal responsibility for developing, storing, and deploying quantum functions by combining four key components: \begin{itemize} \item \textbf{Code Repository} is a Git-based platform that stores function code with version control management, which is essential for collaboration in software development. This repository can be deployed privately or publicly using popular Git-based platforms such as GitLab, GitHub, and Bitbucket. \item \textbf{Function Templates} include the container-based quantum software environment setup and a sample function format for all supported quantum SDKs and languages, including Qiskit, Cirq, Q\#, and Braket. \item \textbf{QFaaS Automation} implements the Continuous Integration and Continuous Deployment (CI/CD) process, following the DevOps paradigm to ensure the continuous delivery of reliable quantum functions to end-users. \item \textbf{Container Registry} stores the Docker images of functions and their environmental setup produced by the QFaaS Automation process. These images are immutable and can be used to deploy a function as a container-based service to the classical cloud layer. \end{itemize} \subsubsection{Classical Cloud Layer} This layer comprises a cluster of cloud-based classical computers (physical servers or virtual machines) where the QFaaS functions are deployed and executed. All classical processing tasks, including pre-processing and post-processing, are performed at this layer. We employed Kubernetes to orchestrate all the \textit{pods} (the container-based unit of Kubernetes) for the QFaaS functions across all cluster nodes. Each function runs on a pod and can be scaled horizontally by replicating the original pod. All quantum SDKs and languages have built-in quantum simulators, which can run directly inside a pod in the Kubernetes cluster. We refer to this kind of simulator as an \textit{internal quantum simulator}, while an \textit{external quantum simulator} denotes a simulator provided by a quantum cloud provider.
We also deployed a NoSQL database (MongoDB) on this layer to store the processed job results and information about users, functions, and backends. \subsubsection{Quantum Cloud Layer} This layer represents the external quantum cloud providers (such as IBM Quantum and Amazon Braket), where quantum jobs can be executed on physical quantum computers. Quantum providers offer either quantum simulators or actual quantum computers through their cloud services, which are accessed from the Classical Cloud layer. \subsubsection{Monitoring Layer} This layer includes different components that periodically check the status of the other layers: \begin{itemize} \item \textbf{Quantum job monitoring} component periodically checks the job status, queuing information, and quantum job results from external quantum providers. \item \textbf{Function monitoring} component provides the current status of deployed quantum functions and function usage statistics (such as the number of invocations). \item \textbf{System monitoring} component provides the status of the overall system, such as network status and resource consumption in the Kubernetes cluster. \item \textbf{Deployment monitoring} component tracks the function deployment process to provide quantum software engineers with helpful information for diagnosing issues during the function development process. \end{itemize} \subsubsection{User Interface} QFaaS offers quantum software engineers and end-users two main ways to interact with the core components: a command-line interface (QFaaS CLI) and a web-based user interface (QFaaS Dashboard). \begin{figure}[htbp] \centering \includegraphics[scale=0.33]{images/qfaas-cli.png} \caption{QFaaS CLI tool (sample commands for working with QFaaS functions)} \label{fig:qfass-cli} \end{figure} \begin{itemize} \item \textbf{QFaaS CLI} (shown in Figure \ref{fig:qfass-cli}) is a Python-based command-line tool for working with a local function development environment. This CLI tool is mainly built for quantum software engineers who prefer to use a local IDE (Integrated Development Environment) such as Visual Studio Code, Atom, or PyCharm to develop functions and interact with the QFaaS core system remotely. The QFaaS CLI offers features corresponding to those the QFaaS Dashboard provides for quantum software engineers. \begin{figure}[htbp] \centering \includegraphics[scale=0.22]{images/dashboard-invoke.png} \caption{QFaaS Dashboard Example for invoking a Qiskit function with IBM Quantum provider} \label{fig:qfass-dashboard} \end{figure} \item \textbf{QFaaS Dashboard} (shown in Figure \ref{fig:qfass-dashboard}) is a modern web-based user interface built using ReactJS\footnote{https://reactjs.org/} for the frontend and Python 3.10 for its API backend (QFaaS Core APIs). This dashboard allows: \begin{enumerate} \item \textit{Quantum software engineers} to develop functions using a built-in code editor or by uploading their code files; to update and manage their functions and templates; and to monitor the status of function deployments and the overall system. \item \textit{End-users} to use the deployed quantum functions by invoking them (i.e., sending requests) and by monitoring and managing their requests and results.
\end{enumerate} \end{itemize} \subsection{Hybrid Quantum-Classical Function Life Cycle} \label{qfaaslifecycle} Inspired by several proposed quantum software life cycles \cite{Weder2020TheLifecycle, Weder2021QuantumLifecycle,Fu2021Quingo:Features}, we propose an adapted 6-phase life cycle of a hybrid quantum-classical function (see Figure \ref{fig:qfass-fn}) for QFaaS as follows: \begin{figure}[htbp] \centering \includegraphics[scale=0.097]{images/fn-lifecycle.png} \caption{Hybrid Quantum-Classical Function Life Cycle.} \label{fig:qfass-fn} \end{figure} \begin{enumerate} \item \textbf{Function Development}: Quantum software engineers develop their quantum function, using quantum SDKs for the quantum computation and optionally including classical parts for pre-processing, post-processing, and backend selection. After finishing the development, functions are pushed to a Git-based code repository, and the function deployment process is triggered. \item \textbf{Function Deployment}: The quantum function is deployed at the classical backend (e.g., a Kubernetes cluster) by employing modern software engineering technologies and models, such as containerization, continuous integration, and continuous deployment (CI/CD). During the deployment process, some components of the function may be compiled; for example, we use \textit{dotnet} to compile Q\# code into a classical binary, which can be imported into the main function handler. After deployment, the function is published as a service, allowing end-users to access it through the API gateway. \item \textbf{Classical Pre-Processing}: The function executes the classical computation task to pre-process the user's input data and passes all the requested data to the appropriate component in the function handler. \item \textbf{Backend Selection}: An appropriate quantum backend must be selected for the quantum execution based on user preference. These quantum backends can be a quantum simulator or a physical quantum computer provided by a quantum cloud computing vendor (such as IBM Quantum, Amazon Braket, and Azure Quantum). In the case of simulators, it can be either the built-in simulator of the corresponding quantum SDK, which runs on top of a classical computer in the Kubernetes cluster (internal quantum simulator), or an external quantum simulator provided by the quantum cloud provider (such as the IBMQ QASM Simulator or the Amazon Braket SV Simulator). \item \textbf{Quantum Computation}: A corresponding quantum circuit is built according to the user input and then sent to the selected quantum backend. If the end-user does not specify a particular quantum backend, QFaaS automatically chooses the most suitable (least busy) backend at the quantum provider for executing the quantum circuit. As today's quantum computers are NISQ devices \cite{Preskill2018QuantumBeyond}, each quantum execution should be run many times (shots) to mitigate quantum errors. Moreover, due to the limited number of available quantum computers, a quantum task (job) needs to be queued at the quantum provider (from seconds to hours) before execution. After the quantum computation is finished, the outcome is sent back to the function handler for post-processing or directly returned to the end-users. \item \textbf{Classical Post-Processing}: This optional step takes place at classical computation nodes to process the outcome from the quantum backend before sending it to the end-user via the API gateway.
\end{enumerate} \subsection{Operation Workflows of QFaaS} \subsubsection{Workflow for developing and deploying quantum functions} QFaaS simplifies the function development process for quantum software engineers. They can follow these workflows to create a new function, update an existing function, or diagnose issues during the development process: \begin{itemize} \item \textbf{Develop a new function}: Figure \ref{fig:qfass-dev} illustrates the function development process, including seven main steps. The first two steps are the responsibility of the quantum software engineer, and the QFaaS framework handles the remaining steps. Our simple 2-step procedure for developing a new quantum function is as follows: \begin{enumerate} \item Create a new function using the QFaaS Dashboard or the CLI tool: \begin{itemize} \item Specify which quantum template will be used (Qiskit, Cirq, Q\#, or Braket) and the environment variables (such as secrets, annotations, and scale factor) for the function. \item Compose the quantum function using a local IDE or the web-based IDE of the QFaaS Dashboard. \end{itemize} \item Push the function code to the Code Repository through the QFaaS API Gateway. \textit{After these first two steps, QFaaS automatically takes responsibility for the rest of the deployment procedure by performing the following steps:} \item The API gateway forwards the function code and pushes it to the Code Repository. \item This triggers the Automation component to start the deployment process. We implemented the QFaaS Automation component on top of Gitlab Runner\footnote{https://docs.gitlab.com/runner/}. \item QFaaS pulls the function template, combines it with the function code, and containerizes it into a Docker image. This image is then pushed to the Container Registry, where it is stored for further utilization (such as migrating or scaling up a function). \item QFaaS deploys the function into the Kubernetes cluster at the classical cloud layer. \item QFaaS exposes the service API URL endpoint corresponding to the deployed function. After this stage, the function serves as a service and is ready to be invoked by end-users. \end{enumerate} \begin{figure}[htbp] \centering \includegraphics[scale=0.49]{images/qfaas-dev.pdf} \caption{Operation workflow for developing and deploying QFaaS functions} \label{fig:qfass-dev} \end{figure} \item \textbf{Update, delete functions}: QFaaS facilitates these actions for quantum software engineers by providing corresponding features in the Dashboard and CLI tool. After an update is triggered at the Function Development layer, QFaaS activates the Automation component and takes over the remaining tasks. \item \textbf{Monitor functions and system status}: The Monitoring layer periodically checks the status of the deployment process, function usage, and system status (such as network status and Kubernetes cluster resource consumption) to provide insights for quantum software engineers on demand. \item \textbf{Troubleshoot and diagnose problems}: If any issues are discovered during deployment or invocation, QFaaS checks and provides detailed logs for the engineer to investigate and identify the problem. Once the engineer fixes the issues, the function is automatically redeployed to the cluster, ensuring the continuous integration and continuous delivery of its latest version to end-users.
\end{itemize} \subsubsection{Workflow for invoking quantum functions} End-users can invoke (i.e., send their requests to) the deployed function in several ways: 1) using the QFaaS Dashboard, 2) using the QFaaS CLI tool, or 3) integrating and calling QFaaS APIs from other applications or third-party API testing tools (such as Postman\footnote{https://www.postman.com/}). \begin{figure}[htbp] \centering \includegraphics[scale=0.55]{images/qfaas-user.pdf} \caption{Operation workflow for invoking QFaaS functions} \label{fig:qfass-use} \end{figure} Figure \ref{fig:qfass-use} demonstrates the overall workflow for function invocation, including seven main steps. In this procedure, end-users only need to send their request (the first step) and then wait for the QFaaS framework to take responsibility for the remaining tasks. \begin{enumerate} \item \textbf{Sending the request}: In the request data, users can specify their preferred backend (or let the framework automatically select a suitable one), the result retrieval method, and the number of shots for repeating the quantum task. We describe a detailed sample request in Section \ref{sec:fn-invocation}. \textit{After receiving the user's request data, QFaaS automatically performs the rest of the process.} \item \textbf{Routing the request}: The API Gateway routes user requests to the appropriate available functions. If the function has not started yet or has been scaled down to zero, QFaaS initializes and activates it to process the user request (a situation known as \textit{cold start} in serverless terminology). \item \textbf{Pre-processing} \textit{(optional)}: The user's input data is pre-processed at the classical computation nodes. \item \textbf{Backend Selection}: An appropriate backend is selected based on the user request and the availability at the quantum provider. \item \textbf{Executing the quantum job}: A quantum circuit corresponding to the user's input data is generated and sent to the selected backend device, which can be an internal quantum simulator (5a), an external quantum simulator (5b), or a physical quantum computer (5c). The circuit is compiled to the appropriate quantum system language (such as QASM), and the quantum backend performs the quantum task. After finishing the execution, the outcome from the quantum backend is sent back to the function handler for further processing. If end-users prefer to track the response data later, the function returns the quantum job ID and information about the backend device right after successfully submitting the quantum circuit to the quantum backend. \item \textbf{Post-processing} \textit{(optional)}: The outcome from the quantum backend can be analyzed and post-processed before being sent back to end-users and stored in the database. \item \textbf{Returning the results}: Finally, the result is returned to end-users via the API Gateway, in the same way as the request was submitted. End-users receive the result data along with information about the backend device used for the quantum execution. Quantum engineers can also add further information and analysis to help end-users gain insight from the execution result. \end{enumerate} \section{Design and Implementation of QFaaS} \subsection{QFaaS Core APIs} \label{section-coreapi} This section provides the detailed design and implementation of the QFaaS Core APIs, which are responsible for the primary functionalities of the QFaaS framework.
We have developed this API set using Python 3.10 with FastAPI\footnote{https://fastapi.tiangolo.com/}, a high-performance Python-based framework supporting the Asynchronous Server Gateway Interface (ASGI) for concurrent execution. We also used the MongoDB\footnote{https://www.mongodb.com/} database to store persistent data as JSON documents. Figure \ref{fig:qfass-coreapi} depicts the overall class diagram, with the attributes and methods of each object in the QFaaS Core APIs. \begin{figure}[htbp] \centering \includegraphics[scale=0.12]{images/QFaaSClassDiagram.png} \caption{Overall Class Diagram of QFaaS Core REST APIs} \label{fig:qfass-coreapi} \end{figure} All objects support the four essential CRUD operations, i.e., Create, Read (get), Update, and Delete, using the corresponding HTTP methods POST, GET, PUT, and DELETE, respectively. \begin{itemize} \item \textbf{User:} This class defines user attributes and methods to facilitate access control features. We distinguish three user roles with different privileges in the system: administrator, software engineer, and end-user. Administrators have complete control of all components; software engineers can develop and deploy functions; end-users can only use the functions appropriate to them. Each active user is assigned a unique token (using the OAuth2 Bearer specification\footnote{https://datatracker.ietf.org/doc/html/rfc6750}), which is used for authentication, authorization, and dependency checks for each interaction with the core components of QFaaS, such as creating a new function or invoking a function. This implementation enhances the security of the whole framework and provides a multi-user environment. \item \textbf{Function:} The Function class defines each function's properties and supported methods. Each function belongs to a specific software engineer (its author) and can be published for end-users. The \textit{CRUD}, \textit{invoke()}, and \textit{scale()} methods of this object interact directly with other architectural components, such as the Code Repository, Container Registry, and Kubernetes cluster, to handle function deployment, management, and invocation. \item \textbf{Job:} A job in QFaaS is a computation task submitted to a quantum backend (the internal cluster or an external provider) for execution. All properties and methods of a job are defined in the Job class. Each job can have two unique IDs: a \textit{jobID} assigned by QFaaS and an associated \textit{providerJobID} given by an external provider for further job monitoring. The function invocation initializes the job object. After finishing the execution, job results are stored in the NoSQL database for further investigation or post-processing. \item \textbf{Provider:} The Provider class handles a user's authorization to external quantum providers (such as IBM Quantum and Amazon Braket). The design of this class ensures that each user can access only their own quantum providers. \item \textbf{Backend:} A backend is a device, such as a classical computer, a quantum simulator, or a quantum computer, which executes jobs. The Backend class defines the attributes and methods to interact with backends provided by the classical cluster or external quantum providers. \end{itemize}
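To make this design concrete, Code \ref{lst:coreapi-sketch} is a minimal sketch, not the actual QFaaS source, of how a Core API route for function invocation could be declared with FastAPI and OAuth2 bearer tokens; \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{verify_token} and \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{forward_to_service} are hypothetical helpers standing in for the dependency check and request routing described above. \begin{lstlisting}[language=Python, caption={Illustrative sketch of a Core API route for function invocation (hypothetical helpers, not the actual QFaaS source)}, label={lst:coreapi-sketch}]
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import OAuth2PasswordBearer
from pydantic import BaseModel

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

class InvocationRequest(BaseModel):
    input: int
    provider: str = "internal"
    shots: int = 1024
    waitForResult: bool = True

def verify_token(token):
    ...  # hypothetical stand-in for the OAuth2 dependency check

async def forward_to_service(fn_name, data):
    ...  # hypothetical stand-in for routing to the function pod

@app.post("/function/{fn_name}/invoke")
async def invoke_function(fn_name: str, req: InvocationRequest,
                          token: str = Depends(oauth2_scheme)):
    # dependencyCheck(): reject callers without sufficient privileges
    user = verify_token(token)
    if user is None:
        raise HTTPException(status_code=403, detail="PermissionError")
    # Route the validated request to the deployed function service
    job = await forward_to_service(fn_name, req.dict())
    return job.result if req.waitForResult else {"jobID": job.id}
\end{lstlisting} Declaring the token as a FastAPI dependency means every route performs the access check before any function logic runs, matching the per-interaction validation described above.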
\subsection{QFaaS Function Development} \subsubsection{Function Templates} Quantum function templates are Docker images containing the environmental setup for quantum function development. Each quantum SDK has a corresponding \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{Dockerfile}, which specifies all necessary libraries and packages for that SDK. These templates inherit the \textit{of-watchdog}\footnote{https://github.com/openfaas/of-watchdog} component for initiating and monitoring functions. In this way, quantum functions can be kept \textit{“warm”} (i.e., running) with low latency and maintain persistent HTTP connections, quickly serving user requests. \subsubsection{Function Structure} Each quantum function has a simple working directory whose main components follow the common pattern of serverless platforms (such as AWS Lambda \cite{awslambda}): \begin{itemize} \item \textbf{Function dependencies:} All necessary Python-based libraries are declared in the \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{requirements.txt} file. These libraries are automatically installed during function deployment and can be imported for use within the function. \item \textbf{Function handler}: includes the source code of the function, comprising classical parts (using Python) and quantum parts (using Qiskit, Cirq, Q\#, or Braket). When end-users invoke the function, QFaaS executes the function handler and starts the computation on an appropriate backend device. Handlers for Qiskit and Cirq functions can be defined in the \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{handler.py} file, while a Q\# function requires additional Q\# code in a \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{handler.qs} file, which is then imported into the main \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{handler.py} file. The sample format of a function handler, which handles user requests and returns the response, with classical pre-processing and post-processing parts, is defined using general Python syntax as shown in Code \ref{lst:fn-code}. \end{itemize} First, we import all necessary quantum libraries (and the Q\# operation, for Q\# functions only). Then, we define the function methods as follows: \begin{itemize} \item \textbf{Classical pre-processing and post-processing}: In each function, we can optionally define pre-processing and post-processing methods executed on classical computers (the Kubernetes cluster). \item \textbf{Backend selection}: We can also define a custom strategy that allows end-users to select the appropriate quantum backend (internal simulator, external simulator, or quantum computer at a cloud provider). \item \textbf{Main handler method}: The primary handler method is similar to that of AWS Lambda \cite{awslambda}: the user's requests are delivered to the function in two objects, \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{event} and \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{context}. The \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{event} is JSON-based data that comprises the user input, following the QFaaS sample format, while \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{context} provides the HTTP method (such as GET or POST), HTTP headers, and other optional request properties. After all processing finishes, the function handler returns the result to end-users, including the HTTP status code and response data in JSON format.
QFaaS uses JSON as the default format to standardize user requests and responses, while allowing quantum software engineers to customize the format if needed. \end{itemize} \begin{lstlisting}[language=Python, caption=Sample structure of a hybrid quantum-classical function, label={lst:fn-code}]
import qfaas
# import additional libraries here (e.g., a quantum SDK)

def pre_processing(data):
    ...  # optional classical pre-processing

def post_processing(data):
    ...  # optional classical post-processing

def backend_selection(input):
    ...  # quantum backend selection strategy

def handle(event, context):
    ...  # main function handler
    return result
\end{lstlisting} \subsection{QFaaS Core Functionalities Implementation} In this section, we describe the implementation of two core functionalities of the QFaaS framework: function deployment and function invocation. \subsubsection{Function Deployment} The overall function deployment procedure is implemented following Algorithm \ref{algo:fn-deployment}. Before launching the deployment process, \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{dependencyCheck()} validates the access token and checks the permissions of the current user, who is sending the function deployment request. The deployment process is only triggered if the \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{dependencyCheck()} passes and the function name is valid. First, the function code template is retrieved based on the user's selection. Then, the function code is integrated into that template to create a function package, which is pushed to the Code Repository for version control purposes. The Automation component is triggered to perform the \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{ContinuousDeployment} process whenever new function code is pushed (lines 8--13). This process starts by containerizing the function package into an appropriate Docker image. It then uploads that image to the Container Registry to facilitate the continuous deployment process. Afterward, a container-based service is deployed into the Kubernetes cluster, and a corresponding API endpoint is published. Finally, this service is ready for invocation by authorized users through the QFaaS API gateway. The sequence diagram of a sample function deployment process is shown in Figure \ref{fig:qfass-seqdep}. A simplified Python sketch of these steps is given below.
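Code \ref{lst:deploy-sketch} sketches the containerize-push-deploy steps of the \textit{ContinuousDeployment} procedure using the Docker SDK for Python and \textit{kubectl}; the registry URL, manifest path, and gateway address are placeholders rather than the actual QFaaS implementation. \begin{lstlisting}[language=Python, caption={Simplified sketch of the containerize-push-deploy steps in Algorithm \ref{algo:fn-deployment}}, label={lst:deploy-sketch}]
import subprocess
import docker

def continuous_deployment(fn_name: str, fn_dir: str, registry: str) -> str:
    client = docker.from_env()
    repository = f"{registry}/{fn_name}"
    # buildFunctionImage: containerize the function package
    client.images.build(path=fn_dir, tag=f"{repository}:latest")
    # pushImageToContainerRegistry: store the immutable image
    client.images.push(repository, tag="latest")
    # deployServiceToCluster: apply a pre-generated Kubernetes manifest
    subprocess.run(["kubectl", "apply", "-f", f"{fn_dir}/deployment.yaml"],
                   check=True)
    # publishServiceAPI: the gateway exposes the function endpoint
    return f"https://<gateway>/function/{fn_name}"
\end{lstlisting}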
\begin{algorithm} \small \SetKwData{fnCode}{fnCode} \SetKwData{fnName}{fnName} \SetKwData{fnImage}{fnImage} \SetKwData{function}{function} \SetKwData{template}{template} \SetKwData{fnTemplate}{fnTemplate} \SetKwData{fnConfig}{fnConfig} \SetKwData{fnService}{fnService} \SetKwData{fnEndpoint}{fnEndpoint} \SetKwData{currentUser}{currentUser} \SetKwFunction{getCurrentUser}{getCurrentUser} \SetKwFunction{dependencyCheck}{dependencyCheck} \SetKwFunction{fnNameCheck}{fnNameCheck} \SetKwFunction{ContinuousDeployment}{ContinuousDeployment} \SetKwFunction{pushImageToContainerRegistry}{pushImageToContainerRegistry} \SetKwFunction{pushFnPackageToCodeRepository}{pushFnPackageToCodeRepository} \SetKwFunction{getTemplate}{getTemplate} \SetKwFunction{createFunctionPackage}{createFunctionPackage} \SetKwFunction{buildFunctionImage}{buildFunctionImage} \SetKwFunction{deployServiceToCluster}{deployServiceToCluster} \SetKwFunction{publishServiceAPI}{publishServiceAPI} \SetKwInOut{KwIn}{Input} \SetKwInOut{KwOut}{Output} \SetKwProg{procedure}{procedure}{}{} \KwIn{\fnName: function name, \fnCode: main handler code, \fnTemplate: template name, \fnConfig: additional configuration} \KwOut{\fnService: Deployed service of given function, \fnEndpoint: API endpoint URL of deployed function} \currentUser $\leftarrow$ \getCurrentUser() \\ \eIf{\dependencyCheck(\currentUser) is passed}{ \eIf{\fnNameCheck(\fnName) is valid}{ \template $\leftarrow$ \getTemplate(\fnTemplate) \\ \function $\leftarrow$ \createFunctionPackage(\fnCode, \template, \fnConfig) \\ \pushFnPackageToCodeRepository(\function) \\ \textit{Trigger the Continuous Deployment} \\ \procedure{\ContinuousDeployment}{ \fnImage $\leftarrow$ \buildFunctionImage(\function) \\ \pushImageToContainerRegistry(\fnImage) \\ \fnService $\leftarrow$ \deployServiceToCluster(\fnImage) \\ \fnEndpoint $\leftarrow$ \publishServiceAPI(\fnService) }{\KwRet{\fnService, \fnEndpoint}} }{\KwRet{$FunctionNameError$}} }{\KwRet{$PermissionError$}} \caption{QFaaS Function Deployment} \label{algo:fn-deployment} \end{algorithm} \begin{figure}[htbp] \centering \includegraphics[scale=0.24]{images/QFaaSDeployment.png} \caption{Sequence Diagram of function deployment process in QFaaS} \label{fig:qfass-seqdep} \end{figure} \subsubsection{Function Invocation} \label{sec:fn-invocation} We describe the implementation of handling the function invocation in Algorithm \ref{algo:fn-invocation}. Similar to the function deployment, we perform the \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{dependencyCheck()} as a mandatory requirement before each invocation to ensure the framework's security. After passing that validation, the QFaaS API gateway forwards the request to the corresponding service in the cluster. 
\begin{algorithm} \small \SetKwData{req}{req} \SetKwData{fnEndpoint}{fnEndpoint} \SetKwData{result}{result} \SetKwData{jobID}{jobID} \SetKwData{currentUser}{currentUser} \SetKwData{function}{function} \SetKwData{processedData}{processedData} \SetKwInOut{KwIn}{Input} \SetKwInOut{KwOut}{Output} \SetKwData{qcircuit}{qcircuit} \SetKwData{pToken}{pToken} \SetKwData{backend}{backend} \SetKwProg{procedure}{procedure}{}{} \SetKwFunction{forwardRequest}{forwardRequest} \SetKwFunction{preProcessing}{preProcessing} \SetKwFunction{buildQuantumCircuit}{buildQuantumCircuit} \SetKwFunction{getProviderToken}{getProviderToken} \SetKwFunction{BackendSelection}{BackendSelection} \SetKwFunction{getFunction}{getFunction} \SetKwFunction{submitJob}{submitJob} \SetKwFunction{postProcessing}{postProcessing} \SetKwFunction{getID}{getID} \KwIn{\req: User request data (JSON), \fnEndpoint: API endpoint of function} \KwOut{\result: Computation result data, or \jobID: Job ID for later tracking } \currentUser $\leftarrow$ \getCurrentUser() \\ \eIf{\dependencyCheck(\currentUser) is passed}{ \function $\leftarrow$ \getFunction(\fnEndpoint) \\ \forwardRequest(\req, \function) \\ \processedData $\leftarrow$ \preProcessing(\req.data) \\ \qcircuit $\leftarrow$ \buildQuantumCircuit(\processedData) \\ \pToken $\leftarrow$ \getProviderToken(\currentUser) \\ \backend $\leftarrow$ \BackendSelection(\qcircuit.qubit, \req.backendInfo, \pToken) \\ job $\leftarrow$ \submitJob(\currentUser, \qcircuit, \backend, \req.shots) \\ \eIf{\req.waitForResult == \textsf{True}}{ \result $\leftarrow$ \postProcessing(job.result) \\ \KwRet{\result} }{ \jobID $\leftarrow$ \getID(job) \\ \KwRet{\jobID} } }{\KwRet{$PermissionError$}} \caption{QFaaS Function Invocation} \label{algo:fn-invocation} \end{algorithm} We define a sample JSON format for requests and responses in the function template. However, quantum software engineers can customize the request and response to adapt to their functional requirements. A sample JSON request is as follows (Code \ref{lst:invoke-rqst}): \begin{lstlisting}[language=json, caption=Sample JSON request format to invoke a QFaaS function, label={lst:invoke-rqst}]
{
  "input": <input data>,
  "provider": <provider name>,
  "shots": <number of shots>,
  "waitForResult": <true or false>,
  "backendInfo": {
    "autoselect": <true or false>,
    "type": <expected backend type if autoselect = true>,
    "backendName": <quantum backend name if autoselect = false>
  }
}
\end{lstlisting} Apart from the input data, end-users can also define: \begin{itemize} \item \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{provider}: specifies the preferred quantum provider, either an internal simulator or a quantum cloud provider (such as IBM Quantum or Amazon Braket). When selecting an external quantum cloud provider, users need to specify their preferred quantum backend in the \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{backendInfo} section. If the \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{autoselect} variable is \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{true} and the expected backend \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{type} (internal/external simulator or quantum computer) is set, QFaaS will automatically select the best-suited quantum backend (the least busy device with enough qubits for the computation).
The users can also manually select a specific quantum backend by setting this value to \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{false} and declaring the backend name in \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{backendName}. \item \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{shots}: the number of times the quantum computation is repeated. \item \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{waitForResult}: if this option is set to true, users wait in the current session until the function execution finishes and the result is returned. This waiting time at the quantum cloud provider can be very long (for queuing, transpiling, and executing) given the current NISQ systems. Otherwise, QFaaS sends the job to the quantum backend device and provides the job ID for tracking the result later. \end{itemize} Based on the functionalities of the service, pre-processing and post-processing can be performed before and after the execution. Following that optional data processing, a corresponding quantum circuit is built from the user's input data. Then, the \textit{BackendSelection} procedure is performed as in Algorithm \ref{algo:backendselection}. \begin{algorithm} \small \SetKwData{rQ}{rQ} \SetKwData{be}{be} \SetKwData{token}{token} \SetKwData{backend}{backend} \SetKwInOut{KwIn}{Input} \SetKwInOut{KwOut}{Output} \SetKwData{beList}{beList} \SetKwData{provider}{provider} \SetKwFunction{getInternalBackend}{getInternalBackend} \SetKwFunction{getProvider}{getProvider} \SetKwFunction{getBackend}{getBackend} \SetKwFunction{getLeastBusyBackend}{getLeastBusyBackend} \SetKwFunction{getBackendList}{getBackendList} \SetKwProg{procedure}{procedure}{}{} \KwIn{\rQ: number of required qubits, \be: backend info, \token: Token to access external provider (optional)} \KwOut{\backend: quantum backend object } \procedure{BackendSelection (\rQ, \be, \token):}{ \backend $\leftarrow$ null \\ \eIf{\be.internal == true}{ \backend $\leftarrow$ \getInternalBackend(\be.backendName) }{ \provider $\leftarrow$ \getProvider(\token) \\ \beList $\leftarrow$ [] \\ \For{b in \provider.\getBackendList()}{ \If{b.qubit $\ge$ \rQ and b.operational == \textsf{True} and b.type in \be.type}{ \beList $\xleftarrow{+}$ b } } \eIf{\be.autoselect == \textsf{True}}{ \backend $\leftarrow$ \getLeastBusyBackend(\beList) }{ \If{\be.backendName in \beList}{ \backend $\leftarrow$ \getBackend($\be.backendName$) } } } }{\KwRet{\backend}} \caption{QFaaS Backend Selection} \label{algo:backendselection} \end{algorithm} Given the number of qubits required for the quantum circuit, the backend preference (such as backend type, manually or automatically selected), and the provider access information, an appropriate backend object is returned. Users can allow the framework to automatically select the best-suited backend of a specific type, such as a quantum simulator, a quantum computer, or any kind. In that case, QFaaS inspects and selects the least busy backend (i.e., the backend with the shortest waiting queue) from the provider. After a suitable backend is determined, the function invocation continues by submitting the quantum circuit to that backend for execution. A simplified Python sketch of this selection logic is given below.
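Code \ref{lst:backend-sketch} mirrors Algorithm \ref{algo:backendselection} in plain Python; the provider objects and the helpers \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{get_internal_backend} and \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{get_provider} are hypothetical stand-ins for the actual implementation. \begin{lstlisting}[language=Python, caption={Simplified sketch of the backend-selection logic in Algorithm \ref{algo:backendselection}}, label={lst:backend-sketch}]
def backend_selection(required_qubits, be, token=None):
    # Internal simulators run directly on the Kubernetes cluster
    if be["internal"]:
        return get_internal_backend(be["backendName"])  # hypothetical helper
    provider = get_provider(token)                      # hypothetical helper
    # Keep only operational backends of the requested type
    # that have enough qubits for the circuit
    candidates = [b for b in provider.get_backend_list()
                  if b.qubit >= required_qubits
                  and b.operational and b.type in be["type"]]
    if be["autoselect"]:
        # Least busy device, i.e., the shortest pending-job queue
        return min(candidates, key=lambda b: b.pending_jobs)
    if be["backendName"] in [b.name for b in candidates]:
        return provider.get_backend(be["backendName"])
    return None
\end{lstlisting}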
Based on the user's preference of whether or not to wait for the result, QFaaS either returns the final result (after performing post-processing, if any) or the job ID for further tracking. The overall interaction between the different objects during the function invocation process is shown in Figure \ref{fig:qfass-seqinvoke}. \begin{figure}[htbp] \centering \includegraphics[scale=0.27]{images/QFaaSInvocation.png} \caption{Sequence Diagram of function invocation process in QFaaS} \label{fig:qfass-seqinvoke} \end{figure} \section{Performance Evaluation} \subsection{Environment setup} To validate the proposed framework, we deployed the core components of QFaaS on a set of four virtual machines (VMs) offered by the Melbourne Research Cloud\footnote{https://dashboard.cloud.unimelb.edu.au/}. We set up the Kubernetes cluster with Docker as the underlying container technology on three VMs (one master node with 4 vCPUs and 16 GB RAM, and two worker nodes with 8 vCPUs and 32 GB RAM each). The QFaaS Code Repository and Automation components are deployed on the last VM (4 vCPUs, 16 GB RAM). For the computation layer, we tested the Qiskit functions with the QASM simulator on both the classical computers in the Kubernetes cluster and the quantum backends of the IBM Quantum provider \cite{ibmq} (through the IBMQ hub at the University of Melbourne). For Braket functions, we used the local simulator and the Amazon Braket quantum computing service (through the support of the Strangeworks Backstage Pass program \cite{Strangeworks2022StrangeworksPlatform}). For Q\# and Cirq functions, we used their built-in quantum simulators, executed on classical computers in the Kubernetes cluster. \subsection{Case study 1: Quantum Random Number Generators (QRNG)} Random numbers play an essential role in cryptography, cybersecurity, finance, and many other scientific fields. By leveraging quantum principles, Quantum Random Number Generators (QRNGs) have been proposed as a reliable way to provide true randomness, which cannot be achieved using classical computers. This topic has been gaining much interest over the last 20 years \cite{Huang2021QuantumPlatform}. To give an example of how to generate quantum random numbers by utilizing the superposition state, and to demonstrate the workflow of QFaaS in action, we deployed a simple QRNG function in four different quantum SDKs and languages: Qiskit, Cirq, Q\#, and Braket. \subsubsection{Developing QRNG function in multiple quantum SDKs} We used a simple quantum circuit for generating a random number, with the number of qubits given as user input. The main idea of this circuit is to apply the Hadamard gate to create a superposition state on each qubit and then measure it to obtain a random value (0 or 1) with equal probability (50\%). According to the user's request, our functions dynamically generate an appropriate quantum circuit with the corresponding number of qubits; a minimal Qiskit sketch of this idea is shown below. Figure \ref{fig:qfass-qrng} shows an example of the quantum circuit for generating a 10-bit random number.
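Code \ref{lst:qrng-sketch} is an illustrative sketch of the quantum part of such a handler, assuming the Qiskit 0.x API in use at the time of writing and its built-in QASM simulator; it also includes the most-frequent-outcome post-processing described below. \begin{lstlisting}[language=Python, caption={Minimal Qiskit sketch of an n-qubit QRNG with most-frequent-outcome post-processing}, label={lst:qrng-sketch}]
from qiskit import QuantumCircuit, Aer, execute

def build_qrng_circuit(n: int) -> QuantumCircuit:
    qc = QuantumCircuit(n, n)
    qc.h(range(n))                   # put every qubit in superposition
    qc.measure(range(n), range(n))   # collapse to a random bitstring
    return qc

def run_qrng(n: int, shots: int = 1024) -> int:
    backend = Aer.get_backend("qasm_simulator")  # internal simulator
    job = execute(build_qrng_circuit(n), backend, shots=shots)
    counts = job.result().get_counts()
    bitstring = max(counts, key=counts.get)      # most frequent outcome
    return int(bitstring, 2)
\end{lstlisting}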
\begin{figure}[htbp] \centering \includegraphics[scale=0.48]{images/codeSnippet.png} \caption{QRNG circuit for generating 10-bit random number (left - generated using Qiskit) and Sample code snippets to generate the quantum circuit with given integer \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{input} (right)} \label{fig:qfass-qrng} \end{figure} Since Qiskit, Cirq, and Braket are Python-based SDKs, we write the quantum code for these functions directly in the default \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{handler.py} file. When using Q\#, an independent quantum programming language, we develop the function in a \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{handler.qs} file, which the .NET framework compiles. Fortunately, Q\# also supports integration with Python code, so we only need to import the operation from the Q\# file into the \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{handler.py} file; QFaaS takes responsibility for the remaining procedure. We can also include classical processing parts in each quantum function, especially for pre-processing the user input data and post-processing the result from the quantum computation layer. To validate this feature, we implemented simple post-processing for the QRNG function that analyzes all outcomes when the function is executed multiple times (shots) and returns the most frequent result to the end-user. The classical processing is performed on the Kubernetes cluster after the quantum computation layer completes its processing. \subsubsection{Deploying and invoking QRNG Functions} After developing a quantum function, we upload it to the QFaaS Code Repository using the QFaaS Dashboard or QFaaS CLI tool. Whenever QFaaS Automation detects new updates from the Code Repository, it automatically checks the code, builds the Docker image, pushes that image to the Docker registry, and deploys it to the Kubernetes cluster. Before the deployment, we can also integrate intermediate processes to verify source code quality or perform a security check. Subsequently, QFaaS releases each quantum function with a unique URL accessible to end-users via the API gateway. User request data is jsonified (i.e., converted to JSON format) and then sent to the API gateway. The request JSON for invoking the QRNG function is similar across all supported SDKs and languages. After processing finishes, the sample response is as shown in Code \ref{lst:invoke-16b-rep}. This result indicates that the generated random number is 6493 (0001100101011101 in binary), the most frequent (occurring 2 times) random number generated by the Qiskit QRNG function. Thanks to the post-processing, we also obtain the other frequent results after running this function 1024 times, i.e., 17990 and 26321.
\begin{lstlisting}[language=json, caption=Sample response data for returning a random 16-bit number from Qiskit QRNG function, label={lst:invoke-16b-rep}]
{
  "result": 6493,
  "backend_device": "ibmq_qasm_simulator",
  "detail": {
    "provider_info": {
      "shots": 1024,
      "job_id": "62301d63e6b7bb485520xxxx",
      "job_status": "DONE",
      "running_start_time": "2022-03-15 05:00:24.072000+00:00",
      "completion_time": "2022-03-15 05:00:25.093000+00:00",
      "total_run_time": 1.021
    },
    "random_number_binary": "0001100101011101",
    "counts": 2,
    "all_possible_values": {
      "0001100101011101": 6493,
      "0100011001000110": 17990,
      "0110011011010001": 26321
    }
  }
}
\end{lstlisting} \subsubsection{Performance Evaluation on Quantum Simulators} We conducted a series of experiments using the JMeter tool\footnote{https://jmeter.apache.org/} to benchmark the performance of the QRNG function on different quantum simulators within the QFaaS framework. For a practically fair comparison, we used the default quantum simulator of each framework for execution (the QASM simulator for Qiskit, the \textit{braket\_sv} simulator for Braket, and the built-in simulators for Q\# and Cirq). We repeat each experiment 100 times, then measure the average response time and the standard deviation when executing the QRNG function using 1 to 20 qubits in each quantum SDK. \begin{figure}[htbp] \centering \includegraphics[scale=0.14]{images/qrng-eval.png} \caption{Average response time evaluation of QRNG function using the simulator of popular quantum SDKs and languages} \label{fig:qrng-eval} \end{figure} Figure \ref{fig:qrng-eval} illustrates the average response time of the functions as the number of qubits increases from 1 to 20. The Cirq simulator registers the fastest response time in all test cases, with a slight increase from 49 ms for generating a 1-qubit random number to 61 ms for a 20-qubit one. A similar trend can also be seen for the Qiskit, Q\#, and Braket functions as the number of qubits increases from 1 to 15. The Qiskit function response time is slightly longer than those of Cirq and Q\# over the 1- to 15-qubit range. However, when the number of qubits reaches 20, the response time of the Q\# and Braket functions increases significantly and is double that of the Qiskit counterpart. This evaluation reflects the current state of several popular quantum simulators, but the results depend on the specific quantum application and could change with further development of these SDKs. \subsubsection{Scalability evaluation} We can enable the auto-scaling feature to scale the deployment horizontally (i.e., increase the number of pods) to deal with scenarios where the request workload grows significantly. To validate the scalability of our framework, we perform a set of evaluations on the 10-qubit Qiskit QRNG function. In this evaluation, we increase N, the number of concurrent users, from 8 to 64, using the JMeter benchmarking tool. In each case, we conduct a set of three different scenarios: non-scale (1 pod/function), scale up to N/2 pods, and scale up to N pods (we fixed the number of pods for evaluation purposes only). For example, if 64 users (N) invoke the function simultaneously, we conduct three test cases, 1 pod, 32 pods (N/2), and 64 pods (N), and record the average response time and the standard deviation. A load test analogous to our JMeter experiments can also be scripted directly in Python, as sketched below.
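In the sketch in Code \ref{lst:loadtest-sketch}, the endpoint URL and bearer token are placeholders; it simply fires N concurrent POST requests at a deployed function and reports the mean response time. \begin{lstlisting}[language=Python, caption={Illustrative Python script for N concurrent invocations of a QFaaS function}, label={lst:loadtest-sketch}]
import time
from concurrent.futures import ThreadPoolExecutor
import requests

URL = "https://<gateway>/function/qrng-qiskit"   # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>"}    # placeholder token
PAYLOAD = {"input": 10, "provider": "internal",
           "shots": 1024, "waitForResult": True}
N = 64  # number of concurrent users

def invoke(_):
    start = time.perf_counter()
    requests.post(URL, json=PAYLOAD, headers=HEADERS, timeout=60)
    return (time.perf_counter() - start) * 1000  # response time in ms

with ThreadPoolExecutor(max_workers=N) as pool:
    times = list(pool.map(invoke, range(N)))
print(f"average response time: {sum(times) / len(times):.1f} ms")
\end{lstlisting}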
\begin{figure}[htbp] \centering \includegraphics[scale=0.13]{images/scalable-eval.png} \caption{Scalability evaluation on Qiskit QRNG function.} \label{fig:scalable-eval} \end{figure} Figure \ref{fig:scalable-eval} shows the results of our benchmarking. Overall, it is clear that if the function is non-scalable, the average response times for high-demand scenarios increase significantly. The previous section shows that the average response time for the 10-qubit Qiskit QRNG function is 81 ms. This figure jumps dramatically, up to 1703 ms, if 64 users use the function simultaneously. However, thanks to the containerization approach in our framework, we can quickly scale up the deployment in seconds to ensure that the response time is maintained. We can see that the average response time fluctuates between 87 and 148 ms when we scale up to N pods, and between 102 and 180 ms when the number of pods is N/2. \subsection{Case study 2: Shor’s algorithm with IBM Quantum Cloud Provider} Shor’s algorithm \cite{shoralgo} is one of the most famous quantum algorithms for demonstrating the advantage of quantum computing over its classical counterpart. It is well known for finding the prime factors of integers in polynomial time, which poses a severe risk to classical cryptography schemes based on the difficulty of factoring large integers, such as RSA. \subsubsection{Implement the Shor’s algorithm as a service using Qiskit API} In this case study, we demonstrate the implementation of Shor’s algorithm as a QFaaS function (Shor function) by utilizing the Qiskit Terra API \cite{qiskit}. The quantum circuit for Shor’s algorithm is also dynamically generated based on the input number that end-users want to factorize. For example, to factorize 15, we need 18 qubits for the corresponding quantum circuit (Figure \ref{fig:shor-circuit}). Using the Shor class in \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{qiskit.algorithms}\footnote{https://qiskit.org/documentation/stubs/qiskit.algorithms.Shor.html}, we only need a few lines of code to generate the quantum circuit for factorizing an integer N and execute it on the appropriate backend device, selected by QFaaS or by the end-user; a minimal code sketch is given at the end of this section. \begin{figure}[htb] \centering \includegraphics[scale=0.37]{images/Shor-code.png} \caption{A simplified quantum circuit (left) and Qiskit code snippet (right) for implementing Shor algorithm to factorize 15} \label{fig:shor-circuit} \end{figure} \subsubsection{Deploy and invoke Shor function} The deployment process of the Shor function is similar to that of the QRNG function in the previous section. For example, we use the following request and submit it to the Shor function, using the 27-qubit quantum computer (\lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{ibm_cairo} node) at the IBM Quantum provider \cite{ibmq} to factorize 15 (Code \ref{lst:shor-req}). After finishing the execution, the response data is as shown in the sample in Code \ref{lst:shor-rep}. We obtained the prime factors of 15, namely 3 and 5, as expected. Due to the limitations of current NISQ devices, we keep this example small to demonstrate a viable workflow for developing, deploying, and using functions in QFaaS. We aim to develop a scalable QFaaS framework that can handle large-scale algorithms relevant to practical applications.
\begin{minipage}{.5\textwidth} \hfill \begin{lstlisting}[language=json, caption=Sample request data of the Shor function to factorize 15, label={lst:shor-req}]
{
  "input": 15,
  "provider": "ibmq",
  "shots": 100,
  "wait_for_result": true,
  "backend_info": {
    "hub": "ibm-q-melbourne",
    "api_token": "",
    "device": "ibm_cairo",
    "autoselect": false
  }
}
\end{lstlisting} \end{minipage}\hfill \begin{minipage}{.45\textwidth} \begin{lstlisting}[language=json, caption=Sample response data of the Shor function, label={lst:shor-rep}]
{
  "result": [[3, 5]],
  "device": "ibm_cairo",
  "detail": {
    "required_qubits": 18,
    "shots": 100
  }
}
\end{lstlisting} \end{minipage} \subsubsection{Performance Evaluation} In this evaluation, we assess the actual performance of the Shor function on today's quantum computers provided by IBM Quantum \cite{ibmq}. We pick five suitable integers for the test cases: 15, 21, 35, 39, and 55. All experiments are conducted on a 27-qubit quantum computer (\lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{ibm_cairo}, using the Falcon r5.11 quantum processor) and the QASM simulator (\lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{ibmq_qasm_simulator}). \begin{figure}[htbp] \centering \includegraphics[scale=0.13]{images/shor-eval.png} \caption{Performance evaluation of Shor function on the \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{ibm_cairo} physical quantum backend and QASM simulator provided by IBM Quantum. f(15), f(21), f(35), f(39), and f(55) are five implemented test cases to factorize 15, 21, 35, 39, and 55, respectively.} \label{fig:shor-eval} \end{figure} Every time we invoke the Shor function with a test case, an appropriate circuit (quantum job) is generated and sent to the IBM Quantum provider. Each quantum job is then validated and kept in the queue (from seconds to hours) before being executed on the backend, due to the fair-share scheduling of the IBM Quantum services. Therefore, to make a fair comparison, we only measure the actual running time (including circuit validation and in-system running time, excluding queuing time). We also execute each quantum job 100 times (shots) to ensure that the final result of all factorization problems is correct. In Figure \ref{fig:shor-eval}, the bar chart illustrates the actual running time, and the line chart indicates the number of qubits used for each test case on both backend devices. These input numbers need fewer than 27 qubits to build the corresponding circuits. Regarding the run time, the QASM simulator is much faster than the quantum computer when the number of required qubits is small, from 18 to 22 (for factorizing 15 and 21). However, we see the opposite trend when executing 26-qubit circuits to factorize 35, 39, and 55. These circuits take around 3 minutes to complete on the IBM Cairo quantum computer, whereas the QASM simulator takes 13.5 to 17 minutes to finish the execution. A significant reason for the considerable delay of the QASM simulator in these test cases could be the complexity of the 26-qubit quantum circuit for the Shor algorithm, which requires substantial resources to simulate. These results give insight into how to select among current quantum computing services when developing quantum software: quantum simulators can be used for the prototyping and testing phases before entering the production stage on quantum computers.
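To summarize the implementation side of this case study, Code \ref{lst:shor-sketch} is a minimal sketch of the quantum part of the Shor function; it assumes the \lstinline[keywordstyle=\color{black}, basicstyle=\ttfamily\small\color{black}]{qiskit.algorithms.Shor} API available at the time of writing (since deprecated in later Qiskit releases) and fixes the backend to a simulator for brevity, whereas the deployed function delegates backend selection to QFaaS. \begin{lstlisting}[language=Python, caption={Minimal sketch of the quantum part of the Shor function}, label={lst:shor-sketch}]
from qiskit import Aer
from qiskit.algorithms import Shor
from qiskit.utils import QuantumInstance

def factorize(n: int, shots: int = 100):
    # Backend fixed to the QASM simulator for brevity; in QFaaS it is
    # chosen by the BackendSelection procedure or by the end-user
    backend = Aer.get_backend("qasm_simulator")
    shor = Shor(quantum_instance=QuantumInstance(backend, shots=shots))
    result = shor.factor(n)   # e.g., factorize(15) yields [[3, 5]]
    return result.factors
\end{lstlisting}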
\section{Discussion} By designing and developing QFaaS, we found that combining quantum and classical tasks in a serverless-based model is not only possible but also an effective way to work within the capabilities of current NISQ devices. Our framework provides a seamless quantum software development environment that allows software engineers to quickly develop, build, deploy, and eventually offer their quantum functions to end-users. End-users can also integrate these quantum functions into their existing classical applications, which is especially suitable for microservice applications. Inspired by the advantages of the serverless and DevOps models, combined with the quantum cloud computing paradigm, we have designed and implemented QFaaS with a set of essential features for creating a unified environment for developing quantum applications: \begin{itemize} \item \textbf{Multiple quantum SDKs and languages supported}: QFaaS supports four widely popular quantum SDKs and languages: Qiskit \cite{qiskit}, Cirq \cite{cirq}, Q\# \cite{qsharp}, and Braket \cite{amz-braket-sdk}. \item \textbf{Containerized quantum environment}: Engineers can develop quantum functions without any concerns about environment setup. Each function is containerized into an appropriate Docker image, including all necessary libraries defined by the quantum software engineer. This approach makes the deployment more flexible and allows quantum developers to easily migrate their functions to other systems. \item \textbf{User-friendly Web UI with built-in IDE}: Quantum engineers can easily create, update, delete, manage, and monitor their quantum functions using the QFaaS Dashboard. We also integrate a built-in IDE into the QFaaS Dashboard, allowing quantum software engineers to write and update their quantum code directly. \item \textbf{Local software development environment with CLI tool}: Our platform also supports using a local IDE such as Visual Studio Code for the development process. Engineers can then upload the function code to QFaaS through the CLI tool, which supports all actions available through the Dashboard. \item \textbf{Hybrid quantum-classical functions}: Quantum functions can include both quantum and classical parts (using Python), supporting hybrid computation. Quantum parts can run on multiple quantum simulators (the QASM simulator for Qiskit, or the built-in simulators of Cirq, Q\#, and Braket) or external quantum providers (both IBM Quantum \cite{ibmq} and Amazon Braket \cite{amazonbraket} are supported). Classical parts (pre-processing, post-processing, or other classical processing) run on classical computation nodes in the Kubernetes cluster, where the quantum functions are deployed. \item \textbf{Quantum API gateway}: After deployment, quantum functions are published for end-users through an API gateway. Each function has a unique URL that allows end-users to invoke it or incorporate it into existing microservice applications as an API. \item \textbf{Continuous Integration and Continuous Deployment (CI/CD) pipelines}: With the DevOps-oriented approach, QFaaS creates a seamless software development process that continuously delivers value to end-users. After the development process finishes, quantum functions are automatically compiled, containerized, and deployed to the Kubernetes cluster.
When an engineer updates a quantum function, QFaaS automatically redeploys it and ensures that the updated service is running before terminating the old one, guaranteeing a continuous experience for end-users. \item \textbf{No vendor lock-in}: The serverless feature of QFaaS is built on top of OpenFaaS \cite{openfaas} and Kubernetes, both of which are well-known classical platforms. This way, we provide deployment flexibility and avoid the vendor lock-in problem, i.e., QFaaS can be deployed on any cloud cluster. We also demonstrate the ability to use external quantum providers, such as IBM Quantum, to execute quantum tasks, and we plan to extend QFaaS capabilities to support other providers when they become accessible in our region. \item \textbf{High scalability and auto-scaling}: QFaaS ensures high availability and scalability for future expansion by deploying functions on the Kubernetes cluster. It also leverages the advantages of Kubernetes to support auto-scaling, i.e., automatically scaling the number of function replicas up or down to adapt to the number of user requests. \item \textbf{Monitor function and system status}: The engineer or system manager can monitor the system status and all deployed functions using the built-in monitoring components in QFaaS. \end{itemize} These features ease the quantum software development burden, enabling software engineers to focus on developing more complex quantum applications to achieve quantum advantages. Our work contributes to bridging the gaps between classical computing and future computing models for developing practical quantum applications. \section{Conclusions and Future Work} In this work, we have developed QFaaS, a unified framework for developing quantum functions as a service, enabling traditional software engineers to leverage their knowledge and experience to quickly adapt to the quantum counterparts in the \textit{Noisy Intermediate-Scale Quantum} era. Our framework integrates several state-of-the-art methods, such as containerization, DevOps, and the serverless model, to reduce the burden of quantum software development and pave the way toward combining hybrid quantum and classical components. QFaaS provides essential features across multiple quantum software environments, leveraging well-known quantum SDKs and languages to develop quantum functions that run on quantum simulators or even physical quantum computers. The current implementation of QFaaS demonstrates the feasibility and advantages of our framework for developing quantum functions as a service and continuously bringing value to end-users. Due to current limitations on access to quantum cloud services from our region, we demonstrate experiments with IBM Quantum and Amazon Braket for the moment; we plan to extend QFaaS's ability to connect with other providers in the future. We are also developing a machine learning (ML) based approach for the automatic selection of quantum backends for hybrid quantum-classical applications. We will further enhance the security and scalability capabilities of QFaaS to support a large number of requests from multiple users. As Quantum Software Engineering is still an emerging area of research with numerous challenges, significant research effort is needed to make it reliable while simultaneously adapting to rapid advances in quantum hardware.
\textbf{Software Availability:} We will release the QFaaS framework as open source to foster collaboration on building a unified quantum serverless framework and to contribute to making the quantum transition in software development smooth and seamless. It will be available at https://github.com/cloudslab/ \begin{acks} This work is partially supported by the University of Melbourne through the establishment of an IBM Quantum Network Hub at the University, and by the Nectar Research Cloud (offered by the Australian Research Data Commons). We appreciate the support of the Strangeworks Backstage Program in providing access to the Amazon Braket quantum service. Hoa T. Nguyen acknowledges support from the Science and Technology Scholarship Program for Overseas Study for Master’s and Doctoral Degrees, Vin University, Vingroup, Vietnam. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{intro} When modelling complex systems, we often have to decide between two contrasting approaches: whether we focus on the behavior of the individuals that constitute the system, or on the aggregate behavior in the large. Furthermore, there is usually a trade-off between realism and tractability: while individual-based models (IBMs) are versatile enough to incorporate many details of the system of interest, they can soon become computationally unmanageable if too many details are included. Moreover, with too much detail we may end up sacrificing insight into the way that the model works. Similarly, while aggregate models, usually based on differential equations for a continuous density of individuals, can be far more tractable and provide powerful insights through the use of analytical tools, they are susceptible to missing relevant details that can affect even the {\it qualitative} predictions of the model~\cite{McKane-PRL2005,Boland-2008,Boland-2009,Rogers-2012,Biancalani-2011, Grima-Nature, Goldenfeld-2011,Scott-2011,Biancalani-2010,Goldenfeld-2009,Lugo-2008} (see also \cite{McKane-TREE} for a recent general review). It seems, then, that an intermediate approach is desirable in order to find a good compromise between realism, efficiency, and insight. Stochastic hybrid models~\cite{Cassandra-2006, Hespanha-2006, Pola-2003, Sumpter-2006, Davis-1993, Davis-1984} are a general class of models that can be very useful in exploiting the benefits of both of these approaches. Of particular interest to us here is the subclass of so-called piecewise deterministic {M}arkov processes (PDMP)~\cite{Davis-1993,Davis-1984}, which has recently gained increased attention in the natural sciences~\cite{Faggionato-2009, Faggionato-2010, Zeiser-2008, Zeiser-2010, Crudu-press}. A PDMP models a system characterized by both discrete and continuous variables, where the former follow a stochastic process while the latter are governed by a deterministic (differential) equation. These two dynamics are coupled, as the dynamical law for each depends on the current state of both. Motivated by applications to ecology, here we assume that the discrete variables refer to the number of individuals (e.g. plants, animals, etc.) that are immersed in an environment whose state is in turn characterized by the continuous variables (e.g. amount of water, temperature, etc.). Recently, Faggionato et al.~\cite{Faggionato-2009, Faggionato-2010} have initiated a study of PDMP by using tools from nonequilibrium statistical physics. In particular, these authors have obtained some formal results concerning the deterministic asymptotic behavior of the system, as well as the fluctuations about it, when increasing the rate of the stochastic transitions of the discrete variables. Here, we are also interested in the nonequilibrium statistical mechanics of PDMP; however, we will be concerned with the {\it qualitative} differences in behavior from the deterministic asymptotic limit, induced by the fluctuations, especially the existence of noise-induced quasi-oscillations in a regime where the deterministic approximation predicts only a stable fixed point. Furthermore, in contrast to the work of Faggionato et al., we do not restrict the discrete variables to a finite set, but allow them to take on any non-negative integer value. Indeed, the mesoscopic limit we consider assumes a relatively large number of individuals, in the spirit of the system-size expansion developed by van Kampen~\cite{vanKampen-book}.
The techniques we use are general and can be applied to any system that can be modelled in a piecewise deterministic fashion. An example could be an environment characterized by a set of continuous variables which would evolve deterministically were it not for the influence of a finite number of individuals that inhabit it. Here we will consider an application in which the environmental variables describe water in an ecosystem containing a population of plants. Traditionally, such systems have been modelled by a fully deterministic dynamics of continuous densities. However, following the discussion above, it would seem more natural to model plants as discrete entities, rather than to define a density of plants. It is also well known that spatial interactions play a relevant role in these systems, as the emergence of spatial patterns is common \cite{Meron-2011,Rietkerk-2010,Rietkerk-2008,Meron-2004,Rietkerk-2002,Meron-2001}. In the present paper, however, we will only consider non-spatial models, implicitly assuming that the system's area is sufficiently large that we can neglect any spatial structure altogether. A large region can of course contain a relatively large number of individuals, though still finite. As we will see below, demographic noise can still be relevant in such a system, and induce behavior qualitatively different from that described by deterministic approximations. For smaller numbers of plants these effects are expected to be even stronger, emphasizing the importance of stochastic modelling in this field. The paper is organized as follows. In Section \ref{s:gm} we define the general framework and introduce the model for semi-arid ecosystems which will serve as an application of the ideas and techniques which we develop. In Section \ref{s:gme} we show how to derive the master equation that governs the evolution of the system and in Section \ref{s:ad} we discuss the idea behind the system-size expansion which we use in the analysis. In Section \ref{s:dl} we obtain the leading term in this approximation which leads to the non-spatial version of deterministic equations extensively studied in the literature~\cite{Rietkerk-2010,Rietkerk-2008,Rietkerk-2002}. The next-to-leading term in the expansion, which characterizes the fluctuations in the system, is described in Section \ref{s:f}. Finally, in Section \ref{s:c} we present our conclusions. Most of the technical details are collected in the Appendixes. \section{Model definitions}\label{s:gm} \subsection{General setup} The simplest class of hybrid systems has states that are described by a pair of variables $(n, x)$, where $n$ is discrete and $x$ is continuous. The former would typically be the number of individuals (e.g. plants) in the system, while the latter would refer to the state of the environment (e.g. amount of water) which these individuals inhabit. The dynamics of the continuous variable $x$ is deterministic when conditioned on the discrete variable, $n$. On the other hand, the discrete variables follow a stochastic process whose transition probabilities depend on the continuous variables $x$. In other words, the continuous variables are governed by a deterministic differential equation of the form \begin{equation}\label{e:general} \dv{x}{t} = F(n, x) . \end{equation} By contrast, the discrete variables follow stochastic transition rules that describe the processes which the individuals, denoted by $P$, undergo.
For instance, in the specific case of a birth-death process, these have the form \begin{subequations}\label{e:general_rules} \begin{align} P &\xrightarrow{\Gamma_b(x)} 2 P,\\ P &\xrightarrow{\Gamma_d(x)} \emptyset . \end{align} \end{subequations} Here $\Gamma_b$ and $\Gamma_d$ are the birth and death rates, respectively, and can depend on the environmental variables $x$. In principle, the transition rules (\ref{e:general_rules}) can describe any other kind of process, such as migration or growth. However, in this paper we will focus exclusively on simple birth-death processes. This constitutes an instance of a piecewise deterministic Markov process (PDMP) \cite{Davis-1984, Davis-1993, Faggionato-2009, Faggionato-2010}; the reason for the adoption of this name will become clear later. If the system is in state $(n, x)$ the transition rates are then given by \begin{subequations}\label{e:general_transitions} \begin{align} {T}_b(n+1|n;\, x) &= n\, \Gamma_b(x), \\ {T}_d(n-1|n;\, x) &= n\, \Gamma_d(x). \end{align} \end{subequations} All of this can be generalized to a system with $D$ discrete variables and $C$ continuous variables by introducing the state variables $\mathbf{n}=(n_1,n_2,\ldots,n_D)$ and $\mathbf{x}=(x_1,x_2,\ldots,x_C)$. Having specified the state variables and the transition rates, we can now go on to write down the master equation that governs the evolution of the probability distribution of the system. However, before doing so, it is useful to give a specific example of a PDMP in order to make these concepts more concrete. \subsection{Example: A semi-arid ecosystem}\label{s:ex} As an example, we will consider a non-spatial piecewise deterministic model of semi-arid ecosystems, defined in terms of three variables: the density of surface water, $\sigma$, the density of soil water, $\omega$, and the number of plants, $n$. A spatial version of this model has been well studied in the ecological literature \cite{Rietkerk-2001,Rietkerk-2002,Rietkerk-2008,Rietkerk-2010}, but in the case where the number of plants is effectively infinite. In the model, rainwater falls onto the surface of the land, and then infiltrates into the soil where it is taken up by plants. Although rainfall in a semi-arid environment can vary drastically and unpredictably over time scales short in comparison with the birth-death dynamics of plants, here we focus on the {\it average} amount of rainfall over a relatively long period of time. This is a simplifying assumption that allows us to separate the influence of intrinsic demographic fluctuations from extrinsic environmental noise (see e.g. \cite{D'Odorico-Book, Kettler-2009, Mara-2008, Mara-2007} for investigations on the latter). Indeed, this simplification has also been used by ecologists \cite{Rietkerk-2001,Rietkerk-2002,Rietkerk-2008,Rietkerk-2010} in order to avoid further complications related to the infiltration of water under high water densities (see e.g. \citep{Porporato-Book}). In comparison to these investigations, the only sacrifice of realism in the model we study here relates to the neglect of spatial interactions. In contrast, a realistic feature missing in the studies mentioned above, and which we explore here, relates to the discrete nature of plants and their intrinsic stochastic behavior. Due to its relative scarcity, water is the main resource that drives the dynamics of semi-arid ecosystems.
It seems natural to model the water densities $\mathbf{x} = (\sigma, \omega)$ as continuous variables governed by an equation which is deterministic when conditioned on the number of plants. More specifically, we will assume that the water dynamics is governed by an equation of the form (\ref{e:general}), but with two continuous variables. Following \cite{Rietkerk-2001,Rietkerk-2002,Rietkerk-2008,Rietkerk-2010} we write $\mathbf{F}=(F_\sigma , F_\omega)$ where \begin{subequations}\label{e:ns} \begin{align} F_\sigma (n, \mathbf{x}) & = R\, -\, \alpha(\rho)\, \sigma \, , \label{se:ns_surface}\\ F_\omega (n, \mathbf{x}) & = \alpha(\rho)\, \sigma \, -\, \beta(\omega)\, \rho\, -\, r\, \omega\, , \label{se:ns_soil} \end{align} \end{subequations} and where $\rho=\mu n$, $\mu$ being a parameter that characterizes the influence that a single individual P has on the dynamics of the environment. In Eq.~(\ref{e:ns}), $R$ is the average rate of rainfall that increases the amount of surface water $\sigma$, and $r$ is a constant rate that characterizes the loss of soil water which can be due, for instance, to evaporation (see \cite{Rietkerk-2010,Kefi-PhD} for an investigation of a fully deterministic model which considers also loss of surface water). Finally, \begin{align}\label{e:ab} \alpha(\rho) = a\: \frac{\rho +k\, W_0}{\rho +k},\hspace{0.5cm} \beta(\omega) = b\:\frac{\omega}{\omega + k}, \end{align} are saturable rates that describe the infiltration of surface water into the soil and the uptake of soil water by plants, respectively. Here $a,b,k$ and $W_0$ are constants. It is worth mentioning that the infiltration rate is taken to depend on the number of plants in the system; this is in line with \cite{Rietkerk-2001,Rietkerk-2002,Rietkerk-2008,Rietkerk-2010} and reflects the fact that vegetation typically increases the propensity of surface water being absorbed into the soil. We will model plants as discrete entities that follow a stochastic birth-death process. Later on, we shall investigate the impact that demographic noise, due to the discrete nature of plants, has on the behavior of the system. We will assume that a single plant has an average mass $m$, so that the density of biomass per unit area is given by $\rho = m n/A$, where the integer $n$ denotes the number of plants in the system, and where $A$ is the system's total area. It is important to recall here that we are focusing on a well-mixed system and thus we do not consider any spatial structure. The impact that plants have on the dynamics of water is via the density $\rho$ (see~Eqs.~(\ref{se:ns_surface}) and (\ref{se:ns_soil})). The effect of the creation or removal of a single plant on the water dynamics is characterized by the parameter $\mu=m/A$, the minimal amount by which the density $\rho$ can change due to a discrete event of the plant dynamics. The birth and death rates for the stochastic plant dynamics are taken to be \begin{subequations}\label{e:rates} \begin{align} \Gamma_b(\omega) &= c\: \beta(\omega),\\ \Gamma_d &= d \, , \end{align} \end{subequations} respectively, where $d$ and $c$ are constants and $\beta(\omega)$ is defined in (\ref{e:ab}). It should be noticed that the death rate is taken to be constant, whereas the birth rate is assumed to depend on the current amount of soil water. This dependence, and that of $\mathbf{F}$ in Eq.~(\ref{e:ns}) on the density of plants, $\rho = \mu n$, is what couples the deterministic and stochastic dynamics of water and plants, respectively.
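To make the coupled plant-water dynamics concrete, the following minimal Python sketch integrates the water variables with an Euler step and fires birth and death events with probabilities $n\,\Gamma_b(\omega)\,\Delta t$ and $n\,\Gamma_d\,\Delta t$ per step. All parameter values are hypothetical placeholders (the values actually used in this paper are listed in Appendix \ref{a3}), and this fixed-step scheme is only an illustrative approximation of the exact Gillespie-type algorithm described in Appendix \ref{a2}.
\begin{verbatim}
import numpy as np

# Hypothetical parameter values, for illustration only; the values used
# in the paper are listed in its Appendix A3.
R, r, a, b, k, W0 = 1.05, 0.1, 1.0, 1.0, 5.0, 0.1
c, d, mu = 1.0, 0.1, 1e-4     # birth/death constants, mu = m/A

def alpha(rho):               # infiltration rate alpha(rho)
    return a * (rho + k * W0) / (rho + k)

def beta(omega):              # soil-water uptake rate beta(omega)
    return b * omega / (omega + k)

def simulate(n, sigma, omega, T, dt=1e-4, seed=0):
    """Fixed-step approximation of the PDMP: Euler steps for the water
    variables (sigma, omega), Bernoulli birth/death events for n.
    dt must be small enough that n * rate * dt stays well below one."""
    rng = np.random.default_rng(seed)
    traj = []
    for step in range(int(T / dt)):
        rho = mu * n
        dsig = R - alpha(rho) * sigma
        dome = alpha(rho) * sigma - beta(omega) * rho - r * omega
        sigma += dt * dsig
        omega += dt * dome
        if rng.random() < n * c * beta(omega) * dt:   # birth, rate n*Gamma_b
            n += 1
        if n > 0 and rng.random() < n * d * dt:       # death, rate n*Gamma_d
            n -= 1
        traj.append((step * dt, n, sigma, omega))
    return np.array(traj)

trajectory = simulate(n=1000, sigma=1.0, omega=1.0, T=10.0)
\end{verbatim}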
\section{Master equation}\label{s:gme} We now return to the general development of the formalism, and assume a single discrete variable and a single continuous variable, to avoid cluttering the equations with indices. Suppose then that, at time $t$, the system is in state $(n^\prime,x^\prime)$, and denote by $T(n|n^\prime;x^\prime)$ the total rate (i.e., including births, deaths, and any other processes in the model) for the system to make a transition from $n^\prime$ to $n$. The probability that, in a small time interval $\Delta t$, there are no transitions is given by \begin{equation} p_{\Delta t}^0(x^\prime) = 1-\Delta t \sum_{\ell \neq n^\prime}T(\ell|n^\prime;x^\prime). \label{p_Delta_t} \end{equation} In this case, the system will evolve deterministically according to Eq.~(\ref{e:general}), so that after a time $\Delta t$ it will be in the state $\left(n^\prime, {x}_0\right)$ where \begin{equation} \label{e:state1} {x}_0 = x^\prime + {F}(n^\prime,x^\prime)\Delta t , \end{equation} up to first order in $\Delta t$. Suppose now that there is a single transition from $n^\prime$ to $n$ which takes place precisely at time $\Delta t^\prime < \Delta t$. In this case the system will evolve in a more intricate fashion, to the state $\left(n, {x}_1\right)$ with \begin{equation}\label{e:state2} {x}_1 = x^\prime + {F}(n^\prime,x^\prime)\Delta t^\prime +{F}(n,x^\prime)\left(\Delta t - \Delta t^\prime\right) , \end{equation} up to first order in $\Delta t$. Notice that the probability to have more than one transition in the interval $\Delta t$ is $\mathcal{O}(\Delta t^2)$. Therefore, the transition probability over a time $\Delta t$ is composed of two kinds of terms, corresponding to the two possibilities, (\ref{e:state1}) and (\ref{e:state2}), above. Each of these contains a Dirac delta contribution accounting for the piecewise deterministic evolution of ${x}$. Thus the probability of the system being in state $(n,x)$ at time $t+\Delta t$, given that it was in state $(n^\prime,x^\prime)$ at time $t$, is given by \begin{eqnarray} & & \mathcal{P}_{\Delta t}(n,x |{n}^\prime,{x}^\prime) = p_{\Delta t}^{0}({x}^\prime)\, \delta\left(x - {x}_0\right)\delta_{n^\prime,n} \nonumber \\ & & + \Delta t\, T(n|{n}^\prime;x^\prime)\, \delta\left(x - {x}_1\right)\left(1 - \delta_{n^\prime,n}\right), \end{eqnarray} with $x_0$ and $x_1$ as defined in Eqs.~(\ref{e:state1}) and (\ref{e:state2}) respectively. Using Eq.~(\ref{p_Delta_t}) this gives, to first order in $\Delta t$, \begin{eqnarray} & & \mathcal{P}_{\Delta t}(n,x |{n}^\prime,{x}^\prime) = \delta\left(x - {x}_0\right)\delta_{n^\prime,n} \nonumber \\ & & - \Delta t \sum_{\ell \neq n^\prime}T(\ell|n^\prime;x^\prime)\delta\left(x - x^\prime \right)\delta_{n^\prime,n} \nonumber \\ & & + \Delta t\, T(n|{n}^\prime;x^\prime)\, \delta\left(x - x^\prime \right)\left(1 - \delta_{n^\prime,n}\right), \label{inter} \end{eqnarray} where in the last two terms we have replaced $x_0$ and $x_1$ respectively by $x^\prime$, since these terms are already of order $\Delta t$. We can now use the Chapman-Kolmogorov equation together with Eq.~(\ref{inter}) to obtain a master equation for the evolution of the probability at finite time $t$. The last two terms on the right-hand side of Eq.~(\ref{inter}) give the standard terms in the master equation, but the first term gives a contribution \begin{equation} \int dx^\prime \sum_{n^\prime} \delta\left(x - {x}_0\right)\delta_{n^\prime,n}\mathcal{P}(n^\prime,x^\prime,t).
\label{first_term} \end{equation} It is worth pointing out that $x_0$ depends on the integration variable $x'$; see Eq.~(\ref{e:state1}). Introducing a test function and integrating by parts, we find (see e.g. \cite{Gardiner-book}) that this term equals \begin{equation} \mathcal{P}(n,x,t) - \Delta t\,\frac{\partial }{\partial x}\left[ F(n,x) \mathcal{P}(n,x,t) \right], \label{firstterm} \end{equation} to first order in $\Delta t$. This then yields the master equation for the evolution of the probability distribution $\mathcal{P}(n,{x},t)$ for the system to be in a state (${n}, {x})$~\cite{Davis-1984, Faggionato-2009, Faggionato-2010} \begin{equation}\label{e:master} \begin{split} \frac{\partial }{\partial t}\mathcal{P}({n},{x},t) = & -\frac{\partial }{\partial x}\left[{F}({n},{x} ) \mathcal{P}(n,{x},t)\right] \, \\ &+ \sum_{{n}^\prime\neq{n}}\left[{T}(n|{n}^\prime; {x})\mathcal{P}(n^\prime,{x},t)\right. \\ & \left. - {T}(n^\prime|{n} ; {x})\mathcal{P}(n,{x},t)\right]. \end{split} \end{equation} Although we have derived the master equation for a PDMP with one discrete and one continuous variable, the generalization to many variables can immediately be seen to be \begin{equation}\label{e:master_many} \begin{split} \frac{\partial }{\partial t}\mathcal{P}(\mathbf{n},\mathbf{x},t) = & -\sum^{C}_{i=1}\frac{\partial }{\partial x_i}\left[{F}_i(\mathbf{n},\mathbf{x} ) \mathcal{P}(\mathbf{n},\mathbf{x},t)\right] \, \\ &+ \sum_{\mathbf{n}^\prime\neq\mathbf{n}}\left[{T}(\mathbf{n}|\mathbf{n}^\prime; \mathbf{x})\mathcal{P}(\mathbf{n}^\prime,\mathbf{x},t)\right. \\ & \left. - {T}(\mathbf{n}^\prime|\mathbf{n} ; \mathbf{x})\mathcal{P}(\mathbf{n},\mathbf{x},t)\right]. \end{split} \end{equation} The terms in the second sum are of the standard form one would expect in a master equation for birth-death processes. The first term describes the (piecewise) deterministic evolution of $\mathbf{x}$, and is of the form of a drift term in a standard Fokker-Planck equation for continuous processes. The absence of a term containing second derivatives with respect to components of $\mathbf{x}$ reflects the (piecewise) deterministic nature of the evolution of $\mathbf{x}$. The master equation fully specifies the dynamics of the stochastic system, but it cannot be solved analytically and it is difficult to solve numerically. Instead, either single trajectories for the underlying process can be simulated by using an algorithm similar to that originally devised by Gillespie~\cite{Gillespie-1976,Gillespie-1977}, see Appendix \ref{a2} for details, or approximation schemes can be applied to the master equation. In the next section we discuss an example of such a scheme. \section{Approximate dynamics}\label{s:ad} The method we will use to analyze the master equation (\ref{e:master_many}) separates out the average behavior from the fluctuations around it. We will use the example of the semi-arid ecosystem in Section \ref{s:ex} to illustrate the idea. In general, the fluctuations in the discrete variable $n$ are expected to be of order $\sqrt{n}$, and their impact on the deterministic dynamics of order $\mu\sqrt{n}$, where $\mu=m/A$, as defined above, is the minimal change in the biomass density induced by a birth or death event. We will thus introduce the change of variables \begin{equation}\label{e:n} \mu n = \rho + \sqrt{\mu}\, \eta, \end{equation} where $\rho$ is the deterministic density introduced in Section \ref{s:ex}.
To the order at which we will be working, $\mu \langle n \rangle = \rho$, where the angle brackets stand for an average with probability density function $\mathcal{P}(n,\mathbf{x},t)$ at time $t$. The stochastic deviation from the deterministic result, given by $\rho$, is described by the term $\sqrt{\mu} \eta$, and so the deterministic limit corresponds to $\mu \to 0$. The physical meaning of this limit is that the area of the system is to be chosen sufficiently large (formally infinite) so that it contains a large number of plants. The discreteness of the birth-death dynamics is then no longer relevant. In addition, the system is always assumed to be well-mixed so that any possible spatial structure can be neglected. The intrinsic fluctuations in the discrete variables will induce fluctuations in the continuous variables $\mathbf{x}$. For this reason we will also carry out the replacement \begin{equation}\label{e:x} \mathbf{x} = \bs{\chi} + \sqrt{\mu}\,\bs{\xi}, \end{equation} where $\bs{\chi}= \langle \mathbf{x} \rangle$ and $\bs{\xi}$ are the average and fluctuation terms in the continuous variables, respectively. We would like to stress that, up to now, Eqs. (\ref{e:n}) and (\ref{e:x}) are nothing more than a suitable change of variables. We are here assuming that all the stochastic variation is contained in the variables $({\eta}, {\bs{\xi}})$, while the corresponding averages $({\rho},{\bs{\chi}})$, obtained in the limit $\mu\to 0$ as explained above, follow a deterministic dynamics. For this reason we introduce the probability distribution $\Pi ({\eta}, {\bs{\xi}},t)$ of the stochastic variables $({\eta}, {\bs{\xi}})$ alone, in order to separate these two kinds of contributions. Once again we reiterate that the fluctuations are not externally imposed, but are rather intrinsic to the model. Their statistical properties emerge from the model and from the approximations that we carry out, as we discuss in more detail in Section \ref{s:f}. Notice that, in terms of the new variables, a single stochastic transition, $n\to n\pm 1$, corresponds to $\eta\to\eta\pm\sqrt{\mu}$. When $\mu$ is small, the effect of a single transition can, therefore, be conveniently represented by a Taylor expansion in $\sqrt{\mu}$. The detailed calculations follow the lines of \cite{vanKampen-book}, and are summarized briefly in Appendix \ref{a1}. \section{Deterministic limit}\label{s:dl} \begin{figure}[t] \hspace{-0.5cm}\includegraphics[width=60mm]{f1.eps} \caption{(Color online) Phase diagram of the deterministic system, Eqs.~(\ref{e:nsdet}). In the upper region (`Vegetation') the state with vegetation $\rho=\rho^\ast > 0$ is stable under small perturbations, i.e. ${\rm Re}(\lambda_{\rm max}) < 0$. In the small region to the centre left (`Cycles'), there are no stable fixed points, but a numerical integration of Eqs.~(\ref{e:nsdet}) reveals the presence of limit cycles (see discussion in Section \ref{s:f}). In the bottom region (`Desert'), the desert state ($\rho =0$) is stable under small perturbations. The two points marked `Simulations' correspond to $R=1.03$ and $R=1.05$ (with $W_0=0.1$); these are the parameters chosen for the stochastic simulations discussed in Section \ref{s:f}.
\label{f:1}} \end{figure} The leading contribution found from the above expansion shows that the average behaviour, described by $\rho$ and $\bs{\chi}$, is given by \begin{equation}\label{e:nsdet} \dv{\rho}{t} = \Phi(\rho,\bs{\chi}),\hspace{1cm} \dv{\bs{\chi}}{t} = \mathbf{F}(\rho,\bs{\chi}), \end{equation} where \begin{equation}\label{e:phi} \Phi(\rho,\bs{\chi})= \Gamma_b(\bs{\chi})\, \rho - \Gamma_d(\bs{\chi})\, \rho. \end{equation} The first equation in (\ref{e:nsdet}) can be found from Appendix \ref{a1} or simply by calculating the average of $\mu n$ from the master equation. The function $\mathbf{F}$ is given by Eq.~(\ref{e:ns}). We now analyze some of the properties of the limiting deterministic dynamics. The first question we can ask is: are there any fixed points, i.e. do states exist such that \begin{equation}\label{e:nsfp} \dv{\rho}{t} = 0,\hspace{1cm}\dv{\bs{\chi}}{t} = 0, \end{equation} and, if so, are they stable under small perturbations? To study the (linear) stability of these states, it is useful to introduce a small perturbation $\bs{\varepsilon} = (\delta\rho ,\delta\bs{\chi})$ around the fixed point of interest, say $(\rho,\bs\chi)=(\rho^\ast ,\bs{\chi}^\ast)$, which is a solution to (\ref{e:nsfp}) above. We thus write $(\rho,\bs\chi)=(\rho^\ast+\delta\rho,\bs{\chi}^\ast+\delta\bs\chi)$ and expand equations (\ref{e:nsdet}) up to first order in $\bs\varepsilon$ to obtain \begin{equation}\label{e:eps} \dv{\bs\varepsilon}{t} = \mathcal{J}^\ast\cdot \bs\varepsilon . \end{equation} Here $\mathcal{J}^\ast = \mathcal{J}(\rho^\ast,\bs{\chi}^\ast)$ is the $3\times 3$ Jacobian matrix of the system of equations (\ref{e:nsdet}), given by (\ref{e:jac}), evaluated at the fixed point. The stability of the fixed point is then governed by the eigenvalue with the largest real part, $\lambda_{\rm max}$, of $\mathcal{J}^\ast$. If the real part of this eigenvalue is negative, a small perturbation will die out after a time of order $1/|{\rm Re}(\lambda_{\rm max})|$; otherwise it will grow exponentially fast until non-linearities set in and limit the growth. We now apply this analysis to Eqs. (\ref{e:nsdet}). Depending on the choice of parameters, there can be either one of two stable fixed points \cite{Rietkerk-2008} or none: the fixed points correspond to either a desert state \begin{equation} \rho_0 = 0,~\sigma_0 = \frac{R}{a\, W_0},~\omega_0 = \frac{R}{r}, \end{equation} or a state with non-zero vegetation \begin{equation} \rho^\ast = \frac{c\, R}{d}-\frac{r\, c\, k}{c\, b-d},~ \sigma^\ast = \frac{R}{\alpha(\rho^\ast)},~ \omega^\ast = \frac{d\, k}{c\, b-d}. \end{equation} Figure~\ref{f:1} illustrates these three situations in terms of the parameters $R$ and $W_0$. In the region where there are no stable fixed points a numerical integration of Eqs.~(\ref{e:nsdet}) shows the existence of limit cycles, see Fig.~\ref{f:1} and further discussion in Section \ref{s:f}. These limit cycles were not reported in \cite{Rietkerk-2002}, presumably because the authors worked in a steady-state approximation that neglected perturbations of the surface water density $\sigma$. As our analysis reveals, these perturbations can render the fixed point $\rho=\rho^\ast$ unstable. While the range of parameters for which there are cycles in the deterministic approximation appears rather small, it has been observed elsewhere that demographic noise can effectively enlarge such regions \cite{McKane-PRL2005,Boland-2008,Biancalani-2011,Goldenfeld-2011,Biancalani-2010,Goldenfeld-2009}.
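To illustrate this linear stability analysis, the short Python sketch below evaluates the vegetated fixed point from the closed-form expressions above and obtains the eigenvalues of the Jacobian of Eqs.~(\ref{e:nsdet}) by central finite differences. The parameter values are hypothetical placeholders rather than those of Appendix \ref{a3}, so the resulting $\lambda_{\rm max}$ is purely illustrative.
\begin{verbatim}
import numpy as np

# Hypothetical parameters, for illustration only (the paper's actual
# values are listed in its Appendix A3).
R, r, a, b, k, W0 = 1.05, 0.1, 1.0, 1.0, 5.0, 0.1
c, d = 1.0, 0.1

alpha = lambda rho: a * (rho + k * W0) / (rho + k)   # infiltration rate
beta = lambda om: b * om / (om + k)                  # uptake rate

def rhs(y):
    """Right-hand side of the deterministic system for (rho, sigma, omega)."""
    rho, sig, om = y
    return np.array([
        (c * beta(om) - d) * rho,                    # Phi(rho, chi)
        R - alpha(rho) * sig,                        # F_sigma
        alpha(rho) * sig - beta(om) * rho - r * om,  # F_omega
    ])

# Vegetated fixed point, from the closed-form expressions in the text.
om_s = d * k / (c * b - d)
rho_s = c * R / d - r * c * k / (c * b - d)
y_s = np.array([rho_s, R / alpha(rho_s), om_s])

# Jacobian by central finite differences; stability is set by Re(lambda_max).
eps = 1e-7
J = np.column_stack([(rhs(y_s + eps * e) - rhs(y_s - eps * e)) / (2 * eps)
                     for e in np.eye(3)])
print("Re(lambda_max) =", max(np.linalg.eigvals(J).real))
\end{verbatim}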
Noise-induced oscillations can be found in parameter regimes in which a deterministic analysis predicts stable fixed points, i.e. outside the region labelled `Cycles' in Fig. \ref{f:1}. These so-called quasi-cycles can be characterized by the analytical techniques described below; their amplitude is particularly pronounced near the deterministic instability. Our analysis focuses on parameter regimes outside, but near, the region of instability in the deterministic phase diagram of Fig. \ref{f:1}. This is sufficient to illustrate the main point we want to make in this paper: that demographic stochasticity can alter the qualitative behavior of the model relative to that predicted in the deterministic approximation. We also carry out simulations inside the region of parameters in which the deterministic system has limit cycles. Finally we would like to add that when spatial interactions are taken into account, further phases can be found in Fig.~\ref{f:1}. In these phases the model exhibits spatial patterns coexisting with either of the homogeneous states (desert or homogeneous vegetation respectively), see \cite{Rietkerk-2001,Rietkerk-2002,Rietkerk-2008,Rietkerk-2010} for further details. \begin{figure}[t] \begin{center} \includegraphics[angle=-90, width=80mm]{f2.eps} \caption{(Color online) Power spectrum of the fluctuations of the number of plants in the phase with a stable deterministic fixed point. The solid (red) curve is the analytical prediction of Eqs.~(\ref{e:gps}) and (\ref{e:pps}); symbols show results from simulations of the PDMP, averaged over $100$ realizations, with an initial number of plants $n = 10^5$. The initial biomass $\mu n$, surface water $\sigma$ and soil water $\omega$ are initialized at the fixed point of the deterministic approximation. The parameter values used are given in Appendix \ref{a3} with $R=1.05$ (corresponding to the upper point in Fig. \ref{f:1}). \label{f:2}} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[trim=25mm 15mm 20mm 10mm, width=57mm]{f3.eps} \caption{(Color online) Noise-induced oscillations in the piecewise deterministic model for semi-arid ecosystems. The continuous (colored) curves show the results of simulations with initial number of plants $n = 10^5$; the biomass $\mu\, n$, surface water $\sigma$ and soil water $\omega$ are initialized at the fixed point of the deterministic approximation (dashed lines). From top to bottom the continuous lines are: surface water (blue), soil water (red thick line) and biomass density (green). The (black) dashed lines show the evolution of the corresponding deterministic approximation initialized slightly away from the fixed point, to show the oscillatory convergence. This feature is what couples with the noise to induce quasi-cycles in the full model. The parameter values used are those given in Appendix \ref{a3} with $R=1.05$ (upper point in Fig. \ref{f:1}). \label{f:3}} \end{center} \end{figure} \section{Fluctuations}\label{s:f} Using an expansion in the model parameter $\mu$ (effectively a system-size or small-noise expansion) it is possible to derive a set of coupled linear Langevin equations that approximate the stochastic dynamics close to a stable fixed point $(\rho^\ast,\bs{\chi}^\ast)$. The expansion method goes back to \cite{vanKampen-book} and is standard by now, see \cite{McKane-PRL2005,Boland-2008,Boland-2009,Rogers-2012,Biancalani-2011,Goldenfeld-2011,Scott-2011,Biancalani-2010,Goldenfeld-2009,Lugo-2008}, so that we do not give full details here.
A brief summary can be found in Appendix \ref{a1}, including the derivation of the Langevin equation (\ref{e:langevin}) which appears in the sub-leading order of the van Kampen expansion. The stability of the deterministic fixed point is required in order for the stochastic dynamics to remain close to the deterministic attractor, so that the linear approximation remains valid. As detailed in Appendix \ref{a1}, carrying out a Fourier transform of this linear Langevin dynamics with respect to time allows one to calculate the power spectra of fluctuations. In particular, we are interested in the power spectrum, $S(\Omega)=\left\langle\left|\widetilde{\eta}(\Omega)\right|^2 \right\rangle$, of the fluctuations of the plant density about the deterministic fixed point, $\rho^\ast >0$. The quantity $\widetilde{\eta}(\Omega)$ is the Fourier transform (with respect to time) of the fluctuations of the plant density, and as before $\avg{\cdots}$ denotes an average over realizations of the stochastic dynamics. Within the van Kampen expansion analytical results can be derived for the power spectrum, $S(\Omega)$; details can be found in the Appendix, see in particular Eqs.~(\ref{e:gps}) and (\ref{e:pps}). It is worth pointing out that although we here focus on fluctuations of the plant density, similar results can be derived for the fluctuations of the soil and surface water densities. \begin{figure}[t] \begin{center} \includegraphics[trim=25mm 5mm 20mm 10mm, width=60mm]{f4.eps} \caption{(Color online) Effect of demographic noise on the limit cycle of the deterministic model. The continuous curves are results from simulations of the PDMP, with initial number of plants $n = 10^5$; the biomass $\mu\, n$, surface water $\sigma$ and soil water $\omega$ are initialized on the limit cycle. Dashed lines represent the deterministic limit cycle. From top to bottom the continuous lines are: surface water (blue), soil water (red thick line) and biomass density (green). The rainfall parameter is $R=1.03$ (corresponding to the lower point indicated in Fig. \ref{f:1}); the remaining parameter values can be found in Appendix \ref{a3}. \label{f:4}} \end{center} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[trim=10mm 35mm 30mm 35mm, width=60mm]{f5.eps} \caption{(Color online) Projection onto the plane of surface and soil water variables of the limit cycle (black thick line) in Fig.~\ref{f:4}. The continuous `wiggly' (red) curve, near to the limit cycle, corresponds to the part of the stochastic trajectory in Fig.~\ref{f:4} between times $4200$ and $5500$ (wide valley in biomass and wide plateau in surface water). It appears that the region where the noise has stronger effects is at the top of the plot, which indeed corresponds to biomass close to zero (so the number of plants is relatively small). For reference, the dotted (blue) curve at the top corresponds to the deterministic approximation initialized at the limit cycle, except for the biomass, which is slightly displaced from the limit cycle ($\rho\approx 0.003$ rather than $0.05$). We can see that the perturbation initially grows away from the cycle and later returns to it. \label{f:5} } \end{center} \end{figure} We have also carried out numerical simulations using a modification of the Gillespie algorithm \cite{Gillespie-1976,Gillespie-1977}, as described in Appendix \ref{a2}. In Fig.~\ref{f:2} we test the analytical results we have just discussed against direct simulations of the PDMP. The value of the parameter $R$ was set to $1.05$.
Good agreement between the theoretical predictions and the numerical simulations is found. The figure demonstrates the existence of coherent quasi-cycles, driven by intrinsic noise, in a region of parameter space where the deterministic approximation predicts a stable fixed point, i.e.~where one has ${\rm Re}(\lambda_{\rm max})<0$. As seen in Fig.~\ref{f:2}, the power spectrum, $S(\Omega)$, shows a maximum at a characteristic non-zero frequency, confirming noise-induced oscillations~\cite{McKane-PRL2005}. An inspection of a single trajectory, as shown in Fig.~\ref{f:3}, shows that these quasi-cycles can also be detected by eye in the time domain. In Fig.~\ref{f:4} we show similar simulations for $R=1.03$. The deterministic system then no longer has a stable fixed point, but instead it has a limit cycle. The effect of demographic noise on the limit cycle is to introduce a stochastic modulation of the period and amplitude of the cycle. These effects can be studied analytically by separating directions perpendicular to the limit cycle from longitudinal modes in a co-moving Frenet frame. This is discussed in more detail for a different model system in \cite{Boland-2009, Boland-2008}. For the present system, however, we will not investigate this further analytically, but only discuss a few qualitative features. Figure \ref{f:5} shows a projection of the limit cycle (black thick continuous line) onto the plane spanned by the surface water and soil water variables. In the same figure we also plot a part of the stochastic trajectory (red thin continuous line) shown in Fig.~\ref{f:4}; specifically, Fig.~\ref{f:5} shows the segment of the stochastic trajectory between time $4200$ and time $5500$. This segment corresponds to the wide plateau of surface water concentration $\sigma\approx 50$ seen in the upper panel of Fig. \ref{f:4} (upper thin solid line). In this segment the plant biomass, a measure for the number of individuals in the system, remains relatively small, see the solid line in the lower panel of Fig. \ref{f:4}. In this regime of small numbers of individuals the effects of demographic noise are stronger than in periods in which there are more individuals present in the system. This is seen in Fig.~\ref{f:5}; the stochastic trajectory deviates substantially from the deterministic trajectory in the upper part of the limit cycle when the plant biomass is small, but it follows the deterministic cycle more closely when the number of plants is large (lower part of the limit cycle). As shown in \cite{Boland-2009, Boland-2008}, the overall effect of demographic noise on a limit cycle is determined by two different factors: one is the relative magnitude of discretization effects; these are small for large populations, but more relevant for small populations, as just discussed. Secondly, the local stability of the deterministic trajectory plays an important role as well. For highly stable deterministic trajectories the amplification effect that demographic noise undergoes is relatively small. If the deterministic attractor is only weakly stable, then the amplification factor can be significant. In the case of a fixed point, stability of the deterministic system is characterized by the eigenvalue $\lambda_{\rm max}$. For limit cycles the situation is more complicated: stability is then governed by the relevant Floquet exponents \cite{Boland-2009, Boland-2008}, and, crucially, local stability can vary along the limit cycle.
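A numerical route to these Floquet multipliers is sketched below: assuming a point \texttt{y0} on the deterministic limit cycle and its period \texttt{T} are known (e.g. obtained from a long numerical integration of Eqs.~(\ref{e:nsdet})), one integrates the variational equation alongside the orbit and diagonalizes the resulting monodromy matrix. The \texttt{rhs} argument can be the right-hand side function from the stability sketch above. This generic variational-equation procedure is shown for illustration only; it is not the specific method of \cite{Boland-2009, Boland-2008}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def floquet_multipliers(rhs, y0, T, eps=1e-7):
    """Eigenvalues of the monodromy matrix of the limit cycle through y0
    with period T; rhs(y) is the deterministic right-hand side. For a
    genuine periodic orbit one multiplier is always close to 1."""
    dim = len(y0)

    def jac(y):  # Jacobian of rhs by central finite differences
        return np.column_stack(
            [(rhs(y + eps * e) - rhs(y - eps * e)) / (2 * eps)
             for e in np.eye(dim)])

    def extended(t, z):  # orbit plus variational equation dY/dt = J(y) Y
        y, Y = z[:dim], z[dim:].reshape(dim, dim)
        return np.concatenate([rhs(y), (jac(y) @ Y).ravel()])

    z0 = np.concatenate([y0, np.eye(dim).ravel()])
    sol = solve_ivp(extended, (0.0, T), z0, rtol=1e-10, atol=1e-10)
    monodromy = sol.y[dim:, -1].reshape(dim, dim)
    return np.linalg.eigvals(monodromy)
\end{verbatim}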
In our system we observe that stability in the upper part of the limit cycle is only relatively weak. To illustrate this we have initialized the deterministic system at a point near the limit cycle trajectory, but with a slight displacement in the biomass ($\rho\approx 0.003$ instead of $\rho\approx 0.05$). No perturbation is applied to the water variables $\sigma$ and $\omega$. As shown in Fig. \ref{f:5} (blue dotted line), we find that the perturbation initially appears to grow, but ultimately returns to the limit cycle. This relatively weak stability of the upper part of the deterministic limit cycle may enhance the effects of demographic stochasticity. \section{Conclusions}\label{s:c} In this paper we have studied the effects of demographic noise in the framework of piecewise deterministic Markov processes. Specifically we have investigated a class of hybrid systems, composed of $C$ continuous degrees of freedom, and $D$ discrete ones. Such systems are of interest for example in the context of ecology, where the continuous degrees of freedom can represent variables of the external environment such as water or light. The discrete degrees of freedom could then represent the individual-based dynamics of a population of plants or animals. While the population of individuals follows a standard Markovian birth-death process and the continuous degrees of freedom are governed by ordinary differential equations, the two types of dynamics are not independent. Non-trivial behavior arises through the coupling of both types of dynamics: for example, the birth and death rates depend on the availability of environmental resources, while these in turn depend on the state of the discrete population. We have shown how to apply techniques of nonequilibrium physics to analyze the effects of demographic stochasticity on such systems. In particular, we have shown how to use a system-size expansion to obtain a deterministic approximation in the limit of infinite populations, and to derive an effective Gaussian description of fluctuations about the deterministic behavior in the limit of large, but finite populations. We have applied these concepts to a stylized non-spatial model of semi-arid ecosystems and find that demographic noise can induce persistent coherent stochastic oscillations in parameter regimes in which a deterministic description would predict a stable fixed point. These quasi-cycles can be characterized analytically by deriving closed-form expressions for their power spectra within the small-noise approximation, in good agreement with simulations. This extends existing work on the qualitative features (here quasi-cycles) generated by amplified demographic noise, see \cite{McKane-PRL2005,Boland-2008,Boland-2009,Rogers-2012,Biancalani-2011,Goldenfeld-2011,Scott-2011,Biancalani-2010,Goldenfeld-2009,Lugo-2008}. In numerical simulations we have also investigated the effects of intrinsic noise on deterministic limit cycles; in line with the observations made in \cite{Boland-2009,Boland-2008}, we observe that stochasticity can lead to a longitudinal phase drift along the limit cycle, coupled with transverse fluctuations. While our present approach is limited to non-spatial hybrid systems, work in progress will extend these results to piecewise deterministic models with spatial interactions.
This is particularly interesting for more realistic models of semi-arid ecosystems, in which pattern-forming mechanisms are known to be important~\cite{Meron-2011,Rietkerk-2010,Rietkerk-2008,Meron-2004,Rietkerk-2002,Meron-2001}. \begin{acknowledgements} We would like to thank Max Rietkerk, Mara Baudena and Maarten Eppinga for insightful discussions. This work was supported by EPSRC under grant RESINEE (reference EP/I019200/1). TG would like to thank Research Councils UK for support (RCUK reference EP/E500048/1). \end{acknowledgements}
\section{Conclusions} In this work a new approach to solving the problem in~\cite{Chooz99} was proposed. The precision of the sample mean estimator was calculated analytically for the offset exponential and normal distributions, both for a finite sample and for limiting cases. Even though the original applied problem concerned the exponential distribution, the normal distribution was also found to be useful because of the central limit theorem~\cite{CLT}. It was shown explicitly how the distribution of the sample mean of the exponential pdf converges near the mode to the normal distribution. While the normal distribution is easier to handle and has simpler formulae for the distribution of the sample mean and for the directional CDF, the exponential distribution has richer mathematical properties. While the distribution of the convolution of normal pdfs depends only on one combination of parameters, for the exponential distribution this is not the case. While the normal distribution is stable, the exponential one is not. Geometric techniques were used to deal with the limiting case of the exponential distribution. It was shown that the spherical projection of the sample mean of the exponential distribution has connections with hypergeometric functions and modified Bessel functions. In this study we did not consider other estimators, such as MLEs or the mean of the sample's projection on the sphere. Note that in~\cite{Chooz99} it was stated that the mean of unit vectors is a more precise estimator than the arithmetic sample mean. It might also be useful for mathematical applications to study the normal and exponential distributions in dimensions other than three. \subsection*{Acknowledgements} I would like to express my gratitude to the Independent University of Moscow, which largely formed my mathematical way of thinking and taught me quite a broad area of advanced mathematics (even though this thesis has little to do with what I was taught at IUM). The years that I remember most at IUM were the first ones, when we solved lots of problems and discussed them personally with mathematicians. I'm very thankful to Professor Alexander Mikhailovich Chebotarev, who gave a course on statistics at my primary university and who agreed to be my scientific adviser at IUM, read this thesis, provided useful suggestions, and independently checked the results \ref{E_CDF_cos_theta_final} and \ref{CDF_G_exact}. I'm also thankful to all my mathematics professors at my primary university, which was called the Institute of Natural Sciences and Ecology and is now the 10th faculty of the Moscow Institute of Physics and Technology, and which gave me a very good fundamental and more advanced knowledge of mathematics; and at my physics and mathematics school 1189 and earlier. I would like to thank my current employer, the Institute for Nuclear Research of the Russian Academy of Sciences, my scientific adviser there, Valery Vitalievich Sinev, and my mother Arsenia Nikitenko. \subsubsection*{Integral on $\cos\theta'$} For shorter notation we define \begin{equation} \left( \begin{array}{rcl} a & = & r_n^2 + r_0^2 \\ b & = & 2 r_n r_0 \\ c & = & \frac nl \end{array} \right) \label{abc} \end{equation} We assume $b\neq0$; that is, we exclude the point $r=0$ from the integration.
We calculate \begin{align} \int_x^1 e^{-c\sqrt{a - bx'}} (c\sqrt{a - bx'})^k \mathrm{d} x' & = \nonumber \\ \left( z = c\sqrt{a - bx'} \atop \mathrm{d} x' = -\frac{2z\mathrm{d} z}{bc^2} \right) & = - \frac{2}{bc^2} \int_{c\sqrt{a - bx}}^{c\sqrt{a - b}} e^{-z}z^{k+1}\,\mathrm{d} z \label{int_sqrt} \end{align} \begin{equation} \int x^k e^{-x}\,\mathrm{d} x = -x^k e^{-x} + k \int x^{k-1} e^{-x} \,\mathrm{d} x = -e^{-x}k! \sum_{i=0}^k \frac{x^i}{i!} \label{int_e-x_xk} \end{equation} Combining \ref{abc}, \ref{int_sqrt}, and \ref{int_e-x_xk}, \begin{gather*} \int_{\cos\theta}^1 e^{-\frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos\theta'}} \left(\frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos\theta'}\right)^k \,\mathrm{d} \cos\theta' = \nonumber \\ = \frac{l^2}{n^2 r_n r_0} e^{-z}(k+1)! \sum_{i=0}^{k+1} \left. \frac{z^{i}}{i!} \right|_{\frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos \theta}} ^{\frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0}} = \nonumber \\ = \frac{l^2(k+1)!}{n^2 r_0} \sum_{i=0}^{k+1} \frac{1}{i!} \Big( \nonumber \hspace{7cm} \nonumber \\ \qquad \quad \frac 1{r_n} e^{-\frac nl |r_n - r_0|} \left(\frac nl |r_n - r_0|\right)^{i} - \label{int_cos_upper} \addtocounter{equation}{1} \tag{\textrm{\theequation}} \\ \shoveright - \frac 1{r_n} e^{- \frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos \theta}} \left(\frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos \theta}\right)^{i} \addtocounter{equation}{1} \tag{\textrm{\theequation}} \label{int_cos_lower} \\ \qquad \hspace{-3cm} \Big) . \addtocounter{equation}{1} \tag{\textrm{\theequation}} \label{CDF_E_int_cos} \end{gather*} \subsubsection* {Integral of \ref{int_cos_upper} over $r_n$} \label{sss_int_cos_upper} Integrating \ref{int_cos_upper}, we multiply it by $r_n^2$ from the Jacobian and expand the modulus \begin{align} \int_0^\infty r_n e^{-\frac nl |r_n - r_0|} \left(\frac nl |r_n - r_0| \right)^i & \mathrm{d} r_n = \label{int_cos_upper_dr} \\ & \int_0^{r_0} r_n e^{-\frac nl (r_0 - r_n)} \left(\frac nl (r_0 - r_n)\right)^i \mathrm{d} r_n \label{int_module_zero} \\ & + \int_{r_0}^\infty r_n e^{-\frac nl (r_n - r_0)} \left(\frac nl (r_n - r_0)\right)^i \mathrm{d} r_n \label{int_module_infty} \end{align} The integral from $0$ to $r_0$ is easily calculated using \ref{int_e-x_xk}: \begin{multline} \textrm{\ref{int_module_zero}} = \left( \begin{array}{c} \frac nl \left(r_0 - r_n\right) = x, \\ r_n = r_0 - \frac ln x , \\ \mathrm{d} r_n = - \frac ln \mathrm{d} x \end{array} \right) = - \int_{\frac nl r_0}^0 \frac ln \left(r_0 - \frac ln x\right) e^{-x}x^i \,\mathrm{d} x \stackrel{\left( \textrm{ \ref{int_e-x_xk} } \right)}{=} \\ = \left. \frac ln r_0 e^{-x} i! \sum_{j=0}^{i} \frac{x^j}{j!} \right|_{\frac nl r_0}^0 - \frac{l^2}{n^2} \int_0^{\frac nl r_0} x^{i+1}e^{-x} \mathrm{d} x = \\ = \frac ln r_0 i! - \frac ln r_0 e^{-\frac nl r_0} i! \sum_{j=0}^{i} \frac{\left(\frac nl r_0\right)^j}{j!} - \frac{l^2}{n^2} \int_0^{\frac nl r_0} x^{i+1}e^{-x} \mathrm{d} x \label{int_module_zero_sol} \end{multline} We have retained the last integral, for it is useful in what follows. The integral from $r_0$ to infinity is taken similarly: \begin{multline} \mbox{\ref{int_module_infty}} = \frac ln \int_{r_0}^\infty e^{-\frac nl(r_n - r_0)}\left(\frac nl(r_n - r_0)\right)^{i+1} \mathrm{d} r_n \\ + r_0 \int_{r_0}^\infty e^{-\frac nl(r_n - r_0)}\left(\frac nl(r_n - r_0)\right)^i \mathrm{d} r_n = \left( \frac ln \right)^2 (i+1)! + \frac ln r_0 i!
\label{int_module_infty_sol} \end{multline} Adding \ref{int_module_zero_sol} and \ref{int_module_infty_sol}, we obtain \begin{align} \mbox{\ref{int_cos_upper_dr}} & = 2\,i! \frac ln r_0 - \frac ln r_0 e^{-\frac nlr_0} i! \sum_{j=0}^i \frac{\left(\frac nl r_0\right)^j}{j!} + (i+1)!\frac{l^2}{n^2} - \frac{l^2}{n^2} \int_0^{\frac nl r_0} x^{i+1}e^{-x} \mathrm{d} x \label{int_cos_upper_sol} \end{align} In the case of a large number of events $n$ or, more precisely, when $\frac nl r_0 \gg 1$, the exponential $e^{-\frac nl r_0}$ is much smaller than any power of $\frac nl r_0$, and \begin{equation} \mbox{\ref{int_cos_upper_dr}} \simeq 2\, i!\,\frac ln \,r_0 . \end{equation} This corresponds to the case when most of the integral \ref{int_cos_upper_dr} is accumulated in a neighbourhood of $r_n = r_0$. \subsubsection*{Integral of \ref{int_cos_lower} over $r_n$} \label{sss_int_cos_lower} In this subsubsection we calculate the integral \begin{equation} \int_0^\infty r_n e^{-\frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos \theta} } \left( \frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos \theta} \right)^i \,\mathrm{d} r_n \label{int_cos_sqrt_int} . \end{equation} We introduce \begin{equation*} x = \frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos \theta}, \end{equation*} then we can rewrite \begin{equation*} x^2 \left(\frac ln\right)^2 = (r_n - r_0 \cos\theta)^2 + r_0^2(1-\cos^2\theta) \end{equation*} The change from $r_n$ to $x$ is a change of coordinates only if $r_n$ is uniquely determined by $x$ and vice versa. Therefore we should separately consider the regions $r_n \geq r_0 \cos\theta$ and $r_n < r_0 \cos\theta$. The point $r_n = r_0 \cos\theta$ on a line of integration corresponds to the maximum of the pdf on that line (this is the nearest point on the line to the mode of the distribution). \paragraph{ \fbox{\fbox{ $\cos\theta \geq 0$ }} } \paragraph{ \fbox{ $r_n \geq r_0 \cos\theta$ } } \begin{eqnarray*} & & r_n = r_0 \cos\theta + \sqrt{\frac{l^2}{n^2}x^2 - r_0^2(1 - \cos^2\theta)} \nonumber \\ & & \mathrm{d} r_n = \frac{\frac{l^2}{n^2} x \mathrm{d} x} {\sqrt{\frac{l^2}{n^2}x^2 - r_0^2(1 - \cos^2\theta)}} = \frac{\frac{l}{n} x \mathrm{d} x} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} \end{eqnarray*} \begin{eqnarray} \int_{r_0\cos\theta}^\infty r_n e^{-\frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos \theta} } \left(\frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos \theta} \right)^i \,\mathrm{d} r_n = \nonumber \\ = \frac ln r_0 \cos\theta \int_{\frac nl r_0 \sqrt{1 - \cos^2\theta}}^\infty \frac{x^{i+1}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x \ + \label{r0>costheta_1} & & \\ \lefteqn{ \phantom{=} + \frac {l^2}{n^2} \int_{\frac nl r_0 \sqrt{1 - \cos^2\theta}}^\infty x^{i+1} e^{-x} \,\mathrm{d} x } \phantom{ = \int_{\frac nl r_0 \sqrt{1 - \cos^2\theta}}^\infty r_0 \cos\theta \frac{\frac ln x^{i+1}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x + } & & \label{gamma_cos} \end{eqnarray} The integral \ref{r0>costheta_1} converges at the lower limit for $\cos\theta \neq 1$ because the integral $\int_a^b \frac{\mathrm{d} x}{\sqrt{x^2 - a^2}} = \int_a^b \frac{\mathrm{d} x}{\sqrt{(x-a)(x+a)}}$ converges.
The integral \begin{equation*} \int_a^\infty \frac{x^{i+1}e^{-x}\mathrm{d} x}{\sqrt{x^2 - a^2}} \stackrel{(x=a\mathrm{ch}y)}{=} a^{i+1}\int_0^\infty \mathrm{ch}^{i+1}\!y\ e^{-a\mathrm{ch}y}\mathrm{d} y \end{equation*} can be expressed through the modified Bessel function $K_\nu$ (8.407 in~\cite{GradRyzh}) using the formula 3.547(4) from~\cite{GradRyzh}: \begin{equation*} \int_0^\infty \mathrm{exp}\left(-\beta\mathrm{cosh}x\right) \mathrm{cosh}\left(\gamma x\right) \mathrm{d} x = K_\gamma\left(\beta\right) \qquad \qquad \quad \left[\Re\beta > 0\right] \end{equation*} since ch$^nx$ can be expressed as a sum of ch($kx$) using 1.320(6) and 1.320(8) from~\cite{GradRyzh}. \paragraph{ \fbox{ $ 0 \leq r_n \leq r_0 \cos\theta$ } } \begin{eqnarray*} \sqrt{\frac{l^2}{n^2}x^2 - r_0^2(1 - \cos^2\theta)} = r_0 \cos\theta - r_n, \\ r_n = r_0\cos\theta - \sqrt{\frac{l^2}{n^2}x^2 - r_0^2(1 - \cos^2\theta)} \end{eqnarray*} The limits $r_n|_0^{r_0\cos\theta}$ are converted to $x|_{\frac nl r_0}^{\frac nl r_0 \sqrt{1-\cos^2\theta}}$. The differential $\mathrm{d} r_n$ is the same as in the previous case $r_n \geq r_0\cos\theta$, except for the negative sign, which we absorb by interchanging the upper and lower limits of integration. \begin{multline} \int_0^{r_0\cos\theta} r_n e^{-\frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos \theta} } \left( \frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos \theta} \right)^i \,\mathrm{d} r_n = \\ = \int_{\frac nl r_0 \sqrt{1-\cos^2\theta}}^{\frac nl r_0} \left( r_0\cos\theta - \sqrt{\frac{l^2}{n^2}x^2 - r_0^2(1-\cos^2\theta)} \right) \frac{ e^{-x} \frac ln x^{i+1} \,\mathrm{d} x } {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} = \\ \begin{split} = & \; \frac ln r_0\cos\theta \int_{\frac nlr_0 \sqrt{1-\cos^2\theta}}^{\frac nlr_0} \frac{x^{i+1}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x \\ & - \frac {l^2}{n^2} \int_{\frac nlr_0 \sqrt{1-\cos^2\theta}}^{\frac nl r_0} x^{i+1} e^{-x} \,\mathrm{d} x . \end{split} \label{r0_lt_costheta} \end{multline} Adding \ref{r0>costheta_1} and \ref{gamma_cos} to \ref{r0_lt_costheta} gives \begin{gather} \textrm{\ref{int_cos_sqrt_int}} \stackrel{1 > \cos\theta \geq 0}{=} \label{E_CDF_int_cos_gt_0} \\ = \frac ln r_0\cos\theta \int_{\frac nl r_0}^\infty \frac{x^{i+1}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x + \frac {l^2}{n^2} \int_{\frac nl r_0}^\infty x^{i+1} e^{-x} \,\mathrm{d} x + \\ + 2 \frac ln r_0\cos\theta \int_{\frac nl r_0 \sqrt{1-\cos^2\theta}}^{\frac nl r_0} \frac{x^{i+1}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x \label{int_cos_gt_0_add} . \end{gather} \paragraph{ \fbox{\fbox{ $\cos\theta < 0$ }} } In this case $r_n$ is always greater than $r_0 \cos\theta$, and the integral takes the same form as in \ref{r0>costheta_1} and \ref{gamma_cos}. The lower limit is changed from $r_n = 0$ to $x = \frac nl r_0$, \begin{multline} \textrm{\ref{int_cos_sqrt_int}} \stackrel{\cos\theta < 0}{=} \frac ln r_0 \cos\theta \int_{\frac nl r_0}^\infty \frac{x^{i+1}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x + \frac {l^2}{n^2} \int_{\frac nl r_0}^\infty x^{i+1} e^{-x} \,\mathrm{d} x. \label{int_cos_lt_0} \end{multline} Note that the only difference between \ref{int_cos_lt_0} and \ref{E_CDF_int_cos_gt_0} is \ref{int_cos_gt_0_add}. We can combine the results for $\cos\theta < 0$ and for $\cos\theta \geq 0$ using the Heaviside step function: \begin{equation} \Theta(x) = \begin{cases} 1 & x \geq 0, \\ 0 & x < 0.
\end{cases} \label{Theta_Heaviside} \end{equation} \begin{multline} \textrm{\ref{int_cos_sqrt_int}} = \frac ln r_0 \cos\theta \int_{\frac nl r_0}^\infty \frac{x^{i+1}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x + \frac {l^2}{n^2} \int_{\frac nl r_0}^\infty x^{i+1} e^{-x} \,\mathrm{d} x \\ + 2 \Theta(\cos\theta) \frac ln r_0\cos\theta \int_{\frac nl r_0 \sqrt{1-\cos^2\theta}}^{\frac nl r_0} \frac{x^{i+1}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x \label{int_cos_theta_sqrt_full} \end{multline} \subsubsection*{CDF($\cos\theta(\mathbf r_n))$} Combining the calculations for the CDF$(\cos\theta)$, \begin{gather} \mathrm{CDF}(\cos\theta) \stackrel{\ref{CDF_E_begin},\ref{CDF_E_int_cos}}{=} \frac nl \frac 1{r_0} \sum_{k=0}^{2n-2} \frac{(4n-4-k)!2^k(k+1)!} {k!(2n-2-k)!} \int_0^{\infty}r_n \mathrm{d} r_n \sum_{i=0}^{k+1} \frac 1{i!} \Bigg( \nonumber \\ e^{-\frac nl |r_n - r_0|}\left(\frac nl |r_n - r_0|\right)^i - e^{-\frac nl\sqrt{r_n^2 + r_0^2 - 2r_nr_0\cos\theta}} \left(\frac nl \sqrt{r_n^2 + r_0^2 - 2 r_n r_0 \cos\theta}\right)^i \nonumber \\ \Bigg) \stackrel{\ref{int_cos_upper_sol},\ref{int_cos_theta_sqrt_full}}{=} \frac nl \frac 1{r_0} \sum_{k=0}^{2n-2} \frac{(4n-4-k)!2^k(k+1)} {(2n-2-k)!} \sum_{i=0}^{k+1} \frac 1{i!} \Bigg( \nonumber \\ 2\,i! \frac ln r_0 - \frac ln r_0 e^{-\frac nlr_0} i! \sum_{j=0}^i \frac{\left(\frac nl r_0\right)^j}{j!} + (i+1)!\frac{l^2}{n^2} - \frac{l^2}{n^2} \int_0^{\frac nl r_0} x^{i+1}e^{-x} \mathrm{d} x \label{int_cos_upper_sol_here} \\ - \frac ln r_0 \cos\theta \int_{\frac nl r_0}^\infty \frac{x^{i+1}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x - \frac {l^2}{n^2} \int_{\frac nl r_0}^\infty x^{i+1} e^{-x} \,\mathrm{d} x \label{int_cos_sqrt_here} \\ - 2 \Theta(\cos\theta) \frac ln r_0\cos\theta \int_{\frac nl r_0 \sqrt{1-\cos^2\theta}}^{\frac nl r_0} \frac{x^{i+1}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x \nonumber \Bigg) \end{gather} The line \ref{int_cos_upper_sol_here} corresponds to \ref{int_cos_upper}, while \ref{int_cos_sqrt_here} and the lines below come from \ref{int_cos_lower}. The last terms in \ref{int_cos_upper_sol_here} and \ref{int_cos_sqrt_here} sum up to $-\frac{l^2}{n^2}\Gamma(i+2)$ and cancel with $(i+1)!\frac{l^2}{n^2}$ in \ref{int_cos_upper_sol_here}. Thus we obtain the final answer: \begin{multline} \mathrm{CDF}_E(\cos\theta(\mathbf{r}_n, \mathbf{r}_0)) = \sum_{k=0}^{2n-2} \frac{(4n-4-k)!2^k(k+1)} {(2n-2-k)!} \Bigg( 2(k+2) \\ - e^{-\frac nlr_0} \sum_{i=0}^{k+1} (k+2-i)\frac{\left(\frac nl r_0\right)^i}{i!} - \cos\theta \int_{\frac nl r_0}^\infty \frac{\sum_{i=0}^{k+1} \frac{x^{i+1}}{i!}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x \\ - 2 \Theta(\cos\theta) \cos\theta \int_{\frac nl r_0 \sqrt{1-\cos^2\theta}}^{\frac nl r_0} \frac{\sum_{i=0}^{k+1} \frac{x^{i+1}}{i!}} {\sqrt{x^2 - \frac{n^2}{l^2}r_0^2(1 - \cos^2\theta)}} e^{-x} \,\mathrm{d} x \Bigg) \label{E_CDF_cos_theta_final} \end{multline} The first term in \ref{E_CDF_cos_theta_final} is equal to $2^{4n-2}(2n-1)!$ due to \ref{normfact}. \section{Exponential distribution} \label{exp_distribution} \subsection{Introduction} The exponential distribution appeared in the author's studies connected with the problem in~\cite{Chooz99}. The observed pdf at large deviations $x$ behaved as $\sim e^{-\frac xl}$. To be spherically symmetric, the pdf should be proportional to $e^{-\frac rl}$.
To calculate the normalisation factor, we take the integral \begin{equation*} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\frac{\sqrt{x^2 + y^2 + z^2}}{l}} \,\mathrm{d} x\,\mathrm{d} y\,\mathrm{d} z = 4\pi \int_{0}^{\infty} r^2 e^{-\frac{r}{l}} \,\mathrm{d} r = 8 \pi l^3 \end{equation*} Hence the offset exponential probability density function in 3-dimensional space is \begin{equation} f_e(x,y,z|x_0, y_0, z_0) = \frac 1{8 \pi l^3} e^{-\frac{\sqrt{(x-x_0)^2 + (y-y_0)^2 + (z-z_0)^2}}l} \label{f_e} \end{equation} \subsubsection [Fourier transform and convolutions] {Fourier transform of $f_e$ and its convolutions} In order to calculate the Fourier transform of $f_e$, we calculate the integral \begin{eqnarray*} \iiint_{\R3} e^{-i\mathbf{pr}}e^{-\frac{\sqrt{(\mathbf r - \mathbf r_0)^2}}{l}} \, \mathrm{d}^3\mathbf r & = & \iiint_{\R3} e^{-i\mathbf{pr}_0} e^{-i\mathbf{pr}'} e^{-\frac {r'}l} \, \mathrm{d}^3\mathbf r' \\ & = & 2\pi e^{-i\mathbf{pr}_0} \int_0^\infty \int_0^\pi e^{-ipr'\cos\theta - \frac{r'}l} \sin \theta \,\mathrm{d} \theta \, r'^2 \, \mathrm{d} r', \end{eqnarray*} the inner integral over $\theta$ \begin{equation} \int_{-1}^{1} e^{-ipr'\cos\theta}\,\mathrm{d}\cos\theta = \frac 1{-ipr'}(e^{-ipr'} - e^{ipr'}) = \frac 1{ipr'}(e^{ipr'} - e^{-ipr'}), \label{exptheta} \end{equation} and the outer integral over $r'$ with one of the complex conjugate exponents \begin{displaymath} \int_0^\infty r' e^{ipr' - \frac{r'}{l}} \,\mathrm{d} r' =\left(r' = \frac{r}{\frac 1l - ip}\right) = \frac 1{(\frac 1l - ip)^2} \int_0^\infty r e^{-r}\,\mathrm{d} r = \frac{1}{(\frac{1}{l} - ip)^2}; \end{displaymath} combining the two conjugate integrals, we obtain \begin{displaymath} \frac 1{ip} ( \frac{1}{(\frac{1}{l} - ip)^2} - c.c.) = \frac{(\frac 1l+ip)^2 - (\frac 1l - ip)^2} {ip(\frac 1l - ip)^2(\frac 1l + ip)^2} = \frac{\frac{4ip}l} {ip(\frac 1{l^2} + p^2)^2} = \frac 4{l(\frac 1{l^2} + p^2)^2}, \end{displaymath} therefore, taking into account the normalisation factor $\frac 1{8\pi l^3}$ and the factor $(2\pi)^{-\frac 32}$ from the Fourier transform, \begin{equation} \hat{f}_e(\mathbf p) = \frac 1{8\pi l^3}\frac{8\pi} {(2\pi)^\frac 32 l (\frac 1{l^2} + p^2)^2} e^{-i\mathbf{pr}_0} = \label{feFourier} \frac{e^{-i\mathbf{pr}_0}} {(2\pi)^\frac 32(1 + l^2p^2)^2} \end{equation} From~\ref{fenFourierGen} and~\ref{feFourier} we obtain the Fourier transform of the convolution of $n$ exponential distributions: \begin{equation} \hat f_n(\mathbf p) = \frac{e^{-i\mathbf{pr}_0n}} {(2\pi)^{\frac 32}(1 + l^2p^2)^{2n}} . \end{equation} Thereby the convolution of $n$ exponential distributions is \begin{equation*} f_n(\mathbf r) = \tilde{\hat{f}}_n(\mathbf p) = \iiint_{\R3} \frac{e^{i\mathbf{pr}}} {(2\pi)^\frac 32} \hat f_n(\mathbf p) \,\mathrm{d}^3\mathbf p = \frac 1{(2\pi)^3} \iiint_{\R3} \frac{e^{i\mathbf{p}(\mathbf r - \mathbf r_0 n)}} {(1 + l^2 p^2)^{2n}} \,\mathrm{d}^3\mathbf p , \end{equation*} choosing spherical coordinates with the $z$ axis along $\mathbf r - n \mathbf r_0$, the exponent becomes $e^{i p |\mathbf r - n \mathbf r_0| \cos \theta}$, and using~\ref{exptheta}, \begin{eqnarray*} f_n(\mathbf r) & = & \frac 1{(2\pi)^2} \int_0^\infty \frac{e^{ip|\mathbf r - n \mathbf r_0|} - e^{- ip|\mathbf r - n \mathbf r_0|}} {ip|\mathbf r - n \mathbf r_0|} \frac 1{(1 + l^2 p^2)^{2n}} p^2 \,\mathrm{d} p = \\ & = & \frac 1{2 \pi^2 |\mathbf r - n \mathbf r_0|} \int_0^\infty \frac{p \sin (p|\mathbf r - n \mathbf r_0|)} {(1 + l^2 p^2)^{2n}} \,\mathrm{d} p .
\end{eqnarray*} Substituting $p = \frac xl$ inside the integral, \begin{equation} f_n(\mathbf r) = \frac 1{2 \pi^2 l^2 |\mathbf r - n \mathbf r_0|} \int_0^\infty \frac{x \sin (x \frac {|\mathbf r - n \mathbf r_0|}l) } {(1 + x^2)^{2n}} \,\mathrm{d} x . \end{equation} \subsubsection {Distribution of the sample mean \texorpdfstring{$E_n$}{Eₙ}} A statistic useful in practical applications is $\mathbf r_n = \frac{\mathbf r}n$, the arithmetic mean of the sample. We can calculate the probability density function $E_n(\mathbf r_n)$ of the random variable $\mathbf r_n$ and, using the conservation of probability under the change of variables, $E_n(\mathbf r_n)\,\mathrm{d}^3\mathbf r_n = f_n(\mathbf r)\,\mathrm{d}^3\mathbf r$, we obtain $ E_n(\mathbf r_n) = n^3 f_n(n\mathbf r_n) $ \begin{equation} E_n(\mathbf r_n) = \frac {n^2}{2 \pi^2 l^2 |\mathbf r_n - \mathbf r_0|} \int_0^\infty \frac{x \sin (x \frac {n |\mathbf r_n - \mathbf r_0|}l) } {(1 + x^2)^{2n}} \,\mathrm{d} x. \label{E_n} \end{equation} The integral can be calculated analytically using the formula 3.737(2) from~\cite{GradRyzh} $[a > 0, \Re \beta > 0 ]$: \begin{equation} \int_0^\infty \frac{x\sin (ax)\,\mathrm{d} x} {(x^2 + \beta^2)^{n+1}} = \left\{ \begin{aligned} & \frac{\pi a e^{-a\beta}} {2^{2n}n!\beta^{2n-1}} \sum_{k=0}^{n-1} \frac{(2n-k-2)!(2a\beta)^k} {k!(n-k-1)!} \label{GR_int} \\ & \frac \pi2 e^{-a\beta} \qquad \qquad \qquad \quad \left[n=0, \beta \geq 0\right] \end{aligned} \right. \end{equation} Combining \ref{E_n} and \ref{GR_int}, we obtain \begin{equation} E_n(\mathbf r_n) = \frac {n^3}{\pi l^3} \frac{e^{-\frac nl |\mathbf r_n - \mathbf r_0|}} {2^{4n-1}(2n-1)!} \sum_{k=0}^{2n-2} \frac{(4n-4-k)!(2 \frac nl |\mathbf r_n - \mathbf r_0|)^k} {k!(2n-2-k)!} \label{E_n_full} \end{equation} In the case of $n=1$ the sum in~\ref{E_n_full} is equal to $1$ and we obtain~\ref{f_e}. \subsection {Properties of \texorpdfstring{$E_n$}{Eₙ}} In this subsection we study representations of $E_n$ other than~\ref{E_n_full} and its connection with hypergeometric functions. \input{exp_integral_GR.tex} \subsubsection {Proof that \texorpdfstring{$E_n$}{Eₙ} is properly normalised} The integral $\int_{\R3} E_n(\mathbf r_n)\,\mathrm{d}^3 \mathbf r_n$ is equal to~$1$, since $E_n$ is a properly normalised pdf. Here we calculate it explicitly using the formula~\ref{E_n_full}. After shifting $\mathbf{r}_n$ by $\mathbf{r}_0$ (which does not affect the total integral), changing to spherical coordinates, and integrating over the angles, we obtain the equality to be proved \begin{equation*} \frac{n^3}{\pi l^3} \, 4\pi \int_0^\infty r_n^2\,\mathrm{d} r_n \frac{e^{-\frac nl r_n}} {2^{4n-1}(2n-1)!} \sum_{k=0}^{2n-2} \frac{(4n-4-k)!(2 \frac nl r_n)^k} {k!(2n-2-k)!} = 1. \end{equation*} Using the integral $ \int_0^\infty x^n e^{-x} \,\mathrm{d} x = \Gamma(n+1) = n!$, this transforms to \begin{equation} \frac 1{2^{4n-3}(2n-1)!} \sum_{k=0}^{2n-2} \frac{(4n-4-k)!2^k(k+2)!} {k!(2n-2-k)!} = 1. \label{normfact} \end{equation} This equality holds for $n=1$. For $n=2$ the left-hand side $$ \frac 1{2^53!}\left(\frac{4!\,2!}{2!} + \frac{3! \, 2 \!\cdot\! 3!}{1!} + \frac{2!\,2^2 \!\cdot\! 4!}{2!}\right) = \frac 1{2^5}(4 + 12 + 16) = 1. $$ In the remainder of this subsubsection we prove the identity \ref{normfact}. Different techniques for the calculation of closed forms of summations involving binomial coefficients can be found in~\cite{GKP}. Here we use the method of hypergeometric functions.
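Before the analytic proof, the identity~\ref{normfact} is easy to confirm in exact rational arithmetic for small $n$; a short sketch (our own check, standard Python only):
\begin{verbatim}
from math import factorial
from fractions import Fraction

def lhs(n):
    # left-hand side of (normfact), evaluated exactly
    s = sum(Fraction(factorial(4*n - 4 - k) * 2**k * factorial(k + 2),
                     factorial(k) * factorial(2*n - 2 - k))
            for k in range(2*n - 1))
    return s / (2**(4*n - 3) * factorial(2*n - 1))

print(all(lhs(n) == 1 for n in range(1, 21)))  # True
\end{verbatim}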
The Gaussian (or ordinary) hypergeometric function $_2F_1(a,b;c;z)$ is a special function represented by the hypergeometric series (\!\!\cite{PBM}, 7.2.1(1)) \begin{eqnarray*} _2F_1(a,b;c;z) & = & 1 + \frac{ab}c z + \frac{a(a+1)b(b+1)}{c(c+1)}\frac{z^2}{2!} + \\ & & + \frac{a(a+1)(a+2)b(b+1)(b+2)}{c(c+1)(c+2)}\frac{z^3}{3!} + \ldots \end{eqnarray*} This can be rewritten using the rising factorial or Pochhammer symbol \begin{equation} \begin{split} (a)_0 & = 1, \\ (a)_n & = a(a+1)\ldots(a+n-1), \label{Pochhammer} \end{split} \end{equation} then \begin{equation} _2F_1(a,b;c;z) = \sum_{k=0}^\infty \frac{(a)_k(b)_k}{(c)_k} \frac{z^k}{k!}. \end{equation} In the case when $a$ or $b$ is a negative integer, only a finite number of terms is nonzero. Using the Pochhammer symbol~\ref{Pochhammer}, we can express \begin{eqnarray} \frac{n!}{(n-k)!} = n (n-1) \ldots (n-k+1) = (-1)^k(-n)_k, \nonumber \\ (n-k)! = \frac{n!}{(-1)^k(-n)_k} \label{(n-k)!} \\ (k+2)! = 1 \cdot 2 \cdot 3 \ldots (k+2) = 2\cdot(3)_k. \label{(k+2)!} \end{eqnarray} The sum in \ref{normfact} can be rewritten as \begin{eqnarray} & & \sum_{k=0}^{2n-2} \frac{(4n-4-k)!(k+2)!} {(2n-2-k)!} \frac {2^k}{k!} \stackrel{\left( \textrm{ \ref{(n-k)!},\ref{(k+2)!} } \right)}{=} 2 \frac{(4n-4)!}{(2n-2)!} \sum_{k=0}^{2n-2} \frac{(-2n+2)_k (3)_k}{(-4n+4)_k} \frac{2^k}{k!} = \nonumber \\ & & \hspace{4.5cm} = 2 \frac{(4n-4)!}{(2n-2)!} \, {}_2F_1(-2n+2,3;-4n+4;2) \label{2F1(2)} \end{eqnarray} This hypergeometric function can be calculated using the formula 7.3.8(6) in~\cite{PBM}: \begin{equation} _2F_1(-n,a;-2n;2) = 2^{2n} \frac{n!}{(2n)!} \left(\frac{a+1}2\right)_{\!\!n} \label{2F1(2)PBM}. \end{equation} Substituting \ref{2F1(2)PBM} into \ref{2F1(2)}, we obtain the original equality~\ref{normfact}. \input{exp_cdf.tex} \input{exp_cdf_approx} \section{Normal distribution} \label{normal_distribution} \subsection {Introduction. Distribution of the sample mean \texorpdfstring{$G_n$}{Gₙ}} A one-dimensional normal (or Gaussian) distribution has the pdf \begin{equation} g(x) = \frac 1{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-x_0)^2}{2\sigma^2}}. \end{equation} With this definition the variance $\mathrm E[(x-x_0)^2] = \sigma^2$. For $d$ dimensions the spherically symmetric multivariate normal distribution is \begin{equation} g(\mathbf r) = \frac 1{(2\pi\sigma^2)^{\frac d2}} e^{-\frac{(\mathbf r- \mathbf r_0)^2}{2\sigma^2}} \label{gauss_d} \end{equation} \paragraph{The Fourier transform} of the multivariate normal distribution (\ref{Fourier}) \begin{multline} \hat g(\mathbf p) = \int_{\mathbb R^d} \frac {e^{-i\mathbf{pr}}}{(2\pi)^{\frac d2}} \frac{e^{-\frac{(\mathbf r- \mathbf r_0)^2}{2\sigma^2}}} {(2\pi\sigma^2)^{\frac d2}} \mathrm d^d\mathbf r = \frac 1{(2\pi)^{\frac d2}} \int_{\mathbb R^d} \frac{e^{-\frac{(\mathbf r + i \mathbf p \sigma^2)^2}{2\sigma^2}}} {(2\pi\sigma^2)^{\frac d2}} e^{-\frac{\mathbf p^2 \sigma^2}2} \mathrm d^d\mathbf r = \\ = \frac 1{(2\pi)^\frac d2} e^{-\frac{\mathbf p^2 \sigma^2}2} \label{GaussFourier} \end{multline} (the completion of the square above is written for $\mathbf r_0 = 0$; for $\mathbf r_0 \ne 0$ an extra phase factor $e^{-i\mathbf{pr}_0}$ appears, and the offset is restored below by an explicit shift). This is similar to the normal distribution with the variance $\frac 1{\sigma^2}$, except that it is not properly normalised, since the Fourier transform preserves the $L^2$-norm, but not the $L^1$-norm. For the Gaussian distribution the direct Fourier transform coincides with the inverse Fourier transform.
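As a sanity check, the formula~\ref{GaussFourier} can be confirmed numerically; a one-dimensional quadrature sketch (our own illustration, assuming SciPy; the grid limits and tolerance are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

sigma, p = 0.7, 1.3
# d = 1, r0 = 0: the transform is real, so integrate the cosine part
num, _ = quad(lambda x: np.cos(p*x)
              * np.exp(-x**2 / (2*sigma**2))
              / np.sqrt(2*np.pi*sigma**2) / np.sqrt(2*np.pi), -30, 30)
exact = np.exp(-p**2 * sigma**2 / 2) / np.sqrt(2*np.pi)
print(abs(num - exact) < 1e-7)  # True
\end{verbatim}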
The Fourier transform of the convolution of $n$ $d$-dimensional Gaussian distributions (\ref{fenFourierGen}) \begin{equation} \hat g_n(\mathbf p) = (2\pi)^{\frac{(n-1)d}2} \frac 1{(2\pi)^{\frac {nd}2}} e^{-\frac{\mathbf p^2 n \sigma^2}2} = \frac 1{(2\pi)^{\frac d2}} e^{-\frac{\mathbf p^2 n \sigma^2}2}. \end{equation} \paragraph{The pdf of the convolution of $n$ Gaussian pdfs} This pdf, which corresponds to the sum of $n$ normally distributed variables, can be obtained by taking the inverse Fourier transform using (\ref{GaussFourier}): \begin{equation} g_n(\mathbf r) = \frac 1{(2\pi n \sigma^2)^\frac d2} e^{-\frac{\mathbf r^2}{2 n \sigma^2}}. \end{equation} This is again the normal distribution with the variance re-scaled to $n\sigma^2$. We pass to the arithmetic mean of the Gaussian vectors, $\mathbf r_n = \frac{\mathbf{r}}n$, and we shift the center of the distribution to $\mathbf r_0$; then the standard deviation becomes $\frac \sigma {\sqrt n}$: \begin{equation} G_n(\mathbf r_n) = \frac {n^{\frac d2}}{(2\pi\sigma^2)^{\frac d2}} e^{-\frac{n(\mathbf r_n - \mathbf r_0)^2}{2\sigma^2}} \end{equation} \subsection {\texorpdfstring{$\mathrm{CDF}_G(\cos\theta({\mathbf r_n}))$} {CDF(cosθ(rₙ))}} In this subsection we work with $d = 3$. For the calculation of integrals in spherical coordinates in arbitrary dimension, one may consult \cite{Fi3}. \begin{equation*} \mathrm{CDF}(\cos\theta) = \frac{2\pi n^{\frac 32}}{(2\pi\sigma^2)^{\frac 32}} \int_0^\infty r^2 \mathrm{d} r \int_{\cos\theta}^1 e^{-\frac{n\left(r^2 - 2 r r_0 \cos\theta' + r_0^2\right)}{2\sigma^2}} \mathrm{d} \cos\theta' , \end{equation*} the inner integral is taken easily in 3-dimensional space, \begin{equation*} \int_{\cos\theta}^1 e^{\frac{2 n r r_0 \cos\theta'}{2\sigma^2}} \mathrm{d} \cos\theta' = \frac{\sigma^2}{nrr_0} \left( e^{\frac{2nrr_0}{2\sigma^2}} - e^{\frac{2nrr_0\cos\theta}{2\sigma^2}} \right) , \end{equation*} therefore \begin{equation} \mathrm{CDF}(\cos\theta) = \frac{\sqrt{n}}{\sqrt{2\pi\sigma^2}r_0} \int_0^\infty r \left( e^{-\frac{n}{2\sigma^2}\left(r^2 - 2 r r_0 + r_0^2\right)} - e^{-\frac{n}{2\sigma^2}\left(r^2 - 2 r r_0 \cos\theta + r_0^2\right)} \right) \mathrm{d} r . \label{CDF_G_eq} \end{equation} To calculate the first term in brackets, it is sufficient to calculate the second term and put $\cos\theta = 1$. We complete the square in the integral \begin{equation} \int_0^\infty r e^{-\frac{n}{2\sigma^2}\left(r^2 - 2rr_0\cos\theta + r_0^2\right)} \mathrm{d} r = e^{-\frac{n}{2\sigma^2}r_0^2\left(1-\cos^2\theta\right)} \int_0^\infty r e^{-\frac{n}{2\sigma^2}\left(r-r_0\cos\theta\right)^2} \mathrm{d} r , \label{G_complete_sq} \end{equation} then we split the latter integral into two using $r = (r - r_0 \cos\theta) + r_0 \cos\theta$. The first part is a total derivative with respect to $r-r_0 \cos\theta = y$, \begin{multline*} \int_0^\infty r e^{-\frac{n}{2\sigma^2}(r-r_0\cos\theta)^2} \mathrm{d} r = \int_{-r_0\cos\theta}^\infty y e^{-\frac{ny^2}{2\sigma^2}} \mathrm{d} y + r_0 \cos\theta \int_{-r_0\cos\theta}^{\infty} e^{-\frac{ny^2}{2\sigma^2}} \mathrm{d} y = \\ = \frac{\sigma^2}n e^{-\frac{nr_0^2\cos^2\theta}{2\sigma^2}} + r_0 \cos\theta \int_{-r_0\cos\theta}^{\infty} e^{-\frac{ny^2}{2\sigma^2}} \mathrm{d} y .
\end{multline*} The last term can be expressed through the \textit{error function} (\cite{GradRyzh}, 8.250(1)): \begin{equation} \mathrm{erf}(x) = \frac 2{\sqrt{\pi}} \int_0^x e^{-t^2} \mathrm{d} t \label{erf} , \end{equation} \begin{align} \int_{-r_0\cos\theta}^\infty e^{-\frac{ny^2}{2\sigma^2}} \mathrm{d} y & = \sqrt{\frac{2\sigma^2}n} \int_{-\frac{\sqrt{n} r_0}{\sqrt{2\sigma^2}} \cos\theta}^\infty e^{-t^2} \mathrm{d} t \nonumber \\ & = \sqrt{\frac{\pi\sigma^2}{2n}} + \sqrt{\frac{\pi\sigma^2}{2n}} \mathrm{erf}\left(\frac{\sqrt n r_0}{\sqrt{2\sigma^2}} \cos\theta\right) \label{G_final_erf} \end{align} Combining \ref{G_complete_sq} and \ref{G_final_erf} into \ref{CDF_G_eq}, \begin{multline} \mathrm{CDF}(\cos\theta) = \frac{\sqrt n}{\sqrt{2\pi\sigma^2}r_0} \Bigg( \frac{\sigma^2}n e^{-\frac{nr_0^2}{2\sigma^2}} + r_0 \sqrt{\frac{\pi\sigma^2}{2n}} \left(1 + \mathrm{erf}\left(\frac{\sqrt n r_0}{\sqrt{2\sigma^2}}\right) \right) - \\ - e^{-\frac n{2\sigma^2}r_0^2 \left(1-\cos^2\theta \right)} \left(\frac{\sigma^2}n e^{-\frac{n r_0^2 \cos^2\theta}{2\sigma^2}} + r_0 \cos\theta \sqrt{\frac{\pi\sigma^2}{2n}} \left(1 + \mathrm{erf}\left(\frac{\sqrt n r_0}{\sqrt{2\sigma^2}} \cos\theta \right) \right) \right) \Bigg) \end{multline} The first term cancels out, and we obtain the final result: \begin{align} \mathrm{CDF}_G(\cos\theta({\mathbf r_n})) = & \frac 12 \Bigg( 1 + \mathrm{erf}\left(\frac{\sqrt n r_0}{\sqrt 2 \sigma}\right) \nonumber \\ & - e^{-\frac {nr_0^2}{2\sigma^2} \left(1-\cos^2\theta \right)} \cos\theta \left( 1 + \mathrm{erf}\left(\frac{\sqrt n r_0}{\sqrt 2 \sigma} \cos\theta \right) \right) \Bigg) \label{CDF_G_exact} \end{align} Note that $\mathrm{CDF_G}$ depends only on a single combination of the parameters, $\sqrt n \frac{r_0}{\sigma}$. \subsection { Approximations of \texorpdfstring{$\mathrm{CDF}_G(\cos\theta)$} {CDF(cosθ)} and \texorpdfstring{$\theta(cl)$}{θ(cl)} } In this subsection we consider the behaviour of $\mathrm{CDF}_G(\cos\theta)$ for different values of $\cos\theta$ and parameters and the behaviour of confidence intervals ($\theta$ or $\cos\theta$) for given confidence levels $\mathrm{CDF}_G(\cos\theta) = cl$. We introduce the parameter \begin{equation} a = \frac{\sqrt n r_0}{\sqrt 2 \sigma} \label{a_G} \end{equation} and express the CDF as \begin{equation} \mathrm{CDF}(\cos\theta) = \frac 12 \left( 1 + \mathrm{erf}(a) - e^{-a^2\left(1-\cos^2\theta\right)} \cos\theta \left(1 + \mathrm{erf}\left(a\cos\theta\right)\right) \right). \label{CDF_G_a} \end{equation} In what follows we work with \ref{CDF_G_a}, but keep in mind the expression~\ref{a_G} of $a$ in terms of the original parameters of the distribution $n, r_0, \sigma$. We are interested not only in limiting cases, but even more in finite-statistics samples. We group the terms according to their orders and keep lower-order terms explicitly. \subsubsection { \texorpdfstring{$\theta$}{θ} close to 0, $n$ large } The asymptotic representation of the error function for large argument is (\cite{GradRyzh}, 8.254) \begin{equation} \mathrm{erf}(z) = 1 - \frac{e^{-z^2}}{\sqrt \pi z} \left( \sum_{k=0}^n (-1)^k \frac{(2k-1)!!}{\left(2z^2\right)^k} + \mathit O \left(|z|^{-2n-2}\right) \right) \label{erf_large_arg} \end{equation} (where $(-1)!! = 1$). $\mathrm{erf}(a)$ tends very rapidly to 1 as $a$ increases. Therefore the simplest approximation would be to replace $\mathrm{erf}$ by $1$ for large arguments.
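The accuracy of the truncated series~\ref{erf_large_arg} is easy to gauge numerically; a sketch using only the standard library (the helper name \texttt{erf\_asymptotic} is ours):
\begin{verbatim}
from math import erf, exp, pi, sqrt

def erf_asymptotic(z, n):
    """Truncated asymptotic series (erf_large_arg), terms k = 0..n."""
    s, term = 0.0, 1.0
    for k in range(n + 1):
        s += term
        term *= -(2*k + 1) / (2*z**2)  # builds (-1)^k (2k-1)!!/(2 z^2)^k
    return 1.0 - exp(-z**2) / (sqrt(pi) * z) * s

print(abs(erf(3.0) - erf_asymptotic(3.0, 4)))  # already ~1e-8 at z = 3
\end{verbatim}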
Thus for $a\gg1$, $a \cos\theta \gg 1$ \begin{equation} \mathrm{CDF}(\cos\theta) = 1 - e^{-a^2(1-\cos^2\theta)} \cos\theta + \alpha_1, \label{CDF_G_large_param} \end{equation} where \begin{equation} \alpha_1 = \frac{\mathrm{erf}(a) - 1}2 - e^{-a^2(1-\cos^2\theta)} \cos\theta \frac{\mathrm{erf}(a\cos\theta) - 1}2 = \mathit O \left( a^{-1} e^{-a^2} \right). \label{G_alpha_1} \end{equation} Expressing $\theta$ from \ref{CDF_G_large_param} is more difficult, since the exponent $a^2\left(1-\cos^2\theta\right)$ can be arbitrarily large. We fix $\mathrm{CDF}(\cos\theta) = cl$, move the term with $\theta$ to the left side of \ref{CDF_G_large_param}, and take the logarithm \begin{gather} \ln \cos \theta - a^2 \left(1-\cos^2\theta\right) = \ln \left(1 - cl + \alpha_1 \right) \label{G_not_big_cl} , \\ \frac 12 \ln\left(1-\sin^2\theta\right) - a^2 \sin^2\theta = \ln\left(1 - cl\right) + \mathrm{ln}\left(1 + \frac{\alpha_1}{1 - cl}\right) \label{G_expand_ln} \end{gather} The equation \ref{G_not_big_cl} means that the results for lower $cl$ will be more precise than for $cl$ very close to 1; namely, $1 - cl$ should be much larger than $\alpha_1$. Lower $cl$ also corresponds to smaller $\theta$. In order to solve \ref{G_expand_ln} w.r.t. $\sin\theta$, we make the reasonable assumption $\sin^2\theta \ll 1$; we introduce \begin{equation} \beta_1 = \mathrm{ln}\left(1-\sin^2\theta\right) + \sin^2\theta = \mathit O(\sin^4\theta) \label{G_beta_1}, \end{equation} we also rewrite the last term in \ref{G_expand_ln} as \begin{equation} \alpha_2 = \mathrm{ln}\left(1 + \frac{\alpha_1}{1 - cl}\right) = \mathit O(\alpha_1) \label{G_alpha_2} \end{equation} Then from \ref{G_expand_ln}, \ref{G_beta_1}, \ref{G_alpha_2} \begin{equation} \sin^2\theta = \frac{-\mathrm{ln}(1-cl)}{\frac 12 + a^2} + \frac{\beta_1}{1 + 2a^2} - \frac{\alpha_2}{\frac 12 + a^2} \label{G_sin_theta_ord_2} \end{equation} The equation \ref{G_expand_ln} can be solved with a better precision if we take into account more terms from the ln series (see e.g. \cite{GradRyzh} 1.511). Let \begin{equation} \beta_2 = \mathrm{ln}\left(1-\sin^2\theta\right) + \sin^2\theta + \frac 12 \sin^4\theta = \mathit O(\sin^6\theta) \label{G_beta_2}, \end{equation} then \ref{G_expand_ln} transforms to a quadratic equation on $\sin^2\theta$ \begin{gather} \frac 12 \beta_2 - \frac 14 \sin^4\theta - \left(\frac 12 + a^2\right)\sin^2\theta = \mathrm{ln}(1-cl) + \alpha_2, \nonumber \\ \sin^4\theta + 2\left(1 + 2a^2\right)\sin^2\theta + 4\mathrm{ln}(1-cl) + 4\alpha_2 - 2\beta_2 = 0, \nonumber \\ \sin^2\theta = -\left(1 + 2a^2\right) + \sqrt{(1+2a^2)^2 - 4\mathrm{ln}(1-cl) -4\alpha_2 + 2\beta_2} \label{G_sin_theta_ord_4} \end{gather} The most precise formula for large $a$ should be \ref{G_sin_theta_ord_4}. For very large $a$ and small $\theta$ we can get a simpler expression from~\ref{G_sin_theta_ord_2}: \begin{equation} \theta \approx \frac{\sqrt{-\mathrm{ln}(1-cl)}}{a} \stackrel{\ref{a_G}}{=} \frac{\sqrt{-2\mathrm{ln}(1-cl)}\sigma}{\sqrt n r_0} \label{G_theta_large_n_simple} \end{equation} \subsubsection { \texorpdfstring{$\theta$}{θ} close to \texorpdfstring{$\frac{\pi}2$}{π/2} } One can expect confidence intervals to be near $\frac{\pi}2$ when $a$ is neither too large nor too small. Therefore in this subsubsection we assume $a \sim 1$, so that $a\cos\theta \ll 1$.
The error function for small arguments can be approximated by integrating the exponent series in \ref{erf} term by term: \begin{equation} \mathrm{erf}(z) = \frac 2{\sqrt{\pi}} \left(z - \frac{z^3}{3} + \mathit O\left(z^5\right) \right) \label{erf_small_arg} \end{equation} \begin{gather} \mathrm{CDF}(\cos\theta) = \frac 12 (1 + \mathrm{erf}(a)) -\frac{e^{-a^2}}2 \cos\theta(1 + \frac 2{\sqrt\pi} a\cos\theta + \gamma_1) \nonumber \\ + \frac{e^{-a^2}}2 \left(1 - e^{a^2\cos^2\theta}\right) \cos\theta (1 + \mathrm{erf}(a\cos\theta)), \\ \textrm{where } \gamma_1 = \mathrm{erf}(a\cos\theta) - \frac{2}{\sqrt\pi} a\cos\theta = \mathit O(a^3\cos^3\theta) \label{G_gamma_1} \end{gather} To find $\cos\theta(cl)$ we denote \begin{equation} \delta_1 = (e^{a^2\cos^2\theta} - 1)(1 + \mathrm{erf}(a\cos\theta)) = \mathit O(\cos^2\theta), \label{G_delta_1} \end{equation} then \begin{gather} \cos\theta \left(1 + \frac2{\sqrt\pi} a\cos\theta \right) + \cos\theta(\gamma_1 + \delta_1) = (1 + \mathrm{erf}(a) - 2 cl) e^{a^2} \label{G_small_cos_eq} \end{gather} In the leading order the solution $\cos\theta$ of \ref{G_small_cos_eq} is the r.h.s. of \ref{G_small_cos_eq}. Therefore when we solve that equation up to $\mathit O(\cos^3\theta)$, we choose the `+' root: \begin{equation} \cos\theta = \frac{-1 + \sqrt{1 + \frac8{\sqrt\pi}ae^{a^2}(1 + \mathrm{erf}(a) - 2 cl) - \frac8{\sqrt\pi}a \cos\theta(\gamma_1 + \delta_1) }} {\frac 4{\sqrt\pi}a} \label{G_cos_theta_ord_3} . \end{equation} \subsubsection { \texorpdfstring{$\theta$}{θ} close to \texorpdfstring{$\pi$}{π} } \begin{equation} \mathrm{CDF}(\cos\theta) = \frac12(1 + \mathrm{erf}(a)) - \frac12 e^{-a^2\sin^2\theta} \cos\theta (1 + \mathrm{erf}(a\cos\theta)) \end{equation} The situation when $\theta$ is close to $\pi$ can arise when $a$ is small and we are interested in large confidence levels (our precision is low, but still allows us to exclude a region near the pole $\theta = \pi$). In this subsubsection we make no assumptions on $a$, but use a Taylor series expansion of $\mathrm{erf}(z)$ around an arbitrary point: \begin{equation} \mathrm{erf}(a + \Delta) = \mathrm{erf}(a) + \frac2{\sqrt\pi}e^{-a^2}\Delta + \mathit O(\Delta^2) \label{erf_Taylor} . \end{equation} Therefore \begin{gather} \mathrm{erf}(a\cos\theta) = \mathrm{erf}\left(-a\sqrt{1 - \sin^2\theta}\right) = - \mathrm{erf}(a) + \frac1{\sqrt\pi}ae^{-a^2}\sin^2\theta + \varepsilon_1 \label{G_varepsilon_1} , \\ \varepsilon_1 = \mathit O(\sin^4\theta) \nonumber \end{gather} To find $\theta(cl)$ we solve the equation \begin{multline} e^{-a^2\sin^2\theta}(-\cos\theta)(1 + \mathrm{erf}(a\cos\theta)) = 2cl - 1 - \mathrm{erf}(a) ,\\ -a^2\sin^2\theta + \frac12\mathrm{ln}(1-\sin^2\theta) + \mathrm{ln}(1+\mathrm{erf}(a\cos\theta)) = \mathrm{ln}(2cl - 1 - \mathrm{erf}(a)) \end{multline} Using \ref{G_varepsilon_1}, \begin{align} \mathrm{ln}(1 + \mathrm{erf}(a\cos\theta)) & = \mathrm{ln}(1 - \mathrm{erf}(a)) + \mathrm{ln}\left(1 + \frac{\frac1{\sqrt\pi} ae^{-a^2}\sin^2\theta + \varepsilon_1} {1 - \mathrm{erf}(a)} \right) \nonumber \\ & = \mathrm{ln}(1-\mathrm{erf}(a)) + \frac{ae^{-a^2}}{\sqrt\pi(1 - \mathrm{erf}(a))} \sin^2\theta + \varepsilon_2, \label{G_varepsilon_2} \end{align} \begin{equation*} \varepsilon_2 = \mathit O(\sin^4\theta).
\end{equation*} Using \ref{G_beta_1} and \ref{G_varepsilon_2}, we obtain \begin{multline} \sin^2\theta \left( -a^2 - \frac 12 + \frac{ae^{-a^2}}{\sqrt\pi(1 - \mathrm{erf}(a))} \right) = -\frac{\beta_1}2 - \mathrm{ln}(1 - \mathrm{erf}(a)) - \varepsilon_2 \nonumber \\ + \mathrm{ln}(2cl - 1 - \mathrm{erf}(a)), \end{multline} \begin{equation} \sin^2\theta = \left( a^2 + \frac 12 - \frac{ae^{-a^2}}{\sqrt\pi(1 - \mathrm{erf}(a))} \right)^{-1} \left(\mathrm{ln}\left(\frac{1-\mathrm{erf}(a)}{2cl - 1 - \mathrm{erf}(a)}\right) + \frac{\beta_1}2 + \varepsilon_2 \right) . \label{G_theta_near_pi_ord_2} \end{equation} The r.h.s. of \ref{G_theta_near_pi_ord_2} is positive, since the argument of the last ln is larger than 1: $1 - \mathrm{erf}(a) > 2cl - 1 - \mathrm{erf}(a)$. However, the denominator of the logarithm's argument should also be positive, \begin{equation} cl > \frac{1 + \mathrm{erf}(a)}2 \label{G_theta_pi_cl} . \end{equation} This means that if we want to exclude some percentage of the outcomes of $r_n$ with the directions near the pole, we should choose a confidence level which satisfies \ref{G_theta_pi_cl}. This is a necessary, but not a sufficient condition on $cl$. A more precise condition is that the r.h.s. of~\ref{G_theta_near_pi_ord_2} is less than $1$. When $a$ is small we obtain $ 2\mathrm{ln}\left(\frac1{2cl-1}\right) < 1 \textrm{, and } cl > \frac12(1+e^{-1/2}) \approx 0.80. $ \section{Introduction} The problem originated from neutrino physics \cite{Chooz99}. We consider a set of vectors in 3-dimensional Euclidean space $\mathbb R^3$. We make a parametric assumption that this set is a sample of independent identically distributed variables, where a parameter of the distribution is a direction. As our estimator we take the direction of the arithmetic mean of the sample. This allows a simpler mathematical treatment compared to other possible estimators, since the sum of variables corresponds to the convolution of their pdfs, and this can be calculated in a standard way using the Fourier transform. Our goal is to find the distribution of the estimate in order to calculate confidence sets on the sphere, which we regard as the precision of the estimator. We study both the exact case for finite samples and asymptotic cases, e.g., for a large number of events. Our parametric models are the exponential distribution, section \ref{exp_distribution}, and the normal (Gaussian) distribution, section \ref{normal_distribution}. These results are new compared to previous studies. The author was unable to find directional results for the exponential distribution. In physics articles, usually only the limiting case of a large number of events is considered \cite{Chooz99}. Mathematical literature on directional statistics usually deals with distributions on spheres \cite{MardiaJupp}, while in our case we have complete 3-dimensional information. This work was written for those who are not necessarily statisticians or mathematicians but have encountered the problems treated here. Therefore the author attempted to use only the basic facts from mathematical undergraduate courses and to introduce more advanced notions in detail where they are used. All the references used in this work can be found on the internet.
\subsection{Convolution of pdfs using the Fourier transform} The probability density function (pdf) $f(\mathbf r)$ of the sum of two independent variables $\mathbf r_1+\mathbf r_2$ in $\mathbb R^\mathrm d$ is given by the \textit{convolution} of their pdfs: $$ f_{\mathbf r_1+\mathbf r_2}(\mathbf r) = (f_1 * f_2)(\mathbf r) = \int_{\mathbb R^\mathrm d} f_1(\mathbf r') f_2(\mathbf r - \mathbf r') \,\mathrm{d}^d \mathbf r' $$ We denote the \textit{Fourier transform} \footnote{this is similar to the \textit{characteristic function} in probability theory; the latter is the complex conjugate and lacks the factor $(2\pi)^{\frac d2}$ } of a function $f(\mathbf r)$ as \begin{equation} \hat f(\mathbf p) = \int_{\mathbb R^\mathrm d} \frac{e^{-i\mathbf{pr}}}{(2\pi)^{d/2}} f(\mathbf{r}) \,\mathrm{d}^d \mathbf r, \label{Fourier} \end{equation} and the inverse Fourier transform of $f$ as $\tilde f$ (therefore $\tilde{\hat f}(x) \equiv f(x)$). With this definition the inverse Fourier transform operator is the complex conjugate of the direct Fourier transform operator. Then \begin{align} \widehat{f * g}(\mathbf p) &= \int_{\mathbb R^\mathrm d} \mathrm{d}^d \mathbf r \int_{\mathbb R^\mathrm d} \frac{e^{-i\mathbf{pr}}}{(2\pi)^{d/2}} f(\mathbf r') g(\mathbf r - \mathbf r') \,\mathrm{d}^d \mathbf r' \nonumber \\ &= \int_{\mathbb R^\mathrm d} \frac{e^{-i\mathbf{pr}'}}{(2\pi)^{d/2}} f(\mathbf r') \,\mathrm{d}^d \mathbf r' \int_{\mathbb R^\mathrm d} e^{-i\mathbf p(\mathbf r - \mathbf r')} g(\mathbf r - \mathbf r') \,\mathrm{d}^d \mathbf r \nonumber \\ &= (2\pi)^{d/2}\hat f(\mathbf p) \hat g(\mathbf p). \end{align} It is a well-known property of the Fourier transform that it maps the \textit{convolution} of two pdfs to the \textit{product} of their Fourier transforms. Therefore the Fourier transform of the convolution of $n$ distributions $f$ is \begin{equation} \label{fenFourierGen} \hat f_n(\mathbf p) = (\hat f (\mathbf p))^n (2\pi)^{\frac {(n-1)d}2} \end{equation}
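A discrete analogue of this property can be verified directly with the fast Fourier transform; a small sketch (our own check, assuming NumPy; note that \texttt{np.fft} uses a convention without the $(2\pi)^{d/2}$ factor and that the discrete convolution is circular):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
f = rng.random(256)
g = rng.random(256)

# convolution theorem on a grid: the FFT of the circular convolution
# equals the pointwise product of the FFTs
conv = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))
direct = np.array([sum(f[j] * g[(i - j) % 256] for j in range(256))
                   for i in range(256)])
print(np.allclose(conv, direct))  # True
\end{verbatim}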
\section{Introduction} In~\cite[Lemma~2.39]{AVV-Expos-89}, the following result was obtained: The function \begin{equation} \frac1x\ln\Gamma\biggl(1+\frac{x}2\biggr) \end{equation} is strictly increasing from $[2,\infty)$ onto $[0,\infty)$ and \begin{equation}\label{avv-89-funct} \lim_{x\to\infty}\biggl[\frac1{x\ln x}\ln\Gamma\biggl(1+\frac{x}2\biggr)\biggr]=\frac12, \end{equation} where \begin{equation}\label{gamma-dfn} \Gamma(x)=\int^\infty_0t^{x-1} e^{-t}\td t \end{equation} for $x>0$ denotes the classical Euler gamma function $\Gamma(x)$. From this, the following conclusions were deduced in~\cite[Lemma~2.40]{AVV-Expos-89}: The sequence $\Omega_n^{1/n}$ decreases strictly to $0$ as $n\to\infty$, the series $\sum_{n=2}^\infty\Omega_n^{1/\ln n}$ is convergent, and \begin{equation}\label{n-to-infty-Omega-n} \lim_{n\to\infty}\Omega_n^{1/(n\ln n)}=e^{-1/2}, \end{equation} where \begin{equation} \Omega_n=\frac{\pi^{n/2}}{\Gamma(1+n/2)} \end{equation} stands for the $n$-dimensional volume of the unit ball $\mathbb{B}^n$ in $\mathbb{R}^n$. Further, it was conjectured in~\cite[Remark~2.41]{AVV-Expos-89} that the function in~\eqref{avv-89-funct} is strictly increasing from $[2,\infty)$ onto $\bigl[0,\frac12\bigr)$, and this would imply that the sequence $\Omega_n^{1/(n\ln n)}$ is strictly decreasing for $n\ge2$. \par In~\cite[Theorem~1.5]{Anderson-Qiu-Proc-1997}, it was proved that the function \begin{equation}\label{aderson-qiu-funct} \frac{\ln\Gamma(x+1)}{x\ln x} \end{equation} is strictly increasing from $(1,\infty)$ onto $(1-\gamma,1)$, where $\gamma$ is the Euler-Mascheroni constant. From this, the above-mentioned conjecture in~\cite[Remark~2.41]{AVV-Expos-89} was resolved in~\cite[Corollary~3.1]{Anderson-Qiu-Proc-1997}. In addition, it was conjectured in~\cite[Conjecture~3.3]{Anderson-Qiu-Proc-1997} that the function~\eqref{aderson-qiu-funct} is concave on $(1,\infty)$. \par In~\cite{chen-qi-oct-04-123} and \cite[Theorem~4]{X.Li.Sci.Magna-08}, the function~\eqref{aderson-qiu-funct} was proved to be strictly increasing on $(0,\infty)$. \par In~\cite[Section~3]{elber-laforgia-pams}, the function~\eqref{aderson-qiu-funct} was proved to be concave for $x>1$. \par In~\cite[Theorem~1.1]{berg-pedersen-jcam}, it was proved that \begin{equation}\label{n-derivative-x+1} (-1)^{n-1}\biggl[\frac{\ln\Gamma(x+1)}{x\ln x}\biggr]^{(n)}>0 \end{equation} for $x>0$ and $n\in\mathbb{N}$. More strongly, the reciprocal of the function~\eqref{aderson-qiu-funct} was proved in~\cite[Theorem~1.4]{berg-pedersen-jcam} to be a Stieltjes transform for $x\in\mathbb{C}\setminus(-\infty,0]$, where $\mathbb{C}$ is the set of all complex numbers. Furthermore, among other things, it was directly shown in~\cite[Theorem~1.1]{pickrock} that the function~\eqref{aderson-qiu-funct} for $x\in\mathbb{C}\setminus(-\infty,0]$ is a Pick function. \par In~\cite[Lemma~4]{alzer-ball-ii}, it was demonstrated that the function \begin{equation}\label{F(x)-dfn-Alzer} F(x)=\frac{\ln\Gamma(x+1)}{x\ln(2x)} \end{equation} is strictly increasing on $[1,\infty)$ and strictly concave on $[46,\infty)$.
With the help of this, the double inequality \begin{equation}\label{Theorem-2-alzer-ball-ii} \exp\biggl(\frac{a}{n(\ln n)^2}\biggr) \le\frac{\Omega_{n}^{1/(n\ln n)}}{\Omega_{n+1}^{1/[(n+1)\ln(n+1)]}} <\exp\biggl(\frac{b}{n(\ln n)^2}\biggr) \end{equation} was shown in~\cite[Theorem~2]{alzer-ball-ii} to be valid for $n\ge2$ if and only if \begin{equation} a\le\ln2\ln\pi-\frac{2(\ln2)^2\ln(4\pi/3)}{3\ln3}=0.3\dotsm \quad\text{and}\quad b\ge\frac{1+\ln(2\pi)}2=1.4\dotsm. \end{equation} \par It is clear that the function in~\eqref{avv-89-funct} is equivalent to~\eqref{F(x)-dfn-Alzer}. Therefore, the above conjecture posed in~\cite[Remark~2.41]{AVV-Expos-89} was verified once again. \par The first aim of this paper is to extend the ranges of $x$ on which the function $F(x)$ is strictly increasing and strictly concave, respectively, as follows. \begin{thm}\label{F(x)-thm-Qi} On the interval $\bigl(0,\frac12\bigr)$, the function $F(x)$ defined by~\eqref{F(x)-dfn-Alzer} is strictly increasing; on the interval $\bigl(\frac12,\infty\bigr)$, it is both strictly increasing and strictly concave. \end{thm} The second aim of this paper is, with the aid of Theorem~\ref{F(x)-thm-Qi}, to generalize the decreasing monotonicity of the sequence $\Omega_n^{1/(n\ln n)}$ for $n\ge2$, the conjecture in~\cite[Remark~2.41]{AVV-Expos-89}, and the main result in~\cite[Corollary~3.1]{Anderson-Qiu-Proc-1997}, to the logarithmic convexity. \begin{thm}\label{unit-ball-qi-thm} The sequence $\Omega_{n}^{1/(n\ln n)}$ is strictly logarithmically convex for $n\ge2$. Consequently, the sequence \begin{equation} \frac{\Omega_{n}^{1/(n\ln n)}}{\Omega_{n+1}^{1/[(n+1)\ln(n+1)]}} \end{equation} is strictly decreasing for $n\ge2$. \end{thm} In~\cite[Lemma~3]{alzer-ball-ii}, the double inequality \begin{equation}\label{alzer-unit-ball-log} \frac23<\biggl[1-\frac{\ln x}{\ln(x+1)}\biggr]x\ln x\triangleq G(x)<1 \end{equation} was verified to be true for $x\ge3$. The right-hand side inequality in~\eqref{alzer-unit-ball-log} was also utilized in the proof of the inequality~\eqref{Theorem-2-alzer-ball-ii}. \par The third aim of this paper is to extend and generalize the inequality~\eqref{alzer-unit-ball-log} to a monotonicity result as follows. \begin{thm}\label{unit-ball-thm1} The function $G(x)$ defined in the inequality~\eqref{alzer-unit-ball-log} is strictly increasing on $(0,\infty)$ with \begin{equation}\label{2-limits} \lim_{x\to0^+}G(x)=-\infty\quad\text{and}\quad \lim_{x\to\infty}G(x)=1. \end{equation} \end{thm} \section{Remarks} Before proving our theorems, we give some remarks on them and on the volume of the unit ball in $\mathbb{R}^n$. \begin{rem} It is obvious that Theorem~\ref{F(x)-thm-Qi} extends or generalizes the conjecture posed in~\cite[Remark~2.41]{AVV-Expos-89} and the corresponding conclusions obtained in~\cite[Corollary~3.1]{Anderson-Qiu-Proc-1997} and~\cite[Lemma~4]{alzer-ball-ii} respectively. \end{rem} \begin{rem} For $a>1$ and $x>0$ with $x\ne\frac1a$, let \begin{equation}\label{F(x)-dfn-alpha} F_a(x)=\frac{\ln\Gamma(x+1)}{x\ln(ax)}. \end{equation} It is very natural to assert that the function $F_a(x)$ is strictly increasing on $\bigl(0,\frac1a\bigr)$ and it is both strictly increasing and strictly concave on $\bigl(\frac1a,\infty\bigr)$. More strongly, we conjecture that \begin{equation} (-1)^{n-1}[F_a(x)]^{(n)}>0 \end{equation} for $x>\frac1a$ and $n\in\mathbb{N}$.
\end{rem} \begin{rem} From Theorem~\ref{unit-ball-thm1}, it is easy to see that the right-hand side inequality in~\eqref{alzer-unit-ball-log} is sharp, but the left-hand side inequality can be sharpened by replacing the constant $\frac23=0.666\dotsm$ by a larger number $\frac{3(2\ln2-\ln3)\ln3}{2\ln2}=0.683\dotsc$. \end{rem} \begin{rem} We conjecture that \begin{equation} (-1)^{k-1}[G(x)]^{(k)}>0 \end{equation} for $k\in\mathbb{N}$ on $(0,\infty)$, that is, the function $1-G(x)$ is completely monotonic on $(0,\infty)$. \end{rem} \begin{rem} In~\cite{minus-one-rgmia} and its revised version~\cite{minus-one-JKMS.tex}, the reciprocal of the function $[\Gamma(x+1)]^{1/x}$ was proved to be logarithmically completely monotonic on $(-1,\infty)$. Consequently, the function \begin{equation} Q(x)=\biggl[\frac{\pi^{x/2}}{\Gamma(1+x/2)}\biggr]^{1/x}=\frac{\sqrt\pi\,}{[\Gamma(1+x/2)]^{1/x}} \end{equation} is also logarithmically completely monotonic on $(-2,\infty)$. In particular, the function $Q(x)$ is both strictly decreasing and strictly logarithmically convex on $(-2,\infty)$. Because $Q(n)=\Omega_n^{1/n}$ for $n\in\mathbb{N}$, the sequence $\Omega_n^{1/n}$ is strictly decreasing and strictly logarithmically convex for $n\in\mathbb{N}$. This generalizes one of the results in~\cite[Lemma~2.40]{AVV-Expos-89} mentioned above. \par Furthermore, from the logarithmically complete monotonicity of $Q(x)$, it is easy to obtain that the sequence \begin{equation}\label{Omega-n-1-n-ratio} \frac{\Omega_n^{1/n}}{\Omega_{n+1}^{1/(n+1)}} \end{equation} is also strictly decreasing and strictly logarithmically convex for $n\in\mathbb{N}$. As a direct consequence of the decreasing monotonicity of the sequence~\eqref{Omega-n-1-n-ratio}, the following double inequality may be derived: \begin{equation}\label{Omega-n+1-n-(n+1)} \Omega_{n+1}^{n/(n+1)}<\Omega_n\le\biggl(\frac2{\sqrt{\pi}\,}\biggr)^n\Omega_{n+1}^{n/(n+1)}, \quad n\in\mathbb{N}. \end{equation} When $1\le n\le4$, the right-hand side inequality in~\eqref{Omega-n+1-n-(n+1)} is better than the corresponding one in \begin{equation}\label{Theorem-1-ball-volume-rn} \frac2{\sqrt{\pi}\,}\Omega_{n+1}^{n/(n+1)}\le\Omega_n <\sqrt{e}\,\Omega_{n+1}^{n/(n+1)}, \quad n\in\mathbb{N} \end{equation} obtained in~\cite[Theorem~1]{ball-volume-rn}. \end{rem} \begin{rem} In~\cite{Open-TJM-2003-Ineq-Ext.tex}, the inequality \begin{equation}\label{yaming-ineq} \frac{[\Gamma(x+y+1)/\Gamma(y+1)]^{1/x}}{[\Gamma(x+y+2)/\Gamma(y+1)]^{1/(x+1)}} <\sqrt{\frac{x+y}{x+y+1}}\, \end{equation} was confirmed to be valid if and only if $x+y>y+1>0$ and to be reversed if and only if $0<x+y<y+1$. Taking $y=0$ and $x=\frac{n}2$ in~\eqref{yaming-ineq} leads to \begin{equation}\label{frac[Gamma(n/2+1)]} \frac{[\Gamma(n/2+1)]^{1/n}}{[\Gamma((n+2)/2+1)]^{1/(n+2)}} =\frac{\Omega_{n+2}^{1/(n+2)}}{\Omega_n^{1/n}} <\sqrt[4]{\frac{n}{n+2}}\,,\quad n>2. \end{equation} Similarly, letting $y=1$ and $x=\frac{n+1}2>1$ in~\eqref{yaming-ineq} yields \begin{equation} \frac{\Omega_{n+5}^{1/(n+3)}}{\Omega_{n+3}^{1/(n+1)}} <\frac1{\pi^{2/[(n+1)(n+3)]}}\sqrt[4]{\frac{n+3}{n+5}}\,,\quad n\ge2.
\end{equation} \end{rem} \begin{rem} In~\cite{Open-TJM-2003.tex}, the following double inequality was discovered: For $t>0$ and $y>-1$, the inequality \begin{equation}\label{open-TJM-2003-ineq} \biggl(\frac{x+y+1}{x+y+t+1}\biggr)^a <\frac{[\Gamma(x+y+1)/\Gamma(y+1)]^{1/x}} {[\Gamma(x+y+t+1)/\Gamma(y+1)]^{1/(x+t)}} <\biggl(\frac{x+y+1}{x+y+t+1}\biggr)^b \end{equation} holds with respect to $x\in(-y-1,\infty)$ if $a\ge\max\bigl\{1,\frac1{y+1}\bigr\}$ and $b\le\min\bigl\{1,\frac1{2(y+1)}\bigr\}$. Letting $t=1$, $y=0$ and $x=\frac{n}2$ for $n\in\mathbb{N}$ in~\eqref{open-TJM-2003-ineq} reveals that \begin{equation*} \frac{n+2}{n+4}<\frac{[\Gamma(n/2+1)]^{2/n}} {[\Gamma((n+2)/2+1)]^{2/(n+2)}} <\sqrt{\frac{n+2}{n+4}}\, \end{equation*} which is equivalent to \begin{equation}\label{sqrt-frac-n+2n+4} \sqrt{\frac{n+2}{n+4}}\,<\frac{\Omega_{n+2}^{1/(n+2)}}{\Omega_n^{1/n}} <\sqrt[4]{\frac{n+2}{n+4}}\,,\quad n\in\mathbb{N}. \end{equation} When $n\ge3$, the inequality~\eqref{frac[Gamma(n/2+1)]} is better than the right-hand side inequality in~\eqref{sqrt-frac-n+2n+4}. \par Taking $t=1$, $y=1$ and $x=\frac{n}2-1$ gives \begin{equation} \frac1{\pi^{2/[(n-2)n]}}\sqrt{\frac{n+2}{n+4}}\, <\frac{\Omega_{n+2}^{1/n}}{\Omega_n^{1/(n-2)}} <\frac1{\pi^{2/[(n-2)n]}}\sqrt[8]{\frac{n+2}{n+4}}\,,\quad n\in\mathbb{N}. \end{equation} \par Amazingly, replacing $t$ by $\frac12$, $y$ by $0$, and $x$ by $\frac{n}2$ in~\eqref{open-TJM-2003-ineq} results in \begin{equation}\label{ratio-Omega-n-n+1} \sqrt{\frac{n+2}{n+3}}\, <\frac{\Omega_{n+1}^{1/(n+1)}} {\Omega_{n}^{1/n}} <\sqrt[4]{\frac{n+2}{n+3}}\, \end{equation} for $n\ge-1$. When $n>2$, this refines the inequality~\eqref{Theorem-1-ball-volume-rn} in~\cite[Theorem~1]{ball-volume-rn}. \par Similarly, by setting different values of $x$, $y$ and $t$ in inequalities~\eqref{yaming-ineq} and~\eqref{open-TJM-2003-ineq}, more inequalities similar to those above may be derived immediately. \end{rem} \begin{rem} It is now clear that the inequality~\eqref{Theorem-1-ball-volume-rn} obtained in~\cite[Theorem~1]{ball-volume-rn} was thoroughly strengthened by~\eqref{Omega-n+1-n-(n+1)} and~\eqref{ratio-Omega-n-n+1} together. \end{rem} \begin{rem} The inequality~\eqref{ratio-Omega-n-n+1} and other related ones derived above motivate us to ask the following question: What are the best positive constants $a\ge3$, $b\le3$, $\lambda\le1$, $\mu\ge1$, $\alpha\ge2$, and $\beta\le4$ such that the inequality \begin{equation}\label{unit-ball-open} \sqrt[\alpha]{1-\frac\lambda{n+a}}\, <\frac{\Omega_{n+1}^{1/(n+1)}} {\Omega_{n}^{1/n}} <\sqrt[\beta]{1-\frac\mu{n+b}}\, \end{equation} holds true for $n\in\mathbb{N}$? \end{rem} \section{Lemmas} In order to prove our theorems, we need the following lemmas. \begin{lem}[\cite{subadditive-qi.tex, theta-new-proof.tex-BKMS, subadditive-qi-guo-jcam.tex}]\label{comp-thm-1} For $x\in(0,\infty)$ and $k\in\mathbb{N}$, we have \begin{equation}\label{qi-psi-ineq-1} \ln x-\frac1x<\psi(x)<\ln x-\frac1{2x} \end{equation} and \begin{equation}\label{qi-psi-ineq} \frac{(k-1)!}{x^k}+\frac{k!}{2x^{k+1}}< (-1)^{k+1}\psi^{(k)}(x)<\frac{(k-1)!}{x^k}+\frac{k!}{x^{k+1}}. \end{equation} \end{lem} \begin{lem}[{\cite[Theorem~2]{AJMAA-063-03}}]\label{Theorem-2-AJMAA-063-03} For $x>1$, we have \begin{equation}\label{ajmaa-063-03-thm1} \left(1+\frac{1}{x}\right)^{x}>\frac{x+1}{[\Gamma(x+1)]^{1/x}}. \end{equation} The inequality~\eqref{ajmaa-063-03-thm1} is reversed for $0<x<1$.
\end{lem} \begin{lem}[\cite{miqschur, Mon-Two-Seq-AMEN.tex}]\label{Mon-Two-Seq-AMEN.tex-ineq} If $t>0$, then \begin{gather} \frac{2t}{2+t}<\ln(1+t)<\frac{t(2+t)}{2(1+t)}; \label{log-ineq-qi} \end{gather} If $-1<t<0$, the inequality~\eqref{log-ineq-qi} is reversed. \end{lem} \section{Proofs of theorems} \begin{proof}[Proof of Theorem~\ref{F(x)-thm-Qi}] The inequality~\eqref{n-derivative-x+1} means that the function~\eqref{aderson-qiu-funct} is positive and increasing on $(0,\infty)$. It is apparent that the function \begin{equation*} \frac{\ln x}{\ln(2x)}=\frac1{1+\ln2/\ln x} \end{equation*} is positive and strictly increasing on $\bigl(0,\frac12\bigr)$ and $(1,\infty)$. Therefore, the function \begin{equation} F(x)=\frac{\ln\Gamma(x+1)}{x\ln x}\cdot\frac{\ln x}{\ln(2x)} \end{equation} is strictly increasing on $\bigl(0,\frac12\bigr)$ and $(1,\infty)$. \par The inequality~\eqref{ajmaa-063-03-thm1} for $0<x<1$ in Lemma~\ref{Theorem-2-AJMAA-063-03} can be rewritten as \begin{equation}\label{ajmaa-063-03-thm1-rew} \frac{\ln\Gamma(x+1)}{x}<(1-x)\ln(x+1)+x\ln x. \end{equation} \par Direct calculation yields \begin{equation*} F'(x)=\frac{x\ln(2x)\psi(x+1)-[\ln(2x)+1]\ln\Gamma(x+1)}{x^2[\ln(2x)]^2} \triangleq\frac{\theta(x)}{x^2[\ln(2x)]^2} \end{equation*} and \begin{equation*} \theta'(x)=x \ln(2x)\psi'(x+1)-\frac{\ln\Gamma(x+1)}{x}. \end{equation*} Utilizing the left-hand side inequality in~\eqref{qi-psi-ineq} for $k=1$, the inequality~\eqref{ajmaa-063-03-thm1-rew} and Lemma~\ref{Mon-Two-Seq-AMEN.tex-ineq} leads to \begin{align*} \theta'(x)&>x\biggl[\frac{1}{x+1}+\frac{1}{2(x+1)^2}\biggr]\ln(2x)-(1-x)\ln(x+1)-x\ln x\\ &>x\biggl[\frac{1}{x+1}+\frac{1}{2(x+1)^2}\biggr]\frac{2(2x-1)}{1+2x} -\frac{x(1-x)(2+x)}{2(1+x)}-x\ln x\\ &=x\biggl[\frac{2 x^4+5 x^3+8 x^2+3 x-8}{2 (x+1)^2 (2 x+1)}-\ln x\biggr]\\ &>x\biggl[\frac{2 x^4+5 x^3+8 x^2+3 x-8}{2 (x+1)^2 (2 x+1)}-\frac{2(x-1)}{1+x}\biggr]\\ &=\frac{x\bigl(2x^4-3x^3+4x^2+11x-4\bigr)}{2(x+1)^2(2 x+1)}\\ &=\frac{x[2x^2(x-1)^2+x(x+1)^2+10x-4]}{2(x+1)^2(2 x+1)}\\ &>0 \end{align*} for $\frac12<x<1$. Hence, the function $\theta(x)$ is strictly increasing on $\bigl(\frac12,1\bigr)$. Since $$ \theta\biggl(\frac12\biggr)=-\ln\Gamma\biggl(\frac32\biggr)=-\biggl(\frac12\ln\pi-\ln2\biggr)>0, $$ the function $\theta(x)$, and so the function $F'(x)$, is positive, and hence the function $F(x)$ is increasing on $\bigl(\frac12,1\bigr)$. In summary, the function $F(x)$ is strictly increasing on $\bigl(0,\frac12\bigr)$ and $\bigl(\frac12,\infty\bigr)$.
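The monotonicity just established is easy to observe numerically before turning to concavity; a sketch (our own check, assuming SciPy for $\ln\Gamma$; the grids are arbitrary):
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def F(x):
    # F(x) = ln Gamma(x+1) / (x ln(2x))
    return gammaln(x + 1.0) / (x * np.log(2.0 * x))

for a, b in [(1e-3, 0.499), (0.501, 50.0)]:
    x = np.linspace(a, b, 100_000)
    print(np.all(np.diff(F(x)) > 0))  # True on both intervals
\end{verbatim}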
\par A further direct computation gives \begin{multline*} \frac{x^3[\ln(2x)]^3F''(x)}{2[\ln(2x)]^2+3\ln(2x)+2} =\ln\Gamma(x+1)\\ +\frac{x^2[\ln(2x)]^2\psi'(x+1)-2x\ln(2x)[\ln(2x)+1]\psi(x+1)}{2[\ln(2x)]^2+3\ln(2x)+2} \end{multline*} and \begin{equation*} \frac{\td}{\td x}\biggl\{\frac{x^3[\ln(2x)]^3F''(x)}{2[\ln(2x)]^2+3\ln(2x)+2}\biggr\} =\frac{[\ln(2x)]^2\phi(x)}{\bigl\{2[\ln(2x)]^2+3\ln(2x)+2\bigr\}^2} \end{equation*} for $x>\frac12$, where, by making use of the right-hand side inequality in~\eqref{qi-psi-ineq-1} and the inequality~\eqref{qi-psi-ineq} for $k=1,2$, \begin{align*} \phi(x)&=[2\ln(2x)+5]\psi(x+1)\\ &\quad+x\bigl\{x\bigl[2[\ln(2x)]^2+3\ln(2x)+2\bigr]\psi''(x+1) -[4\ln(2x)+3]\psi'(x+1)\bigr\}\\ &<[2\ln(2x)+5]\biggl[\ln(x+1)-\frac1{2(x+1)}\biggr]\\ &\quad+x\biggl\{-x\bigl[2[\ln(2x)]^2+3\ln(2x)+2\bigr]\biggl[\frac1{(x+1)^2}+\frac1{(x+1)^3}\biggr]\\ &\quad-[4\ln(2x)+3]\biggl[\frac1{x+1}+\frac1{2(x+1)^2}\biggr]\biggr\}\\ &=[2\ln(2x)+5]\biggl[\ln(x+1)-\frac{1}{2(x+1)}\biggr]\\ &\quad-\frac{x\bigl\{4x(x+2)[\ln(2x)]^2+2\bigl(7x^2+16x+6\bigr)\ln(2x)+10x^2+23x +9\bigr\}}{2(x+1)^3}\\ &\triangleq\varphi(x) \end{align*} and \begin{align*} -x(x+1)^4\varphi'(x)&=2x^4+10x^3+19x^2+6x+1+2(x+4)x^2[\ln(2x)]^2\\ &\quad+x\bigl(2x^3+10x^2+20x+3\bigr)\ln(2x)-2(x+1)^4\ln(x+1)\\ &\triangleq h(x),\\ h'(x)&=8x^3+34x^2+52x+7+2x(3x+8)[\ln(2x)]^2\\ &\quad+\bigl(8x^3+34x^2+56x+3\bigr)\ln(2x)-8(x+1)^3\ln(x+1),\\ xh''(x)&=24x^3+86x^2+100x+3+4x(3x+4)[\ln(2x)]^2\\ &\quad+8x\bigl(3x^2+10x+11\bigr)\ln(2x)-24x(x+1)^2\ln(x+1)\\ &\triangleq q(x),\\ q'(x)&=4\bigl\{18x^2+57x+47+(6x+4)[\ln(2x)]^2\\ &\quad+2\bigl(9x^2+23x+15\bigr)\ln(2x)-6\bigl(3x^2+4x+1\bigr)\ln(x+1)\bigr\},\\ xq''(x)&=4\bigl\{36x^2+97x+30+6[\ln(2x)]^2x-12(3x+2)\ln(x+1)x\\ &\quad+\bigl(36x^2+58x+8\bigr)\ln(2x)\bigr\},\\ x^2(x+1)q^{(3)}(x)&=8\bigl\{18x^3+53x^2+18x-11-18(x+1)x^2\ln(x+1)\\ &\quad+2\bigl(9x^3+12x^2+x-2\bigr)\ln(2x)\bigr\}\\ &\triangleq 8p(x),\\ xp'(x)&=2\bigl[27x^3+65x^2+10x-2-9(3x+2)x^2\ln(x+1)\\ &\quad+\bigl(27x^2+24x+1\bigr)x\ln(2x)\bigr],\\ (x+1)x^2p''(x)&=2\bigl[54x^4+152x^3+90x^2+3x+2+6\bigl(9x^2+13x\\ &\quad+4\bigr)x^2\ln(2x)-18\bigl(3x^2+4x+1\bigr)x^2\ln(x+1)\bigr],\\ (x+1)^2x^3p^{(3)}(x)&=2\bigl[54x^5+168x^4+146x^3+18x^2-9x-4\\ &\quad+54(x+1)^2x^3\ln(2x)-54(x+1)^2x^3\ln(x+1)\bigr],\\ (x+1)^3x^4p^{(4)}(x)&=-4\bigl(3x^5+8x^4-9x^2-19x-6\bigr)\\ &\quad\triangleq-4\lambda(x),\\ \lambda'(x)&=15 x^4+32 x^3-18 x-19,\\ \lambda''(x)&=60 x^3+96 x^2-18. \end{align*} Since $\lambda''(x)$ is strictly increasing for $x>\frac12$ and $\lambda''\bigl(\frac12\bigr)=\frac{27}2$, the function $\lambda''(x)>0$, so $\lambda'(x)$ is strictly increasing for $x>\frac12$. From $\lambda'\bigl(\frac12\bigr)=-\frac{369}{16}$ and $\lim_{x\to\infty}\lambda'(x)=\infty$, it follows that the function $\lambda'(x)$ has a unique zero, which is the minimum point of $\lambda(x)$ for $x>\frac12$. Since $\lambda\bigl(\frac12\bigr)=-\frac{549}{32}$ and $\lim_{x\to\infty}\lambda(x)=\infty$, the functions $\lambda(x)$ and $p^{(4)}(x)$ have a unique common zero, which is the maximum point of $p^{(3)}(x)$. Due to $p^{(3)}\bigl(\frac12\bigr)=188-108\ln\bigl(\frac{3}{2}\bigr)=144.20\dotsm$ and $\lim_{x\to\infty}p^{(3)}(x)=108(1+\ln2)$, the function $p^{(3)}(x)$ is positive; thus the function $p''(x)$ is strictly increasing for $x>\frac12$. Owing to $p''\bigl(\frac12\bigr)=258-90 \ln\bigl(\frac{3}{2}\bigr)=221.50\dotsc$, the function $p''(x)$ is positive and $p'(x)$ is strictly increasing for $x>\frac12$.
By virtue of $p'\bigl(\frac12\bigr)=\frac{1}{2} \bigl[181-63 \ln \bigl(\frac{3}{2}\bigr)\bigr]=77.72\dotsc$, it is easy to see that $p'(x)>0$ and $p(x)$ is strictly increasing for $x>\frac12$. Owing to $p\bigl(\frac12\bigr)=\frac{27}{4} \bigl[2-\ln\bigl(\frac{3}{2}\bigr)\bigr]=10.76\dotsc$, we deduce that $p(x)>0$ and $q^{(3)}(x)>0$ for $x>\frac12$; consequently the function $q''(x)$ is strictly increasing for $x>\frac12$. On account of $q''\bigl(\frac12\bigr)=700-168 \ln\bigl(\frac{3}{2}\bigr)=631.88\dotsc$, we obtain that $q''(x)>0$ and $q'(x)$ is strictly increasing for $x>\frac12$. By virtue of $q'\bigl(\frac12\bigr)=320-90 \ln\bigl(\frac{3}{2}\bigr)=283.50\dotsc$, we have $q'(x)>0$, and hence the function $q(x)$ is strictly increasing for $x>\frac12$. Because of $q\bigl(\frac12\bigr)=\frac{155}{2}-27\ln\bigl(\frac{3}{2}\bigr)=66.55\dotsc$, we obtain that $q(x)>0$ and $h''(x)>0$, so $h'(x)$ is strictly increasing for $x>\frac12$. By $h'\bigl(\frac12\bigr)=\frac{85}{2}-27 \ln\bigl(\frac{3}{2}\bigr)=33.55\dotsc$, it follows that $h'(x)>0$ and $h(x)$ is strictly increasing for $x>\frac12$. Since $h\bigl(\frac12\bigr)=\frac{81}{8} \bigl[1-\ln\bigl(\frac{3}{2}\bigr)\bigr]=6.01\dotsc$, we conclude that $h(x)>0$ and $\varphi'(x)<0$; accordingly $\varphi(x)$ is strictly decreasing for $x>\frac12$. Due to $\varphi\bigl(\frac12\bigr)=-\frac{91}{27} +5\ln\bigl(\frac{3}{2}\bigr)=-1.34\dotsc$, it follows that $\varphi(x)<0$, i.e., $\phi(x)<0$, hence $$ \frac{\td}{\td x}\biggl\{\frac{x^3[\ln(2x)]^3F''(x)}{2[\ln(2x)]^2+3\ln(2x)+2}\biggr\}<0 $$ and the function $$ \frac{x^3[\ln(2x)]^3F''(x)}{2[\ln(2x)]^2+3\ln(2x)+2} $$ is strictly decreasing for $x>\frac12$. By $$ \biggl\{\frac{x^3[\ln(2x)]^3F''(x)}{2[\ln(2x)]^2+3\ln(2x)+2}\biggr\}\bigg\vert_{x=1/2} =\ln\frac{\sqrt{\pi}\,}2=-0.12\dotsc, $$ it follows that $$ \frac{x^3[\ln(2x)]^3F''(x)}{2[\ln(2x)]^2+3\ln(2x)+2}<0, $$ which is equivalent to $F''(x)<0$ for $x>\frac12$. As a result, the function $F(x)$ is concave on $\bigl(\frac12,\infty\bigr)$. The proof of Theorem~\ref{F(x)-thm-Qi} is complete. \end{proof} \begin{proof}[Proof of Theorem~\ref{unit-ball-qi-thm}] Let \begin{equation} f(x)=\biggl[\frac{\pi^{x/2}}{\Gamma(1+x/2)}\biggr]^{1/(x\ln x)} \end{equation} for $x>0$. Taking the logarithm of $f(x)$ gives \begin{equation*} \ln f(x)=\frac{\ln\pi}{2\ln x}-\frac{\ln\Gamma(1+x/2)}{x\ln x}=\frac{\ln\pi}{2\ln x} -\frac12F\biggl(\frac{x}2\biggr), \end{equation*} where $F(x)$ is defined by~\eqref{F(x)-dfn-Alzer} for $x>0$ and $x\ne\frac12$. Differentiating twice gives \begin{equation*} [\ln f(x)]''=\frac{(\ln x+2)\ln\pi}{2x^2(\ln x)^3} -\frac18F''\biggl(\frac{x}2\biggr). \end{equation*} Since $F(x)$ is strictly concave for $x>\frac12$, the function $[\ln f(x)]''$ is positive for $x>1$. As a result, the function $f(x)$ is strictly logarithmically convex for $x>1$, and so the sequence $f(n)=\Omega_{n}^{1/(n\ln n)}$ for $n\ge2$ is strictly logarithmically convex. \par Since the function $f(x)$ is strictly logarithmically convex for $x>1$, the derivative $[\ln f(x)]'$ is strictly increasing for $x>1$; therefore \begin{equation*} \biggl[\ln\frac{f(x)}{f(x+1)}\biggr]'=[\ln f(x)-\ln f(x+1)]'=[\ln f(x)]'-[\ln f(x+1)]'<0 \end{equation*} for $x>1$. This implies that the function $\frac{f(x)}{f(x+1)}$ is strictly decreasing for $x>1$, hence the sequence $\frac{f(n)}{f(n+1)}$ is also strictly decreasing for $n\ge2$. The proof of Theorem~\ref{unit-ball-qi-thm} is thus completed.
\end{proof} \begin{proof}[Proof of Theorem~\ref{unit-ball-thm1}] It is easy to see that the function $G(x)$ in~\eqref{alzer-unit-ball-log} may be rearranged as \begin{equation}\label{f-rew} G(x)=\frac{\ln x}{\ln(x+1)}\ln\biggl(1+\frac1x\biggr)^x. \end{equation} It is common knowledge that the function $\bigl(1+\frac1x\bigr)^x$ is strictly increasing and greater than $1$ on $(0,\infty)$ and tends to $e$ as $x\to\infty$. Furthermore, for $x>0$, \begin{equation*} \biggl[\frac{\ln x}{\ln(x+1)}\biggr]'=\frac{(x+1)\ln(x+1)-x\ln x}{x(x+1)[\ln(x+1)]^2}>0. \end{equation*} Thus, the function $\frac{\ln x}{\ln(x+1)}$ is strictly increasing on $(0,\infty)$ and positive on $(1,\infty)$ and, by L'H\^opital's rule, tends to $1$ as $x\to\infty$. Therefore, the second limit in~\eqref{2-limits} is valid and the function $G(x)$ is strictly increasing on $(1,\infty)$. The first limit in~\eqref{2-limits} can be calculated by L'H\^opital's rule as follows \begin{align*} \lim_{x\to0^+}G(x)&=\lim_{x\to0^+}\frac{1-{\ln x}/{\ln(x+1)}}{1/x}\lim_{x\to0^+}\ln x\\ &=\lim_{x\to0^+} \frac{x[(x+1)\ln(x+1)-x\ln x]}{(x+1)[\ln(x+1)]^2}\lim_{x\to0^+}\ln x\\ &=\lim_{x\to0^+} \biggl\{\frac{x}{\ln(x+1)}\biggl[1-\frac{x\ln x}{(x+1)\ln(x+1)}\biggr]\biggr\}\lim_{x\to0^+}\ln x\\ &=\lim_{x\to0^+} \biggl[1-\frac{x}{\ln(x+1)}\cdot\frac{\ln x}{x+1}\biggr]\lim_{x\to0^+}\ln x\\ &=-\infty. \end{align*} \par The function $G(x)$ can also be rearranged as \begin{equation}\label{f1(x)f2(x)-dfn} G(x)=\frac{x\ln x}{\ln(x+1)}[\ln(x+1)-\ln x]\triangleq f_1(x)f_2(x) \end{equation} for $x\in(0,\infty)$. It is not difficult to see that the function $f_2(x)$ is positive and decreasing on $(0,\infty)$. Straightforward differentiation produces \begin{align*} f_1'(x)=\frac{\ln(x+1)+ [\ln(x+1)-{x}/{(x+1)}]\ln x}{[\ln(x+1)]^2} \triangleq\frac{g(x)}{[\ln(x+1)]^2} \end{align*} and $f_1(1)=0$. By virtue of the double inequality~\eqref{log-ineq-qi}, it follows that \begin{align*} g(x)&>\frac{2x}{2+x}+\biggl[\frac{x(2+x)}{2(1+x)}-\frac{x}{x+1}\biggr]\ln x \\ &=\frac{2x}{2+x}+\frac{x^2}{2(1+x)}\ln x \\ &=\frac{x^2}{2(1+x)}\biggl[\frac{4(1+x)}{x(2+x)}+\ln x\biggr]\\ &\triangleq \frac{x^2}{2(1+x)}h(x) \end{align*} and \begin{equation*} h'(x)=\frac{x^3-4x-8}{x^2 (x+2)^2}<0 \end{equation*} for $x\in(0,1)$. As a result, the function $h(x)$ is decreasing on $(0,1)$ with $h(1)=\frac83$, so $h(x)>0$ on $(0,1)$. This means that the functions $g(x)$ and $f_1'(x)$ are positive on $(0,1)$. Hence, the function $f_1(x)$ is increasing and negative on $(0,1)$. In conclusion, the function $G(x)=f_1(x)f_2(x)$ is increasing and negative on $(0,1)$. The proof of Theorem~\ref{unit-ball-thm1} is complete. \end{proof} \begin{proof}[Second proof of a part of Theorem~\ref{unit-ball-thm1}] By \eqref{f1(x)f2(x)-dfn}, it is clear that the function $$ f_2(x)=\int_x^{x+1}\frac1t\td t=\int_0^1\frac1{t+x}\td t $$ is completely monotonic on $(0,\infty)$. \par On the other hand, we have \begin{align*} f_1(x)&=\frac{u(x)-u(0)}{v(x)-v(0)} \end{align*} and $$ \frac{u'(x)}{v'(x)}=(x+1)(1+\ln x) $$ is increasing on $(0,\infty)$, where $$ u(x)=\begin{cases}x\ln x,&x\in(0,1]\\0,&x=0 \end{cases} $$ and $v(x)=\ln(x+1)$ for $x\in[0,1]$.
The monotonic form of L'H\^opital's rule put forward in~\cite[Theorem~1.25]{anderson1} (or see \cite[p.~92, Lemma~1]{Gene-Jordan-Inequal.tex} and \cite[p.~10, Lemma~2.9]{refine-jordan-kober.tex-JIA}) reads that if $U$ and $V$ are continuous on $[a,b]$ and differentiable on $(a,b)$ such that $V'(x)\ne0$ and $\frac{U'(x)}{V'(x)}$ is increasing $($or decreasing$)$ on $(a,b)$, then the functions $\frac{U(x)-U(b)}{V(x)-V(b)}$ and $\frac{U(x)-U(a)}{V(x)-V(a)}$ are also increasing $($or decreasing$)$ on $(a,b)$. Therefore, the function $f_1(x)$ is increasing and negative on $(0,1)$. In conclusion, the function $G(x)=f_1(x)f_2(x)$ is increasing and negative on $(0,1)$. \end{proof}
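The statements of Theorem~\ref{unit-ball-thm1}, together with the sharpened lower bound $G(3)$ mentioned in the remarks, can be observed numerically; a sketch (our own check, assuming NumPy; the grid is arbitrary):
\begin{verbatim}
import numpy as np

def G(x):
    return (1.0 - np.log(x) / np.log(x + 1.0)) * x * np.log(x)

x = np.linspace(0.01, 1000.0, 200_000)
g = G(x)
print(np.all(np.diff(g) > 0))  # strictly increasing on the grid
print(g[-1])                   # close to the limit 1 from below
print(G(3.0))                  # 0.683..., the sharpened constant
\end{verbatim}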
\section{Introduction} The detection and subsequent experimental confirmation of the current cosmic acceleration pose to cosmology the fundamental task of identifying and revealing the cause of this phenomenon. In light of this, cosmological models with a spinor field as the source of the gravitational field have attracted widespread interest in recent times \cite{henprd,sahaprd,greene,SBprd04,kremer1,ECAA06,sahaprd06,BVI,kremer2}. It was shown that a suitable choice of spinor field nonlinearity\\ (i) accelerates the isotropization process \cite{sahaprd,SBprd04,ECAA06};\\ (ii) gives rise to a singularity-free Universe \cite{sahaprd,SBprd04,ECAA06,BVI}; and \\ (iii) generates late-time acceleration \cite{kremer1,sahaprd06,kremer2}. A question that naturally arises is: if the spinor field can redraw the picture of evolution caused by a perfect fluid and dark energy, is it possible to simulate the perfect fluid and dark energy by means of a spinor field? An affirmative answer to this question was given in a number of papers \cite{shikin,spinpf0,spinpf1}. In those papers the authors have shown that different types of perfect fluid and dark energy can be described by a nonlinear spinor field. In \cite{spinpf0} we used two types of nonlinearity: one occurring as a result of self-action and the other resulting from the interaction between the spinor and a scalar field. It was shown that the case with induced nonlinearity is a particular one and can be derived from the case with self-action. In \cite{spinpf1} we gave the description of a generalized Chaplygin gas and modified quintessence \cite{pfdenr} in terms of a spinor field and studied the evolution of the Universe filled with a nonlinear spinor field within the scope of a Bianchi type-I cosmological model. The purpose of this paper is to extend that study within the scope of an isotropic and homogeneous FRW cosmological model. \section{Simulation of perfect fluid with nonlinear spinor field} In this section we simulate different types of perfect fluid and dark energy by means of a nonlinear spinor field. \subsection{Spinor field Lagrangian} Let us first write the spinor field Lagrangian \cite{sahaprd}: \begin{equation} L_{\rm sp} = \frac{i}{2} \biggl[\bp \gamma^{\mu} \nabla_{\mu} \psi- \nabla_{\mu} \bar \psi \gamma^{\mu} \psi \biggr] - m\bp \psi + F, \label{lspin} \end{equation} where the nonlinear term $F$ describes the self-action of the spinor field and can be presented as an arbitrary function of invariants generated from the real bilinear forms of the spinor field. For simplicity we consider the case when $F = F(S)$ with $S = \bp \psi$. Here $\nabla_\mu$ is the covariant derivative of the spinor field: \begin{equation} \nabla_\mu \psi = \frac{\partial \psi}{\partial x^\mu} -\G_\mu \psi, \quad \nabla_\mu \bp = \frac{\partial \bp}{\partial x^\mu} + \bp \G_\mu, \label{covder} \end{equation} with $\G_\mu$ being the spinor affine connection.
Varying \eqref{lspin} with respect to $\bp\,(\psi)$ one finds the spinor field equations: \begin{subequations} \label{speq} \begin{eqnarray} i\gamma^\mu \nabla_\mu \psi - m \psi + \frac{dF}{dS} \psi &=&0, \label{speq1} \\ i \nabla_\mu \bp \gamma^\mu + m \bp - \frac{dF}{dS} \bp &=& 0. \label{speq2} \end{eqnarray} \end{subequations} Variation of \eqref{lspin} with respect to the metric tensor gives the energy-momentum tensor for the spinor field \begin{equation} T_{\mu}^{\rho}=\frac{i}{4} g^{\rho\nu} \biggl(\bp \gamma_\mu \nabla_\nu \psi + \bp \gamma_\nu \nabla_\mu \psi - \nabla_\mu \bar \psi \gamma_\nu \psi - \nabla_\nu \bp \gamma_\mu \psi \biggr) \,- \delta_{\mu}^{\rho} L_{\rm sp}, \label{temsp} \end{equation} where $L_{\rm sp}$, on account of the spinor field equations \eqref{speq1} and \eqref{speq2}, takes the form \begin{equation} L_{\rm sp} = - S \frac{dF}{dS} + F(S). \label{lsp} \end{equation} We consider the case when the spinor field depends on $t$ only. In this case for the components of the energy-momentum tensor we find \begin{subequations} \begin{eqnarray} T_0^0 &=& mS - F, \label{t00s}\\ T_1^1 = T_2^2 = T_3^3 &=& S \frac{dF}{dS} - F. \label{t11s} \end{eqnarray} \end{subequations} A detailed study of the nonlinear spinor field was carried out in \cite{sahaprd,SBprd04,ECAA06}. In what follows, exploiting the equations of state, we find the concrete form of $F$ which describes various types of perfect fluid and dark energy. \subsection{Perfect fluid with a barotropic equation of state} First of all let us note that one of the simplest and most popular models of the Universe is a homogeneous and isotropic one filled with a perfect fluid with the energy density $\ve = T_0^0$ and pressure $p = - T_1^1 = -T_2^2 = -T_3^3$ obeying the barotropic equation of state \begin{equation} p = W \ve, \label{beos} \end{equation} where $W$ is a constant. Depending on the value of $W$, \eqref{beos} describes perfect fluid from phantom to ekpyrotic matter, namely \begin{subequations} \label{zeta} \begin{eqnarray} W &=& 0, \qquad \qquad \qquad {\rm (dust)},\\ W &=& 1/3, \quad \qquad \qquad{\rm (radiation)},\\ W &\in& (1/3,\,1), \quad \qquad\,\,{\rm (hard\,\,Universe)},\\ W &=& 1, \quad \qquad \quad \qquad {\rm (stiff \,\,matter)},\\ W &\in& (-1,\,-1/3), \quad \,\,\,\,{\rm (quintessence)},\\ W &=& -1, \quad \qquad \quad \quad{\rm (cosmological\,\, constant)},\\ W &<& -1, \quad \qquad \quad \quad{\rm (phantom\,\, matter)},\\ W &>& 1, \quad \qquad \quad \qquad{\rm (ekpyrotic\,\, matter)}. \end{eqnarray} \end{subequations} The barotropic model of perfect fluid is widely used to study the evolution of the Universe. Most recently the relation \eqref{beos} has been exploited to generate a quintessence in order to explain the accelerated expansion of the Universe \cite{chjp,zlatev}. In order to describe the matter given by \eqref{zeta} with a spinor field, let us now substitute $\ve$ and $p$ with $T_0^0$ and $-T_1^1$, respectively. Thus, inserting $\ve = T_0^0$ and $p = - T_1^1$ from \eqref{t00s} and \eqref{t11s} into \eqref{beos} we find \begin{equation} S \frac{dF}{dS} - (1+W)F + m W S= 0, \label{eos1s} \end{equation} with the solution \begin{equation} F = \lambda S^{1+W} + mS. \label{sol1} \end{equation} Here $\lambda$ is an integration constant. Inserting \eqref{sol1} into \eqref{t00s} we find that \begin{equation} T_0^0 = - \lambda S^{1+W}. \label{lambda} \end{equation} Since the energy density should be non-negative, we conclude that $\lambda$ is a negative constant, i.e., $\lambda = - \nu$, with $\nu$ being a positive constant.
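As a quick symbolic consistency check (not part of the original derivation), one can verify with a computer algebra system that \eqref{sol1} indeed solves \eqref{eos1s} and reproduces \eqref{lambda}; a minimal sketch in Python/SymPy, with symbol names chosen only for illustration, reads:
\begin{verbatim}
import sympy as sp

S, W, lam, m = sp.symbols('S W lamda m', positive=True)
F = lam*S**(1 + W) + m*S                     # candidate nonlinearity (sol1)

# residual of S*dF/dS - (1+W)*F + m*W*S = 0  (eos1s)
residual = S*sp.diff(F, S) - (1 + W)*F + m*W*S
print(sp.simplify(residual))                 # -> 0

# energy density T_0^0 = m*S - F             (t00s)
print(sp.simplify(m*S - F))                  # -> -lamda*S**(W + 1)
\end{verbatim}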
So finally we can write the components of the energy-momentum tensor \begin{subequations} \begin{eqnarray} T_0^0 &=& \nu S^{1+W}, \label{t00sf}\\ T_1^1 = T_2^2 = T_3^3 &=& - \nu W S^{1+W}. \label{t11sf} \end{eqnarray} \end{subequations} As one sees, the energy density $\ve = T_0^0$ is always positive, while the pressure $p = - T_1^1 = \nu W S^{1+W}$ is positive for $W > 0$, i.e., for usual fluid, and negative for $W < 0$, i.e., for dark energy. On account of this the spinor field Lagrangian now reads \begin{equation} L_{\rm sp} = \frac{i}{2} \biggl[\bp \gamma^{\mu} \nabla_{\mu} \psi- \nabla_{\mu} \bar \psi \gamma^{\mu} \psi \biggr] - \nu S^{1+W}. \label{lspin1} \end{equation} Thus a massless spinor field with the Lagrangian \eqref{lspin1} describes perfect fluid from phantom to ekpyrotic matter. Here the constant of integration $\nu$ can be viewed as a constant of self-coupling. A detailed analysis of this study was given in \cite{shikin}. \subsection{Chaplygin gas} An alternative model for the dark energy density was used by Kamenshchik {\it et al.} \cite{kamen}, where the authors suggested the use of some perfect fluid obeying an ``exotic'' equation of state. This type of matter is known as the {\it Chaplygin gas}. The fate of density perturbations in a Universe dominated by the Chaplygin gas, which exhibits negative pressure, was studied by Fabris {\it et al.} \cite{fabris}. Models with the Chaplygin gas were also studied in Refs. \cite{dev,sen}. Let us now generate a Chaplygin gas by means of a spinor field. A Chaplygin gas is usually described by an equation of state \begin{equation} p = -A/\ve^\gamma. \label{chap} \end{equation} Then, in the case of a massless spinor field, for $F$ one finds \begin{equation} \frac{(-F)^\gamma d(-F)}{(-F)^{1+\gamma} - A} = \frac{dS}{S}, \label{eqq} \end{equation} with the solution \begin{equation} -F = \bigl(A + \lambda S^{1+\gamma}\bigr)^{1/(1+\gamma)}. \label{chapsp} \end{equation} On account of this, for the components of the energy-momentum tensor we find \begin{subequations} \begin{eqnarray} T_0^0 &=& \bigl(A + \lambda S^{1+\gamma}\bigr)^{1/(1+\gamma)}, \label{edchapsp}\\ T_1^1 = T_2^2 = T_3^3 &=& A/\bigl(A + \lambda S^{1+\gamma}\bigr)^{\gamma/(1+\gamma)}. \label{prchapsp} \end{eqnarray} \end{subequations} As was expected, we again get a positive energy density and a negative pressure. Thus the spinor field Lagrangian corresponding to a Chaplygin gas reads \begin{equation} L_{\rm sp} = \frac{i}{2} \biggl[\bp \gamma^{\mu} \nabla_{\mu} \psi- \nabla_{\mu} \bar \psi \gamma^{\mu} \psi \biggr] - \bigl(A + \lambda S^{1+\gamma}\bigr)^{1/(1+\gamma)}. \label{lspin2} \end{equation} Setting $\gamma = 1$ we recover the result obtained in \cite{spinpf0}. \subsection{Modified quintessence} Finally, we simulate a modified quintessence with a nonlinear spinor field. It should be noted that one of the problems faced by models with dark energy is that of eternal acceleration. In order to get rid of that problem, a quintessence with a modified equation of state was proposed \cite{pfdenr}: \begin{equation} p = - W (\ve - \ve_{\rm cr}), \quad W \in (0,\,1). \label{mq} \end{equation} Here $\ve_{\rm cr}$ is some critical energy density. Setting $\ve_{\rm cr} = 0$ one obtains an ordinary quintessence. It is well known that as the Universe expands the (dark) energy density decreases. As a result, being a linear negative function of the energy density, the corresponding pressure begins to increase.
In the case of an ordinary quintessence the pressure is always negative, but for a modified quintessence, as soon as the energy density becomes less than the critical one, the pressure becomes positive. Inserting $\ve = T_0^0$ and $p = - T_1^1$ into \eqref{mq} we find \begin{equation} F = - \eta S^{1-W} + mS + \frac{W}{1-W}\ve_{\rm cr}, \label{Fmq} \end{equation} with $\eta$ being a positive constant. On account of this, for the components of the energy-momentum tensor we find \begin{subequations} \begin{eqnarray} T_0^0 &=& \eta S^{1-W} - \frac{W}{1-W}\ve_{\rm cr}, \label{edmq}\\ T_1^1 = T_2^2 = T_3^3 &=& \eta W S^{1-W} - \frac{W}{1-W}\ve_{\rm cr}. \label{prmq} \end{eqnarray} \end{subequations} We see that a nonlinear spinor field with a specific type of nonlinearity can substitute for perfect fluid and dark energy, thus giving rise to a variety of evolution scenarios of the Universe. \section{Cosmological models with a spinor field} In the previous section we showed that perfect fluid and dark energy can be simulated by a nonlinear spinor field. In Section II the nonlinearity was due to self-action. In \cite{spinpf0} we have also considered the case when the nonlinearity was induced by a scalar field. It was also shown that, in our context, the results for induced nonlinearity are special cases of those for self-action. With this in mind, we study the evolution of a Universe filled with a nonlinear spinor field given by the Lagrangian \eqref{lspin}, with the nonlinear term $F$ given by \eqref{sol1}, \eqref{chapsp} or \eqref{Fmq}. \subsection{Bianchi type-I anisotropic cosmological model} We consider the anisotropic Universe given by the Bianchi type-I (BI) space-time \begin{eqnarray} ds^2 = dt^2 - a_1^2 dx^2 - a_2^2 dy^2 - a_3^2 dz^2, \label{BI} \end{eqnarray} with $a_i$ being functions of $t$ only. The Einstein equations for the BI metric read \begin{subequations} \label{BIE} \begin{eqnarray} \frac{\ddot a_2}{a_2} +\frac{\ddot a_3}{a_3} + \frac{\dot a_2}{a_2}\frac{\dot a_3}{a_3}&=& \kappa T_{1}^{1},\label{11}\\ \frac{\ddot a_3}{a_3} +\frac{\ddot a_1}{a_1} + \frac{\dot a_3}{a_3}\frac{\dot a_1}{a_1}&=& \kappa T_{2}^{2},\label{22}\\ \frac{\ddot a_1}{a_1} +\frac{\ddot a_2}{a_2} + \frac{\dot a_1}{a_1}\frac{\dot a_2}{a_2}&=& \kappa T_{3}^{3},\label{33}\\ \frac{\dot a_1}{a_1}\frac{\dot a_2}{a_2} +\frac{\dot a_2}{a_2}\frac{\dot a_3}{a_3} +\frac{\dot a_3}{a_3}\frac{\dot a_1}{a_1}&=& \kappa T_{0}^{0}, \label{00} \end{eqnarray} \end{subequations} where the dot denotes differentiation with respect to $t$. From the spinor field equation it can be shown that \cite{sahaprd} \begin{equation} S = \frac{C_0}{\tau}, \label{S} \end{equation} where we define \begin{eqnarray} \tau = \sqrt{-g} = a_1 a_2 a_3. \label{taudef} \end{eqnarray} For the components of the spinor field we obtain \begin{eqnarray} \psi_{1,2}(t) = \frac{C_{1,2}}{\sqrt{\tau}}\,e^{i\int {\cD} dt},\quad \psi_{3,4}(t) = \frac{C_{3,4}}{\sqrt{\tau}}\,e^{-i\int {\cD} dt}, \end{eqnarray} where ${\cD} = dF/dS$. Solving the Einstein equations for the metric functions one finds \cite{sahaprd} \begin{eqnarray} a_i = D_i \tau^{1/3} \exp{\Bigl(X_i \int \frac{dt}{\tau}\Bigr)} \end{eqnarray} with the constants $D_i$ and $X_i$ obeying \begin{equation} \prod_{i=1}^{3} D_i = 1, \quad \sum_{i=1}^{3} X_i = 0. \end{equation} Thus the components of the spinor field and the metric functions are expressed in terms of $\tau$.
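As an aside, given any positive $\tau(t)$ on a time grid, the metric functions can be reconstructed numerically from the last formula; the following minimal Python sketch (with placeholder $\tau$, $D_i$, $X_i$ chosen only to satisfy the two constraints) also checks the identity $a_1a_2a_3=\tau$:
\begin{verbatim}
import numpy as np
from scipy.integrate import cumulative_trapezoid

t   = np.linspace(0.01, 10.0, 2000)
tau = 1.0 + t**2                    # placeholder volume scale tau = a1*a2*a3

D = np.array([1.2, 1/1.2, 1.0])     # prod(D_i) = 1
X = np.array([0.1, -0.05, -0.05])   # sum(X_i) = 0

I = cumulative_trapezoid(1.0/tau, t, initial=0.0)   # int_0^t ds/tau(s)
a = [D[i]*tau**(1.0/3.0)*np.exp(X[i]*I) for i in range(3)]

# sanity check: a1*a2*a3 reproduces tau up to quadrature error
print(np.max(np.abs(a[0]*a[1]*a[2] - tau)))
\end{verbatim}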
From the Einstein equations one finds the equation for $\tau$ \cite{sahaprd} \begin{eqnarray} \frac{\ddot \tau}{\tau}= \frac{3}{2}\kappa \Bigl(T_{1}^{1}+T_{0}^{0}\Bigr). \label{dtau} \end{eqnarray} In the case of \eqref{lspin1}, on account of \eqref{S}, Eq. \eqref{dtau} takes the form \begin{equation} \ddot \tau = (3/2) \kappa \nu C_0^{1+W} (1-W) \tau^{-W}, \end{equation} with the solution in quadrature \begin{equation} \int\frac{d\tau}{\sqrt{3 \kappa \nu C_0^{1+W} \tau^{1-W} + C_1}} = t + t_0. \end{equation} Here $C_1$ and $t_0$ are integration constants. \myfigures{spinpf_pf1}{0.45}{Evolution of the Universe filled with perfect fluid.} {0.45}{spinpf_de1}{0.45}{Evolution of the Universe filled with dark energy.}{0.45} In Figs. \ref{spinpf_pf1} and \ref{spinpf_de1} we have plotted the evolution of the Universe defined by the nonlinear spinor field corresponding to perfect fluid and dark energy \cite{spinpf1}. Let us consider the case when the spinor field is given by the Lagrangian \eqref{lspin2}. The equation for $\tau$ now reads \begin{equation} \ddot \tau = (3/2) \kappa \Biggl[ \bigl(A\tau^{1+\gamma} + \lambda C_0^{1+\gamma}\bigr)^{1/(1+\gamma)} + A \tau^{1+\gamma}/\bigl(A\tau^{1+\gamma} + \lambda C_0^{1+\gamma}\bigr)^{\gamma/(1+\gamma)}\Biggr], \end{equation} with the solution \begin{equation} \int \frac{d \tau}{\sqrt{C_1 + 3 \kappa \tau \bigl(A\tau^{1+\gamma} + \lambda C_0^{1+\gamma}\bigr)^{1/(1+\gamma)}}} = t + t_0, \quad C_1 = {\rm const}., \quad t_0 = {\rm const}. \end{equation} Setting $\gamma = 1$ we come to the result obtained in \cite{chjp}. Finally we consider the case with a modified quintessence. In this case for $\tau$ we find \begin{equation} \ddot \tau = (3/2) \kappa \Bigl[\eta C_0^{1-W} (1+W) \tau^{W} - 2W \ve_{\rm cr}\tau/(1-W)\Bigr], \end{equation} with the solution in quadrature \begin{equation} \int \frac{d\tau}{\sqrt{3 \kappa \bigl[\eta C_0^{1-W} \tau^{1+W} - W\ve_{\rm cr}\tau^2/(1-W)\bigr] + C_1}} = t + t_0. \label{qdmq} \end{equation} Here $C_1$ and $t_0$ are integration constants. Comparing \eqref{qdmq} with the case of a negative $\Lambda$-term, we see that $\ve_{\rm cr}$ plays the role of a negative cosmological constant. \myfigures{spinpf_mqep1}{0.45}{Dynamics of energy density and pressure for a modified quintessence.} {0.45}{spinpf_mq1}{0.45}{Evolution of the Universe filled with a modified quintessence.}{0.45} In Fig. \ref{spinpf_mqep1} we have illustrated the dynamics of energy density and pressure of a modified quintessence. In Fig. \ref{spinpf_mq1} the evolution of the Universe defined by the nonlinear spinor field corresponding to a modified quintessence is presented. As one sees, in the case considered, acceleration alternates with deceleration. In this case the Universe can be either singular (ending in a Big Crunch) or regular.
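For the perfect fluid case, the quadrature above can also be bypassed by integrating the second-order equation for $\tau$ directly as a first-order system; a minimal Python sketch, with illustrative parameter values that are not those behind the figures, reads:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

kappa, nu, C0, W = 1.0, 1.0, 1.0, 1.0/3.0    # illustrative values (radiation)

def rhs(t, y):                               # y = (tau, dtau/dt)
    tau, taudot = y
    return [taudot, 1.5*kappa*nu*C0**(1 + W)*(1 - W)*tau**(-W)]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.5], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])                          # tau at the final time
\end{verbatim}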
\subsection{FRW cosmological models with a spinor field} Let us now consider the homogeneous and isotropic FRW cosmological model with the metric \begin{eqnarray} ds^2 = dt^2 - a^2 (dx^2 + dy^2 + dz^2). \label{FRW} \end{eqnarray} The corresponding Einstein equations read \begin{subequations} \label{EFRW} \begin{eqnarray} 2 \frac{\ddot a}{a} + \frac{\dot a^2}{a^2}&=& \kappa T_{1}^{1}, \label{FRW11}\\ 3\frac{\dot a^2}{a^2}&=& \kappa T_{0}^{0}. \label{FRW00} \end{eqnarray} \end{subequations} From the spinor field equation in this case we find \begin{equation} S = \frac{C_0}{a^3}, \quad C_0 = {\rm const.} \label{SFRW} \end{equation} The components of the spinor field in this case take the form \begin{eqnarray} \psi_{1,2}(t) = \frac{C_{1,2}}{a^{3/2}}\,e^{i\int {\cD} dt},\quad \psi_{3,4}(t) = \frac{C_{3,4}}{a^{3/2}}\,e^{-i\int {\cD} dt}, \end{eqnarray} where ${\cD} = dF/dS$. In order to find the solution that satisfies both \eqref{FRW11} and \eqref{FRW00}, we rewrite \eqref{FRW11} in view of \eqref{FRW00} in the following form: \begin{equation} \ddot a = \frac{\kappa}{6}\Bigl(3 T_1^1 - T_0^0\Bigr) a. \label{dda} \end{equation} Further, we solve this equation for concrete choices of the source field. Let us consider the case of a perfect fluid given by the barotropic equation of state. On account of \eqref{t00sf}, \eqref{t11sf} and \eqref{SFRW}, Eq. \eqref{dda} takes the form \begin{equation} \ddot a = -\frac{\kappa \nu (1+3W)C_0^{1+W}}{6}\, a^{-(2+3W)}, \label{ddasf} \end{equation} which admits the first integral \begin{equation} \dot a^2 = \frac{\kappa}{3} \nu C_0^{1+W} a^{-(1+3W)} + E_1, \quad E_1 = {\rm const}. \label{dda1} \end{equation} In Figs. \ref{spinpf_pf_FRW} and \ref{spinpf_phan_FRW} we plot the evolution of the FRW Universe for different values of $W$. \myfigures{spinpf_pf_FRW}{0.45}{Evolution of the Universe filled with perfect fluid and dark energy.} {0.45}{spinpf_phan_FRW}{0.45}{Evolution of the Universe filled with phantom matter.}{0.45} As one sees, equation \eqref{dda1} imposes no restriction on the value of $W$. But this is not the case when one solves \eqref{FRW00}. Indeed, inserting $T_0^0$ from \eqref{t00sf} into \eqref{FRW00} one finds \begin{equation} a = (A_1 t + C_1)^{2/3(1+W)}, \label{afrw} \end{equation} where $A_1 = (1+W)\sqrt{3 \kappa \nu C_0^{1+W}/4}$ and $C_1 = 3 (1 + W) C/2$, with $C$ being some arbitrary constant. This solution identically satisfies the equation \eqref{FRW11}. As one sees, the case $W = -1$ cannot be realized here. In that case one has to solve the equation \eqref{FRW00} directly. As far as phantom matter ($W < -1$) is concerned, there occurs some restriction on the value of $C$: in this case $A_1$ is negative, and for $C_1$ to be positive, $C$ should be negative. As one can easily verify, in the case of a cosmological constant ($W = -1$) Eq. \eqref{FRW00} gives \begin{equation} a = a_0 e^{\pm \sqrt{\kappa \nu/3}\,t}. \label{FRWlambda} \end{equation} Inserting \eqref{edchapsp} and \eqref{prchapsp} into \eqref{dda}, in the case of the Chaplygin gas we obtain the following equation \begin{equation} \ddot a = \frac{\kappa}{6}\frac{2A a^{3(1+\gamma)} - \lambda C_0^{1+\gamma}}{a^2\Bigl(Aa^{3(1+\gamma)} + \lambda C_0^{1+\gamma}\Bigr)^{\gamma/(1+\gamma)}}. \label{FRWchap} \end{equation} We solve this equation numerically. The corresponding solution is illustrated in Fig. \ref{spinpf_pf_FRW}.
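A minimal sketch of such a numerical solution of \eqref{FRWchap}, with illustrative parameter values (not those behind Fig. \ref{spinpf_pf_FRW}), reads:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

kappa, A, lam, gam, C0 = 1.0, 1.0, 1.0, 0.5, 1.0    # illustrative values

def rhs(t, y):                                      # y = (a, da/dt)
    a, adot = y
    s = A*a**(3*(1 + gam)) + lam*C0**(1 + gam)
    addot = (kappa/6.0)*(2*A*a**(3*(1 + gam)) - lam*C0**(1 + gam)) \
            / (a**2 * s**(gam/(1 + gam)))
    return [adot, addot]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.1], rtol=1e-8, atol=1e-10)
print(sol.y[0, -1])                                 # scale factor at final time
\end{verbatim}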
Finally we consider the case with a modified quintessence. Inserting \eqref{edmq} and \eqref{prmq} into \eqref{dda}, in this case we find \begin{equation} \ddot a = \frac{\kappa}{6}\Bigl[(3W-1)\eta C_0^{1-W} a^{3W-2} - \frac{2W}{1-W}\ve_{\rm cr} a\Bigr], \label{FRWmq} \end{equation} with the first integral \begin{equation} \dot a^2 = \frac{\kappa}{3}\Bigl[\eta C_0^{1-W} a^{3W - 1} - \frac{W}{1-W} \ve_{\rm cr} a^2 + E_2\Bigr], \quad E_2 = {\rm const}. \label{dda2} \end{equation} In Fig. \ref{spinpf_mqep1_FRW} we plot the dynamics of energy density and pressure. Fig. \ref{spinpf_mq_FRW} shows the evolution of a FRW Universe filled with a modified quintessence. \myfigures{spinpf_mqep1_FRW}{0.45}{Dynamics of energy density and pressure for a modified quintessence.} {0.45}{spinpf_mq_FRW}{0.45}{Evolution of the FRW Universe filled with a modified quintessence.}{0.45} As one sees, in the case of a modified quintessence the pressure alternates in sign. As a result we have a cyclic mode of evolution. \section{Conclusion} Within the framework of the cosmological gravitational field equations, an equivalence between the perfect fluid (and dark energy) and a nonlinear spinor field has been established. It is shown that different types of dark energy can be simulated by means of a nonlinear spinor field. Using this new description of perfect fluid and dark energy, the evolution of the Universe has been studied within the scope of BI and FRW models.
\section{Introduction} In this paper, we consider fully-discrete approximations to the equations of motion arising in Oldroyd fluids (see Oldroyd \cite{Old}, Oskolkov \cite{Os89}) of order one: \begin{eqnarray}\label{om} \frac {\partial {\bf u}}{\partial t}+{\bf u}\cdot\nabla{\bf u}-\mu\Delta{\bf u}-\int_0^t \beta (t-\tau)\Delta{\bf u} (\tau)\,d\tau+\nabla p={\bf f}, ~~~~~\mbox {in}~ \Omega,~t>0, \end{eqnarray} with the incompressibility condition \begin{eqnarray}\label{ic} \nabla \cdot {\bf u}=0,~~~~~\mbox {in}~\Omega,~t>0, \end{eqnarray} and initial and boundary conditions \begin{eqnarray}\label{ibc} {\bf u}(x,0)={\bf u}_0~~\mbox {in}~\Omega,~~~{\bf u}=0,~~\mbox {on}~\partial\Omega,~t\ge 0. \end{eqnarray} Here, $\Omega$ is a bounded domain in $\mathbb{R}^2$ with boundary $\partial \Omega$, $\mu = 2 \kappa\lambda^{-1}>0$, and the kernel $\beta (t) = \gamma \exp (-\delta t),$ where $\gamma= 2\lambda^{-1}(\nu-\kappa \lambda^{-1})>0$ and $\delta =\lambda^{-1}>0$. Further, ${\bf f}$ and ${\bf u}_0$ are given functions in their respective domains of definition. For more details, we refer to \cite{AS89} and \cite{Old}. There is a considerable amount of literature devoted to the Oldroyd model by Oskolkov, Kotsiolis, Karzeeva, Sobolevskii, etc.; see \cite{AS89,EO92,KrKO91,KOS92,Os89}, and, more recently, by Lin {\it et al.} \cite{HLST,HLSST,WHL}, Pani {\it et al.} \cite{PY05, PYD06}, Wang {\it et al.} \cite{WHS}, and references therein. A detailed account of the continuous and semi-discrete cases can be found in \cite{GP11}. The literature on fully-discrete approximations to the problem (\ref{om})-(\ref{ibc}) is, however, limited. In \cite{AO}, Akhmatov and Oskolkov have discussed stable and convergent finite difference schemes for the problem (\ref{om})-(\ref{ibc}). Recently in \cite{PYD06}, a linearized backward Euler method was used to discretize in the temporal direction, and a semigroup theoretic approach was then employed to establish {\it a priori} error estimates. The following error bounds are proved in \cite{PYD06} for $t_n>0$: $$ \|{\bf u}(t_n)-{\bf U}^n\| \le Ce^{-\alpha t_n}k $$ and $$ \|{\bf u}(t_n)-{\bf U}^n\|_1 \le Ce^{-\alpha t_n}k(t_n^{-1/2}+\log\frac{1}{k}) $$ for smooth initial data and for zero forcing term. Here, $k$ is the time step and ${\bf U}^n$ is the finite difference approximation to ${\bf u}(t_n)$ when the modified backward Euler method is applied in the temporal direction. Recently, Wang {\it et al.} \cite{WHS} have again applied the backward Euler method to the problem (\ref{om})-(\ref{ibc}), with smooth initial data, when the forcing function is non-zero. They have used energy arguments along with the uniqueness condition to obtain the following uniform error estimates: $$ \|{\bf u}(t_n)-{\bf U}^n\| \le C(h^2+k) $$ and $$ \tau^{1/2}\|{\bf u}(t_n)-{\bf U}^n\|_1 \le C(h+k), $$ where $\tau(t_n)= \min\{1,t_n\}$ and $h$ is the mesh size, again with smooth initial data. Our present investigation is a continuation of \cite{GP11}, where {\it a priori} estimates and regularity results have been established which are uniform in time, under realistically assumed regularity on the exact solution and when ${\bf f},{\bf f}_t\in L^\infty({\bf L}^2)$. Error estimates for semi-discrete Galerkin approximations have been shown to be optimal in the $L^{\infty}({\bf L}^2)$-norm for non-smooth initial data. Further, uniform (in time) error estimates under the uniqueness condition are also established.
In the present article, we discuss the backward Euler method for discretization in the temporal variable and Galerkin approximations for discretization in the spatial variables for approximating solutions of the problem (\ref{om})-(\ref{ibc}). Our main aim in this work is to present optimal error estimates for the backward Euler method when the initial data is non-smooth, that is, ${\bf u}_0\in{\bf J}_1.$ The main results of this paper are as follows: \begin{itemize} \item [(i)] Proving a bound in the Dirichlet norm for the solution of the completely discrete backward Euler method which is uniform in time. \item [(ii)] Deriving new estimates, valid uniformly in time, for the error associated with the discrete linearized problem. \item [(iii)] Establishing estimates for the error related to the nonlinear part, in which the error constant depends exponentially on time, thereby making the final error estimate for the velocity depend exponentially on time. \item [(iv)] Proving optimal error estimates for the velocity in the ${\bf L}^2$-norm which are uniform in time under the uniqueness assumption. \end{itemize} To prove an estimate in the Dirichlet norm for the discrete solution which is valid for all time, the usual tool, in the case of the Navier-Stokes equations, is to apply a discrete version of the uniform Gronwall lemma. Now, for proving $(i)$, it is difficult to apply the uniform Gronwall lemma due to the presence of the discrete version of the integral term. Therefore, a new way of organizing the proof is used to achieve $(i)$; see Lemma 4.3. For $(ii)$-$(iii)$, we observe that there are difficulties due to the nonlinear term along with the presence of the integral term in the case of non-smooth initial data. For example, the preliminary result (the $L^{\infty}({\bf L}^2)$ estimate) is sub-optimal due to non-smooth initial data (see Lemma \ref{pree}). In order to compensate for the loss in the order of convergence, a more appropriate tool is to multiply by $t$. But, unfortunately, this fails here due to the presence of the integral term (or the summation term). To overcome this difficulty, we modify some tools from the error analysis of linear parabolic integro-differential equations with non-smooth data (see \cite{PS98, PS198, TZ89}) to fit the present problem, and special care is taken to bound the nonlinear term. Our analysis makes use of the solution, say ${\bf V}^n$, of a linearized discrete problem (see (5.5)) as an intermediate solution. Then, with its help, we split the error ${\bf u}_h^n-{\bf U}^n$ at time level $t=t_n,$ where ${\bf u}_h^n={\bf u}_h(t_n)$ is the solution of the semi-discrete scheme at $t=t_n$ and ${\bf U}^n$ is the solution of the backward Euler method, into two components: $\mbox{\boldmath $\xi$}^n:={\bf u}_h^n-{\bf V}^n,$ which denotes the contribution due to the linearized part (see (\ref{errsplit})), and $\mbox{\boldmath $\eta$}^n:={\bf U}^n-{\bf V}^n,$ which is due to the nonlinearity (see (\ref{errsplit})). Using a backward discrete linear problem and a duality type argument, along with an estimate of $\hat{\mbox{\boldmath $\xi$}^n},$ where $$\hat{\mbox{\boldmath $\xi$}^n}:= k \sum_{j=0}^n \mbox{\boldmath $\xi$}^j,$$ an $L^2$-estimate of $\mbox{\boldmath $\xi$}^n$ which is valid for all time is derived; refer to Theorem 5.1. For the $L^2$ estimate of $\mbox{\boldmath $\eta$}^n,$ we employ a negative norm estimate and an $L^2$ estimate of $\hat{\mbox{\boldmath $\eta$}}^n$ and obtain an estimate which depends exponentially on time; see Lemma 5.9.
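To fix ideas, the time discretization studied below can be illustrated on the scalar model problem $u'(t)+\mu u(t)+\int_0^t\gamma e^{-\delta(t-s)}u(s)\,ds=f(t)$, which mimics (\ref{om}) with the spatial operators replaced by scalars. The following minimal Python sketch (the model and all parameter values are ours, chosen only for illustration) applies the backward Euler method with the right rectangle rule for the memory term, exploiting the exponential kernel to update the quadrature sum in $O(1)$ work per step:
\begin{verbatim}
import numpy as np

mu, gamma, delta = 1.0, 0.5, 1.0    # illustrative coefficients
k, N = 1.0e-3, 5000                 # time step and number of steps
f = lambda t: np.sin(t)             # placeholder forcing

U = np.zeros(N + 1)
U[0] = 1.0                          # initial datum
Q = 0.0                             # memory term q_r^{n-1}
for n in range(1, N + 1):
    # exponential kernel: q_r^n = k*gamma*U^n + exp(-delta*k)*q_r^{n-1};
    # backward Euler: (U^n - U^{n-1})/k + mu*U^n + q_r^n = f(t_n)
    U[n] = (U[n-1]/k + f(n*k) - np.exp(-delta*k)*Q) / (1.0/k + mu + k*gamma)
    Q = k*gamma*U[n] + np.exp(-delta*k)*Q

print(U[-1])
\end{verbatim}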
One of the main results for non-smooth initial data, derived in Theorem 5.2, is the following: \begin{equation} \label{main-velocity-estimate} \|{\bf u}(t_n)-{\bf U}^n\| \le K_T t_n^{-1/2}\big(h^2+k(1+\log \frac{1}{k})^{1/2}\big), \end{equation} where $K_T$ depends exponentially on time. Finally, for the proof of $(iv),$ by a careful use of the uniqueness condition it is also shown that the error estimate (\ref{main-velocity-estimate}) is valid for all time. The remaining part of this paper is organized as follows. In Section $2$, we discuss some notation, basic assumptions and weak formulations. In Section $3$, a semidiscrete Galerkin method is discussed briefly. Section $4$ is devoted to the backward Euler method. In Section $5$, optimal and uniform error bounds are obtained for the velocity when the initial data are in ${\bf J}_1.$ \section{Preliminaries} \setcounter{equation}{0} For our subsequent use, we denote by boldface letters the $\mathbb{R}^2$-valued function spaces such as \begin{eqnarray*} {\bf H}_0^1 = [H_0^1(\Omega)]^2, \;\;\; {\bf L}^2 = [L^2(\Omega)]^2 ~~\mbox {and }\;\; {\bf H}^m=[H^m(\Omega)]^2, \end{eqnarray*} where $H^m(\Omega)$ is the standard Hilbert Sobolev space of order $m$. Note that ${\bf H}^1_0$ is equipped with the norm $$ \|\nabla{\bf v}\|= \left(\sum_{i,j=1}^{2}(\partial_j v_i, \partial_j v_i)\right)^{1/2}=\left(\sum_{i=1}^{2}(\nabla v_i, \nabla v_i)\right)^{1/2}. $$ Further, we introduce some more function spaces for our future use: \begin{eqnarray*} {\bf J}_1 &=& \{{\mbox{\boldmath $\phi$}} \in {\bf {H}}_0^1 : \nabla \cdot \mbox{\boldmath $\phi$} = 0\},\\ {\bf J} &=& \{\mbox{\boldmath $\phi$} \in {\bf {L}}^2 :\nabla \cdot \mbox{\boldmath $\phi$} = 0~~{\mbox {\rm in}}~~ \Omega,~\mbox{\boldmath $\phi$}\cdot{\bf{n}}|_{\partial \Omega}=0~~{\mbox {\rm holds}~~{\rm weakly}}\}, \end{eqnarray*} where ${\bf {n}}$ is the outward normal to the boundary $\partial \Omega$ and $\mbox{\boldmath $\phi$} \cdot {\bf {n}} |_{\partial \Omega} = 0$ should be understood in the sense of trace in ${\bf H}^{-1/2}(\partial \Omega)$, see \cite{temam}. Let $H^m/\mathbb{R}$ be the quotient space consisting of equivalence classes of elements of $H^m$ differing by constants, equipped with the norm $\| p\|_{H^m /\mathbb{R}}=\| p+c\|_m$. For any Banach space $X$, let $L^p(0, T; X)$ denote the space of measurable $X$-valued functions $\mbox{\boldmath $\phi$}$ on $ (0,T) $ such that $$ \int_0^T \|\mbox{\boldmath $\phi$} (t)\|^p_X~dt <\infty~~~{\mbox {\rm if}}~~1 \le p < \infty, $$ and $$ {\displaystyle{ess \sup_{0<t<T}}} \|\mbox{\boldmath $\phi$} (t)\|_X <\infty~~~{\mbox {\rm if} }~~p=\infty. $$ Throughout this paper, we make the following assumptions:\\ (${\bf A1}$). For ${\bf {g}} \in {\bf L}^2$, let the unique pair of solutions $\{{\bf v} \in {\bf{J}}_1, q \in L^2 /\mathbb{R}\} $ of the steady state Stokes problem \begin{eqnarray*} -\Delta {{\bf v}} + \nabla q = {\bf {g}},\\ \nabla \cdot{\bf v} = 0\;\;\; {\mbox {\rm in} }~~~\Omega,~~~~{\bf v}|_{\partial \Omega}=0, \end{eqnarray*} satisfy the following regularity result: $$ \| {\bf v} \|_2 + \|q\|_{H^1 /\mathbb{R}} \le C\|{\bf {g}}\|. $$ \noindent (${\bf A2}$).
The initial velocity ${\bf u}_0$ and the external force ${\bf f}$ satisfy, for a positive constant $M_0$ and for $T$ with $0<T \leq \infty$, $$ {\bf u}_0\in {\bf J}_1,~{\bf f},{\bf f}_t \in L^{\infty} (0, T ; {\bf L}^2)~~~{\mbox {\rm with}}~~~\|{\bf u}_0\|_1 \le M_0,~~{\displaystyle{\sup_{0<t<T} }}\big\{\|{\bf f}\|, \|{\bf f}_t\|\big\} \le M_0. $$ For our subsequent analysis, we present the following lemma, which can be seen as a discrete version of Lemma 2.2 from \cite{PY05}. \begin{lemma} Let $g_i,\phi^i \in \mathbb{R},~0\le i \le n,~n\in \mathbb{N}$ and $0<k<1$. Then the following estimate holds: $$ \Big(k\sum_{i=1}^n\big(k\sum_{j=1}^i g_{i-j} \phi^j\big)^2 \Big)^{1/2} \le \Big(k\sum_{i=0}^{n-1} |g_i|\Big) \Big(k\sum_{i=1}^n |\phi^i|^2\Big)^{1/2}. $$ \end{lemma} \section{Semidiscrete Galerkin Approximations} \setcounter{equation}{0} From now on, $h$ with $0<h<1$ denotes a real positive discretization parameter tending to zero. Let ${\bf H}_h$ and $L_h$, $0<h<1$, be two families of finite-dimensional subspaces of ${\bf H}_0^1 $ and $L^2$, respectively, approximating the velocity vector and the pressure. Assume that the following approximation properties are satisfied for the spaces ${\bf H}_h$ and $L_h$: \\ ${\bf (B1)}$ For each ${\bf w} \in {\bf {H}}_0^1 \cap {\bf {H}}^2 $ and $ q \in H^1/\mathbb{R}$ there exist approximations $i_h {\bf w} \in {\bf {H}}_h $ and $ j_h q \in L_h $ such that $$ \|{\bf w}-i_h{\bf w}\|+ h \| \nabla ({\bf w}-i_h {\bf w})\| \le K_0 h^2 \| {\bf w}\|_2, ~~~~\| q - j_h q \|_{L^2 /\mathbb{R}} \le K_0 h \| q\|_{H^1 /\mathbb{R}}. $$ Further, suppose that the following inverse hypothesis holds for ${\bf w}_h\in{\bf H}_h$: \begin{align}\label{inv.hypo} \|\nabla {\bf w}_h\| \leq K_0 h^{-1} \|{\bf w}_h\|. \end{align} For defining the Galerkin approximations, set for ${\bf v}, {\bf w}, \mbox{\boldmath $\phi$} \in {\bf H}_0^1$, $$ a({\bf v}, \mbox{\boldmath $\phi$}) = (\nabla {\bf v}, \nabla \mbox{\boldmath $\phi$}) $$ and $$ b({\bf v}, {\bf w},\mbox{\boldmath $\phi$})= \frac{1}{2} ({\bf v} \cdot \nabla {\bf w} , \mbox{\boldmath $\phi$}) - \frac{1}{2} ({\bf v} \cdot \nabla \mbox{\boldmath $\phi$}, {\bf w}). $$ Note that the operator $b(\cdot, \cdot, \cdot)$ preserves the antisymmetric property of the original nonlinear term, that is, $$ b({\bf v}_h, {\bf w}_h, {\bf w}_h) = 0 \;\;\; \forall {\bf v}_h, {\bf w}_h \in {{\bf H}}_h. $$ Now, the semidiscrete Galerkin formulation reads: Find ${\bf u}_h(t) \in {\bf H}_h$ and $p_h(t) \in L_h$ such that $ {\bf u}_h(0)= {\bf u}_{0h} $ and for $t>0$ \begin{eqnarray}\label{dwfh} ({\bf u}_{ht}, \mbox{\boldmath $\phi$}_h) +\mu a ({\bf u}_h,\mbox{\boldmath $\phi$}_h) &+& b({\bf u}_h,{\bf u}_h,\mbox{\boldmath $\phi$}_h)+ a(\bu_{h,{\beta}}, \mbox{\boldmath $\phi$}_h) -(p_h, \nabla \cdot \mbox{\boldmath $\phi$}_h) =({\bf f}, \mbox{\boldmath $\phi$}_h), \nonumber \\ &&(\nabla \cdot {\bf u}_h, \chi_h) =0, \end{eqnarray} for $\mbox{\boldmath $\phi$}_h\in{\bf H}_h,~\chi_h \in L_h$. Here ${\bf u}_{0h} \in {\bf H}_h $ is a suitable approximation of ${\bf u}_0\in {\bf J}_1$ and \begin{equation}\label{uhb} \bu_{h,{\beta}}(t)=\int_0^t \beta(t-s) {\bf u}_h(s)~ds. \end{equation} \noindent In order to consider a discrete space analogous to ${\bf J}_1$, we impose the discrete incompressibility condition on ${\bf H}_h$ and call the resulting space ${\bf J}_h$. Thus, we define $$ {\bf J}_h = \{ {\bf v}_h \in {\bf H}_h : (\chi_h,\nabla\cdot {\bf v}_h)=0 ~~~\forall \chi_h \in L_h \}. $$ Note that ${\bf J}_h$ is not a subspace of ${\bf J}_1$.
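Before proceeding, we note that the discrete convolution inequality stated in the lemma of Section 2 can be sanity-checked numerically; a minimal Python sketch with random scalar sequences reads:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
k, n = 0.01, 400
g   = rng.standard_normal(n)        # g_0, ..., g_{n-1}
phi = rng.standard_normal(n + 1)    # phi^0, ..., phi^n

# q^i = k * sum_{j=1}^{i} g_{i-j} phi^j,  i = 1, ..., n
q = np.array([k*np.dot(g[:i][::-1], phi[1:i + 1]) for i in range(1, n + 1)])

lhs = np.sqrt(k*np.sum(q**2))
rhs = (k*np.sum(np.abs(g)))*np.sqrt(k*np.sum(phi[1:]**2))
print(lhs <= rhs)                   # expected: True
\end{verbatim}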
With ${\bf J}_h$ as above, we now introduce an equivalent Galerkin formulation: Find ${\bf u}_h(t)\in {\bf J}_h $ such that ${\bf u}_h(0) = {\bf u}_{0h} $ and for $t>0$ \begin{eqnarray}\label{dwfj} ~~~~ ({\bf u}_{ht},\mbox{\boldmath $\phi$}_h) +\mu a ({\bf u}_h,\mbox{\boldmath $\phi$}_h) + a(\bu_{h,{\beta}}, \mbox{\boldmath $\phi$}_h) = -b( {\bf u}_h, {\bf u}_h, \mbox{\boldmath $\phi$}_h)+({\bf f},\mbox{\boldmath $\phi$}_h)~~\forall \mbox{\boldmath $\phi$}_h \in {\bf J}_h. \end{eqnarray} Since ${\bf J}_h$ is finite dimensional, the problem (\ref{dwfj}) leads to a system of nonlinear integro-differential equations. For global existence of a solution pair of (\ref{dwfj}), we refer to \cite{PY05}. Uniqueness (of $p_h$) is obtained on the quotient space $L_h/N_h$, where $$ N_h=\{q_h\in L_h:(q_h, \nabla\cdot\mbox{\boldmath $\phi$}_h)=0~\forall \mbox{\boldmath $\phi$}_h\in{\bf H}_h\}. $$ The norm on $ L_h/N_h $ is given by $$ \| q_h\|_{L^2/N_h} = {\displaystyle{\inf_{\chi_h \in N_h} }} \| q_h + \chi_h\|. $$ For continuous dependence of the discrete pressure $p_h (t) \in L_h/N_h$ on the discrete velocity ${\bf u}_h(t) \in {\bf J}_h$, we assume the following discrete inf-sup (LBB) condition for the finite dimensional spaces ${\bf H}_h$ and $L_h$:\\ \noindent ${\bf (B2')}$ For every $q_h \in L_h$, there exists a non-trivial function $\mbox{\boldmath $\phi$}_h \in {\bf H}_h$ and a positive constant $K_0,$ independent of $h,$ such that $$ |(q_h, \nabla\cdot \mbox{\boldmath $\phi$}_h)| \ge K_0 \|\nabla \mbox{\boldmath $\phi$}_h \|\| q_h\|_{L^2/N_h}. $$ Moreover, we also assume that the following approximation property holds for ${\bf J}_h $. \\ \noindent ${\bf (B2)}$ For every ${\bf w} \in {\bf J}_1 \cap {\bf H}^2, $ there exists an approximation $r_h {\bf w} \in {\bf J_h}$ such that $$ \|{\bf w}-r_h{\bf w}\|+h \| \nabla ({\bf w} - r_h {\bf w}) \| \le K_5 h^2 \|{\bf w}\|_2 . $$ This is a less restrictive condition than (${\bf B2'}$), and it has been used to derive the following properties of the $L^2$ projection $P_h:{\bf L}^2\mapsto {\bf J}_h$, which we state without proof; for a proof, see \cite{HR82}. For $\mbox{\boldmath $\phi$} \in {\bf J}_1$, note that \begin{equation}\label{ph1} \|\mbox{\boldmath $\phi$}- P_h \mbox{\boldmath $\phi$}\|+ h \|\nabla (\mbox{\boldmath $\phi$}-P_h \mbox{\boldmath $\phi$})\| \leq C h\|\nabla \mbox{\boldmath $\phi$}\|, \end{equation} and for $\mbox{\boldmath $\phi$} \in {\bf J}_1 \cap {\bf H}^2,$ \begin{equation}\label{ph2} \|\mbox{\boldmath $\phi$}-P_h\mbox{\boldmath $\phi$}\|+h\|\nabla(\mbox{\boldmath $\phi$}-P_h \mbox{\boldmath $\phi$})\|\le C h^2\|{\tilde\Delta}\mbox{\boldmath $\phi$}\|. \end{equation} We now define the discrete operator $\Delta_h: {\bf H}_h \mapsto {\bf H}_h$ through the bilinear form $a (\cdot, \cdot)$ as \begin{eqnarray}\label{do} a({\bf v}_h, \mbox{\boldmath $\phi$}_h) = (-\Delta_h{\bf v}_h, \mbox{\boldmath $\phi$}_h)~~~~\forall {\bf v}_h, \mbox{\boldmath $\phi$}_h\in{\bf H}_h. \end{eqnarray} Set the discrete analogue of the Stokes operator ${\tilde\Delta} =P \Delta $ as ${\tilde\Delta}_h = P_h \Delta_h $. Using the Sobolev imbedding and the Sobolev inequality, it is easy to prove the following lemma. \begin{lemma}\label{nonlin} Suppose conditions (${\bf A1}$), (${\bf B1}$) and (${\bf B2}$) are satisfied.
Then there exists a positive constant $K$ such that for ${\bf v},{\bf w},\mbox{\boldmath $\phi$}\in{\bf H}_h$, the following holds: \begin{equation}\label{nonlin1} |({\bf v}\cdot\nabla{\bf w},\mbox{\boldmath $\phi$})| \le K \left\{ \begin{array}{l} \|{\bf v}\|^{1/2}\|\nabla{\bf v}\|^{1/2}\|\nabla{\bf w}\|^{1/2}\|\Delta_h{\bf w}\|^{1/2} \|\mbox{\boldmath $\phi$}\|, \\ \|{\bf v}\|^{1/2}\|\Delta_h{\bf v}\|^{1/2}\|\nabla{\bf w}\|\|\mbox{\boldmath $\phi$}\|, \\ \|{\bf v}\|^{1/2}\|\nabla{\bf v}\|^{1/2}\|\nabla{\bf w}\|\|\mbox{\boldmath $\phi$}\|^{1/2} \|\nabla\mbox{\boldmath $\phi$}\|^{1/2}, \\ \|{\bf v}\|\|\nabla{\bf w}\|\|\mbox{\boldmath $\phi$}\|^{1/2}\|\Delta_h\mbox{\boldmath $\phi$}\|^{1/2}, \\ \|{\bf v}\|\|\nabla{\bf w}\|^{1/2}\|\Delta_h{\bf w}\|^{1/2}\|\mbox{\boldmath $\phi$}\|^{1/2} \|\nabla\mbox{\boldmath $\phi$}\|^{1/2}. \end{array}\right. \end{equation} \end{lemma} \noindent Examples of subspaces ${\bf H}_h$ and $L_h$ satisfying assumptions (${\bf B1}$), (${\bf B2}'$), and (${\bf B2}$) can be found in \cite{GR, BP, BF}. \\ We present below a lemma that deals with higher-order estimates of ${\bf u}_h$, which will be useful in the error analysis of the backward Euler method for non-smooth data. \begin{lemma}\label{dth2} Suppose conditions (${\bf A1}$), (${\bf B1}$), (${\bf B2}$) and (${\bf B2'}$) are satisfied. Moreover, let ${\bf u}_h(0)\in{\bf J}_h$ and let ${\bf f}$ satisfy the assumption (${\bf A2}$). Then ${\bf u}_h$, the solution of the semidiscrete Oldroyd problem (\ref{dwfj}), satisfies the following {\it a priori} estimates: \begin{align} \tau^*\|{\bf u}_h\|_2^2+(\tau^*)^{r+1}\|{\bf u}_{ht}\|_r^2 & \le K,~~~~r\in \{0,1\}, \label{dth11} \\ e^{-2\alpha t}\int_0^t e^{2\alpha s}(\tau^*)^r(s)\|{\bf u}_{hs}\|^2_r\,ds & \le K, ~~~~r\in \{0,1,2\}, \label{dth12} \\ e^{-2\alpha t} \int_0^t e^{2\alpha s}(\tau^*)^{r+1}(s)\|{\bf u}_{hss}\|_{r-1}^2~ds & \le K,~~~~r\in \{-1,0,1\}, \label{dth13} \end{align} where $\tau^*(t)=\min\{1,t\}$, $\sigma(t) = \tau^*(t) e^{2\alpha t}$, and $K$ depends on the given data, but not on the time $T$. \end{lemma} \begin{proof} The estimates (\ref{dth11})-(\ref{dth12}) can be proved as in the continuous case; see \cite{GP11}. For the final estimate, we differentiate (\ref{dwfj}) with respect to time to find that, for $\mbox{\boldmath $\phi$}_h \in {\bf J}_h,$ \begin{eqnarray}\label{dwfjt} ({\bf u}_{htt},\mbox{\boldmath $\phi$}_h) +\mu a({\bf u}_{ht},\mbox{\boldmath $\phi$}_h) & + & \beta(0)a({\bf u}_h,\mbox{\boldmath $\phi$}_h)-\delta \int_0^t \beta(t-s) a({\bf u}_h(s), \mbox{\boldmath $\phi$}_h)~ds \nonumber \\ & = & -b({\bf u}_{ht},{\bf u}_h,\mbox{\boldmath $\phi$}_h)-b({\bf u}_h,{\bf u}_{ht},\mbox{\boldmath $\phi$}_h)+({\bf f}_t,\mbox{\boldmath $\phi$}_h).
\end{eqnarray} Taking $\mbox{\boldmath $\phi$}_h=(\tau^*)^2(t)e^{2\alpha t}{\bf u}_{htt}$ in (\ref{dwfjt}), we obtain \begin{align}\label{dth001} (\tau^*)^2(t) & e^{2\alpha t}\|{\bf u}_{htt}\|^2+\frac{\mu}{2}\frac{d}{dt}\big( (\tau^*)^2(t) e^{2 \alpha t}\|{\bf u}_{ht}\|_1^2\big) \le \big(\alpha (\tau^*)^2(t)+\tau^*(t)\big) e^{2\alpha t} \|{\bf u}_{ht}\|^2 \nonumber \\ & +\gamma (\tau^*)^2(t)e^{2\alpha t}\|{\bf u}_h\|_2\|{\bf u}_{htt}\|+\delta (\tau^*)^2(t) e^{2\alpha t}\int_0^t \beta(t-s) \|{\bf u}_h(s)\|_2\|{\bf u}_{htt}\|~ds \nonumber \\ &+(\tau^*)^2(t)e^{2\alpha t}\big(|b({\bf u}_{ht},{\bf u}_h,{\bf u}_{htt})|+ |b({\bf u}_h,{\bf u}_{ht},{\bf u}_{htt})|+\|{\bf f}_t\|\|{\bf u}_{htt}\|\big). \end{align} Use (\ref{nonlin1}) to find that \begin{align*} |b({\bf u}_{ht},{\bf u}_h,{\bf u}_{htt})|+ |b({\bf u}_h,{\bf u}_{ht},{\bf u}_{htt})| \le \frac{1}{4} \|{\bf u}_{htt}\|^2+K\|{\bf u}_{ht}\|_1^2\|{\bf u}_h\|_2^2. \end{align*} Now, using (\ref{dth11})-(\ref{dth12}), we can easily deduce from (\ref{dth001}) that \begin{equation}\label{dth005} (\tau^*)^2\|{\bf u}_{ht}\|_1^2+\mu e^{-2\alpha t}\int_0^t (\tau^*)^2(s) e^{2\alpha s}\|{\bf u}_{hss}\|^2~ds \le K. \end{equation} We set $\mbox{\boldmath $\phi$}_h=-\tau^*(t) e^{2\alpha t}{\tilde\Delta}_h^{-1}{\bf u}_{htt}$ in (\ref{dwfjt}). From (\ref{nonlin1}), we see that $$ b({\bf u}_{ht},{\bf u}_h,{\tilde\Delta}_h^{-1}{\bf u}_{htt}) \le K\|{\bf u}_{ht}\|^{1/2}\|{\bf u}_{ht}\|_1^{1/2} \|{\bf u}_h\|_1\| {\bf u}_{htt}\|_{-1} $$ and therefore \begin{align*} \mu\frac{d}{dt}(\tau^*(t) e^{2\alpha t}\|{\bf u}_{ht}\|^2) +\tau^*(t) e^{2\alpha t} \|{\bf u}_{htt}\|_{-1}^2 \le \big(2\alpha \tau^*(t)+1\big) e^{2\alpha t}\|{\bf u}_{ht}\|_1^2 \\ +C(\mu,\gamma) \tau^*(t) e^{2\alpha t} \|\nabla{\bf u}_h\|^2 +2\|{\bf f}_t\|^2 +C(\mu,\delta)\Big(\int_0^t \beta(t-s) e^{\alpha t}\|{\tilde\Delta}_h{\bf u}_h(s)\|~ds\Big)^2 \\ +C(\mu) \tau^*(t) e^{2\alpha t}\Big(\|\nabla{\bf u}_h\|^2\|{\bf u}_{ht}\|^2 +\|\nabla{\bf u}_{ht}\|^2(1+ \|{\bf u}_h\|\|\nabla{\bf u}_h\|)\Big). \end{align*} Integrate with respect to time and multiply by $e^{-2\alpha t}$ to conclude \begin{equation}\label{dth006} \tau^*(t)\|{\bf u}_{ht}\|^2+\mu e^{-2\alpha t}\int_0^t \tau^*(s) e^{2\alpha s} \|{\bf u}_{hss}\|_{-1}^2 ds \le K. \end{equation} Finally, we set $\mbox{\boldmath $\phi$}_h= -e^{2\alpha t}{\tilde\Delta}_h^{-2}{\bf u}_{htt}$ in (\ref{dwfjt}) and proceed as above to arrive at \begin{equation}\label{dth007} \|{\bf u}_{ht}\|_{-1}^2+\mu e^{-2\alpha t}\int_0^t e^{2\alpha s} \|{\bf u}_{hss}\|_{-2}^2 ds \le K. \end{equation} This completes the rest of the proof. \end{proof} \noindent The following semi-discrete error estimates are proved in \cite{GP11}. \begin{theorem}\label{errest} Let $\Omega$ be a convex polygon and let the conditions (${\bf A1}$)-(${\bf A2}$) and (${\bf B1}$)-(${\bf B2}$) be satisfied.
Further, let the discrete initial velocity ${\bf u}_{0h}\in {\bf J}_h$ with ${\bf u}_{0h}=P_h{\bf u}_0,$ where ${\bf u}_0\in {\bf J}_1.$ Then, there exists a positive constant $C$ such that for $0<T<\infty $ with $t\in (0,T]$ $$ \|({\bf u}-{\bf u}_h)(t)\|+h\|\nabla({\bf u}-{\bf u}_h)(t)\|\le Ce^{Ct}h^2t^{-1/2}.$$ Moreover, under the uniqueness condition, that is, \begin{equation}\label{uc} \frac{N}{\nu^2}\|{\bf f}\|_{\infty} < 1~~~\mbox{with}~~N =\sup_{{\bf u}, {\bf v},{\bf w}\in{\bf H}_0^1(\Omega)}\frac{b({\bf u},{\bf v},{\bf w})}{\|\nabla{\bf u}\|\|\nabla{\bf v}\| \|\nabla{\bf w}\|}, \end{equation} where $\nu=\mu+\frac{\gamma}{\delta}$ and $\|{\bf f}\|_{\infty} :=\|{\bf f}\|_{L^\infty(0, \infty; {\bf L}^2(\Omega))}$, we have the following uniform estimate: $$ \|({\bf u}-{\bf u}_h)(t)\| \le Ch^2t^{-1/2}. $$ \end{theorem} \section{Backward Euler Method} \setcounter{equation}{0} For the time discretization, we first fix some notation. Let $k,~0<k<1,$ be the time step and let $t_n=nk,~n\ge 0.$ We define, for a sequence $\{\mbox{\boldmath $\phi$}^n\}_{n \ge 0}\subset{\bf J}_h,$ $$ \partial_t\mbox{\boldmath $\phi$}^n=\frac{1}{k}(\mbox{\boldmath $\phi$}^n-\mbox{\boldmath $\phi$}^{n-1}). $$ For a continuous function ${\bf v}(t),$ we set ${\bf v}_n={\bf v}(t_n).$ Since the backward Euler method is of first order in time, we choose the right rectangle rule to approximate the integral term in (\ref{dwfj}): \begin{equation}\label{rrr} q_r^n(\mbox{\boldmath $\phi$})=k\sum_{j=1}^{n}\beta_{n-j}\mbox{\boldmath $\phi$}^j\approx \int_0^{t_n} \beta(t_n-s)\mbox{\boldmath $\phi$}(s)~ds, \end{equation} where $\beta_{n-j}=\beta(t_n-t_j).$ With $\omega_{nj}= k\beta(t_n-t_j),$ it is observed that the right rectangle rule is positive in the sense that \begin{equation}\label{rrp} k\sum_{i=1}^n q_r^i(\phi)\phi^i= k\sum_{i=1}^n k\sum_{j=0}^i \omega_{ij}\phi^j \phi^i \ge 0,~~~~\phi=(\phi^0,\cdots,\phi^N)^T. \end{equation} For the positivity of the rectangle rule with $\omega_{n0}=0,$ we refer to McLean and Thom{\'e}e \cite{MT}. Note that the error incurred by the right rectangle rule in approximating the integral term is \begin{align}\label{errrr} \varepsilon_r^n (\mbox{\boldmath $\phi$}) & := \int_0^{t_n} \beta(t_n-s)\mbox{\boldmath $\phi$}(s)~ds-k\sum_{j=1}^{n}\beta_{n-j}\mbox{\boldmath $\phi$}^j \\ &\le Kk\sum_{j=1}^{n}\int_{t_{j-1}}^{t_j}\Big|\frac{\partial}{\partial s}(\beta(t_n-s) \mbox{\boldmath $\phi$}(s))\Big|~ds. \nonumber \end{align} We present here a discrete version of integration by parts. For sequences $\{a_i\}$ and $\{b_i\}$ of real numbers, the following summation by parts holds: \begin{equation}\label{sumbp} k\sum_{j=1}^i a_jb_j= a_i\hat{b}_i-k\sum_{j=1}^{i-1} (\partial_t a_{j+1})\hat{b}_j, \end{equation} where $\hat{b}_i:=k\sum_{j=1}^i b_j.$ \\ We describe below the backward Euler scheme for the semidiscrete Oldroyd problem (\ref{dwfh}): Find $\{{\bf U}^n\}_{n\ge 0}\in{\bf H}_h$ and $\{P^n\}_{n\ge 1}\in L_h$ as solutions of the recursive nonlinear algebraic equations ($n\ge 1$): \begin{equation}\left.\begin{array}{rcl}\label{fdbeh} (\partial_t{\bf U}^n,\mbox{\boldmath $\phi$}_h)+\mu a({\bf U}^n,\mbox{\boldmath $\phi$}_h) &+& a(q_r^n({\bf U}),\mbox{\boldmath $\phi$}_h) = (P^n,\nabla\cdot \mbox{\boldmath $\phi$}_h) \\ &+& ({\bf f}^n,\mbox{\boldmath $\phi$}_h)-b({\bf U}^n,{\bf U}^n,\mbox{\boldmath $\phi$}_h)~~~\forall\mbox{\boldmath $\phi$}_h\in{\bf H}_h, \\ (\nabla\cdot{\bf U}^n,\chi_h)&=& 0~~~\forall \chi_h \in L_h,~~~n\ge 0.
\end{array}\right\} \end{equation} We choose ${\bf U}^0={\bf u}_{0h}=P_h{\bf u}_0.$ Now, for $\mbox{\boldmath $\phi$}_h\in{\bf J}_h,$ we seek $\{{\bf U}^n\}_{n\ge 1}\in{\bf J}_h$ such that \begin{equation}\label{fdbej} (\partial_t{\bf U}^n,\mbox{\boldmath $\phi$}_h)+\mu a({\bf U}^n,\mbox{\boldmath $\phi$}_h)+a(q_r^n({\bf U}),\mbox{\boldmath $\phi$}_h)= ({\bf f}^n,\mbox{\boldmath $\phi$}_h) -b({\bf U}^n,{\bf U}^n,\mbox{\boldmath $\phi$}_h)~~~\forall\mbox{\boldmath $\phi$}_h\in{\bf J}_h. \end{equation} Using a variant of the Brouwer fixed point theorem and standard uniqueness arguments, it is easy to show that the discrete problem (\ref{fdbej}) is well-posed. For a proof, we refer to \cite{G11}. Below we prove {\it a priori} bounds for the discrete solutions $\{{\bf U}^n\}_{n>0}.$ \begin{lemma}\label{stb} Let $0<\alpha<\min\{\delta,\frac{\mu\lambda_1}{2}\}$ and let $k_0>0$ be such that for $0<k<k_0$ $$ 1+\big(\frac{\mu\lambda_1}{2}\big)k\ge e^{\alpha k}. $$ Further, let ${\bf U}^0={\bf u}_{0h}=P_h{\bf u}_0$ with ${\bf u}_0\in{\bf J}_1.$ Then, the discrete solution ${\bf U}^N,~N\ge 1,$ of (\ref{fdbej}) satisfies the following estimate: \begin{align}\label{stb1a} \|{\bf U}^N\|^2+\Gamma_1 e^{-2\alpha t_N}k\sum_{n=1}^N e^{2\alpha t_n}\|\nabla{\bf U}^n\|^2 \le C \Big(e^{-2\alpha t_N}\|{\bf U}^0\|^2+\|{\bf f}\|_{\infty}^2\Big), \end{align} where $\|{\bf f}\|_{\infty}=\|{\bf f}\|_{L^{\infty}({\bf L}^2)}$ and $$\Gamma_1= \Big(e^{-\alpha k}\mu-2\big(\frac{1-e^{-\alpha k}}{k}\big) \lambda_1^{-1}\Big). $$ \end{lemma} \begin{proof} Setting $\tilde{\bU}^n=e^{\alpha t_n}{\bf U}^n,$ we rewrite (\ref{fdbej}), for $\mbox{\boldmath $\phi$}_h\in{\bf J}_h,$ as \begin{equation}\label{tfdbej} e^{\alpha t_n}(\partial_t{\bf U}^n,\mbox{\boldmath $\phi$}_h)+\mu a(\tilde{\bU}^n,\mbox{\boldmath $\phi$}_h)+e^{-\alpha t_n}b(\tilde{\bU}^n,\tilde{\bU}^n, \mbox{\boldmath $\phi$}_h)+e^{\alpha t_n}a(q_r^n({\bf U}),\mbox{\boldmath $\phi$}_h)= (\tilde{\f}^n,\mbox{\boldmath $\phi$}_h). \end{equation} Note that $$ e^{\alpha t_n}\partial_t{\bf U}^n= e^{\alpha k}\partial_t\tilde{\bU}^n-\Big(\frac{e^{\alpha k}-1}{k}\Big)\tilde{\bU}^n. $$ On substituting this in (\ref{tfdbej}) and then multiplying the resulting equation by $e^{-\alpha k},$ we obtain \begin{align}\label{stb001} (\partial_t\tilde{\bU}^n,\mbox{\boldmath $\phi$}_h)-\Big(\frac{1-e^{-\alpha k}}{k}\Big)(\tilde{\bU}^n,\mbox{\boldmath $\phi$}_h)+e^{-\alpha k} \mu a(\tilde{\bU}^n,\mbox{\boldmath $\phi$}_h)+e^{-\alpha t_{n+1}}b(\tilde{\bU}^n,\tilde{\bU}^n,\mbox{\boldmath $\phi$}_h) \nonumber \\ +\gamma e^{-\alpha k}k\sum_{i=1}^n e^{-(\delta-\alpha)(t_n-t_i)} a(\tilde{\bU}^i,\mbox{\boldmath $\phi$}_h) =e^{-\alpha k}(\tilde{\f}^n,\mbox{\boldmath $\phi$}_h). \end{align} Put $\mbox{\boldmath $\phi$}_h=\tilde{\bU}^n$ in (\ref{stb001}) and observe that $$ (\partial_t\mbox{\boldmath $\phi$}^n,\mbox{\boldmath $\phi$}^n)=\frac{1}{k}(\mbox{\boldmath $\phi$}^n-\mbox{\boldmath $\phi$}^{n-1},\mbox{\boldmath $\phi$}^n) \ge \frac{1}{2k} (\|\mbox{\boldmath $\phi$}^n\|^2-\|\mbox{\boldmath $\phi$}^{n-1}\|^2)=\frac{1}{2}\partial_t\|\mbox{\boldmath $\phi$}^n\|^2, $$ and that the nonlinear term vanishes.
Also using $\|\tilde{\bU}^n\|^2 \le \frac{1}{\lambda_1}\|\nabla\tilde{\bU}^n\|^2$, we obtain \begin{align}\label{stb002} \frac{1}{2}\partial_t\|\tilde{\bU}^n\|^2 +& \Big(e^{-\alpha k}\mu-\big(\frac{1-e^{-\alpha k}}{k}\big) \lambda_1^{-1}\Big)\|\nabla\tilde{\bU}^n\|^2 \nonumber \\ +& \gamma e^{-\alpha k}k\sum_{i=1}^n e^{-(\delta-\alpha)(t_n-t_i)} a(\tilde{\bU}^i,\tilde{\bU}^n) \le e^{-\alpha k}\|\tilde{\f}^n\|\|\tilde{\bU}^n\|. \end{align} The right-hand side of (\ref{stb002}) can be estimated by $$ \frac{1}{2}e^{-\alpha k}\mu\|\nabla\tilde{\bU}^n\|^2+\frac{1}{2\mu\lambda_1} e^{-\alpha k}\|\tilde{\f}^n\|^2, $$ so as to obtain from (\ref{stb002}) \begin{align}\label{stb003} \partial_t\|\tilde{\bU}^n\|^2 +& \Big(e^{-\alpha k}\mu-2\big(\frac{1-e^{-\alpha k}}{k}\big) \lambda_1^{-1}\Big)\|\nabla\tilde{\bU}^n\|^2 \nonumber \\ +& 2\gamma e^{-\alpha k}k\sum_{i=1}^n e^{-(\delta-\alpha)(t_n-t_i)} a(\tilde{\bU}^i,\tilde{\bU}^n) \le \frac{1}{\mu\lambda_1}e^{-\alpha k}\|\tilde{\f}^n\|^2. \end{align} With $0<\alpha<\min\{\delta,\frac{\mu\lambda_1}{2}\},$ we choose $k_0>0$ such that for $0<k<k_0$ $$ 1+\big(\frac{\mu\lambda_1}{2}\big)k\ge e^{\alpha k}. $$ This guarantees that $e^{-\alpha k}\mu-2\big(\frac{1-e^{-\alpha k}}{k}\big)\lambda_1^{-1}\ge 0.$ Multiply (\ref{stb003}) by $k$ and then sum over $n=1$ to $N.$ The resulting double sum is non-negative and hence we obtain \begin{align} \label{UN-estimate-1} \|\tilde{\bU}^N\|^2+ \Gamma_1 k\sum_{n=1}^N \|\nabla\tilde{\bU}^n\|^2 \le \|{\bf U}^0\|^2+\frac{\|{\bf f}\|_{\infty}^2}{\mu\lambda_1}e^{-\alpha k} k\sum_{n=1}^N e^{2\alpha t_n}. \end{align} Note that, using a geometric series and the mean value theorem, we find that \begin{equation}\label{e-2alpha} k\sum_{n=1}^N e^{2\alpha t_n} \le e^{2\alpha k}\frac{k}{e^{2\alpha k}-1} e^{2\alpha t_N}= e^{2\alpha(k-k^*)} e^{2\alpha t_N}, \end{equation} for some $k^*$ in $(0,k).$ On substituting (\ref{e-2alpha}) in (\ref{UN-estimate-1}) and multiplying throughout by $e^{-2\alpha t_N}$, we complete the rest of the proof. \end{proof} In order to obtain a uniform (in time) estimate for the discrete solution ${\bf U}^n$ in the Dirichlet norm, we introduce the following notation: \begin{equation}\label{ubeta} {\bf U}^n_{\beta}=k\sum_{j=1}^n \beta_{n-j}{\bf U}^j,~ n >0;~~{\bf U}^0_{\beta}=0, \end{equation} and rewrite (\ref{fdbej}), for $\mbox{\boldmath $\phi$}_h\in{\bf J}_h,$ as \begin{equation}\label{fdbej00} (\partial_t{\bf U}^n,\mbox{\boldmath $\phi$}_h)+\mu a({\bf U}^n,\mbox{\boldmath $\phi$}_h)+b({\bf U}^n,{\bf U}^n,\mbox{\boldmath $\phi$}_h) +a({\bf U}^n_{\beta},\mbox{\boldmath $\phi$}_h)= ({\bf f}^n,\mbox{\boldmath $\phi$}_h). \end{equation} Note that \begin{equation}\label{une01} {\bf U}^n_{\beta}= k\gamma{\bf U}^n+e^{-\delta k}{\bf U}^{n-1}_{\beta}, \end{equation} and therefore \begin{align}\label{une02} \partial_t{\bf U}^n_{\beta} &=\frac{1}{k}({\bf U}^n_{\beta}-{\bf U}^{n-1}_{\beta}) =\frac{1}{k}( k\gamma{\bf U}^n+e^{-\delta k}{\bf U}^{n-1}_{\beta}-{\bf U}^{n-1}_{\beta}) \\ &=\gamma{\bf U}^n-\frac{(1-e^{-\delta k})}{k}{\bf U}^{n-1}_{\beta}. \nonumber \end{align} \begin{lemma}\label{unif.est0} Let $0<\alpha<\min\{\delta,\mu\lambda_1/2\},~{\bf U}^0=P_h{\bf u}_0$, and let $k_0>0$ be such that for $0<k<k_0$ $$ 1+\big(\frac{\mu\lambda_1}{2}\big)k\ge e^{\alpha k}.
$$ Then, the discrete solution ${\bf U}^n,~n\ge 1,$ of (\ref{fdbej}) satisfies the following uniform estimates: \begin{equation}\label{unif.est01} \|{\bf U}^n\|^2+\frac{e^{-\delta k}}{\gamma} \|\nabla{\bf U}^n_{\beta}\|^2 \le e^{-2\alpha t_n}\|{\bf U}^0\|^2+ \left(\frac{1-e^{-2\alpha t_n}}{\alpha\mu\lambda_1}\right)\|{\bf f}\|^2_{\infty} =M_{11}^2, \end{equation} and \begin{equation}\label{unif.est02} k\sum_{n=m}^{m+l} \big(\mu\|\nabla{\bf U}^n\|^2+\frac{\delta}{\gamma} \|\nabla{\bf U}^n_{\beta}\|^2 \big) \le M_{11}^2+\frac{l}{\mu\lambda_1} \|{\bf f}\|^2_{\infty}=M_{12}^2(l), \end{equation} where ${\bf U}^n_{\beta}$ is given by (\ref{ubeta}). \end{lemma} \begin{proof} Take $\mbox{\boldmath $\phi$}_h={\bf U}^n$ in (\ref{fdbej00}); from (\ref{une02}), we find that $$ a({\bf U}^n_{\beta},{\bf U}^n)= \frac{e^{-\delta k}}{\gamma} a({\bf U}^n_{\beta},\partial_t{\bf U}^n_{\beta}) +\frac{(1-e^{-\delta k})}{k\gamma}\|\nabla{\bf U}^n_{\beta}\|^2. $$ Using the mean value theorem, we observe that $$ \frac{(1-e^{-\delta k})}{k} =\delta e^{-\delta k^*} \ge \delta e^{-\delta k}, ~~k^*\in (0,k). $$ Therefore, we obtain from (\ref{fdbej00}) \begin{equation}\label{une03} \partial_t\big(\|{\bf U}^n\|^2+\frac{e^{-\delta k}}{\gamma}\|\nabla{\bf U}^n_{\beta}\|^2 \big) +\mu\|\nabla{\bf U}^n\|^2+\frac{2\delta e^{-\delta k}}{\gamma}\|\nabla{\bf U}^n_{\beta}\|^2 \le \frac{1}{\mu\lambda_1}\|{\bf f}^n\|^2. \end{equation} As $0<\alpha<\min \{\delta,\mu\lambda_1/2\}$, we now find that \begin{equation}\label{une04} \partial_t\big(\|{\bf U}^n\|^2+\frac{e^{-\delta k}}{\gamma}\|\nabla{\bf U}^n_{\beta}\|^2 \big) +2\alpha\big(\|{\bf U}^n\|^2+\frac{e^{-\delta k}}{\gamma} \|\nabla{\bf U}^n_{\beta}\|^2\big) \le \frac{1}{\mu\lambda_1}\|{\bf f}^n\|^2. \end{equation} Multiply the inequality (\ref{une04}) by $e^{\alpha_0 t_{n-1}}$ for some $\alpha_0>0$ and note that \begin{eqnarray}\label{discrete01} \partial_t(e^{\alpha_0 t_n}\mbox{\boldmath $\phi$}^n) &=& e^{\alpha_0 t_{n-1}}\Big\{\partial_t\mbox{\boldmath $\phi$}^n +\frac{e^{ \alpha_0 k}-1}{k} \mbox{\boldmath $\phi$}^n\Big\} \nonumber \\ &\le & e^{\alpha_0 t_{n-1}}\Big\{\partial_t\mbox{\boldmath $\phi$}^n +2\alpha\mbox{\boldmath $\phi$}^n\Big\}. \end{eqnarray} With the assumption on the time step $k,$ that is, $0<k<k_0,$ and for the given $\alpha$, we can always choose $\alpha_0$ such that \begin{equation}\label{une04a} 1+2\alpha k \ge e^{\alpha_0k}. \end{equation} Observe that $\alpha _0 < 2\alpha$. Therefore, we obtain from (\ref{une04}) $$ \partial_t\Big(e^{\alpha_0 t_n}\Big(\|{\bf U}^n\|^2+\frac{e^{-\delta k}}{\gamma} \|\nabla {\bf U}^n_{\beta}\|^2 \Big) \Big) \le\frac{e^{\alpha_0 t_{n-1}}}{\mu\lambda_1}\|{\bf f}\|_{\infty}^2. $$ Multiply by $k$, sum over $1$ to $n$, and then multiply the resulting inequality by $e^{-\alpha_0 t_n}.$ Observe that ${\bf U}^0_{\beta}=0$ by definition. This results in the first estimate (\ref{unif.est01}). For the second estimate (\ref{unif.est02}), we multiply (\ref{une03}) by $k,$ sum over $m$ to $m+l$ with $m,l\in \mathbb{N}$, and use (\ref{unif.est01}) to complete the rest of the proof. \end{proof} \begin{lemma}\label{unif.est1} Under the assumptions of Lemma \ref{unif.est0}, the discrete solution ${\bf U}^n,~n\ge 1,$ of (\ref{fdbej}) satisfies the following uniform estimate: \begin{equation}\label{unif.est1a} \|\nabla{\bf U}^n\|^2+\frac{e^{-\delta k}}{\gamma}\|{\tilde\Delta}_h{\bf U}^n_{\beta}\|^2 \le K.
\end{equation} \end{lemma} \begin{proof} Set $\mbox{\boldmath $\phi$}_h=-{\tilde\Delta}_h{\bf U}^n$ in (\ref{fdbej00}); as in Lemma \ref{unif.est0}, we now obtain \begin{align}\label{une11} \partial_t\big(\|\nabla{\bf U}^n\|^2+\frac{e^{-\delta k}}{\gamma}\|{\tilde\Delta}_h{\bf U}^n_{\beta}\|^2 \big) +\mu\|{\tilde\Delta}_h{\bf U}^n\|^2&+\frac{2\delta e^{-\delta k}}{\gamma}\|{\tilde\Delta}_h{\bf U}^n_{\beta}\|^2 \le \|{\bf f}^{n}\|\|{\tilde\Delta}_h{\bf U}^n\| \nonumber \\ &+|b({\bf U}^n,{\bf U}^n,-{\tilde\Delta}_h{\bf U}^n)|. \end{align} Use Lemma \ref{nonlin} to arrive at \begin{align}\label{une12} \partial_t\left(\|\nabla{\bf U}^n\|^2+\frac{e^{-\delta k}}{\gamma}\|{\tilde\Delta}_h{\bf U}^n_{\beta}\|^2 \right)+& \frac{4\mu} {3}\|{\tilde\Delta}_h{\bf U}^n\|^2+\frac{2\delta e^{-\delta k}}{\gamma} \|{\tilde\Delta}_h{\bf U}^n_{\beta}\|^2 \nonumber \\ &\le \frac{3}{\mu} \|{\bf f}\|_{\infty}^2 +\Big(\frac{9}{2\mu}\Big)^3 M_{11}^2\|\nabla{\bf U}^n\|^4. \end{align} For some $\alpha_0>0$, we find that \begin{equation}\label{une13} \alpha_0\|\nabla{\bf U}^n\|^2 \le \frac{\mu}{3}\|{\tilde\Delta}_h{\bf U}^n\|^2 +\frac{3}{4\mu}\alpha_0^2\|{\bf U}^n\|^2. \end{equation} Define \begin{equation}\label{une14} g^n= \min\Big\{\alpha_0+\mu\lambda_1-\Big(\frac{9}{2\mu}\Big)^3 M_{11}^2 \|\nabla{\bf U}^n\|^2, ~2\delta\Big\}. \end{equation} With $E^n :=\|\nabla{\bf U}^n\|^2+\frac{e^{-\delta k}}{\gamma} \|{\tilde\Delta}_h{\bf U}^n_{\beta}\|^2$, we rewrite (\ref{une12}) as \begin{equation}\label{une15} \partial_t E^n+g^nE^n \le \frac{3}{\mu}\|{\bf f}\|_{\infty}^2+\frac{3}{4\mu}\alpha_0^2\|{\bf U}^n\|^2 =K_{11}. \end{equation} Let $\{n_{i}\}_{i\in\mathbb{N}}$ and $\{\bar{n}_{i}\}_{i\in\mathbb{N}}$ be two subsequences of natural numbers such that $$ g^{n_{i}}=\alpha_0+\mu\lambda_1-\Big(\frac{9}{2\mu}\Big)^3 M_{11}^2\|\nabla{\bf U}^{n_i}\|^2, ~~g^{\bar{n}_{i}}=2\delta,~\forall i. $$ If for some $n$, $$ g^n=\alpha_0+\mu\lambda_1-\Big(\frac{9}{2\mu}\Big)^3 M_{11}^2\|\nabla{\bf U}^n\|^2=2\delta, $$ then without loss of generality we assume that $n\in \{\bar{n}_{i}\},$ so as to make the two subsequences $\{n_{i}\}$ and $\{\bar{n}_{i}\}$ disjoint. Now for $m,l\in\mathbb{N}$, we write \begin{align}\label{une16} k\sum_{n=m}^{m+l}g^n &= k\sum_{n=m_1}^{m_{l_1}} g^n+k\sum_{n=\bar{m}_1}^{\bar{m}_{l_2}} g^n \nonumber \\ &=k\sum_{n=m_1}^{m_{l_1}} \left(\alpha_0+\mu \lambda_1-\Big(\frac{9}{2\mu}\Big)^3 M_{11}^2 \|\nabla{\bf U}^n\|^2\right)+k\sum_{n=\bar{m}_1}^{\bar{m}_{l_2}} 2\delta. \end{align} Here, $m_1,m_2,\cdots,m_{l_1} \in\{n_{i}\}\cap \{m,m+1,\cdots, m+l\}$ and $\bar{m}_1,\bar{m}_2,\cdots,\bar{m}_{l_2} \in\{\bar{n}_{i}\}\cap \{m,m+1, \cdots,m+l\}$ such that $l_1+l_2=l+1.$ Note that $l_1$ or $l_2$ could be $0$. Using Lemma \ref{unif.est0}, we observe that \begin{align*} \Big(\frac{9}{2\mu}\Big)^3k\sum_{n=m}^{m+l} M_{11}^2 \|\nabla{\bf U}^n\|^2 \le \frac{9^3M_{11}^2}{2^3\mu^3}k\sum_{n=m}^{m+l} \|\nabla{\bf U}^n\|^2 \le \frac{9^3M_{11}^2}{2^3\mu^4}M_{12}^2(l)=K_{12}(l). \end{align*} Therefore, from (\ref{une16}), we find that \begin{align*} k\sum_{n=m}^{m+l}g^n \ge (kl_1)(\alpha_0+\mu \lambda_1)-K_{12}(l_1)+2\delta (kl_2). \end{align*} We choose $\alpha_0$ such that $(kl_1)(\alpha_0+\mu \lambda_1)-K_{12}(l_1)=2\delta (kl_1)$ to arrive at \begin{equation}\label{une17} k\sum_{n=m}^{m+l}g^n \ge 2\delta t_{l+1}. \end{equation} By the definition of $g^n$, we have equality in (\ref{une17}) and, in fact, $g^n=2\delta$. Now from (\ref{une15}), we obtain $$ \partial_t E^n+2\delta E^n \le K_{11}.
$$ As in (\ref{discrete01}), we can choose $0<\alpha_{01} < \alpha \le \delta$ such that $$ \partial_t(e^{\alpha_{01}t_n}E^n) \le e^{\alpha_{01}t_{n-1}}(\partial_t E^n+2\delta E^n) \le K_{11}e^{\alpha_{01}t_{n-1}}. $$ Multiply by $k$ and sum over $1$ to $n$. Observe that $E^0=\|\nabla{\bf U}^0\|^2$. Finally, multiply the resulting inequality by $e^{-\alpha_{01}t_n}$ to find that $$ E^n \le e^{-\alpha_{01}t_n}\|\nabla{\bf U}^0\|^2+K. $$ This completes the rest of the proof. \end{proof} \begin{remark} As a consequence of Lemma 4.3, the following {\it a priori} bound is valid: \begin{equation} \label{h2-bound} \tau^*(t_n) \|{\tilde\Delta}_h{\bf U}^n\|^2 \leq K. \end{equation} \end{remark} \section{A Priori Error Estimate} \setcounter{equation}{0} In this section, we discuss the error estimate of the backward Euler method for the Oldroyd model (\ref{om})-(\ref{ibc}). For the error analysis, we set, for fixed $n\in\mathbb{N},~1< n\le N,~{\bf e}_n={\bf U}^n -{\bf u}_h(t_n)={\bf U}^n-{\bf u}_h^n.$ We now rewrite (\ref{dwfj}) at $t=t_n$ and subtract the resulting equation from (\ref{fdbej}) to obtain \begin{align}\label{eebe} (\partial_t{\bf e}_n,\mbox{\boldmath $\phi$}_h)+ \mu a({\bf e}_n,\mbox{\boldmath $\phi$}_h)+a(q^n_r({\bf e}),\mbox{\boldmath $\phi$}_h) = E^n ({\bf u}_h)(\mbox{\boldmath $\phi$}_h) +\varepsilon_a^n({\bf u}_h)(\mbox{\boldmath $\phi$}_h)+\Lambda^n_h(\mbox{\boldmath $\phi$}_h), \end{align} where \begin{eqnarray} E^n({\bf u}_h)(\mbox{\boldmath $\phi$}_h) &=& ({\bf u}_{ht}^n,\mbox{\boldmath $\phi$}_h)-(\partial_t{\bf u}_h^n,\mbox{\boldmath $\phi$}_h) = ({\bf u}_{ht}^n,\mbox{\boldmath $\phi$}_h) -\frac{1}{k}\int_{t_{n-1}}^{t_n}({\bf u}_{hs}, \mbox{\boldmath $\phi$}_h)~ds \nonumber \\ &=& \frac{1}{2k}\int_{t_{n-1}}^{t_n} (t-t_{n-1})({\bf u}_{htt},\mbox{\boldmath $\phi$}_h)dt, \label{R1be} \\ \varepsilon_a^n({\bf u}_h)(\mbox{\boldmath $\phi$}_h) &=& a(\bu_{h,{\beta}}(t_n), \mbox{\boldmath $\phi$}_h) -a(q_r^n({\bf u}_h), \mbox{\boldmath $\phi$}_h)=a(\varepsilon_r^n({\bf u}_h),\mbox{\boldmath $\phi$}_h), \label{ver} \end{eqnarray} and \begin{align}\label{dLbe} \Lambda^n_h(\mbox{\boldmath $\phi$}_h) &= b({\bf u}_h^n,{\bf u}_h^n,\mbox{\boldmath $\phi$}_h)-b({\bf U}^n,{\bf U}^n,\mbox{\boldmath $\phi$}_h) \nonumber \\ & = -b({\bf u}_h^n,{\bf e}_n,\mbox{\boldmath $\phi$}_h)-b({\bf e}_n,{\bf u}_h^n,\mbox{\boldmath $\phi$}_h)-b({\bf e}_n,{\bf e}_n,\mbox{\boldmath $\phi$}_h). \end{align} In order to dissociate the effect of the nonlinearity, we first linearize the discrete problem (\ref{fdbej}) and introduce $\{{\bf V}^n\}_{n\ge 1}\in{\bf J}_h$ as solutions of the following linearized problem: \begin{equation}\label{fdbejv} (\partial_t{\bf V}^n,\mbox{\boldmath $\phi$}_h)+\mu a({\bf V}^n,\mbox{\boldmath $\phi$}_h)+a(q_r^n({\bf V}),\mbox{\boldmath $\phi$}_h)= ({\bf f}^n,\mbox{\boldmath $\phi$}_h) -b({\bf u}_h^n,{\bf u}_h^n,\mbox{\boldmath $\phi$}_h)~~~\forall\mbox{\boldmath $\phi$}_h\in{\bf J}_h, \end{equation} given $\{{\bf U}^n\}_{n\ge 1}\in{\bf J}_h$ as the solution of (\ref{fdbej}). It is easy to check the existence and uniqueness of $\{{\bf V}^n\}_{n\ge 1}\in{\bf J}_h.$ We now split the error as \begin{align}\label{errsplit} {\bf e}_n:={\bf U}^n-{\bf u}_h^n = ({\bf U}^n-{\bf V}^n)-({\bf u}_h^n-{\bf V}^n) =: \mbox{\boldmath $\eta$}_n-\mbox{\boldmath $\xi$}_n.
\end{align} The following equations are satisfied by $\mbox{\boldmath $\xi$}_n$ and $\mbox{\boldmath $\eta$}_n,$ respectively: \begin{align}\label{eebelin} (\partial_t\mbox{\boldmath $\xi$}_n,\mbox{\boldmath $\phi$}_h)+& \mu a(\mbox{\boldmath $\xi$}_n,\mbox{\boldmath $\phi$}_h)+a(q^n_r(\mbox{\boldmath $\xi$}),\mbox{\boldmath $\phi$}_h) = -E^n({\bf u}_h)(\mbox{\boldmath $\phi$}_h)-\varepsilon_a^n({\bf u}_h)(\mbox{\boldmath $\phi$}_h) \end{align} and \begin{align}\label{eebenl} (\partial_t\mbox{\boldmath $\eta$}_n,\mbox{\boldmath $\phi$}_h)+& \mu a(\mbox{\boldmath $\eta$}_n,\mbox{\boldmath $\phi$}_h)+a(q^n_r(\mbox{\boldmath $\eta$}),\mbox{\boldmath $\phi$}_h) =\Lambda^n_h(\mbox{\boldmath $\phi$}_h). \end{align} Below, we prove the following lemma for our subsequent use. \begin{lemma}\label{e-ve} Let $r,s\in \{0,1\}$, $\tau_i=\min\{1,t_i\}$, and let $\alpha$ be as defined in Lemma 4.1. Then, with $E^n$ and $\varepsilon_a^n$ defined, respectively, as (\ref{R1be}) and (\ref{ver}), the following estimate holds for $n=1,\cdots,N$ and for $\{\mbox{\boldmath $\phi$}_h^i\}_i$ in ${\bf J}_h$: \begin{eqnarray}\label{eve1} && 2k\sum_{i=1}^n \tau_i^s e^{2\alpha (t_i-t_n)}\Big(E^i({\bf u}_h)(\mbox{\boldmath $\phi$}_h^i)+\varepsilon_a^i({\bf u}_h) (\mbox{\boldmath $\phi$}_h^i)\Big) \\ &&\le Kk^{(1+s-r)/2}(1+\log\frac{1}{k})^{(1-r)/2}\left({k\sum_{i=1}^n \tau_i^s e^{2\alpha (t_i-t_n)}\|\mbox{\boldmath $\phi$}_h^i\|_{1-r}^2}\right)^{1/2}. \nonumber \end{eqnarray} \end{lemma} \begin{proof} From (\ref{R1be}), we observe that \begin{align*} 2k &\sum_{i=1}^n \tau_i^s e^{2\alpha (t_i-t_n)} E^i({\bf u}_h)(\mbox{\boldmath $\phi$}_h^i) \\ \le & \left[k^{-1}\sum_{i=1}^n \Big(\int_{t_{i-1}}^{t_i} \tau_i^{s/2} e^{\alpha (t_i-t_n)}(t-t_{i-1})\|{\bf u}_{htt}\|_{r-1}\,dt \Big)^2\right]^{1/2} \left[k\sum_{i=1}^n \tau_i^s e^{2\alpha (t_i-t_n)}\|\mbox{\boldmath $\phi$}_h^i\|_{1-r}^2 \right]^{1/2}. \end{align*} Using (\ref{dth13}), we find \begin{align}\label{eve01a} & \left[{k^{-1}\sum_{i=1}^n \Big(\int_{t_{i-1}}^{t_i} \tau_i^{s/2} e^{\alpha (t_i-t_n)}(t-t_{i-1})\|{\bf u}_{htt}\|_{r-1}dt \Big)^2}\right]^{1/2} \nonumber \\ \le & \left[{k^{-1}\sum_{i=1}^n \int_{t_{i-1}}^{t_i} \tau_i^s \tau^{-(r+1)}(t-t_{i-1})^2 e^{2\alpha (t_i-t)}\,dt}\right]^{1/2}\left[{e^{-2\alpha t_n}\int_0^{t_n} \tau^{(r+1)} e^{2\alpha t}\|{\bf u}_{htt}\|^2_{r-1}\,dt}\right]^{1/2} \\ \le & Ke^{\alpha k}\left[{k^{-1}\sum_{i=1}^n \int_{t_{i-1}}^{t_i} \tau_i^s \tau(t)^{-(r+1)}(t-t_{i-1})^2}\,dt\right]^{1/2}. \nonumber \end{align} It is now easy to calculate the remaining part for various values of $r,s.$ For the sake of completeness, we present below the case when $r=s=0.$ \begin{align*} \sum_{i=1}^n \int_{t_{i-1}}^{t_i} t^{-1}(t-t_{i-1})^2 dt & \le \int_0^k t\,dt+k^2\sum_{i=2}^n \int_{t_{i-1}}^{t_i} t^{-1}\,dt \\ & \le Kk^2(1+\log\frac{1}{k}). \end{align*} This completes the proof of the first half. For the remaining part, we observe from (\ref{ver}) and (\ref{errrr}) that \begin{align}\label{eve02} & 2k\sum_{i=1}^n \tau_i^s e^{2\alpha (t_i-t_n)} \varepsilon_a^i({\bf u}_h)(\mbox{\boldmath $\phi$}_h^i) \le \left[{k\sum_{i=1}^n \tau_i^s e^{2\alpha (t_i-t_n)}\|\mbox{\boldmath $\phi$}_h^i\|_{1-r}^2}\right]^{1/2}~\times \\ & \left[{4k\sum_{i=1}^n \Big(\sum_{j=1}^i \int_{t_{j-1}}^{t_j} \tau_i^{s/2}e^{\alpha (t_i-t_n)}(t-t_{j-1}) \beta(t_i-t)\{\delta\|{\bf u}_h\|_{r+1}+\|{\bf u}_{ht}\|_{r+1} \}\,dt\Big)^2}\right]^{1/2}.
\nonumber \end{align} In Lemma \ref{dth2}, we find that the estimates of $\|{\bf u}_{htt}\|_{r-1}$ and $\|{\bf u}_{ht}\|_{r+1}$ are similar; in fact, the powers of $t_i$ are the same. Therefore, the right-hand side of (\ref{eve02}) involving $\|{\bf u}_{ht}\|_{r+1}$ can be estimated similarly as in (\ref{eve01a}). The terms involving $\|{\bf u}_h\|_{r+1}$ are clearly easy to estimate. But for the sake of completeness, we provide the case when $r=s=0.$ \begin{align*} & 4\delta^2 k\sum_{i=1}^n \Big(\sum_{j=1}^i \int_{t_{j-1}}^{t_j} e^{\alpha (t_i-t_n)} (t-t_{j-1})\beta(t_i-t) \|\nabla{\bf u}_h\|~dt\Big)^2 \nonumber \\ \le & ~4\gamma^2\delta^2 e^{-2\alpha t_n} k^3\sum_{i=1}^n e^{-2(\delta-\alpha)t_i} \Big(\sum_{j=1}^i \int_{t_{j-1}}^{t_j}e^{(\delta-\alpha)t}\|\nabla\tilde{\bf u}_h\|~dt\Big)^2 \nonumber \\ \le & ~4\gamma^2\delta^2 e^{-2\alpha t_n} k^3\sum_{i=1}^n e^{-2(\delta-\alpha)t_i} \Big(\int_0^{t_i} e^{2(\delta-\alpha)s}ds\Big) \Big(\int_0^{t_i} \|\nabla\tilde{\bf u}_h(s)\|^2 ds\Big) \nonumber \\ \le & \frac{2\gamma^2\delta^2}{2 (\delta-\alpha)} e^{-2\alpha t_n}k^3 \sum_{i=1}^n e^{2(\delta-\alpha)k}\big(Ke^{2\alpha t_i}\big) \le Kk^3 e^{2\delta k}. \end{align*} This completes the rest of the proof. \end{proof} \begin{lemma}\label{pree} Assume (${\bf A1}$)-(${\bf A2}$) and a spatial discretization scheme that satisfies conditions (${\bf B1}$)-(${\bf B2}$) and (${\bf B4}$). Let $0 < \alpha < \min \big\{\delta, \mu\lambda_1\big\},$ and $$ 1+(\mu\lambda_1)k > e^{2\alpha k}, $$ which holds for $0<k<k_0,~k_0>0.$ Further, assume that ${\bf u}_h(t)$ and ${\bf V}^n$ satisfy (\ref{dwfj}) and (\ref{fdbejv}), respectively. Then, there is a positive constant $K$ such that \begin{eqnarray} \label{pree1} \|\mbox{\boldmath $\xi$}_n\|^2&+& e^{-2\alpha t_n} k\sum_{i=1}^n e^{2\alpha t_i} \|\mbox{\boldmath $\xi$}_i\|_1^2 \le Kk\big(1+\log\frac{1}{k}\big), \\ \|\mbox{\boldmath $\xi$}_n\|_1^2&+& k\sum_{i=1}^n \{\|\mbox{\boldmath $\xi$}_i\|_2^2+\|\partial_t\mbox{\boldmath $\xi$}_i\|^2\} \le K. \label{pree2} \end{eqnarray} \end{lemma} \begin{proof} For $n=i,$ we put $\mbox{\boldmath $\phi$}_h=\mbox{\boldmath $\xi$}_i$ in (\ref{eebelin}) and with the observation $$ (\partial_t\mbox{\boldmath $\xi$}_i,\mbox{\boldmath $\xi$}_i)=\frac{1}{2k}(\mbox{\boldmath $\xi$}_i-\mbox{\boldmath $\xi$}_{i-1},\mbox{\boldmath $\xi$}_i) \ge \frac{1}{2k}(\|\mbox{\boldmath $\xi$}_i\|^2 -\|\mbox{\boldmath $\xi$}_{i-1}\|^2)=\frac{1}{2}\partial_t\|\mbox{\boldmath $\xi$}_i\|^2, $$ we find that \begin{equation}\label{pree01} \partial_t\|\mbox{\boldmath $\xi$}_i\|^2+2\mu\|\nabla\mbox{\boldmath $\xi$}_i\|^2+a(q^i_r(\mbox{\boldmath $\xi$}),\mbox{\boldmath $\xi$}_i) \le -2E^i({\bf u}_h)(\mbox{\boldmath $\xi$}_i) -2\varepsilon_a^i({\bf u}_h)(\mbox{\boldmath $\xi$}_i). \end{equation} Multiply (\ref{pree01}) by $ke^{2\alpha t_i}$ and sum over $1\le i\le n\le N$ to obtain \begin{align}\label{pree02} \|\tilde{\bxi}_n\|^2 -\sum_{i=1}^{n-1} (e^{2\alpha k}-1)\|\tilde{\bxi}_i\|^2+2\mu k\sum_{i=1}^n \|\nabla\tilde{\bxi}_i\|^2 & \le -2k\sum_{i=1}^n e^{2\alpha t_i}\Big\{E^i({\bf u}_h)(\mbox{\boldmath $\xi$}_i) +\varepsilon_a^i({\bf u}_h)(\mbox{\boldmath $\xi$}_i)\Big\} \nonumber \\ & \le \mu k\sum_{i=1}^n \|\nabla\tilde{\bxi}_i\|^2+Kk \big(1+\log\frac{1}{k}\big) e^{2\alpha t_{n+1}}. \end{align} Recall that $\tilde{v}(t)=e^{\alpha t}v(t)$. Note that we have dropped the quadrature term on the left-hand side of (\ref{pree01}) after summation as it is non-negative. Finally, we have used Lemma \ref{e-ve} for $s=r=0$.
We note that for $0<k<k_0$ $$ \mu-\frac{e^{2\alpha k}-1}{k\lambda_1} >0, $$ and hence, \begin{align}\label{pree06} \|\tilde{\bxi}_n\|^2 &+ (\mu-\frac{e^{2\alpha k}-1}{k\lambda_1}) k\sum_{i=1}^n \|\nabla\tilde{\bxi}_i\|^2\le Kk \big(1+\log\frac{1}{k}\big)e^{2\alpha t_{n+1}}. \end{align} Multiply (\ref{pree06}) by $e^{-2\alpha t_n}$ to establish (\ref{pree1}). Next, for $n=i,$ we put $\mbox{\boldmath $\phi$}_h=-{\tilde\Delta}_h\mbox{\boldmath $\xi$}_i$ in (\ref{eebelin}) and follow as above to obtain the first part of (\ref{pree2}), that is, $$ \|\mbox{\boldmath $\xi$}_n\|_1^2+k\sum_{i=1}^n \|\mbox{\boldmath $\xi$}_i\|_2^2 \le K. $$ Here, we have used (\ref{eve1}) for $s=0, r=1$ with $\alpha=0$, replacing $\mbox{\boldmath $\phi$}_h^{i}$ by ${\tilde\Delta}_h\mbox{\boldmath $\xi$}_i.$ Finally, for $n=i,$ we put $\mbox{\boldmath $\phi$}_h=\partial_t\mbox{\boldmath $\xi$}_i$ in (\ref{eebelin}) to find that \begin{align}\label{pree07} 2\|\partial_t\mbox{\boldmath $\xi$}_i\|^2+\mu\partial_t\|\mbox{\boldmath $\xi$}_i\|_1^2 \le -2a(q_r^i(\mbox{\boldmath $\xi$}),\partial_t\mbox{\boldmath $\xi$}_i) -2E^i({\bf u}_h)(\partial_t\mbox{\boldmath $\xi$}_i) -2\varepsilon_a^i({\bf u}_h)(\partial_t\mbox{\boldmath $\xi$}_i). \end{align} Multiply (\ref{pree07}) by $ke^{2\alpha t_i}$ and sum over $1\le i\le n\le N$. As has been done earlier, we can estimate the last two resulting terms on the right-hand side of (\ref{pree07}) using (\ref{eve1}) for $r=s=0$ as $$ \frac{k}{2} \sum_{i=1}^n e^{2\alpha t_i}\|\partial_t\mbox{\boldmath $\xi$}_i\|^2+K. $$ The only difference is that the resulting double sum (the term involving $q_r^i$) is no longer non-negative and hence, we need to estimate it. Note that \begin{align}\label{pree08} 2k\sum_{i=1}^n e^{2\alpha t_i} a(q_r^i(\mbox{\boldmath $\xi$}),\partial_t\mbox{\boldmath $\xi$}_i)=2\gamma k^2\sum_{i=1}^n \sum_{j=1}^i e^{-(\delta-\alpha)(t_i-t_j)} a(\tilde{\bxi}_j,e^{\alpha t_i}\partial_t\mbox{\boldmath $\xi$}_i) \\ \le \frac{k}{2} \sum_{i=1}^n e^{2\alpha t_i}\|\partial_t\mbox{\boldmath $\xi$}_i\|^2 +K(\gamma) k\sum_{i=1}^n \Big(k\sum_{j=1}^i e^{-(\delta-\alpha)(t_i-t_j)} \|{\tilde\Delta}_h\tilde{\bxi}_j\|\Big)^2. \nonumber \end{align} Using a change of variable and a change of the order of summation in the double sum, we obtain \begin{align*} I & := K(\gamma) k\sum_{i=1}^n \Big(k\sum_{j=1}^i e^{-(\delta-\alpha)(t_i-t_j)} \|{\tilde\Delta}_h\tilde{\bxi}_j\|\Big)^2 \\ & \le K(\gamma) k\sum_{i=1}^n \Big(k\sum_{j=1}^i e^{-(\delta-\alpha)(t_i-t_j)} \Big) \Big(k\sum_{j=1}^i e^{-(\delta-\alpha)(t_i-t_j)}\|{\tilde\Delta}_h\tilde{\bxi}_j\|^2\Big) \\ & \le K(\alpha,\gamma) e^{(\delta-\alpha)k} k^2 \sum_{i=1}^n k\sum_{j=1}^i e^{-(\delta-\alpha)(t_i-t_j)}\|{\tilde\Delta}_h\tilde{\bxi}_j\|^2. \end{align*} Introduce $l=i-j$ to find that \begin{align*} I& \le K(\alpha,\gamma) e^{(\delta-\alpha)k}k^2\sum_{i=1}^n k\sum_{l=i-1}^0 e^{-(\delta-\alpha) t_l} \|{\tilde\Delta}_h\tilde{\bxi}_{i-l}\|^2\\ &= K(\alpha,\gamma) e^{(\delta-\alpha)k}k^2\sum_{i=1}^n k\sum_{l=1}^{i} e^{-(\delta-\alpha)t_{l-1}}\|{\tilde\Delta}_h\tilde{\bxi}_{i-l+1}\|^2.
\end{align*} With a change of the order of summation, we now arrive at \begin{align}\label{pree09} I&\le K(\alpha,\gamma) e^{(\delta-\alpha)k}k^2\sum_{l=1}^n k\sum_{i=l}^{n} e^{-(\delta-\alpha)t_{l-1}}\|{\tilde\Delta}_h\tilde{\bxi}_{i-l+1}\|^2 \nonumber \\ &= K(\alpha,\gamma) e^{(\delta-\alpha)k}k^2 \sum_{l=1}^n k\sum_{j=1}^{n-l+1} e^{-(\delta-\alpha)t_{l-1}}\|{\tilde\Delta}_h\tilde{\bxi}_j\|^2~~~\mbox{for} ~j=i-l+1 \nonumber \\ & \le K(\alpha,\gamma) e^{(\delta-\alpha)k}k \Big(k\sum_{l=1}^{n-1} e^{-(\delta-\alpha)t_l}\Big)\Big(k\sum_{j=1}^n \|{\tilde\Delta}_h\tilde{\bxi}_j\|^2\Big) \le K. \end{align} Combining (\ref{pree08})-(\ref{pree09}), we find that \begin{align*} 2k\sum_{i=1}^n e^{2\alpha t_i} a(q_r^i(\mbox{\boldmath $\xi$}),\partial_t\mbox{\boldmath $\xi$}_i)\le \frac{k}{2} \sum_{i=1}^n e^{2\alpha t_i}\|\partial_t\mbox{\boldmath $\xi$}_i\|^2+K. \end{align*} Therefore, we obtain \begin{align} k\sum_{i=1}^n e^{2\alpha t_i}\|\partial_t\mbox{\boldmath $\xi$}_i\|^2+\mu\|\tilde{\bxi}_n\|_1^2 \le K+\mu k \sum_{i=1}^{n-1} \frac{ (e^{2\alpha k}-1)}{k}\|\tilde{\bxi}_i\|_1^2. \end{align} Use (\ref{pree1}) and the fact that $(e^{2\alpha k}-1)/k \le K(\alpha)$ to complete the rest of the proof. \end{proof} \begin{remark} We note that the restriction on $k$, that is, $0<k<k_0$, is not the same in Lemmas \ref{stb} and \ref{pree}. Therefore, we take the minimum of the $k_0$'s from Lemmas \ref{stb} and \ref{pree} and denote it by $k_{00}$; then all the results hold for every $k$ satisfying $0<k<k_{00}$. \end{remark} Analogous to the semi-discrete case, we resort to a duality argument to obtain the optimal $L^2({\bf L}^2)$ estimate. Consider the following backward problem: for a given ${\bf W}_n$ and ${\bf g}_i,$ let ${\bf W}_i,~n\ge i\ge 1$ satisfy \begin{equation}\label{bwprob} (\mbox{\boldmath $\phi$}_h,\partial_t{\bf W}_i)-\mu a(\mbox{\boldmath $\phi$}_h, {\bf W}_i)-k\sum_{j=i}^n \beta(t_j-t_i) a(\mbox{\boldmath $\phi$}_h, {\bf W}_j) =(\mbox{\boldmath $\phi$}_h, e^{2\alpha t_i} {\bf g}_i)~~~\forall\mbox{\boldmath $\phi$}_h\in {\bf J}_h. \end{equation} The following {\it a priori} estimates are easy to derive. \begin{lemma}\label{bwest} Let the assumptions (${\bf A2}$), (${\bf B1}$), (${\bf B2}$) and (${\bf B4}$) hold. Then, for $0<k<k_0$, the following estimates hold under appropriate assumptions on ${\bf W}_n$ and ${\bf g}$: \begin{equation*} \|{\bf W}_0\|_r^2+ k\sum_{i=1}^n e^{-2\alpha t_i}\{\|{\bf W}_i\|_{r+1}^2+ \|\partial_t{\bf W}_i\|_{r-1}^2\} \le K\big\{\|{\bf W}_n\|_r^2+k\sum_{i=1}^n e^{2\alpha t_i} \|{\bf g}_i\|_{r-1}^2\big\}, \end{equation*} where $r\in \{0,1\}$. \end{lemma} \begin{lemma}\label{neg} Under the assumptions of Lemma \ref{bwest}, the following estimate holds: \begin{equation} e^{-2\alpha t_n} k \sum_{i=1}^n e^{2\alpha t_i} \|\mbox{\boldmath $\xi$}_i\|^2 \le Kk^2.
\label{neg1} \end{equation} \end{lemma} \begin{proof} With $$ {\bf W}_n=(-{\tilde\Delta}_h)^{-1}\mbox{\boldmath $\xi$}_n,~~{\bf g}_i=\mbox{\boldmath $\xi$}_i~\forall i, $$ we choose $\mbox{\boldmath $\phi$}_h=\mbox{\boldmath $\xi$}_i$ in (\ref{bwprob}) and use (\ref{eebelin}) to obtain \begin{align}\label{neg01} e^{2\alpha t_i}\|\mbox{\boldmath $\xi$}_i\|^2 &=(\mbox{\boldmath $\xi$}_i,\partial_t{\bf W}_i)-\mu a(\mbox{\boldmath $\xi$}_i, {\bf W}_i)-k\sum_{j=i}^n \beta(t_j-t_i) a(\mbox{\boldmath $\xi$}_i, {\bf W}_j) \nonumber \\ &= \partial_t (\mbox{\boldmath $\xi$}_i,{\bf W}_i)-(\partial_t\mbox{\boldmath $\xi$}_i,{\bf W}_{i-1})-\mu a(\mbox{\boldmath $\xi$}_i, {\bf W}_i)-k\sum_{j=i}^n \beta(t_j-t_i) a(\mbox{\boldmath $\xi$}_i, {\bf W}_j) \nonumber \\ &= \partial_t (\mbox{\boldmath $\xi$}_i,{\bf W}_i)+k(\partial_t\mbox{\boldmath $\xi$}_i,\partial_t{\bf W}_i)+k\sum_{j=1}^i \beta(t_i-t_j) a(\mbox{\boldmath $\xi$}_j, {\bf W}_i)+E^i({\bf u}_h)({\bf W}_i) \nonumber \\ &+\varepsilon_a^i({\bf u}_h)({\bf W}_i)-k\sum_{j=i}^n \beta(t_j-t_i) a(\mbox{\boldmath $\xi$}_i, {\bf W}_j). \end{align} Multiply (\ref{neg01}) by $k$ and sum over $1\le i\le n$. Observe that the resulting two double sums cancel out (change of the order of the double sum). Therefore, we find that \begin{align}\label{neg02} k\sum_{i=1}^n e^{2\alpha t_i}\|\mbox{\boldmath $\xi$}_i\|^2+\|\mbox{\boldmath $\xi$}_n\|_{-1}^2 =k\sum_{i=1}^n \big[ k(\partial_t\mbox{\boldmath $\xi$}_i,\partial_t{\bf W}_i)+E^i({\bf u}_h)({\bf W}_i)+\varepsilon_a^i({\bf u}_h) ({\bf W}_i)\big]. \end{align} From (\ref{R1be}), we observe that \begin{align}\label{neg03} k\sum_{i=1}^n E^i({\bf u}_h)({\bf W}_i) \le k\sum_{i=1}^n \frac{1}{2k}\int_{t_{i-1}}^{t_i} (s-t_{i-1})\|{\bf u}_{hss}\|_{-2}\|{\bf W}_i\|_2~ds \nonumber \\ \le \frac{k}{4} e^{\alpha k}\Big(\int_0^{t_n} e^{2\alpha s}\|{\bf u}_{hss}\|_{-2}^2 ds\Big)^{1/2}\Big(k\sum_{i=1}^n e^{-2\alpha t_i}\|{\bf W}_i\|_2^2\Big)^{1/2}. \end{align} Similar to (\ref{pree06}), we obtain \begin{align}\label{neg04} k\sum_{i=1}^n \varepsilon_a^i({\bf u}_h)({\bf W}_i)\le K\Big(k^3\sum_{i=1}^n \int_0^{t_i} e^{2\alpha s} (\|{\bf u}_h\|^2+\|{\bf u}_{hs}\|^2)~ds\Big)^{1/2}\Big(k\sum_{i=1}^n e^{-2\alpha t_i} \|{\bf W}_i\|_2^2\Big)^{1/2}, \end{align} and \begin{align}\label{neg05} k\sum_{i=1}^n k(\partial_t\mbox{\boldmath $\xi$}_i,\partial_t{\bf W}_i)\le k\Big(k\sum_{i=1}^n e^{2\alpha t_i} \|\partial_t \mbox{\boldmath $\xi$}_i\|^2\Big)^{1/2}\Big(k\sum_{i=1}^n e^{-2\alpha t_i}\|\partial_t{\bf W}_i\|^2\Big)^{1/2}. \end{align} Incorporating (\ref{neg03})-(\ref{neg05}) in (\ref{neg02}), and using Lemmas \ref{dth2} and \ref{bwest}, we find that \begin{align}\label{neg07} k\sum_{i=1}^n e^{2\alpha t_i}\|\mbox{\boldmath $\xi$}_i\|^2+\|\mbox{\boldmath $\xi$}_n\|_{-1}^2 \le Kk^2 e^{2\alpha t_n}. \end{align} Multiplying (\ref{neg07}) by $e^{-2\alpha t_n}$ completes the proof. \end{proof} Due to the non-smooth initial data, we need some intermediate results involving the ``hat operator'', which is defined as \begin{equation}\label{sum0} {\hat{\mbox{\boldmath $\phi$}}}_h^n := k\sum_{i=1}^n \mbox{\boldmath $\phi$}_h^i. \end{equation} This can be considered as a discrete integral operator.
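For later use, we note that this operator inverts the backward difference quotient: directly from the definition (\ref{sum0}), $$ \partial_t{\hat{\mbox{\boldmath $\phi$}}}_h^n=\frac{1}{k}\big({\hat{\mbox{\boldmath $\phi$}}}_h^n-{\hat{\mbox{\boldmath $\phi$}}}_h^{n-1}\big) =\frac{1}{k}\Big(k\sum_{i=1}^n \mbox{\boldmath $\phi$}_h^i-k\sum_{i=1}^{n-1} \mbox{\boldmath $\phi$}_h^i\Big)=\mbox{\boldmath $\phi$}_h^n, $$ a discrete analogue of the fundamental theorem of calculus; in particular, $\partial_t\hat{\bxi}_n=\mbox{\boldmath $\xi$}_n$, a fact used repeatedly below.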
We first observe, using (\ref{sumbp}), that \begin{align*} & k\sum_{j=1}^i \beta(t_i-t_j) \mbox{\boldmath $\phi$}_j = \gamma e^{-\delta t_i}k\sum_{j=1}^i e^{\delta t_j} \mbox{\boldmath $\phi$}_j \\ = & \gamma e^{-\delta t_i} \Big\{e^{\delta t_i}\hat{\mbox{\boldmath $\phi$}}_i- k\sum_{j=1}^{i-1} (\frac{e^{\delta t_{j+1}}-e^{\delta t_j}}{k}) \hat{\mbox{\boldmath $\phi$}}_j \Big\}= \partial_t^i \Big\{k \sum_{j=1}^i \beta(t_i-t_j) \hat{\mbox{\boldmath $\phi$}}_j \Big\}. \end{align*} Here $\partial_t^i$ denotes the difference quotient with respect to the index $i$. Now rewrite equation (\ref{eebelin}) (for $n=i$) as follows: \begin{align}\label{eebelin1} (\partial_t\mbox{\boldmath $\xi$}_i,\mbox{\boldmath $\phi$}_h)+& \mu a(\mbox{\boldmath $\xi$}_i,\mbox{\boldmath $\phi$}_h)+\partial_t^i \Big\{k\sum_{j=1}^i \beta(t_i-t_j) a (\hat{\bxi}_j,\mbox{\boldmath $\phi$}_h) \Big\} = -E^i({\bf u}_h)(\mbox{\boldmath $\phi$}_h)-\varepsilon_a^i({\bf u}_h)(\mbox{\boldmath $\phi$}_h). \end{align} We multiply (\ref{eebelin1}) by $k$ and sum over $1$ to $n$. Using the fact that $\partial_t\hat{\bxi}_n=\mbox{\boldmath $\xi$}_n,$ we observe that \begin{align}\label{eebelin1i} (\partial_t\hat{\bxi}_n,\mbox{\boldmath $\phi$}_h)+ \mu a(\hat{\bxi}_n,\mbox{\boldmath $\phi$}_h)+ a(q^n_r(\hat{\bxi}),\mbox{\boldmath $\phi$}_h) = -k\sum_{i=1}^n \big(E^i({\bf u}_h)(\mbox{\boldmath $\phi$}_h)+\varepsilon_a^i({\bf u}_h)(\mbox{\boldmath $\phi$}_h)\big). \end{align} \begin{lemma}\label{ieens} Under the assumptions of Lemma \ref{pree}, the following estimate holds: \begin{align}\label{ieens1} \|\hat{\bxi}_n\|^2+ e^{-2\alpha t_n} k\sum_{i=1}^n e^{2\alpha t_i}\|\nabla\hat{\bxi}_i\|^2 \le Kk^2(1+\log\frac{1}{k}). \end{align} \end{lemma} \begin{proof} Choose $\mbox{\boldmath $\phi$}_h=\hat{\bxi}_i$ in (\ref{eebelin1i}) for $n=i$, multiply by $k e^{2\alpha t_i}$ and then sum over $1\le i\le n$. We drop the third term on the left-hand side of the resulting inequality as it is non-negative. \begin{align}\label{ieens01} e^{2\alpha t_n}\|\hat{\bxi}_n\|^2+\mu k\sum_{i=1}^n e^{2\alpha t_i}\|\nabla\hat{\bxi}_i\|^2 \le k\sum_{i=1}^n e^{2\alpha t_i} k\sum_{j=1}^i \big(|E^j({\bf u}_h)(\hat{\bxi}_i)| +|\varepsilon_a^j({\bf u}_h)(\hat{\bxi}_i)|\big). \end{align} From (\ref{R1be}), we find that $$ k\sum_{j=1}^i |E^j({\bf u}_h)(\hat{\bxi}_i)| \le \frac{1}{2}\big(\sum_{j=1}^i \int_{t_{j-1}}^{t_j} (s-t_{j-1}) \|{\bf u}_{hss}\|_{-1}ds \big)\|\nabla\hat{\bxi}_i\|. $$ Similar to the proof of Lemma \ref{e-ve}, we split the sum into the $j=1$ term and the rest to obtain \begin{align}\label{ieens02} k\sum_{j=1}^i |E^j({\bf u}_h)(\hat{\bxi}_i)| \le Kk(1+\frac{1}{2}\log\frac{1}{k}) e^{-\alpha k} \|\nabla\hat{\bxi}_i\|. \end{align} Therefore, \begin{align}\label{ieens03} k\sum_{i=1}^n e^{2\alpha t_i} k\sum_{j=1}^i |E^j({\bf u}_h)(\hat{\bxi}_i)| \le \frac{\mu}{4} k \sum_{i=1}^n e^{2\alpha t_i}\|\nabla\hat{\bxi}_i\|^2+Kk^2(1+\log\frac{1}{k}) e^{2\alpha t_n}. \end{align} Similarly, \begin{align}\label{ieens04b} k\sum_{i=1}^n e^{2\alpha t_i} k\sum_{j=1}^i |\varepsilon^j_a({\bf u}_h)(\hat{\bxi}_i)| \le \frac{\mu}{4} k \sum_{i =1}^n e^{2\alpha t_i}\|\nabla\hat{\bxi}_i\|^2+Kk^2 (1+\log\frac{1}{k}) e^{2\alpha t_n}. \end{align} Incorporate (\ref{ieens03})-(\ref{ieens04b}) in (\ref{ieens01}) to complete the rest of the proof. \end{proof} \noindent We are now in a position to estimate the $L^{\infty}({\bf L}^2)$-norm of $\mbox{\boldmath $\xi$}_n$.
\begin{theorem}\label{l2eebxi} Under the assumptions of Lemma \ref{pree}, the following holds: \begin{equation}\label{l2eebxi1} t_n \|\mbox{\boldmath $\xi$}_n\|^2 +e^{-2\alpha t_n} k\sum_{i=1}^n \sigma_i\|\nabla\mbox{\boldmath $\xi$}_i\|^2 \le Kk^2(1+\log \frac{1}{k}), \end{equation} where $\sigma_i=t_i e^{2\alpha t_i}$. \end{theorem} \begin{proof} From (\ref{eebelin}) with $n=i$ and $\mbox{\boldmath $\phi$}_h=\sigma_i\mbox{\boldmath $\xi$}_i$, we obtain \begin{align}\label{teebelin} \partial_t (\sigma_i\|\mbox{\boldmath $\xi$}_i\|^2)-e^{2\alpha k}\Big\{\|\tilde{\bxi}_{i-1}\|^2+(\frac{1-e^{-2\alpha k} }{k})\sigma_{i-1}\|\mbox{\boldmath $\xi$}_{i-1}\|^2\Big\}+2\mu \sigma_i\|\nabla\mbox{\boldmath $\xi$}_i\|^2 \nonumber \\ +2\sigma_i a(q_{r}^i(\mbox{\boldmath $\xi$}), \mbox{\boldmath $\xi$}_i) \le -2E^i({\bf u}_h)(\sigma_i\mbox{\boldmath $\xi$}_i)- 2\varepsilon_{a}^i ({\bf u}_h) (\sigma_i \mbox{\boldmath $\xi$}_i). \end{align} We multiply (\ref{teebelin}) by $k$ and sum it over $1\le i\le n$ to find that \begin{align}\label{l2eebxi01} \sigma_n\|\mbox{\boldmath $\xi$}_n\|^2 + (2\mu-\frac{e^{2\alpha k}-1}{k\lambda_1}) k\sum_{i=1}^n \sigma_i\|\nabla\mbox{\boldmath $\xi$}_i\|^2\le e^{2\alpha k} k\sum_{i=2}^{n-1}\|\tilde{\bxi}_i\|^2 \nonumber \\ -2k\sum_{i=1}^n \sigma_i a(q_{r}^i(\mbox{\boldmath $\xi$}), \mbox{\boldmath $\xi$}_i)-2 k\sum_{i=1}^n E^i({\bf u}_h)(\sigma_i\mbox{\boldmath $\xi$}_i) -2k\sum_{i=1}^n \varepsilon_{a}^i({\bf u}_h) (\sigma_i \mbox{\boldmath $\xi$}_i). \end{align} As earlier, using (\ref{sumbp}), we note that \begin{align}\label{l2eebxi02} 2k\sum_{i=1}^n \sigma_i a(q_{r}^i(\mbox{\boldmath $\xi$}), \mbox{\boldmath $\xi$}_i)= 2k\sum_{i=1}^n \gamma a(\hat{\bxi}_i, \sigma_i\mbox{\boldmath $\xi$}_i)-2k\sum_{i=2}^n k\sum_{j=1}^{i-1} \partial_t \beta(t_i-t_j) a(\hat{\bxi}_j, \sigma_i\mbox{\boldmath $\xi$}_i). \end{align} The first term can be handled as follows (for some $\varepsilon >0$): \begin{align}\label{l2eebxi02a} 2k\sum_{i=1}^n \gamma a(\hat{\bxi}_i, \sigma_i\mbox{\boldmath $\xi$}_i) \le \varepsilon k\sum_{i=1}^n \sigma_i \|\nabla\mbox{\boldmath $\xi$}_i\|^2+K(\varepsilon, \mu,\gamma) k\sum_{i=1}^n e^{2\alpha t_i} \|\nabla\hat{\bxi}_i\|^2. \end{align} For the second term, using a similar technique as in (\ref{pree09}), we observe that \begin{align}\label{l2eebxi02b} & 2k\sum_{i=2}^n k\sum_{j=1}^{i-1} \partial_t \beta(t_i-t_j) a(\hat{\bxi}_j, \sigma_i\mbox{\boldmath $\xi$}_i) \le \varepsilon k\sum_{i=1}^n \sigma_i \|\nabla\mbox{\boldmath $\xi$}_i\|^2 \\ +Kk\sum_{i=2}^n &\Big(k\sum_{j=1}^{i-1} e^{-\delta(t_i-t_j)} \big(\frac{e^{\delta k}-1}{k}\big) e^{\alpha t_i}\|\nabla\hat{\bxi}_j\|\Big)^2 \le \varepsilon k\sum_{i=1}^n \sigma_i \|\nabla\mbox{\boldmath $\xi$}_i\|^2+Kk\sum_{i=1}^n e^{2\alpha t_i}\|\nabla\hat{\bxi}_i\|^2. \nonumber \end{align} Combining (\ref{l2eebxi02})-(\ref{l2eebxi02b}), we find that \begin{align}\label{l2eebxi2} 2k\sum_{i=1}^n \sigma_i a(q_{r}^i(\mbox{\boldmath $\xi$}), \mbox{\boldmath $\xi$}_i) \le \varepsilon k\sum_{i=1}^n \sigma_i \|\nabla\mbox{\boldmath $\xi$}_i\|^2+K k\sum_{i=1}^n e^{2\alpha t_i}\|\nabla\hat{\bxi}_i\|^2. \end{align} From Lemma \ref{e-ve}, we obtain for $r=0$ and $s=1$ \begin{equation}\label{l2eebxi4} 2k\sum_{i=1}^n \big\{E^i({\bf u}_h)(\sigma_i\mbox{\boldmath $\xi$}_i)+\varepsilon_a^i({\bf u}_h)(\sigma_i\mbox{\boldmath $\xi$}_i)\big\} \le \varepsilon k\sum_{i=1}^n \sigma_i \|\nabla\mbox{\boldmath $\xi$}_i\|^2+Kk^2 (1+\log\frac{1}{k}) e^{2\alpha t_n}.
\end{equation} Incorporate the estimates (\ref{l2eebxi2})-(\ref{l2eebxi4}) in (\ref{l2eebxi01}) and choose $\varepsilon=\mu/2$ to conclude \begin{align*} \sigma_n\|\mbox{\boldmath $\xi$}_n\|^2 + (\mu-\frac{e^{2\alpha k}-1}{k\lambda_1}) k\sum_{i=1}^n \sigma_i\|\nabla\mbox{\boldmath $\xi$}_i\|^2\le Kk^2 (1+\log\frac{1}{k}) e^{2\alpha t_n}+ K k\sum_{i=1}^n e^{2\alpha t_i}\|\nabla\hat{\bxi}_i\|^2. \end{align*} We multiply by $e^{-2\alpha t_n}$ and use Lemma \ref{ieens} to complete the rest of the proof. \end{proof} \noindent We now obtain estimates of $\mbox{\boldmath $\eta$}$ below. Henceforth, $K_T$ denotes $KT e^{KT}.$ \begin{lemma}\label{estbta} Assume (${\bf A1}$), (${\bf A2}$) and a spatial discretization scheme that satisfies conditions (${\bf B1}$), (${\bf B2}$) and (${\bf B4}$). Further, assume that ${\bf U}^n$ and ${\bf V}^n$ satisfy (\ref{fdbej}) and (\ref{fdbejv}), respectively. Then, for some positive constant $K,$ there holds \begin{eqnarray} \|\mbox{\boldmath $\eta$}_n\|^2+e^{-2\alpha t_n} k\sum_{i=1}^n e^{2\alpha t_i}\|\mbox{\boldmath $\eta$}_i\|^2 \le K_{t_n}k(1+\log\frac{1}{k}), \label{estbta1} \\ \|\mbox{\boldmath $\eta$}_n\|_1^2+e^{-2\alpha t_n} k\sum_{i=1}^n e^{2\alpha t_i}\|\mbox{\boldmath $\eta$}_i\|_1^2 \le K_{t_n}. \label{estbta2} \end{eqnarray} \end{lemma} \begin{proof} We shall only prove the first estimate as the second one follows similarly. For $n=i,$ we put $\mbox{\boldmath $\phi$}_h=\mbox{\boldmath $\eta$}_i$ in (\ref{eebenl}), multiply by $ke^{2\alpha t_i}$ and sum over $1\le i\le n\le N$ to obtain, as in (\ref{pree02}), \begin{align}\label{estbta01} \|\tilde{\bta}_n\|^2 +2\mu k\sum_{i=1}^n \|\nabla\tilde{\bta}_i\|^2\le k\sum_{i=1}^{n-1} \frac{(e^{2\alpha k}-1)}{k}\|\tilde{\bta}_i\|^2+2k\sum_{i=1}^n e^{2\alpha t_i} \Lambda_h^i(\mbox{\boldmath $\eta$}_i). \end{align} We recall from (\ref{dLbe}) that \begin{align}\label{estbta02} \Lambda^i_h(\mbox{\boldmath $\eta$}_i)= -b({\bf u}_h^i,\mbox{\boldmath $\xi$}_i,\mbox{\boldmath $\eta$}_i)-b({\bf e}_i,{\bf u}_h^i,\mbox{\boldmath $\eta$}_i) -b({\bf e}_i,\mbox{\boldmath $\xi$}_i,\mbox{\boldmath $\eta$}_i). \end{align} Using (\ref{nonlin1}) and Lemma \ref{pree}, we obtain the following estimates: \begin{eqnarray} b({\bf e}_i,\mbox{\boldmath $\xi$}_i,\mbox{\boldmath $\eta$}_i)& \le & \|\mbox{\boldmath $\xi$}_i\|^{1/2}\|\nabla\mbox{\boldmath $\xi$}_i\|^{3/2}\|\nabla\mbox{\boldmath $\eta$}_i\| +\|\nabla\mbox{\boldmath $\xi$}_i\|\|\mbox{\boldmath $\eta$}_i\|\|\nabla\mbox{\boldmath $\eta$}_i\| \nonumber \\ & \le & \varepsilon \|\nabla\mbox{\boldmath $\eta$}_i\|^2+K\|\mbox{\boldmath $\eta$}_i\|^2+Kk^{1/2}(1+\log\frac{1}{k})^{1/2} \|\nabla\mbox{\boldmath $\xi$}_i\|^2, \label{estbta02a} \\ b({\bf u}_h^i,\mbox{\boldmath $\xi$}_i,\mbox{\boldmath $\eta$}_i) & \le & \varepsilon \|\nabla\mbox{\boldmath $\eta$}_i\|^2+K\|\nabla\mbox{\boldmath $\xi$}_i\|^2, \label{estbta02b} \\ b({\bf e}_i,{\bf u}_h^i,\mbox{\boldmath $\eta$}_i) & \le & \varepsilon \|\nabla\mbox{\boldmath $\eta$}_i\|^2+K\big(\|\nabla\mbox{\boldmath $\xi$}_i\|^2 +\|\mbox{\boldmath $\eta$}_i\|^2\big). \label{estbta02c} \end{eqnarray} Incorporate (\ref{estbta02a})-(\ref{estbta02c}) in (\ref{estbta02}) and then in (\ref{estbta01}). Choose $\varepsilon= \mu/6$ and once again use Lemma \ref{pree}. Finally, use the discrete Gronwall lemma to complete the rest of the proof.
\end{proof} \begin{remark} Combining Lemmas \ref{pree} and \ref{estbta}, we note that \begin{eqnarray} \|{\bf e}_n\|^2+e^{-2\alpha t_n} k\sum_{i=1}^n e^{2\alpha t_i}\|{\bf e}_i\|^2 \le K_{t_n}k(1+\log\frac{1}{k}), \label{err1} \\ \|{\bf e}_n\|_1^2+e^{-2\alpha t_n} k\sum_{i=1}^n e^{2\alpha t_i}\|{\bf e}_i\|_1^2 \le K_{t_n}. \label{err2} \end{eqnarray} Therefore, we obtain a suboptimal order of convergence for $\|{\bf e}_n\|$. \end{remark} \noindent Below, we shall prove the optimal estimate of $\|{\bf e}_n\|$ with the help of a series of lemmas. \begin{lemma}\label{negbta} Under the assumptions of Lemma \ref{estbta}, the following holds: \begin{align}\label{negbta1} \|\mbox{\boldmath $\eta$}_n\|_{-1}^2+e^{-2\alpha t_n} k\sum_{i=1}^n e^{2\alpha t_i} \|\mbox{\boldmath $\eta$}_i\|^2 \le K_{t_n}k^2. \end{align} \end{lemma} \begin{proof} Put $\mbox{\boldmath $\phi$}_h=(-{\tilde\Delta}_h)^{-1}\mbox{\boldmath $\eta$}_i$ in (\ref{eebenl}) for $n=i$. Multiply the equation by $ke^{2\alpha t_i}$ and sum over $1\le i\le n\le N$ to arrive at \begin{align}\label{negbta01} \|\tilde{\bta}_n\|_{-1}^2+2\mu k &\sum_{i=1}^n \|\tilde{\bta}_i\|^2 \le \sum_{i=1}^{n-1} (e^{2\alpha k}-1)\|\tilde{\bta}_i\|_{-1}^2+2k \sum_{i=1}^n e^{2\alpha t_i} \Lambda_h^i ((-{\tilde\Delta}_h)^{-1}\mbox{\boldmath $\eta$}_i). \end{align} From (\ref{dLbe}), we find that \begin{align}\label{negbta02} |2\Lambda_h^i((-{\tilde\Delta}_h)^{-1}\mbox{\boldmath $\eta$}_i)| & \le 2|b({\bf e}_i,{\bf u}_h^i,(-{\tilde\Delta}_h)^{-1}\mbox{\boldmath $\eta$}_i)| \nonumber \\ & +2|b({\bf u}_h^i,{\bf e}_i,(-{\tilde\Delta}_h)^{-1}\mbox{\boldmath $\eta$}_i)| +2|b({\bf e}_i,{\bf e}_i,(-{\tilde\Delta}_h)^{-1}\mbox{\boldmath $\eta$}_i)|. \end{align} For the first term on the right-hand side of (\ref{negbta02}), we use (\ref{nonlin1}) to find that \begin{equation}\label{negbta03} |2b({\bf e}_i,{\bf u}_h^i,-{\tilde\Delta}_h^{-1}\mbox{\boldmath $\eta$}_i)| \le K\|{\bf e}_i\|\|{\bf u}_h^i\|_1\|\mbox{\boldmath $\eta$}_i\|_{-1}^{1/2} \|\mbox{\boldmath $\eta$}_i\|^{1/2}. \end{equation} Also, \begin{align}\label{negbta04} |2b({\bf u}_h^i,{\bf e}_i,-{\tilde\Delta}_h^{-1}\mbox{\boldmath $\eta$}_i)|& \le |({\bf u}_h^i\cdot\nabla{\bf e}_i, -{\tilde\Delta}_h^{-1}\mbox{\boldmath $\eta$}_i)|+ |({\bf u}_h^i\cdot\nabla(-{\tilde\Delta}_h^{-1}\mbox{\boldmath $\eta$}_i),{\bf e}_i)| \nonumber \\ & \le |({\bf u}_h^i\cdot\nabla{\bf e}_i, -{\tilde\Delta}_h^{-1}\mbox{\boldmath $\eta$}_i)|+K\|{\bf u}_h^i\|_1 \|\mbox{\boldmath $\eta$}_i\|_{-1}^{1/2}\|\mbox{\boldmath $\eta$}_i\|^{1/2}\|{\bf e}_i\|. \end{align} For $D_l=\frac{\partial}{\partial x_l}$, we note that \begin{align}\label{negbta05} ({\bf u}_h^i\cdot\nabla{\bf e}_i,-{\tilde\Delta}_h^{-1}\mbox{\boldmath $\eta$}_i) &=\sum_{l,j=1}^2 \int_{\Omega}{\bf u}_{h,l}^i D_l({\bf e}_{i,j})(-{\tilde\Delta}_h^{-1})\mbox{\boldmath $\eta$}_{i,j}dx \nonumber \\ &= -\sum_{l,j=1}^2 \int_{\Omega}\big\{D_l({\bf u}_{h,l}^i){\bf e}_{i,j}(-{\tilde\Delta}_h^{-1})\mbox{\boldmath $\eta$}_{i,j}+ {\bf u}_{h,l}^i{\bf e}_{i,j}D_l((-{\tilde\Delta}_h^{-1})\mbox{\boldmath $\eta$}_{i,j})\}dx \nonumber \\ & \le K\|{\bf u}_h^i\|_1\|{\bf e}_i\|\|\mbox{\boldmath $\eta$}_i\|_{-1}^{1/2}\|\mbox{\boldmath $\eta$}_i\|^{1/2}. \end{align} Finally, from (\ref{nonlin1}), we find that \begin{align}\label{negbta06} |2b({\bf e}_i,{\bf e}_i,-{\tilde\Delta}_h^{-1}\mbox{\boldmath $\eta$}_i)| \le K\|{\bf e}_i\|\big(\|{\bf e}_i\|_1+\|{\bf e}_i\|^{1/2} \|{\bf e}_i\|_1^{1/2}\big)\|\mbox{\boldmath $\eta$}_i\|_{-1}^{1/2}\|\mbox{\boldmath $\eta$}_i\|^{1/2}.
\end{align} Now, combine (\ref{negbta02})-(\ref{negbta06}) and use the fact that $$ \|{\bf e}_i\|_1 \le \|{\bf u}_h^i\|_1+\|{\bf U}^i\|_1 \le K $$ to observe that \begin{align}\label{negbta07} |2\Lambda_h^i((-{\tilde\Delta}_h)^{-1}\mbox{\boldmath $\eta$}_i)| & \le K\|{\bf e}_i\|\|\mbox{\boldmath $\eta$}_i\|_{-1}^{1/2} \|\mbox{\boldmath $\eta$}_i\|^{1/2} \nonumber \\ & \le K\|\mbox{\boldmath $\xi$}_i\|\|\mbox{\boldmath $\eta$}_i\|_{-1}^{1/2} \|\mbox{\boldmath $\eta$}_i\|^{1/2} +K\|\mbox{\boldmath $\eta$}_i\|_{-1}^{1/2}\|\mbox{\boldmath $\eta$}_i\|^{3/2}. \end{align} Incorporate (\ref{negbta07}) in (\ref{negbta01}) and use a kickback argument to obtain \begin{align}\label{negbta08} \|\tilde{\bta}_n\|_{-1}^2+\mu k &\sum_{i=1}^n \|\tilde{\bta}_i\|^2 \le Kk\sum_{i=1}^n \|\tilde{\bta}_i\|_{-1}^2+Kk \sum_{i=1}^n \|\tilde{\bxi}_i\|^2. \end{align} Finally, use Lemma \ref{neg}, apply the discrete Gronwall lemma and multiply the resulting estimate by $e^{-2\alpha t_n}$ to complete the rest of the proof. \end{proof} \begin{remark} From Lemmas \ref{neg} and \ref{negbta}, we have the following estimate: \begin{equation}\label{l2ee} e^{-2\alpha t_n} k\sum_{i=1}^n e^{2\alpha t_i}\|{\bf e}_i\|^2 \le K_{t_n}k^2. \end{equation} \end{remark} We need another estimate of $\mbox{\boldmath $\eta$}$ similar to the one in Lemma \ref{ieens}, and the proof follows along similar lines. For that purpose, we multiply (\ref{eebenl}) by $k$ and sum over $1$ to $n$; similar to (\ref{eebelin1i}), we obtain \begin{align}\label{eebenli} (\partial_t\hat{\bta}_n,\mbox{\boldmath $\phi$}_h)+ \mu a(\hat{\bta}_n,\mbox{\boldmath $\phi$}_h)+ a(q^n_r(\hat{\bta}),\mbox{\boldmath $\phi$}_h) = k\sum_{i=1}^n \Lambda_h^i(\mbox{\boldmath $\phi$}_h). \end{align} \begin{lemma}\label{ibta} Under the assumptions of Lemma \ref{estbta}, the following holds: \begin{align}\label{ibta1} \|\hat{\bta}_n\|^2+ e^{-2\alpha t_n} k\sum_{i=1}^n e^{2\alpha t_i} \|\nabla\hat{\bta}_i\|^2 \le K_{t_n}k^2(1+\log\frac{1}{k}). \end{align} \end{lemma} \begin{proof} Choose $\mbox{\boldmath $\phi$}_h=\hat{\bta}_i$ in (\ref{eebenli}) for $n=i$, multiply by $k e^{2\alpha t_i}$ and then sum over $1\le i\le n$ to observe, as in (\ref{ieens01}), \begin{align}\label{ibta01} e^{2\alpha t_n}\|\hat{\bta}_n\|^2+\mu k\sum_{i=1}^n e^{2\alpha t_i}\|\nabla\hat{\bta}_i\|^2 \le k\sum_{i=1}^n e^{2\alpha t_i} k\sum_{j=1}^i |\Lambda_h^j(\hat{\bta}_i)|. \end{align} We observe that \begin{equation}\label{ibta02} k\sum_{j=1}^i |\Lambda_h^j(\hat{\bta}_i)|= k\sum_{j=1}^i \Big|b({\bf u}_h^j, {\bf e}_j,\hat{\bta}_i)+b({\bf e}_j,{\bf u}_h^j,\hat{\bta}_i)+b({\bf e}_j,{\bf e}_j,\hat{\bta}_i)\Big|. \end{equation} Use (\ref{nonlin1}), (\ref{err1}) and (\ref{err2}) to obtain \begin{align}\label{ibta02a} & k\sum_{i=1}^n e^{2\alpha t_i}k\sum_{j=1}^i |b({\bf e}_j,{\bf e}_j,\hat{\bta}_i)|\le Kk\sum_{i=1}^n e^{2\alpha t_i}k\sum_{j=1}^i\|{\bf e}_j\|^{1/2} \|{\bf e}_j\|^{3/2}_1 \|\nabla\hat{\bta}_i\| \nonumber \\ \le & Kk\sum_{i=1}^n e^{2\alpha t_i}\Big(k\sum_{j=1}^i\|\tilde{{\bf e}}_j\|^2\Big)^{1/4} \Big(k\sum_{j=1}^i\|\tilde{{\bf e}}_j\|_1^2\Big)^{3/4} \|\nabla\hat{\bta}_i\| \nonumber \\ \le & K_{t_n}k^2(1+\log\frac{1}{k})+\varepsilon k\sum_{i=1}^n e^{2\alpha t_i}\|\nabla\hat{\bta}_i\|^2.
\end{align} Similarly, \begin{align}\label{ibta02b} k\sum_{i=1}^n e^{2\alpha t_i}k\sum_{j=1}^i |b({\bf e}_j,{\bf u}_h^j,\hat{\bta}_i)|\le & Kk\sum_{i=1}^n e^{2\alpha t_i}\Big(k\sum_{j=1}^i\|\tilde{{\bf e}}_j\|^2\Big)^{1/2} \|\nabla\hat{\bta}_i\| \nonumber \\ \le & K_{t_n}k^2+\varepsilon k\sum_{i=1}^n e^{2\alpha t_i}\|\nabla\hat{\bta}_i\|^2, \end{align} and \begin{align}\label{ibta02c} k\sum_{i=1}^n e^{2\alpha t_i} k\sum_{j=1}^i |b({\bf u}_h^j,{\bf e}_j,\hat{\bta}_i)| \le k \sum_{i=1}^n e^{2\alpha t_i} k\sum_{j=1}^i \Big(\frac{1}{2}|((\nabla\cdot{\bf u}_h^j) {\bf e}_j,\hat{\bta}_i)|+|(({\bf u}_h^j\cdot\nabla)\hat{\bta}_i,{\bf e}_j)|\Big) \nonumber \\ \le Kk\sum_{i=1}^n e^{2\alpha t_i}\Big(k\sum_{j=1}^i\|\tilde{{\bf e}}_j\|^2\Big)^{1/2} \|\nabla\hat{\bta}_i\|\le K_{t_n}k^2+\varepsilon k\sum_{i=1}^n e^{2\alpha t_i}\|\nabla\hat{\bta}_i\|^2. \end{align} Combining these estimates, namely (\ref{ibta02a})-(\ref{ibta02c}), and putting $\varepsilon=\mu/6$, we conclude the rest of the proof. \end{proof} \noindent We present below a lemma with an optimal estimate for $\mbox{\boldmath $\eta$}_n$. \begin{lemma}\label{eebta} Under the assumptions of Lemma \ref{estbta}, the following holds: \begin{align}\label{eebta1} t_n\|\mbox{\boldmath $\eta$}_n\|^2+e^{-2\alpha t_n} k\sum_{i=1}^n \sigma_i\|\mbox{\boldmath $\eta$}_i\|_1^2 \le K_{t_n}k^2(1+\log\frac{1}{k}). \end{align} \end{lemma} \begin{proof} We choose $\mbox{\boldmath $\phi$}_h=\sigma_i\mbox{\boldmath $\eta$}_i$ in (\ref{eebenl}) for $n=i$. Multiply the resulting equation by $k$ and sum it over $1\le i\le n$ to find that \begin{align}\label{eebta01} \sigma_n\|\mbox{\boldmath $\eta$}_n\|^2+2\mu k\sum_{i=1}^n \sigma_i\|\nabla\mbox{\boldmath $\eta$}_i\|^2 & \le K(\alpha) k\sum_{i=2}^{n-1}\|\tilde{\bta}_i\|^2-2k\sum_{i=1}^n a(q_{r}^i(\mbox{\boldmath $\eta$}),\sigma_i\mbox{\boldmath $\eta$}_i) \nonumber \\ &+2k\sum_{i=1}^n\Lambda_h^i(\sigma_i\mbox{\boldmath $\eta$}_i). \end{align} As in (\ref{l2eebxi2}), we obtain \begin{align}\label{eebta02} 2k\sum_{i=1}^n \sigma_i a(q_{r}^i(\mbox{\boldmath $\eta$}), \mbox{\boldmath $\eta$}_i) \le \varepsilon k\sum_{i=1}^n \sigma_i \|\nabla\mbox{\boldmath $\eta$}_i\|^2+K k\sum_{i=1}^n e^{2\alpha t_i}\|\nabla\hat{\bta}_i\|^2. \end{align} Following the proof technique leading to the estimate (\ref{estbta02}), we observe that \begin{align}\label{eebta03} 2k\sum_{i=1}^n\Lambda_h^i(\sigma_i\mbox{\boldmath $\eta$}_i) \le \varepsilon k\sum_{i=1}^n \sigma_i\|\nabla\mbox{\boldmath $\eta$}_i\|^2 +Kk\sum_{i=1}^n\sigma_i \big(\|\nabla\mbox{\boldmath $\xi$}_i\|^2+\|\mbox{\boldmath $\eta$}_i\|^2\big). \end{align} Substitute (\ref{eebta02})-(\ref{eebta03}) in (\ref{eebta01}); an application of Theorem \ref{l2eebxi} and Lemmas \ref{negbta} and \ref{ibta} then completes the rest of the proof. \end{proof} \begin{theorem}\label{l2eebe} Under the assumptions of Lemma \ref{estbta}, the following holds: \begin{equation}\label{l2eebe1} \|{\bf e}_n\| \le K_Tt_n^{-1/2}k(1+\log \frac{1}{k})^{1/2}. \end{equation} \end{theorem} \begin{proof} Combine Theorem \ref{l2eebxi} and Lemma \ref{eebta} to complete the rest of the proof. \end{proof} \begin{remark} We need not split the error ${\bf e}$ into $\mbox{\boldmath $\xi$}$ and $\mbox{\boldmath $\eta$}$ in order to obtain the optimal error estimate (\ref{l2eebe1}).
However, for an optimal error estimate in the $L^2$-norm which is uniform in time, we need the splitting ${\bf e}_n = \mbox{\boldmath $\eta$}_n-\mbox{\boldmath $\xi$}_n.$ \end{remark} \section[Uniform Error]{Uniform Error Estimate} \setcounter{equation}{0} In this section, we prove the estimate (\ref{l2eebe1}) to be uniform under the uniqueness condition $\mu-2N\nu^{-1}\|{\bf f}\|_{\infty} >0,$ where $N$ is given as in (\ref{uc}). We observe that the estimate (\ref{l2eebxi1}) involving $\mbox{\boldmath $\xi$}_n$ is uniform in nature. Hence, we are left to deal with the ${\bf L}^2$ estimate of $\mbox{\boldmath $\eta$}_n.$ \begin{lemma}\label{btaunif} Let the assumptions of Lemma \ref{estbta} hold. Under the uniqueness condition $\mu-2N\nu^{-1}\|{\bf f}\|_{\infty} >0$ and under the assumption $$ 1+(\frac{\mu\lambda_1}{2})k > e^{2\alpha k}, $$ which holds for $0<k<k_0,~k_0>0$, the following uniform estimate holds: \begin{equation}\label{btaunif1} \|\mbox{\boldmath $\eta$}_n\| \le K\tau_n^{-1/2}k(1+\log \frac{1}{k})^{1/2}, \end{equation} where $\tau_n=\min \{1,t_n\}$. \end{lemma} \begin{proof} Choose $\mbox{\boldmath $\phi$}_h=\mbox{\boldmath $\eta$}_i$ in (\ref{eebenl}) for $n=i$ to obtain \begin{align}\label{001} \partial_t\|\mbox{\boldmath $\eta$}_i\|^2+2\mu \|\nabla\mbox{\boldmath $\eta$}_i\|^2+2 a(q^i_r(\mbox{\boldmath $\eta$}),\mbox{\boldmath $\eta$}_i) \le 2\Lambda^i_h(\mbox{\boldmath $\eta$}_i). \end{align} From (\ref{dLbe}), we find that \begin{align}\label{002} \Lambda^i_h(\mbox{\boldmath $\eta$}_i)= -b(\mbox{\boldmath $\xi$}_i,{\bf u}^i_h,\mbox{\boldmath $\eta$}_i)-b({\bf u}_h^i,\mbox{\boldmath $\xi$}_i,\mbox{\boldmath $\eta$}_i) -b(\mbox{\boldmath $\eta$}_i,{\bf u}^i_h,\mbox{\boldmath $\eta$}_i)-b({\bf e}_i,\mbox{\boldmath $\xi$}_i,\mbox{\boldmath $\eta$}_i). \end{align} From the definition of $N$ (see (\ref{uc})), we note that \begin{align}\label{004} |b(\mbox{\boldmath $\eta$}_i,{\bf u}^i_h,\mbox{\boldmath $\eta$}_i)| \le N\|\nabla\mbox{\boldmath $\eta$}_i\|^2\|\nabla{\bf u}^i_h\|. \end{align} Again with the help of (\ref{nonlin1}), we obtain \begin{align}\label{005} |b(\mbox{\boldmath $\xi$}_i,{\bf u}^i_h,\mbox{\boldmath $\eta$}_i)|&+|b({\bf u}_h^i, \mbox{\boldmath $\xi$}_i,\mbox{\boldmath $\eta$}_i)|\le K\|\mbox{\boldmath $\xi$}_i\|\|\nabla \mbox{\boldmath $\eta$}_i\|\|\nabla{\bf u}^i_h\|^{1/2} \|{\tilde\Delta}_h{\bf u}^i_h\|^{1/2} \nonumber \\ & \le K\tau_i^{-3/4}k(1+\log\frac{1}{k})^{1/2}\|\nabla\mbox{\boldmath $\eta$}_i\|. \end{align} Since $\|{\bf e}_i\|_2 \le \|{\bf u}_h^i\|_2+\|{\bf U}^i\|_2 \le Kt_i^{-1/2}$, we conclude that \begin{equation}\label{005a} |b({\bf e}_i,\mbox{\boldmath $\xi$}_i,\mbox{\boldmath $\eta$}_i)| \le K\tau_i^{-3/4}k(1+\log\frac{1}{k})^{1/2}\|\nabla\mbox{\boldmath $\eta$}_i\|. \end{equation} Therefore, from (\ref{004})-(\ref{005a}), we find that \begin{align}\label{006} |\Lambda^i_h(\mbox{\boldmath $\eta$}_i)| \le N\|\nabla\mbox{\boldmath $\eta$}_i\|^2\|\nabla{\bf u}^i_h\|+K\tau_i^{-3/4}k (1+\log\frac{1}{k})^{1/2}\|\nabla\mbox{\boldmath $\eta$}_i\|. \end{align} We recall from \cite{WHS} that $$ \limsup_{t\to\infty} \|\nabla{\bf u}_h(t)\| \le \nu^{-1}\|{\bf f}\|_{\infty}, $$ and therefore, for large enough $i\in \mathbb{N}$, say $i>i_0$, we obtain from (\ref{006}) \begin{align}\label{007} |2\Lambda_h^i(\mbox{\boldmath $\eta$}_i)| \le 2N\nu^{-1}\|{\bf f}\|_{\infty}\|\nabla\mbox{\boldmath $\eta$}_i\|^2+K\tau_i^{-3/4} k (1+\log\frac{1}{k})^{1/2}\|\nabla\mbox{\boldmath $\eta$}_i\|.
\end{align} With $\sigma_i=\tau_i e^{2\alpha t_i},$ we multiply (\ref{001}) by $k\sigma_i$ and sum over $i_0+1$ to $n$ to obtain \begin{align}\label{008} k\sum_{i=i_0+1}^n e^{2\alpha t_i} \{\partial_t\|\mbox{\boldmath $\eta$}_i\|^2+2\mu \|\nabla\mbox{\boldmath $\eta$}_i\|^2\} +2k\sum_{i=i_0+1}^n e^{2\alpha t_i} a(q^i_r(\mbox{\boldmath $\eta$}),\mbox{\boldmath $\eta$}_i) \nonumber \\ \le 2k\sum_{i=i_0+1}^n \sigma_i \Lambda^i_h(\mbox{\boldmath $\eta$}_i). \end{align} Without loss of generality, we can assume that $i_0$ is large enough so that, by definition, $\tau_i=1$ for $i\ge i_0$. We rewrite (\ref{008}) as follows: \begin{align}\label{008a} k\sum_{i=i_0+1}^n e^{2\alpha t_i} \{\partial_t\|\mbox{\boldmath $\eta$}_i\|^2+2\mu \|\nabla\mbox{\boldmath $\eta$}_i\|^2\} +2k\sum_{i=1}^n e^{2\alpha t_i} a(q^i_r(\mbox{\boldmath $\eta$}),\mbox{\boldmath $\eta$}_i) \nonumber \\ \le 2k\sum_{i=i_0+1}^n \sigma_i \Lambda^i_h(\mbox{\boldmath $\eta$}_i)+2k\sum_{i=1}^{i_0} e^{2\alpha t_i} a(q^i_r(\mbox{\boldmath $\eta$}),\mbox{\boldmath $\eta$}_i). \end{align} We observe that the last term on the left-hand side of (\ref{008a}) is non-negative and hence is dropped. \begin{align}\label{009} e^{2\alpha t_n}\|\mbox{\boldmath $\eta$}_n\|^2-\sum_{i=i_0+1}^{n-1} e^{2\alpha t_i} (e^{2\alpha k}-1)\|\mbox{\boldmath $\eta$}_i\|^2+\mu k\sum_{i=i_0+1}^n e^{2\alpha t_i} \|\nabla\mbox{\boldmath $\eta$}_i\|^2 \nonumber \\ + k\sum_{i=i_0+1}^n (\mu-2N\nu^{-1}\|{\bf f}\|_{\infty}) e^{2\alpha t_i} \|\nabla\mbox{\boldmath $\eta$}_i\|^2 \le e^{2\alpha t_{i_0}}\|\mbox{\boldmath $\eta$}_{i_0}\|^2+2k\sum_{i=1}^{i_0} e^{2\alpha t_i} q^i_r(\|\nabla\mbox{\boldmath $\eta$}_i\|)\|\nabla\mbox{\boldmath $\eta$}_i\| \nonumber \\ +Kk^2\sum_{i=i_0+1}^n \tau_i^{1/4} e^{2\alpha t_i} (1+\log\frac{1}{k})^{1/2}\|\nabla\mbox{\boldmath $\eta$}_i\|. \end{align} Under the assumption $$ 1+(\frac{\mu\lambda_1}{2})k > e^{2\alpha k}, $$ which holds for $0<k<k_0,~k_0>0$ with $0 \le \alpha \le \min \big\{\delta, \frac{\mu\lambda_1}{2}\big\},$ we find that \begin{align}\label{010} \frac{\mu}{2} k\sum_{i=i_0+1}^n e^{2\alpha t_i} \|\nabla\mbox{\boldmath $\eta$}_i\|^2-\sum_{i=i_0+1}^{n-1} e^{2\alpha t_i} (e^{2\alpha k}-1)\|\mbox{\boldmath $\eta$}_i\|^2 \nonumber \\ =k\sum_{i=i_0+1}^{n} \big(\frac{\mu}{2}-\frac{e^{2\alpha k}-1}{k\lambda_1}\big)\sigma_i \|\nabla\mbox{\boldmath $\eta$}_i\|^2 \ge 0. \end{align} Due to the uniqueness condition, we arrive at the following: \begin{align}\label{011} k\sum_{i=i_0+1}^n (\mu-2N\nu^{-1}\|{\bf f}\|_{\infty}) e^{2\alpha t_i} \|\nabla\mbox{\boldmath $\eta$}_i\|^2 \ge 0. \end{align} Following the proof techniques of (\ref{pree08})-(\ref{pree09}), we obtain \begin{align}\label{012} 2k\sum_{i=1}^{i_0} e^{2\alpha t_i} q^i_r(\|\nabla\mbox{\boldmath $\eta$}_i\|) \|\nabla\mbox{\boldmath $\eta$}_i\| \le K k\sum_{i=1}^{i_0} e^{2\alpha t_i} \|\nabla\mbox{\boldmath $\eta$}_i\|^2. \end{align} Moreover, \begin{align}\label{013} Kk^2\sum_{i=i_0+1}^n \tau_i^{1/4} e^{2\alpha t_i} (1+\log\frac{1}{k})^{1/2} \|\nabla\mbox{\boldmath $\eta$}_i\|\le \frac{\mu}{4} k\sum_{i=i_0+1}^n \sigma_i \|\nabla\mbox{\boldmath $\eta$}_i\|^2 \nonumber \\ +Kk^2 (1+\log\frac{1}{k}) k\sum_{i=i_0+1}^n e^{2\alpha t_i} \tau_i^{-1/2}.
\end{align} Incorporate (\ref{010})-(\ref{013}) in (\ref{009}) and use Lemma \ref{negbta} and (\ref{l2ee}) to observe that \begin{align}\label{014} e^{2\alpha t_n}\|\mbox{\boldmath $\eta$}_n\|^2+k\sum_{i=1}^n \sigma_i \|\nabla\mbox{\boldmath $\eta$}_i\|^2 & \le K_{t_{i_0}} k^2+Kk^2 (1+\log\frac{1}{k}) e^{2\alpha t_n} \nonumber \\ &+Kk\sum_{i=1}^{i_0} e^{2\alpha t_i} \|\nabla\mbox{\boldmath $\eta$}_i\|^2. \end{align} Multiply by $e^{-2\alpha t_n}$ and, under the assumption that \begin{equation}\label{015} k\sum_{i=1}^{i_0} e^{2\alpha t_i} \|\nabla\mbox{\boldmath $\eta$}_i\|^2 \le K_{t_{i_0}}t_{i_0}^{-1}k^2(1+\log \frac{1}{k}), \end{equation} we conclude that $$ \|\mbox{\boldmath $\eta$}_n\| \le Kt_n^{-1/2}k(1+\log\frac{1}{k})^{1/2}, $$ since $i_0>0$ is fixed. Combining this result with (\ref{l2eebxi1}), we complete the rest of the proof. \end{proof} \noindent We are now left with the proof of (\ref{015}). \begin{lemma}\label{bta1} Under the assumptions of Lemma \ref{estbta}, the following holds: $$ k\sum_{i=1}^{i_0} e^{2\alpha t_i} \|\nabla\mbox{\boldmath $\eta$}_i\|^2 \le K_{t_{i_0}} t_{i_0}^{-1}k^2(1+\log \frac{1}{k}). $$ \end{lemma} \begin{proof} In (\ref{estbta01}), we use \begin{align*} \Lambda^i_h(\mbox{\boldmath $\eta$}_i) &= -b_h({\bf u}^i_h,{\bf e}_i,\mbox{\boldmath $\eta$}_i)-b_h({\bf e}_i,{\bf U}^i,\mbox{\boldmath $\eta$}_i) \nonumber \\ & \le \frac{\mu}{4}\|\nabla\mbox{\boldmath $\eta$}_i\|^2+K\|{\bf e}_i\|^2(\|{\tilde\Delta}_h{\bf u}^i_h\|+\|{\tilde\Delta}_h{\bf U}^i\|), \end{align*} along with Lemma \ref{negbta} and Theorem \ref{l2eebe} to arrive at \begin{align}\label{bta02} \|\tilde{\bta}_{i_0}\|^2 +\mu k\sum_{i=1}^{i_0} \|\nabla\tilde{\bta}_i\|^2\le K_{t_{i_0}}k^2 +K_{t_{i_0}}t_{i_0}^{-1}k^2(1+\log \frac{1}{k}) k\sum_{i=1}^{i_0} e^{2\alpha t_i} t_i^{-1/2}. \end{align} This completes the rest of the proof. \end{proof} \section{Conclusion} \setcounter{equation}{0} In this paper, we have discussed optimal error estimates for the backward Euler method applied to the Oldroyd model with non-smooth initial data, that is, ${\bf u}_0\in {\bf J}_1$. We have proved both optimal and uniform error estimates for the velocity; the uniform estimate holds under the uniqueness condition. The error analysis for non-smooth initial data requires additional proof techniques beyond those of the smooth case, and the proofs are more involved.
\section{Introduction} $\mu$ Orionis (61 Ori, HR 2124, HIP 28614, HD 40932) is a quadruple star system that has been extensively studied by radial velocity (RV) and differential astrometry. It is located just north of Betelgeuse, Orion's right shoulder (left on the sky); $\mu$ Ori is a bright star that is visible to the unaided eye even in moderately light-polluted skies. \cite{Frost1906} discovered it to be a short-period (four and a half day) single-lined spectroscopic binary; this was component Aa, whose short-period, low-mass companion Ab has never been detected directly. \cite{Aitken1914} discovered it also had a more distant component (B), forming a sub-arcsecond visual binary. Much later, \cite{Fekel1980} found B was itself a short-period (4.78 days) double-lined spectroscopic binary, making the system quadruple; these stars are designated Ba and Bb. Most recently, \cite{Fekel2002} (hereafter F2002) reported the astrometric orbit of the A-B motion, double-lined RV orbits for A-B and the Ba-Bb subsystem, and a single-lined RV orbit for the Aa-Ab subsystem. F2002 estimate the spectral types as A5V (Aa, an Am star), F5V (Ba), and F5V (Bb), though they note these classifications are less certain than usual due to the complexity of the system. For a more complete discussion of the history of $\mu$ Ori, see F2002. Until now, astrometric observations have only been able to characterize the long period A-B motion, lacking the precision necessary to measure the astrometric perturbations to this orbit caused by the Aa-Ab and Ba-Bb subsystems. The method described by \cite{LaneMute2004a} for ground-based differential astrometry at the $\sim 20$ $\mu{\rm as}$ level for sub-arcsecond (``speckle'') binaries has been used to study $\mu$ Ori during the 2004-2007 observing seasons. These measurements represent an improvement in precision of over two orders of magnitude over previous work on this system. The goal of the current investigation is to report the center-of-light (photocenter) astrometric orbits of the Aa-Ab and Ba-Bb subsystems. This enables measurement of the coplanarities of the A-B, Aa-Ab, and Ba-Bb orbits. The masses and luminosity ratio of Aa and Ab are measured for the first time. Also presented are updated orbits for the PHASES targets $\delta$ Equ, $\kappa$ Peg, and V819 Her. Astrometric measurements were made at the Palomar Testbed Interferometer \citep[PTI;][]{col99} as part of the Palomar High-precision Astrometric Search for Exoplanet Systems (PHASES) program \citep{Mute06Limits}. PTI is located on Palomar Mountain near San Diego, CA. It was developed by the Jet Propulsion Laboratory, California Institute of Technology for NASA, as a testbed for interferometric techniques applicable to the Keck Interferometer and other missions such as the Space Interferometry Mission (SIM). It operates in the J ($1.2 \mu{\rm m}$), H ($1.6 \mu{\rm m}$), and K ($2.2 \mu{\rm m}$) bands, and combines starlight from two out of three available 40-cm apertures. The apertures form a triangle with one 110 and two 87 meter baselines. \section{Observations and Data Processing} \subsection{PHASES Observations} \subsubsection{Instrumental Setup} $\mu$ Ori was observed with PTI on 17 nights in 2004-2007 with the observing mode described in \cite{LaneMute2004a}. Starlight is collected from two apertures, collimated, and sent to a central beam combining facility.
There, the light from each telescope reflects from movable mirrors (delay lines) whose position is constantly varied to account for sidereal motion and to track atmospheric index of refraction variations. After this first set of delay lines, a beam-splitter is used to divide the light from each telescope; $\sim 70\%$ of the light is sent to an interferometric beam combiner that monitors a single fringe from any star in the field at rapid (10-20 millisecond) time-scales to measure fringe phase variations caused by the atmosphere, and provide feedback to the main delay lines. This process phase-stabilizes the other $\sim 30\%$ of the light \citep[a technique known as phase referencing, ][]{lc03}, which is sent to a second interferometric beam combiner that can add an additional variable delay of order $250\,{\rm \mu m}$ to the light from one telescope. This variable delay is modulated with a triangle waveform, scanning through interferograms from all stars within the subarcsecond field of view. These interferogram scans are the observables used for PHASES astrometry. \subsubsection{Data Reduction} Modifications to the data processing algorithm since the original report are given by \cite{Mut05_delequ} and have been incorporated in the current study. Interferogram templates are fit to each scan, forming a likelihood function of the separations of interferograms formed by components A and B. A grid of differential right ascension and declination is formed, and a $\chi^2$ likelihood surface is mapped onto this grid by converting delay separation to differential right ascension and declination. That $\chi^2$ surface is coadded over all the scans of $\mu$ Ori within the night (typically $\sim 1000$ scans or more). The deepest minimum in the $\chi^2$ surface corresponds to the binary separation, while the width of that minimum determines the uncertainty ellipse. Due to the oscillatory nature of the interferograms, other local minima can exist; these ``sidelobes'' are separated by the interferometer's resolution $\sim \lambda / B \sim 4\,{\rm mas}$ ($B$ is the separation between the telescopes and $\lambda$ the wavelength of starlight; for $\lambda = 2.2\,{\rm \mu m}$ and $B = 110\,{\rm m}$, $\lambda/B = 2\times 10^{-8}\,{\rm rad} \approx 4\,{\rm mas}$), an amount much larger than the width of an individual minimum. The SNR can be increased by coadding many scans and by earth-rotation synthesis, which smears out all but the true minimum, so that the true minimum can then be established. Only those measurements for which no sidelobes appear at the $4\sigma$ contour of the deepest minimum are used in orbit fitting. All measurements have been processed using this new data reduction pipeline. The measurements are listed in Table \ref{phasesDataMuOri}, in the ICRS 2000.0 reference frame.
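As an illustration only, this coadding and minimum search can be sketched as follows (hypothetical Python, not the actual PHASES pipeline; the per-scan $\chi^2$ surfaces are assumed to have already been evaluated on a common grid of differential right ascension and declination, and all names are illustrative):
\begin{verbatim}
import numpy as np

def locate_separation(chi2_surfaces, dra_grid, ddec_grid):
    """Coadd per-scan chi^2 surfaces and return the grid point
    of the deepest minimum (the nightly separation estimate).

    chi2_surfaces : sequence of 2-D arrays (one per scan), each
        already mapped from delay separation onto the common
        (dDec, dRA) grid.
    """
    total = np.sum(chi2_surfaces, axis=0)   # coadd ~10^3 scans
    j, i = np.unravel_index(np.argmin(total), total.shape)
    return dra_grid[i], ddec_grid[j], total

# The width of the coadded minimum sets the uncertainty ellipse;
# a night is used for orbit fitting only if no sidelobe minimum
# (spaced by roughly lambda/B ~ 4 mas) appears at the 4-sigma
# contour of the deepest minimum.
\end{verbatim}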
\subsubsection{Technique Upgrades} Data since mid-2006 have benefited from the use of an automatic alignment system and longitudinal dispersion compensator; the affected data points are noted in Table \ref{phasesDataMuOri}. These modifications reduced the throughput of the astrometry setup; to compensate, a 50 Hz phase tracking rate was sometimes used, whereas observations previous to these changes utilized 100 Hz tracking for monitoring the atmosphere. Drifts in optical alignment may result in variable pupil sampling at the interferometer apertures, changing the effective interferometric baseline. To minimize this potential systematic, a continuous realignment system has been developed. A red laser is coaligned with the starlight and reverse propagated through the interferometer. Four-percent reflective pellicle beamsplitters are placed near the foci of the interferometer telescopes to extract this tracer beam and redirect it to a camera where the pupil is reimaged. The angles of the first flat mirrors receiving incoming starlight in the beam combining lab are continually adjusted by a closed-loop feedback system to hold the laser spot fixed on the camera at the telescope. The path compensation for the geometric delay at PTI has been done with movable mirrors in air, which has a wavelength-dependent index of refraction. The fringe packets of astrophysical sources are dispersed by an amount that depends on the difference in air paths between arms of the interferometer; this changes the shape and overall location of the fringe packets. To compensate, two prisms are introduced in each of the interferometer's two arms. The pair in one arm is static. The prisms in the other pair are slid relative to each other along the prism slope to introduce a variable amount of glass dispersion whose wavelength dependence is opposite that of air to high order. This flattens the variability of delay versus wavelength. The system is calibrated by setting the telescope siderostats into a retroreflecting mode, using an internal white-light source to form interferograms, which are detected with a low-resolution spectrometer (5 elements across K band), and measuring the offsets between the interferograms as a function of prism location. During observations, the prism position is continuously changed with an open-loop control calculated from the locations of the delay lines. There are insufficient new data to establish the degree to which these instrumental changes might be reducing excess data scatter or to establish a relative weighting between data subsets. No large discontinuities in the orbital motions are seen between pre- and post-upgrade subsets for $\mu$ Ori or the other PHASES targets, suggesting the subsets are compatible. For the purposes of the current investigation, the PHASES observations are treated as a single data set with equal weighting on observations from before and after these upgrades. \subsubsection{The PHASES Astrometric Orbits} The existence of data scatter beyond the level estimated by formal uncertainties from the PHASES analysis algorithm was determined by model fitting the PHASES data alone. Model fitting was performed with standard minimization of the $\chi^2$ sum of squared residuals, slightly complicated by the two-dimensional nature of the uncertainty ellipses but still straightforward to carry out. The limited number and time span of the PHASES observations prevent an independent fit of that data set to a 4-body, 3-Keplerian model to determine potential noise excess. Thus the A-B period and the Aa-Ab and Ba-Bb periods, eccentricities, and angles of periastron passage were fixed at the values reported in F2002 (in the case of Ba-Bb, which had zero eccentricity in F2002, the angle of periastron passage is undefined and fixed at zero). The minimum $\chi^2$ exceeds the number of degrees of freedom. An excess noise factor of 1.73 is found, and the PHASES uncertainties reported in Table \ref{phasesDataMuOri} have been increased by this amount over the formal uncertainties. The rescaled (raw) median minor- and major-axis uncertainties are 20 (11) and 347 (200) $\mu{\rm as}$.
The rescaled (raw) mean minor- and major-axis uncertainties are 27 (16) and 668 (386) $\mu{\rm as}$.

\begin{deluxetable}{lllllllllllll}
\tabletypesize{\scriptsize}
\tablecolumns{13}
\tablewidth{0pc}
\tablecaption{PHASES data for $\mu$ Ori\label{phasesDataMuOri}}
\tablehead{
\colhead{HJD-2400000.5} & \colhead{$\delta$RA} & \colhead{$\delta$Dec} & \colhead{$\sigma_{\rm min}$} & \colhead{$\sigma_{\rm maj}$} & \colhead{$\phi_{\rm e}$} & \colhead{$\sigma_{\rm RA}$} & \colhead{$\sigma_{\rm Dec}$} & \colhead{$\frac{\sigma_{\rm RA, Dec}^2}{\sigma_{\rm RA}\sigma_{\rm Dec}}$} & \colhead{N} & \colhead{LDC} & \colhead{Align} & \colhead{Rate} \\
\colhead{} & \colhead{(mas)} & \colhead{(mas)} & \colhead{($\mu{\rm as}$)} & \colhead{($\mu{\rm as}$)} & \colhead{(deg)} & \colhead{($\mu{\rm as}$)} & \colhead{($\mu{\rm as}$)} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{(Hz)}
}
\startdata
53271.49964 & 59.1469 & 105.9933 & 9.1 & 487.1 & 146.70 & 407.2 & 267.6 & -0.99918 & 3092 & 0 & 0 & 100 \\
53285.47060 & 61.8963 & 110.1620 & 14.6 & 376.9 & 19.95 & 354.3 & 129.4 & 0.99273 & 2503 & 0 & 0 & 100 \\
53290.47919 & 62.4497 & 112.0044 & 39.6 & 2034.0 & 151.07 & 1780.2 & 984.6 & -0.99894 & 531 & 0 & 0 & 100 \\
53312.46161 & 66.0452 & 119.6805 & 5.2 & 108.2 & 158.79 & 100.9 & 39.5 & -0.98982 & 6840 & 0 & 0 & 100 \\
53334.41210 & 69.4569 & 127.2488 & 11.5 & 235.3 & 160.77 & 222.2 & 78.3 & -0.98775 & 2876 & 0 & 0 & 100 \\
53340.37952 & 70.2141 & 128.6951 & 15.0 & 165.2 & 157.64 & 152.9 & 64.4 & -0.96791 & 2056 & 0 & 0 & 100 \\
53341.34723 & 70.4709 & 129.3402 & 13.2 & 146.6 & 153.14 & 130.9 & 67.3 & -0.97539 & 3570 & 0 & 0 & 100 \\
53639.51295 & 104.4652 & 211.4488 & 16.3 & 226.6 & 149.61 & 195.6 & 115.5 & -0.98653 & 1204 & 0 & 0 & 100 \\
53663.47636 & 106.2444 & 217.1615 & 10.4 & 121.4 & 153.87 & 109.0 & 54.3 & -0.97702 & 1829 & 0 & 0 & 100 \\
53698.43902 & 110.2454 & 225.7804 & 46.8 & 894.1 & 37.76 & 707.5 & 548.8 & 0.99417 & 827 & 0 & 0 & 100 \\
53705.34654 & 108.6092 & 226.3868 & 56.3 & 1797.7 & 151.29 & 1576.9 & 864.9 & -0.99724 & 574 & 0 & 0 & 100 \\
53732.29696 & 111.8048 & 231.4902 & 20.0 & 187.4 & 156.29 & 171.8 & 77.5 & -0.95961 & 1103 & 0 & 0 & 100 \\
53753.23270 & 113.4891 & 235.7197 & 43.7 & 346.8 & 158.08 & 322.2 & 135.7 & -0.93783 & 621 & 0 & 0 & 100 \\
53789.18100 & 117.7579 & 244.2916 & 47.3 & 2294.6 & 36.86 & 1836.0 & 1377.0 & 0.99908 & 610 & 0 & 0 & 100 \\
54056.41922 & 131.6278 & 287.8511 & 35.5 & 279.5 & 159.04 & 261.3 & 105.3 & -0.93264 & 699 & 1 & 1 & 50 \\
54061.42434 & 132.3749 & 288.4474 & 30.7 & 580.4 & 161.71 & 551.1 & 184.5 & -0.98450 & 515 & 1 & 1 & 100 \\
54103.32199 & 134.6975 & 294.9159 & 44.7 & 1066.4 & 163.89 & 1024.6 & 299.0 & -0.98780 & 182 & 1 & 1 & 50 \\
\enddata
\tablecomments{ All quantities are in the ICRS 2000.0 reference frame. The uncertainty values presented in these data have all been scaled by a factor of 1.73 over the formal (internal) uncertainties within each given night. Column 1 is the heliocentric modified Julian date. Columns 2 and 3 are the differential right ascension and declination between A and B, in milli-arcseconds. Columns 4 and 5 are the $1\sigma$ uncertainties in the minor and major axes of the measurement uncertainty ellipse, in micro-arcseconds.
Column 6, $\phi_{\rm e}$, is the angle between the major axis of the uncertainty ellipse and the right ascension axis, measured from increasing differential right ascension through increasing differential declination (the position angle of the uncertainty ellipse's orientation is $90-\phi_{\rm e}$). Columns 7 and 8 are the projected uncertainties in the right ascension and declination axes, in micro-arcseconds, while column 9 is the covariance between these. Column 10 is the number of scans taken during a given night. Column 11 is 1 if the longitudinal dispersion compensator was in use, 0 otherwise. Column 12 is 1 if the autoaligner was in use, 0 otherwise. Column 13 represents the tracking frequency of the phase-referencing camera. The quadrant was chosen such that the component with the larger fringe contrast is designated the primary (contrast is a combination of source luminosity and interferometric visibility). }
\end{deluxetable}

\subsection{Previous Measurements}

Previous differential astrometry measurements of $\mu$ Ori are tabulated in Table 5 of F2002. These have been included in the current fit, with weightings identical to those assigned by that investigation, though it is noted that the text contains a typographical error, and the $\rho$ unit uncertainty $\sigma_{\rho}$ should be 0.024, rather than 0.0024 mas. The time span of these measurements is much longer than that of the PHASES program and aids in solving the long period A-B orbit, which also lifts potential fit parameter degeneracies between that orbit and those of the short period subsystems. Measurements marked as $3\sigma$ outliers by that investigation are omitted, resulting in 80 measurements each of separation and position angle being used for fitting. Ten new measurements have been published since that investigation and are listed in Table \ref{muOriNewPrev} with weights computed with the same formula as used in F2002. Two of these measurements are found to be $3\sigma$ outliers. In total, there are 88 measurements of separation and position angle used in fitting.

\begin{deluxetable}{llllll}
\tablecolumns{6}
\tablewidth{0pc}
\tablecaption{New Non-PHASES Astrometry for $\mu$ Ori\label{muOriNewPrev}}
\tablehead{
\colhead{Besselian Year} & \colhead{$\rho$} & \colhead{$\theta$} & \colhead{Weight} & \colhead{Outlier} & \colhead{Reference}
}
\startdata
1991.8101 & 0.330 & 31.8 & 0.1 & 1 & \cite{TYC2002} \\
1996.8986 & 0.306 & 13.4 & 8.7 & 0 & \cite{Hor2001b} \\
1999.0153 & 0.200 & 14.8 & 10.1 & 1 & \cite{hor02} \\
1999.0153 & 0.196 & 13.3 & 9.7 & 1 & \cite{hor02} \\
1999.0153 & 0.203 & 14.9 & 10.4 & 1 & \cite{hor02} \\
1999.8915 & 0.150 & 11.7 & 5.8 & 1 & \cite{hor02} \\
1999.8915 & 0.145 & 12.5 & 5.5 & 1 & \cite{hor02} \\
1999.8915 & 0.154 & 11.3 & 6.1 & 1 & \cite{hor02} \\
2000.7653 & 0.081 & 359.4 & 2.0 & 0 & \cite{hor02} \\
2005.1331 & 0.179 & 26.4 & 0.2 & 1 & \cite{Sca2007a} \\
\enddata
\tablecomments{ The 10 new astrometry measurements published since F2002 for $\mu$ Ori. Column 1 is the epoch of observation in years, column 2 is the A-B separation in arcseconds, column 3 is the position angle east of north in degrees, and column 4 is the measurement weight on the same scale as F2002. ($\delta$RA $=\rho \sin \theta$, $\delta$Dec $=\rho \cos \theta$.) Column 5 is 0 if the measurement is a $3\sigma$ outlier not used in fitting, 1 otherwise. Column 6 is the original work where the measurement was published.}
\end{deluxetable}

F2002 also present radial velocity observations of components Aa, Ba, and Bb.
Those measurements are included in the present fit, with weightings as reported in Tables 2, 3, and 4 of that paper. Measurements marked as $3\sigma$ outliers by that investigation have not been included in the present analysis. In total, 442 velocities---220 for Aa and 111 for each of Ba and Bb---are used in fitting. \section{Orbital Models} The apparent motions of the centers-of-light (photocenters) of the ${\rm A=Aa-Ab}$ and ${\rm B=Ba-Bb}$ subsystems relative to each other are given by the model \begin{eqnarray}\label{muOri3DorbitEquation} \overrightarrow{y_{\rm{obs}}} &=& \overrightarrow{r_{\rm{A-B}}} \nonumber\\ &+& \frac{M_{\rm{Ab}}/M_{\rm{Aa}} - L_{\rm{Ab}}/L_{\rm{Aa}}}{\left(1+M_{\rm{Ab}}/M_{\rm{Aa}}\right)\left(1+L_{\rm{Ab}}/L_{\rm{Aa}}\right)}\overrightarrow{r_{\rm{Aa-Ab}}}\nonumber\\ &-& \frac{M_{\rm{Bb}}/M_{\rm{Ba}} - L_{\rm{Bb}}/L_{\rm{Ba}}}{\left(1+M_{\rm{Bb}}/M_{\rm{Ba}}\right)\left(1+L_{\rm{Bb}}/L_{\rm{Ba}}\right)}\overrightarrow{r_{\rm{Ba-Bb}}} \end{eqnarray} corresponding to a four-body hierarchical dynamical system (HDS). The quantities $M$ are component masses and $L$ are component luminosities, and each of the summed vectors is determined by a 2-body Keplerian model. This model is used to fit the astrometric data; note that the total masses from the Aa-Ab and Ba-Bb orbits also show up as component masses for the A-B orbit, linking them, and that the mass ratios and luminosity ratios appear as additional parameters, degenerate with each other. The radial velocities are fit by a simple superposition of individual Keplerians; these determine mass ratios, and the luminosity ratios become nondegenerate parameters. The luminosity ratios are primarily constrained by the K-band PHASES data; the other astrometric data are not precise enough to detect the subsystem motions. The combined simultaneous fit to all data sets has 26 free parameters and 626 degrees of freedom. The parameters used are listed in the top half of Table \ref{muOriOrbitModels} with their associated best-fit values and $1\sigma$ uncertainties. It should be noted that while the Ba-Bb eccentricity has been allowed to vary as a free parameter, the best fit value is consistent with zero and could have been fixed; the other parameters are not changed significantly by doing so. Quantities of interest derived from those parameters are listed in the second half of that table, with corresponding uncertainties derived from first-order uncertainty propagation. The apparent center-of-light wobbles of the Aa-Ab and Ba-Bb subsystems are plotted in Figure \ref{MuOriOrbit}; the A-B and RV orbits were plotted in F2002 and are not significantly different in the present model. Including PHASES measurements in the fit introduces the ability to evaluate the inclination and luminosity ratio of the Aa-Ab system and angles of the nodes of the Aa-Ab and Ba-Bb systems, quantities that were entirely unconstrained in the F2002 study. The A-B angular parameters have much smaller uncertainties than in F2002 (the $\Omega_{\rm AB}$, $i_{\rm AB}$, and $\omega_{\rm AB}$ uncertainties are reduced by $13\times$, $6\times$, and $5\times$, respectively). Uncertainties in the A-B period, eccentricity, and epoch of periastron passage are improved by a factor of 2 or more. Most other fit parameters are constrained only marginally better than in F2002. 
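As a simple consistency check, the photocenter coefficients in eq.~\ref{muOri3DorbitEquation} can be evaluated with the preferred-solution values from Table \ref{muOriOrbitModels}; for the Ba-Bb subsystem,
\begin{displaymath}
a_{\rm BaBb,\,COL} = \frac{0.9764 - 0.765}{\left(1 + 0.9764\right)\left(1 + 0.765\right)} \times 1.688\,{\rm mas} \approx 102\,\mu{\rm as},
\end{displaymath}
and similarly $a_{\rm AaAb,\,COL} \approx \left(0.274 - 0\right)/\left(1.274 \times 1\right) \times 1.661\,{\rm mas} \approx 358\,\mu{\rm as}$ for the preferred Aa-Ab solution, reproducing the fitted center-of-light semimajor axes listed in that table.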
\begin{deluxetable}{lllllllll} \tabletypesize{\tiny} \tablecolumns{9} \tablewidth{0pc} \tablecaption{Orbital parameters for $\mu$ Ori\label{muOriOrbitModels}} \tablehead{ \colhead{} & \multicolumn{2}{c}{$L_{\rm Aa} > L_{\rm Ab}$,} & \multicolumn{2}{c}{$L_{\rm Aa} > L_{\rm Ab}$,} & \multicolumn{2}{c}{$L_{\rm Aa} < L_{\rm Ab}$,} & \multicolumn{2}{c}{$L_{\rm Aa} < L_{\rm Ab}$,} \\ \colhead{Parameter} & \multicolumn{2}{c}{$L_{\rm Ba} > L_{\rm Bb}$} & \multicolumn{2}{c}{$L_{\rm Ba} < L_{\rm Bb}$} & \multicolumn{2}{c}{$L_{\rm Ba} > L_{\rm Bb}$} & \multicolumn{2}{c}{$L_{\rm Ba} < L_{\rm Bb}$} \\ } \startdata $\chi^2$ & \multicolumn{2}{c}{723.86} & \multicolumn{2}{c}{723.86} & \multicolumn{2}{c}{723.58} & \multicolumn{2}{c}{723.58} \\ $P_{\rm AB}$ (days) & $6813.8 $ & $\pm 1.2$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $e_{\rm AB}$ & $0.7410 $ & $\pm 0.0011$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $i_{\rm AB}$ (degrees) & $96.028 $ & $\pm 0.028$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $\omega_{\rm AB}$ (degrees) & $36.712 $ & $\pm 0.066$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $T_{\rm AB}$ (MHJD) & $46090.7 $ & $\pm 1.0$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $\Omega_{\rm AB}$ (degrees) & $204.877 $ & $\pm 0.011$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $P_{\rm AaAb}$ (days) & $4.4475849 $ & $\pm 1.2\times 10^{-6}$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $e_{\rm AaAb}$ & $0.0037 $ & $\pm 0.0014$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $i_{\rm AaAb}$ (degrees) & $47.1 $ & $\pm 9.0$ & \nodata & \nodata & $50.0 $ & $\pm 8.1$ & \nodata & \nodata \\ $\omega_{\rm AaAb}$ (degrees) & $304 $ & $\pm 21$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $T_{\rm AaAb}$ (MHJD) & $43739.69 $ & $\pm 0.26$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $\Omega_{\rm AaAb}$ (degrees) & $50.5 $ & $\pm 3.7$ & \nodata & \nodata & $231.7 $ & $\pm 3.8$ & \nodata & \nodata \\ $P_{\rm BaBb}$ (days) & $4.7835349 $ & $\pm 3.0\times 10^{-6}$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $e_{\rm BaBb}$ & $0.0016 $ & $\pm 0.0014$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $i_{\rm BaBb}$ (degrees) & $110.71 $ & $\pm 0.73$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $\omega_{\rm BaBb}$ (degrees) & $217 $ & $\pm 47$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $T_{\rm BaBb}$ (MHJD) & $43746.40 $ & $\pm 0.63$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $\Omega_{\rm BaBb}$ (degrees) & $111.3 $ & $\pm 3.9$ & $291.3 $ & $\pm 3.9$ & $111.3 $ & $\pm 4.0$ & $291.3 $ & $\pm 4.0$ \\ $M_{\rm A}$ ($M_\odot$) & $3.030 $ & $\pm 0.069$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $M_{\rm B}$ ($M_\odot$) & $2.746 $ & $\pm 0.038$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $M_{\rm Ab}/M_{\rm Aa}$ & $0.274 $ & $\pm 0.051$ & \nodata & \nodata & $0.259 $ & $\pm 0.039$ & \nodata & \nodata \\ $M_{\rm Bb}/M_{\rm Ba}$ & $0.9764 $ & $\pm 0.0022$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $L_{\rm Ab}/L_{\rm Aa}$ & $0 $ & $\pm 0.040$ & \nodata & \nodata & $0.738 $ & $\pm 0.061$ & \nodata & \nodata \\ $L_{\rm Bb}/L_{\rm Ba}$ & $0.765 $ & $\pm 0.055$ & $1.246 $ & $\pm 0.089$ & $0.773 $ & $\pm 0.055$ & $1.233 $ & $\pm 0.088$ \\ $d$ (parsecs) & $46.11 $ & $\pm 0.28$ & \nodata & \nodata & \nodata & \nodata 
& \nodata & \nodata \\
$V_{0}$ (${\rm km\,s^{-1}}$) & $42.548 $ & $\pm 0.027$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\
\tableline
$M_{\rm Aa}$ ($M_\odot$) & $2.38 $ & $\pm 0.11 $ & \nodata & \nodata & $2.408 $ & $\pm 0.092 $ & \nodata & \nodata \\
$M_{\rm Ab}$ ($M_\odot$) & $0.652 $ & $\pm 0.097 $ & \nodata & \nodata & $0.623 $ & $\pm 0.075 $ & \nodata & \nodata \\
$M_{\rm Ba}$ ($M_\odot$) & $1.389 $ & $\pm 0.019 $ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\
$M_{\rm Bb}$ ($M_\odot$) & $1.356 $ & $\pm 0.019 $ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\
$\Phi_{\rm AB-AaAb}$ (degrees) & $136.7 $ & $\pm 8.3 $ & \nodata & \nodata & $52.2 $ & $\pm 6.1 $ & \nodata & \nodata \\
$\Phi_{\rm AB-BaBb}$ (degrees) & $91.2 $ & $\pm 3.6 $ & $84.5 $ & $\pm 3.6 $ & $91.2 $ & $\pm 3.8 $ & $84.5 $ & $\pm 3.8 $ \\
$\Phi_{\rm AaAb-BaBb}$ (degrees) & $84.6 $ & $\pm 4.9 $ & $125.1 $ & $\pm 6.0 $ & $126.2 $ & $\pm 5.9 $ & $82.2 $ & $\pm 4.8 $ \\
$a_{\rm AB}$ (mas) & $273.7 $ & $\pm 2.1$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\
$a_{\rm AB}$ (AU) & $12.620 $ & $\pm 0.057$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\
$a_{\rm AaAb, COL}$ ($\mu{\rm as}$) & $358 $ & $\pm 84$ & \nodata & \nodata & $364 $ & $\pm 53$ & \nodata & \nodata \\
$a_{\rm AaAb}$ (mas) & $1.661 $ & $\pm 0.016$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\
$a_{\rm AaAb}$ (AU) & $0.07659 $ & $\pm 0.00058$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\
$a_{\rm BaBb, COL}$ ($\mu{\rm as}$) & $102 $ & $\pm 30$ & \nodata & \nodata & $98 $ & $\pm 30$ & \nodata & \nodata \\
$a_{\rm BaBb}$ (mas) & $1.688 $ & $\pm 0.013$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\
$a_{\rm BaBb}$ (AU) & $0.07780 $ & $\pm 0.00036$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\
$\pi$ (mas) & $21.69 $ & $\pm 0.13$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\
$K_{\rm Aa}$ & $1.03 $ & $\pm 0.26$ & \nodata & \nodata & $1.64 $ & $\pm 0.26$ & \nodata & \nodata \\
$K_{\rm Ab}$ & \multicolumn{2}{c}{$ > 4.58$} & \multicolumn{2}{c}{\nodata} & $1.96 $ & $\pm 0.27$ & \nodata & \nodata \\
$K_{\rm Ba}$ & $1.72 $ & $\pm 0.26$ & $1.99 $ & $\pm 0.26$ & $1.73 $ & $\pm 0.26$ & $1.98 $ & $\pm 0.26$ \\
$K_{\rm Bb}$ & $2.02 $ & $\pm 0.26$ & $1.75 $ & $\pm 0.26$ & $2.01 $ & $\pm 0.26$ & $1.75 $ & $\pm 0.26$ \\
$L_{K,\,{\rm Aa}}$ & $8.3 $ & $\pm 2.0$ & \nodata & \nodata & $4.8 $ & $\pm 1.2$ & \nodata & \nodata \\
$L_{K,\,{\rm Ab}}$ & $0 $ & $\pm 0.33 $ & \nodata & \nodata & $3.52 $ & $\pm 0.86$ & \nodata & \nodata \\
$L_{K,\,{\rm Ba}}$ & $4.4 $ & $\pm 1.1$ & $3.45 $ & $\pm 0.84$ & $4.4 $ & $\pm 1.1$ & $3.47 $ & $\pm 0.84$ \\
$L_{K,\,{\rm Bb}}$ & $3.35 $ & $\pm 0.82$ & $4.3 $ & $\pm 1.0$ & $3.38 $ & $\pm 0.82$ & $4.3 $ & $\pm 1.0$ \\
\enddata
\tablecomments{ Orbital parameters for $\mu$ Ori. In the second, third, and fourth solutions, ellipses indicate a parameter that changes by less than two units in the last reported digit from the previous model. In the combined fits, all parameter uncertainties have been increased by a factor of $\sqrt{\chi_r^2} = 1.08$ (though the $\chi_r^2$ of the combined fit is artificial due to rescaling the uncertainties of the individual data sets, this reflects the degree to which the data sets agree with each other).
The first solution is strongly preferred as it produces masses and luminosities that are positively correlated (the more massive component is also the more luminous); the second is also possible because the stars Ba and Bb are very similar. The third and fourth solutions require an unlikely luminosity for component Ab, given its mass, and are not preferred. $a_{\rm COL}$ is the semimajor axis of the motion of the center-of-light of a subsystem, at K-band. $L_{\rm X}/L_{\rm Y}$ is the K-band luminosity ratio between components X and Y. $K$ is the K-band absolute magnitude, and $L_K$ is the K-band luminosity in solar units. For the first two solutions, the best-fit solution yields an infinite $K_{\rm Ab}$; a lower limit is determined by setting $L_{\rm Ab}/L_{\rm Aa}$ to the upper limit of its $1\sigma$ uncertainty range. }
\end{deluxetable}

\begin{figure}[]
\plottwo{f1a.eps}{f1b.eps}
\caption[The Orbit of $\mu$ Ori]
{ \label{MuOriOrbit}
The astrometric orbits of $\mu$ Ori Aa-Ab and Ba-Bb, phase-wrapped about their respective orbital periods. Phase zero is at the epoch of periastron passage ($T$), and each plot is repeated for two cycles (and each measurement is plotted twice) to allow for continuity at all parts of the graph. In both cases, the motions of the A-B system and of the other subsystem have been removed. The projection axis shown for each is 159 degrees East of North (equivalent to position angle 291 degrees), well aligned with the minor axis of many PHASES observations. The plotted uncertainties are those projected along this axis, and have been increased by a factor of 1.73 over the formal uncertainties, to reflect excess noise within the PHASES set. On the left is the motion of Aa-Ab; for clarity, only those observations with rescaled and projected uncertainties less than 200 $\mu{\rm as}$ have been plotted; on the right is the motion of Ba-Bb, for which the cutoff was at 30 $\mu{\rm as}$.
}
\end{figure}

\subsection{Relative Orbital Inclinations}

In systems with three or more stars, studying the system coplanarity is of interest for understanding the formation and evolution of multiples \citep{Sterzik2002}. To determine this without ambiguity, one must have both visual and RV orbital solutions for pairs of interest. Previously, this was available only in six triples \citep[for a listing, see][]{Mut06_v819her}. Mutual inclination measurements have been rare because of the observational challenges these systems present---RV signals are largest for compact pairs of stars, whereas astrometry prefers wider pairs. The ``wide'' pair must be studied with RV and thus have an orbital period (and corresponding separation) as short as the two-component binaries that are already challenging to visual studies. The ``narrow'' pair is even smaller. The mutual inclination between two orbits is given by:
\begin{equation}\label{KapPegMutualInclination}
\cos \Phi = \cos i_1 \cos i_2 + \sin i_1 \sin i_2 \cos\left(\Omega_1 - \Omega_2\right)
\end{equation}
\noindent where $i_1$ and $i_2$ are the orbital inclinations and $\Omega_1$ and $\Omega_2$ are the longitudes of the ascending nodes. If only velocities are available for one system, the orientation of that orbit is unknown (even if it is eclipsing, the longitude of the ascending node is unknown). If velocities are unavailable for a given orbit, there is a degeneracy in which node is ascending---two values separated by 180 degrees are possible. This gives two degenerate solutions for the mutual inclination (not necessarily separated by 180 degrees).
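To make this concrete, evaluating eq.~\ref{KapPegMutualInclination} with the preferred $\mu$ Ori AB and Aa-Ab elements from Table \ref{muOriOrbitModels} gives
\begin{displaymath}
\cos \Phi_{\rm AB-AaAb} = \cos 96.03^\circ \cos 47.1^\circ + \sin 96.03^\circ \sin 47.1^\circ \cos\left(204.88^\circ - 50.5^\circ\right) \approx -0.73,
\end{displaymath}
i.e., $\Phi_{\rm AB-AaAb} \approx 137$ degrees; the alternative node solution ($\Omega_{\rm AaAb} = 231.7$ degrees with $i_{\rm AaAb} = 50.0$ degrees) instead gives $\Phi_{\rm AB-AaAb} \approx 52$ degrees, illustrating the degeneracy (both values appear in Table \ref{muOriOrbitModels}).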
Even when a center-of-light astrometric orbit is available for the narrow pair, there can be a degeneracy between which node is ascending and the luminosity ratio. Having found one possible luminosity ratio $L_{\rm Ab}/L_{\rm Aa} = L_1$, it can be shown that the other possible solution, corresponding to changing the ascending node by 180 degrees, is given by
\begin{equation}\label{eqLum}
L_2 = \frac{2R+RL_1-L_1}{1+2L_1-R}
\end{equation}
\noindent where $R$ is the mass ratio $M_{\rm Ab}/M_{\rm Aa}$. In previous studies, supporting data have been available to lift that degeneracy. For example, in the V819 Her system \citep{Mut06_v819her}, the two possible luminosity ratios were $0.26$ and $1.89$. The eclipsing nature of the Ba-Bb pair lifted that degeneracy because it helped establish that the luminosity ratio is much less than 1. Similarly, in the $\kappa$ Peg system \citep{Mut06_kappeg}, the spectra used for RV also show that component Bb is much fainter than Ba, again lifting the degeneracy (luminosity ratios of either nearly zero or 1.9 were both possible). For a quadruple system, there are as many as four degenerate fit solutions.

For $\mu$ Ori, a global minimum $\chi^2$ is found with longitude of the ascending node $\Omega_{\rm AaAb} = 231.7 \pm 3.8$ degrees and $L_{\rm Ab}/L_{\rm Aa} = 0.738 \pm 0.061$. The alternative pair of these parameters solving eq.~\ref{eqLum} would force the luminosity ratio to a negative value, though close to zero within uncertainties (with $R = 0.259$ and $L_1 = 0.738$, eq.~\ref{eqLum} gives $L_2 \approx -0.01$). However, searching for solutions with nonnegative luminosity ratios near zero yields a fit solution with only slightly larger $\chi^2$, and the node at roughly 180 degrees difference ($\Omega_{\rm AaAb} = 50.5 \pm 3.7$ and $L_{\rm Ab}/L_{\rm Aa} = 0 \pm 0.040$). All parameters other than the node angle and luminosity ratio vary between the two models by amounts less than the fit uncertainties. The two solutions find $M_{\rm Ab}/M_{\rm Aa} = 0.259 \pm 0.039$ or $0.274 \pm 0.051$, respectively; given the rough scaling $L \propto M^4$ \citep{MassLuminosity}, it is very likely that the larger luminosity ratio is incorrect. The larger luminosity ratio would also imply Ab is as bright as Ba or Bb. However, component Ab is not observed in the spectrum while Ba and Bb are. This suggests that Ab is faint, though the lack of Ab lines could also be explained by postulating that Ab is rapidly rotating. Given that the other stars are not rapid rotators, however, there is little evidence to support Ab as a rapid rotator. It is concluded that the luminosity ratio near zero is {\em strongly} preferred, despite the slightly worse $\chi^2$ fit. Both fits are reported in Table \ref{muOriOrbitModels}, but the rest of the discussion in this paper refers only to the preferred solution for Aa-Ab. This degeneracy can be fully lifted by a single-epoch image with a closure-phase capable interferometer with sufficient angular resolution \citep[such as the Navy Prototype Optical Interferometer,][]{arm98}.

For each of the Aa-Ab solutions, there exist two solutions for the Ba-Bb pair. In these cases, no negative luminosity ratios are found; the degeneracy is perfect and the $\chi^2$ values of the fits are identical. $\Omega_{\rm BaBb}$ differs by 180 degrees in the two orbits, and the luminosity ratio $L_{\rm Bb}/L_{\rm Ba}$ switches between being larger (at $\Omega_{\rm BaBb} = 291$ degrees) or smaller (at 111 degrees) than unity; all other parameters remain unchanged.
Both solutions are near unity, and the stars have very similar masses ($M_{\rm Bb}/M_{\rm Ba} = 0.9764 \pm 0.0022$). The solution for which Bb is less luminous than Ba is slightly preferred because it is consistent with the mass ratio, but not as conclusively as in the Aa-Ab case, where the differences between the stars are more significant; it is conceivable that the other solution is correct. Thus, both possibilities are considered in the remainder of this paper.

\subsection{Evidence for Kozai Cycles with Tidal Friction?}

Of particular interest is the potential for Kozai oscillations between orbital inclination and eccentricity in the narrow pairs \citep{Kozai1962}, which can affect the orbital evolution of the system. These occur independently of distances or component masses, with the only requirement being that the mutual inclination is between 39.2 and $180-39.2=140.8$ degrees. Other effects that cause precession can increase the value of this critical angle. \cite{Fabrycky2007} predict a buildup of mutual inclinations near 40 and 140 degrees by the combined effects of Kozai Cycles with Tidal Friction (KCTF) for triples whose short-period subsystems have periods between 3 and 10 days. The mutual inclination of $\mu$ Ori AB-AaAb is near 140 degrees, leading one to consider whether this trend is starting to be seen. The systems with unambiguous mutual inclinations break down as follows:

\begin{itemize}
\item {\bf Five systems are outside of the 3-10 day inner period range.} These systems do not meet the criteria to be included in testing the buildup prediction:
\begin{enumerate}
\item V819 Her ($\Phi = 26.3\pm 1.5$ degrees, 2.23 d; see \S \ref{updates}),
\item Algol \citep[$\Phi = 98.8\pm 4.9$ degrees, 2.9 d; ][]{les93, PanAlgol},
\item $\eta$ Vir \citep[$\Phi = 30.8\pm 1.3$ degrees, 72 d; ][]{hum03},
\item $\xi$ UMa ABC \citep[$\Phi = 132.1$ degrees, 670 d; ][]{Heintz1996}, and
\item $\epsilon$ Hya ABC \citep[$\Phi = 39.4$ degrees, 5500 d; ][]{Heintz1996}.
\end{enumerate}
These fall outside the 3-10 day range of inner-binary periods applicable to the prediction in \cite{Fabrycky2007}. However, it is worth noting the mutual inclinations of $\xi$ UMa and $\epsilon$ Hya {\em are} near 140 and 40 degrees respectively, and in V819 Her and $\eta$ Vir the values are outside the $40-140$ degrees range, so neither would have been predicted to undergo Kozai cycles or KCTF. Algol has a nearly perpendicular alignment, though the dynamics of Algol are different due to quadrupole distortions in the semidetached stars; Algol's alignment has been explained by \cite{eggleton2001}.
\item {\bf No systems are outside the 40-140 degrees range, while also in the 3-10 day inner period range.}
\item {\bf Two systems are between 40 and 140 degrees and in the 3-10 day inner period range.} These systems would not support the KCTF-driven buildup near 40 and 140 degrees:
\begin{enumerate}
\item $\mu$ Ori AB-BaBb ($\Phi = 91.2 \pm 3.6$ or $84.5 \pm 3.6$ degrees are possible, 4.78 d) and
\item 88 Tau AaAb-Ab1Ab2 \citep[$\Phi = 82.0 \pm 3.3$ or $58 \pm 3.3$ degrees are possible, 7.89 d; ][]{lane88Tau2007_draft}.
\end{enumerate}
While mutual inclination degeneracies continue to exist in both systems, all possible values are in this range.
\item {\bf Three systems are near 40 or 140 degrees, and in the 3-10 day inner period range.} These systems would appear to support the KCTF prediction:
\begin{enumerate}
\item $\mu$ Ori AB-AaAb ($\Phi = 136.7 \pm 8.3$ degrees, 4.45 d),
\item 88 Tau AaAb-Aa1Aa2 \citep[$\Phi = 143.3 \pm 2.5$ degrees, 3.57 d; ][]{lane88Tau2007_draft}, and
\item $\kappa$ Peg ($\Phi = 43.4\pm 3.9$ degrees, 5.97 d; see \S \ref{updates}).
\end{enumerate}
\end{itemize}

In total, 3 of the 5 systems meeting the criteria for testing the KCTF prediction do appear near the peak points of 40 and 140 degrees. Following equations 1, 22, and 35 in \cite{Fabrycky2007}, in the presence of general relativity (GR) precession one expects the critical angles for $\mu$ Ori AB-AaAb and $\kappa$ Peg to be increased from 39.2 degrees to $\sim 68$ ($\tau \dot{\omega}_{GR} = 2.3$) and $\sim 54$ degrees ($\tau \dot{\omega}_{GR} = 1.3$), respectively, while for 88 Tau AaAb-Aa1Aa2 GR precession dominates no matter the inclination ($\tau \dot{\omega}_{GR} = 17$). Thus, Kozai oscillations are suppressed by precession in these systems' current states. However, it is possible these were present at earlier stages in the systems' histories and their current configurations were still reached through KCTF---the inner binaries may have originally been more widely separated, in which case GR effects would have been reduced. Alternatively, $\mu$ Ori AB-BaBb and 88 Tau AaAb-Ab1Ab2 both lie well within the range of predicted Kozai cycles, even including GR precession (which raises the critical angles to $\sim 58$ ($\tau \dot{\omega}_{GR} = 1.6$) and $\sim 50$ degrees ($\tau \dot{\omega}_{GR} = 0.9$), respectively).

Several more systems with double visual orbits but lacking RV for at least one subsystem component are listed by \cite{Sterzik2002}. Two more (HD 150680 and HD 214608) are mentioned by \cite{orlov2000} and one more (HD 108500) by \cite{orlov2005}. Of these, the inner pair in HD 150680 is doubtful, and listing HD 214608 as having a double visual orbit appears to be in error. In the paper cited by \cite{orlov2000} for HD 214608, \cite{duq1987} reveal it to be a double spectroscopic system, but point out that the visual elements of the inner pair are unconstrained. The value of 150 degrees for the node seems to have been taken as nominal from the outer system (whose true ascending node is 180 degrees different). Because the nodes cannot be distinguished in the visual-only pairs, two values of the mutual inclinations are equally possible for each. Furthermore, all have inner systems with periods longer than 300 days. Thus, these cannot provide further direction on testing the KCTF prediction.

\subsection{Masses and Distance}

The masses of Ba and Bb are each determined to $1.4\%$ and that of Aa to $5\%$, while the lowest-mass member, Ab, is uncertain at the $15\%$ level. The individual masses of Aa and Ab have not been previously determined; this study enables the exploration of the natures of those stars. Both are members of star classes of interest: Aa is of spectral type Am, and Ab has a mass in the range of late K dwarfs. Through their physical association, it can be assumed both are co-evolved with Ba and Bb, each within the mass range for which stellar models have been well calibrated through observation. The masses of Ba and Bb are determined only slightly better (less than a factor of 2 improvement) than in the previous study by F2002. Similarly, the distance is determined to $0.6\%$, a slight improvement.
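For reference, the fitted distance corresponds directly to the orbital parallax quoted in Table \ref{muOriOrbitModels} via
\begin{displaymath}
\pi = \frac{1000}{d/{\rm pc}}\,{\rm mas} = \frac{1000}{46.11}\,{\rm mas} \approx 21.69\,{\rm mas},
\end{displaymath}
a worked conversion included here for clarity.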
The {\em Hipparcos} \citep{hipcat} based parallax values discussed by \cite{Soder1999} ($21.5\pm 0.8$ mas in the original evaluation, revised to $20.8 \pm 0.9$ mas when the binary nature was considered) are consistent with, but less well constrained than, the current value of $21.69 \pm 0.13$ mas.

\subsection{Component Luminosities}

The 2MASS K-band magnitude for $\mu$ Ori is $m_{\rm Total} = 3.637\pm 0.260$ \citep{2MASS}. F2002 gives the difference between the luminosities of A and B in several bands. Unfortunately, none of these were taken near the K band (2.2 $\mu$m) where PTI operates. However, a Keck adaptive optics image of $\mu$ Ori was obtained on MJD 53227 with a narrow band ${\rm H_2}$ 2-1 filter centered at 2.2622 microns. The A-B differential magnitude in this band is $m_A - m_B = \Delta m_{AB} = -0.073 \pm 0.007$ magnitudes; this measurement is reported for the first time here. The combined orbital fit provides the system distance $d = 46.11 \pm 0.28$ parsecs (corresponding to a distance modulus of $5\log_{10}\left(d/10\,{\rm pc}\right) \approx 3.32$ magnitudes) and the luminosity ratios. The combined set of $m_{\rm Total}$, $\Delta m_{AB}$, $d$, $L_{\rm Ab}/L_{\rm Aa}$, and $L_{\rm Bb}/L_{\rm Ba}$ determines the component luminosities. Using first-order error propagation, the K-band luminosities are $L_{K,\,{\rm Aa}} = 8.3 \pm 2.0$ solar, and less than a third solar for Ab. Components Ba and Bb have K-band luminosities of either $L_{K,\,{\rm Ba}} = 4.4 \pm 1.1$ and $L_{K,\,{\rm Bb}} = 3.35 \pm 0.82$ or $L_{K,\,{\rm Ba}} = 3.45 \pm 0.84$ and $L_{K,\,{\rm Bb}} = 4.3 \pm 1.0$ solar. Absolute magnitudes are also given in Table \ref{muOriOrbitModels}. In each case, the uncertainty in the apparent magnitude $m_{\rm Total}$ dominates.

\subsection{System Age and Evolutionary Tracks}

The masses and absolute K-band magnitudes for the components in this system can be compared to the stellar evolution models from \cite{Girardi2002}. As in F2002, an abundance of $Z=0.02$ is assumed. Figure \ref{MuOriIsochrones} shows the mass vs.~K-band magnitudes for several isochrones downloaded from http://pleiadi.pd.astro.it/ and the values derived for the components of $\mu$ Ori. As the most massive component, the properties of Aa provide the strongest constraints on age, being most consistent with isochrones in the age range of $10^8-10^{8.5}$ years. This is not entirely consistent with the properties of Ba and Bb, though close. Of course, if KCTF has played a significant role in the orbital evolution of this system, one wonders whether stellar evolution models for single stars are really applicable to these stars. One would anticipate the evolution of Aa to be affected by tidal forces because it is part of the subsystem near the predicted 140 degree ``pile-up''. Thus, one would rely more on Ba and Bb for system age determination, indicating an age over $10^9$ years.

\begin{figure}[]
\epsscale{0.5}
\plotone{f2.eps}
\caption[$\mu$ Ori Isochrones]
{ \label{MuOriIsochrones}
Stellar evolution models predicting isochrones for the elements of the $\mu$ Ori system show disagreement between component Aa and components Ba and Bb. Curve labels give values of ${\rm \log (Age/years)}$. Component Aa provides the most leverage for determining the system age, though its evolution may have been altered by tidal friction.
}
\end{figure}

\section{Updated Orbits}\label{updates}

During the course of this investigation, a sign error was found in the analysis software that was used to compute orbital solutions for $\delta$ Equulei \citep{Mut05_delequ}, $\kappa$ Pegasi \citep{Mut06_kappeg}, and V819 Herculis \citep{Mut06_v819her}.
This error affected fits to the radial velocity data only, with the result that the descending node was misidentified as the ascending one, and the angle of periastron passage was off by 180 degrees. Because the software was self-consistent, this has no impact on the mutual inclinations derived. Additionally, the finite travel time of light across the wide orbit is included in the analysis; this correction was affected by the same sign error, but it has only a small impact on those models. Both errors have since been corrected.

Twenty-three new observations of $\delta$ Equ and 11 of $\kappa$ Peg have been collected since those initial investigations and are presented in Table \ref{phasesDataOthers}. Also presented in Table \ref{phasesDataOthers} is the complete set of 34 V819 Her PHASES observations with the 10 measurements taken during eclipses marked; the previous investigation used less precise methods for predicting eclipse times, so the flagged measurements have changed. Measurements made during eclipse are not used in fitting. The new analysis makes use of the V819 Her Ba-Bb inclination constraint derived from eclipse lightcurves, a feature not included in the previous study. When computing $\chi^2$, an additional term $(i_{\rm BaBb} - i_{\rm BaBb, \, eclipse})^2 / \sigma^2_{i {\rm , \, BaBb, \, eclipse}}$ is added to the sum, where $i_{\rm BaBb, \, eclipse} = 80.63$ and $\sigma_{i {\rm , \, BaBb, \, eclipse}} = 0.33$ degrees are the value and uncertainty of the Ba-Bb inclination from the lightcurve studies of \cite{vanHamme1994}. Note that this measurement results from an entirely independent data set. This added constraint lessens covariances between orbital elements.

The corrected and updated orbital solutions are presented in Tables \ref{delEquOrbitModels} and \ref{OthersOrbitModels}, which are fit to the complete set of PHASES observations and the other astrometric and RV observations tabulated in those previous papers. (A few new measurements from speckle interferometry are available for each system since those investigations. These have little impact on the orbital solutions and are not included in the present fit to avoid over-complicating this update.) The updated reweighting factors for the PHASES uncertainties for each data set are 3.91 for $\delta$ Equ, 7.93 for $\kappa$ Peg, and 2.0 for V819 Her. Alternatively, the noise floor for $\kappa$ Peg is now found at 161 $\mu{\rm as}$.
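As an illustrative cross-check (not an additional constraint used in the fits), inserting the corrected V819 Her elements from Table \ref{OthersOrbitModels} into eq.~\ref{KapPegMutualInclination} gives
\begin{displaymath}
\cos \Phi = \cos 56.40^\circ \cos 80.70^\circ + \sin 56.40^\circ \sin 80.70^\circ \cos\left(141.96^\circ - 131.1^\circ\right) \approx 0.897,
\end{displaymath}
i.e., $\Phi_{\rm AB-BaBb} \approx 26.3$ degrees, the value quoted in the KCTF discussion above.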
\begin{deluxetable}{lllllllllllllll} \tabletypesize{\tiny} \tablecolumns{15} \tablewidth{0pc} \tablecaption{New PHASES data for $\delta$ Equ, $\kappa$ Peg, and V819 Her\label{phasesDataOthers}} \tablehead{ \colhead{Star} & \colhead{HJD-} & \colhead{$\delta$RA} & \colhead{$\delta$Dec} & \colhead{$\sigma_{\rm min}$} & \colhead{$\sigma_{\rm maj}$} & \colhead{$\phi_{\rm e}$} & \colhead{$\sigma_{\rm RA}$} & \colhead{$\sigma_{\rm Dec}$} & \colhead{$\frac{\sigma_{\rm RA, Dec}^2}{\sigma_{\rm RA}\sigma_{\rm Dec}}$} & \colhead{N} & \colhead{LDC} & \colhead{Align} & \colhead{Rate} & \colhead{Eclipse} \\ \colhead{} & \colhead{2400000.5} & \colhead{(mas)} & \colhead{(mas)} & \colhead{($\mu{\rm as}$)} & \colhead{($\mu{\rm as}$)} & \colhead{(deg)} & \colhead{($\mu{\rm as}$)} & \colhead{($\mu{\rm as}$)} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{(Hz)} & \colhead{} } \startdata $\delta$ Equ & 53508.50939 & -86.2658 & -117.6468 & 25.7 & 1872.8 & 150.49 & 1630.0 & 922.7 & -0.99949 & 263 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53550.41402 & -94.5630 & -143.7433 & 33.6 & 248.4 & 151.98 & 219.8 & 120.4 & -0.94899 & 370 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53552.38996 & -95.1309 & -144.7397 & 10.0 & 85.5 & 150.13 & 74.3 & 43.5 & -0.96437 & 1352 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53571.33634 & -99.1010 & -155.5409 & 9.8 & 68.1 & 149.71 & 59.0 & 35.4 & -0.94762 & 1089 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53584.32135 & -101.4805 & -162.5823 & 10.4 & 104.4 & 151.49 & 91.9 & 50.7 & -0.97257 & 774 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53586.30477 & -102.0695 & -163.5727 & 13.2 & 831.5 & 150.94 & 726.8 & 404.0 & -0.99930 & 639 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53607.23078 & -105.7105 & -174.9135 & 3.1 & 80.6 & 148.50 & 68.8 & 42.2 & -0.99617 & 4856 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53613.22582 & -107.1684 & -177.7125 & 6.7 & 243.8 & 150.28 & 211.8 & 121.0 & -0.99794 & 1601 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53614.22329 & -106.8636 & -178.5240 & 5.9 & 174.7 & 150.30 & 151.8 & 86.7 & -0.99697 & 1597 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53637.20185 & -111.2007 & -189.7814 & 12.3 & 357.4 & 157.38 & 330.0 & 137.9 & -0.99531 & 570 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53656.13045 & -114.1207 & -198.7179 & 16.3 & 215.1 & 154.15 & 193.8 & 94.9 & -0.98163 & 526 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53874.48665 & -133.8986 & -275.5257 & 12.3 & 424.2 & 147.53 & 357.9 & 228.0 & -0.99794 & 264 & 0 & 0 & 100 & 1 \\ $\delta$ Equ & 53909.42854 & -134.0082 & -283.8970 & 6.1 & 58.2 & 152.62 & 51.7 & 27.3 & -0.96785 & 2646 & 0 & 1 & 50 & 1 \\ $\delta$ Equ & 53957.31149 & -133.6026 & -293.0248 & 9.0 & 405.5 & 154.61 & 366.4 & 174.1 & -0.99836 & 1127 & 1 & 1 & 50 & 1 \\ $\delta$ Equ & 53970.25612 & -133.8368 & -294.8599 & 5.7 & 50.6 & 153.56 & 45.3 & 23.1 & -0.96133 & 2387 & 1 & 1 & 50 & 1 \\ $\delta$ Equ & 53971.25621 & -133.8811 & -294.9923 & 7.3 & 62.3 & 152.11 & 55.1 & 29.8 & -0.96095 & 1682 & 1 & 1 & 50 & 1 \\ $\delta$ Equ & 53977.26493 & -133.7524 & -295.9196 & 5.3 & 39.3 & 157.55 & 36.4 & 15.8 & -0.93151 & 3320 & 1 & 1 & 50 & 1 \\ $\delta$ Equ & 54003.21361 & -133.0033 & -299.3443 & 16.7 & 206.4 & 161.12 & 195.4 & 68.6 & -0.96621 & 835 & 1 & 1 & 50 & 1 \\ $\delta$ Equ & 54005.19180 & -133.0818 & -299.4815 & 11.6 & 115.3 & 157.21 & 106.4 & 45.9 & -0.96173 & 1614 & 1 & 1 & 50 & 1 \\ $\delta$ Equ & 54028.10922 & -132.3883 & -301.8627 & 13.3 & 406.9 & 153.38 & 363.9 & 182.7 & -0.99668 & 632 & 1 & 1 & 50 & 1 \\ $\delta$ Equ & 54037.11296 & -132.9388 & -302.3034 & 11.0 & 319.6 & 159.14 & 298.7 & 114.3 & -0.99468 & 556 & 1 & 1 
& 100 & 1 \\ $\delta$ Equ & 54230.50355 & -116.0819 & -301.5397 & 13.1 & 508.2 & 146.70 & 424.8 & 279.2 & -0.99843 & 933 & 1 & 1 & 50 & 1 \\ $\delta$ Equ & 54266.47236 & -111.8997 & -297.2144 & 13.2 & 112.0 & 158.12 & 104.1 & 43.5 & -0.94550 & 1315 & 1 & 1 & 50 & 1 \\ $\kappa$ Peg & 53494.50786 & 104.6067 & 43.3687 & 13.6 & 645.9 & 143.23 & 517.5 & 386.8 & -0.99904 & 702 & 0 & 0 & 100 & 1 \\ $\kappa$ Peg & 53586.36471 & 83.9658 & 55.9066 & 2.2 & 11.5 & 166.51 & 11.2 & 3.4 & -0.75093 & 5532 & 0 & 0 & 100 & 1 \\ $\kappa$ Peg & 53637.29586 & 71.2323 & 62.9289 & 6.3 & 53.5 & 173.82 & 53.2 & 8.5 & -0.66476 & 1639 & 0 & 0 & 100 & 1 \\ $\kappa$ Peg & 53921.45172 & 25.0960 & 85.9815 & 43.1 & 10007.6 & 160.71 & 9445.7 & 3306.4 & -0.99990 & 615 & 0 & 1 & 50 & 1 \\ $\kappa$ Peg & 53963.35858 & -9.1846 & 98.3661 & 6.2 & 79.3 & 165.54 & 76.8 & 20.7 & -0.95159 & 2244 & 1 & 1 & 100 & 1 \\ $\kappa$ Peg & 53978.32708 & -14.0804 & 99.4673 & 2.7 & 17.5 & 162.83 & 16.8 & 5.8 & -0.87056 & 5863 & 1 & 1 & 50 & 1 \\ $\kappa$ Peg & 53995.24384 & -17.0621 & 101.1081 & 11.5 & 514.2 & 159.38 & 481.3 & 181.5 & -0.99770 & 389 & 1 & 1 & 100 & 1 \\ $\kappa$ Peg & 54003.32618 & -22.4829 & 101.1592 & 10.9 & 849.5 & 3.09 & 848.2 & 47.1 & 0.97271 & 1590 & 1 & 1 & 50 & 1 \\ $\kappa$ Peg & 54008.31093 & -21.6029 & 102.0991 & 4.6 & 114.1 & 2.48 & 113.9 & 6.7 & 0.73308 & 2850 & 1 & 1 & 50 & 1 \\ $\kappa$ Peg & 54075.12667 & -38.5846 & 107.2133 & 7.0 & 183.8 & 3.06 & 183.5 & 12.0 & 0.81528 & 1756 & 1 & 1 & 50 & 1 \\ $\kappa$ Peg & 54265.39966 & -84.8163 & 119.9685 & 8.2 & 389.0 & 142.94 & 310.4 & 234.5 & -0.99904 & 3661 & 1 & 1 & 50 & 1 \\ V819 Her & 53109.47951 & 49.6406 & -84.4966 & 7.3 & 282.5 & 158.77 & 263.4 & 102.5 & -0.99707 & 2011 & 0 & 0 & 100 & 1 \\ V819 Her & 53110.48183 & 48.0946 & -84.1334 & 11.9 & 600.4 & 159.53 & 562.5 & 210.3 & -0.99819 & 1334 & 0 & 0 & 100 & 1 \\ V819 Her & 53123.45772 & 49.1860 & -85.9318 & 18.1 & 507.8 & 162.47 & 484.3 & 153.9 & -0.99240 & 1378 & 0 & 0 & 100 & 0 \\ V819 Her & 53130.44208 & 48.4778 & -86.4135 & 6.6 & 205.8 & 162.94 & 196.7 & 60.7 & -0.99360 & 2537 & 0 & 0 & 100 & 1 \\ V819 Her & 53137.43044 & 48.3928 & -87.1396 & 14.0 & 280.2 & 164.34 & 269.9 & 76.8 & -0.98202 & 1226 & 0 & 0 & 100 & 1 \\ V819 Her & 53144.42426 & 47.7017 & -87.6612 & 25.1 & 1039.6 & 167.13 & 1013.5 & 232.9 & -0.99386 & 897 & 0 & 0 & 100 & 1 \\ V819 Her & 53145.39541 & 48.3082 & -87.8013 & 13.7 & 316.5 & 161.59 & 300.3 & 100.8 & -0.98964 & 1673 & 0 & 0 & 100 & 1 \\ V819 Her & 53168.33949 & 47.0275 & -89.7513 & 15.0 & 339.9 & 162.93 & 325.0 & 100.8 & -0.98778 & 1409 & 0 & 0 & 100 & 1 \\ V819 Her & 53172.35221 & 47.3441 & -90.1337 & 6.3 & 170.0 & 168.29 & 166.5 & 35.1 & -0.98309 & 2560 & 0 & 0 & 100 & 1 \\ V819 Her & 53173.33202 & 47.1604 & -90.3599 & 8.0 & 77.4 & 33.97 & 64.4 & 43.8 & 0.97548 & 2904 & 0 & 0 & 100 & 1 \\ V819 Her & 53181.33391 & 46.4333 & -90.7857 & 7.5 & 174.6 & 169.71 & 171.8 & 32.1 & -0.97114 & 2795 & 0 & 0 & 100 & 0 \\ V819 Her & 53182.33164 & 46.5646 & -91.0136 & 13.9 & 333.2 & 169.62 & 327.8 & 61.6 & -0.97328 & 2014 & 0 & 0 & 100 & 1 \\ V819 Her & 53186.30448 & 45.6584 & -91.1213 & 18.2 & 426.3 & 166.80 & 415.1 & 99.0 & -0.98197 & 706 & 0 & 0 & 100 & 1 \\ V819 Her & 53187.30462 & 46.1427 & -91.2225 & 13.0 & 441.5 & 166.94 & 430.1 & 100.6 & -0.99110 & 1578 & 0 & 0 & 100 & 1 \\ V819 Her & 53197.26851 & 46.1852 & -92.1522 & 4.8 & 117.1 & 164.87 & 113.0 & 30.9 & -0.98715 & 5218 & 0 & 0 & 100 & 0 \\ V819 Her & 53198.24258 & 46.2937 & -92.2555 & 5.7 & 54.6 & 160.37 & 51.5 & 19.1 & -0.94836 & 5404 & 
0 & 0 & 100 & 0 \\ V819 Her & 53199.29186 & 44.0252 & -91.9446 & 24.7 & 1645.4 & 171.42 & 1627.0 & 246.7 & -0.99488 & 946 & 0 & 0 & 100 & 0 \\ V819 Her & 53208.25236 & 46.4303 & -92.4901 & 6.6 & 181.6 & 37.67 & 143.8 & 111.1 & 0.99718 & 6558 & 0 & 0 & 100 & 0 \\ V819 Her & 53214.24077 & 45.6337 & -93.3429 & 5.5 & 125.9 & 169.45 & 123.8 & 23.7 & -0.97194 & 5251 & 0 & 0 & 100 & 1 \\ V819 Her & 53215.23094 & 45.6364 & -93.5172 & 4.8 & 110.6 & 167.53 & 108.0 & 24.3 & -0.97962 & 5723 & 0 & 0 & 100 & 1 \\ V819 Her & 53221.22209 & 46.2559 & -92.9726 & 8.8 & 342.2 & 38.91 & 266.3 & 215.0 & 0.99860 & 3998 & 0 & 0 & 100 & 1 \\ V819 Her & 53228.20946 & 45.0884 & -94.3310 & 7.2 & 100.6 & 169.45 & 98.9 & 19.7 & -0.92813 & 3180 & 0 & 0 & 100 & 0 \\ V819 Her & 53229.22073 & 45.2196 & -94.5160 & 6.4 & 80.0 & 172.84 & 79.4 & 11.8 & -0.83914 & 3905 & 0 & 0 & 100 & 1 \\ V819 Her & 53233.18295 & 45.0993 & -94.8458 & 6.0 & 64.7 & 167.67 & 63.2 & 15.0 & -0.91188 & 3303 & 0 & 0 & 100 & 1 \\ V819 Her & 53234.20151 & 44.8269 & -94.7637 & 7.6 & 37.8 & 172.74 & 37.5 & 8.9 & -0.51352 & 3701 & 0 & 0 & 100 & 1 \\ V819 Her & 53235.21764 & 45.2153 & -94.9183 & 8.5 & 107.1 & 176.57 & 106.9 & 10.6 & -0.60015 & 2094 & 0 & 0 & 100 & 1 \\ V819 Her & 53236.16733 & 44.4598 & -94.8862 & 4.7 & 78.1 & 166.59 & 76.0 & 18.7 & -0.96552 & 6684 & 0 & 0 & 100 & 0 \\ V819 Her & 53249.16006 & 44.2467 & -95.8032 & 4.3 & 87.9 & 172.71 & 87.2 & 12.0 & -0.93121 & 5428 & 0 & 0 & 100 & 1 \\ V819 Her & 53466.52265 & 31.4132 & -102.8881 & 9.9 & 204.3 & 163.01 & 195.4 & 60.4 & -0.98524 & 3031 & 0 & 0 & 100 & 1 \\ V819 Her & 53481.50628 & 30.4432 & -103.3348 & 11.1 & 311.1 & 38.18 & 244.6 & 192.5 & 0.99731 & 3301 & 0 & 0 & 100 & 0 \\ V819 Her & 53494.45305 & 29.5257 & -103.1671 & 18.6 & 251.9 & 163.98 & 242.2 & 71.8 & -0.96316 & 1355 & 0 & 0 & 100 & 1 \\ V819 Her & 53585.24930 & 23.9818 & -102.6116 & 10.1 & 114.7 & 174.02 & 114.0 & 15.6 & -0.76046 & 1479 & 0 & 0 & 100 & 1 \\ V819 Her & 53874.42827 & 1.2427 & -88.7776 & 6.8 & 99.5 & 168.07 & 97.4 & 21.6 & -0.94665 & 2358 & 0 & 0 & 100 & 1 \\ V819 Her & 53956.21398 & -5.0810 & -81.9129 & 9.2 & 341.7 & 170.23 & 336.8 & 58.7 & -0.98729 & 4028 & 1 & 1 & 50 & 0 \\ \enddata \tablecomments{ All quantities are in the ICRS 2000.0 reference frame. The uncertainty values presented in these data have not been rescaled. Column 1 is the star name. Columns 2-14 are as columns 1-13 in Table \ref{phasesDataMuOri}. Column 15 is 0 if the measurement was taken during a subsystem eclipse, 1 otherwise (V819 Her only). 
} \end{deluxetable} \begin{deluxetable}{lll} \tablecolumns{3} \tablewidth{0pc} \tablecaption{Orbital parameters for $\delta$ Equ\label{delEquOrbitModels}} \tablehead{ \colhead{Parameter} & \colhead{Value} & \colhead{Uncertainty} } \startdata $P$ (days) & 2084.03 & $\pm 0.10$ \\ $T$ (MHJD) & 53112.071 & $\pm 0.052$ \\ $e$ & 0.436851 & $\pm 0.000025$ \\ $a$ (mas) & 231.9650 & $\pm 0.0080$ \\ $V_{0, Lick}$ (${\rm km\,s^{-1}}$) & $-15.40$ & $\pm 0.11$ \\ $V_{0, DAO}$ (${\rm km\,s^{-1}}$) & $-15.875$ & $\pm 0.080$ \\ $V_{0, C}$ (${\rm km\,s^{-1}}$) & $-15.73$ & $\pm 0.10$ \\ $M_1$ ($M_\odot$) & 1.192 & $\pm 0.012$ \\ $M_2$ ($M_\odot$) & 1.187 & $\pm 0.012$ \\ $M_1+M_2$ ($M_\odot$) & 2.380 & $\pm 0.019$ \\ $M_1/M_2$ & 1.004 & $\pm 0.012$ \\ $i$ (deg) & 99.4083 & $\pm 0.0098$ \\ $\omega$ (deg) & 7.735 & $\pm 0.013$ \\ $\Omega$ (deg) & 23.362 & $\pm 0.012$ \\ $d$ (pc) & 18.379 & $\pm 0.048$ \\ \tableline $\pi$ (mas) & 54.41 & $\pm 0.14$ \\ \enddata \tablecomments{ All parameter uncertainties have been increased by a factor of $\sqrt{\chi_r^2} = 1.09$ (though the $\chi_r^2$ of the combined fit is artificial due to rescaling the uncertainties of the individual data sets, this reflects the degree to which the data sets agree with each other). The fit was repeated several times varying the set of non-degenerate parameters used in order to obtain uncertainty estimates for a number of desired quantities. The parameters $\{a, R=M_1/M_2\}$ were replaced with the sets $\{M=M_1+M_2, R\}$ and $\{M_1, M_2\}$. The parallax is a derived quantity.} \end{deluxetable} \begin{deluxetable}{lllll} \tabletypesize{\footnotesize} \tablecolumns{5} \tablewidth{0pc} \tablecaption{Orbital parameters for $\kappa$ Peg and V819 Her\label{OthersOrbitModels}} \tablehead{ \colhead{} & \multicolumn{2}{c}{$\kappa$ Peg} & \multicolumn{2}{c}{V819 Her} \\ \colhead{Parameter} & \colhead{Value} & \colhead{Uncertainty} & \colhead{Value} & \colhead{Uncertainty} } \startdata $P_{AB}$ (days) & 4224.76 & $\pm 0.74$ & 2019.66 & $\pm 0.35$ \\ $T_{AB}$ (MHJD) & 52401.52 & $\pm 0.96$ & 52627.5 & $\pm 1.3$ \\ $e_{AB}$ & 0.3140 & $\pm 0.0011$ & 0.67974 & $\pm 0.00066$ \\ $i_{AB}$ (degrees) & 107.911 & $\pm 0.029$ & 56.40 & $\pm 0.13$ \\ $\omega_{AB}$ (degrees) & 124.666 & $\pm 0.064$ & 222.50 & $\pm 0.22$ \\ $\Omega_{AB}$ (degrees) & 289.037 & $\pm 0.021$ & 141.96 & $\pm 0.12$ \\ $P_{BaBb}$ (days) & 5.9714971 & $\pm 1.3 \times10^{-6}$ & 2.2296330 & $\pm 1.9 \times10^{-6}$ \\ $T_{BaBb}$ (MHJD) & 52402.22 & $\pm 0.10$ & 52627.17 & $\pm 0.29$ \\ $e_{BaBb}$ & 0.0073 & $\pm 0.0013$ & 0.0041 & $\pm 0.0033$ \\ $i_{BaBb}$ (degrees) & 125.7 & $\pm 5.1$ & 80.70 & $\pm 0.38$ \\ $\omega_{BaBb}$ (degrees) & 179.0 & $\pm 6.0$ & 227 & $\pm 47$ \\ $\Omega_{BaBb}$ (degrees) & 244.1 & $\pm 2.3$ & 131.1 & $\pm 4.1$ \\ $V_{0, Keck}$ (${\rm km\,s^{-1}}$) & $-9.46$ & $\pm 0.22$ & \nodata & \nodata \\ $V_{1, Keck}$ (${\rm km\,s^{-1}\,day^{-1}}$) & $-2.2$ $\times 10^{-4}$ & $\pm 3.4 \times 10^{-4}$ & \nodata & \nodata \\ $V_{2, Keck}$ (${\rm km\,s^{-1}\,day^{-2}}$) & $6.8 \times 10^{-6}$ & $\pm 2.4 \times 10^{-6}$ & \nodata & \nodata \\ $V_{0, C}$ (${\rm km~s^{-1}}$) & $-9.40$ & $\pm 0.26$ & \nodata & \nodata \\ $V_{0, Lick}$ (${\rm km~s^{-1}}$) & $-8.37$ & $\pm 0.26$ & \nodata & \nodata \\ $V_{0, M/K}$ (${\rm km~s^{-1}}$) & \nodata & \nodata & $-3.375$ & $\pm 0.059$ \\ $V_{0, DAO}$ (${\rm km~s^{-1}}$) & \nodata & \nodata & $-3.385$ & $\pm 0.065$ \\ $V_{0, DDO}$ (${\rm km~s^{-1}}$) & \nodata & \nodata & $-3.35$ & $\pm 0.12$ \\ $M_A$ ($M_\odot$) & 1.533 & $\pm 0.050$ & 
1.799 & $\pm 0.098$ \\
$M_{Ba+Bb}$ ($M_\odot$) & 2.472 & $\pm 0.078$ & 2.560 & $\pm 0.067$ \\
$M_{Bb}/M_{Ba}$ & 0.501 & $\pm 0.049$ & 0.742 & $\pm 0.012$ \\
$L_{Bb}/L_{Ba}$ & 0.015 & $\pm 0.021$ & 0.280 & $\pm 0.037$ \\
$d$ (parsecs) & 34.57 & $\pm 0.21$ & 68.65 & $\pm 0.87$ \\
\tableline
$\Phi_{\rm AB-BaBb}$ (degrees) & 43.4 & $\pm 3.9$ & 26.3 & $\pm 1.5$ \\
$M_{Ba}$ ($M_\odot$) & 1.646 & $\pm 0.074$ & 1.469 & $\pm 0.040$ \\
$M_{Bb}$ ($M_\odot$) & 0.825 & $\pm 0.059$ & 1.090 & $\pm 0.030$ \\
$a_{AB}$ (AU) & 8.122 & $\pm 0.063$ & 5.108 & $\pm 0.046$ \\
$a_{BaBb}$ (AU) & 0.08710 & $\pm 0.00091$ & 0.04569 & $\pm 0.00040$ \\
$\pi$ (mas) & 28.93 & $\pm 0.18$ & 14.57 & $\pm 0.19$ \\
\enddata
\tablecomments{ Uncertainties for $\kappa$ Pegasi are the maximum of three uncertainties: the uncertainty from the combined fit that included PHASES-reweighted data, that including PHASES data with a $161\mu{\rm as}$ noise floor, and the difference in the fit values for the two models. The parameters are the average values between a fit including the reweighted uncertainties and one with the noise floor. }
\end{deluxetable}

\section{Conclusions}

The center-of-light astrometric motions of the Aa-Ab and Ba-Bb subsystems in $\mu$ Ori have been constrained by PHASES observations. While four degenerate orbital solutions exist, two of these can be excluded with high reliability based on mass-luminosity arguments and the fact that Ab is not observed in the spectra. Ba and Bb are stars of a class (mid-F dwarfs) whose properties have been well established by studying other binaries. Their association with Aa and Ab, which are members of more poorly studied classes (Am and late K dwarfs), allows a better understanding of those objects in a system which can be assumed to be coevolved. The orbital solution finds masses and luminosities for all four components, the basic properties necessary in studying their natures.

Complex dynamics must occur in $\mu$ Ori. The Ba-Bb orbital plane is nearly perpendicular to that of the A-B motion, and certainly undergoes Kozai-type inclination-eccentricity oscillations. It is possible that the mutual inclination of the A-B pair and Aa-Ab subsystem is a result of KCTF effects over the system's evolution.

Finally, it is noted that the orbits in the $\mu$ Ori system are quite non-coplanar. This is in striking contrast with the planets of the solar system, but follows the trend seen in triple star systems. With the solar system being the only one whose coplanarity has been evaluated, it is difficult to draw conclusions about the configurations of planetary systems in general. It is important that future investigations evaluate the coplanarities of extrasolar planetary systems to establish a distribution. Whether that distribution will be the same as or different from that of their stellar counterparts may point to similarities or differences in star and planet formation, and provide a key constraint on modeling multiple star and planet formation.

\acknowledgements

We thank Daniel Fabrycky for helpful correspondence about his recent theoretical work on KCTF. We thank Bill Hartkopf for providing weights for the new non-PHASES differential astrometry measurements. PHASES benefits from the efforts of the PTI collaboration members who have each contributed to the development of an extremely reliable observational instrument. Without this outstanding engineering effort to produce a solid foundation, advanced phase-referencing techniques would not have been possible.
We thank PTI's night assistant Kevin Rykoski for his efforts to maintain PTI in excellent condition and for operating PTI in phase-referencing mode every week. Part of the work described in this paper was performed at the Jet Propulsion Laboratory under contract with the National Aeronautics and Space Administration. Interferometer data were obtained at the Palomar Observatory with the NASA Palomar Testbed Interferometer, supported by NASA contracts to the Jet Propulsion Laboratory. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has made use of the Simbad database, operated at CDS, Strasbourg, France. MWM acknowledges support from the Townes Fellowship Program. PHASES is funded in part by the California Institute of Technology Astronomy Department, and by the National Aeronautics and Space Administration under Grant No.~NNG05GJ58G issued through the Terrestrial Planet Finder Foundation Science Program. This work was supported in part by the National Science Foundation through grants AST 0300096, AST 0507590, and AST 005366. The work of FCF has been supported in part by NASA grant NCC5-511 and NSF grant HRD-9706268. MK is supported by the Polish Ministry of Science and Higher Education through grants N203 005 32/0449 and 1P03D 021 29.
\section{Conclusions}
\label{sec:conclusion}
This paper empirically investigates the issues, causes, and solutions of microservices systems by employing a mixed-methods approach. Our study collected data from 2,641 issues from the issue tracking systems of 15 open-source microservices systems on GitHub, 15 interviews, and an online survey completed by 150 practitioners. The primary contribution of this work is rooted in the taxonomies of issues, causes, and solutions in microservices systems. The taxonomy of issues consists of 19 categories, 54 subcategories, and 402 types of issues. The taxonomy of causes contains 8 categories, 26 subcategories, and 228 types of causes, whereas the taxonomy of solutions includes 8 categories, 32 subcategories, and 177 types of solutions. Overall, the major issues for microservices systems are Technical Debt, Continuous Integration and Delivery, and Exception Handling issues. The majority of the issues occur due to General Programming Errors, Missing Features and Artifacts, and Invalid Configuration and Communication. These issues are mainly addressed by Fixing, Adding, and Modifying artifacts. Based on the results and our observations in this study, we propose the following future research directions:

\begin{itemize}
\item\textit{New Techniques}: Proposing (i) techniques for controlling TD through the design of microservices systems, (ii) multi-layered security solutions for fine-grained security management, (iii) architecting and design techniques for microservices systems with a particular focus on highly resilient and loosely coupled microservices systems, (iv) solutions to trace and isolate communication issues to increase fault tolerance, (v) techniques for dynamically optimizing configuration settings of microservices, (vi) techniques for identifying critical paths for improving performance and removing performance bottlenecks due to configuration issues, (vii) techniques for detecting security breach points during the configuration of microservices systems, (viii) strategies to automatically test APIs, load, and application security for microservices systems, (ix) techniques for dynamically assigning storage platforms according to the requirements of microservices, (x) techniques for storing and accessing decentralized and shared data without losing the independence of individual microservices, (xi) techniques for optimizing resource utilization (e.g., CPU and GPU usage), (xii) a framework for improving scalability to increase microservices performance, and (xiii) techniques for converting code of one language to another used in microservices systems.

\item\textit{Empirically-Grounded Studies and Guidelines}: Conducting empirical studies to (i) investigate technical debt at the code, design, and communication level of microservices systems, (ii) explore the efforts required to fix the build issues in microservices systems at different phases (e.g., compilation and linking phases), and (iii) compare the types of problems and the needed efforts to fix the build issues in microservices systems with monolithic systems.
We also argue for preparing strategies and guidelines for (i) addressing \textsc{cd pipeline errors}, (ii) establishing an issue, cause, and solution knowledge base for microservices systems in the multi-cloud (e.g., AWS, Google Cloud) containerized environment, (iii) organizing polyglot databases for efficient read and write operations by considering performance, and (iv) addressing microservices security vulnerabilities and related risks at various levels, such as data centers, cloud providers, virtualization, communication, and orchestration. \item\textit{Tools}: Designing and developing intelligent tools (i) to identify, measure, prioritize, monitor, and prevent TD in microservices systems, (ii) for distributed tracing and real-time performance monitoring, and (iii) to fix Networking issues like \textsc{localhost}, \textsc{ip address}, and \textsc{webhook} errors. Moreover, the outcome of our study (e.g., issue, cause, and solution taxonomies) can help propose a systematic process and develop an intelligent recommendation system to (semi-)automatically identify the issues and causes of microservices system development, as well as recommend the solutions to address those issues. The recommendation system can assist microservices practitioners in efficiently and effectively designing, developing, and operationalizing microservices systems. \end{itemize} In conclusion, this paper has presented an overview of various issues that can occur in microservices systems. We have discussed the challenges and pitfalls in the design and implementation of microservices systems and highlighted the patterns and trends in the types of issues that arise. Additionally, we have provided insights into the solutions for addressing these issues. We also expect our study not only to contribute to the existing body of knowledge on issues, causes, and solutions in microservices systems, which lacks methodological and empirical rigor, but also to encourage other researchers to explore this highly important research area more deeply. Empirical knowledge of the nature of issues in microservices systems can help organizations develop a better understanding of the challenges and opportunities that microservices architecture brings and how to address them. \section{Discussion} \label{Discussion} After presenting the results, we now discuss the correlation between issues, their causes, and solutions (Section \ref{Sec:MappingIssueCausesSolutions}), followed by the implications of the research. \subsection{Analyzing the relationship between issues, causes, and solutions} \label{Sec:MappingIssueCausesSolutions} While the taxonomy in Figure \ref{fig:Taxonomy} provided a categorization of issues in microservices systems, a mapping between the issues, their causes, and solutions is presented in Figure \ref{fig:mapping}. Mapping diagrams, which rely on bubble plots, are frequently used in systematic mapping studies to correlate data or concepts along different dimensions \cite{petersen2008systematic}. We have chosen the mapping diagram to map the issues (Y-axis) to their causes (X-axis) and to present the solutions that can address the issues (intersection of the X/Y-axes). The interpretation of the mapping in Figure \ref{fig:mapping} is based on locating a given issue (Y-axis), mapping this issue with the cause(s) (X-axis), and identifying the possible solutions to fix the issue, as elaborated and exemplified below. 
\begin{itemize} \item \textit{Issues in microservices systems (Y-axis)}: A total of 19 categories of issues, adopted from Figure \ref{fig:Taxonomy}, are presented on the Y-axis. For example, one of the issue categories, Technical Debt, has a total of 687 instances of issues, as presented in Figure \ref{fig:Taxonomy}. \item \textit{Causes of the issues (X-axis)}: A total of 8 categories of causes are presented on the X-axis, mapped with the corresponding issues. For example, General Programming Errors, such as \textit{incorrect naming and data type} (157, 6.92\%), \textit{testing error} (25, 1.10\%), and \textit{poor documentation} (22, 0.97\%), are the predominant causes of Technical Debt issues. In comparison, causes like Poor Security Management have no impact on Technical Debt issues. \item \textit{Solutions to resolve the issues}: A total of 8 categories of solutions are presented at the intersection of the issues and their causes. For example, solutions such as fixing an artifact (code, GUI, errors in build files) or removing an artifact (code dependency, empty tag, unnecessary documentation) can help fix a majority of the Technical Debt issues. \end{itemize} The mapping in Figure \ref{fig:mapping} can have diverse interpretations based on the intent of the analysis, which may include but is not limited to frequency analysis and data correlations. It is virtually impossible to elaborate on all possible interpretations; however, some of them are exemplified as follows: \begin{itemize} \item \textit{What are the most and least frequently occurring issues in microservices systems}? As per the mapping, the most frequent (top 3) issues are related to Technical Debt, CI/CD, and Exception Handling, representing a total of 764 identified issues. On the other hand, the least frequent issues relate to Organizational, Update and Installation, and Typecasting issues. \item \textit{What are the most and least common causes of a specific category of issues}? The mapping of the causes suggests that General Programming Errors, Invalid Configuration and Communication, and Legacy Versions, Compatibility and Dependency Problems are the most common causes of the issues. Similarly, Fragile Code represents the least common cause of issues in microservices systems. \item \textit{What are the most and least recurring solutions to fix a specific category of issues}? Fix Artifacts, Add Artifacts, and Modify Artifacts represent the most recurring solutions to address microservices issues, whereas Import/Export Artifacts, Upgrade Tools and Platforms, and Manage Configuration and Execution represent the least recurring solutions. \end{itemize} \begin{figure*}[!htbp] \centering \includegraphics[width=0.8\textwidth]{Figures/mapping.pdf} \caption{Mapping between issues, causes, and solutions in microservices systems} \label{fig:mapping} \end{figure*} \subsection{Implications} \textbf{Technical Debt}: The results of this study indicate that more than one-fourth (25.59\%) of the issues are related to TD, spreading across a plethora of microservices systems development activities, such as design, coding, refactoring, and configuration \cite{freire2022software}. The detailed analysis of the causes reveals that most TD issues occur due to GPE, including \textit{compile time errors}, \textit{erroneous method definition and execution}, and \textit{incorrect naming and data type}. 
We observed that TD issues are mainly addressed by fixing, adding, and removing the artifacts. We also observed that TD in microservices systems is growing at a higher rate than the other types of issues identified in this study. Recently published studies that investigated TD in microservices systems (e.g., \cite{de2021identifying, de2021reducing, 2020Does, bogner2018limiting}) have discussed several aspects of TD, such as architectural TD \cite{de2021identifying}, repaying architectural TD \cite{de2021reducing}, TD before and after the migration to microservices \cite{2020Does}, and limiting TD with maintainability assurance \cite{bogner2018limiting}. However, the majority of TD in our study is related to \textit{code} and \textit{service design} debt, i.e., the architecture and implementation levels of TD in microservices systems. Our study provides in-depth details about the types of TD, their causes, and solutions, which can raise the awareness of microservices practitioners to manage TD issues before they become too costly. Based on the study findings, we assert that future studies can investigate several other aspects of TD in microservices systems, such as (i) controlling TD through the design of microservices systems, (ii) investigating TD of microservices systems (e.g., \textsc{service dependencies}) at the code, design, and communication levels, and (iii) proposing dedicated techniques and tools to identify, measure, prioritize, monitor, and prevent TD (e.g., \textsc{deprecated flags}, \textsc{data race}) in microservices systems. \textbf{CI/CD Issues}: CI and CD rely on a number of software development practices, such as rapid prototyping and sprinting, to enable practitioners with frequent integration and delivery of software systems and applications \cite{shahin2017continuous}. The combination of CI/CD and microservices systems enables practitioners to gain several benefits, including maintainability, deployability, and cohesiveness \cite{o2017continuous}. This study reports various CI/CD issues (55 types), their causes, and solutions, mainly related to delivery pipelines and establishing cloud infrastructure management platforms (e.g., Google Cloud, AWS) for microservices systems. The primary causes behind the CI/CD issues are related to the SD\&IA (e.g., \textsc{wrong dependencies chain}), GPE (e.g., \textsc{long message chain}), and ICC (e.g., \textsc{incorrect configuration setting}) categories. Most of the CI/CD issues are addressed by fixing artifacts and upgrading tools and platforms. Various issues related to continuous deployment, delivery, and integration of microservices in continuous software engineering (e.g., CI/CD, DevOps) have been discussed in the literature (e.g., \cite{jamshidi2018microservices, chen2018microservices, bavskarada2020architecting, waseem2020systematic}). However, none of the above-mentioned studies provides fine-grained details about these issues. Based on the findings of this study, future work can investigate several aspects of combining microservices systems with CI/CD, such as (i) proposing general guidelines and strategies for preventing and addressing \textsc{cd pipeline errors} in microservices systems, and (ii) enriching the issue, cause, and solution knowledge for microservices systems in the multi-cloud (e.g., AWS, Google Cloud) containerized environment. \textbf{Build Issues}: Build is a process that compiles source code, runs unit tests, and produces artifacts that are ready to deploy as a working program for the software release. 
The build process may consist of several activities, such as parsing, dependency resolution, resource processing, and assembly \cite{lou2020}. Our study results indicate that 7.87\% of the issues are related to the build process of microservices systems, mainly due to GPE and SD\&IA, and most of the build issues are addressed by upgrading tools and platforms, managing infrastructure, and fixing artifacts. The types of build issues, their causes, and solutions indicate that most build problems of microservices systems occur during the parsing, resource processing, and assembly activities. These results can help practitioners avoid various types of build issues. For example, practitioners should not add unnecessary dependencies in Docker build files or introduce outdated Kubernetes versions while establishing build and deployment pipelines for microservices systems. The most frequently reported build issues in this study are mainly related to the compilation and linking phases of the build process. It would be interesting to further explore the build process of microservices systems from the perspectives of (i) code analysis and artifact generation for the build issues, and (ii) the effort required to fix the build issues in microservices systems. \textbf{Security Issues}: Microservices systems are vulnerable to a multitude of security threats due to their distributed nature and availability over public clouds, making them a potential target for cyber-attacks. Our study results indicate that 7.99\% of the issues are related to the security of microservices systems, mainly due to PSM (e.g., \textsc{security dependencies}), GPE (e.g., \textsc{long message chain}), and ICC (e.g., \textsc{wrong connection closure}), and most of the security issues can be addressed by fixing, adding, and modifying artifacts. The issues related to \textsc{handling authorization header}, \textsc{shared authentication}, and \textsc{OAuth token error} indicate that most security issues occur during the authorization and authentication processes of microservices. This also indicates that microservices have poor security at the application level. Other issues related to \textit{access control} and \textit{secure certificate and connection} also confirm that microservices systems have a much larger attack surface than traditional systems (e.g., monolithic systems). The security issues, causes, and solutions identified in this study can help practitioners better understand why and where specific security issues may occur in microservices systems. For instance, practitioners might want to avoid writing unsafe code to prevent access control issues. Our findings suggest that security issues are multi-faceted and can arise at different levels of microservices systems. Therefore, it is valuable to (i) develop dedicated strategies and guidelines to address security vulnerabilities and related risks at various levels, such as data centers, cloud providers, virtualization, communication, and orchestration, and (ii) propose multi-layered security solutions for fine-grained security management in microservices systems. \textbf{Service Execution and Communication Issues}: Generally, microservices systems communicate through synchronous (e.g., HTTP/HTTPS) and asynchronous (e.g., AMQP) protocols to complete the business process. 
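For illustration only (the service endpoint, queue name, and payload below are hypothetical and not drawn from the analyzed projects), a minimal Python sketch contrasting the two communication styles, using the \texttt{requests} library for a synchronous HTTP call and the \texttt{pika} library for an asynchronous AMQP message, could look as follows:
\begin{verbatim}
import requests  # synchronous HTTP/HTTPS client
import pika      # AMQP client (e.g., for RabbitMQ)

# Hypothetical endpoint and queue name, used for illustration only.
ORDER_SERVICE_URL = "http://orders.example.local/api/orders"
PAYMENT_QUEUE = "payments"

def place_order_sync(order: dict) -> dict:
    """Synchronous style: the caller blocks until the order
    service replies (or the request times out)."""
    response = requests.post(ORDER_SERVICE_URL, json=order, timeout=5)
    response.raise_for_status()  # surface HTTP-level errors immediately
    return response.json()

def publish_payment_async(event: bytes) -> None:
    """Asynchronous style: the caller publishes an event and moves
    on; a payment microservice consumes the message later."""
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=PAYMENT_QUEUE, durable=True)
    channel.basic_publish(exchange="",
                          routing_key=PAYMENT_QUEUE,
                          body=event)
    connection.close()
\end{verbatim}
A failure in the synchronous path (e.g., a timeout) surfaces immediately to the caller, whereas a failure in the asynchronous path (e.g., a \textsc{wrong connection closure}) may only manifest later in the consuming service, which illustrates why tracing and isolating communication issues across services is non-trivial.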
The taxonomy of our study shows that 8.03\% of the issues are related to the execution and communication of microservices, mainly because of ICC (e.g., \textsc{incorrect configuration setting}) and GPE (e.g., \textsc{syntax error in code}). Most of the service execution and communication issues are addressed by fixing, adding, and modifying artifacts. Microservices systems may have hundreds of services and their instances that frequently communicate with each other. Service execution and communication in microservices systems can also exacerbate the issues of resiliency, load balancing, distributed tracing, high coupling, and complexity \cite{newman2020building}. Several studies (e.g., \cite{yu2019survey}) also confirm that poor communication between microservices and their instances poses significant challenges for the deployment, security, performance, fault tolerance, and monitoring of microservices systems. The identified issues, causes, and solutions can help practitioners (i) identify the problem areas of service execution and communication and (ii) adopt strategies to prevent microservices execution and communication issues. Moreover, we argue that future studies can (i) propose architecture design techniques for microservices systems, with a particular focus on highly resilient and loosely coupled microservices systems, in order to address service discovery issues, and (ii) propose solutions to trace and isolate service communication issues to increase fault tolerance. \textbf{Configuration Issues}: Microservices systems can have a large number of services and their instances to configure and manage with third-party systems, deployment platforms, and log templates \cite{avritzer2020scalability}. It is essential that microservices systems have the ability to track and manage code and configuration changes. Our study results indicate that 4.65\% of the issues are related to the configuration of microservices systems, mainly due to ICC (e.g., \textsc{incorrect configuration setting}) and GPE (e.g., \textsc{syntax error in code}), and most of the configuration issues are addressed by modifying and fixing artifacts. There has been considerable research conducted on configuring traditional software systems. However, we found only a few studies (e.g., \cite{avritzer2020scalability, schaffer2018configuration, kehrer2018autogenic}) that investigated the configuration of microservices systems. By considering the configuration issues identified from our study and the existing literature, it would be interesting to further investigate and propose techniques and algorithms for (i) dynamically optimizing configuration settings, (ii) identifying critical paths for improving performance and removing performance bottlenecks due to configuration issues, and (iii) detecting security breach points during the configuration of microservices systems. \textbf{Monitoring Issues}: The dynamic nature of microservices systems requires monitoring infrastructure to diagnose and report errors, faults, failures, and performance issues \cite{waseem2021design}. Our study results show that 3.18\% of the issues are related to the monitoring of microservices systems, mainly due to LC\&D (e.g., \textsc{compatibility error}) and GPE (e.g., \textsc{inconsistent package used}), and most of the monitoring issues are addressed by fixing artifacts and upgrading tools and platforms. 
The monitoring of microservices systems interests researchers from several perspectives, including tracing, real-time monitoring, and monitoring tools \cite{cinque2019microservices, phipathananunth2018synthetic, shiraishi2020real}. Future research can design and develop intelligent systems for (i) monitoring hosts, processes, network, and real-time performance of microservices systems, and (ii) identifying the root causes of container issues. \textbf{Testing Issues}: Testing poses additional challenges in microservices systems development, such as the polyglot code base in multiple repositories, feature branches, and databases per service \cite{waseem2021design, waseem2020testing}. Our study results indicate that 2.86\% of the issues are related to the testing of microservices systems, mainly due to GPE (e.g., \textsc{incorrect test case}) and MFA (e.g., \textsc{missing essential system feature}), and most of the testing issues are addressed by adding missing features and fixing syntax and semantic errors in test cases (see Figure \ref{fig:mapping}). We identified multifaceted issues, causes, and solutions regarding microservices testing in this study that highlight several problematic areas for microservices systems, such as \textsc{faulty test cases}, \textsc{debugging}, and \textsc{load test cases} (see Figure \ref{fig:Taxonomy}). We also found several primary (e.g., \cite{camilli2022automated, heorhiadi2016gremlin}) and secondary studies (e.g., \cite{waseem2020testing}) that explore the testing of microservices systems. By considering the testing issues identified from our study and the existing literature, it is worthwhile to propose and develop strategies to automatically test APIs, load, and application security for microservices systems. \textbf{Storage Issues}: One of the major problems that practitioners encounter is related to memory management, as per the build, execution, and deployment requirements of microservices \cite{fazio2016open, soldani2018pains}. It is argued in \cite{soldani2018pains} that ``\textit{storage-related pains started decreasing in 2017}'', yet our study results indicate that storage issues still exist. The results show that 2.02\% of the issues are related to storage, mainly due to limited memory for process execution, and most of the storage issues are addressed by upgrading the memory size for process execution (see Figure \ref{fig:mapping}). However, scaling up memory could pose an additional burden on managing energy, cost, performance, and the required algorithms. Storage issues could also trigger other issues related to performance, reliability, compliance, backup, data recovery, and archiving. Future studies can propose techniques for dynamically assigning storage platforms (e.g., containers, virtual machines) according to the requirements of microservices, which can bring efficiency to the utilization of storage platforms. \textbf{Database Issues}: Another key challenge for microservices systems is managing their databases. Our study identified 2.21\% of the issues as database issues (e.g., \textit{Database Query}, \textit{Database Connectivity}), mostly due to GPE (e.g., \textsc{wrong query parameters}), and most of the database issues are addressed by fixing artifacts and managing infrastructure. Several studies (e.g., \cite{viennot2015synapse, laigner2021data, laigner2021distributed}) explored the use of databases from the perspectives of heterogeneous and distributed database management for event-based microservices systems. 
However, we did not find any study that explored the issues related to databases in microservices systems. Based on the study results, future studies can propose database patterns and strategies for (i) organizing polyglot databases for efficient read-and-write operations by considering performance, and (ii) storing and accessing decentralized and shared data without losing the independence of individual microservices. \textbf{Networking Issues}: The network infrastructure for microservices systems consists of many components (both hardware and software), including but not limited to hosting servers, network protocols, load balancers, firewalls, hardware devices, series of containers, public and private clouds, and a set of common APIs for accessing different components. Our study identified 1.65\% of issues related to networking (e.g., \textsc{hosting and protocols}, \textsc{service accessibility}) during the development of microservices systems, mostly due to GPE (e.g., \textsc{content delivery network (CDN) deployment error}), and most of the networking issues are addressed by managing infrastructure and modifying artifacts. We found a few studies (e.g., \cite{luo2018high, kratzke2017microservices, bhattacharya2019smart}) that mainly focus on networking for containerized microservices and smart proxying for microservices. However, these studies do not report networking issues, causes, and solutions for microservices systems. Future research can (i) provide deeper insights into networking issues in the context of microservices systems and (ii) propose automatic correction methods to fix networking issues, like \textsc{localhost}, \textsc{ip address}, and \textsc{webhook} errors. \textbf{Performance Issues}: The performance of microservices systems is one of the most discussed topics, and the existing studies (e.g., \cite{amaral2015performance, heinrich2017performance, de2016architecture, gan2021sage}) mainly focus on the performance evaluation, monitoring, and workload characterization of microservices systems. We also found one study \cite{WuNoms} that presents a ``\textit{system to locate root causes of performance issues in microservices}''. We identified 1.65\% of issues as performance issues (e.g., \textit{Service Response Delay}, \textit{Resource Utilization}) discussed by practitioners in our study. These issues mainly occur due to MFA (e.g., \textsc{missing resource}) and SD\&IA (e.g., \textsc{wrong dependencies chain}), and are addressed by adding new features and fixing design anomalies. The identified performance issues, causes, and solutions can help (i) further explore performance differences across containerization platforms and combinations of configurations, (ii) propose strategies and techniques for reducing the service response delay when multiple microservices access shared resources, (iii) propose techniques for optimizing resource utilization (e.g., CPU and GPU usage), and (iv) develop a framework for improving scalability to increase microservices performance. \textbf{Typecasting Issues}: Practitioners frequently use multiple programming languages and technologies simultaneously to develop microservices systems. Each of the used programming languages has its own syntax, structure, and semantics. It is often the case that developers need to convert variables from one datatype to another (e.g., double to string) for one or multiple microservices, a process referred to as typecasting. 
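As a hypothetical illustration (the payload and field names below are invented for this example and are not taken from the analyzed projects), consider a Python microservice that consumes a JSON payload produced by a service written in another language, where a numeric amount was serialized as a string:
\begin{verbatim}
import json

# Hypothetical payload from an upstream microservice in which
# "amount" (a double on the producer side) arrives as a string.
payload = json.loads('{"order_id": 42, "amount": "19.99"}')

# Using the field without typecasting fails at runtime:
#   payload["amount"] + 0.05  ->  TypeError (str + float)

# Explicit, defensive conversion at the service boundary:
try:
    amount = float(payload["amount"])  # string -> float
except (TypeError, ValueError) as err:
    raise ValueError(
        f"invalid amount in payload: {payload['amount']!r}") from err

total = round(amount + 0.05, 2)
print(total)  # 20.04
\end{verbatim}
Such conversions at service boundaries are exactly where \textsc{wrong data conversion} faults tend to surface when services written in different languages exchange data.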
Our study identified 1.31\% of issues related to typecasting during the development of microservices systems, mostly due to GPE (e.g., \textsc{wrong data conversion}), and most of the typecasting issues are addressed by managing infrastructure and modifying artifacts. To the best of our knowledge, no study has been conducted that investigates typecasting in the context of microservices systems. Based on the study results, future studies can propose and implement techniques for converting code of one language to another used in microservices systems. \section{Threats to Validity} \label{sec:threats} This section reports the potential threats to the validity of this research and its results, along with mitigation strategies that could help minimize the impacts of the outlined threats, based on \cite{easterbrook2008selecting}. The threats are broadly classified across internal, construct, external, and conclusion validity. \subsection{Internal Validity} Internal validity examines the extent to which the study design, conduct, and analysis answer the research questions without bias \cite{easterbrook2008selecting}. We discuss the following threats to internal validity. \textit{Improper project selection}. The first internal validity threat to our study is an improper selection of OSS projects for executing our research plan. \textit{Mitigation}. We used a multi-step project selection approach to control the possible threat associated with subject system selection (see Phase 1 in Section \ref{sec:RMPhase1} and Figure \ref{fig:researchmethod}). A step-wise, criteria-driven approach to project selection helped us include the relevant (see Table \ref{tab:selectedProjects}) and eliminate the irrelevant open-source microservices projects. \textit{Instrument understandability}. The participants of the interviews and survey may have a different understanding of the interview and survey instruments. \textit{Mitigation}. We adopted Kitchenham and Pfleeger's guidelines for conducting surveys \cite{kitchenham2008personal}. We also piloted both the interview and survey instruments to ensure understandability (see Section \ref{InterviewsProtocol} and Section \ref{PilotSurvey}). Our survey questionnaire was in English. However, during the interviews, we found that a few participants could not conveniently convey their answers in English. Therefore, we asked them to answer in their native languages and later translated their answers into English. \textit{Participant selection and background}: The survey participants may not have sufficient expertise to answer the questions. \textit{Mitigation}. We searched for microservices practitioners through personal contacts and relevant platforms (e.g., LinkedIn and GitHub). We explicitly stated the required participants' characteristics (e.g., roles and responsibilities) in the interview and survey preambles, and we selected only those practitioners with sufficient experience in designing, developing, and operationalizing microservices systems. For example, the average experience of the interview participants is 5.33 years, and they mainly have responsibilities for designing and developing microservices systems (see Table \ref{tab:IntervieweesDemographics} and Figure \ref{fig:demography}). \textit{Interpersonal bias in extracting and synthesizing data}. (i) Interpersonal bias in mining developer discussions and in the analysis process may threaten the internal validity of the study findings. \textit{Mitigation}. 
To address this threat, we defined explicit data collection criteria (e.g., excluding issues that are general questions). We held regular meetings among all the authors during data labelling, coding, and mapping. The conclusions were drawn based on the final consensus among all the authors. (ii) The survey and interview participants may be slightly biased in providing actual answers due to company policies, work anxiety, or other reasons. \textit{Mitigation}. To mitigate this threat, we highlighted the anonymity of the participants and their companies in the preambles of the interview and survey instruments. (iii) The interviewer of the study may be biased toward eliciting preferred answers. \textit{Mitigation}. We sent the interview questions to the interviewees 3 to 4 days before each interview. Hence, they had sufficient time to understand the context of the study and could provide the required feedback on the completeness and correctness of the developed taxonomies. \subsection{Construct Validity} Construct validity focuses on whether the study constructs (e.g., interview protocol, survey questionnaire) are correctly defined~\cite{easterbrook2008selecting}. Microservices issues, causes, and solutions are the core constructs of this study. With this in mind, we identified the following threats. \textit{Inadequate explanation of the constructs}: This threat refers to the study constructs not being sufficiently described. \textit{Mitigation}. To deal with this threat, we prepared the protocols for mining developer discussions and conducting interviews and surveys, and these protocols were continuously improved through internal meetings, feedback, and taxonomy refinements. Mainly, the authors held meetings (i) to establish a common understanding of issues, causes, and solutions (see Figure \ref{fig:IssuesBackground}), (ii) to define the required data items to answer the RQs (see Table \ref{tab:Dataitems}), and (iii) to evaluate the format, understandability, and consistency of the interview and survey questions. We also invited two experts in survey-based research to check the validity and integrity of the survey questions. Based on their feedback, we included Figure \ref{fig:Taxonomy}, Table \ref{tab:CausesTaxnomey}, and Table \ref{tab:SolutionsTaxnomey} as part of the interview and survey questions to improve the participants' understanding of the issues, their causes, and solutions in microservices systems. \textit{Data extraction and survey dissemination platforms}: This threat refers to the authenticity and reliability of the platforms we used for data collection and survey dissemination. \textit{Mitigation}. (i) To mitigate this threat for data collection, we identified 2,641 issues from the issue tracking systems of 15 open-source microservices systems on GitHub, which were confirmed by the developers. (ii) We disseminated the survey on social media and in professional networking groups. The main threat to survey dissemination platforms is identifying the relevant groups. We addressed this threat by reading the group discussions about microservices system development and operations. After ensuring that the group members frequently discussed various microservices aspects, we posted our survey invitation. \textit{Inclusion of valid issues and responses}: This threat is mainly related to the inclusion and exclusion of issues from the issue tracking systems and responses from survey participants. \textit{Mitigation}. 
We defined explicit criteria for including and excluding issues from the issue tracking systems (see Section \ref{sec:Ext&Syn}) and responses from survey participants (see Section \ref{WebSurvey}). For example, when screening issues from the issue tracking systems, we excluded issue discussions consisting of general questions, ideas, and proposals. Similarly, we excluded responses that were either randomly filled or filled by research students and professors who were not practitioners. \subsection{External Validity} External validity refers to the extent to which the study findings can be generalized to other contexts \cite{easterbrook2008selecting}. The sample size and sampling techniques might not provide a strong foundation to generalize the study results. This is the case for all three data collection methods (mining developer discussions from open-source microservices projects, interviews, and a survey) used in this study. \textit{Mitigation}. (i) To minimize this threat, we derived the taxonomies of issues, causes, and solutions from a relatively large number of issues from 15 sampled open-source microservices projects belonging to different domains, involving multiple researchers. (ii) The taxonomies have been evaluated and improved by taking the feedback of experienced microservices practitioners through the interviews. (iii) A cross-sectional survey was conducted based on the derived and evaluated taxonomies. Overall, we received 150 valid responses from 42 countries across 6 continents (see Figure~\ref{fig:demography}(a)), with varying experience (from less than one year to more than ten years, see Figure~\ref{fig:demography}(b)), different roles (see Figure~\ref{fig:demography}(c)), and diverse domains (see Figure~\ref{fig:demography}(d)) and programming languages and technologies (see Figure~\ref{fig:demography}(e)) used to develop microservices systems. We acknowledge that the findings of this study may not generalize to or represent the issues, causes, and solutions for all types of microservices systems. However, the size of the set of investigated issues, the number of microservices systems, the interviews with microservices practitioners, and the survey population can strengthen the overall generalizability of the study results. \subsection{Conclusion Validity} Conclusion validity is related to dealing with threats that affect drawing correct conclusions in empirical studies \cite{easterbrook2008selecting}. The concluded findings may be based on a single author's understanding and experience, and conflicts between the authors on the conclusions may not be sufficiently discussed or resolved. \textit{Mitigation}. To address this threat, the first author extracted and analyzed the study data (i.e., from the developer discussions, interviews, and survey). All other authors comprehensively reviewed the data through multiple meetings. Conflicts on data analysis results were resolved through mutual discussions and brainstorming among all the authors. Different researchers can interpret the inclusion and exclusion criteria differently, which influences the conclusions of the study. \textit{Mitigation}. To minimize the effect of this issue, we applied the explicitly defined inclusion and exclusion criteria during the study data screening (see Section \ref{sec:Ext&Syn}, Step B). Finally, the interpreted results and conclusions were confirmed through several brainstorming sessions among all the authors. 
\section{Introduction} \label{sec:introduction} The software industry has recently witnessed the growing popularity of the Microservices Architecture (MSA) style as a promising design approach to develop applications that consist of multiple small, manageable, and independently deployable services \cite{fowler2014microservices, dragoni2017microservices}. Software development organizations may adopt or plan to use the MSA style for various reasons. Specifically, some of them want to increase the scalability of applications using the MSA style, others use it to quickly release new products and services to customers, and it is also argued that the MSA style can help build autonomous development teams \cite{Davide2017Processes, jamshidi2018microservices}. From an architectural perspective, a microservices system (a system that adopts the MSA style) entails a significant degree of complexity, both at the design phase and at runtime configuration \cite{newman2020building}. This implies that the MSA style brings unique challenges for software organizations, and many quality attributes may be (positively or negatively) influenced \cite{dragoni2017microservices, waseem2020systematic}. For example, service-level security may be impacted because microservices are developed and deployed with various technologies (e.g., Docker containers \cite{combe2016docker}) and tools that are potentially vulnerable to security attacks \cite{newman2020building, yu2019survey}. Data management is also influenced because each microservice needs to own its domain data and logic \cite{wagner2018net}. This can, for example, challenge achieving and managing data consistency across multiple microservices. Zimmermann argues that MSA is not entirely new compared to Service-Oriented Architecture (SOA) (e.g., ``microservices constitute one particular implementation approach to SOA -- service development and deployment'') \cite{zimmermann2017microservices}. Similarly, Márquez and Astudillo discovered that some existing design rationale and patterns from SOA fit the context of MSA~\cite{1-marquez2018actual}. However, an important body of literature (e.g., \cite{dragoni2017microservices, esposito2016challenges, jamshidi2018microservices, yarygina2018overcoming}) has concluded that there are overwhelming differences between microservices systems, monolithic systems, and traditional service-oriented systems in terms of design, implementation, testing, and deployment. Gupta and Palvankar indicated that even having SOA experience and background can lead to suboptimal decisions (e.g., excessive service calls) in microservices systems \cite{Dinkar2020Pitfalls}. Hence, microservices systems may have an \textit{additional} and \textit{specific} set of \textbf{issues}. Borrowing the idea from \cite{dragoni2017microservices, ren2020understanding}, we define \textbf{issues} in this study as errors, faults, failures, and bugs that occur in a microservices system and consequently impact its quality and functionality. Hence, there is a need to leverage existing methods or derive new practices, techniques, and tools to address the specific and additional issues in microservices systems. Recently, a number of studies have investigated particular issues (e.g., code smells \cite{taibi2018definition}, debugging \cite{ZhouIEEE}, performance \cite{WuNoms}) in microservices systems. 
Despite these efforts, there is no in-depth and comprehensive study on the nature of the different types of issues that microservices developers face, the potential causes of these issues, and possible fixing strategies for these issues. Jamshidi \textit{et al.} believe that this can be partially attributed to the fact that researchers have limited access to industry-scale microservices systems \cite{jamshidi2018microservices}. The empirical knowledge on the nature of issues occurring in microservices systems can be useful from the following perspectives: (i) understanding common issues in the design and implementation of microservices systems and how to avoid them, (ii) identifying trends in the types of issues that arise in microservices systems and how to address them effectively, (iii) experienced microservices developers can be allocated to address the most frequent and challenging issues, (iv) novice microservices developers can quickly be informed of empirically-justified issues and avoid common mistakes, and (v) the industry and academic communities can synergize theory and practice to develop tools and techniques for the frequently reported issues in microservices systems. \textbf{Motivating Example}: We now contextualize the issues, causes, and solutions based on an example illustrated in Figure \ref{fig:IssuesBackground}. The example is taken from the Spinnaker project, an open-source microservices project hosted on GitHub (see Table \ref{tab:selectedProjects}), and annotated with numbering to represent a sequence among the reported issue, its cause(s), and the solution(s) to resolve the issue. Figure \ref{fig:IssuesBackground} shows auxiliary information about the Spinnaker project, such as the project description, stars, and contributors. As shown in the example, a contributor, typically a microservices developer, writes code and may provide additional details of the code in the form of the developer's comments. Once the code is compiled, the contributor reports a permission denied \textit{issue}, highlighting ``\textit{pipeline save with Admin account fails with permission denied}''. As the next step, the same or other contributors highlight the \textit{cause} of the issue as ``\textit{Spinnaker user does not have access to the service account}''. As the last step, an individual or a community of developers provides a \textit{solution}, such as ``\textit{Allow the admin users to save the accounts}'', together with a code snippet to resolve the issue. Once the issue is resolved, the contributor who highlighted it on GitHub marks it as a closed issue. We are only interested in analyzing issues that have been marked as closed to ensure that a solution to resolve the issue exists. As shown in Table \ref{tab:selectedProjects}, the Spinnaker project has 4,595 closed issues and 121 contributors. \begin{figure}[!htbp] \centering \includegraphics[scale=0.65]{Figures/IssuesBackground.pdf} \caption{An example of an issue, its cause, and the solution} \label{fig:IssuesBackground} \end{figure} This work aims to \textit{systematically and comprehensively study and categorize the issues that developers face in developing microservices systems, the causes of the issues, and the solutions (if any)}. To this end, we conducted a mixed-methods empirical study following the guidelines proposed by Easterbrook and his colleagues \cite{easterbrook2008selecting}. 
We collected the data by (i) mining 2,641 issues from the issue tracking systems of 15 open-source microservices systems on GitHub, (ii) conducting 15 interviews, and (iii) deploying an online survey completed by 150 practitioners to develop the taxonomies of the issues, their causes, and solutions in microservices systems. The key findings of this study are: \begin{enumerate} \item The \textit{issue} taxonomy consists of 19 categories, 54 subcategories, and 402 types, indicating the diversity of issues in microservices systems. The top three categories of issues are Technical Debt, Continuous Integration and Delivery, and Exception Handling. \item The \textit{cause} taxonomy consists of 8 categories, 26 subcategories, and 228 types, in which General Programming Errors, Missing Features and Artifacts, and Invalid Configuration and Communication are the most frequently reported causes. \item The \textit{solution} taxonomy consists of 8 categories, 32 subcategories, and 177 types of solutions, in which the top three categories of solutions for microservices issues are Fix Artifacts, Add Artifacts, and Modify Artifacts. \item The overall \textit{survey findings} confirm the taxonomies of the issues, their causes, and solutions in microservices systems and also indicate no major statistically significant differences in practitioners' perspectives on the developed taxonomies. \end{enumerate} This paper extends our previous work \cite{waseem2021nature} by adding two new research questions (\textbf{RQ3} and \textbf{RQ4}) and by expanding and enhancing the results of \textbf{RQ1} and \textbf{RQ2} with an increased volume and variety of data and a mixed-methods approach. Specifically, we explored 10 more open-source microservices systems on GitHub (now 15 projects), interviewed 15 practitioners, and conducted an online survey with 150 microservices practitioners to get their perspectives on the proposed taxonomies of the issues, their causes, and solutions in microservices systems, as well as the mapping between the issues, causes, and solutions. Our study makes the following key contributions: \begin{enumerate} \item We developed the taxonomies of the issues, their causes, and solutions in microservices systems based on a qualitative and quantitative analysis of 2,641 issue discussions among developers on GitHub, 15 interviews, and an online survey completed by 150 practitioners. \item We provided the mapping between the issues, causes, and solutions in microservices systems, along with promising research directions on microservices systems that require more attention. \item We made the dataset of this study available online \cite{replpack}, which includes the data collected and analyzed from GitHub and microservices practitioners, as well as detailed hierarchies of the taxonomies of issues, causes, and solutions, to enable the replication of this study and future research. \end{enumerate} The remainder of the paper is structured as follows. Section \ref{sec:methodology} details the research methodology employed. Section \ref{sec:results} presents the results of our study. Section \ref{Discussion} discusses the relationship between the issues, causes, and solutions, along with the implications and the threats to the validity of our results. Section \ref{RelatedWork} reviews related work, and Section \ref{sec:conclusion} draws conclusions and outlines avenues for future work. 
\section{Methodology} \label{sec:methodology} The research methodology of this study consists of three phases, as illustrated in Figure \ref{fig:researchmethod}. Given the nature of this research and the formulated research questions (see Section \ref{RQs}) on issues, causes, and solutions in microservices projects, we decided to conduct a mixed-methods study. Our study collected data from microservices projects hosted on GitHub, interviews, and a web-based survey. During \textbf{Phase 1}, we derived the taxonomies of 386 types of issues, 217 types of causes, and 177 types of solutions by mining and analyzing microservices practitioners' discussions in the issue tracking systems of 15 open-source microservices projects hosted on GitHub. During \textbf{Phase 2}, we interviewed 15 microservices practitioners to extend and verify the taxonomies and identified an additional 14 types of issues, 20 types of causes, and 22 types of solutions. During \textbf{Phase 3}, we surveyed 150 microservices practitioners using a web-based survey to validate the outcomes of Phase 1 and Phase 2, i.e., using practitioners' perspectives and feedback to validate the extracted types of issues, their causes, and solutions. \subsection{Research Questions} \label{RQs} We formulated the following research questions (RQs). \begin{tcolorbox} [sharp corners, boxrule=0.1mm,] \small \textbf{RQ1}: What issues occur in the development of microservices systems? \end{tcolorbox} \textbf{\underline{Rationale}}: \textbf{RQ1} aims to systematically identify and taxonomically classify the types of issues that occur in microservices systems. The answer to \textbf{RQ1} provides a comprehensive understanding of the issues (e.g., the most frequent issues) of microservices systems. \begin{tcolorbox}[sharp corners, boxrule=0.1mm,] \textbf{RQ2}: What are the causes of the issues that occur in microservices systems? \end{tcolorbox} \textbf{\underline{Rationale}}: The aim of \textbf{RQ2} is to investigate and classify the root causes behind the issues identified in RQ1 and to map the causes to the issues. The answer to \textbf{RQ2} helps practitioners avoid common issues in microservices systems. \begin{tcolorbox}[sharp corners, boxrule=0.1mm,] \textbf{RQ3}: What solutions are proposed to fix the issues that occur in microservices systems? \end{tcolorbox} \textbf{\underline{Rationale}}: The aim of \textbf{RQ3} is to identify the solutions for the issues according to their causes and to develop the taxonomy of solutions. The answer to \textbf{RQ3} helps to understand the fixing strategies for addressing microservices issues. \begin{tcolorbox}[sharp corners, boxrule=0.1mm,] \textbf{RQ4}: What are the practitioners' perspectives on the taxonomies of the identified issues, causes, and solutions in microservices systems? \end{tcolorbox} \textbf{\underline{Rationale}}: The taxonomies of issues, causes, and solutions constructed from the results of RQ1, RQ2, and RQ3 are based on 15 open-source microservices systems and interviews with 15 practitioners. \textbf{RQ4} aims to evaluate the taxonomies of issues, causes, and solutions built in RQ1, RQ2, and RQ3 by conducting a relatively large-scale online survey. 
\begin{figure}[!htbp] \centering \includegraphics[width=0.5\textwidth]{Figures/ResearchMethod.PDF} \caption{An overview of the research method} \label{fig:researchmethod} \end{figure} \subsection{Phase 1 - Mining Developer Discussions} \label{sec:RMPhase1} This phase aims to systematically identify and synthesize the issues, their causes, and solutions in open-source microservices systems on GitHub. For an objective and fine-grained presentation of the methodological details, this phase is divided into two steps, each elaborated below based on the illustrative view in Figure \ref{fig:researchmethod}. \subsubsection{Step A – Identify and Select MSA-based OSS Projects} The specified RQs require us to identify and select MSA-based OSS projects, representing a repository of developer discussions and knowledge, to extract the issues, their causes, and solutions. The RQs therefore guided the development of search strings, based on the recommendations and steps for string composition from \cite{SystematicSearchMap2018}, to retrieve developer discussions on microservices projects deployed on GitHub \cite{surana2020tool}. We formulated the search string using the format [\textit{keyword-1} [OR logic] \textit{keyword-2} … [OR logic] … \textit{keyword-N}], where the keywords represented the synonyms [‘\textit{micro service}’ OR ‘\textit{micro-service}’ OR ‘\textit{microservice}’ OR ‘\textit{Micro service}’ OR ‘\textit{Micro-service}’ OR ‘\textit{Microservice}’]. To extract MSA issues, we selected GitHub, which is one of the most popular and rapidly growing platforms for social coding and community-driven collaborative development of OSS systems. GitHub represents a modern genre of software forges that unifies traditional methods of development (e.g., version control, code hosting) with features of socio-collaborative development (e.g., issue tracking, pull requests) \cite{kalliamvakou2016depth}. The variety and magnitude of the OSS systems available on GitHub also inspired our choice to investigate the largest OSS platform in the world, with approximately 40 million users and 28 million publicly available project repositories. Based on the search string, we searched the titles and descriptions of the OSS projects in the GHTorrent dump hosted on Google Cloud. The search helped us retrieve a total of 2,690 potentially relevant MSA-based OSS projects for investigation. To shortlist and eventually select the projects pertinent to the outlined RQs, we applied multi-criteria filtering \cite{GitHubFilter2020}, considering a multitude of aspects such as the popularity or perceived significance of a project in the developers' community (represented as total stars), adoption by or interests of developers (total forks), and the total number of developers involved in the project (total contributors). As shown in Figure \ref{fig:researchmethod} (Step B), we selected only the projects that have (i) more than 10 stars and forks, (ii) English as the project language, and (iii) three or more contributors. This led us to shortlist a total of 167 microservices projects. 
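As a minimal sketch of this selection step (the record fields and the \texttt{projects} input below are hypothetical placeholders for records retrieved from the GHTorrent dump, not our actual script), the keyword search and multi-criteria filter can be expressed in Python as follows:
\begin{verbatim}
import re

# Keyword synonyms from the search string; case-insensitive matching
# also covers the capitalized variants.
KEYWORDS = re.compile(r"micro[\s-]?service", re.IGNORECASE)

def is_candidate(project: dict) -> bool:
    """Keyword match on title/description plus the multi-criteria
    filter: >10 stars, >10 forks, English, and >=3 contributors."""
    text = f"{project.get('title', '')} {project.get('description', '')}"
    return (bool(KEYWORDS.search(text))
            and project.get("stars", 0) > 10
            and project.get("forks", 0) > 10
            and project.get("language") == "English"
            and project.get("contributors", 0) >= 3)

# Hypothetical usage over records exported from the GHTorrent dump:
projects = [
    {"title": "pitstop",
     "description": "A sample microservices application",
     "stars": 890, "forks": 490,
     "language": "English", "contributors": 15},
]
shortlist = [p for p in projects if is_candidate(p)]
print(len(shortlist))  # 1
\end{verbatim}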
To eliminate instances of potential false positives (such as misleading project names and mockup code), i.e., to avoid bias in construct validity, we contacted the top three contributors of each project via their publicly available email IDs to clarify the following: \begin{enumerate} \item \textbf{Correct Interpretation of the Project}: Please confirm if our interpretation of your project (Project URL and Name as an identifier) is appropriate for its design and implementation based on MSA. Also, please help us clarify if this project (e.g., tool, framework, solution) supports the development of microservices systems or if this project is developed using MSA. \item \textbf{MSA-based Characteristics and/or Features of the Project}: What features and/or characteristics of the project reflect MSA being used in the project? \item \textbf{Optional Question}: Could you please help us identify (the names, URLs, etc. of) any other OSS projects that are designed or developed using MSA? \end{enumerate} We contacted a total of 426 contributors, 39 of whom responded to our query (i.e., a 9.2\% response rate). Based on the contributors' confirmation, we selected 15 MSA-based OSS projects, as detailed in Table \ref{tab:selectedProjects}, which lists the name, the total number of issues, contributors, forks, and stars for each project. {\renewcommand{\arraystretch}{1} \begin{table} \centering \scriptsize \caption{Identified open-source microservices systems} \label{tab:selectedProjects} \begin{tabular}{|p{2.8cm}|c|c|c|c|} \hline \textbf{Project Name} & \textbf{\#Issues} & \textbf{\#Contributors} & \textbf{\#Forks} & \textbf{\#Stars}\\\hline Spinnaker\tablefootnote{\url{https://github.com/spinnaker/spinnaker/issues}} & 4595 & 121 & 1.2K & 8.5K \\\hline Cortex\tablefootnote{\url{https://github.com/cortexproject/cortex/issues}} & 1120 & 226 & 681 & 4.7K \\\hline Jaeger\tablefootnote{\url{https://github.com/jaegertracing/jaeger/issues}} & 995 & 239 & 1.9K & 15.8K \\\hline eShopOnContainers\tablefootnote{\url{https://github.com/dotnet-architecture/eShopOnContainers/issues}} & 986 & 157 & 8.9K & 20.7K \\\hline Goa\tablefootnote{\url{https://github.com/goadesign/goa}} & 930 & 97 & 485 & 4.7K \\\hline Light-4j\tablefootnote{\url{https://github.com/networknt/light-4j}} & 584 & 37 & 588 & 3.4K \\\hline Moleculer\tablefootnote{\url{https://github.com/moleculerjs/moleculer}} & 473 & 102 & 497 & 5.1K \\\hline Microservices-demo\tablefootnote{\url{https://github.com/microservices-demo/microservices-demo/issues}} & 287 & 55 & 2.1K & 3.1K \\\hline Cliquet\tablefootnote{\url{https://github.com/mozilla-services/cliquet}} & 207 & 19 & 20 & 65 \\\hline Deep-framework\tablefootnote{\url{https://github.com/MitocGroup/deep-framework}} & 174 & 12 & 75 & 537 \\\hline Scalecube\tablefootnote{\url{https://github.com/scalecube/scalecube}} & 130 & 20 & 90 & 547 \\\hline Lelylan\tablefootnote{\url{https://github.com/lelylan/lelylan/issues}} & 123 & 7 & 93 & 1.5K \\\hline Open-loyalty\tablefootnote{\url{https://github.com/DivanteLtd/open-loyalty/issues}} & 175 & 14 & 80 & 300 \\\hline Spring PetClinic\tablefootnote{\url{https://github.com/spring-petclinic/spring-petclinic-microservices}} & 69 & 32 & 1.4K & 1.1K \\\hline Pitstop\tablefootnote{\url{https://github.com/EdwinVW/pitstop}} & 39 & 15 & 490 & 890 \\\hline \end{tabular} \end{table}} \subsubsection{Step B – Synthesize Issues, Causes, and Solutions} \label{sec:Ext&Syn} After the projects were identified, as illustrated in Figure \ref{fig:researchmethod}, extracting and synthesizing the issues was divided into the following five parts. 
\textbf{Raw Data Collection}: We chose 15 microservices projects (see Table \ref{tab:selectedProjects}) as the source for building the dataset to answer our RQs. These 15 projects were chosen because they are significantly larger than other microservices projects hosted on GitHub. Hence, it is highly likely that their contributors had more discussions about the types of issues, causes, and solutions in issue tracking systems. The discussions relating to a software system can usually be captured in issue tracking systems \cite{Viviani21}. We initially extracted 10,222 issue titles, issue links, issue opening and closing dates, and the number of contributors for each issue through our customized Python script (see the Raw Data sheet in \cite{replpack}). We stored this information in MySQL and exported it into MS Excel sheets for further processing. We extracted only closed issues because doing so increases the chances of answering all our RQs (e.g., finding solutions). \textbf{Issues Screening}: The first author further scanned the 10,222 issues to check whether an issue had been closed or was still open. All open issues were discarded because an open issue is ongoing, with many of its causes unknown and most likely solution(s) not found. Furthermore, after selecting only the closed issues, the first author further eliminated (i) issues without a detailed description, (ii) general questions, opinions, feedback, and ideas, (iii) feature requests (e.g., enhancements, proposals), (iv) announcements (e.g., about new updates), (v) duplicated issues, (vi) issues that had only one participant, and (vii) stale issues. After this step, we got 5,115 issues (see the Selected Issues (Round 1) sheet in \cite{replpack}). We conducted a second round of screening on these 5,115 issues to check whether they were related to our RQs, and after comprehensively analyzing them, we found 2,641 issues that were related to our RQs (see the Selected Issues (Round 2) sheet in \cite{replpack}). \textbf{Pilot Data Extraction}: To gain initial insights into the issues, two authors (i.e., the first and fourth) performed pilot data extraction based on 150 issues, i.e., 5.67\% of the 2,641 screened issues. The authors focused on issue-specific data, such as the issue description (i.e., textual details specified by contributors), type of issue (e.g., testing issue, deployment issue), and frequency of issue (i.e., number of occurrences). The pilot data extraction was counter-checked by the second and third authors to verify and refine the details before the final data extraction. \textbf{Issues Extraction}: Issues, causes, solutions, and their corresponding data were extracted based on the guidelines for mining software engineering data from GitHub~\cite{gousios2017mining}. The data items (D1-D6) used for preparing the template for issues, causes, and solutions extraction are presented in Table \ref{tab:Dataitems}. Data items (D1-D3) document general information, including the issue ID, issue title, and issue link, whereas data items (D4-D6) document data to answer RQ1-RQ3. \textbf{Data Analysis}: To synthesize the issues, we used the thematic analysis approach \cite{cruzes2011recommended} to identify the categories of issues, causes, and solutions. The thematic analysis approach is composed of five steps. (i) Familiarizing with the data: The first author repeatedly read the project contributors' discussions and documented all the discussed key points about issues, causes, and solutions. 
(ii) Generating initial codes: After data familiarization, the first author generated an initial list of codes from the extracted data (see the Initial Codes sheet in \cite{replpack}). (iii) Searching for the types of issues: The first and second authors analyzed the initially generated codes and grouped them into specific types of issues. (iv) Reviewing types of issues: All the authors reviewed and refined the coding results with the corresponding types of issues. We separated, merged, and dropped several issues through discussion among all the authors. (v) Defining and naming categories: We defined and further refined all the types of issues, causes, and solutions under precise and clear subcategories and categories (see Figure \ref{fig:Taxonomy}). We introduced three levels of categorization for managing the identified issues, causes, and solutions. First, we organized the types of issues, causes, and solutions under a specific subcategory (e.g., \textsc{service dependency} in \textit{Service Design Debt}). Then we arranged the subcategories under a specific category (e.g., \textit{Service Design Debt} in \textbf{Technical Debt}). {\renewcommand{\arraystretch}{1} \begin{table*}[t] \centering \scriptsize \caption{Data items extracted from open-source microservices projects and their relevant RQs} \label{tab:Dataitems} \begin{tabular}{|p{0.3cm}|p{2.2cm}|p{8.8cm}|p{1.2cm}|} \hline \textbf{\#}&\textbf{Data Item}&\textbf{Description}&\textbf{RQ}\\\hline D1 & Index & The ID of the issue & Overview \\\hline D2 & Issue title & A title of the issue from a contributor that describes what the issue is all about & Overview \\\hline D3 & Issue link & The URL address of the issue & Overview \\\hline D4 & Issue key points & Key points from the developer discussion for issue identification & RQ1\\\hline D5 & Causes key points & Key points from the developer discussion for cause identification & RQ2\\\hline D6 & Solution key points & Key points from the developer discussion for solution identification & RQ3\\\hline \end{tabular} \end{table*}} To minimize bias during data analysis, each step, i.e., classification, mapping, and documentation, was cross-checked and verified independently of the individual(s) involved in data synthesis. Documentation of the results as answers to the RQs is presented as a taxonomical classification of issues (Section \ref{sec:results_RQ1}), causes of issues (Section \ref{sec:results_RQ2}), solutions to address the issues (Section \ref{sec:results_RQ3}), and a mapping of issues to their causes and solutions (Section \ref{Sec:MappingIssueCausesSolutions}), complemented by details of validity threats (Section \ref{sec:threats}). \subsection{Phase 2 - Conducting Practitioner Interviews} \label{sec:RMPhase2} Since we aim to understand the issues, causes, and solutions of microservices systems from the practitioners' perspective, we opted to conduct interviews to confirm and improve the developed taxonomies. The interview process consists of the following steps. \subsubsection{Preparing a Protocol} \label{InterviewsProtocol} The first author conducted 15 online interviews with microservices practitioners through Zoom, Tencent Meeting, and Microsoft Teams. Before conducting the actual interviews, we also conducted two pilot interviews with microservices practitioners to check the understandability and comprehensiveness of the interview questions; their answers were not included in our dataset.
In total, we conducted 15 actual interviews, and each interview took 35-45 minutes. It is argued that conducting 12 to 15 interviews with homogeneous groups is enough to reach saturation \cite{guest2006many}. After conducting 15 interviews, we observed saturation in the answers to our interview questions and therefore stopped conducting further interviews. We conducted semi-structured interviews based on an interview guide, which contains a general group of topics (e.g., issues, causes, solutions) and open-ended questions rather than predetermined answers. The interview consisted of three sections. In the first section, we asked 6 demographic questions to understand the interviewee’s background in microservices. This section covered various aspects, including the country of the practitioner, major responsibilities, overall experience in the IT industry, experience with implementing microservices systems, the work domain of the organization, and programming languages for developing microservices systems. In the second section, we asked three open-ended questions about the types of issues, the causes of issues, and the solutions to issues during the development of microservices systems. The purpose of this section was to allow the interviewees to spontaneously express their views about the issues developers face in developing microservices systems, the causes of those issues, and resolution strategies, without the interviewer biasing their responses. In the third section, we presented the three taxonomies to the interviewees and asked them to indicate any missing issues, causes, and solutions that had not been explicitly mentioned. All three taxonomies were derived by identifying, analyzing, and synthesizing the developer discussions from 15 open-source microservices systems. The taxonomy of issues consists of 386 types of issues, 54 issue subcategories, and 18 issue categories. The taxonomy of causes consists of 217 types of causes, 26 cause subcategories, and 8 cause categories. The taxonomy of solutions consists of 171 types of solutions, 33 solution subcategories, and 8 solution categories. At the end of each interview, we thanked the interviewee and briefly informed them of our next plans. \subsubsection{Conducting Interviews} We recruited 15 microservices practitioners from IT companies in 10 countries: Australia (1), Canada (3), China (1), Chile (1), India (1), Pakistan (2), Sweden (2), Norway (1), the United Kingdom (1), and the United States of America (2). Interviewees were recruited by emailing our professional contacts in each country. We informed the potential participants that the interview was entirely voluntary, with no compensation. With this approach, we recruited 15 interviewees with varied years of experience. We refer to the interviewees as P1 to P15. Most of the interviewees are software architects and application developers. Their average experience in the IT industry is 10.33 years (Minimum: 5, Maximum: 16, Median: 10, Mode: 9, Standard Deviation: 3.26). The interviewees’ average experience in microservices is 5.33 years (Minimum: 3, Maximum: 8, Median: 5, Mode: 4, Standard Deviation: 1.67). \subsubsection{Data Analysis} \label{InterviewsDataAnalysis} We applied a thematic analysis method \cite{braun2006using} to analyze the recorded interviews. Before applying the thematic analysis method, the first author prepared text transcripts from the audio recordings. The first author then read the interview transcripts and coded them using the MAXQDA tool.
We dropped several sentences unrelated to ``microservices issues, causes, and solutions''. After removing the extraneous information from the transcribed interviews, the first author read and coded the contents of the interview transcripts to obtain the answers to the interview questions. To ensure the quality of the codes, the second author verified the initial codes created by the first author and provided suggestions for improvement. After incorporating these suggestions, we generated a total of 28 types of issues (classified into 15 subcategories and 11 categories), 28 types of causes (classified into 15 subcategories and 7 categories), and 30 types of solutions (classified into 4 subcategories and 3 categories). Later, we exported the analyzed interview data from the MAXQDA tool to an MS Excel sheet (i.e., the Interview Results sheet in \cite{replpack}) to incorporate the interview data into the taxonomies of issues, causes, and solutions in microservices systems. During the interviews, we collected 48 instances of issues, 31 instances of causes, and 36 instances of solutions. Among these instances, we identified 14 types of issues, 21 types of causes, and 23 types of solutions that were not part of the taxonomies derived from the 15 open-source microservices systems. Except for the \textsc{service size}, \textsc{operational and tooling overhead}, and \textsc{team management} issues, all of the issues, causes, and solutions identified through the interviews can be classified under the existing categories (i.e., the output of Phase 1). The instances of issues mentioned by the interviewees include CI/CD Issues (9), Security Issues (6), Service Execution and Communication Issues (5), Database Issues (5), Organizational Issues (5), Testing Issues (5), Monitoring Issues (4), Performance Issues (4), Update and Installation Issues (3), Configuration Issues (1), and Technical Debt (1). The causes mentioned by the interviewees include Service Design and Implementation Anomalies (20), Poor Security Management (2), Legacy Versions, Compatibility, and Dependency Problems (2), Invalid Configuration and Communication Problems (2), General Programming Errors (2), Fragile Code (2), and Insufficient Resources (1). The solutions mentioned by the interviewees include Add Artifacts (32) and Upgrade Tools and Platforms (4).
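Frequency tallies such as the instance counts above can be reproduced directly from the exported coding sheet. The following is a minimal sketch of such a tally, assuming a CSV export with a hypothetical \texttt{issue\_category} column; the file and column names are illustrative, not those of our replication package.
\begin{verbatim}
# Minimal sketch of tallying coded instances per category from an exported
# coding sheet; the file name and column name are hypothetical.
from collections import Counter
import csv

def tally(path, column):
    """Count how often each category label appears in one column."""
    with open(path, newline="", encoding="utf-8") as f:
        return Counter(row[column]
                       for row in csv.DictReader(f) if row.get(column))

# Example: tally("interview_codes.csv", "issue_category")
# -> Counter({"CI/CD Issues": 9, "Security Issues": 6, ...})
\end{verbatim}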
{\renewcommand{\arraystretch}{1} \begin{table*}[t] \centering \scriptsize \caption{Interviewees and their demographic information} \label{tab:IntervieweesDemographics} \begin{tabular}{|p{0.25cm}|p{3.8cm}|p{2.4cm}|p{2.8cm}|p{1.4cm}|p{1.4cm}|p{1cm}|} \hline \textbf{\#} &\textbf{Responsibilities} & \textbf{Languages}& \textbf{Domain}&\textbf{Overall Exp.} & \textbf{MSA Exp.} &\textbf{Country}\\\hline P1 & Software Architect, Developer & Python, Java, Node.JS & E-commerce, Healthcare & 14 Years & 6 Years & Sweden \\\hline P2 & Developer & Java with Spark & E-commerce & 9 Years & 5 Years & USA \\\hline P3 & Software Architect & Python, Go, Java & E-commerce & 13 Years & 6 Years & UK \\\hline P4 & Software Architect, Developer & Python, Java & Educational ERP & 7 Years & 3 Years & Pakistan \\\hline P5 & Software Architect & Python, Java & Internet of Things & 12 Years & 7 Years & Canada \\\hline P6 & Software Architect, Developer & Java, Node.JS, Python & Healthcare & 15 Years & 8 Years & Australia \\\hline P7 & Software Engineer & C\#.Net & E-commerce, Banking & 10 Years & 4 Years & Canada \\\hline P8 & DevOps Consultant & Kotlin, Python & Network applications & 9 Years & 6 Years & Canada \\\hline P9 & Software Architect & Java & Education, Healthcare & 5 Years & 4 Years & Chile \\\hline P10 & Application Developer, Architect & Java & Financial (Insurance) & 10 Years & 8 Years & Sweden \\\hline P11 & DevOps Consultant & Java, Kotlin, Python & Telecommunication & 6 Years & 3 Years & Norway \\\hline P12 & Application Developer & Angular, Python & Manufacturing & 8 Years & 4 Years & China \\\hline P13 & Principal Consultant & Swift & Transportation & 9 Years & 5 Years & USA \\\hline P14 & Azure Technical Engineer & JavaScript, Golang & Embedded systems & 12 Years & 4 Years & Pakistan \\\hline P15 & Software Architect & Ruby, UML & Payment applications & 16 Years & 7 Years & India \\\hline \end{tabular} \end{table*}} \subsection{Phase 3 - Conducting a Survey} \label{sec:RMPhase3} A questionnaire-based survey was used to evaluate the taxonomies of issues, causes, and solutions derived from mining developer discussions and conducting practitioner interviews. We followed Kitchenham and Pfleeger’s guidelines for conducting surveys \cite{kitchenham2008personal} and used an anonymous survey to increase the response rate \cite{tyagi1989effects}. \subsubsection{Recruitment of Participants and Conducting the Pilot Survey} \label{PilotSurvey} After the survey design, we needed to (i) select the survey participants and (ii) conduct a pilot survey for an initial assessment (e.g., time taken, clarity of statements) and to add, remove, and refine questions. To select potential respondents, we used the following contact channels to spread the survey broadly to a wide range of companies from various locations worldwide: (i) professional contacts, researchers of industrial track publications, and authors of web blogs related to microservices, (ii) practitioners and their communities on social coding platforms (e.g., GitHub, Stack Overflow), and (iii) social and professional online networks (LinkedIn, Facebook, Twitter, Google Groups). In the survey invitation email, we also requested the potential participants to share the survey invitation with individuals or groups they deemed relevant participants.
Before sending out the invitations, we ensured that we contacted only individuals with experience in some aspect of MSA design, development, and/or engineering, based on their professional profiles, such as code commits, industrial publications, and professional designations. Based on publicly available email addresses, we first sent survey invitations to a selected set of 30 participants for a pilot survey. Out of the 30 participants contacted, 10 from 7 countries replied (a response rate of 33.33\%). The pilot survey helped us refine the survey questionnaire by restructuring the sections and rephrasing some questions to ensure that (i) the length of the survey is appropriate, (ii) the terms used in the survey questions are clear and understandable, and (iii) the answers to the survey questions are meaningful. \subsubsection{Conducting the Web-based Survey} \label{WebSurvey} We adopted a cross-sectional survey design, which is appropriate for collecting information at one given point in time across a sample population \cite{kitchenham2008personal}. Surveys can be conducted in many ways, such as Web-based online questionnaires and phone surveys \cite{lethbridge2005studying}. We decided to conduct a Web-based survey because such surveys can help to (i) minimize time and cost, (ii) collect responses from geographically distributed respondents, (iii) minimize time zone constraints, and (iv) save the researchers’ effort in collecting data in a textual, graphical, or structured format \cite{lethbridge2005studying}. To document different types of responses while maintaining the granularity of information, we structured the questionnaire into a total of 12 questions organized under four sections (see the Survey Questionnaire sheet in \cite{replpack}). \textbf{Demographics}: We asked 6 demographic questions about the background of the respondents to identify the (i) country or region, (ii) major responsibilities, (iii) overall work experience in the IT industry, (iv) work experience with microservices systems, (v) work domain of the organization, and (vi) programming languages and implementation technologies for developing microservices systems. The demographic information was collected to (i) identify respondents who do not have sufficient knowledge about microservices, (ii) divide the results into different groups, and (iii) generalize the survey findings for the microservices research and practice community. We received a total of 156 responses and excluded 6 responses that were either randomly filled or filled by research students and professors who were not practitioners. Because the responses to the pilot survey were valid, we decided to include them in the final survey responses. In the end, we had a set of 150 valid responses. \begin{itemize} \item \textbf{Countries}: Respondents came from 42 countries across 6 continents (see Figure \ref{fig:demography}(a)), working in diverse teams and roles to develop microservices systems. The majority of them are from China (16 out of 150, 10.66\%), Pakistan (13 out of 150, 8.66\%), and India (9 out of 150, 6.00\%). \item \textbf{Experience}: We asked the participants about their experience in the IT industry and in the development of microservices systems.
Figure \ref{fig:demography}(b) shows that the majority of the respondents (57 out of 150, 38.00\%) have worked in the IT industry for more than 10 years, and around one third of the respondents (44 out of 150, 29.33\%) have worked with microservices systems for 1 to 3 years. We also received a considerable number of responses from practitioners with more than 10 years of experience working with microservices systems (20 out of 150, 13.33\%). \item \textbf{Professional Roles}: Figure \ref{fig:demography}(c) shows that the majority of the participants were application developers (62 out of 150, 41.33\%), architects (40 out of 150, 26.66\%), and DevOps engineers (29 out of 150, 19.33\%). Note that one participant may have multiple major responsibilities in the company, and consequently, the sum of the percentages exceeds 100\%. \item \textbf{Application Domains}: Figure \ref{fig:demography}(d) shows the domains of the organizations where the microservices practitioners worked. Financial Systems (57 out of 150, 38.00\%), E-commerce (29 out of 150, 19.33\%), and Professional Services (29 out of 150, 19.33\%) are the dominant domains. Note that one organization where a practitioner worked may have one or more application domains. \item \textbf{Programming Languages and Implementation Technologies}: Figure \ref{fig:demography}(e) shows that 38 programming languages and technologies were used to develop microservices systems, of which “Java" (70 out of 150, 46.66\%), “Python" (67 out of 150, 44.66\%), and “Go" (39 out of 150, 26.00\%) are the most frequently used. \end{itemize} \textbf{Microservices Practitioners’ Perspective}: To evaluate the taxonomies of issues, causes, and solutions, we asked microservices practitioners six survey questions (both Likert scale and open-ended, see the Survey Questionnaire sheet in \cite{replpack}). We provided a list of 19 issue categories and asked the survey participants to respond to each category on a 5-point Likert scale (Very Often, Often, Sometimes, Rarely, Never). Similarly, regarding causes, we provided 8 categories and asked practitioners to respond to each category on a 5-point Likert scale (Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree). We also provided 8 categories of solutions and asked practitioners to respond to each category on a 5-point Likert scale. Along with these 5-point Likert scales, we added one option for respondents to indicate their familiarity with the listed categories. We also asked three open-ended questions to identify issues, causes, and solutions missing from the provided categories. All the open-ended responses were further analyzed through thematic classification, and the taxonomies were adjusted accordingly. \begin{figure*}[!htbp] \centering \includegraphics[scale=0.45]{Figures/Demographics.pdf} \caption{Overview of the demographics of survey participants} \label{fig:demography} \end{figure*} \subsubsection{Data Analysis} We used descriptive statistics and constant comparison techniques \cite{glaser2017discovery, hoda2017becoming} to analyze the quantitative (i.e., closed-ended) and qualitative (i.e., open-ended) responses to the survey questions, respectively.
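For the closed-ended responses, the experience-group comparison described in the next paragraph can be carried out with a standard statistical library. Below is a minimal sketch of such a comparison for a single category, assuming SciPy and two hypothetical vectors of Likert responses coded 1-5; it is an illustration, not our analysis script.
\begin{verbatim}
# Minimal sketch of a Wilcoxon rank-sum comparison between two participant
# groups for one category; the Likert scores below are hypothetical.
from scipy.stats import ranksums

# Likert responses coded 1 (Never/Strongly Disagree) to
# 5 (Very Often/Strongly Agree), one vector per experience group.
less_experienced = [4, 3, 5, 2, 4, 3, 4]
more_experienced = [2, 3, 2, 4, 3, 2, 3]

stat, p_value = ranksums(less_experienced, more_experienced)
print(f"statistic = {stat:.3f}, p = {p_value:.3f}")
# A p-value below the chosen significance level (e.g., 0.05) indicates a
# significant difference between the two groups for that category.
\end{verbatim}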
To better understand practitioners’ perspectives through the Likert answers on issues, causes, and solutions in microservices systems, we used the Wilcoxon rank-sum test to compare participants who have $\le$ 6 years of experience (49 responses) with participants who have $\ge$ 6 years of experience with microservices systems (101 responses). We used 6 years as the cut-off point for separating the groups because it lies close to the middle of the practitioners' range of experience with microservices systems. We use the~\faBalanceScale{} symbol to indicate a significant difference between the participant groups with \textbf{Experience $\le$ 6 years} vs. \textbf{Experience $\ge$ 6 years}. \section{Related Work} \label{RelatedWork} The existing research discussed in this section can be broadly classified into two categories: (i) mining OSS repositories to extract reusable knowledge for MSA (Section \ref{miningoss}) and (ii) empirical studies on issues in microservices systems and software systems (Section \ref{empiricalstudiesonissues}). A conclusive summary and comparative analysis (Section \ref{conclusivesummary}) position the proposed research in the context of mining issues from open-source repositories and justify the scope and contributions of this research. \subsection{Mining OSS Repositories to Extract Reusable Knowledge for MSA} \label{miningoss} \subsubsection{Mining Patterns, Anti-Patterns, and Tactics to Architect MSA} From an MSA perspective, architectural patterns \cite{waseem2022} and tactics \cite{R1} represent empirically derived knowledge, best practices, and recurring solutions to address frequently occurring issues during the architecture-centric design and development of service-driven systems \cite{1-marquez2018actual}. To discover patterns and tactics for architecting MSA, a number of studies have analyzed open-source repositories (mining version control histories, searching change logs, and exploring design documents) to investigate historical data that represents recurring solutions as patterns~\cite{1-marquez2018actual, R3, microservices2018, pigazzini2020towards}. The investigation of historical data involves postmortem analysis of development activities (e.g., software refactoring, testing, evolution) \cite{2019Microservices} as well as the knowledge of architects (e.g., developer discussions, code documentation) \cite{bandeira2019we}, which is captured via open-source repositories, such as GitHub, Stack Overflow, or customized databases \cite{1-marquez2018actual, R3, bandeira2019we}. Specifically, Marquez \textit{et al}. \cite{1-marquez2018actual} explored source code artifacts of microservices projects (i.e., configuration files and framework dependencies) available on GitHub to mine 17 architectural patterns addressing a set of quality attributes. Compared to the design and implementation-specific knowledge for microservices reported in \cite{1-marquez2018actual, R3}, Balalaie \textit{et al}. \cite{microservices2018} explored the evolution from traditional architectures to MSA and identified 15 migration patterns that support the evolution of legacy software towards a modular MSA. Compared to the pattern-based solutions discussed above, architectural tactics represent design decisions that focus on improving one specific quality aspect of MSA, such as service availability and fault avoidance for security-critical systems \cite{R1}.
While patterns promote best practices to develop MSA, anti-patterns represent a class of patterns that have been perceived as best practices and commonly used but are proven to be ineffective and/or counterproductive, such as bad smells in service design \cite{taibi2018definition}. Taibi \textit{et al}. \cite{2019Microservices} conducted interviews with microservices developers, accumulating practitioners' experiences to create a taxonomy of 20 anti-patterns, including organizational (team-oriented and technology/tool-oriented) and technical (internal and communication) anti-patterns. The anti-pattern taxonomy aims to help microservices practitioners avoid counterproductive patterns and design decisions. In comparison to the repository mining approaches that investigate code and architecture-centric artifacts available on GitHub, such as \cite{R1, 1-marquez2018actual, pigazzini2020towards}, Bandeira \textit{et al}. \cite{bandeira2019we} reflected a human-centric view of MSA design by analyzing developer discussions on Stack Overflow. Their study classified a total of 1,043 microservice-tagged posts into three categories, namely technical (44.87\%), conceptual (30\%), and non-related (25.13\%) discussions, to explore the processes, tools, issues, and solutions that developers find most exciting or challenging while implementing MSA. \subsubsection{Analyzing Issues for Evolving Legacy Systems to MSA} In recent years, research and practice on the migration of legacy systems to microservices systems have gained significant attention, with empirical evidence derived from industrial practices on the role of MSA in software evolvability \cite{assuring2019, 2020Does}, service extraction from legacy systems \cite{Mazlami, Carvalho}, and the motivations and challenges for legacy migration to MSA \cite{Davide2017Processes}. For instance, Bogner \textit{et al}. \cite{assuring2019} reported the results of 17 structured interviews with 14 microservices practitioners from 10 projects, investigating a multitude of issues, such as tool support, patterns, and processes to manage technical debt (TD) and enhance software evolvability using microservices. The findings of the interviews recommended a number of techniques, including but not limited to code review and service slicing, that can address a number of microservices issues related to service granularity, composition, coupling, and cohesion. Lenarduzzi \textit{et al}. \cite{2020Does} explored the issues of TD when legacy software is migrated to microservices. They investigated four years of history of a project with 280K lines of code and concluded that, although TD initially spiked due to the development of new microservices, after a short period of time TD grew more slowly in the microservices system than in the (legacy) monolithic system. Carvalho \textit{et al}. \cite{Carvalho} conducted an online survey with 26 specialists, followed by individual interviews with 7 of them, to understand the challenges pertaining to the migration of existing systems to microservices architecture. Their results highlight that extracting microservices from legacy components and monolithic source code modules is the most critical challenge during the re-engineering or migration of legacy systems toward microservices systems.
A number of empirical studies, such as \cite{Mazlami, Davide2017Processes}, engaged industrial practitioners to understand the processes, motivations, and challenges related to legacy system migration in general and service extraction in particular. The results of these studies provide recommendations and guidelines, identifying service maintainability and scalability as the most important motivations for the enterprise-scale adoption of MSA. \subsection{Empirical Studies on Issues in Microservices Systems and Software Systems} \label{empiricalstudiesonissues} \subsubsection{Bad Smells and Performance Issues in Microservices Systems} Code smells and architectural smells (a.k.a. bad smells) are often treated as synonymous with microservices anti-patterns, reflecting symptoms of poorly designed microservices that decrease code understandability and increase maintenance effort \cite{2019Microservices, taibi2018definition}. According to practitioners' suggestions and industrial case studies, detecting bad smells is critical for large-scale microservices systems \cite{taibi2018definition}. Walker \textit{et al}. \cite{C-2020Automated} provided tool support for the automatic detection of bad smells in microservices systems; their tool MSANose can detect eleven microservices-specific bad smells within microservices applications using bytecode and/or source code analysis throughout the development process, or even before deployment to production. In addition, a number of other prevalent issues in microservices systems, such as faults, bugs, performance issues, and service decomposition problems, are detailed in \cite{ZhouIEEE, WuNoms, matias2020determining}. More specifically, Wu \textit{et al}. \cite{WuNoms} proposed a solution named Microservice Root Cause Analysis (MicroRCA), which infers the root causes of performance issues by correlating application performance symptoms with the corresponding system resource utilization. MicroRCA addresses performance-related issues in microservices systems by analyzing the resource utilization and throughput of the services. \subsubsection{Taxonomies of Issues and Faults in Software Systems} While exploring issues from the microservices system point of view, it is important not to overlook the most recent taxonomies, empirical studies, and proposed solutions that address a multitude of issues, errors, and faults in non-MSA systems, such as deep learning systems \cite{Humbatova20} and application build systems \cite{lou2020}. In particular, a taxonomy of the types of faults in deep learning systems \cite{Humbatova20} and the types of build issues, their symptoms, and fix patterns \cite{lou2020} inspired our work on microservices issues, causes, and solutions. From the MSA perspective, a recently conducted study \cite{ZhouIEEE} identified typical faults occurring in microservices systems, practices of service debugging, and challenges faced by developers while addressing these faults. For example, a fault such as ``transactional service failure'' is caused by overloaded requests to a third-party (payment gateway) service, ultimately leading to denial-of-service issues.
Our research draws inspiration from these empirically derived taxonomies \cite{Humbatova20, lou2020} and goes beyond issue categorization to investigate the causes of issues and the solutions proposed as resolution strategies for multi-faceted issues (e.g., related to Security, Testing, and Configuration) that impact architecting and implementing microservices systems. \subsection{Conclusive Summary} \label{conclusivesummary} The studies reported in \cite{C-2020Automated, ZhouIEEE, WuNoms} are grounded in the empirical analysis of microservices systems to identify a multitude of issues, such as faults, bad smells, and performance issues faced by practitioners while designing, developing, and deploying microservices systems. To complement empiricism in microservices research and development, our study mined a social coding platform (i.e., 15 open-source microservices systems on GitHub) and identified the issues faced by developers, with the results improved and validated by microservices practitioners. While there is work on mining reusable knowledge (patterns and best practices) from software repositories to develop microservices systems \cite{1-marquez2018actual, R3}, there is no research on mining knowledge that spans the broader microservices system development life cycle to streamline the plethora of issues, their causes, and fixing strategies. Our study primarily focuses on the practitioner-validated (via a survey) issues, causes, and solutions of microservices systems, and complements the body of research comprising recent industrial studies on the evolvability \cite{assuring2019}, migration \cite{2020Does}, and debugging \cite{ZhouIEEE} of microservices systems. \section{Results} \label{sec:results} This section presents the analyzed results of this study, addressing the four RQs outlined in Section \ref{RQs}. The results are organized as categories (e.g., \textbf{Technical Debt}), subcategories (e.g., \textit{Code Debt}), and types (e.g., \textsc{code smell}). We present categories in \textbf{boldface}, subcategories in \textit{italic}, and types in \textsc{small capitals}. Relevant examples are provided as quoted messages along with their issue ID numbers to facilitate traceability to our dataset (see the Initial Codes sheet in \cite{replpack}). We report the types of issues in Section \ref{sec:results_RQ1}, the types of causes in Section \ref{sec:results_RQ2}, the types of solutions in Section \ref{sec:results_RQ3}, and practitioners' perspectives on these taxonomies in Section \ref{sec:results_RQ4}. \subsection{Types of Issues (RQ1)} \label{sec:results_RQ1} The taxonomy of issues in microservices systems is provided in Figure \ref{fig:Taxonomy}. The taxonomy is derived by mining developer discussions (2,641 instances of issues, see Section \ref{sec:Ext&Syn}), conducting practitioner interviews (48 instances of issues, see Section \ref{InterviewsDataAnalysis}), and conducting a survey (9 instances of issues, see Section \ref{sec:results_RQ4}), yielding a total of 2,698 instances of issues. The results show that Technical Debt (687 out of 2698), Continuous Integration and Delivery (313 out of 2698), and Service Execution and Communication (219 out of 2698) issues are among the most frequently discussed. The number of issues in each issue type, subcategory, and category is also shown in Figure \ref{fig:Taxonomy}.
\textbf{1. Technical Debt} (687/2698, 25.46\%): Technical Debt (TD) is “\textit{a metaphor reflecting technical compromises that can yield short-term benefit but may hurt the long-term health of a software system}” \cite{li2015systematic}. This is the largest category in the taxonomy of microservices issues and includes a wide range of issues related to \textit{Code Debt} and \textit{Service Design Debt}. The interviewees also mentioned several issues regarding this category, and one representative quotation is depicted below. \faHandORight{} “\textit{The complexity introduces several types of technical debt both in the design and development phases of microservices systems}” (\textbf{P2, Developer}). We identified and classified 19 types of TD issues in 2 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Code Debt} (658, 24.39\%) refers to source code issues that can adversely impact code quality. This subcategory mainly gathers issues related to \textsc{code refactoring}, \textsc{code smell}, \textsc{code formatting}, \textsc{excessive literals}, and \textsc{duplicate code}. For instance, developers refactored the code of the Spinnaker project by ``\textit{adding hal command for tweaking the component sizings, \#11}''. Similarly, Jaeger’s developers found \textsc{code smells} in which ``\textit{the naming of `MutiplexWriter' is misleading, \#2077}''. In addition, several other types of \textit{Code Debt}, such as \textsc{code formatting} (e.g., ``\textit{No define formatting settings, \#895}''), \textsc{excessive literals} (e.g., “\textit{string param length limited to 100 characters, \#1775}”), and \textsc{duplicate code} (e.g., “\textit{duplicate key value violates unique constraint `deleted\textunderscore pkey', \#2236}”), also negatively affect the code legibility of microservices systems. \item \textit{Service Design Debt} (29, 1.07\%) refers to violations of established design practices (e.g., MSA patterns) in open-source microservices systems. The issues in this subcategory are mainly related to \textsc{service dependency}, \textsc{business logic issue}, and \textsc{design pattern issue}. For instance, the developers of the Moleculer project reported a service design debt issue in which “\textit{Service A requires module A. If service A changed, the runner reloads, but if module A changed, the runner does not reload, \#1873}”. \end{itemize} \textbf{2. Continuous Integration and Delivery (CI/CD) Issue} (313/2698, 11.60\%): CI/CD refers to the automation that enables development teams to frequently develop, test, deploy, and modify software systems (e.g., microservices systems). Usually, a variety of tools and technologies are used to implement the CI/CD process. A wide range of CI/CD issues was identified by mining the microservices systems. In addition, the interviewees mentioned a few issues, and one representative quotation is depicted below. \faHandORight{} “\textit{The key issues of CI/CD for practitioners are many small independent code bases, multiple languages, frameworks, microservices integration, load testing, managing releases, and continued service updates}” (\textbf{P8, DevOps Consultant}). We identified and classified 55 types of CI/CD issues in 7 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below.
\begin{itemize} \item \textit{Deployment and Delivery Issue} (105, 3.89\%) reports the problems that occur during the deployment and delivery of microservices systems. We identified 17 types of issues in this subcategory, mainly related to \textsc{cd pipeline error}, \textsc{cd pipeline stage}, \textsc{halyard deployment}, and \textsc{deployment script} errors. For example, regarding a \textsc{cd pipeline error}, one contributor of the eShopOnContainers project highlighted that “\textit{Jenkins pipeline is failing with error 403, \#990}”. \item \textit{Kubernetes Issue} (74, 2.74\%) reports the CI/CD issues specific to Kubernetes, an open-source system for the automatic deployment, scaling, and management of containerized applications (e.g., microservices systems). Most problems in this subcategory are related to general \textsc{kubernetes}, \textsc{helm bake}, and \textsc{kubernetes manifest} errors. For example, the contributors of the eShopOnContainers project were “\textit{unable to list Kubernetes resources using default ASK to create a script, \#688}”. \item \textit{Docker Issue} (50, 1.85\%): Docker is an open-source platform that helps practitioners continuously test, deploy, execute, and deliver applications (e.g., microservices systems). Our results indicate that most of the issues related to Docker are \textsc{docker image error}, \textsc{docker configuration error}, and \textsc{outdated container}. For instance, the contributors of the eShopOnContainers project pointed out that the “\textit{building of the solution using docker-compose was failing, \#2464}” due to a Docker configuration error. \item \textit{Amazon Web Services (AWS) Issue} (17, 0.63\%): AWS provides an on-demand cloud computing platform for creating, testing, delivering, and managing applications. The most frequent types in this subcategory are general \textsc{aws error} and \textsc{aws jenkins error}. As an example, some developers of the Spinnaker project faced a situation in which “\textit{AWS Jenkins multi Debian package jobs fail to bake, \#2429}”. \item \textit{Version Control Issue} (16, 0.59\%) is related to version control and management systems. In general, the major issues in this subcategory are related to Git, such as \textsc{git plugin} (“\textit{Mac local Git install fails on Halyard backup, \#7}”), \textsc{master branch} (“\textit{Given the difference between the codebase, cherry-picking is not working for these changes, \#2436}”), and \textsc{gitlab} (“\textit{Gitlab won't start OAUTH process when configured without an HTTPS redirect URL, \#461}”) issues. \item \textit{Google Cloud Issue} (8, 0.29\%): Google Cloud provides infrastructure services for creating and managing projects (e.g., microservices systems) and resources. This subcategory contains issues related to the Google Cloud platform for microservices systems. The most reported issues are \textsc{gcp error}, \textsc{gke error}, and \textsc{gce clone error}. For instance, in the Spinnaker project, practitioners identified that “\textit{GCP: instance group port name Mapping is not working properly, \#2496}”. \item \textit{Others} (38, 1.40\%): This subcategory gathers issues related to \textsc{cloud driver error} (e.g., “\textit{CloudDriver does not receive the Tags From GCR, \#515}”), \textsc{virtual machine error} (e.g., “\textit{Nomad deploy on virtual box fails, \#2076}”), and \textsc{spring boot error} (e.g., “\textit{Spring boot 2 breaks Spinnaker calling, \#847}”). \end{itemize}
\textbf{3. Exception Handling Issue} (228/2698, 8.45\%): Exception handling is used to respond to unexpected errors during the running state of software systems (e.g., microservices systems), and it helps to prevent the software system from crashing unexpectedly. This category represents the issues practitioners face when handling various kinds of exceptions in microservices systems. We identified and classified 44 types of Exception Handling issues in 5 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item\textit{Unchecked Exception} (81, 3.00\%): These exceptions cannot be detected at compile time and throw errors while the program’s instructions are executed. We identified 10 types of issues in this subcategory, in which the top three types are \textsc{null pointer exception} (e.g., “\textit{NullPointerException when populating a request to call another API, \#1456}”), \textsc{file not found exception} (e.g., “\textit{Can not find schema.cql, \#186}”), and \textsc{runtime exception} (e.g., “\textit{An exception was thrown while activating IFMS, \#2174}”). \item\textit{Checked Exception} (77, 2.85\%): These exceptions can be detected at compile time and could be fully or partially checked. We identified 16 types of issues in this subcategory, mainly related to \textsc{io exception} (e.g., “\textit{Excessive wait for capacity match, \#970}”), \textsc{variables are not declared} (e.g., “\textit{Cannot read property 'map' of undefined, \#298}”), and \textsc{error handling} (e.g., “\textit{Error handling example is not working, \#418}”). \item\textit{Resource not Found Exception} (37, 1.37\%): These exceptions occur when services cannot find the required resources for executing operations. We identified 8 types of issues in this subcategory. Most of them are related to \textsc{attributes do not exist} (e.g., “\textit{Attributes from extend not available in view, \#410}”), \textsc{no server group} (e.g., “\textit{No server groups found in this application, \#1467}”), and \textsc{missing library} (e.g., “\textit{Missing supporting library, \#1424}”). \item\textit{Communication Exception} (28, 1.03\%): These exceptions are thrown when client services cannot communicate with producer services. We identified 7 types of issues in this subcategory, mainly related to \textsc{http request exception} (e.g., “\textit{Bad request 400 exception, \#2608}”) and \textsc{timeout exception} (e.g., “\textit{ForceCacheRefresh timeout exception after 10 minutes, \#2092}”). \item\textit{Others} (5, 0.18\%): This subcategory gathers issues related to \textsc{api exception} (e.g., “\textit{Improve cluster join API. Join should be asynchronous, \#1402}”), \textsc{dependency exception} (e.g., “\textit{Unsatisfied dependency exception, \#1399}”), and \textsc{thrift exception} (e.g., “\textit{Unable to start Spinnaker services for development due to thrift exceptions, \#2093}”). \end{itemize} \textbf{4. Service Execution and Communication Issue} (219/2698, 8.11\%): Communication problems are common when microservices communicate across multiple servers and hosts in a distributed environment. Services interact using protocols such as HTTP, AMQP, and TCP, depending on the nature of the services. The interviewees also mentioned a few issues regarding this category, and one representative quotation is depicted below.
\faHandORight{} “\textit{The poor implementation of microservices communication is also a source of insecure communication, latency, lack of scalability, and errors and fault identification on runtime}” (\textbf{P7, Software Engineer}). We identified and classified 34 types of service execution and communication issues in 3 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Service Communication} (166, 6.15\%): There are different ways of communication (e.g., synchronous communication, asynchronous message passing) between microservices. This subcategory covers the issues of service communication, in which the majority of the issues are related to \textsc{service discovery failure} (e.g., “\textit{service discovery failure at the beginning of service startup, \#2178}”), \textsc{http connection error} (e.g., “\textit{server returned HTTP status 401 unauthorized, \#609}”), and \textsc{grpc connection error} (e.g., “\textit{grpc streaming received messages are Not validated, \#303}”). \item \textit{Service Execution} (27, 1.00\%): This subcategory contains the issues regarding \textsc{asynchronous communication}, \textsc{dynamic port binding}, \textsc{rabbitmq messaging}, and \textsc{service broker} during service execution. These issues occur for various reasons. For example, a dependency issue between microservices occurred when “\textit{integration commands were sent asynchronously, \#1710}”. We also found several issues regarding \textsc{dynamic port binding}. For instance, a server module of the Light-4j project could not dynamically allocate a “\textit{port on the same host with a given range, \#1742}”. Similarly, some developers faced a situation in which “\textit{old services could not be replaced with new services because Service broker could not properly destroy the old services, \#1216}”. \item \textit{Service Management} (17, 0.63\%): This subcategory covers the issues that occur in the distributed event store platform, service management platform, and service networking layer. The majority of the problems in this subcategory are \textsc{kafka bug} (e.g., “\textit{Jaeger OTEL Ingester/Collector does not save spans to elastic search from Kafka, \#51}”), \textsc{kafka json format issue} (e.g., “\textit{Kafka JSON format data have no Ref Type, \#2574}”), and \textsc{eks (elastic kubernetes service) error} (e.g., “\textit{Front50 is not able to work with EKS IAM roles for service accounts, \#2367}”). \end{itemize} \textbf{5. Security Issue} (213/2698, 7.89\%): Microservices provide public interfaces, use network-exposed APIs for communicating with other services, and are developed using polyglot technologies and toolsets that may be insecure. This makes microservices a potential target for cyber-attacks; therefore, security in microservices systems demands serious attention. Mining the microservices systems identified a wide range of security issues, and microservices practitioners mentioned several other security issues during the interviews. One representative quotation is depicted below. \faHandORight{} “\textit{The other problem is securing microservices at different levels. Specifically, we deal with microservices-based IOT applications that have more insecure points than traditional ones}” (\textbf{P5, Solution Architect}).
We identified and classified 37 types of security issues in 4 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Authentication and Authorization} (123, 4.55\%): Authentication is the process of identifying a user, whereas authorization determines the access rights of a specific user to system resources. We found that the majority of the authentication and authorization issues are related to \textsc{handling authorization header} (e.g., Basic Auth, OAuth, OAuth 2.0) and \textsc{shared authentication} (e.g., “\textit{401 Unauthorized error occurred on Google oauth2, \#563}”) issues. Authorization headers are generally used to implement authorization mechanisms; our study found several issues related to Basic Auth, OAuth, and OAuth 2.0 header failures and the non-availability of these security headers. For example, the developers of the eShopOnContainers project reported the issue of “\textit{invalid\textunderscore request on auth from Swagger for Location API, \#1990}”. Moreover, we found several issues regarding the improper implementation of \textsc{shared authentication} methods (e.g., “\textit{Unable to start collector with password authenticator, \#228}”) in microservices systems. \item \textit{Access Control} (64, 2.37\%) is a fundamental element in securing the infrastructure of microservices or any software system. Access control can be role- or attribute-based in a microservices system. The major types of issues in this subcategory are \textsc{managing credential setup} (e.g., “\textit{SECURITY ERROR: This download does not match the one reported by the checksum server, \#2414}”) and \textsc{security policy violation} (e.g., “\textit{Violates the security policy directive like script-src ‘unsafe-inline’. Note that script-src-element was not explicitly set, so ‘script-src’ is used as a fallback, \#594}”). \item \textit{Secure Certificate and Connection} (43, 1.59\%): Our study reports several issues regarding the implementation of security certificates and standards, such as SSL, TLS, and JWT, which are used to secure communication between client and server, between services, or between microservices. For example, we found a \textsc{jwt error} (e.g., “\textit{JWT security doesn't behave properly in swagger, \#1625}”) and an \textsc{ssl connection issue} (e.g., “\textit{Deck deployment with SSL fails, \#601}”) in the Goa and Spinnaker projects, respectively. We also found several other types of secure certificate and connection issues, such as \textsc{security token expired}, \textsc{tls certificate issue}, and \textsc{expired certificate}. \item \textit{Encryption and Decryption} (13, 0.48\%): Encryption and decryption are used to convert plain text into ciphertext and ciphertext back into plain text to secure information. The three types of issues we identified in this subcategory are \textsc{data encryption} (e.g., “\textit{Errors in encrypting values in a secret.yml, \#322}”), \textsc{data decryption} (e.g., “\textit{the anti-forgery token could not be decrypted, \#2473}”), and \textsc{configuration decryption} (e.g., “\textit{Errors in retrieving symmetric key for configuration decryption, \#430}”). \end{itemize} \textbf{6. Build Issue} (210/2698, 7.78\%): A build is the process of preparing an application program for software release by collecting and compiling all required source files. The outcome of this process can be several types of artifacts, such as binaries and executable programs.
We identified and classified 20 types of build issues in 3 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Build Error} (141, 5.22\%): We identified several types of build errors which can interrupt the build process of microservices systems. We found that the majority of build errors are related to \textsc{build script} (e.g., “\textit{build errors on my generated http/my\textunderscore resource/client/cli.go file in the BuildGetPayload function, \#2415}”), \textsc{plugin compatibility} (e.g., “\textit{docker-compose build, can't build on macOS, \#998}”), \textsc{docker build fail} (e.g., “\textit{Failed: updating pod controller for index.docker.io/weaveworksdemos/catalogue:0.3.1: Could not find image name, \#206}”), \textsc{build pipeline error} (e.g., “\textit{ERROR: Service 'webspa' failed to build, \#659}”), \textsc{build file server error} (e.g., “\textit{go build Not working for service using file servers, \#779}”), \textsc{source file loading} (e.g., “\textit{Test projects don't build by default in a clean solution, but they do build one by one, \#673}”), and \textsc{module resolution} (e.g., “\textit{Module won't be reloaded when mod.js is changed, \#1011}”). \item \textit{Broken and Missing Artifacts} (59, 2.18\%): These issues occur during the parsing stage of the build process, when the build systems verify the required information (e.g., files, packages, designated locations) in the build script files before executing the build tasks. This subcategory mainly covers the issues related to \textsc{missing properties, packages, and files} (e.g., “\textit{Validation not triggered on the server when user inputs missing JSON fields in client, \#333}”), \textsc{broken files} (e.g., “\textit{code based generation file broken, \#22}”), and \textsc{missing objects} (e.g., “\textit{Value-object is missing in Order entity, \#201}”). We also identified several other types of broken and missing artifacts, which include \textsc{missing ami (amazon machine image)} (e.g., “\textit{AMI not found when creating new server group, \#59}”), \textsc{missing base parameter in the client} (e.g., “\textit{no CLI flags generated for BaseParams in API, \#212}”), and \textsc{missing link attributes} (e.g., “\textit{generated client side data types missing links attribute, \#81}”). \item \textit{Others} (10, 0.37\%): This subcategory mainly gathers issues related to the \textsc{wrong use of universally unique identifier} (e.g., “\textit{uuid package Not imported in generated app/user\_types.go, \#208}”) and \textsc{inconsistent data generated} (e.g., “\textit{From the code it's possible to generate inconsistent data when the event and context (e.g., OrderingContext) is successfully saved, but failed to publish, \#666}”). \end{itemize} \textbf{7. Configuration Issue} (121/2698, 4.48\%): Configuration is the process of controlling, tracking, and keeping consistent all the required instances of software systems (e.g., microservices systems). Microservices systems have multiple instances and third-party applications to configure. This category gathers configuration issues in the development, implementation, and deployment phases of microservices systems. Besides the configuration issues identified from the OSS microservices systems, practitioners also indicated configuration issues during the interviews. One representative quotation is depicted below.
\faHandORight{} “\textit{The major challenge for me is the poor microservices’ configuration, which grows as the application size grows. The configuration affects the implementation and deployment phases of the microservices systems. The poor configuration of microservices may lead to increased latency and decrease the speed of microservices calls between different services}” (\textbf{P12, Application Developer}). We identified and classified 16 types of configuration issues in 2 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Configuration Setting Error} (61, 2.26\%): This subcategory contains issues associated with the setup configuration of different types of servers, databases, and cloud infrastructure platforms. The main types of issues in this subcategory are \textsc{server configuration error} (e.g., “\textit{Errors when adding server group, \#2170}”), \textsc{database configuration error} (e.g., “\textit{Influxdb complains about the host header is missing with 400 error, \#134}”), and \textsc{aks (azure kubernetes service) configuration error} (e.g., “\textit{Need help in configuring my existing AKS domain, \#901}”). \item \textit{Configuration File Error} (60, 2.22\%): This subcategory covers the issues that mainly occur due to incorrect values in environment setting variables. The major types of issues in this subcategory are \textsc{configuration mismatch} (e.g., “\textit{Configuration updates cause alerting rules to forget firing state, \#2555}”), \textsc{conflict in configuration file names} (e.g., “\textit{Token replacement in configuration files does not allow special characters, \#531}”), and \textsc{incorrect file path} (e.g., “\textit{URI path normalisation errors in Spring security, \#728}”). \end{itemize} \textbf{8. Monitoring Issue} (89/2698, 3.29\%): The dynamic nature of microservices systems requires monitoring infrastructures to diagnose and report errors, faults, failures, and performance issues. This category reports issues related to monitoring microservices systems. Several interviewees also mentioned monitoring issues for microservices systems. One representative quotation is depicted below. \faHandORight{} “\textit{Microservices systems are hosted, containerized or virtualized, across distributed private, public, hybrid, and multi-cloud environments. Monitoring highly distributed systems like microservices systems through traditional monitoring tools is a challenging experience because these tools only focus on a specific component or the overall operational health of the system}” (\textbf{P14, Azure Technical Engineer}). We identified and classified 17 types of monitoring issues in 3 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Tracing and Logging Management Issue} (60, 2.22\%): One prominent challenge of monitoring microservices systems is the collection of logs from containers and distributed tracing. We identified 7 types of issues in this subcategory, mainly related to \textsc{distributed tracing error} (e.g., “\textit{Tracer has no activeSpan in the client header, \#1538}”), \textsc{logging management error} (e.g., “\textit{Lot of errors logged when query requests are cancelled, \#1545}”), and \textsc{observability issue} (e.g., “\textit{Missing OpenTracing support for observability, \#1048}”).
\textbf{8. Monitoring Issue} (89/2698, 3.29\%): The dynamic nature of microservices systems requires monitoring infrastructure to diagnose and report errors, faults, failures, and performance issues. This category reports issues related to monitoring microservices systems. Several interviewees also mentioned monitoring issues for microservices systems. One representative quotation is depicted in the following. \faHandORight{} “\textit{Microservices systems are hosted containerized or virtualized across distributed private, public, hybrid, and multi-cloud environments. Monitoring highly distributed systems like microservices systems through traditional monitoring tools is a challenging experience because these tools only focus on a specific component or the overall operational health of the system}” (\textbf{P14, Azure Technical Engineer}). We identified and classified 17 types of monitoring issues in 3 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Tracing and Logging Management Issue} (60, 2.22\%): One prominent challenge of monitoring microservices systems is the collection of logs from containers and distributed tracing. We identified 7 types of issues in this subcategory, mainly related to \textsc{distributed tracing error} (e.g., “\textit{Tracer has no activeSpan in the client header, \#1538}”), \textsc{logging management error} (e.g., “\textit{Lot of errors logged when query requests are cancelled, \#1545}”), and \textsc{observability issue} (e.g., “\textit{Missing OpenTracing support for observability, \#1048}”). \item \textit{Health Check Issue} (17, 0.63\%): This subcategory deals with the problems related to the health monitoring of microservices systems. We identified six types of issues, mainly related to \textsc{health check api error} (e.g., “\textit{Pod is unavailable and has been failing readiness probes, \#281}”), \textsc{health check fail} (e.g., “\textit{Front50's health check fails if Redis is not running locally, \#1516}”), and \textsc{health check port error} (e.g., “\textit{could not start the health check server, error: port not specified, \#57}”). A minimal sketch of the kind of endpoint such probes poll follows this list. \item \textit{Monitoring Tool Issue} (12, 0.44\%): We also identified several issue discussions in which microservices practitioners discussed the problems of three monitoring tools, including \textsc{zipkin issue}, \textsc{jenkins issue}, and \textsc{tcp/tt health check issue}. \end{itemize}
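Readiness and liveness probes of the kind mentioned above simply poll an HTTP endpoint exposed by the service. The sketch below is a deliberately minimal Java illustration built only on the JDK's bundled \texttt{com.sun.net.httpserver}; production services would normally expose such endpoints through a framework (e.g., Spring Boot Actuator), and the port number here is an arbitrary assumption:

\begin{verbatim}
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Minimal sketch of a /health endpoint of the kind readiness probes poll.
public final class HealthServer {
    public static void main(String[] args) throws Exception {
        int port = 8081; // must match the port the probe is configured with
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            // A real check would also verify dependencies (database,
            // message broker, downstream services) before reporting UP.
            byte[] body = "{\"status\":\"UP\"}".getBytes("UTF-8");
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Health endpoint at http://localhost:" + port + "/health");
    }
}
\end{verbatim}

The \textsc{health check port error} above (“port not specified”) corresponds, in this sketch, to the probe and the server disagreeing on (or omitting) the port value.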
\textbf{9. Compilation Issue} (79/2698, 2.92\%): This category reports compilation errors, which mainly occur when the compiler cannot compile source code due to errors in the code or errors with the compiler itself. We identified and classified 9 types of compilation issues in 2 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Illogical Symbols} (60, 2.22\%): These issues occur when developers use illegal characters or incorrect syntax during coding, for instance, \textsc{syntax error} (e.g., “\textit{Invalid memory address or Nil pointer reference, \#22}”), \textsc{invalid parent id} (e.g., “\textit{Get error: invalid parent span IDs, \#1323}”), and \textsc{unexpected end of file} (e.g., “\textit{Received an unexpected EOF or 0 bytes from the transport stream, \#1403}”). \item \textit{Wrong Method Call} (19, 0.70\%): These issues occur when the compiler tries to search for definitions of methods by invoking them through method calls and finds a \textsc{wrong parameter} (e.g., “\textit{Mobile EventToCommandBehavior can not be pass parameter by EventArgsConverter, \#698}”), \textsc{wrong method call} (e.g., “\textit{Caller method not set when calling and action, \#863}”), and \textsc{incorrect values} (e.g., “\textit{When filtering, size, paging and page sizing returns incorrect values, \#1080}”). \end{itemize} \textbf{10. Testing Issue} (77/2698, 2.85\%): Microservices systems pose significant challenges for testing because of the large number of services, inter-communication processes, dependencies, network communication, and other factors. Microservices practitioners also mentioned several testing issues during the interviews. One representative quotation is depicted in the following. \faHandORight{} “\textit{Testing is another issue that I think is more challenging in microservices systems. I also think deploying each microservice as a singular entity and testing them is tedious and brings several problems. For example, testing coordination among multiple microservices when deploying one service as a singular entity}” (\textbf{P2, Application Developer}). We identified and classified 20 types of testing issues in 3 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Test Case Issue} (45, 1.66\%): This subcategory covers the problems with test cases written to evaluate the expected output compliance with specific requirements for the microservices systems. Most of them are related to \textsc{faulty test case}, \textsc{missing test case}, and \textsc{syntax error in test case}. For instance, in the Goa project “\textit{Goagen generates faulty tests when headers are required outside of actions, \#9}”. \item \textit{Code and Component Test} (21, 0.77\%): This subcategory deals with the issues mainly related to \textsc{debugging}, \textsc{api testing}, and \textsc{missing design test} of microservices systems. For instance, in the eShopOnContainers project “\textit{[VStudio for Mac] .env file is being ignored when debugging docker containers, \#952}”. \item \textit{Application Test} (11, 0.40\%): This subcategory gathers the issues related to overall microservices application testing, which include \textsc{load test case}, \textsc{broken integration test}, and \textsc{application security testing} issues. For example, the microservices-demo project developers discussed that “\textit{Even when running the load test, Weave Scope often does not show all the expected connections between all the SockShop components/services, \#1174}”. \end{itemize} \textbf{11. Documentation Issue} (75/2698, 2.77\%): Documentation for microservices systems may suffer from several problems. We identified and classified 8 types of documentation issues in two subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Insufficient Document} (49, 1.81\%): This subcategory covers the problems related to \textsc{outdated document}, \textsc{broken images}, and \textsc{inappropriate examples}. For instance, one contributor of the eShopOnContainers project mentioned the issue of “\textit{Out of date wiki guide for vs2015, \#2030}”. \item \textit{Readability Issue} (26, 0.95\%): This subcategory is related to readability problems with the provided documentation. The leading types of readability issues are \textsc{poor readability}, \textsc{missing readme file}, and \textsc{old readme file}. One contributor of the light-4j project discussed the issue “\textit{missing links of the pages in Readme.md, \#364}”. \end{itemize} \textbf{12. Graphical User Interface (GUI) Issue} (70/2698, 2.59\%): This category reports the problems that can wreck the GUI of microservices systems. We identified and classified several types of GUI issues in 2 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Broken User Interface Elements} (38, 1.41\%) are dysfunctional User Interface (UI) elements (e.g., buttons or text fields) that can become the reason for inconsistencies in the page layout across different devices (e.g., mobile and desktop browsers). This subcategory represents the faults mainly related to \textsc{front end crash}, \textsc{uneditable contents}, and \textsc{broken images in ui}. An example issue of broken images in UI raised by a developer of the Spinnaker project is “\textit{Deck has stopped showing Infrastructure items, \#1502}”. \item \textit{Missing Information and Legacy UI Artifacts} (32, 1.19\%): This subcategory contains the problems of wrong and incomplete information along with outdated UI artifacts, mainly related to \textsc{wrong gui display}, \textsc{displaying incomplete information}, and \textsc{selection not working}. For instance, a contributor of the Spinnaker project stated that “\textit{Entity tags are not showing up in the UI, \#1929}”.
\end{itemize} \textbf{13. Update and Installation Issue} (68/2698, 2.52\%): This category gathers the identified problems related to the update and installation of the packages, libraries, tools, and containerization platforms required to develop and manage microservices systems. We identified and classified 16 types of update and installation issues in 2 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Update Error} (45, 1.67\%): This subcategory represents the errors in which developers face the issues of outdated packages, platforms, technologies, and backward compatibility. The leading types of issues are \textsc{outdated install packages}, \textsc{backward compatibility issue}, and \textsc{json update error}. An example issue of JSON update error discussed in the Spinnaker project is “\textit{Deck not able to update the data on JSON file, \#2518}”. \item \textit{Installation Error} (23, 0.85\%): The development of microservices systems can be interrupted because of failure to install required languages, packages, and platforms. The top three types of issues are \textsc{language package installation error}, \textsc{npm (node package manager) error}, and \textsc{gke (google kubernetes engine) installation error}. For example, one contributor of the open-loyalty project mentioned “\textit{NPM install fails on Windows - resource busy or locked, \#2327}”. \end{itemize} \textbf{14. Database Issue} (65/2698, 2.40\%): The ownership of the microservices system database is usually distributed, and most of the microservices are autonomous and have a private data store relevant to their functionality. The distributed nature of databases and microservices systems brings challenges like database implementation, data accessibility, and database connectivity. Microservices practitioners also mentioned several other database issues during the interviews. One representative quotation is depicted in the following. \faHandORight{} “\textit{Relational database for microservices systems. Usually, this issue occurs when we migrate from monolithic applications to microservices systems. It was mainly because of the missing transaction management system for getting data from the database of the old application (that was a relational database) through the microservices application}” (\textbf{P6, Software Architect, Developer}). We identified and classified 24 types of database issues in 3 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Database Connectivity} (26, 0.96\%): This subcategory refers to the issues that occur while establishing a database connection. The leading types of issues in this subcategory are \textsc{sql container failure}, \textsc{sql transient connection failure}, and \textsc{database creation failure}. For example, one developer of the eShopOnContainers project mentioned “\textit{Kubernetes SQL-data service error, \#775}”. \item \textit{Database Query} (31, 1.14\%): The errors in this subcategory cover search query issues during the development of microservices systems. Such issues may cause a database performance bottleneck. The leading types of issues in this subcategory are \textsc{wrong query}, \textsc{elasticsearch database error}, and \textsc{database search error}.
For instance, a contributor of the Jaeger project stated that “\textit{the Jaeger query does not accept a custom location for the agent, \#270}”. \item \textit{Others} (8, 0.29\%): Other types of database issues that cannot be classified into the above subcategories are included in this subcategory, which are mainly related to \textsc{database migration}, \textsc{database storage}, and \textsc{database adapter}. For example, one developer of the Spinnaker project mentioned that “\textit{DB adapter issue - can not perform a textual search with a list, \#1374}”. \end{itemize} \textbf{15. Storage Issue} (54/2698, 2.00\%): This category reports storage space problems during the development, execution, and management of microservices systems. We identified and classified 13 types of storage issues in 2 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}), and each subcategory is further described below. \begin{itemize} \item \textit{Storage Size Constraints} (48, 1.78\%): Different microservices have different data storage requirements. This subcategory covers the storage size constraints, mainly related to \textsc{lack of main memory}, \textsc{cache issue}, and \textsc{storage backend failure}. For example, a contributor of the light-4j project identified that “\textit{buffer size is too small in client.yml if Body cannot be parsed, \#453}”. \item \textit{Large Data Size} (6, 0.21\%): This subcategory gathers the issues related to large data size, including \textsc{large image size}, \textsc{large message size}, and \textsc{large file}. For example, a contributor of the moleculer project identified that “\textit{Request is timed out when sending large files, \#451}”. \end{itemize} \textbf{16. Performance Issue} (45/2698, 1.67\%): Microservices systems offer various advantages over monolithic systems. However, several types of issues can degrade the performance of microservices systems. The interviewees also mentioned performance overhead as an issue, and one representative quotation is depicted below. \faHandORight{} “\textit{Unlike a monolithic application whose deployment and management are seemingly easier due to centralized control and monitoring, a microservices-based application has numerous independent services that may be deployed on different infrastructures and platforms. Such an aspect increases its performance overhead. I also think that microservices systems consume more resources, creating a heavy burden for servers. In response, achieving the performance goal becomes questionable}” (\textbf{P15, Software Architect}). We identified and classified 16 types of performance issues in three subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Service Response Delay} (28, 1.03\%): This subcategory gathers the types of issues regarding delay in service response that ruins the performance of microservices systems, such as \textsc{long wait}, \textsc{inconsistent payloads}, and \textsc{slow query}. Regarding the long wait issues, one contributor of the Spinnaker project mentioned that “\textit{Wait for a new service to be available in Cassandra before displaying in Deck, \#1571}”. \item \textit{Resource Utilisation} (13, 0.48\%): In this subcategory, we collected the issues that degrade the performance of microservices systems due to resource utilisation.
These issues are mainly related to \textsc{load balancer error}, \textsc{high cpu usage}, and \textsc{rate limiting error}. One example of the load balancer issue mentioned by a Spinnaker project contributor is “\textit{Failed to create a Load Balancer when creating a new application, \#1583}”. \item \textit{Lack of Scalability} (4, 0.14\%): Scalability is the system's ability to respond to user demands and changing workloads by adding or removing resources. This subcategory gathers the issues related to scalability that can hinder microservices system growth, for instance, \textsc{scale to cluster error} and \textsc{circuit breaker issue} (a simplified sketch of the circuit breaker pattern follows this list). An example issue provided by a Spinnaker developer highlighted that “\textit{Active replicate of a deployment (with manifest and Kubernetes V2) is grayed out under the Load Balancer tab. For users, it’s a bit confusing, \#967}”. \end{itemize}
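The circuit breaker mentioned above is a common resilience pattern in microservices systems: a wrapper around a remote call that fails fast once the callee looks unhealthy, instead of letting requests pile up. The following Java sketch is a deliberately simplified illustration of the idea under our own assumptions; real systems would use a library such as Resilience4j, which adds half-open probing, metrics, and configuration:

\begin{verbatim}
import java.util.concurrent.Callable;

// Deliberately simplified circuit breaker sketch: only closed/open states
// and a fixed recovery timeout; resilience libraries add much more.
public final class SimpleCircuitBreaker {
    private final int failureThreshold;   // failures before opening
    private final long openMillis;        // how long to stay open
    private int consecutiveFailures = 0;
    private long openedAt = -1;           // -1 means the circuit is closed

    public SimpleCircuitBreaker(int failureThreshold, long openMillis) {
        this.failureThreshold = failureThreshold;
        this.openMillis = openMillis;
    }

    public synchronized <T> T call(Callable<T> remoteCall) throws Exception {
        if (openedAt >= 0) {
            if (System.currentTimeMillis() - openedAt < openMillis) {
                // Fail fast instead of piling load onto a struggling service.
                throw new IllegalStateException("circuit open, rejecting call");
            }
            openedAt = -1; // recovery timeout elapsed: allow a trial call
        }
        try {
            T result = remoteCall.call();
            consecutiveFailures = 0; // success closes the circuit again
            return result;
        } catch (Exception e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = System.currentTimeMillis(); // trip the breaker
            }
            throw e;
        }
    }
}
\end{verbatim}

Misconfiguring the threshold or the recovery timeout reproduces the kind of \textsc{circuit breaker issue} classified above: a breaker that trips too eagerly makes a healthy service look unavailable, while one that never trips lets a slow dependency exhaust the caller.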
\textbf{17. Networking Issue} (41/2698, 1.51\%): Deploying, executing, and communicating among microservices over the network is complex, and many problems may disrupt the network. We identified and classified 20 types of networking issues in 2 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Hosting and Protocols} (22, 0.81\%): This subcategory represents the issues related to hosting protocols, ports, and topologies for microservices systems. The leading types of issues in this subcategory are \textsc{localhost error}, \textsc{ip address issue}, and \textsc{udp (user datagram protocol) discovery error}. For example, one contributor of the Jaeger project pointed out a UDP discovery issue “\textit{Node.JS client sends UDP packets that agent in all-in-one does not recognize, \#1567}”. \item \textit{Service Accessibility} (19, 0.70\%): This subcategory represents the cases where microservices practitioners face service accessibility problems. The leading types of issues are \textsc{webhook error}, \textsc{broken urls}, and \textsc{dns (domain name system) error}. For example, a Webhook issue described by a contributor of the Spinnaker project is “\textit{Right now there is not a generic way to start off pipelines based on a Webhook event, \#80}”. \end{itemize} \textbf{18. Typecasting Issue} (35/2698, 1.29\%): This category is related to the typecasting issues that occur when assigning a value of one primitive data type to another type. We identified and classified 9 types of typecasting issues in 2 subcategories (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Type Conversion} (20, 0.74\%): This subcategory deals with the issues when variables are not correctly converted from one type to another. The top three types of conversion issues are \textsc{identity conversion}, \textsc{boxing conversion}, and \textsc{enumeration validation issue}. For example, one developer of the Goa project mentioned that “\textit{Enum validations are not working properly for Non-primitive types, \#1984}”. \item \textit{Narrow/Wide Conversion} (15, 0.55\%): These issues occur when the compiler converts variables of a larger type into a smaller type (e.g., double to float) or a smaller type into a larger type (e.g., float to double). The top two types of narrow/wide conversion issues are \textsc{narrowing primitive conversion} and \textsc{narrowing reference conversion}; a short illustration follows this list. Regarding the narrow/wide conversion issues, a contributor of the Goa project mentioned that “\textit{Infinity recursions when result type points to a type with recursive definitions, \#2068}”. \end{itemize}
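The following self-contained Java fragment illustrates both directions of conversion, as well as the boxing conversion mentioned under \textit{Type Conversion}: widening happens implicitly and is lossless, while narrowing requires an explicit cast and can silently truncate or overflow, which is how such defects slip into service code:

\begin{verbatim}
// Illustration of widening, narrowing, and boxing conversions in Java.
public final class ConversionDemo {
    public static void main(String[] args) {
        float f = 3.14f;
        double d = f;          // widening (float -> double): implicit
        System.out.println(f + " -> " + d);

        double pi = 3.14159265358979;
        float g = (float) pi;  // narrowing (double -> float): explicit cast,
                               // silently loses precision
        System.out.println(pi + " -> " + g);

        int big = 130;
        byte b = (byte) big;   // narrowing (int -> byte): overflows to -126
        System.out.println(big + " -> " + b);

        Integer boxed = 42;    // boxing conversion (int -> Integer)
        int unboxed = boxed;   // unboxing back to the primitive
        System.out.println(boxed + " / " + unboxed);
    }
}
\end{verbatim}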
\begin{landscape} \begin{figure} \includegraphics[width=\linewidth, height=0.68\linewidth]{Figures/TaxonomyOfIssues.pdf} \caption{A taxonomy of issues in microservices systems} \label{fig:Taxonomy} \end{figure} \end{landscape} \textbf{19. Organizational Issue} (7/2698, 0.25\%): We derived this category based on the interviewees’ feedback on the taxonomy of the issues (see Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}). The interviewees only mentioned three types of issues in this category, including \textsc{team management}, \textsc{operational and tooling overhead}, and \textsc{service size}. Two representative quotations are depicted below. \faHandORight{} “\textit{One of the critical challenges in organizations is team management according to available people, their expertise, and their working habits}” (\textbf{P1, Software Architect, Developer}). \faHandORight{} “\textit{Creating a reasonable size for each microservice (I mean, each microservice should have sufficient responsibilities). This is a bit tricky because most of the issues are rooted here}” (\textbf{P6, Software Architect, Developer}). \begin{tcolorbox} [sharp corners, boxrule=0.1mm,] \small \textbf{Key Findings of RQ1}: We identified 2,641 instances of issues by mining developer discussions in 15 open-source microservices systems, with 48 instances of issues mentioned by the interviewees and 9 instances of issues mentioned by the survey participants, which makes 2,698 issues in total. The issue taxonomy consists of 19 categories, 54 subcategories, and 402 types, indicating the diversity of the issues in microservices systems. The majority of issues are related to Technical Debt (25.46\%), CI/CD (11.60\%), and Exception Handling (8.45\%). \end{tcolorbox} \subsection{Causes of Issues (RQ2)} \label{sec:results_RQ2} The taxonomy of causes of microservices issues is provided in Table \ref{tab:CausesTaxnomey}. It is worth mentioning that not all the issue discussions provide information about their causes; we identified 2,225 issue discussions that contain such information. The taxonomy of causes is derived by mining developer discussions (i.e., 2,225 instances of causes), conducting practitioner interviews (i.e., 31 instances of causes, see Section \ref{InterviewsDataAnalysis}), and conducting a survey (i.e., 11 instances of causes, see Section \ref{sec:results_RQ4}). Hence, we got a total of 2,267 instances of causes. We identified a total of 228 types of causes that can be classified into 8 categories and 26 subcategories. Due to space limitations, we only list the top two types of causes for each subcategory in Table \ref{tab:CausesTaxnomey}. The details of the types of causes can be found in the dataset \cite{replpack}. The results show that General Programming Error (860 out of 2,267), Missing Features and Artifacts (386 out of 2,267), and Invalid Configuration and Communication Problems (382 out of 2,267) are the top three categories of causes. Each cause category is briefly discussed below. \textbf{1. General Programming Error (GPE)} (860/2267, 37.93\%): This category captures the causes that are based on a broad range of errors that occur in different phases of microservices system development, such as the coding (e.g., syntax errors), testing (e.g., incorrect test cases), and maintenance (e.g., wrong examples in documentation) phases. We identified and classified 78 types of GPE causes in six subcategories (see Table \ref{tab:CausesTaxnomey} and the Cause Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Compile Time Error} (377, 16.62\%): This subcategory gathers the causes of issues in which microservices practitioners violate the rules of writing syntax for microservices system code. These causes must be addressed before the program can be compiled. We identified 16 types of causes in this subcategory, in which the top three are \textsc{semantic error} (e.g., “\textit{Message is not handled properly, \#01}”), \textsc{syntax error in code} (e.g., “\textit{Mismatched types *string and string, \#2640}”), and \textsc{variable mutations} (e.g., “\textit{Invalid range error assumes integer values, \#310}”). \item \textit{Erroneous Method Definition and Execution} (262, 11.55\%): The causes in this subcategory are related to incorrect or partly correct definitions and executions of methods associated with object messages. Generally, methods are referred to as class building blocks linked together for sharing and processing data to produce the desired results. We identified 27 types of causes in this subcategory, in which the top three are \textsc{lack of cohesion in methods} (e.g., “\textit{Need to enhance the existing functionality of the class, \#899}”), \textsc{long message chain} (e.g., “\textit{multiple methods with the same type in the result body generates bad code, \#311}”), and \textsc{wrong parameterization} (e.g., “\textit{csvr.New() receives the same parameters as the streaming endpoint, but should be svcsvr.New(svcEndpoints, mux, dec, enc, eh), \#699}”). \item \textit{Incorrect Naming and Data Type} (157, 6.92\%): This subcategory covers the causes related to choosing incorrect names and data types for identifiers, methods, packages, and other entities in the source code. Our taxonomy contains 18 types of causes in this subcategory, and among them \textsc{wrong data conversion} (e.g., “\textit{Convert property does not convert the value in ctx.params, \#1966}”), \textsc{wrong data type} (e.g., “\textit{Use of string instead of int in pipeline template, \#652}”), and \textsc{wrong use of data types} (e.g., “\textit{string array element validation using Enum, \#307}”) are the top three types of causes. \item \textit{Testing Error} (25, 1.10\%): This subcategory covers the causes behind testing issues in microservices systems. In this subcategory, we identified 6 types of causes, in which the top two types of causes are \textsc{incorrect test case} (e.g., “\textit{Integration events scenarios and marketing scenarios unit tests fail due to missing call to app.UseAuthorization, \#1172}”) and \textsc{incorrect syntax in test cases} (e.g., “\textit{Incorrect syntax for defining array element in test cases, \#1202}”).
\item \textit{Poor Documentation} (22, 0.97\%): The documentation of software systems may contain critical information that describes the software product capabilities for system stakeholders. We identified 7 types of causes in this subcategory, in which the top two are \textsc{typo in documents} (e.g., “\textit{Typo in the Readme about not regenerating the main.go on running the gen tool, \#874}”) and \textsc{wrong example in documents} (e.g., “\textit{Text is not entirely true, \#682}”). \item \textit{Query and Database Issue} (11, 0.48\%): This subcategory contains the causes behind database issues in microservices systems. We identified 4 types of causes in this subcategory, and the top three types of causes are \textsc{wrong query parameters} (e.g., “\textit{Goa doesn’t generate query parameter declared in API, \#1388}”), \textsc{missing query parameters} (e.g., “\textit{CLI URL syntax does Not support query params, \#387}”), and \textsc{incorrect querying range} (e.g., “\textit{500s while querying longish ranges, \#1507}”). \end{itemize} \textbf{2. Missing Features and Artifacts (MFA)} (386/2267, 17.02\%): This category represents the causes behind microservices system issues that occur due to missing required features, packages, files, variables, documentation, and tool support. We identified and classified 27 types of MFA causes in four subcategories (see Table \ref{tab:CausesTaxnomey} and the Cause Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Missing Features} (186, 8.20\%) denotes the nonexistence of required system functionality in microservices systems. In this subcategory, we identified 14 types of causes, in which the top two types of causes are \textsc{missing required system features} (e.g., “\textit{Missing minItems/maxItems in a Swagger's JSON schema, \#97}”) and \textsc{missing security features} (e.g., “\textit{Need to authentication filter which breaks a pipeline trigger, \#274}”). \item \textit{Missing Documentation and Tool Support} (104, 4.58\%): Proper documentation and tool support are vital to keep a record of, and track changes between, system requirements, architecture, and source code. They also guide various system stakeholders (e.g., architects, developers, end-users) regarding the design, architecture, and coding standards used in microservices systems. Moreover, several types of tool support are also necessary for different phases of microservices system development (e.g., development and deployment). The absence of proper documentation and tool support can bring several types of issues to microservices systems. This subcategory contains 4 types of missing documentation and tool support causes, and among them \textsc{missing readme file} (e.g., “\textit{Missing links of the pages in Readme.md, \#364}”) and \textsc{missing development and deployment tool support} (e.g., “\textit{goagen cant support all the features of the Go compiler, \#2619}”) are identified as the leading causes. \item \textit{Missing Packages and Files} (68, 2.99\%): This subcategory groups the causes related to the absence of required resources, packages, and files for developing, deploying, and executing microservices systems.
We collected 9 types of missing packages and files causes, in which the leading three types of causes are \textsc{missing resource} (e.g., “\textit{Meta map not included in error response, \#420}”), \textsc{missing required package} (e.g., “\textit{Data-Protection project package is missing, \#210}”), and \textsc{missing api} (e.g., “\textit{Missing AMI and API, \#59}”). \item \textit{Missing Variables} (28, 1.23\%): A few missing variables are also identified as the causes of several microservices issues. Compilers throw missing-variable error messages if variables are set to a nonexistent directory or have the wrong names. This subcategory contains 7 types of missing variables causes, and among them \textsc{missing environment variable} (e.g., “\textit{Missing environment variables in travis for edge-router, \#77}”) and \textsc{missing properties} (e.g., “\textit{Array of element Validation code missing, \#195}”) are identified as the leading causes. \end{itemize} \textbf{3. Invalid Configuration and Communication (ICC) Problem} (382/2267, 16.85\%): Considering the large number of microservices, their distributed nature, and third-party plugins, microservices systems need to be properly configured for complete business operations. Each microservice has its own instances and processes, and services interact with each other using several inter-service communication protocols (e.g., HTTP, gRPC, message brokers such as AMQP). One of the interviewees also mentioned the following cause regarding invalid configuration and communication. \faHandORight{} “\textit{Microservices systems typically use one or more infrastructure and 3rd party services. Examples of infrastructure services include a service registry, a message broker, and a database server. During the configuration of microservices, a service must be provided with configuration data that tells it how to connect to the external or 3rd party services — for example, the database network location and credentials}” (\textbf{P1, Software Architect, Developer}). We identified and classified 29 types of causes in two subcategories (see Table \ref{tab:CausesTaxnomey} and the Cause Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Incorrect Configuration} (290, 12.79\%): Configuration management in microservices systems is a hefty task because microservices are scattered across multiple servers, containers, databases, and storage units. Each microservice may have multiple instances. Therefore, an incorrect configuration may lead to several types of errors. This subcategory contains 16 causes for different types of microservices issues, in which the leading two causes are \textsc{incorrect configuration setting} (e.g., “\textit{InetAddress is null before getting IP or hostname, \#932}”) and \textsc{wrong connection closure} (e.g., “\textit{Middleware doesn't end the request by calling req.send, \#1228}”). \item \textit{Server and Access Problem} (92, 4.05\%): Each microservice acts as a miniature application that communicates with the others. We need to configure the infrastructure layers of the microservices system for sharing different types of resources. A poor configuration may lead to accessibility problems for servers and other resources, which brings multiple issues in microservices systems.
In this subcategory, we identified 13 types of causes, in which the leading three types of causes are \textsc{transient failure} (e.g., “\textit{Transient failure to get the dependency from the provider, \#1289}”), \textsc{service registry error} (e.g., “\textit{The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include', \#530}”), and \textsc{wrong communication protocol} (e.g., “\textit{Hostnames and IP addresses metric names are unusual and inconvenient, \#1564}”). A generic sketch of the usual mitigation for transient failures follows this list. \end{itemize}
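Transient failures of the kind listed above (e.g., a dependency that is briefly unreachable) are commonly mitigated by retrying with a backoff delay. The Java sketch below is a generic illustration under our own assumptions and is not taken from any of the studied systems; it assumes at least one attempt and that the wrapped operation is safe to repeat:

\begin{verbatim}
import java.util.concurrent.Callable;

// Generic retry-with-exponential-backoff sketch for transient failures.
// Assumes maxAttempts >= 1 and that the operation is safe to repeat.
public final class Retry {
    public static <T> T withBackoff(Callable<T> op, int maxAttempts,
                                    long baseDelayMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e; // assume the failure may be transient and retry
                if (attempt < maxAttempts) {
                    // Sleep 1x, 2x, 4x, ... the base delay between attempts.
                    Thread.sleep(baseDelayMillis << (attempt - 1));
                }
            }
        }
        throw last; // all attempts exhausted: surface the last failure
    }

    public static void main(String[] args) throws Exception {
        // Usage example: the operation fails twice, then succeeds.
        int[] calls = {0};
        String result = withBackoff(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient failure " + calls[0]);
            return "ok after " + calls[0] + " attempts";
        }, 5, 100);
        System.out.println(result);
    }
}
\end{verbatim}

Blind retries can, of course, amplify load on an already struggling dependency; in practice, retrying is combined with idempotent operations and with the circuit breaker pattern sketched earlier.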
\textbf{4. Legacy Versions, Compatibility, and Dependency (LC\&D) Problem} (222/2267, 9.79\%): This category represents a broad range of causes arising from outdated repositories, applications, documentation versions, development and deployment platforms, APIs, libraries, and packages. One of the interviewees also mentioned several causes regarding microservices issues, especially compatibility and dependency. One representative quotation is depicted below. \faHandORight{} “\textit{Along with the legacy code version, some of the critical reasons for the microservices issues are no clear strategy for code repository and branching, the mix of technologies (each team uses its way of development), dependencies on other services which are not released yet but going for integration and load testing, development pace varies from team to team, lack of centralized release manager, incompatibility of a new version of the service with previous services}” (\textbf{P8, DevOps Consultant}). We identified and classified 28 types of causes in five subcategories (see Table \ref{tab:CausesTaxnomey} and the Cause Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Compatibility and Dependency} (59, 2.60\%): A typical microservices system consists of several independent services running on multiple servers or hosts \cite{waseem2022}. However, some microservices also depend on other microservices to complete business operations. Usually, practitioners ensure the compatibility (e.g., backward compatibility) of each microservice with previous versions of the microservices system during upgrading. In this subcategory, we identified 5 types of causes, in which the leading two types of causes are \textsc{compatibility error} (e.g., “\textit{Error NU1202 Package Newtonsoft.Json 11.0.2 is not compatible with netcoreapp2.0, \#560}”) and \textsc{outdated dependency} (e.g., “\textit{Dependency is Not up-to-date, \#2393}”). \item \textit{Outdated and Inconsistent Repositories} (53, 2.33\%): This subcategory gathers the causes of issues that occur when the local repository is out of sync with the online code repository of the version control system. The most frequent causes in this subcategory are \textsc{old repository version} (e.g., “\textit{Version is not upgraded, \#2363}”), \textsc{old dev branch} (e.g., “\textit{Using commit from the old DEV branch, \#1280}”), and \textsc{version conflicts} (e.g., “\textit{Version incompatibility during upgrade, \#2583}”). \item \textit{Outdated Application and Documentation Version} (45, 1.98\%): We identified several causes arising from using outdated application and documentation versions in the selected systems, which are mainly related to \textsc{document not updated} (e.g., “\textit{Kubernetes instructions are out-of-date, \#2397}”) and \textsc{deprecated software version} (e.g., “\textit{Deprecated Kafka, \#2232}”). \item \textit{Outdated Development and Deployment Platforms} (38, 1.67\%): Development and deployment platforms help developers build, test, and deploy microservices systems efficiently. We identified 10 types of causes in this subcategory, and among them \textsc{old kubernetes version} (e.g., “\textit{Abandoned/outdated k8s manifests version, \#2430}”) and \textsc{old visual studio version} (e.g., “\textit{Not able to debug this application in Visual Studio Code old version, \#1269}”) are the top two leading types of causes. \item \textit{Outdated APIs, Libraries, and Packages} (27, 1.09\%): This subcategory covers the causes arising from the usage of outdated APIs, libraries, and packages. The most frequent types of causes in this subcategory are \textsc{outdated version of library} (e.g., “\textit{Old libraries included in Spinnaker, \#2457}”) and \textsc{old version of package} (e.g., “\textit{Need to updates and test Ocelot API Gateway project/image to last version v12.0, \#2440}”). \end{itemize} \textbf{5. Service Design and Implementation Anomalies (SD\&IA)} (174/2267, 7.67\%): This category covers the causes of issues that arise when microservices practitioners cannot address the associated complexity of distributed systems at the design level. The interviewees also mentioned several causes related to service design and implementation anomalies, and one representative quotation is depicted below. \faHandORight{} “\textit{According to my experience, the primary reason behind the design issues is the distributed nature of microservices systems, which is becoming increasingly complicated with growing systems—especially where multiple subsystems also need to be integrated}” (\textbf{P7, Software Engineer}). We identified and classified 17 types of SD\&IA causes in 2 subcategories (see Table \ref{tab:CausesTaxnomey} and the Cause Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Code Design Anomaly} (130, 5.73\%): Code design anomalies refer to poorly written code that may lead to several problems (e.g., difficulties in maintenance or future enhancements) in microservices systems. Our taxonomy contains 12 types of causes in this subcategory, and among them \textsc{poor code readability} (e.g., “\textit{Lack of consistency in the naming of some Projects in the solution, \#2203}”), \textsc{messy code} (e.g., “\textit{Too much duplication in code, \#2238}”), and \textsc{data clumps} (e.g., “\textit{Required String parameter 'version' is not present, \#1159}”) are the top three types of causes. \item \textit{System Design Anomaly} (44, 1.94\%): System design anomalies refer to poorly designed microservices architecture that may lead to maintainability, scalability, and performance issues. This subcategory covers 5 types of causes. Among them, the top three types of causes are \textsc{wrong dependencies chain} (e.g., “\textit{Service A requires module A. If service A changed, the runner reloads, but if module A changed, the runner does not reload, \#1873}”), \textsc{lack of asrs} (e.g., “\textit{The system must bind the specific version/tag of the docker image artifact specified in the Jenkins stage, \#2147}”), and \textsc{wrong application decomposition} (e.g., “\textit{The code has bugs and is inconsistent with a regular Ordering Business Domain due to the incorrect separation of services, \#1022}”). \end{itemize}
\textbf{6. Poor Security Management (PSM)} (126/2267, 5.55\%): Microservices systems are distributed over data centres, cloud providers, and host machines. The security of microservices systems is a multi-faceted problem that requires a layered solution to cope with various types of vulnerabilities \cite{yarygina2018overcoming}. The interview participants also mentioned several causes of security issues, and one representative quotation is depicted below. \faHandORight{} “\textit{I think the basic reasons behind the security issues are poor understanding of microservices architecture, large attack surfaces (many distributed points), error-prone encryption techniques while services are communicating, and insecure physical devices}” (\textbf{P5, Solution Architect}). We identified and classified 21 types of PSM causes in 3 subcategories (see Table \ref{tab:CausesTaxnomey} and the Cause Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Coding Level} (55, 2.42\%): This subcategory collects the causes where strict security principles and practices are not followed to prevent potential vulnerabilities while writing the code of microservices systems. This subcategory covers 7 types of coding level causes, and among them, the top three types of causes are \textsc{unsafe code} (e.g., “\textit{Security plugin bug, \#1282}”), \textsc{malformed input} (e.g., “\textit{Need to validate Password in HashUtil to accept original Password as char[] instead of String, \#1602}”), and \textsc{wrong implementation of security api} (e.g., “\textit{API Key Security issue was Not defined properly, \#1626}”). \item \textit{Communication Level} (40, 1.76\%): Given the polyglot and distributed nature of microservices systems, practitioners need to secure the inter-microservices communication. This subcategory covers 10 types of communication level causes, and among them, the top two types of causes are \textsc{security dependencies} (e.g., “\textit{Poor security between component communication, \#1651}”) and \textsc{wrong access control} (e.g., “\textit{JWT validation keys are not refreshed, \#1624}”). \item \textit{Application Level} (29, 1.27\%): The application level of security refers to security practices implemented at the interface between an application and various components (e.g., databases, containers) of microservices systems. This subcategory only contains 3 types of causes, which are \textsc{insecure configuration management} (e.g., “\textit{The unauthorized client issue occurs because the service redirects URI, \#2378}”), \textsc{missing security features} (e.g., “\textit{Missing authToken, \#225}”), and \textsc{violation of the security policies} (e.g., “\textit{Violates the security policy directive like script-src unsafe-inline, \#594}”). \end{itemize} \textbf{7. Insufficient Resources (IR)} (93/2267, 4.10\%): Microservices systems are at risk of failing to deliver the required outcome without sufficient resources. We identified and classified 4 types of IR causes in two subcategories (see Table \ref{tab:CausesTaxnomey} and the Cause Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Memory Issue} (68, 2.99\%): Microservices systems are developed by using multiple languages (e.g., Java, Python, C++) and platforms (e.g., containers, virtual machines). Some languages and platforms consume more memory than others.
For instance, C/C++ consumes less memory than Java, and Python and Perl consume less memory than C/C++ \cite{prechelt2000empirical}. This subcategory covers 2 types of causes, which are \textsc{limited memory for process execution} (e.g., “\textit{The staging cluster uses 8GB disks on the machines, and during the test and build, it constantly run out of disk space, \#438}”) and \textsc{ide problem with memory} (e.g., “\textit{I noticed another problem with Visual Studio 2019 (i.e., ‘404 - not found’), when docker-compose start with the WebSPA application by assigning 4gig to docker, \#1949}”). \item \textit{Lack of Human Resources, Tools, and Platforms} (25, 1.10\%): This subcategory deals with the causes related to tool and platform support for microservices systems. Only 3 types of causes related to this subcategory are identified, including \textsc{lack of tool support} (e.g., “\textit{Lack of support for different encoding and transports, \#2618}”) and \textsc{deployment platform problem} (e.g., “\textit{The scripts in the Vagrantfile are a bit too conservative with starting docker, \#2014}”). \end{itemize} \textbf{8. Fragile Code (FC)} (24/2267, 1.05\%): Fragile code refers to code that is difficult to change, where a minor modification may break the service or module. We identified and classified 6 types of FC causes in two subcategories (see Table \ref{tab:CausesTaxnomey} and the Cause Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Poor Implementation of Code} (20, 0.88\%): This subcategory gathers the causes of poor code quality, which exhibits the buggy behaviour of microservices systems. We identified 3 types of causes in this subcategory, including \textsc{poor object-oriented design} (e.g., “\textit{When importing the same type for multiple attributes within a design type, it generates conflicting methods with identical names, \#1762}”), \textsc{poor code reusability} (e.g., “\textit{This issue was introduced by a change in PR \#2324 and is caused by chunk slices being reused in Thanos and keeping references in Cortex instead of copying them. Sorry for that!, \#2013}”), and \textsc{unnecessary code} (e.g., “\textit{Revisit hooks.js to remove unnecessary fixture, \#1184}”). \item \textit{Poor Code Flexibility} (11, 0.48\%): Code flexibility is important for long-lived microservices project code bases; however, we found a few issues that occurred due to poor code flexibility. The 3 types of causes in this subcategory are \textsc{divergent changes} (e.g., “\textit{when importing the same type for multiple attributes within a design type, it generates conflicting methods with identical names, \#1762}”), \textsc{delayed refactoring} (e.g., “\textit{Need to add support to enable Shielded VM related configurations for GCP instance templates, \#1761}”), and \textsc{poorly organized code} (e.g., “\textit{Duplicated payload definition and validation method for 2 methods with same payload, \#2235}”).
\end{itemize} {\renewcommand{\arraystretch}{1} \begin{table*}[!h] \centering \scriptsize \caption{Taxonomy of causes of issues in microservices systems} \label{tab:CausesTaxnomey} \begin{tabular}{|c|l|l|} \hline \multicolumn{1}{|l|}{\textbf{Category of Causes}} & \textbf{Subcategory of Causes} & \textbf{Type of Causes} \\ \hline \multirow{12}{*}{\begin{tabular}[c]{@{}c@{}}General Programming \\Error (GPE) (860)\end{tabular}} & \multirow{2}{*}{Compile Time Error (377)} & Semantic Error (205) \\ \cline{3-3} & & Syntax Error in Code (112) \\ \cline{2-3} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Erroneous Method Definition and Execution (262)\end{tabular}} & Lack of Cohesion in Methods (38) \\ \cline{3-3} & & Long Message Chain (36) \\ \cline{2-3} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Incorrect Naming and Data Type (157)\end{tabular}} & Wrong Data Conversion (39) \\ \cline{3-3} & & Wrong Data Type (20) \\ \cline{2-3} & \multirow{2}{*}{Testing Error (25)} & Incorrect Test Case (20) \\ \cline{3-3} & & Wrong Implementation of Security Pattern (5) \\ \cline{2-3} & \multirow{2}{*}{Poor Documentation (22)} & Typo in Documents (9) \\ \cline{3-3} & & Wrong Example in Documents (7) \\ \cline{2-3} & \multirow{2}{*}{Query and Database Issue (13)} & Wrong Query Parameters (6) \\ \cline{3-3} & & Missing Query Parameters (3) \\ \hline \multirow{8}{*}{\begin{tabular}[c]{@{}c@{}}Missing Features \\and\\ Artifacts (MFA) (386)\end{tabular}} & \multirow{2}{*}{Missing Features (186)} & Missing Required System Features (111) \\ \cline{3-3} & & Missing Security Features (37) \\ \cline{2-3} & \multirow{2}{*}{Missing Documentation and Tool Support (104)} & Missing Readme File (88) \\ \cline{3-3} & & Missing Tool Support (14) \\ \cline{2-3} & \multirow{2}{*}{Missing Packages and Files (68)} & Missing Resource (41) \\ \cline{3-3} & & Missing Required Package (15) \\ \cline{2-3} & \multirow{2}{*}{Missing Variables (28)} & Missing Environment Variable (14) \\ \cline{3-3} & & Missing Properties (8) \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Invalid Configuration \\and\\ Communication (ICC) Problem (382)\end{tabular}} & \multirow{2}{*}{Incorrect Configuration (285)} & Incorrect Configuration Setting (227) \\ \cline{3-3} & & Wrong Connection Closure (24) \\ \cline{2-3} & \multirow{2}{*}{Server and Access Problem (92)} & Transient Failure (21) \\ \cline{3-3} & & Service Registry Error (24) \\ \hline \multirow{10}{*}{\begin{tabular}[c]{@{}c@{}}Legacy Versions, Compatibility,\\ and\\ Dependency (LC\&D) Problem (222)\end{tabular}} & \multirow{2}{*}{Compatibility and Dependency (59)} & Compatibility Error (42) \\ \cline{3-3} & & Outdated Dependency (9) \\ \cline{2-3} & \multirow{2}{*}{Outdated and Inconsistent Repositories (53)} & Old Repository Version (38) \\ \cline{3-3} & & Old DEV Branch (7) \\ \cline{2-3} & \multirow{2}{*}{Outdated Application and Documentation Version (45)} & Document not Updated (33) \\ \cline{3-3} & & Deprecated Software Version (6) \\ \cline{2-3} & \multirow{2}{*}{Outdated Development and Deployment Platforms (38)} & Old Kubernetes Version (7) \\ \cline{3-3} & & Old IDE Version (6) \\ \cline{2-3} & \multirow{2}{*}{Outdated APIs, Libraries, and Packages (27)} & Outdated Version of Library (15) \\ \cline{3-3} & & Old Version of Required Package (11) \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Service Design\\ and\\ Implementation Anomalies (SD\&IA) (174)\end{tabular}} & \multirow{2}{*}{Code Design Anomaly (125)} & Poor Code Readability (54) \\ \cline{3-3} & & Messy Code (35) 
\\ \cline{2-3} & \multirow{2}{*}{System Design Anomaly (44)} & Wrong Dependencies Chain (16) \\ \cline{3-3} & & Lack of ASRs (5) \\ \hline \multirow{6}{*}{\begin{tabular}[c]{@{}c@{}}Poor Security\\ Management (PSM) (126)\end{tabular}} & \multirow{2}{*}{Coding Level (55)} & Unsafe Code (23) \\ \cline{3-3} & & Malformed Input (12) \\ \cline{2-3} & \multirow{2}{*}{Communication Level (42)} & Security Dependencies (8) \\ \cline{3-3} & & Wrong Access Control (5) \\ \cline{2-3} & \multirow{2}{*}{Application Level (29)} & Insecure Configuration Management (17) \\ \cline{3-3} & & Missing Security Features (7) \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Insufficient Resources\\ (IR) (93)\end{tabular}} & \multirow{2}{*}{Memory Issue (68)} & Limited Memory for Process Execution (66) \\ \cline{3-3} & & IDE Problem with Memory (2) \\ \cline{2-3} & \multirow{2}{*}{Lack of Human Resources, Tools and Platforms (25)} & Lack of Tool Support (7) \\ \cline{3-3} & & Deployment Platform Problem (5) \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}c@{}}Fragile Code\\ (FC) (24)\end{tabular}} & \multirow{2}{*}{Poor Implementation of Code (20)} & Poor Object Oriented Design (6) \\ \cline{3-3} & & Poor Code Reusability (5) \\ \cline{2-3} & \multirow{2}{*}{Poor Code Flexibility (11)} & Tightly Coupled Service Components (7) \\ \cline{3-3} & & Divergent Changes (9) \\ \hline \end{tabular} \end{table*} } \begin{tcolorbox} [sharp corners, boxrule=0.1mm,] \small \textbf{Key Findings of RQ2}: We found that not all the issue discussions provide information about their causes; we identified 2,225 issue discussions containing information about the causes, with 31 causes mentioned by the interviewees and 11 causes indicated by the survey participants, which makes 2,267 cause instances in total. The cause taxonomy of microservices issues consists of 8 categories, 26 subcategories, and 228 types. The majority of causes are related to General Programming Errors (37.93\%), Missing Features and Artifacts (17.02\%), and Invalid Configuration and Communication (16.85\%). \end{tcolorbox} \subsection{Solutions of Issues (RQ3)} \label{sec:results_RQ3} The taxonomy of solutions for microservices issues is provided in Table \ref{tab:SolutionsTaxnomey}. It is worth mentioning that not all the issue discussions provide information about their solutions; we identified 1,899 issue discussions that contain such information. The taxonomy of solutions is derived by mining developer discussions (i.e., 1,899 solutions), conducting practitioner interviews (i.e., 36 solutions, see Section \ref{sec:Ext&Syn}), and conducting a survey (i.e., one solution instance, see Section \ref{sec:results_RQ4}). Hence, we got a total of 1,936 solutions. We identified a total of 196 types of solutions that can be classified into 8 categories and 35 subcategories. Due to space limitations, we only list the top two types of solutions for each subcategory in Table \ref{tab:SolutionsTaxnomey}. The details of the types of solutions can be found in the dataset \cite{replpack}. The results show that Fix Artifacts (1,056 out of 1,936), Add Artifacts (360 out of 1,936), and Modify Artifacts (212 out of 1,936) are the top three categories of solutions. Each solution category is briefly discussed below. \textbf{1. Fix Artifacts} (1056/1936, 54.54\%): During the analysis of developer discussions about microservices issues, we identified that most developers did not explicitly mention any solution to the problems.
They fixed the issue in the local repository and sent the fixed code (e.g., through a pull request) to the maintainer of the public repository. In this case, from the developer discussions, we could not find exactly what they added, removed, or modified in the project to fix a specific issue. Therefore, we named this category Fix Artifacts. We identified and classified 25 types of fix artifacts solutions in 4 subcategories (see Table \ref{tab:SolutionsTaxnomey} and the Solution Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Fix Code Issue} (912, 47.10\%): The solutions in this subcategory are related to the direct repair of the source code by developers. We identified 14 types of solutions in this subcategory, and among them \textsc{fix source code of issues} (e.g., “\textit{Fix syntax error in pipeline editor, \#1157}”), \textsc{fix illegal symbols (syntax) in code} (e.g., “\textit{Fix syntax error, \#1123}”), and \textsc{clean code} (e.g., “\textit{Clean code duplicate codes to improve the performance, \#553}”) are the top three types of solutions. \item \textit{Fix Testing Issue} (107, 5.52\%): This subcategory covers the types of solutions with which testing issues have been fixed. We collected 2 types of solutions in this subcategory, which are \textsc{debug code} (e.g., “\textit{Correct the logic for desired output, \#1046}”) and \textsc{add test services in containers} (e.g., “\textit{Add container level tests for each service, \#23}”). \item \textit{Fix Build Issue} (33, 1.70\%): Build systems are essential for developing, deploying, and maintaining microservices systems; however, build failures frequently occur across the development life cycle, bringing non-negligible costs to microservices system development. We collected 3 types of solutions, including \textsc{fix errors in build files} (e.g., “\textit{Go to edit pipeline as json -->remove the line `imageSource': `priorStage', -->save, \#1156}”), \textsc{correct build type} (e.g., “\textit{Override the identifier value of the Goa package in runtime, \#702}”), and \textsc{hide fail status} (e.g., “\textit{Hide fail fast status code of the pre-configured build script file, \#2019}”). \item \textit{Fix GUI Issue} (4, 0.20\%): We collected two types of solutions related to repairing the graphical user interface of microservices systems: \textsc{fix backwards incompatible ui} (e.g., “\textit{Fix backwards incompatible interface, \#1206}”) and \textsc{fix broken screenshot} (e.g., “\textit{Fix images broken on Readme after sub module, \#1210}”). \end{itemize} \textbf{2. Add Artifacts} (360/1936, 18.59\%): This category covers the types of solutions for addressing missing features, packages, files, variables, documentation, and tool support issues. The interview participants also mentioned that they adopted solutions for addressing several types (e.g., CI/CD, Security) of issues that have occurred because of missing features, and one representative quotation is depicted below. \faHandORight{} “\textit{To address these issues, we recently adopted the `service mesh' approach. This approach helps address security, latency and scalability, fault identification, and runtime error detection in microservices systems. A service mesh pattern can also provide features for a service health check with the lowest latency to address errors and fault identification issues}” \textbf{(P7, Software Engineer)}.
We identified and classified 33 types of solutions in 10 subcategories (see Table \ref{tab:SolutionsTaxnomey} and the Solution Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Add Features and Services} (131, 6.76\%): Developers added required features and services to microservices systems. System features and services refer to a process that accepts one or more inputs and returns outputs for particular system functionality \cite{paolucci2002semantic}. We identified 5 types of solutions in this subcategory, and among them, \textsc{add missing features} (e.g., “\textit{Allow webhooks to trigger a build, \#80}”), \textsc{add security features} (e.g., “\textit{Add missing security feature severity to Status, \#225}”), and \textsc{add communication protocols} (e.g., “\textit{Add Http2Client, \#283}”) are the top three types of solutions. \item \textit{Add Files, Templates, and Interfaces} (39, 2.01\%): This subcategory covers the solutions for adding missing files, templates, and interfaces in microservices systems. We identified 3 types of solutions, including \textsc{add files} (e.g., “\textit{Add opentelemetry-go file to address jaeger exporter missing critical data issue, \#177}”), \textsc{add templates} (e.g., “\textit{Add missing response Templates in Swagger output, \#330}”), and \textsc{add interfaces} (e.g., “\textit{Add discovery interface between service and the account, \#378}”). \item \textit{Add Methods and Modules} (33, 1.70\%): We identified the solutions in which developers mainly added or repaired the missing methods and modules to address several types (e.g., Compilation, Service Execution and Communication, Build) of issues in microservices systems. We identified 3 types of solutions, including \textsc{add constructor} (e.g., “\textit{Add constructor method that returns an initialized instance of an application generator, \#22}”), \textsc{add parameters in methods} (e.g., “\textit{Parameterize default client.yml in client module, \#950}”), and \textsc{add security certificates} (e.g., “\textit{Add TLS certificate and OAuth2 certificate SHA1 fingerprint to the /server/info output, \#522}”). \item \textit{Add Test Cases} (28, 1.44\%): We identified the solutions in which developers added test cases to address testing issues of microservices systems. We identified 4 types of solutions in this subcategory, in which the top three are \textsc{add test cases to validate service} (e.g., “\textit{Add Test case for ClusterMembership service to return observable of cluster update events, \#262}”), \textsc{generate correct test cases} (e.g., “\textit{Generate validation test case function for grpc client, \#267}”), and \textsc{add test cases to validate data type} (e.g., “\textit{Add a test case with two generic data types for the service module validation, \#12}”). \item \textit{Add Classes and Packages} (27, 1.39\%): This subcategory covers the solutions in which developers added classes and packages to address the microservices issues. We identified 3 types of solutions, including \textsc{add packages} (e.g., “\textit{Add UUID package, \#208}”), \textsc{add properties} (e.g., “\textit{Add missing docker properties to trigger model, \#65}”), and \textsc{add objects} (e.g., “\textit{Implement a base ValueObject type that is hiding the Id with a ‘Shadow Primary Key’, \#201}”).
\item \textit{Implement Patterns and Strategies} (23, 1.18\%): The solutions in this subcategory are based on interviewees’ feedback, in which they mentioned 19 MSA patterns and strategies for addressing microservices design issues. The most frequently mentioned patterns and strategies are \textsc{service mesh architecture}, \textsc{service instance per container}, and \textsc{serverless deployment}. \item \textit{Add Data Types, Identifiers, and Loops} (21, 1.08\%): This subcategory contains the solutions for addressing errors at the program initialization level. We identified 5 types of solutions, of which the top three are \textsc{add identifiers} (e.g., “\textit{Unique identifiers names have been added to avoid panic during bootstrap process, \#1331}”), \textsc{add data types} (e.g., “\textit{Use a string data type to send raw JSON, \#2063}”), and \textsc{add query parameters} (e.g., “\textit{Add context.Context parameter to the GetDependencies interface and implement accordingly to address Jaeger query issue, \#27}”). \item \textit{Add Dependencies and Metrics} (7, 0.36\%): The issues related to missing metrics (e.g., monitoring) and dependencies can be resolved by adding the required metrics and dependencies to the microservices systems. We identified 3 types of solutions, including \textsc{add monitoring metrics} (e.g., “\textit{Add metrics and health check, \#57}”), \textsc{add dependencies} (e.g., “\textit{Add hal deploy dependencies, \#64}”), and \textsc{add stack trace} (e.g., “\textit{Add missing collect Status and stackTrace, \#304}”). \item \textit{Add APIs, Namespaces, and Plugins} (7, 0.36\%): APIs, namespaces, and plugins are an essential part of any microservices system. This subcategory covers the solutions for addressing missing APIs, namespaces, and plugins. We identified 3 types of solutions in this subcategory, including \textsc{add apis} (e.g., “\textit{Add Payment API orchestrator missed in compose-override, \#273}”), \textsc{add namespaces} (e.g., “\textit{Add required namespaces for Helm Bake, \#379}”), and \textsc{add plugins} (e.g., “\textit{Add a plugin for Windows environment, \#386}”). \item \textit{Add Logs} (4, 0.20\%): Logging activity is specifically related to monitoring microservices systems. We identified 4 types of solutions that have been used to address monitoring issues (e.g., missing logs), of which the top three are \textsc{add trace id} (e.g., “\textit{Add a trace ID for multi-trace searches, \#1546}”), \textsc{add trace logging} (e.g., “\textit{add trace logging to help debug cors rejections in CorsUtil master, \#293}”), and \textsc{add header logging} (e.g., “\textit{Jaeger clients add uber trace and uberctx headers id for propagating trace context, \#45}”). \end{itemize} \textbf{3. Modify Artifacts} (212/1936, 10.95\%): Besides adding new artifacts to the existing system to address the microservices issues, we also identified a large number of solutions in which the developers explicitly mentioned how they modified modules, services, packages, APIs, scripts, methods, objects, data types, identifiers, databases, and documentation to address the microservices issues. We identified and classified 31 types of solutions in 5 subcategories (see Table \ref{tab:SolutionsTaxnomey} and the Solution Taxonomy sheet in \cite{replpack}). Each of them is briefly described below.
\begin{itemize} \item \textit{Modify Methods and Objects} (78, 4.03\%): We found that most developers corrected the properties (e.g., method calls, operations, parameters) associated with methods and objects of classes to address various types of issues (e.g., code smells, excessive literals) in microservices systems. We identified 13 types of solutions in this subcategory, among them \textsc{correct method definition} (e.g., “\textit{Pass the correct .aws credentials and AWS\textunderscore PROFILE to the hal container method, \#951}”), \textsc{redefine method operations} (e.g., “\textit{Redefine the operations in a class method for supporting Agent level tags, \#948}”), and \textsc{correct method parameters} (e.g., “\textit{parameterize default client.yml in a client module, \#950}”) are the top three types of solutions. \item \textit{Modify Packages, Modules, and Documentation} (64, 3.30\%): This subcategory covers the packages, modules, and documentation that are modified to address the microservices issues. We identified 4 types of solutions in this subcategory, and among them \textsc{improve documentation} (e.g., “\textit{Resolve 'Create Stack' issue and update docs, \#336}”) and \textsc{update packages} (e.g., “\textit{updated package to support the latest version of K8s in both local and aks related deployment, \#2060}”) are the two leading types of solutions. \item \textit{Modify Data Types and Identifiers} (51, 2.63\%): This subcategory covers the data types and identifiers that are modified to address the microservices issues. We collected 5 types of solutions, and among them \textsc{correct naming} (e.g., “\textit{Use primaryName/previousName instead of primaryClass/previousClass for Orca DualExecutionRepository, \#693}”), \textsc{correct data types} (e.g., “\textit{Type cast default values are set in the code, \#650}”), and \textsc{correct nil value} (e.g., “\textit{when calling client command with a pointer containing a required parameter, correctly check for nil values, \#696}”) are the top three types of solutions. \item \textit{Modify APIs, Services, and Scripts} (13, 0.67\%): This subcategory covers the solutions in which APIs, services, and scripts are modified to address the microservices issues. We identified 6 types of solutions in this subcategory, and among them \textsc{update scripts} (e.g., “\textit{Updated scripts to support latest version of helm and K8s, \#2050}”), \textsc{update syntax} (e.g., “\textit{Update syntex of the configuration in config file, \#2044}”), and \textsc{modify services} (e.g., “\textit{Properly handle services with method named `Use', \#2638}”) are the top three types of solutions. \item \textit{Modify Databases} (6, 0.31\%): This subcategory covers the solutions in which database query strings and tables are modified to address the microservices issues. We identified 2 types of solutions related to this subcategory, including \textsc{modify query strings} (e.g., “\textit{Properly handle array query string parameters, \#2636}”) and \textsc{modify database tables} (e.g., “\textit{Updated table deletes to ignore empty prefixes, \#2053}”). \end{itemize} \textbf{4. Remove Artifacts} (39/1936, 2.01\%): This category covers the solutions in which artifacts are removed to address several types of microservices issues. We identified and classified 14 types of solutions in 5 subcategories (see Table \ref{tab:SolutionsTaxnomey} and the Solution Taxonomy sheet in \cite{replpack}). Each of them is briefly described below.
\begin{itemize} \item \textit{Remove Data Types, Methods, Objects, and Plugins} (20, 1.03\%): This subcategory covers the solutions in which data types, methods, objects, and plugins are removed to address the microservices issues. We identified 4 types of solutions in this subcategory, including \textsc{remove data types} (e.g., “\textit{remove non-primitive data types from query string parameters, \#1902}”), \textsc{remove methods} (e.g., “\textit{Remove the userState method from query () in order to avoid from tens of thousands of goroutines, \#967}”), \textsc{remove conflicting plugins} (e.g., “\textit{Exclude conflicting plugin for the docker-compose build, \#996}”), and \textsc{remove objects} (e.g., “\textit{Dispose direct instantiated objects in catalog service, \#970}”). \item \textit{Remove Dependencies and Databases} (8, 0.41\%): This subcategory covers the solutions in which dependencies and database images are removed to address the microservices issues. We identified 2 types of solutions in this subcategory, including \textsc{remove conflicting dependencies} (e.g., “\textit{Remove spark job processes that create conflicting dependencies in the code, \#993}”) and \textsc{remove database images} (e.g., “\textit{Delete the SQL server image from local Docker, \#955}”). \item \textit{Remove Logs} (5, 0.25\%): Logging is required to track the communication and identify failures in microservices systems. This subcategory covers the solutions in which log messages and transaction IDs are removed to address the microservices issues. We identified 3 types of solutions in this subcategory, including \textsc{eliminate log messages} (e.g., “\textit{Eliminate informational log message to avoid from querying/polling to the Ordering database every time, \#995}”), \textsc{remove transaction id for logging} (e.g., “\textit{Delete transaction Id to IntegrationEventLogEntry, \#13}”), and \textsc{unregister from registry} (e.g., “\textit{Calling consul directly with Http2Client instead of consul client, unregister it from consul registry, \#2042}”). \item \textit{Remove Documentation} (5, 0.25\%): Several microservices issues were addressed by removing unnecessary or wrong information from project documentation. We identified 2 types of solutions in this subcategory, including \textsc{remove unnecessary information} (e.g., “\textit{Need to remove the unnecessary variables such as ESHOP\textunderscore AZURE\textunderscore XXX, \#2195}”) and \textsc{remove empty tags} (e.g., “\textit{don’t create tags w/ empty name for internal Zipkin spans, \#1203}”). \end{itemize} \textbf{5. Manage Infrastructure} (160/1936, 8.26\%): This category captures the solutions that are based on efficient resource utilization for addressing the microservices issues. We identified and classified 20 types of solutions in 2 subcategories (see Table \ref{tab:SolutionsTaxnomey} and the Solution Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Manage Storage} (136, 7.02\%): Typical microservices systems store their data in dedicated databases for each service. We found several microservices issues that occurred due to the lack of data storage. We identified 9 types of solutions in this subcategory, of which the top three are \textsc{allocate storage} (e.g., “\textit{Allocate sufficient storage for process execution.
e.g., 4GB in docker, \#449}”), \textsc{clean cache} (e.g., “\textit{clean the npm cache by using the command: npm cache clean --force, \#435}”), and \textsc{extend memory} (e.g., “\textit{Extend memory for processes execution, \#508}”). \item \textit{Manage Networking} (24, 1.23\%): Networking is complicated in microservices systems due to the need to manage an explosion of service connections over the distributed network. We collected 10 types of solutions, of which the top three are \textsc{change proxy settings} (e.g., “\textit{IPs was in the wrong place, moved into appropriate location by changing the proxy, \#526}”), \textsc{server resource management} (e.g., “\textit{use code generation to handle CORS for managing required resources, \#29}”), and \textsc{disable server groups} (e.g., “\textit{during a red/black we first disable old server groups and then optionally scale them down to 0 instances, \#959}”). \end{itemize} \textbf{6. Manage Configuration and Execution} (50/1936, 2.58\%): Managing configuration and execution enables developers to track changes in microservices systems and their consuming applications over time, for example, the ability to track the version history of configuration changes for multiple instances of microservices systems. We identified and classified 13 types of solutions in 2 subcategories (see Table \ref{tab:SolutionsTaxnomey} and the Solution Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Manage Execution} (29, 1.49\%): This subcategory collects the solutions for issues related to managing commands for executing and configuring microservices systems. We identified 5 types of solutions in this subcategory, and among them \textsc{execution and configuration management} (e.g., “\textit{Correct the information in the related configuration file for Extend JSONNET library with additional pipeline options, \#705}”) and \textsc{execute multiple commands} (e.g., “\textit{Execute these commands to address the issue: `hal config deploy edit --account-name my-k8s-account' and `hal config deploy edit --type distributed', \#1007}”) are the top two types of solutions. \item \textit{Manage Configuration} (21, 1.08\%): Managing the configuration of each microservice and its instances separately is a tedious and time-consuming task. In this subcategory, we identified 8 types of solutions, and the top three types of solutions are \textsc{documentation for configuration management} (e.g., “\textit{Add docs for dump configuration, \#33}”), \textsc{change configuration files} (e.g., “\textit{Change secret.yml loading from SecretConfig, \#532}”), and \textsc{correct uuid} (e.g., “\textit{correct Goa uuid, \#1197}”). \end{itemize} \textbf{7. Upgrade Tools and Platforms} (47/1936, 2.42\%): Updating the tools and platforms used to develop and manage microservices systems helps developers address several issues caused by old or legacy versions, and protects microservices systems from security breaches. The interviewees also mentioned a few solutions to address the microservices issues with the help of tools, and one representative quotation is mentioned below. \faHandORight{} “\textit{Normally, we adopt the solutions according to the type of issues. It could include adding several tools, importing different packages, and adopting successful practices. For example, we improve security by using several open-source API gateways such as OKTA, Spring Cloud gateway, JWT token, and Spring Security.
To address service communication issues, we mainly use different ways of communication according to the needs of projects, such as Kafka, RabbitMQ, and Service Mesh. In addition, we automated our continuous integration and delivery process using AWS Code Pipeline. AWS Code Pipeline automates the project release process’s build, test, and deployment phases}” \textbf{(P1, Software Architect, Developer)}. We identified and classified 21 types of solutions in 2 subcategories (see Table \ref{tab:SolutionsTaxnomey} and the Solution Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Upgrade Deployment, Scaling, and Management Platforms} (39, 2.01\%): This subcategory gathers the solutions for addressing the microservices issues related to the deployment, scaling, and management of CI/CD tools and platforms. We identified 16 types of solutions, and the top three types of solutions are \textsc{upgrade container logging} (e.g., “\textit{upgrade init container logs to Kubernetes v2 provider container logs, \#25}”), \textsc{upgrade load balancer} (e.g., “\textit{need to upgrade broker load balancer for event broadcasting, \#978}”), and \textsc{upgrade docker files} (e.g., “\textit{upgrade the docker-compose file to version 2.1.101 SDK, \#2555}”). \item \textit{Upgrade Development and Monitoring Tool Support} (8, 0.41\%): Development and monitoring are crucial tasks for developers due to the distributed nature of microservices systems. We identified 5 types of solutions in this subcategory, and the top three types of solutions are \textsc{upgrade kafka flags} (e.g., “\textit{Upgrade Kafka Flags to support Ingester, \#51}”), \textsc{upgrade zipkin thrift} (e.g., “\textit{Upgrade support to Zipkin Thrift as kafka ingestion format, \#295}”), and \textsc{disable tracking} (e.g., “\textit{review all the related code and disable tracking for better performance, \#969}”). \end{itemize} \textbf{8. Import/Export Artifacts} (12/1936, 0.61\%): We found several issues that can be fixed by importing and exporting various artifacts. We identified and classified 3 types of solutions in 2 subcategories (see Table \ref{tab:SolutionsTaxnomey} and the Solution Taxonomy sheet in \cite{replpack}). Each of them is briefly described below. \begin{itemize} \item \textit{Import Artifacts} (9, 0.46\%): This subcategory gathers the solutions related to importing packages and libraries. We identified 2 types of solutions in this subcategory, which are \textsc{import packages} (e.g., “\textit{Import package from the ‘vendor directory’ when the ‘ConvertTo’ is used, \#2082}”) and \textsc{import libraries} (e.g., “\textit{d3 JavaScript library for scalability, \#2081}” to address the scalability issues in microservices systems). \item \textit{Export Artifacts} (3, 0.15\%): In this subcategory, we identified only one type of solution, \textsc{export packages} (e.g., “\textit{Export BaseLogger and add type to logger options for typescript, \#985}”), to fix the Configuration issues in microservices systems (e.g., \textsc{conflict on configuration file names}).
\end{itemize} {\renewcommand{\arraystretch}{1} \begin{table*}[!htb] \centering \scriptsize \caption{Taxonomy of solutions of issues in microservices systems} \label{tab:SolutionsTaxnomey} \begin{tabular}{|l|l|l|} \hline \textbf{Category of Solutions} & \textbf{Subcategory of Solutions} & \textbf{Types of Solutions} \\ \hline \multirow{10}{*}{Fix Artifacts (1056)} & \multirow{2}{*}{Fix Code Issue (912)} & Fix Source Code of Issues (791) \\ \cline{3-3} & & Fix Illegal Symbols (Syntax) in Code (79) \\ \cline{2-3} & \multirow{2}{*}{Fix Testing Issue (107)} & Debug Code (105) \\ \cline{3-3} & & Add Test Services in Containers (2) \\ \cline{2-3} & \multirow{2}{*}{Fix Build Issue (33)} & Fix Errors in Build File (17) \\ \cline{3-3} & & Correct Build Type (8) \\ \cline{2-3} & \multirow{2}{*}{Fix GUI Issue (4)} & Fix Backwards Incompatible UI (3) \\ \cline{3-3} & & Fix Broken Screenshot (1) \\ \hline \multirow{19}{*}{Add Artifacts (360)} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Add Features and Services (144)\end{tabular}} & Add Missing Feature (114) \\ \cline{3-3} & & Add Security Feature (26) \\ \cline{2-3} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Add Interfaces, Templates, and Files (39)\end{tabular}} & Add File (37) \\ \cline{3-3} & & Add Templates (1) \\ \cline{2-3} & \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Add Methods and Modules (33)\end{tabular}} & Add constructor (24) \\ \cline{3-3} & & Add Parameters in Method (8) \\ \cline{3-3} & & Add Security Certificates (1) \\ \cline{2-3} & \multirow{2}{*}{Add Classes and Packages (27)} & Add Packages (15) \\ \cline{3-3} & & Add Properties(11) \\ \cline{2-3} & \multirow{2}{*}{Add Test Cases (28)} & Add Test Cases to Validate Service (21) \\ \cline{3-3} & & Generate Correct Test Case (3) \\ \cline{2-3} & \multirow{2}{*}{Implement Patterns and Strategies (23)} & Service Mesh Architecture (2)\\ \cline{3-3} & & Serverless Deployment (2)\\ \cline{2-3} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Add Data Types, Identifiers, and Loops (21)\end{tabular}} & Add Identifiers (11) \\ \cline{3-3} & & Add Data Types (5) \\ \cline{2-3} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Add Dependencies and Metrics (7)\end{tabular}} & Add Monitoring Metrics (3) \\ \cline{3-3} & & Add Dependencies (2) \\ \cline{2-3} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Add APIs, Namespaces, and Plugins (7)\end{tabular}} & Add APIs (4) \\ \cline{3-3} & & Add Namespaces (2) \\ \cline{2-3} & \multirow{2}{*}{Add Logs (4)} & Add Trace ID (1) \\ \cline{3-3} & & Add General Purpose Logger (1) \\ \hline \multirow{10}{*}{Modify Artifacts (212)} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Modify Methods and Objects (78)\end{tabular}} & Correct Method Definition (51) \\ \cline{3-3} & & Redefine Method Operations (8) \\ \cline{2-3} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Modify Package, Module, and Documentation (64)\end{tabular}} & Improve Documentation (56) \\ \cline{3-3} & & Update Packages (6) \\ \cline{2-3} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Modify Data Types and Identifiers (51)\end{tabular}} & Correct Naming (29) \\ \cline{3-3} & & Correct Data Types (8) \\ \cline{2-3} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Modify APIs, Services, and Scripts(13)\end{tabular}} & Update Scripts (5) \\ \cline{3-3} & & Update Syntax (3) \\ \cline{2-3} & \multirow{2}{*}{Modify Database (4)} & Modify Database Tables (2) \\ \cline{3-3} & & Modify Query Strings (3) \\ \hline \multirow{10}{*}{Remove Artifacts (38)} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Remove Data Types, 
Methods, Objects, and Plugins (20)\end{tabular}} & Remove Conflicting Dependencies (17) \\ \cline{3-3} & & Remove Database Images (1) \\ \cline{2-3} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Remove Dependencies and Databases (8)\end{tabular}} & Remove Conflicting Dependencies (7) \\ \cline{3-3} & & Remove Code Dependencies (1) \\ \cline{2-3} & \multirow{2}{*}{Remove Logs (5)} & Eliminate Log Messages (3) \\ \cline{3-3} & & Remove Transaction ID for Logging (1) \\ \cline{2-3} & \multirow{2}{*}{Remove Documentation (5)} & Remove Unnecessary Information (3) \\ \cline{3-3} & & Remove Empty Tags (1) \\ \hline \multirow{6}{*}{Manage Infrastructure (160)} & \multirow{2}{*}{Manage Storage (82)} & Allocate Storage (77) \\ \cline{3-3} & & Clean Cache (5) \\ \cline{2-3} & \multirow{2}{*}{Manage Networking (24)} & Change Proxy Setting (8) \\ \cline{3-3} & & Server Resource Management (5) \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}l@{}}Manage Configuration \\ and Execution (50)\end{tabular}} & \multirow{2}{*}{Manage Execution (29)} & Manage Configuration Commands (24) \\ \cline{3-3} & & Execute Multiple Commands (2) \\ \cline{2-3} & \multirow{2}{*}{Manage Configuration (21)} & Documentation for Configuration Management (7) \\ \cline{3-3} & & Change Configuration Files (4) \\ \hline \multirow{4}{*}{\begin{tabular}[c]{@{}l@{}}Upgrade Tools\\ and Platforms (43)\end{tabular}} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Upgrade Deployment, Scaling, and Management Platforms (35)\end{tabular}} & Upgrade Container Logging (6) \\ \cline{3-3} & & Upgrade Load Balancer (4) \\ \cline{2-3} & \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Upgrade Development and Monitoring Tool Support (8)\end{tabular}} & Upgrade Kafka Flags (2) \\ \cline{3-3} & & Upgrade Zipkin Thrift (2) \\ \hline \multirow{3}{*}{Import/Export Artifacts (12)} & \multirow{2}{*}{Import Artifacts (9)} & Import Packages (5) \\ \cline{3-3} & & Import Libraries (4) \\ \cline{2-3} & Export Artifacts (3) & Export Packages (3) \\ \hline \end{tabular} \end{table*}} \begin{tcolorbox} [sharp corners, boxrule=0.1mm,] \small \textbf{Key Findings of RQ3}: We found that not all issue discussions provide information about their solutions; in total we identified 1,899 issue discussions containing information about solutions, plus 36 solutions mentioned by the interviewees and 1 solution mentioned by the survey participants, i.e., 1,936 solutions in total. The solution taxonomy consists of 8 categories, 32 subcategories, and 177 types of solutions. The majority of solutions are related to Fix Artifacts (54.54\%), Add Artifacts (18.59\%), and Modify Artifacts (10.95\%). \end{tcolorbox} \subsection{Practitioners' Perspective (RQ4)} \label{sec:results_RQ4} We conducted a cross-sectional survey to evaluate the taxonomies of issues, causes, and solutions in microservices systems built in RQ1, RQ2, and RQ3. We provided a list of 19 issue categories and asked survey participants to respond to each category on a 5-point Likert scale (Very Often, Often, Sometimes, Rarely, Never). Similarly, regarding causes and solutions, we provided 8 cause categories and 8 solution categories, and asked practitioners to respond to each category on a 5-point Likert scale (Strongly Agree, Agree, Neutral, Disagree, Strongly Disagree). We also asked three open-ended questions to identify the missing issues, causes, and solutions in the provided categories. We received 150 valid responses completed by microservices practitioners from 42 countries across 6 continents.
The results of RQ4 are summarized in four tables (i.e., Table \ref{tab:Issues_categories}, Table \ref{tab:causes_categories}, Table \ref{tab:solution_categories}, and Table \ref{tab:sig_value_group}). This section also provides representative quotations from practitioners answering the open-ended questions, marked with the \faHandORight{} sign. The practitioners’ perspectives on the issue, cause, and solution categories in microservices systems are presented in Table~\ref{tab:Issues_categories}, Table \ref{tab:causes_categories}, and Table \ref{tab:solution_categories}. Due to limited space, we only present the ‘percentage’ and ‘mean’ values of the practitioners’ responses to each category of issues, causes, and solutions. The survey results about the practitioners' perspectives on microservices issues, causes, and solutions are briefly reported below. \textbf{Practitioners' Perspective on Issues}: We asked the participants which issues they face while developing microservices systems. The majority of the respondents mentioned that they face all of the listed issues. The practitioners' responses presented in Table \ref{tab:Issues_categories} show that IC1: Technical Debt (46.67\% Very Often, 24.00\% Often), IC2: Continuous Integration and Delivery Issue (26.67\% Very Often, 42.67\% Often), and IC5: Security Issue (18.67\% Very Often, 64.00\% Often) occur more frequently than the other categories of issues. Some practitioners also suggested 6 types of issues (with 9 instances of issues) that were not part of the initial taxonomy of issues in microservices systems. We added the types of issues suggested by microservices practitioners in Figure \ref{fig:Taxonomy} and the Issue Taxonomy sheet in \cite{replpack}. One representative quotation about other types of issues is depicted below. \faHandORight{} “\textit{In my understanding, there should be some other issues such as lack of (i) understanding of the implementation domain, (ii) defined process for designing, developing, and deploying the projects, and (iii) expertise in programming languages for developing microservices systems}” \textbf{(Application Developer, SP2)}.
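For reference, the ‘Mean’ values in the following tables can be reproduced from the reported percentage distributions. The short sketch below assumes the conventional mapping of the 5-point scale to weights 5 (Very Often/Strongly Agree) down to 1 (Never/Strongly Disagree); this weighting is an illustrative convention rather than a formula stated in the text, although it reproduces, for example, the mean of the IC2 row exactly.
\begin{verbatim}
# Sketch: weighted Likert mean from a row of response percentages,
# assuming the conventional mapping Very Often/Strongly Agree = 5
# down to Never/Strongly Disagree = 1 (an illustrative convention).
def likert_mean(percentages):
    """percentages: [VO, O, S, R, N] (or [SA, A, U, D, SD])."""
    weights = [5, 4, 3, 2, 1]
    total = sum(percentages)  # normalise in case of rounding drift
    return sum(w * p for w, p in zip(weights, percentages)) / total

# IC2 row of the issues table: 26.67, 42.67, 15.33, 7.33, 8.00 -> 3.73
print(round(likert_mean([26.67, 42.67, 15.33, 7.33, 8.00]), 2))
\end{verbatim}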
{\renewcommand{\arraystretch}{1} \begin{table}[!h] \centering \scriptsize \caption{Practitioners' perspective (in \%) on the issue categories in microservices systems (VO-Very Often, O-Often, S-Sometimes, R-Rarely, N-Never)} \label{tab:Issues_categories} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Code} & \textbf{VO} & \textbf{O} & \textbf{S} & \textbf{R} &\textbf{N} & \textbf{Mean} \\ \hline IC1 &\cellcolor[HTML]{70AD47}46.67 &\cellcolor[HTML]{A9D08E}24.00 &\cellcolor[HTML]{E2EFDA}12.67 &\cellcolor[HTML]{E2EFDA}5.33 &\cellcolor[HTML]{E2EFDA}1.33 &\cellcolor[HTML]{E2EFDA}3.93\\ \hline IC2 &\cellcolor[HTML]{A9D08E}26.67 &\cellcolor[HTML]{70AD47}42.67 &\cellcolor[HTML]{E2EFDA}15.33 &\cellcolor[HTML]{E2EFDA}7.33 &\cellcolor[HTML]{E2EFDA}8.00 &\cellcolor[HTML]{E2EFDA}3.73 \\ \hline IC3 &\cellcolor[HTML]{E2EFDA}14.67 &\cellcolor[HTML]{A9D08E}29.33 &\cellcolor[HTML]{A9D08E}27.33 &\cellcolor[HTML]{E2EFDA}13.33 &\cellcolor[HTML]{E2EFDA}14.67 &\cellcolor[HTML]{E2EFDA}3.14 \\ \hline IC4 &\cellcolor[HTML]{E2EFDA}15.33 &\cellcolor[HTML]{70AD47}44.67 &\cellcolor[HTML]{A9D08E}22.67 &\cellcolor[HTML]{E2EFDA}10.00 &\cellcolor[HTML]{E2EFDA}7.33 &\cellcolor[HTML]{E2EFDA}3.51 \\ \hline IC5 &\cellcolor[HTML]{E2EFDA}18.67 &\cellcolor[HTML]{70AD47}64.00 &\cellcolor[HTML]{E2EFDA}8.67 &\cellcolor[HTML]{E2EFDA}6.00 &\cellcolor[HTML]{E2EFDA}2.67 &\cellcolor[HTML]{E2EFDA}3.90 \\ \hline IC6 &\cellcolor[HTML]{E2EFDA}15.33 &\cellcolor[HTML]{70AD47}43.33 &\cellcolor[HTML]{A9D08E}23.33 &\cellcolor[HTML]{E2EFDA}10.00 &\cellcolor[HTML]{E2EFDA}8.00 &\cellcolor[HTML]{E2EFDA}3.48 \\ \hline IC7 &\cellcolor[HTML]{E2EFDA}14.00 &\cellcolor[HTML]{A9D08E}38.67 &\cellcolor[HTML]{A9D08E}25.33 &\cellcolor[HTML]{E2EFDA}10.67 &\cellcolor[HTML]{E2EFDA}11.33 &\cellcolor[HTML]{E2EFDA}3.33 \\ \hline IC8 &\cellcolor[HTML]{E2EFDA}12.00 &\cellcolor[HTML]{70AD47}40.67 &\cellcolor[HTML]{A9D08E}22.67 &\cellcolor[HTML]{E2EFDA}16.67&\cellcolor[HTML]{E2EFDA}6.67 &\cellcolor[HTML]{E2EFDA}3.31 \\ \hline IC9 &\cellcolor[HTML]{E2EFDA}16.00&\cellcolor[HTML]{A9D08E}34.67 &\cellcolor[HTML]{E2EFDA}15.33 &\cellcolor[HTML]{E2EFDA}14.00 &\cellcolor[HTML]{A9D08E}20.00 &\cellcolor[HTML]{E2EFDA}3.13 \\ \hline IC10 &\cellcolor[HTML]{A9D08E}26.67 &\cellcolor[HTML]{A9D08E}32.67 &\cellcolor[HTML]{A9D08E}20.00 &\cellcolor[HTML]{E2EFDA}8.00 &\cellcolor[HTML]{E2EFDA}11.33 &\cellcolor[HTML]{E2EFDA}3.51 \\ \hline IC11 & \cellcolor[HTML]{E2EFDA}13.33&\cellcolor[HTML]{70AD47}42.00 &\cellcolor[HTML]{E2EFDA}14.67 &\cellcolor[HTML]{E2EFDA}11.33 &\cellcolor[HTML]{E2EFDA}18.67 &\cellcolor[HTML]{E2EFDA}3.20 \\ \hline IC12 &\cellcolor[HTML]{E2EFDA} 7.33 &\cellcolor[HTML]{A9D08E}30.67 &\cellcolor[HTML]{A9D08E}22.00 &\cellcolor[HTML]{E2EFDA}16.67 &\cellcolor[HTML]{A9D08E}20.67 &\cellcolor[HTML]{E2EFDA}2.79 \\ \hline IC13 &\cellcolor[HTML]{E2EFDA} 10.67&\cellcolor[HTML]{70AD47}44.00 &\cellcolor[HTML]{A9D08E}20.67 &\cellcolor[HTML]{E2EFDA}12.00 &\cellcolor[HTML]{E2EFDA}12.00 &\cellcolor[HTML]{E2EFDA}3.27 \\ \hline IC14 &\cellcolor[HTML]{E2EFDA}18.00 &\cellcolor[HTML]{70AD47}40.67 &\cellcolor[HTML]{E2EFDA}18.00 &\cellcolor[HTML]{E2EFDA}16.00 &\cellcolor[HTML]{E2EFDA}7.33 &\cellcolor[HTML]{E2EFDA}3.46 \\ \hline IC15 &\cellcolor[HTML]{E2EFDA} 12.00&\cellcolor[HTML]{A9D08E}38.67 &\cellcolor[HTML]{A9D08E}22.00 &\cellcolor[HTML]{E2EFDA}17.33 &\cellcolor[HTML]{E2EFDA}10.00 &\cellcolor[HTML]{E2EFDA}3.25 \\ \hline IC16 &\cellcolor[HTML]{E2EFDA}17.33 &\cellcolor[HTML]{70AD47}46.00 &\cellcolor[HTML]{E2EFDA}14.67 &\cellcolor[HTML]{E2EFDA}14.67 &\cellcolor[HTML]{E2EFDA}6.67 
&\cellcolor[HTML]{E2EFDA}3.51 \\ \hline IC17 &\cellcolor[HTML]{E2EFDA}12.00 &\cellcolor[HTML]{A9D08E}34.67 &\cellcolor[HTML]{A9D08E}22.00 &\cellcolor[HTML]{A9D08E}20.00 &\cellcolor[HTML]{E2EFDA}10.67&\cellcolor[HTML]{E2EFDA}3.15 \\ \hline IC18 &\cellcolor[HTML]{E2EFDA} 12.67&\cellcolor[HTML]{A9D08E}34.00 &\cellcolor[HTML]{A9D08E}21.33 &\cellcolor[HTML]{E2EFDA}10.67 &\cellcolor[HTML]{E2EFDA}18.00 &\cellcolor[HTML]{E2EFDA}3.03 \\ \hline IC19 &\cellcolor[HTML]{E2EFDA} 18.67&\cellcolor[HTML]{70AD47}64.00 &\cellcolor[HTML]{E2EFDA}8.67 &\cellcolor[HTML]{E2EFDA}6.00 &\cellcolor[HTML]{E2EFDA}2.67 &\cellcolor[HTML]{E2EFDA}3.90 \\ \hline \end{tabular} \end{table}} \textbf{Practitioners' Perspective on Causes}: Our survey participants also confirmed the identified causes that could lead to issues in microservices systems. The practitioners' responses presented in Table \ref{tab:causes_categories} show that CC1: General Programming Error (41.33\% Strongly Agree, 38.67\% Agree), CC8: Fragile Code (23.33\% Strongly Agree, 46.67\% Agree), and CC2: Missing Features and Artifacts (12.67\% Strongly Agree, 55.33\% Agree) are the top three cause categories. Some practitioners also suggested 5 types of causes (with 11 instances of causes) that were not part of the initial taxonomy of causes in microservices systems. We added microservices practitioners' suggested types of causes in Table \ref{tab:CausesTaxnomey} and the Cause Taxonomy sheet in \cite{replpack}. One representative quotation about suggested types of causes is depicted below. \faHandORight{} “\textit{Several microservices issues can be occurred because of (i) separate physical database, (ii) lack of resilience support for the whole application, (iii) excessive tooling, (iv) lack of CI/CD (e.g., DevOps) culture in organizations, and (v) lack of practitioners in teams who have multiple skills}” \textbf{(Business Analyst, SP120)} {\renewcommand{\arraystretch}{1} \begin{table}[] \centering \scriptsize \caption{Practitioners' perspective (in \%) on the cause categories in microservices systems (SA-Strongly Agree, A-Agree, UD-Undecided, D-Disagree, SD-Strongly Disagree)} \label{tab:causes_categories} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Code} & \textbf{SA} & \textbf{A} & \textbf{U} & \textbf{D} & \textbf{SD} & \textbf{Mean} \\ \hline CC1 &\cellcolor[HTML]{70AD47}41.33 &\cellcolor[HTML]{A9D08E}38.67 &\cellcolor[HTML]{E2EFDA}12.00 &\cellcolor[HTML]{E2EFDA}6.00 &\cellcolor[HTML]{E2EFDA}2.00&\cellcolor[HTML]{E2EFDA}4.11\\ \hline CC2 &\cellcolor[HTML]{E2EFDA}12.67 &\cellcolor[HTML]{70AD47}55.33 &\cellcolor[HTML]{A9D08E}25.33 &\cellcolor[HTML]{E2EFDA}4.67 &\cellcolor[HTML]{E2EFDA}2.67 &\cellcolor[HTML]{E2EFDA}3.70\\ \hline CC3 &\cellcolor[HTML]{E2EFDA}18.00 &\cellcolor[HTML]{70AD47}47.33 &\cellcolor[HTML]{A9D08E}20.00 &\cellcolor[HTML]{E2EFDA}10.00 &\cellcolor[HTML]{E2EFDA}4.67 &\cellcolor[HTML]{E2EFDA}3.64\\ \hline CC4 &\cellcolor[HTML]{E2EFDA}12.67 &\cellcolor[HTML]{70AD47}41.33 &\cellcolor[HTML]{A9D08E}27.33 &\cellcolor[HTML]{E2EFDA}13.33 &\cellcolor[HTML]{E2EFDA}5.33 &\cellcolor[HTML]{E2EFDA}3.43\\ \hline CC5 &\cellcolor[HTML]{E2EFDA}19.33 &\cellcolor[HTML]{70AD47}48.00 &\cellcolor[HTML]{E2EFDA}18.67 &\cellcolor[HTML]{E2EFDA}8.67 &\cellcolor[HTML]{E2EFDA}5.33 &\cellcolor[HTML]{E2EFDA}3.67\\ \hline CC6 &\cellcolor[HTML]{E2EFDA}18.67 &\cellcolor[HTML]{70AD47}42.67 &\cellcolor[HTML]{A9D08E}21.33 &\cellcolor[HTML]{E2EFDA}10.00 &\cellcolor[HTML]{E2EFDA}7.33 &\cellcolor[HTML]{E2EFDA}3.55\\ \hline CC7 &\cellcolor[HTML]{E2EFDA}16.00 
&\cellcolor[HTML]{70AD47}46.00 &\cellcolor[HTML]{A9D08E}22.67 &\cellcolor[HTML]{E2EFDA}10.00 &\cellcolor[HTML]{E2EFDA}5.33 &\cellcolor[HTML]{E2EFDA}3.57\\ \hline CC8 &\cellcolor[HTML]{A9D08E}23.33 &\cellcolor[HTML]{70AD47}46.67 &\cellcolor[HTML]{E2EFDA}12.67 &\cellcolor[HTML]{E2EFDA}12.67&\cellcolor[HTML]{E2EFDA}4.67&\cellcolor[HTML]{E2EFDA}3.71\\ \hline \end{tabular} \end{table}} \textbf{Practitioners' Perspective on Solutions}: We also recorded the practitioners' perspective on the solutions for the issues occurring during microservices system development. The practitioners' responses presented in Table \ref{tab:solution_categories} show that SC1: Add Artifacts (42.00\% Strongly Agree, 39.33\% Agree), SC3: Modify Artifact (27.33\% Strongly Agree, 37.33\% Agree), and SC6: Manage Configuration and Execution (18.00\% Strongly Agree, 48.67\% Agree) are the top three solution categories that have been used to address the microservices issues. One practitioner also suggested one type of solution (with one instance of solution) that was not part of the initial taxonomy of solutions in microservices systems. We added the type of solution suggested by this practitioner in Table \ref{tab:SolutionsTaxnomey} and the Solution Taxonomy sheet in \cite{replpack}. One representative quotation about the suggested solutions is depicted below. \faHandORight{} “\textit{Regular training of the employees on the latest technologies and cloud platforms for developing and managing microservices systems can address several types of security, communication, and deployment issues}” (\textbf{DevOps \& Cloud Engineer, SP92}). {\renewcommand{\arraystretch}{1} \begin{table}[] \centering \scriptsize \caption{Practitioners' perspective (in \%) on the solution categories in microservices systems (SA-Strongly Agree, A-Agree, UD-Undecided, D-Disagree, SD-Strongly Disagree)} \label{tab:solution_categories} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{Code} & \textbf{SA} & \textbf{A} & \textbf{U} & \textbf{D} & \textbf{SD} & \textbf{Mean} \\ \hline SC1 &\cellcolor[HTML]{70AD47}42.00 &\cellcolor[HTML]{A9D08E}39.33 &\cellcolor[HTML]{E2EFDA}8.67 &\cellcolor[HTML]{E2EFDA}6.00 &\cellcolor[HTML]{E2EFDA}3.33 &\cellcolor[HTML]{E2EFDA}4.11\\ \hline SC2 &\cellcolor[HTML]{E2EFDA}16.00 &\cellcolor[HTML]{70AD47}60.67 &\cellcolor[HTML]{E2EFDA}14.00 &\cellcolor[HTML]{E2EFDA}6.67 &\cellcolor[HTML]{E2EFDA}2.00 &\cellcolor[HTML]{E2EFDA}3.83\\ \hline SC3 &\cellcolor[HTML]{A9D08E}27.33 &\cellcolor[HTML]{A9D08E}37.33 &\cellcolor[HTML]{A9D08E}25.33 &\cellcolor[HTML]{E2EFDA}6.00 &\cellcolor[HTML]{E2EFDA}3.33 &\cellcolor[HTML]{E2EFDA}3.80\\ \hline SC4 &\cellcolor[HTML]{E2EFDA}14.00 &\cellcolor[HTML]{70AD47}48.00 &\cellcolor[HTML]{A9D08E}23.33 &\cellcolor[HTML]{A9D08E}10.00 &\cellcolor[HTML]{E2EFDA}4.00 &\cellcolor[HTML]{E2EFDA}3.58\\ \hline SC5 &\cellcolor[HTML]{E2EFDA}18.00 &\cellcolor[HTML]{70AD47}46.67 &\cellcolor[HTML]{A9D08E}24.00 &\cellcolor[HTML]{A9D08E}6.00 &\cellcolor[HTML]{E2EFDA}4.67 &\cellcolor[HTML]{E2EFDA}3.68\\ \hline SC6 &\cellcolor[HTML]{E2EFDA}18.00 &\cellcolor[HTML]{70AD47}48.67 &\cellcolor[HTML]{E2EFDA}18.67 &\cellcolor[HTML]{E2EFDA}9.33 &\cellcolor[HTML]{E2EFDA}4.67 &\cellcolor[HTML]{E2EFDA}3.66\\ \hline SC7 &\cellcolor[HTML]{E2EFDA}15.33 &\cellcolor[HTML]{70AD47}52.00 &\cellcolor[HTML]{A9D08E}22.00 &\cellcolor[HTML]{E2EFDA}8.00 &\cellcolor[HTML]{E2EFDA}2.00 &\cellcolor[HTML]{E2EFDA}3.71\\ \hline SC8 &\cellcolor[HTML]{E2EFDA}14.67 &\cellcolor[HTML]{70AD47}50.00 &\cellcolor[HTML]{A9D08E}28.67
&\cellcolor[HTML]{E2EFDA}3.33 &\cellcolor[HTML]{E2EFDA}2.67 &\cellcolor[HTML]{E2EFDA}3.71\\ \hline \end{tabular} \end{table}} \textbf{Statistical significance on the issue, cause, and solution categories in microservices systems}: We analyzed the practitioners' responses across one pair of demographic groups in Table \ref{tab:sig_value_group}. The first column of Table \ref{tab:sig_value_group} lists the categories of issues, causes, and solutions presented to the survey participants. The subsequent columns of Table \ref{tab:sig_value_group} show the \textit{Likert Distribution}, \textit{Mean}, \textit{p-value}, and \textit{Effect Size}. The practitioners' responses are grouped into Experience $\le$ 6 Years and Experience > 6 Years groups to test whether there are statistical differences between the two groups on the same variable. The \textit{Likert Distribution} shows the level of agreement and importance for each issue, cause, and solution category, while the \textit{Mean} indicates the average of the Likert distribution for each category. The \textit{p-value} indicates statistical differences between Experience $\le$ 6 Years and Experience > 6 Years in the fourth subcolumn (i.e., Experience Based Grouping). We used the non-parametric Mann–Whitney U test to test the null hypothesis (i.e., there is no significant difference between the responses in both groups). We describe the impact of the groups on the survey responses as significant if the \textit{p-value} is less than 0.05 (see the \faBalanceScale{} symbol in Table \ref{tab:sig_value_group}). The \textit{Effect Size} is measured as the difference between the mean responses of the Experience > 6 Years and Experience $\le$ 6 Years groups. \textbf{Observations}: We made the following observations based on the practitioners' responses. \begin{itemize} \item There are no major statistically significant differences in practitioners' responses on the issue, cause, and solution categories in microservices systems. \item The few statistically significant differences observed in the experience-based grouping indicate that the experience of microservices practitioners barely affects the survey responses. \item The survey findings indicate that most issues are related to Technical Debt, CI/CD, and Security; most causes are associated with General Programming Errors, Fragile Code, and Missing Features and Artifacts; and most solutions are associated with Add Artifacts, Modify Artifacts, and Manage Configuration and Execution. \item More than 50\% of the respondents indicated that they frequently (Very Often and Often) encountered different issues belonging to the given list of issue categories. \item Our results indicate that many practitioners rarely or never face GUI (16.67\% Rarely, 20.67\% Never), Compilation (14.00\% Rarely, 20.00\% Never), and Networking issues (20.00\% Rarely, 10.66\% Never) in microservices system development.
\end{itemize} {\renewcommand{\arraystretch}{1} \begin{table*}[] \centering \scriptsize \caption{Statistical significance on the issue, cause, and solution categories in microservices systems} \label{tab:sig_value_group} \begin{tabular}{|rccccc|} \hline \multicolumn{1}{|r|}{\multirow{2}{*}{\textbf{Categories}}} & \multicolumn{1}{c|}{\multirow{2}{*}{\textbf{\#}}} & \multicolumn{2}{c|}{\textbf{Likert Distributions}} & \multicolumn{2}{c|}{\textbf{Experience Based Grouping}} \\ \cline{3-6} \multicolumn{1}{|r|}{} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\textbf{In Total}} & \multicolumn{1}{c|}{\textbf{Mean}} & \multicolumn{1}{c|}{\textbf{P-value}} & \multicolumn{1}{c|}{\textbf{Effect Size}} \\ \hline \multicolumn{6}{|l|}{\textbf{Microservices Issues}} \\ \hline \multicolumn{1}{|r|}{Technical Debt} & \multicolumn{1}{c|}{IC1} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC1.pdf}} & \multicolumn{1}{c|}{3.89} & \multicolumn{1}{c|}{0.47} & \multicolumn{1}{c|}{0.45}\\\hline \multicolumn{1}{|r|}{Continuous Integration and Delivery (CI/CD) Issue} & \multicolumn{1}{c|}{IC2} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC3.pdf}} & \multicolumn{1}{c|}{3.73} & \multicolumn{1}{c|}{0.35} & \multicolumn{1}{c|}{0.30} \\\hline \multicolumn{1}{|r|}{Exception Handling Issue} & \multicolumn{1}{c|}{IC3} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC5.pdf}} & \multicolumn{1}{c|}{3.14} & \multicolumn{1}{c|}{\faBalanceScale{} 0.02} & \multicolumn{1}{c|}{0.05} \\\hline \multicolumn{1}{|r|}{Service Execution and Communication Issue} & \multicolumn{1}{c|}{IC4} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC6.pdf}} & \multicolumn{1}{c|}{3.51} & \multicolumn{1}{c|}{0.17} & \multicolumn{1}{c|}{0.35}\\\hline \multicolumn{1}{|r|}{Security Issue} & \multicolumn{1}{c|}{IC5} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC2.pdf}} & \multicolumn{1}{c|}{3.90} & \multicolumn{1}{c|}{0.67} & \multicolumn{1}{c|}{0.16} \\\hline \multicolumn{1}{|r|}{Build Issue} & \multicolumn{1}{c|}{IC6} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC4.pdf}} & \multicolumn{1}{c|}{3.48} & \multicolumn{1}{c|}{0.14} & \multicolumn{1}{c|}{0.12} \\\hline \multicolumn{1}{|r|}{Configuration Issue} & \multicolumn{1}{c|}{IC7} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC7.pdf}} & \multicolumn{1}{c|}{3.33} & \multicolumn{1}{c|}{0.14} & \multicolumn{1}{c|}{-0.06}\\\hline \multicolumn{1}{|r|}{Monitoring Issue} & \multicolumn{1}{c|}{IC8} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC8.pdf}} & \multicolumn{1}{c|}{3.31} & \multicolumn{1}{c|}{0.14} & \multicolumn{1}{c|}{0.23}\\\hline \multicolumn{1}{|r|}{Compilation Issue} & \multicolumn{1}{c|}{IC9} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC9.pdf}} & \multicolumn{1}{c|}{3.13} & \multicolumn{1}{c|}{\faBalanceScale{} 0.03} & \multicolumn{1}{c|}{-0.06}\\\hline \multicolumn{1}{|r|}{Testing Issue} & \multicolumn{1}{c|}{IC10} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC13.pdf}} & \multicolumn{1}{c|}{3.51} & \multicolumn{1}{c|}{0.09} & \multicolumn{1}{c|}{-0.36}\\\hline \multicolumn{1}{|r|}{Documentation Issue} & \multicolumn{1}{c|}{IC11} 
& \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC14.pdf}} & \multicolumn{1}{c|}{3.20} & \multicolumn{1}{c|}{0.02} & \multicolumn{1}{c|}{0.31}\\\hline \multicolumn{1}{|r|}{Graphical User Interface (GUI) Issue} & \multicolumn{1}{c|}{IC12} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC15.pdf}} & \multicolumn{1}{c|}{2.79} & \multicolumn{1}{c|}{0.08} & \multicolumn{1}{c|}{0.60}\\\hline \multicolumn{1}{|r|}{Update and Installation Issue} & \multicolumn{1}{c|}{IC13} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC18.pdf}} & \multicolumn{1}{c|}{3.27} & \multicolumn{1}{c|}{\faBalanceScale{} 0.04} & \multicolumn{1}{c|}{-0.28}\\\hline \multicolumn{1}{|r|}{Database Issue} & \multicolumn{1}{c|}{IC14} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC12.pdf}} & \multicolumn{1}{c|}{3.46} & \multicolumn{1}{c|}{0.25} & \multicolumn{1}{c|}{0.03}\\\hline \multicolumn{1}{|r|}{Storage Issue} & \multicolumn{1}{c|}{IC15} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC16.pdf}} & \multicolumn{1}{c|}{3.25} & \multicolumn{1}{c|}{0.12} & \multicolumn{1}{c|}{0.31}\\\hline \multicolumn{1}{|r|}{Performance Issue} & \multicolumn{1}{c|}{IC16} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC10.pdf}} & \multicolumn{1}{c|}{3.51} & \multicolumn{1}{c|}{0.21} & \multicolumn{1}{c|}{-0.02}\\\hline \multicolumn{1}{|r|}{Networking Issue} & \multicolumn{1}{c|}{IC17} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC11.pdf}} & \multicolumn{1}{c|}{3.15} & \multicolumn{1}{c|}{\faBalanceScale{} 0.04} & \multicolumn{1}{c|}{-0.07}\\\hline \multicolumn{1}{|r|}{Typecasting Issue} & \multicolumn{1}{c|}{IC18} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC17.pdf}} & \multicolumn{1}{c|}{3.03} & \multicolumn{1}{c|}{0.12} & \multicolumn{1}{c|}{0.38}\\\hline \multicolumn{1}{|r|}{Organizational Issue} & \multicolumn{1}{c|}{IC19} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/IC19.pdf}} & \multicolumn{1}{c|}{3.52} & \multicolumn{1}{c|}{0.14} & \multicolumn{1}{c|}{0.11} \\\hline \multicolumn{6}{|l|}{\textbf{Causes of Issues}} \\ \hline \multicolumn{1}{|r|}{General Programming Error} & \multicolumn{1}{c|}{CC1} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/CC1.pdf}} & \multicolumn{1}{c|}{4.11} & \multicolumn{1}{c|}{0.53} & \multicolumn{1}{c|}{0.17} \\\hline \multicolumn{1}{|r|}{Missing Features and Artifacts} & \multicolumn{1}{c|}{CC2} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/CC2.pdf}} & \multicolumn{1}{c|}{3.70} & \multicolumn{1}{c|}{0.47} & \multicolumn{1}{c|}{-0.04}\\\hline \multicolumn{1}{|r|}{Invalid Configuration and Communication Problem} & \multicolumn{1}{c|}{CC3} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/CC3.pdf}} & \multicolumn{1}{c|}{3.64} & \multicolumn{1}{c|}{0.25} & \multicolumn{1}{c|}{-0.31}\\\hline \multicolumn{1}{|r|}{Legacy Versions, Compatibility, and Dependency Problem} & \multicolumn{1}{c|}{CC4} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/CC4.pdf}} & \multicolumn{1}{c|}{3.43} & \multicolumn{1}{c|}{0.25} & 
\multicolumn{1}{c|}{0.20}\\\hline \multicolumn{1}{|r|}{Service Design and Implementation Anomaly} & \multicolumn{1}{c|}{CC5} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/CC5.pdf}} & \multicolumn{1}{c|}{3.67} & \multicolumn{1}{c|}{0.35} & \multicolumn{1}{c|}{0.16}\\\hline \multicolumn{1}{|r|}{Poor Security Management} & \multicolumn{1}{c|}{CC6} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/CC6.pdf}} & \multicolumn{1}{c|}{3.55} & \multicolumn{1}{c|}{0.21} & \multicolumn{1}{c|}{-0.20}\\\hline \multicolumn{1}{|r|}{Insufficient Resources} & \multicolumn{1}{c|}{CC7} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/CC7.pdf}} & \multicolumn{1}{c|}{3.57} & \multicolumn{1}{c|}{0.40} & \multicolumn{1}{c|}{0.02} \\\hline \multicolumn{1}{|r|}{Fragile Code} & \multicolumn{1}{c|}{CC8} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/CC8.pdf}} & \multicolumn{1}{c|}{3.71} & \multicolumn{1}{c|}{0.25} & \multicolumn{1}{c|}{-0.12}\\\hline \multicolumn{6}{|l|}{\textbf{Solutions for Issues}} \\ \hline \multicolumn{1}{|r|}{Add Artifacts} & \multicolumn{1}{c|}{SC1} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/SC1.pdf}} & \multicolumn{1}{c|}{4.11} & \multicolumn{1}{c|}{0.40} & \multicolumn{1}{c|}{0.20}\\\hline \multicolumn{1}{|r|}{Remove Artifacts} & \multicolumn{1}{c|}{SC2} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/SC2.pdf}} & \multicolumn{1}{c|}{3.83} & \multicolumn{1}{c|}{0.60} & \multicolumn{1}{c|}{-0.01}\\\hline \multicolumn{1}{|r|}{Modify Artifact} & \multicolumn{1}{c|}{SC3} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/SC3.pdf}} & \multicolumn{1}{c|}{3.80} & \multicolumn{1}{c|}{0.21} & \multicolumn{1}{c|}{0.10} \\\hline \multicolumn{1}{|r|}{Manage Infrastructure} & \multicolumn{1}{c|}{SC4} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/SC4.pdf}} & \multicolumn{1}{c|}{3.58} & \multicolumn{1}{c|}{0.30} & \multicolumn{1}{c|}{-0.12}\\\hline \multicolumn{1}{|r|}{Fix Artifacts} & \multicolumn{1}{c|}{SC5} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/SC5.pdf}} & \multicolumn{1}{c|}{3.68} & \multicolumn{1}{c|}{0.30} & \multicolumn{1}{c|}{0.20}\\\hline \multicolumn{1}{|r|}{Manage Configuration and Execution} & \multicolumn{1}{c|}{SC6} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/SC6.pdf}} & \multicolumn{1}{c|}{3.66} & \multicolumn{1}{c|}{0.35} & \multicolumn{1}{c|}{0.12}\\\hline \multicolumn{1}{|r|}{Upgrade Tools and Platforms} & \multicolumn{1}{c|}{SC7} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/SC7.pdf}} & \multicolumn{1}{c|}{3.71} & \multicolumn{1}{c|}{0.35} & \multicolumn{1}{c|}{-0.27}\\\hline \multicolumn{1}{|r|}{Import/Export Artifacts} & \multicolumn{1}{c|}{SC8} & \multicolumn{1}{c|}{\includegraphics[width = 0.6cm, height = 0.19cm]{Figures/LikertFigures/SC8.pdf}} & \multicolumn{1}{c|}{3.71} & \multicolumn{1}{c|}{0.47} & \multicolumn{1}{c|}{0.22}\\\hline \end{tabular} \end{table*}} \begin{tcolorbox} [sharp corners, boxrule=0.1mm,] \small \textbf{Key Findings of RQ4}: (i) There are no major statistically significant differences in the issue, cause, and solution categories in practitioners' responses, 
(ii) practitioners frequently face Technical Debt, CI/CD, and Security issues, (iii) most causes are associated with General Programming Errors, Fragile Code, and Missing Features and Artifacts, and (iv) most solutions are associated with Add Artifacts, Modify Artifact, and Manage Configuration and Execution. The survey participants generally confirmed the categories of issues, causes, and solutions derived from mining developer discussions in open-source microservices systems. However, the survey participants also indicated several other types of issues, causes, and solutions that help improve our taxonomies. \end{tcolorbox} \section*{Appendix A} See Table \ref{tab:Abbreviations}. \input{Appendix.tex} \section*{Acknowledgments} This work is partially sponsored by the National Natural Science Foundation of China with Grant No. 62172311. The authors would also like to thank the participants of the interviews and online survey. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{ieeetr}
\section*{Acknowledgments} We thank Jonah Bernhard, Gabriel Denicol, Scott Moreland, Scott Pratt and Derek Teaney for useful discussions, and Richard J. Furnstahl and Xilin Zhang for their insights on Bayesian Inference and Markov Chain Monte Carlo. This work was supported in part by the National Science Foundation (NSF) within the framework of the JETSCAPE collaboration, under grant numbers ACI-1550172 (Y.C. and G.R.), ACI-1550221 (R.J.F., F.G., M.K. and B.K.), ACI-1550223 (D.E., M.M., U.H., L.D., and D.L.), ACI-1550225 (S.A.B., J.C., T.D., W.F., W.K., R.W., S.M., and Y.X.), ACI-1550228 (J.M., B.J., P.J., X.-N.W.), and ACI-1550300 (S.C., L.C., A.K., A.M., C.N., A.S., J.P., L.S., C.Si., R.A.S. and G.V.). It was also supported in part by the NSF under grant numbers PHY-1516590, 1812431 and PHY-2012922 (R.J.F., B.K., F.G., M.K., and C.S.), and by the US Department of Energy, Office of Science, Office of Nuclear Physics under grant numbers \rm{DE-AC02-05CH11231} (D.O., X.-N.W.), \rm{DE-AC52-07NA27344} (A.A., R.A.S.), \rm{DE-SC0013460} (S.C., A.K., A.M., C.S. and C.Si.), \rm{DE-SC0004286} (L.D., M.M., D.E., U.H. and D.L.), \rm{DE-SC0012704} (B.S. and C.S.), \rm{DE-FG02-92ER40713} (J.P.) and \rm{DE-FG02-05ER41367} (T.D., W.K., J.-F.P., S.A.B. and Y.X.). The work was also supported in part by the National Science Foundation of China (NSFC) under grant numbers 11935007, 11861131009 and 11890714 (Y.H. and X.-N.W.), by the Natural Sciences and Engineering Research Council of Canada (C.G., M.H., S.J., C.P. and G.V.), by the Fonds de Recherche du Qu\'{e}bec Nature et Technologies (FRQ-NT) (G.V.), by the Office of the Vice President for Research (OVPR) at Wayne State University (Y.T.), by the S\~{a}o Paulo Research Foundation (FAPESP) under projects 2016/24029-6, 2017/05685-2 and 2018/24720-6 (M.L.), and by the University of California, Berkeley - Central China Normal University Collaboration Grant (W.K.). U.H. would like to acknowledge support by the Alexander von Humboldt Foundation through a Humboldt Research Award. An allocation of supercomputing resources (Project: PHY180035) was obtained in part through the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. Calculations were performed in part on Stampede2 compute nodes, generously funded by the National Science Foundation (NSF) through award ACI-1134872, within the Texas Advanced Computing Center (TACC) at the University of Texas at Austin \cite{TACC}, and in part on the Ohio Supercomputer \cite{OhioSupercomputerCenter1987} (Project PAS0254). Computations were also carried out on the Wayne State Grid funded by the Wayne State OVPR, and on the supercomputer \emph{Guillimin} from McGill University, managed by Calcul Qu\'{e}bec and Compute Canada. The operation of the supercomputer \emph{Guillimin} is funded by the Canada Foundation for Innovation (CFI), NanoQu\'{e}bec, R\'{e}seau de M\'{e}decine G\'{e}n\'{e}tique Appliqu\'{e}e~(RMGA) and FRQ-NT. Data storage was provided in part by the OSIRIS project supported by the National Science Foundation under grant number OAC-1541335.
\section[]{Computing the contribution from different gas sources}\label{GasSourcesContribution} The equations governing the mass exchange between stars (${M}_{\star}$), gas in the disk (${M}_{\rm g}$), ejected mass from the disk (${M}_{\rm eject}$) and the hot halo (${M}_{\rm hot}$) are as follows: \begin{eqnarray} \dot{M}_{\star}&=&(1-R)\psi,\label{Eqs:SFset1}\\ \dot{M}_{\rm g} &=& \dot{M}_{\rm cool}-(1-R)\psi-\dot{M}_{\rm eject},\\ \dot{M}_{\rm eject} &=& \beta\,\psi,\\ \dot{M}_{\rm hot} &=& -\dot{M}_{\rm cool}+\frac{M_{\rm eject}}{\tau_{\rm rein}}.\label{Eqs:SFset2} \end{eqnarray} \noindent Here, $\psi$ is the instantaneous SFR described in $\S$~\ref{SFlaw}, $\dot{M}_{\rm cool}$ is the cooling rate described in $\S$~\ref{Sec:Cooling}, $R$ is the recycled fraction described in $\S$~\ref{Sec:Randp} and $\beta$ is the efficiency of supernova feedback. The latter depends on the circular velocity as $\beta=(V/V_{0})^{-\alpha_{\rm hot}}$ (see \citealt{Lagos13} for a discussion of the physical motivation of this parametrisation). The parameters adopted in each model are $\alpha_{\rm hot}=3.2$ in the Lagos12, Gonzalez-Perez14 and Lacey14 models, $V_{0}=485\,\rm km\,s^{-1}$ in Lagos12, $V_{0}=425\,\rm km\,s^{-1}$ in Gonzalez-Perez14 and $V_{0}=320\,\rm km\,s^{-1}$ in Lacey14. We define the changes in the quantities above in an arbitrary timestep as $\Delta {M}_{\rm cool}$, $\Delta {M}_{\star}$, $\Delta {M}_{\rm g}$ and $\Delta {M}_{\rm eject}$. In order to follow the three sources of gas in galaxies (galaxy mergers, recycling and gas cooling), we define $M_{\rm merg}$, $M_{\rm recycle}$ and $M_{\rm cooling}$, and calculate them as follows. We first add the newly cooled gas both to the cooling component and to the total gas in the disk, ${M}_{\rm g}$: \begin{eqnarray} \Delta M_{\rm cooling} = \Delta {M}_{\rm cool}.\label{mcooling1} \end{eqnarray} \noindent We update the quantities $M_{\rm cooling}$ and ${M}_{\rm g}$ by adding $\Delta M_{\rm cooling}$. From ${M}_{\rm g}$, an amount $\Delta {M}_{\star}$ of stars is formed, and the same amount of gas is depleted from the ISM. This mass is subtracted from the quantities $M_{\rm merg}$, $M_{\rm recycle}$ and $M_{\rm cooling}$, preserving their fractional contributions to ${M}_{\rm g}$ before stars formed. We then update ${M}_{\rm g}$ by subtracting $\Delta {M}_{\star}$. After stars form, a fraction $R$ is returned to the ISM, and we modify $M_{\rm recycle}$ and ${M}_{\rm g}$ by $\Delta M_{\rm recycle}$ defined as: \begin{eqnarray} \Delta M_{\rm recycle} = R\,\Delta {M}_{\star}.\label{mrecycle1} \end{eqnarray} \noindent From the stars formed, an amount $\Delta {M}_{\rm eject}=\beta\,\Delta {M}_{\star}$ is ejected from the galaxy, and we subtract the amount of gas escaping the disk from $M_{\rm merg}$, $M_{\rm recycle}$ and $M_{\rm cooling}$, preserving their fractional contributions to ${M}_{\rm g}$ before the ejection of gas. We then update ${M}_{\rm g}$ by subtracting $\Delta {M}_{\rm eject}$. This procedure ensures that $M_{\rm merg}+M_{\rm recycle}+M_{\rm cooling}\equiv {M}_{\rm g}$. During galaxy mergers, we add the amount of gas accreted by the central galaxy from the satellite to $M_{\rm merg}$ before star formation takes place, and then we proceed with the set of Eqs.~\ref{Eqs:SFset1}-\ref{Eqs:SFset2} using the formalism described above.
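For concreteness, the update cycle described above can be written as a short numerical sketch that advances Eqs.~\ref{Eqs:SFset1}-\ref{Eqs:SFset2} by one explicit-Euler step while tracking the gas sources; all input values below (masses, rates, timestep, parameter choices) are illustrative placeholders in arbitrary units, not inputs of any of the models discussed here.
\begin{verbatim}
# Sketch: one explicit-Euler step of the mass-exchange equations with
# the bookkeeping that keeps M_merg + M_recycle + M_cooling == M_g.
# All numbers (R, V, V0, tau_rein, rates, dt) are illustrative only.
def step(Mg, Mhot, Meject, src, Mdot_cool, psi, dt,
         R=0.4, V=200.0, V0=320.0, alpha_hot=3.2, tau_rein=1.0):
    beta = (V / V0) ** (-alpha_hot)     # SN feedback efficiency
    # (1) cooling: hot halo -> disk, tracked as 'cooling' gas
    dcool = Mdot_cool * dt
    Mg += dcool; Mhot -= dcool; src['cooling'] += dcool
    # (2) star formation: psi*dt leaves the ISM pro rata over the sources
    dsf = psi * dt
    for k in src:
        src[k] -= dsf * src[k] / Mg
    Mg -= dsf
    # (3) instantaneous recycling: R*psi*dt returns as 'recycle' gas
    Mg += R * dsf; src['recycle'] += R * dsf
    # (4) SN ejection: beta*psi*dt leaves the disk pro rata as well
    dej = beta * dsf
    for k in src:
        src[k] -= dej * src[k] / Mg
    Mg -= dej; Meject += dej
    # (5) reincorporation of ejected gas into the hot halo
    drein = Meject / tau_rein * dt
    Meject -= drein; Mhot += drein
    return Mg, Mhot, Meject, src

src = {'merg': 0.0, 'recycle': 0.0, 'cooling': 1.0}
print(step(1.0, 50.0, 0.0, src, Mdot_cool=2.0, psi=1.0, dt=0.1))
\end{verbatim}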
\section[]{The effect of varying the partial ram pressure stripping parameters}\label{RPpars} \begin{figure} \begin{center} \includegraphics[trim = 1.5mm 0.5mm 1mm 1mm,clip,width=0.47\textwidth]{Figs/HIMs_ratio_RPeffect_Lacey14_PdRGALFORM_Lagos12.eps} \caption{The HI gas fraction, $M_{\rm HI}/L_{\rm K}$, distribution for all galaxies with $L_{\rm K}>6\times10^9\,L_{\odot}$ (solid lines) and the sub-sample of ETGs ($B/T>0.5$; dashed lines) for the Lacey14+RP model. We have adopted three different values for $\epsilon_{\rm strip}$: 0.01 (RP weak), 0.1 (RP inter) and 1 (RP strong). The observations are described in $\S$~\ref{obssec}.} \label{RPparameters} \end{center} \end{figure} We show in Fig.~\ref{RPparameters} the HI gas fractions for the Lacey14+RP model using three different values of $\epsilon_{\rm strip}$, which controls the fraction of the mass reheated by stellar feedback, and driven out of the galaxy after the first pericentric passage of the satellite, that is affected by stripping. The reheated mass considered here is the one that sits outside the stripping radius. We remind the reader that the latter is calculated at the pericentre of the satellite's orbit (see $\S$~\ref{Sec:rampressure}). A value of $\epsilon_{\rm strip}=1$ implies that the reheated gas gets stripped in subsequent steps at the same rate as the hot gas of the galaxy when the satellite first passed through its pericentre. Adopting this value in the model leads to predictions that are very similar to those of the standard Lacey14 model (which includes strangulation). This shows that $\epsilon_{\rm strip}=1$ drives hot gas stripping close to the fully efficient case. The models using the values $\epsilon_{\rm strip}=0.01$ and $\epsilon_{\rm strip}=0.1$ produce very similar gas fractions. Both cases predict number densities of ETGs with HI and H$_2$ gas fractions $>0.02$ that are higher than in both the standard Lacey14 model and the version including partial ram pressure stripping with $\epsilon_{\rm strip}=1$. This higher number density is due to the higher rates of infalling cold gas, which replenish the ISM with newly cooled gas. We conclude that for values of $\epsilon_{\rm strip}\lesssim 0.3$ the results presented in the paper do not change considerably, mainly because of a self-regulation of outflows and inflows in satellites: if a very small value of $\epsilon_{\rm strip}$ is adopted, it will drive higher accretion rates of gas onto the disk, which will lead to higher star formation rates, and therefore higher outflow rates. For values $\epsilon_{\rm strip}\gtrsim 0.3$ the satellite's hot gas reservoir is removed too quickly, driving very little further accretion of newly cooled gas onto the galaxy disk. The latter has the effect of quenching star formation in the satellite galaxy quickly, driving low HI and H$_2$ gas fractions. \section{Introduction} The classic picture of early-type galaxies (ETGs), which include elliptical and lenticular galaxies, is that they are `red and dead', without any significant star formation, and contain mainly old stellar populations \citep{Bower92}. ETGs have also long been connected to the red sequence in the color-magnitude relation, establishing a strong connection between quenching of star formation and morphological transformation (e.g. \citealt{Bower92}; \citealt{Strateva01}; \citealt{Baldry04}; \citealt{Balogh04}; \citealt{Bernardi05}; \citealt{Schiminovich07}). 
By analysing the S\'ersic index of galaxies in the star formation rate-stellar mass plane, \citet{Wuyts11} showed that galaxies in the star forming sequence (the so called `main sequence' of galaxies) are typically disk-like galaxies (with S\'ersic indices close to $1$), while passive galaxies tend to have higher S\'ersic indices (typically $>3$). Wuyts et al. also showed that these trends are observed in galaxies from $z=0$ to $\approx 2$, suggesting that the relation between morphology and quenching is fundamental and that it is present over most of the star formation history of galaxies. Although this simple paradigm of `red and dead' ETGs is qualitatively sufficient to explain their location on the red sequence of galaxies, it is far from being quantitatively correct. High quality, resolved observations of the different components of ETGs, mainly from the ATLAS$^{\rm 3D}$\footnote{{\tt http://www-astro.physics.ox.ac.uk/atlas3d/}.} multi-wavelength survey \citep{Cappellari11}, showed that this paradigm is too simplistic. This survey showed that at least $20$\% of ETGs have molecular and atomic hydrogen contents large enough to be detected (\citealt{Young11}; \citealt{Serra12}; see also \citealt{Welch10} for similar results from an independent survey). The approximate detection limits for molecular hydrogen (H$_2$) and atomic hydrogen (HI) masses in the ATLAS$^{\rm 3D}$ survey are $\approx 10^7-10^8\,M_{\odot}$. Large amounts of cold gas in ETGs are frequently found when star formation is observed (e.g. \citealt{Davis14}). Some of these galaxies with ongoing star formation lie on the red sequence of galaxies in the color-magnitude relation (\citealt{Kaviraj07}; \citealt{Smith12}; \citealt{Young13}). All of this evidence points to a large fraction of ETGs, which were previously seen as `passive' in terms of their colours, having star formation rates and cold interstellar medium (ISM) contents that can be rather large. From this it is reasonable to conclude that the quenching of galaxies is indeed more complex than the simple picture of `passive, red and dead' ETGs. This shift of paradigm in ETGs poses new questions regarding how we understand the formation of this galaxy population. For instance, how do we understand the presence of a non-negligible cold ISM in ETGs and their location on the red sequence of galaxies in the color-magnitude diagram? Was the cold gas accreted recently or does it come from internal processes, such as recycling of old stars? These questions are at the core of the understanding of the quenching of galaxies and the decline of the star formation activity with time. Simulations of galaxy formation have long explored the origin of galaxy morphologies in the context of the hierarchical growth of structures. Pioneering ideas about the formation of galaxy disks and bulges were presented by \citet{Toomre77} and \citet{White78}. Toomre proposed for the first time that galaxy mergers could lead to the formation of spheroids, which was implemented in early semi-analytic models of galaxy formation (e.g. \citealt{Baugh96}; \citealt{Kauffmann96}; \citealt{Cole00}). However, with the advent of large area surveys, and more sophisticated cosmological $N$-body simulations, it became clear that major mergers (mergers between galaxies with mass ratios $\gtrsim 0.3$) could not be the only formation mechanism of spheroids (e.g. 
see \citealt{LeFevre00} for an observational example and \citealt{Naab03} for a theoretical work) because of their expected rarity, which is incompatible with the large numbers of ETGs observed (\citealt{Bernardi03}; \citealt{Lintott08}). Theoretical work on the formation mechanisms of spheroids led to the conclusion that minor mergers (e.g. \citealt{Malbon07}; \citealt{Parry09}; \citealt{Hopkins10}; \citealt{Bournaud11}; \citealt{Naab13}) and disk instabilities (e.g. \citealt{Mo98}; \citealt{Gammie01}; \citealt{Bournaud09}; \citealt{Krumholz10}; \citealt{Elmegreen10}) can also play a major role. Many studies have exploited numerical simulations and semi-analytic models to study the formation of ETGs and their mass assembly, with interesting predictions, for example that massive ETGs assemble their mass relatively late but have stellar populations that are very old (e.g. \citealt{Baugh96}; \citealt{Kauffmann96}; \citealt{DeLucia06}; \citealt{Parry09}), and that the formation paths for ETGs can be many, going from having had one or more major mergers, to having had no mergers at all (e.g. \citealt{Naab13}). Despite all this progress, little attention has been paid to the study of the neutral gas content of the ETG population. The atomic and molecular gas contents of ETGs may provide strong constraints on the recent accretion history. \citet{Lagos10}, for example, show that the neutral gas content of galaxies is very sensitive to short term variations in the accretion history, while the stellar mass and optical colours are not. Similarly, \citet{Serra14} show that although simulations can reproduce the nature of slow and fast rotators in the early-type population, the atomic hydrogen content predicted by the same simulations is too low. Another reason to believe that the neutral gas content of ETGs will provide strong constraints on galaxy formation models is that ETGs and late-type galaxies show different correlations between their gas and stellar contents. For example, for normal star-forming galaxies, there is a good correlation between the HI mass and the stellar mass (e.g. \citealt{Catinella10}; \citealt{Cortese11}; \citealt{Huang12}; \citealt{Wang14}), while ETGs show no correlation between these two quantities (e.g. \citealt{Welch10}; \citealt{Serra12}). Different physical mechanisms are then driving the HI content of ETGs. Similar conclusions were reached for molecular hydrogen (e.g. \citealt{Saintonge11}; \citealt{Lisenfeld11}; \citealt{Young11}; \citealt{Boselli14}). The motivation behind this paper is to investigate the neutral gas content of ETGs in hierarchical galaxy formation models and relate it to the formation and quenching mechanisms of ETGs. We explore the question of the origin of the atomic and molecular gas contents of ETGs and attempt to connect this to their observed HI and H$_2$ contents and their stellar mass content. In paper II (Lagos et al. in prep.), we will explore the question of the alignments between the angular momenta of the gas disk and the stellar contents of ETGs. For the current study, we use three flavours of the semi-analytical model {\texttt{GALFORM}} in a $\Lambda$CDM cosmology (\citealt{Cole00}), namely those of \citet{Lagos12}, \citet{Gonzalez-Perez13}, and Lacey et al. (2014, in prep.). The three models include the improved treatment of SF implemented by \citet{Lagos10}. This extension splits the hydrogen content of the ISM into HI and H$_2$. 
In addition, these three models allow bulges to grow through minor and major galaxy mergers and through global disk instabilities. The advantage of using three different flavours of {\texttt{GALFORM}} is the ability to characterise the robustness of the trends found. The outputs of the three models shown in this paper will be made publicly available through the Millennium database\footnote{\tt http://gavo.mpa-garching.mpg.de/Millennium}. This paper is organised as follows. In $\S 2$ we present the observations of the HI and H$_2$ content of the entire galaxy population and of ETGs, along with the gas mass functions and gas fraction distribution functions for these two populations. In $\S 3$, we describe the galaxy formation model and the main aspects which relate to the growth of bulges: star formation, disk and bulge build-up, recycling of intermediate and low mass stars and the treatment of the partial ram pressure stripping of the hot gas. We also describe the main differences between the three {\tt GALFORM} flavours and the dark matter simulations used. In $\S 4$ we compare the model predictions with observations of the neutral gas content of ETGs and show the impact of including partial ram pressure stripping of the hot gas. In $\S 5$ we analyse the connection between the neutral gas content of ETGs, their bulge fraction and quenching, and explain the physical processes behind this connection. In $\S 6$ we analyse all the sources that contribute to the neutral gas content of ETGs and their environmental dependence. Finally, our main conclusions are presented in $\S 7$. \section{The observed neutral hydrogen content of local galaxies}\label{obssec} \begin{figure} \begin{center} \includegraphics[trim = 0.9mm 0.3mm 1mm 0.5mm,clip,width=0.5\textwidth]{Figs/HIPASSz0.eps} \caption{{\it Top panel:} The HI mass function from \citet{Zwaan05} and \citet{Martin10} at $z=0$, which include both early and late-type galaxies; the HI mass function of ETGs from HIPASS and from the ATLAS$^{\rm 3D}$ survey (with and without $1/V_{\rm max}$ correction), calculated in this work. {\it Bottom panel:} Distribution function of the ratio of the HI mass to the $K$-band luminosity, for all galaxies and for ETGs, from the HIPASS and ATLAS$^{\rm 3D}$ surveys, calculated in this work. Note that we express densities and masses in physical units.} \label{HIobs} \end{center} \end{figure} Our aim is to study the neutral gas content of ETGs and show how this compares to the overall galaxy population. We first need to define the observational datasets that we will use as the main constraints on the galaxy formation simulations\footnote{The units in the observations are expressed in physical units. Note that we normalised all the observational datasets to the choice of $h=0.73$.}. We focus on the HI and H$_2$ gas contents of galaxies and how these compare to the stellar content. We do this through comparisons with the $K$-band luminosity, which is closely related to the stellar mass in galaxies but is directly measured by observations. We extensively use the HI Parkes All-Sky Survey (HIPASS; \citealt{Meyer04}), the Arecibo Legacy Fast ALFA Survey (ALFALFA; \citealt{Giovanelli05}), the Five College Radio Astronomy Observatory CO$(1-0)$ survey of \citet{Keres03} and the ATLAS$^{\rm 3D}$ survey \citep{Cappellari11}. 
The ATLAS$^{\rm 3D}$ survey is of particular interest as it is a volume-limited sample of ETGs, where all ETGs within a volume of $1.16\times 10^{5}\,\rm Mpc^{3}$ and with $K$-band rest-frame luminosities of $L_{\rm K}>6\times 10^{9}\,L_{\odot}$ were studied in detail. The total number of ETGs in the ATLAS$^{\rm 3D}$ catalogue is $260$. The morphological classification was obtained through careful visual inspection. The ATLAS$^{\rm 3D}$ survey contains multi-wavelength information, such as broad-band photometry in the $B$-, $r$- and $K$-bands \citep{Cappellari11}, as well as $21$~cm interferometry (presented in \citealt{Serra12}), from which the HI mass is derived, and CO$(1-0)$ single dish observations, presented in \citet{Young11}, from which the H$_2$ mass is derived, along with detailed stellar kinematic information. This survey provides an in depth view of ETGs in the local Universe and shows how the neutral gas content correlates with other galaxy properties. We constructed HI and H$_2$ mass functions for the ATLAS$^{\rm 3D}$ objects. The HI survey presented in \cite{Serra12} includes every ATLAS$^{\rm 3D}$ ETG visible with the Westerbork Synthesis Radio Telescope, a total of $166$ objects. We took HI masses from \cite{Serra12} to construct a HI mass function, which we plot in Fig.~\ref{HIobs}. This HI survey is incomplete in depth due to observations being performed using a fixed $12$ hour integration. This means that HI masses $<10^{7.5}\,M_{\odot}$ could only be detected in nearby objects. Thus, we correct the lower HI mass bins for incompleteness using the standard $1/V_{\rm max}$ method \citep{Schmidt68}. Note that since ATLAS$^{\rm 3D}$ is complete above a $K$-band luminosity of $L_{\rm K}>6\times 10^{9}\,L_{\odot}$, we use a V$_{\rm max}$ calculated for the HI mass only. Total H$_2$ masses are available from \citet{Young11} for all galaxies in the full ATLAS$^{\rm 3D}$ survey volume. We constructed H$_2$ mass functions from these data in the same way (see Fig.~\ref{H2obs}). The IRAM-30m telescope observations have a fixed noise limit, and Fig.~\ref{H2obs} shows both the original and a corrected mass function (where the lower H$_2$ mass bins have been corrected using the $1/V_{\rm max}$ method). Blind HI surveys, such as ALFALFA and HIPASS, provide information on the HI content of galaxies over larger volumes. HI mass functions were derived from these two surveys and presented in \citet{Martin10} and \citet{Zwaan05}, respectively. These HI mass functions are shown in the top-panel of Fig.~\ref{HIobs}. However, because we are interested in how ETGs differ from the overall galaxy population and in isolating the physical processes leading to such differences, we need a proxy for the stellar mass of these HI-selected galaxies. To this end, we use the HIPASS survey cross-matched with the Two Micron All-Sky Survey ($2$MASS; \citealt{Jarrett00}) to obtain $K$-band luminosities for the HIPASS galaxies (see \citealt{Meyer08a}). We limit the analysis to the southern HIPASS sample \citep{Meyer04}, because the completeness function for this sample is well-described and the HI mass function is determined accurately \citep{Zwaan05}. We find that $86$\% of the southern HIPASS galaxies have $K$-band counterparts, and all of these have morphological classifications which are described in \citet{Doyle05}. Galaxy morphologies are taken from the SuperCOSMOS Sky Survey \citep{Hambly01}, and are obtained by visual inspection, predominantly in the $b_J$-band. 
With the subsample of HIPASS galaxies with $K$-band luminosities and assigned morphological types, we calculate the HI mass function for ETGs, taken to be those identified as `E-Sa'. For this, we use the maximum likelihood equivalents of the $1/V_{\rm max}$ values determined by \citet{Zwaan05}. These values represent the maximum volume over which each of the HIPASS galaxies could have been detected, but they are corrected so as to remove the effects of large scale inhomogeneities in the HIPASS survey (see \citealt{Zwaan05} for details). We add an additional selection in the $K$-band luminosity to match the selection used in ATLAS$^{\rm 3D}$ and re-calculate the HI mass function of the overall galaxy population using the same $V_{\rm max}$ values determined by Zwaan et al. Note that we do not need to recalculate $V_{\rm max}$ because the $K$-band survey is much deeper than the HI survey (i.e. it is able to detect small galaxies further out than the HI survey). The results of this exercise are shown in the top-panel of Fig.~\ref{HIobs}. We also show the estimated HI mass function of the ATLAS$^{\rm 3D}$ ETGs. There is very good agreement between the HI mass function of ETGs from the HIPASS and ATLAS$^{\rm 3D}$ surveys in the range where they overlap (despite the inclusion of Sa galaxies in the HIPASS sample, which are absent in ATLAS$^{\rm 3D}$). ATLAS$^{\rm 3D}$ provides an important insight into the HI mass function of ETGs in the regime of low HI masses that is not probed by HIPASS. In the case of the HI mass function of all galaxies that have $K$-band luminosities above $6\times 10^9\,L_{\odot}$, we find that the HI mass function is fully recovered down to $M_{\rm HI}\approx 5\times 10^9\,M_{\odot}$, with a drop in the number density at lower HI masses due to the $K$-band luminosity limit. Note that this drop is not because of incompleteness but instead is a real feature connected to the minimum HI mass that normal star-forming galaxies with $L_{\rm K}>6\times 10^9\,L_{\odot}$ have. By normal star-forming galaxies we mean those that lie on the main sequence of galaxies in the plane of star formation rate (SFR) vs. stellar mass. The HIPASS matched sample is complete for $K$-band luminosities $L_{\rm K}>6\times 10^9\,L_{\odot}$. The turn-over observed at $M_{\rm HI}\approx 5\times 10^9\,M_{\odot}$ is simply the HI mass expected for a normal star-forming galaxy with $L_{\rm K}\approx 6\times 10^9\,L_{\odot}$. Similarly, we calculate the distribution function of the ratio of HI mass to the $K$-band luminosity, which we refer to as the HI gas fraction. We do this for all HIPASS galaxies with $L_{\rm K}>6\times 10^9\,L_{\odot}$ in the cross-matched catalogue and for the subsample of ETGs. This is shown in the bottom-panel of Fig.~\ref{HIobs}. Also shown is the distribution of $M_{\rm HI}/L_{\rm K}$ for the ETGs in the ATLAS$^{\rm 3D}$ survey. Note that these distribution functions provide higher order constraints on galaxy formation simulations than the more commonly used scaling relations between the HI mass or H$_2$ mass and stellar mass (e.g. \citealt{Catinella10}; \citealt{Saintonge11}). This is because such distributions allow us to test not only whether the amount of neutral gas in model galaxies is in the expected proportion to their stellar mass, but also whether the number density of galaxies with different gas fractions is correct. 
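For reference, the $1/V_{\rm max}$ estimator used above can be written down in a few lines. The sketch below is our own illustration of the standard \citet{Schmidt68} method, not code from any of the surveys:

\begin{verbatim}
import numpy as np

def vmax_mass_function(masses, v_max, bin_edges):
    """1/Vmax estimator of the mass function (Schmidt 1968).

    masses    : detected HI (or H2) masses [Msun], numpy array
    v_max     : maximum volume over which each source could have been
                detected given the survey depth [Mpc^3], numpy array
    bin_edges : edges of the log10(mass) bins
    Returns the number density per Mpc^3 per dex in each bin.
    """
    logm = np.log10(masses)
    dlogm = np.diff(bin_edges)
    phi = np.zeros(len(bin_edges) - 1)
    for i in range(len(phi)):
        in_bin = (logm >= bin_edges[i]) & (logm < bin_edges[i + 1])
        # Each galaxy contributes 1/Vmax, up-weighting faint (low-mass)
        # sources that are only detectable over a smaller volume.
        phi[i] = np.sum(1.0 / v_max[in_bin]) / dlogm[i]
    return phi
\end{verbatim}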
\citet{Kauffmann12} show that such distribution functions are a stronger constraint on semi-analytic models of galaxy formation, as the distribution of the neutral gas fraction depends on the quenching mechanisms included in the models, and the way they interact. So far, no such comparison has been presented for cosmological hydrodynamical simulations. We perform the same exercise we did for HI masses but now for H$_2$ masses. In this case there are no blind surveys of carbon monoxide or any other H$_2$ tracer, and therefore the data for large samples of galaxies is scarce. \citet{Keres03} reported the first and only attempt to derive the local luminosity function (LF) of $\rm CO(1-0)$. This was done using $B$-band and $60\,\mu$m selected samples, with follow-up observations using the Five College Radio Astronomy Observatory. This is shown in the top-panel of Fig.~\ref{H2obs}. Also shown is the H$_2$ mass function of ETGs from ATLAS$^{\rm 3D}$. Here, we adopt a Milky Way $\rm H_2$-to-CO conversion factor, $N_{\rm H_2}/\rm cm^{-2}=2\times10^{20}\, I_{\rm CO}/\rm K\,km\,s^{-1}$, for both the Keres et al. sample and the ATLAS$^{\rm 3D}$ (but see \citet{Lagos12} for more on conversions). Here $N_{\rm H_2}$ is the column density of H$_2$ and $I_{\rm CO}$ is the integrated CO$(1-0)$ line intensity per unit surface area. In the bottom-panel of Fig.~\ref{H2obs} we show the distribution function of the $M_{\rm H_2}/L_{\rm K}$ ratios (which we refer to as H$_2$ gas fractions) for the ATLAS$^{\rm 3D}$ sources. \begin{figure} \begin{center} \includegraphics[trim = 0.9mm 0.3mm 1mm 0.5mm,clip,width=0.5\textwidth]{Figs/H2_ObsDataz0.eps} \caption{{\it Top panel:} The H$_2$ mass function from \citet{Keres03} using CO$(1-0)$ observations of parent samples selected in $B$-band and $60\mu$m, as labelled, and adopting a Milky Way $\rm H_2$-to-CO conversion factor, $N_{\rm H_2}/\rm cm^{-2}=2\times10^{20}\, I_{\rm CO}/\rm K\,km\,s^{-1}$. Here $N_{\rm H_2}$ is the column density of H$_2$ and $I_{\rm CO}$ is the integrated CO$(1-0)$ line intensity per unit surface area. Also shown is the H$_2$ mass function of ETGs from the ATLAS$^{\rm 3D}$ survey (stars), with and without the $1/V_{\rm max}$ correction, also for a Milky Way $\rm H_2$-to-CO conversion factor. {\it Bottom panel:} Distribution function of the H$_2$ gas fraction for the ATLAS$^{\rm 3D}$ survey.} \label{H2obs} \end{center} \end{figure} Throughout this paper we compare different flavours of {\texttt{GALFORM}} to the set of observations presented in Figs.~\ref{HIobs} and \ref{H2obs}. \section{Modelling the evolution of the morphology, neutral gas content and star formation in galaxies}\label{modelssec} Here we briefly describe the {\texttt{GALFORM}} semi-analytical model of galaxy formation and evolution (introduced by \citealt{Cole00}), focusing on the aspects that are relevant to the build-up of ETGs. The {\tt GALFORM} model takes into account the main physical processes that shape the formation and evolution of galaxies. 
These are: (i) the collapse and merging of dark matter (DM) halos, (ii) the shock-heating and radiative cooling of gas inside DM halos, leading to the formation of galactic disks, (iii) quiescent star formation in galaxy disks, (iv) feedback from supernovae (SNe), from heating by active galactic nuclei (AGN) and from photo-ionization of the inter-galactic medium (IGM), (v) chemical enrichment of stars and gas, and (vi) galaxy mergers driven by dynamical friction within common DM halos, which can trigger bursts of star formation and lead to the formation of spheroids (for a review of these ingredients see \citealt{Baugh06} and \citealt{Benson10b}). Galaxy luminosities are computed from the predicted star formation and chemical enrichment histories using a stellar population synthesis model (see \citealt{Gonzalez-Perez13} for the impact of using different models). In the rest of this section we describe the star formation (SF) law used and how this connects to the two-phase interstellar medium (ISM; $\S$~\ref{SFlaw}), the recycled fraction and yield from newly formed stars and how that gas fuels the ISM ($\S$~\ref{Sec:Randp}), the physical processes that give rise to discs and bulges in {\tt GALFORM} ($\S$\ref{Sec:MorphoTrans}) and the modelling of the ram-pressure stripping of the hot gas (\S~\ref{Sec:rampressure}) in satellite galaxies. We focus on these processes because we aim to distinguish the contribution of each of the three sources of gas to the neutral gas content of ETGs: galaxy mergers, gas cooling and recycling from stars. These processes are the same in the three variants of {\tt GALFORM} we use in this paper: the models of Lagos et al. (2012; Lagos12), Gonzalez-Perez et al. (2014; Gonzalez-Perez14) and Lacey et al. (2014, in prep.; Lacey14), albeit with different parameters. In $\S$~\ref{Models} we describe the differences between these variants. We finish the section with a short description of the $N$-body cosmological simulation used and the parameters adopted ($\S$~\ref{Cosmos}). \subsection{Interstellar medium gas phases and the star formation law}\label{SFlaw} In {\tt GALFORM} the SF law developed in Lagos et al. (2011b, hereafter `L11') is adopted. In this SF law the atomic and molecular phases of the neutral hydrogen in the ISM are explicitly distinguished. L11 found that the SF law that gives the best agreement with the observations without the need for further calibration is the empirical SF law of \citet{Blitz06}. Given that the SF law has been well constrained in spiral and dwarf galaxies in the local Universe, L11 decided to implement this molecular-based SF law only in the quiescent SF mode (SF following gas accretion onto the disk), keeping the original prescription of \citet{Cole00} for starbursts (driven by galaxy mergers and global disk instabilities). {\it Quiescent Star Formation.} The empirical SF law of Blitz \& Rosolowsky (hereafter BR) has the form, \begin{equation} \Sigma_{\rm SFR} = \nu_{\rm SF} \Sigma_{\rm mol}, \label{Eq.SFR} \end{equation} \noindent where $\Sigma_{\rm SFR}$ and $\Sigma_{\rm mol}$ are the surface densities of SFR and molecular gas, respectively, and $\nu_{\rm SF}$ is the inverse of the SF timescale for the molecular gas. The molecular gas mass includes the contribution from helium. The ratio between the molecular and total gas mass, $f_{\rm mol}$, depends on the internal hydrostatic pressure through $\Sigma_{\rm H_2}/\Sigma_{\rm HI}= f_{\rm mol}/(1-f_{\rm mol})= (P_{\rm ext}/P_{0})^{\alpha}$. 
Here HI and H$_2$ only include hydrogen (which in total corresponds to a fraction $X_{\rm H}=0.74$ of the overall cold gas mass). To calculate $\rm P_{\rm ext}$, we use the approximation from \citet{Elmegreen89}, in which the pressure depends on the surface density of gas and stars. The parameters $\nu_{\rm SF}$, $P_{0}$ and $\alpha$ are given in \S~\ref{Models} for each of the three {\tt GALFORM} variants. {\it Starbursts.} For starbursts the situation is less clear than for star formation in disks, mainly due to observational uncertainties, such as the conversion between CO and H$_2$ in starbursts, and the intrinsic compactness of star-forming regions, which have prevented a reliable characterisation of the SF law (e.g. \citealt{Genzel10}). For this reason we choose to apply the BR law only during quiescent SF (fuelled by accretion of cooled gas onto galactic disks) and retain the original SF prescription for starbursts (see \citealt{Cole00} and L11 for details). In the latter, the SF timescale is proportional to the bulge dynamical timescale above a minimum floor value and involves the whole cold gas content of the galaxy, $\rm SFR={\it M}_{\rm cold}/\tau_{\rm SF}$ (see \citealt{Granato00} and \citealt{Lacey08} for details). The SF timescale is defined as \begin{equation} \tau_{\rm SF}=\rm max(\tau_{\rm min},f_{\rm dyn}\tau_{\rm dyn}), \label{SFlawSB} \end{equation} \noindent where $\tau_{\rm dyn}$ is the bulge dynamical timescale, $\tau_{\rm min}$ is a minimum duration adopted for starbursts and $f_{\rm dyn}$ is a free parameter. The values of the parameters $\tau_{\rm min}$ and $f_{\rm dyn}$ are given in \S~\ref{Models} for each of the three {\tt GALFORM} variants. \subsection{Recycled fraction and yield}\label{Sec:Randp} In {\tt GALFORM} we adopt the instantaneous mixing approximation for the metals in the ISM. This implies that the cold gas instantaneously absorbs the recycled mass and the newly synthesised metals from recently formed stars, neglecting the time delay for the ejection of gas and metals from stars. The recycled fraction, i.e. the fraction of the mass of newly born stars that is injected back into the ISM, is calculated from the initial mass function (IMF) as, \begin{equation} R=\int_{m_{\rm min}}^{m_{\rm max}}\, (m-m_{\rm rem})\phi(m)\, {\rm d} m, \label{Eq:ejec} \end{equation} \noindent where $m_{\rm rem}$ is the remnant mass and the IMF is defined as $\phi(m)\propto dN(m)/dm$. Similarly, we define the yield as \begin{equation} p =\int_{m_{\rm min}}^{m_{\rm max}}\, m_{\rm i}(m)\phi(m) {\rm d} m, \label{Eq:yield} \end{equation} \noindent where $m_{\rm i}(m)$ is the mass of newly synthesised metals ejected by stars of initial mass $m$. The integration limits are taken to be $m_{\rm min}=1\, M_{\odot}$ and $m_{\rm max}=120\, M_{\odot}$. Stars with masses $m<1\, M_{\odot}$ have lifetimes longer than the age of the Universe, and therefore they do not contribute to the recycled fraction and yield. The quantities $m_{\rm rem}(m)$ and $m_{\rm i}(m)$ depend on the initial mass of a star and are calculated from stellar evolution theory. The stellar evolution model we use for intermediate mass stars ($1M_{\odot}<m\lesssim 8M_{\odot}$) is \citet{Marigo01} (which provides $m_{\rm rem}(m)$ and $m_{\rm i}(m)$ for those types of stars), while for massive stars, $m\gtrsim 8M_{\odot}$, we use \citet{Portinari98}. We describe in \S~\ref{Models} the IMF adopted in the three variants of {\tt GALFORM}. 
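Two of the ingredients above lend themselves to short numerical sketches: the pressure-based partition of the cold gas entering the quiescent SF law, and the IMF integral defining $R$. The Python below is illustrative only; the IMF $\phi(m)$ and remnant masses $m_{\rm rem}(m)$ are user-supplied callables (in practice taken from the stellar evolution models cited above), and $\phi$ is assumed to be normalised to unit total mass.

\begin{verbatim}
from scipy.integrate import quad

def h2_to_hi_ratio(P_ext_over_kB, logP0_kB=4.23, alpha=0.8):
    """Sigma_H2/Sigma_HI = (P_ext/P0)^alpha (Blitz & Rosolowsky 2006).
    Defaults are the Leroy et al. (2008) values adopted in Lagos12;
    P_ext_over_kB in cm^-3 K."""
    return (P_ext_over_kB / 10.0**logP0_kB)**alpha

def quiescent_sfr_density(sigma_gas, P_ext_over_kB, nu_sf=0.5):
    """Sigma_SFR = nu_SF * Sigma_mol, with the molecular fraction
    f_mol = R_mol/(1 + R_mol), i.e. Sigma_H2/Sigma_HI = f_mol/(1 - f_mol).
    sigma_gas in Msun/pc^2, nu_sf in Gyr^-1 -> Sigma_SFR in Msun/pc^2/Gyr."""
    r_mol = h2_to_hi_ratio(P_ext_over_kB)
    f_mol = r_mol / (1.0 + r_mol)
    return nu_sf * f_mol * sigma_gas

def recycled_fraction(phi, m_rem, m_min=1.0, m_max=120.0):
    """R = int (m - m_rem(m)) phi(m) dm over [m_min, m_max], with phi(m)
    the IMF, assumed normalised such that int m phi(m) dm = 1 over its
    full mass range."""
    value, _ = quad(lambda m: (m - m_rem(m)) * phi(m), m_min, m_max)
    return value
\end{verbatim}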
\subsection{Morphological transformation of galaxies}\label{Sec:MorphoTrans} \subsubsection{The build-up of discs}\label{Sec:Cooling} Galaxies form from gas which cools from the hot halo, conserving angular momentum. As the temperature decreases, thermal pressure stops supporting the gas, which then settles into a rotating disk \citep{Fall80}. We model the density profile of the hot gas with a $\beta$-profile \citep{Cavaliere76}, \begin{equation} \rho_{\rm hot}(r)\propto (r^2+r^2_{\rm core})^{-3\beta_{\rm fit}/2}. \label{beta-prof} \end{equation} \noindent The simulations of \citet{Eke98} show that $\beta_{\rm fit}\approx 2/3$ and that $r_{\rm core}/R_{\rm NFW}\approx1/3$. In {\tt GALFORM}, we adopt the above values and use $\rho_{\rm hot}(r)\propto (r^2+R^2_{\rm NFW}/9)^{-1}$, where $R_{\rm NFW}$ is the scale radius of the \citet{Navarro97} profile of the dark matter halos. During a timestep $\delta t$ in the integration of galaxy properties, we calculate the amount of gas that cools and estimate the radius at which $\tau_{\rm cool}(r_{\rm cool})=t-t_{\rm form}$, where $t_{\rm form}$ corresponds to the time at which the halo was formed and $t$ is the current time. The gas inside $r_{\rm cool}$ is cool enough to be accreted onto the disk. However, the cooled gas must also have had enough time to fall onto the disk. Thus, the gas that has had enough time to cool and be accreted onto the disk is that within the radii at which the free-fall time and the cooling time equal $(t-t_{\rm form})$, defined as $r_{\rm ff}$ and $r_{\rm cool}$, respectively. The mass accreted onto the disk simply corresponds to the hot gas mass enclosed within $r={\rm min}[r_{\rm cool},r_{\rm ff}]$. We calculate $r_{\rm cool}$ from the cooling time, which is defined as \begin{equation} \tau_{\rm cool}(r)=\frac{3}{2}\,\frac{\mu\, m_{\rm H}\,k_{\rm B}\, T_{\rm hot}}{\rho_{\rm hot}(r)\, \Lambda(T_{\rm hot},Z_{\rm hot})}. \end{equation} \noindent Here, $\Lambda(T_{\rm hot},Z_{\rm hot})$ is the cooling function that depends on the gas temperature, $T_{\rm hot}$, which corresponds to the virial temperature of the halo ($T_{\rm hot}= T_{\rm V}$), and on the metallicity $Z_{\rm hot}$ (i.e. the ratio between the mass in elements heavier than helium and the total gas mass). The cooling rate per unit volume is $\epsilon_{\rm cool}\propto \rho^2_{\rm hot}\, \Lambda(T_{\rm hot},Z_{\rm hot})$. In {\tt GALFORM} we adopt the cooling function tabulated by \citet{Sutherland93}. \subsubsection{The formation of spheroids}\label{BuildUpBulges} Galaxy mergers and disk instabilities give rise to the formation of spheroids and elliptical galaxies. Below we describe both physical processes. {\it Galaxy mergers.} When DM halos merge, we assume that the galaxy hosted by the most massive progenitor halo becomes the central galaxy, while all the other galaxies become satellites orbiting the central galaxy. These orbits gradually decay towards the centre due to energy and angular momentum losses driven by dynamical friction with the halo material. Eventually, given sufficient time, satellites spiral in and merge with the central galaxy. Depending on the amount of gas and baryonic mass involved in the galaxy merger, a starburst can result. 
The time taken for the satellite to reach and merge with the central galaxy is called the orbital timescale, $\tau_{\rm merge}$, which is calculated following \citet{Lacey93} as \begin{equation} \tau_{\rm merge}=f_{\rm df}\, \Theta_{\rm orbit}\, \tau_{\rm dyn}\, \left[\frac{0.3722}{{\rm ln}(\Lambda_{\rm Coulomb})}\right]\, \frac{M}{M_{\rm sat}}. \end{equation} \noindent Here, $f_{\rm df}$ is a dimensionless adjustable parameter, which is $f_{\rm df}>1$ if the satellite's halo is efficiently stripped early on during the infall, $\Theta_{\rm orbit}$ is a function of the orbital parameters, $\tau_{\rm dyn}\equiv \pi\,R_{\rm v}/V_{\rm v}$ is the dynamical timescale of the halo, ${\rm ln}(\Lambda_{\rm Coulomb})={\rm ln}(M/M_{\rm sat})$ is the Coulomb logarithm, $M$ is the halo mass of the central galaxy and $M_{\rm sat}$ is the mass of the satellite, including the mass of the DM halo in which the galaxy was formed. \citet{Lagos12} and \citet{Gonzalez-Perez13} used the $\Theta_{\rm orbit}$ function calculated in \citet{Lacey93}, \begin{equation} \Theta_{\rm orbit}=\left[\frac{J}{J_{\rm c}(E)}\right]^{0.78}\, \left[\frac{r_{\rm c}(E)}{R_{\rm v}}\right]^{2}, \label{Eq:merger:Lacey} \end{equation} \noindent where $J$ is the initial angular momentum and $E$ is the energy of the satellite's orbit, and $J_{\rm c}(E)$ and $r_{\rm c}(E)$ are, respectively, the angular momentum and radius of a circular orbit with the same energy as that of the satellite. Thus, the circularity of the orbit corresponds to $J/J_{\rm c}(E)$. The function $\Theta_{\rm orbit}$ is well described by a log-normal distribution with median value $\langle{\rm log}_{10} \Theta_{\rm orbit} \rangle=-0.14$ and dispersion $\langle({\rm log}_{10} \Theta_{\rm orbit}-\langle{\rm log}_{10} \Theta_{\rm orbit} \rangle)^2 \rangle^{1/2}=0.26$. These values are not correlated with satellite galaxy properties. Therefore, for each satellite, the value of $\Theta_{\rm orbit}$ is randomly chosen from the above distribution. Note that the dependence of $\Theta_{\rm orbit}$ on $J$ in Eq.~\ref{Eq:merger:Lacey} is a fit to numerical estimates. Lacey et al. (2014) use the updated dynamical friction function of \citet{Jiang07}, which slightly changes the dependence on the mass ratio of the satellite to the central galaxy. The net effect of such a change is that minor mergers occur more quickly, while major mergers occur more slowly, when compared to the \citet{Lacey93} prescription. If the merger timescale is less than the time that has elapsed since the formation of the halo, i.e. if $\tau_{\rm merge}<t-t_{\rm form}$, we proceed to merge the satellite with the central galaxy at $t$. If the total masses of gas plus stars of the primary (largest) and secondary galaxies involved in a merger are $M_{\rm p}=M_{\rm cold,p}+M_{\star,p}$ and $M_{\rm s}=M_{\rm cold,s}+M_{\star,s}$, the outcome of the galaxy merger depends on the galaxy mass ratio, $M_{\rm s}/M_{\rm p}$, and the fraction of gas in the primary galaxy, $M_{\rm cold,p}/M_{\rm p}$: \begin{itemize} \item $M_{\rm s}/M_{\rm p}>f_{\rm ellip}$ drives a major merger. In this case all the stars present are rearranged into a spheroid. In addition, any cold gas in the merging system is assumed to undergo a burst of SF and the stars formed are added to the spheroid component. We typically take $f_{\rm ellip}=0.3$, which is within the range found in simulations (e.g. see \citealt{Baugh96} for a discussion). \item $f_{\rm burst}<M_{\rm s}/M_{\rm p}\le f_{\rm ellip}$ drives minor mergers. 
In this case all the stars in the secondary galaxy are accreted onto the primary galaxy spheroid, leaving the stellar disk of the primary intact. In minor mergers the presence of a starburst depends on the cold gas content of the primary galaxy, as set out in the next bullet point. \item $f_{\rm burst}<M_{\rm s}/M_{\rm p}\le f_{\rm ellip}$ and $M_{\rm cold,p}/M_{\rm p}>f_{\rm gas,burst}$ drives a starburst in a minor merger. The perturbations introduced by the secondary galaxy are assumed to drive all the cold gas from both galaxies to the new spheroid, producing a starburst. There is no starburst if $M_{\rm cold,p}/M_{\rm p}<f_{\rm gas,burst}$. The Bau05 \citep{Baugh05} and Bow06 \citep{Bower06} models adopt $f_{\rm gas,burst}=0.75$ and $f_{\rm gas,burst}=0.1$, respectively. \item $M_{\rm s}/M_{\rm p}\le f_{\rm burst}$ results in the primary disk remaining unchanged. As before, the stars accreted from the secondary galaxy are added to the spheroid, but the overall gas component (from both galaxies) stays in the disk, and the stellar disk of the primary is preserved. The Bau05 and Bow06 models adopt $f_{\rm burst}=0.05$ and $f_{\rm burst}=0.1$, respectively. \end{itemize} {\it Disk instabilities.} If the disk becomes sufficiently massive that its self-gravity is dominant, then it is unstable to small perturbations by satellites or DM substructures. The criterion for instability was described by \citet{Efstathiou82} and \citet{Mo98} and introduced into {\tt GALFORM} by \citet{Cole00}, \begin{equation} \epsilon=\frac{V_{\rm circ}(r_{\rm d})}{\sqrt{G\, M_{\rm d}/r_{\rm s}}}. \label{DisKins} \end{equation} \noindent Here, $V_{\rm circ}(r_{\rm d})$ is the circular velocity of the disk at the half-mass radius, $r_{\rm d}$, $r_{\rm s}$ is the scale radius of the disk and $M_{\rm d}$ is the disk mass (gas plus stars). If $\epsilon<\epsilon_{\rm disk}$, where $\epsilon_{\rm disk}$ is a parameter, then the disk is considered to be unstable. In the case of unstable disks, stars and gas in the disk are accreted onto the spheroid and the gas inflow drives a starburst. Lagos12 and Gonzalez-Perez14 adopt $\epsilon_{\rm disk}=0.8$, while Lacey14 adopt a slightly higher value, $\epsilon_{\rm disk}=0.9$. \subsection{Gradual Ram pressure stripping of the hot gas}\label{Sec:rampressure} The standard treatment of the hot gas in accreted satellites in {\tt GALFORM} is usually referred to as `strangulation'\footnote{Another way this is referred to in the literature is `starvation', but both terms refer to the same process: complete removal of the hot gas reservoir of galaxies when they become satellites.} of the hot gas. In this extreme case, the ram pressure stripping of the satellite's hot gas reservoir by the hot gas in the main halo is completely efficient and is assumed to occur as soon as a galaxy becomes a satellite. This treatment has been shown to drive redder colours of satellite galaxies \citep{Font08}. As we are studying ETGs, which tend to be found more frequently in denser environments, we test the impact of a more physical and gradual process, the partial ram pressure stripping of the hot gas, on the neutral gas content of ETGs. Simulations show that the amount of gas removed from the satellite's hot reservoir depends upon the ram pressure experienced, which in turn is determined by the peri-centre of the orbit \citep{McCarthy08}. Here we briefly describe the more physical partial ram-pressure stripping model introduced by \citet{Font08}. 
The partial ram-pressure stripping of the hot gas is applied to a spherical distribution of hot gas. The model considers that all the hot gas outside the stripping radius, $r_{\rm str}$, is removed from the satellite's hot gas reservoir and transferred to the central galaxy halo. The stripping radius is defined as the radius where the ram pressure, $P_{\rm ram}$, equals the gravitational restoring force per unit area of the satellite galaxy, $P_{\rm grav}$. The ram pressure is defined as, \begin{equation} P_{\rm ram} \equiv \rho_{\rm gas,p}\,v^{2}_{\rm sat}, \label{RPdefinition} \end{equation} \noindent and the gravitational pressure as \begin{equation} P_{\rm grav}\equiv \alpha_{\rm rp}\,\frac{G\,M_{\rm tot,sat}(r_{\rm str})\,\rho_{\rm gas,s}(r_{\rm str})}{r_{\rm str}}. \label{Pgravdefinition} \end{equation} \noindent Here, $\rho_{\rm gas,p}$ is the gas density of the parent halo, $v_{\rm sat}$ is the velocity of the satellite with respect to the parent halo gas medium, $M_{\rm tot,sat}(r_{\rm str})$ is the total mass of the satellite galaxy (stellar, gas and dark matter components) enclosed within $r_{\rm str}$ and $\rho_{\rm gas,s}(r_{\rm str})$ is the hot gas density of the satellite galaxy at $r_{\rm str}$. In this model, $r_{\rm str}$ is measured from the centre of the satellite galaxy sub-halo. The coefficient $\alpha_{\rm rp}$ is a geometric constant of order unity. In this paper we use $\alpha_{\rm rp}=2$, which is the value found by \citet{McCarthy08} in their hydrodynamical simulations. The hot gas of the parent halo follows the density profile of Eq.~\ref{beta-prof}. This model assumes that the hot gas of the satellite galaxy inside $r_{\rm str}$ remains intact while the hot gas outside is stripped on approximately a sound crossing time. In {\tt GALFORM}, $r_{\rm str}$ is calculated at the time a galaxy becomes a satellite by equating Eqs.~\ref{RPdefinition} and \ref{Pgravdefinition}, setting the ram pressure to its maximum value, which occurs at the peri-centre of the orbit of the satellite galaxy. The hot gas outside $r_{\rm str}$ is instantaneously stripped once the galaxy crosses the virial radius of the parent halo. This simplified modelling overestimates the hot gas stripped between the time the satellite galaxy crosses the virial radius and its first pericentric passage. \citet{Font08} argue that this is not a bad approximation as the timescale for the latter is only a small fraction of the time a satellite galaxy spends orbiting in the parent halo. Font et al. also argue that in terms of hot gas removal, ram pressure is the major physical mechanism, while tidal heating and stripping are secondary effects. The infall velocity of the satellite galaxy is randomly sampled from the $2$-dimensional distribution of infall velocities of dark matter substructures, measured by \citet{Benson05} from a large suite of cosmological simulations. Then, the peri-centre radius and the velocity at the peri-centre are computed by assuming that the orbital energy and angular momentum are conserved and by treating the satellite as a point mass orbiting within a Navarro-Frenk-White gravitational potential with the same total mass and concentration as the parent halo. The remaining hot gas in the satellite galaxy halo can cool down and feed the satellite's disc. The cooling of this remaining hot gas is calculated by assuming that the mean density of the hot gas of the satellite is not altered by the stripping process, and using a nominal hot halo mass that includes both the current hot gas mass and the hot gas mass that has been stripped. 
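As an illustration, the stripping radius defined by equating Eqs.~\ref{RPdefinition} and \ref{Pgravdefinition} can be found with a one-dimensional root search. The sketch below assumes the satellite enclosed-mass and hot-gas density profiles are supplied as callables; it is our own illustration, not {\tt GALFORM} code.

\begin{verbatim}
from scipy.optimize import brentq

G_CGS = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2]

def stripping_radius(rho_gas_parent, v_sat_peri, M_tot_sat, rho_gas_sat,
                     r_lo, r_hi, alpha_rp=2.0):
    """Solve P_ram = P_grav for r_str (cgs units).

    rho_gas_parent : parent-halo gas density at the satellite pericentre
    v_sat_peri     : satellite velocity at pericentre, where P_ram peaks
    M_tot_sat(r)   : callable, total satellite mass enclosed within r
    rho_gas_sat(r) : callable, satellite hot-gas density at r
    r_lo, r_hi     : radii bracketing the root
    """
    P_ram = rho_gas_parent * v_sat_peri**2

    def pressure_excess(r):
        P_grav = alpha_rp * G_CGS * M_tot_sat(r) * rho_gas_sat(r) / r
        return P_grav - P_ram

    # P_grav decreases outwards, so the hot gas beyond the root (where
    # P_ram exceeds P_grav) is the mass that gets stripped.
    return brentq(pressure_excess, r_lo, r_hi)
\end{verbatim}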
The difference between this cooling calculation and the standard one described in $\S$~\ref{Sec:Cooling} is that the cooling radius cannot be larger than $r_{\rm str}$. As star formation continues to take place in satellite galaxies, there will be an additional source of hot gas, which corresponds to the winds escaping the galaxy disk that mix with or evaporate into the hot halo gas. Most of this star formation takes place when the satellite galaxy is on the outer parts of its orbit, where the ram pressure is small. \citet{Font08} then suggested that a fraction $\epsilon_{\rm strip}$ (less than unity) of this gas is actually stripped from the hot halo of the satellite. Font et al. discussed the effects of different values for $\epsilon_{\rm strip}$ and adopted $\epsilon_{\rm strip}=0.1$ to reproduce the colours of satellite galaxies. Throughout this paper we adopt the same value for $\epsilon_{\rm strip}$, but we discuss in Appendix~\ref{RPpars} the effect of varying it. Finally, in order to account for the growth of the parent halo and the effect this has on the ram pressure, the ram pressure is recalculated for each satellite galaxy every time the parent halo doubles its mass compared to the halo mass at the instant of the initial stripping event. We test the effect of partial ram pressure stripping of the hot gas by including the above modelling into the three variants of {\tt GALFORM}, and we refer to the variants with partial ram pressure stripping of the hot gas as Lagos12+RP, Gonzalez-Perez14+RP and Lacey14+RP. \subsection{Differences between the Lagos12, Gonzalez-Perez14 and Lacey14 models}\label{Models} The Lagos12 model is a development of the model originally described in \citet{Bower06}, which was the first variant of {\tt GALFORM} to include AGN feedback as the mechanism suppressing gas cooling in massive halos. The Lagos12 model assumes a universal initial mass function (IMF), the \citet{Kennicutt83} IMF\footnote{The distribution of the masses of stars formed follows ${\rm d}N(m)/{\rm d\, ln}\,m \propto m^{-x}$, where $N$ is the number of stars of mass $m$ formed, and $x$ is the IMF slope. For a \citet{Kennicutt83} IMF, $x=1.5$ for masses in the range $1\,M_{\odot}\le m\le 100\,M_{\odot}$ and $x=0.4$ for masses $m< 1\,M_{\odot}$.}. Lagos12 extend the model of Bower et al. by including the self-consistent SF law described in $\S$~\ref{SFlaw}, and adopting $\nu_{\rm SF}=0.5\,\rm Gyr^{-1}$, $\rm log(P_{0}/k_{\rm B} [\rm cm^{-3} K])=4.23$, where $\rm k_{\rm B}$ is Boltzmann's constant, and $\alpha=0.8$, which correspond to the values of the parameters reported by \citet{Leroy08} for local spiral and dwarf galaxies. This choice of SF law greatly reduces the parameter space of the model and also extends its predictive power by directly modelling the atomic and molecular hydrogen content of galaxies. All of the subsequent models that use the same SF law also have the ability to predict the HI and H$_2$ gas contents of galaxies. Lagos12 adopt longer duration starbursts (i.e. larger $f_{\rm dyn}$) compared to Bower et al. to improve the agreement with the observed luminosity function in the rest-frame ultraviolet (UV) at high redshifts. Lagos12 adopt $\tau_{\rm min}=100\, \rm Myr$ and $f_{\rm dyn}=50$ in Eq.~\ref{SFlawSB}. The Lagos12 model was developed in the Millennium simulation, which assumed a WMAP1 cosmology \citep{Spergel03}. The Gonzalez-Perez14 model updated the Lagos12 model to the WMAP7 cosmology \citep{Komatsu11}. 
A small number of parameters were recalibrated to recover the agreement between the model predictions and the observed evolution of both the UV and $K$-band luminosity functions. These changes include a slightly shorter starburst duration, i.e. $\tau_{\rm min}=50\, \rm Myr$ and $f_{\rm dyn}=10$, and weaker supernovae feedback. See \citet{Gonzalez-Perez13} for more details. The Lacey14 model is also developed in the WMAP7 cosmology but it differs from the other two flavours in that it adopts a bimodal IMF. The IMF describing SF in disks (i.e. the quiescent mode) is the same as the universal IMF in the other two models, but a top-heavy IMF is adopted for starbursts (i.e. with an IMF slope $x=1$). This choice was motivated by \citet{Baugh05}, who used a bimodal IMF to recover the agreement between the model predictions and observations of the number counts and redshift distribution of submillimeter galaxies. We note, however, that Baugh et al. adopted a more top-heavy IMF for starbursts with $x=0$. The stellar population synthesis model used for Lacey14 is also different. While both Lagos12 and Gonzalez-Perez14 use \citet{Bruzual03}, the Lacey14 model uses \citet{Maraston05}. Another key difference between the Lacey14 model and the other two {\tt GALFORM} flavours considered here, is that Lacey14 adopt a slightly larger value of the SF efficiency rate, $\nu_{\rm SF}=0.74\,\rm Gyr^{-1}$, still within the range allowed by the most recent observational compilation of \citet{Bigiel11}, making SF more efficient. \subsection{The $N$-body simulations and cosmological parameters}\label{Cosmos} We use halo merger trees extracted from the Millennium cosmological N-body simulation (adopting WMAP1 cosmology; \citealt{Springel05}) and its WMAP7 counterpart. The Millennium simulation\footnote{Data from the Millennium simulation is available on a relational database accessible from {\tt http://galaxy-catalogue.dur.ac.uk:8080/Millennium}.} has the following cosmological parameters: $\Omega_{\rm m}=\Omega_{\rm DM}+\Omega_{\rm baryons}=0.25$ (giving a baryon fraction of $0.18$), $\Omega_{\Lambda}=0.75$, $\sigma_{8}=0.9$ and $h=0.73$. The resolution of the $N$-body simulation is fixed at a halo mass of $1.72 \times 10^{10} h^{-1} M_{\odot}$. \citet{Lagos14} show that much higher resolution merger trees are needed to fully resolve the HI content of galaxies from $z=0$ to $z=10$. However, in this work we are concerned with galaxies with $L_{\rm K}\gtrsim 10^9\,L_{\odot}$, which are well resolved in the Millennium simulations. The Lacey14 and Gonzalez-Perez14 models were developed in the WMAP7 version of the Millennium simulation, where the cosmological parameters are $\Omega_{\rm m}=\Omega_{\rm DM}+\Omega_{\rm baryons}=0.272$ (with a baryon fraction of $0.167$), $\Omega_{\Lambda}=0.728$, $\sigma_{8}=0.81$ and $h=0.704$ (WMAP7 results were presented in \citealt{Komatsu11}). Throughout this work we show gas masses in units of $M_{\odot}$, luminosities in units of $L_{\odot}$ and number densities in units of $\rm Mpc^{-3}\,dex^{-1}$. This implies that we have evaluated the $h$ factors. The largest difference driven by the different cosmologies is in the number density, but this is only $0.05$~dex, which is much smaller than the differences between the models or between model and observations. \section{The neutral gas content of local early-type galaxies: models vs. 
observations}\label{Sec:ModelComparison} \begin{figure} \begin{center} \includegraphics[trim = 0.9mm 0.3mm 1mm 3.45mm,clip,width=0.47\textwidth]{Figs/MorphoFracz0_OnlyGALFORM_Lagos12.eps} \caption{Fraction of ETGs as a function of the rest-frame $r$-band absolute magnitude, for the Lagos12, Lacey14 and Gonzalez-Perez14 models and their variants including partial ram pressure stripping of the hot gas (`+RP'), as labelled. Early-type galaxies in the models are those with a bulge-to-total stellar mass ratio $B/T>0.5$. The shaded regions correspond to the observational estimates of \citet{Benson07} and \citet{Gonzalez09} using the SDSS, as labelled. In the case of \citet{Gonzalez09} the upper limit of the shaded region is given by the S\'ersic index selection, while the lower limit is given by the concentration selection (see text for details).} \label{ETratios} \end{center} \end{figure} In this section we compare the predicted properties of ETGs in the three variants of {\tt GALFORM} with and without the inclusion of partial ram pressure stripping of the hot gas. This modification has little effect on the local $b_J$-band and $K$-band luminosity functions in the three variants of {\tt GALFORM}. These observables are usually considered to be the main constraints for finding the best set of model parameters (see for example \citealt{Bower10} and \citealt{Ruiz13}). Other $z=0$ properties, such as half-mass radii and gas and stellar metallicities, are also insensitive to the inclusion of partial ram pressure stripping. We therefore conclude that the partial ram pressure stripping versions of the three {\tt GALFORM} models provide a representation of the local Universe as good as the standard models. The first comparison we perform is the fraction of galaxies that are ETGs as a function of galaxy luminosity. This is a crucial step in our analysis, as we aim to characterise the neutral gas content of the ETG population. Throughout we will refer to ETGs in the model as those having bulge-to-total stellar mass ratios, $B/T$, $>0.5$. Although this selection criterion is very sharp and has been analysed in detail in the literature (see for example \citealt{Weinzirl09} and \citealt{Khochfar11}), we find that our results are not sensitive to the threshold $B/T$ selecting ETGs, as long as this threshold is $B/T>0.3$. We analyse this selection criterion in more detail in $\S$~\ref{Robustness}. Fig.~\ref{ETratios} shows the fraction of ETGs, $f_{\rm early}$, as a function of the $r$-band absolute magnitude at $z=0$ for the three {\tt GALFORM} models described in \S~\ref{Models} and their variants including partial ram pressure stripping of the hot gas. Observational estimates of $f_{\rm early}$ are also shown in Fig.~\ref{ETratios}, for three different ways of selecting ETGs. The first one corresponds to \citet{Benson07}, in which a disc and a bulge component were fitted to $r$-band images of $8,839$ bright galaxies selected from the SDSS Early Data Release. The free parameters of the fitting of the disk and bulge components of each galaxy are the bulge ellipticity and the disc inclination angle, $i$. The second corresponds to \citet{Gonzalez09}, where the $r$-band concentration, $c$, and S\'ersic index, $n$, of the SDSS were used to select ETGs: $c>2.86$ or $n>2.5$. The upper and lower limits in the shaded region for the Gonzalez et al. measurements correspond to the two early-type selection criteria. 
All the models predict a trend of increasing $f_{\rm early}$ with increasing $r$-band luminosity, in good agreement with the observations within the error bars. Note that the inclusion of partial ram pressure stripping of the hot gas leads to a slightly larger $f_{\rm early}$ in galaxies with $M_{r}-5\,\rm log(\it h\rm)>-18$ in the three variants of {\tt GALFORM}. The same happens for the brightest galaxies, $M_{r}-5\,\rm log(\it h\rm)<-22$. Both trends are due to the higher frequency of disk instabilities in the models when partial ram pressure stripping is included; the continuous fuelling of neutral gas in satellite galaxies due to cooling from their hot halos (which in the case of partial ram pressure stripping is preserved to some extent) drives more star formation in discs, lowering the stability parameter of Eq.~\ref{DisKins}. These lower stability parameters result in more disk instabilities, driving the formation of spheroids. The space allowed by the observations is large enough that we cannot discriminate between the models. \subsection{The H$_2$-to-HI mass ratio dependence on morphology} \begin{figure} \begin{center} \includegraphics[trim = 0.0mm 0.3mm 1mm 3.45mm,clip,width=0.49\textwidth]{Figs/H2toMstarScaling_z0_Bband_morphology_BT_bins_AllGALFORM.eps} \caption{Molecular-to-atomic hydrogen mass ratio, $M_{\rm H_2}/M_{\rm HI}$, as a function of the bulge-to-total luminosity in the $B$-band, $(B/T)_{B}$, in the same models of Fig.~\ref{ETratios} (lines as labelled in Fig.~\ref{ETratios}), for galaxies with absolute $B$-band magnitudes, $M_B-5\rm \, log(h)<-19$. Lines show the median of the relations for each model, and the $10$ and $90$ percentiles are only shown for the Gonzalez-Perez14+RP model, as an illustration of the dispersion. The other models show very similar $10$ and $90$ percentiles. Observational results from \citet{Young89}, \citet{Bettoni03}, \citet{Leroy08} (THINGS sample), \citet{Lisenfeld11} (AMIGA sample) and \citet{Boselli14b} (HRS sample) are shown as symbols, as labelled, and we combine them so that $B/T<0.2$ corresponds to Irr, Sm, Sd galaxies; $0.2<B/T<0.5$ corresponds to Sc, Sb, Sa galaxies; $B/T>0.5$ corresponds to E and S0 galaxies (see \citealt{deVaucouleurs91} for a description of each morphological type).} \label{ScalingMstar2} \end{center} \end{figure} It has been shown observationally that the ratio between the H$_2$ and HI masses correlates strongly with morphological type, with ETGs having higher H$_2$/HI mass ratios than late-type galaxies (e.g. \citealt{Young89}; \citealt{Bettoni03}; \citealt{Lisenfeld11}; \citealt{Boselli14b}). Fig.~\ref{ScalingMstar2} shows the H$_2$/HI mass ratio in the models as a function of the bulge-to-total luminosity ratio in the $B$-band, $B/T_{\rm B}$, for all galaxies with $B$-band absolute magnitude of $M_B-5\rm \, log(h)<-19$. This magnitude limit is chosen as it roughly corresponds to the selection criteria applied to the observational data shown in Fig.~\ref{ScalingMstar2}. The observational data have morphological types derived from a visual classification of $B$-band images \citep{deVaucouleurs91}, and have also been selected in blue bands (e.g. \citealt{Simien86}; \citealt{Weinzirl09}). The models predict a relation between the H$_2$/HI mass ratio and $B/T_{\rm B}$ that is in good agreement with the observations. Note that, for $B/T_{\rm B}<0.2$, the models slightly overpredict the median H$_2$/HI mass ratio. 
The latter has also been observed in the recent Herschel Reference Survey (HRS, \citealt{Boselli14b}; stars in Fig.~\ref{ScalingMstar2}) for galaxies of morphological types later than Sd (including irregular galaxies). {In {\tt GALFORM} there is a monotonic relation between the H$_2$/HI mass ratio and stellar mass, such that H$_2$/HI decreases with decreasing stellar mass (see \citealt{Lagos11} for a detailed discussion). On the other hand, the relation between stellar mass and $B/T$ is not monotonic: the median stellar mass in the lowest $B/T$ bins ($B/T<0.1$) is higher than at $B/T\sim 0.2$. This drives the slight increase of H$_2$/HI at the lowest $B/T$. The physical reason why $B/T$ is not monotonically correlated with stellar mass is that environment plays an important role in morphology (for example through the number of galaxy mergers and disk instabilities), which makes morphology a more transient property of galaxies, while stellar mass correlates with halo mass rather than with environment. This will be discussed in more detail in paper II.} Of the three {\tt GALFORM} variants, the model that predicts the steepest slope for the relation between H$_2$/HI mass ratio and $B/T$ is the Lacey14 model. However, when including partial ram pressure stripping of the hot gas, all the models show a slight increase in this slope, with ETGs having higher H$_2$/HI mass ratios. This comes from the higher gas surface densities that ETGs have on average when partial ram pressure stripping is included, which drive a higher hydrostatic pressure. The strangulation scenario, which removes the hot gas instantaneously, drives a quick depletion of the cold gas reservoir in galaxies as star formation continues, while in the partial ram pressure scenario the cold gas reservoir is still replenished due to continuous inflow of gas from the satellite's remaining hot halo. \subsection{The HI content of early-type galaxies} \begin{figure} \begin{center} \includegraphics[trim = 0.9mm 0.3mm 1mm 0mm,clip,width=0.47\textwidth]{Figs/HIMFz0_CompOnlyGALFORM_Lagos12.eps} \caption{{\it Top panel:} The HI mass function of all galaxies at $z=0$ for the models Lagos12, Lacey14 and Gonzalez-Perez14 and their variants including partial ram pressure stripping of the hot gas (`+RP'), as labelled. Observations correspond to \citet{Zwaan05} (triangles) and \citet{Martin10} (diamonds). The shaded region shows the range where the number density of galaxies declines due to halo mass resolution effects. {\it Middle panel:} as in the top panel, but here the HI mass function is shown for galaxies with $K$-band luminosities $L_{\rm K}>6\times 10^9\,L_{\odot}$. Observations correspond to the analysis of HIPASS presented in $\S$~\ref{obssec}. {\it Bottom panel:} as in the top panel but for ETGs (those with a bulge-to-total stellar mass ratio $>0.5$) and $K$-band luminosities $L_{\rm K}>6\times 10^9\,L_{\odot}$. 
Observations correspond to the analysis of HIPASS (circles) and the ATLAS$^{\rm 3D}$ (with and without volume correction as filled and empty stars, respectively) surveys containing galaxies with $L_{\rm K}>6\times 10^9\,L_{\odot}$ and described in $\S$~\ref{obssec}.} \label{HIRPcomp} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[trim = 0.9mm 0.3mm 1mm 0mm,clip,width=0.49\textwidth]{Figs/HIMs_ratio_CompOnlyGALFORM_Lagos12.eps} \caption{{\it Top panel:} the distribution function of the ratio between the HI mass and the $K$-band luminosity for the same models shown in Fig.~\ref{HIRPcomp} for galaxies with $L_{\rm K}>6\times 10^9\,L_{\odot}$. Observations correspond to the analysis of HIPASS presented in $\S$~\ref{obssec}. {\it Bottom panel:} As in the top panel but for ETGs ($B/T>0.5$) with $L_{\rm K}>6\times 10^9\,L_{\odot}$. Observations correspond to the analysis of HIPASS (circles) and ATLAS$^{3D}$ (stars) presented in $\S$~\ref{obssec}.} \label{HIRPcompv2} \end{center} \end{figure} Fig.~\ref{HIRPcomp} shows the model predictions for the HI mass function for all galaxies (top panel), for the subsample of galaxies with $K$-band luminosities $L_{\rm K}>6\times 10^9\,L_{\odot}$ (middle panel), and for ETGs with the same $K$-band luminosity cut (bottom panel). The observational results in Fig.~\ref{HIRPcomp} are described in \S~\ref{obssec}. For the overall galaxy population (top panel of Fig.~\ref{HIRPcomp}), the six models provide a good description of the HI mass function, and the inclusion of partial ram pressure stripping of the hot gas has little effect. For the galaxy population with $L_{\rm K}>6\times 10^9\,L_{\odot}$ (middle panel of Fig.~\ref{HIRPcomp}), the Lagos12 and Gonzalez-Perez14 models predict a slightly lower number density at the peak of the HI mass distribution compared to the observations, while the Lacey14 model predicts an HI mass distribution in good agreement with the observations throughout the full HI mass range. Note that the inclusion of partial ram pressure stripping of the hot gas has the effect of increasing the number density of galaxies with $10^8\,M_{\odot}<M_{\rm HI}<3\times 10^9\,M_{\odot}$, and decreasing the number density of galaxies with $M_{\rm HI}<10^8\,M_{\odot}$. The reason for this is that many of the galaxies with low HI masses ($M_{\rm HI}<10^8\,M_{\odot}$) become more gas rich when partial ram pressure stripping is included {compared to the case of strangulation of the hot gas}, and move to higher HI masses ($10^8\,M_{\odot}<M_{\rm HI}<3\times 10^9\,M_{\odot}$). This slight change improves the agreement with the observations in the three {\tt GALFORM} models, particularly around the turnover at $M_{\rm HI}\approx 5\times 10^9\,M_{\odot}$ in the mass function shown in the middle panel of Fig.~\ref{HIRPcomp}. In the case of ETGs with $L_{\rm K}>6\times 10^9\,L_{\odot}$ (bottom panel of Fig.~\ref{HIRPcomp}), the Lagos12 model predicts a number density of galaxies with $10^{9}\,M_{\odot}<M_{\rm HI}<10^{10}\,M_{\odot}$ lower than observed, while the predictions from the Gonzalez-Perez14 and Lacey14 models agree well with the observations. The inclusion of partial ram pressure stripping of the hot gas in the three models has the effect of increasing the number density of ETGs with HI masses $M_{\rm HI}>10^8\,M_{\odot}$. This increase brings the models closer to the observations; in particular, the Lacey14+RP model predicts an HI mass function of ETGs in very good agreement with the observations. 
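For reference, the mass functions compared throughout this section are number densities per dex of mass. A schematic estimator is sketched below, assuming a hypothetical array of HI masses and the comoving volume of the simulated box, in consistent units; it is an illustration only, not the analysis pipeline actually used.
\begin{verbatim}
# Minimal sketch: mass function phi = dn/dlog10(M), i.e. the
# number density of galaxies per dex, from masses m [Msun] in a
# simulated comoving volume [Mpc^3 h^-3] (assumed inputs).
import numpy as np

def mass_function(m, volume, dex=0.2):
    logm = np.log10(m[m > 0])
    edges = np.arange(logm.min(), logm.max() + dex, dex)
    counts, _ = np.histogram(logm, bins=edges)
    phi = counts / (volume * dex)   # [h^3 Mpc^-3 dex^-1]
    return 0.5 * (edges[:-1] + edges[1:]), phi
\end{verbatim}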
The HI mass function has become a standard constraint on the {\tt GALFORM} model since \citet{Lagos11}. However, \citet{Lemonias13} show that a stronger constraint on simulations of galaxy formation is provided by the conditional mass function of gas, or similarly, the gas fraction distribution. Here, we compare the predictions for the HI gas fraction distribution function with observations in Fig.~\ref{HIRPcompv2}. The HI gas fraction is taken with respect to the $K$-band luminosity to allow direct comparisons with the observations without the need to convert between different adopted IMFs, which would be necessary if stellar mass were used. \citet{Mitchell13} show that when simulations and observations adopt different IMFs, the comparison between the stellar masses predicted by the models and the observed ones is misleading. Instead, a full SED fitting needs to be performed to make a fair comparison in such a case. Note that the same applies when the star formation histories adopted in the observations differ significantly from those of the simulated galaxies. The top panel of Fig.~\ref{HIRPcompv2} shows the predicted HI gas fraction for galaxies with $L_{\rm K}>6\times 10^9\,L_{\odot}$. The observations are described in \S~\ref{obssec}. The Lagos12 and Gonzalez-Perez14 models predict a peak of the gas fraction distribution at higher gas fractions than observed, while the Lacey14 model predicts a peak closer to the observed one (i.e. $M_{\rm HI}/L_{\rm K}\approx 0.15\,M_{\odot}/L_{\odot}$). The galaxies at the peak of the HI gas fraction distribution also lie on the main sequence of galaxies in the SFR-stellar mass plane \citep{Lagos10}. The inclusion of partial ram pressure stripping of the hot gas increases the number density of galaxies with HI gas fractions $M_{\rm HI}/L_{\rm K}>10^{-3}\,M_{\odot}/L_{\odot}$ and reduces the number density of galaxies with lower HI gas fractions. The physical reason behind these trends is that the inclusion of partial ram pressure stripping increases the HI gas fraction of gas poor galaxies, compared to the strangulation scenario, due to the replenishment of their cold gas reservoir. The bottom panel of Fig.~\ref{HIRPcompv2} shows the HI gas fraction distribution of ETGs ($B/T>0.5$) with $L_{\rm K}>6\times 10^9\,L_{\odot}$. The HI gas fraction distribution of ETGs is very different from that of all galaxies, showing a much broader distribution with a tail to very low HI gas fractions in all six models. The Lagos12 and Gonzalez-Perez14 models predict lower HI gas fractions for ETGs than observed, while the predictions from Lacey14 are closer to the observations. By including partial ram pressure stripping of the hot gas, the HI gas fractions increase in all the models due to the replenishment of the cold gas reservoirs in satellite galaxies. The predictions of the Lacey14+RP model are a very good description of the observations, with a peak in the number density of ETGs in the range $M_{\rm HI}/L_{\rm K}\approx 0.002-0.02\,M_{\odot}/L_{\odot}$. Overall, the HI content of ETGs is moderately sensitive to the treatment of the hot gas content of satellite galaxies, while the overall galaxy population does not show the same sensitivity to this physical process due to the dominance of central galaxies. 
\subsection{The H$_2$ content of early-type galaxies} \begin{figure} \begin{center} \includegraphics[trim = 0.9mm 0.3mm 1mm 0mm,clip,width=0.49\textwidth]{Figs/H2MFz0_CompOnlyGALFORM_Lagos12.eps} \caption{{\it Top panel:} the H$_2$ mass function at $z=0$ for all galaxies in the Lagos12, Lacey14 and Gonzalez-Perez14 models and their variants including partial ram pressure stripping of the hot gas (`+RP'). Observations from the $60\mu$m (downwards pointing triangles) and $B$-band (triangles) samples of \citet{Keres03} are also shown (see $\S$~\ref{obssec}). {\it Bottom panel:} as in the top panel but for ETGs ($B/T>0.5$) with $L_{\rm K}>6\times 10^9\,L_{\odot}$. Observations correspond to ETGs from the ATLAS$^{3D}$ survey with (filled stars) and without (open stars) volume correction (see \S~\ref{obssec} for details).} \label{H2RPcomp} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[trim = 0.9mm 0.3mm 1mm 0mm,clip,width=0.47\textwidth]{Figs/H2Ms_ratio_CompOnlyGALFORM_Lagos12.eps} \caption{The distribution function of the ratio between the H$_2$ mass and the $K$-band luminosity for ETGs ($B/T>0.5$) with $L_{\rm K}>6\times 10^9\,L_{\odot}$, for the models as labelled. Observations correspond to ETGs from the ATLAS$^{3D}$ survey (see \S~\ref{obssec} for details of the observational dataset).} \label{H2RPcompv2} \end{center} \end{figure} The top panel of Fig.~\ref{H2RPcomp} shows the predicted H$_2$ mass functions for all galaxies. The observations are from \citet{Keres03} and are described in \S~\ref{obssec}. All the models provide a good description of the observations. The largest differences between the models are at the high-mass end. The Gonzalez-Perez14 model predicts the highest number densities of galaxies with $M_{\rm H_2}>5\times 10^{9}\,M_{\odot}$, although it is still in agreement with the observations within the errorbars. Note that the inclusion of partial ram pressure stripping of the hot gas leads to very little change. This is due to the dominance of central galaxies in the H$_2$ mass function, which are only indirectly affected by the treatment of partial ram pressure stripping. In the bottom panel of Fig.~\ref{H2RPcomp} we show the H$_2$ mass function of ETGs ($B/T>0.5$) with $K$-band luminosities $L_{\rm K}>6\times 10^9\,L_{\odot}$. The differences between the models are significant: the Lagos12, Gonzalez-Perez14 and Lacey14 models predict a number density of ETGs with H$_2$ masses $M_{\rm H_2}>10^7\,M_{\odot}$ much lower than is observed, by factors of $10$, $6$ and $4$, respectively. It is only when partial ram pressure stripping of the hot gas is included that their predictions get closer to the observations. In particular, the Lacey14+RP model predicts a number density of ETGs with $M_{\rm H_2}>10^7\,M_{\odot}$ that is very close to the observations. The Lagos12+RP and Gonzalez-Perez14+RP models are still a factor of $3-4$ lower than the observations. At the high-mass end of the H$_2$ mass function for all galaxies (top panel of Fig.~\ref{H2RPcomp}), the contribution from ETGs is significant (although not dominant), while for the HI, ETGs make only a minor contribution. The reason for this is simply the higher H$_2$-to-HI mass ratios in ETGs compared to late-type galaxies (see Fig.~\ref{ScalingMstar2}). Fig.~\ref{H2RPcompv2} shows the H$_2$ gas fraction distribution for ETGs with $L_{\rm K}>6\times 10^9\,L_{\odot}$ for the same six models of Fig.~\ref{H2RPcomp}. 
Similarly to the H$_2$ mass function, the Lagos12 and Gonzalez-Perez14 models predict a number density of ETGs with $M_{\rm H_2}/L_{\rm K}>10^{-3}\,M_{\odot}/L_{\odot}$ lower than observed, while their variants including partial ram pressure stripping of the hot gas predict higher number densities, in better agreement with the observations. The Lacey14+RP model provides the best description of the observed H$_2$ gas fractions. The physical reason for the higher number density of H$_2$ `rich' ETGs in the models including partial ram pressure stripping of the hot gas is that the replenishment of the cold gas reservoir leads to an increase in the surface density of gas. Since the HI surface density saturates at $\Sigma_{\rm HI}\approx 10\,M_{\odot}\,\rm pc^{-2}$, due to H$_2$ self-shielding at higher densities, the replenishment of cold gas in the ISM has a stronger effect on the H$_2$ reservoir than on the HI. The incorporation of partial ram pressure stripping of the hot gas brings the models into better agreement with the observed gas fractions of galaxies, and particularly of ETGs. This indicates that partial ram pressure stripping of the hot gas is relevant in a wide range of environments. Note that we do not include any description of the ram pressure stripping of the cold gas, which has been shown to take place in clusters through observations of the HI and H$_2$ contents of galaxies in the Virgo cluster (e.g. \citealt{Cortese11}; \citealt{Boselli14}). However, no deficiency of HI or H$_2$ has been observed in galaxies in environments other than clusters. \citet{Tecce10}, using galaxy formation models, show that ram pressure stripping of the cold gas is relevant only in halos with $M_{\rm halo}>3\times 10^{14}\,h^{-1}\,M_{\odot}$. Since most of the galaxies in the ATLAS$^{\rm 3D}$ and HIPASS surveys are not cluster galaxies, we expect the effect of the ram pressure stripping of the cold gas to be insignificant in our analysis. The study of the neutral gas content of ETGs offers independent constraints on the modelling of the ram pressure stripping of the hot gas. \citet{Font08} used the fraction of passive to active galaxies as the main constraint on the satellite's hot gas treatment. Here we propose that the exact levels of activity or cold gas content of galaxies that are classified as passive offer new, independent constraints. The Lacey14+RP model agrees best with the observations of HI and H$_2$ in the different galaxy populations; this is the first time such a successful model has been presented. \citet{Serra14} use a sample of hydrodynamical simulations of galaxies and find that the simulations have difficulty reproducing the HI masses of ETGs. This may partially be due to the small sample of simulated ETGs analysed by Serra et al. ($50$ in total). Here, by taking the full simulated galaxy population, we can make a statistically robust comparison with the observed ETG population. \subsection{Expectations for the evolution of the HI and H$_2$ mass functions} \begin{figure} \begin{center} \includegraphics[trim = 0.9mm 0.3mm 1mm 0.45mm,clip,width=0.46\textwidth]{Figs/HIMFz0_ET_evolution_PdRGALFORM_Lagos12.eps} \includegraphics[trim = 0.9mm 0.3mm 1mm 0.45mm,clip,width=0.46\textwidth]{Figs/H2MFz0_ET_evolution_PdRGALFORM_Lagos12.eps} \caption{{\it Top panel:} The HI mass function of ETGs ($B/T>0.5$) at $z=0$ and $z=1$ for the Lagos12+RP and Lacey14+RP models. Observations are as in the bottom panel of Fig.~\ref{HIRPcomp}. {\it Bottom panel:} As in the top panel, but for H$_2$. 
Observations are as in the bottom panel of Fig.~\ref{H2RPcomp}.} \label{HIH2Evolution} \end{center} \end{figure} In the near future, the Australian Square Kilometre Array Pathfinder (ASKAP) and the South African Karoo Array Telescope (MeerKAT) will be able to trace the evolution of the HI gas content of ETGs towards redshifts higher than $z=0.1$, while current millimeter telescopes, such as the Plateau de Bure Interferometer and the Atacama Large Millimeter Array (ALMA), can already trace H$_2$ in ETGs. To provide insights into the expected redshift evolution of the HI and H$_2$ mass functions of ETGs, we show in Fig.~\ref{HIH2Evolution} the predictions for the mass functions at $z=0$ and $z=1$ for the Lagos12+RP and Lacey14+RP models, which give the lowest and highest number densities of ETGs (see Figs.~\ref{HIRPcomp} and \ref{H2RPcomp}). The top panel of Fig.~\ref{HIH2Evolution} shows that both models predict an ETG HI mass function that evolves only weakly with redshift. In contrast, a large increase in the number density of ETGs with large H$_2$ masses, $M_{\rm H_2}>10^{10}\,M_{\odot}$, from $z=0$ to $z=1$ is predicted by both models (bottom panel of Fig.~\ref{HIH2Evolution}). This is driven by the predicted increase of the H$_2$/HI mass ratio as well as an increase in the overall gas content of ETGs with increasing redshift. \citet{Lagos11} and \citet{Lagos14} present detailed studies which unveil the physics behind this evolution. In short, this is due to a combination of higher gas contents and more compact galaxies at high redshift, which increase the hydrostatic pressure in galaxies, and therefore the H$_2$/HI mass ratio. \section{The morphological transformation and quenching of galaxies}\label{Robustness} \begin{figure} \begin{center} \includegraphics[trim = 0.1mm 0.3mm 1mm 0.45mm,clip,width=0.49\textwidth]{Figs/HIMFz0_SelectionEffect_PdRGALFORM_Lagos12.eps} \caption{The HI mass function for galaxies with $L_{\rm K}>6\times 10^9\,L_{\odot}$ in the Lacey14+RP model. Galaxy populations with different bulge-to-total mass ratios are shown: all galaxies (solid thick line), $B/T>0$ (solid thin line), $B/T>0.1$ (dotted line), $B/T>0.3$ (triple dot-dashed line), $B/T>0.5$ (dashed line) and $B/T>0.7$ (long dashed line). We also show the HI mass function for ETGs selected in an alternative way: in addition to those with $B/T>0.5$, galaxies with $B/T<0.5$ that are gas poor, $M_{\rm gas}/M_{\rm stellar}=f_{\rm gas}<0.1$, can also appear as early-types. The latter values are consistent with those presented in \citet{Khochfar11}.} \label{MorphoSel} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[trim = 0.1mm 0.3mm 1mm 0.45mm,clip,width=0.49\textwidth]{Figs/HIMs_ratio_SelecEffect_PdRGALFORM_Lagos12.eps} \includegraphics[trim = 0.1mm 0.3mm 1mm 0.45mm,clip,width=0.49\textwidth]{Figs/H2Ms_ratio_SelecEffect_PdRGALFORM_Lagos12.eps} \caption{Same as Fig.~\ref{MorphoSel} but for the HI gas fraction (top panel) and H$_2$ gas fraction (bottom panel) distribution functions.} \label{MorphoSel3} \end{center} \end{figure} A key aspect in the analysis performed here is the morphological selection of ETGs. Observationally, morphologies are derived from visual inspection of optical images of galaxies (see for example \citealt{Cappellari11} for the ATLAS$^{\rm 3D}$ survey sample selection). This means that among the galaxies selected as early-type there could be contamination by edge-on spirals that are gas poor and would therefore appear red with no spiral arms. 
Following this argument, \citet{Khochfar11} suggested that, in order to select model ETGs comparable to the observed ones, one needs to include both truly bulge-dominated objects, selected by their bulge-to-total mass ratio, and late-type galaxies that are gas poor. Khochfar et al. suggest the following criteria to select early-type-looking galaxies: $B/T>0.5$, or $B/T<0.5$ with gas fractions $<0.1$. Note that, of the latter subpopulation, only those that are oriented close to edge-on will be confused with early-types, and therefore only a small fraction will contribute to the ETG population. We assume random inclinations for late-type galaxies, select those that have inclination angles $>45^{\circ}$ and add them to the sample of galaxies with $B/T>0.5$. The HI mass function and gas fractions of ETGs obtained using this selection criterion are shown in Fig.~\ref{MorphoSel} and Fig.~\ref{MorphoSel3}, respectively, for the Lacey14+RP model. We focus on the latter model as it predicts HI and H$_2$ masses of ETGs in the best agreement with the observations. The effect of including the contamination from gas poor late-type galaxies is very small and therefore does not change the results presented earlier. Another interesting question is how much the $B/T$ threshold used to select ETGs in the model affects our results. In order to answer this question we show in Fig.~\ref{MorphoSel} and Fig.~\ref{MorphoSel3} different $B/T$ thresholds to select ETGs in the Lacey14+RP model. $B/T$ thresholds lower than $0.5$ have the expected impact of increasing the number density of galaxies compared to the canonical value of $0.5$, particularly at $M_{\rm HI}>10^7\,M_{\odot}$, $M_{\rm HI}/L_{\rm K}>5\times 10^{-3}M_{\odot}/L_{\odot}$ and throughout the whole H$_2$ gas fraction range. This increase is a factor of $2$ for $B/T=0.3$ and $7$ for $B/T=0.1$. In addition, about $40$\% of the galaxies with $L_{\rm K}>6\times 10^9\,L_{\odot}$ are pure disks (no bulges; see the difference between the thick and thin solid lines in Fig.~\ref{MorphoSel} and Fig.~\ref{MorphoSel3}), which are mainly located around the peaks of the HI and H$_2$ gas fraction distributions of all galaxies with $L_{\rm K}>6\times 10^9\,L_{\odot}$ (see Fig.~\ref{MorphoSel3}). This shows that the development of even a small bulge is connected with significant gas depletion in galaxies. Spheroids in the model are formed when galaxies undergo a starburst, either driven by a galaxy merger or a global disk instability. These galaxies remain bulge dominated because large disks fail to regrow after the formation of the spheroid. Adopting a higher $B/T$ threshold has the expected effect of lowering the number density of ETGs. The number density of galaxies with low HI content is only weakly dependent on the $B/T$ threshold. This is due to a connection between low gas fractions and large $B/T$; i.e. if a galaxy has a large bulge fraction it will also be gas poor. This has been observed in the ATLAS$^{\rm 3D}$ \citep{Cappellari13}, where ETGs with the largest velocity dispersions (a tracer of bulge fraction) have the lowest gas fractions. This implies that the modelling of the morphological transformation in {\tt GALFORM} is able to capture the processes that lead to the relation between bulge fraction and gas depletion. $B/T$ thresholds in the range $0.3-0.6$ used to select ETGs produce similar results and therefore do not affect the analysis presented here. 
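As an aside, the alternative selection of \citet{Khochfar11} described above can be written down schematically. The sketch below is a minimal illustration in Python, with hypothetical catalogue arrays ({\tt bt}, {\tt f\_gas}) and the usual convention that random orientations correspond to $\cos i$ uniformly distributed; it is not the actual {\tt GALFORM} implementation.
\begin{verbatim}
# Minimal sketch of the Khochfar-style selection: ETGs are
# galaxies with B/T > 0.5, plus gas-poor discs (f_gas < 0.1)
# whose randomly assigned inclination exceeds 45 degrees.
# Random orientations: cos(i) drawn uniformly in [0, 1].
import numpy as np

def etg_selection(bt, f_gas, bt_cut=0.5, fgas_cut=0.1,
                  i_min=45.0, seed=1):
    rng = np.random.default_rng(seed)
    incl = np.degrees(np.arccos(rng.uniform(0.0, 1.0, len(bt))))
    contaminants = (bt <= bt_cut) & (f_gas < fgas_cut) & (incl > i_min)
    return (bt > bt_cut) | contaminants
\end{verbatim}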
\begin{figure} \begin{center} \includegraphics[trim = 0.1mm 0.3mm 1mm 0.45mm,clip,width=0.45\textwidth]{Figs/HIMFz0_SelectionEffect_SatCen_PdRGALFORM_Lagos12.eps} \caption{{\it Top panel:} The HI mass function for galaxies with $B/T>0$ and $L_{\rm K}>6\times 10^9\,L_{\odot}$ separated into central (solid line) and satellite galaxies (dashed line) in the Lacey14+RP model. The subpopulation of central galaxies with AGN heating their hot halo is also shown as a dot-dashed line. Observations correspond to the analysis of HIPASS galaxies with $L_{\rm K}>6\times 10^9\,L_{\odot}$. {\it Bottom panel:} Same as in the top panel but for galaxies with $B/T>0.5$. Observations here correspond to the ATLAS$^{\rm 3D}$ and HIPASS, described in $\S$~\ref{obssec}.} \label{MorphoSel2} \end{center} \end{figure} The key question is: why do galaxies fail to rebuild large disks, remaining spheroid dominated with low gas fractions? First of all, ETGs have relatively old stellar populations, as we will see in $\S$~\ref{CompetingSources}; we therefore need to understand why, after the formation of bulges, galaxy disks fail to reaccrete significant quantities of gas to keep forming stars at the level of the main sequence of galaxies in the SFR-M$_{\rm stellar}$ plane. This question is then related to the physical mechanisms quenching star formation in ETGs. In order to understand why large disks fail to regrow, we first look into the nature of galaxies with $B/T>0$. The top panel of Fig.~\ref{MorphoSel2} shows the HI mass function of galaxies with $B/T>0$ and $L_{\rm K}>6\times 10^9\,L_{\odot}$ separated into centrals and satellites. The satellite galaxy population makes up most of the tail of low HI masses, $M_{\rm HI}<5\times 10^8\,M_{\odot}$. These galaxies have little cold gas replenishment after they become satellites, as they continuously lose part of their hot gas reservoir to ram pressure stripping. This has the consequence of lowering the cooling rates. However, satellite galaxies do preserve some neutral gas reservoir; i.e. they hardly ever completely deplete their gas content. The mechanism for this is connected to the dependence of the star formation timescale on the gas surface density. Low gas surface densities produce low H$_2$/HI ratios and low SFRs. As the gas is depleted, the star formation timescale becomes longer and longer, allowing satellite galaxies to retain their gas reservoir. This mechanism drives the population of satellite galaxies with low gas fractions. This is consistent with observations, where there is a non-negligible fraction of ETGs with H$_2$ and/or HI contents detected in high mass groups or clusters (e.g. \citealt{Young11}). In the case of central galaxies, there is a population with HI masses $M_{\rm HI}>10^9\,M_{\odot}$ and HI gas fractions of $\approx 0.15\,M_{\odot}/L_{\odot}$ that have cooling rates large enough to replenish their cold gas contents. The central galaxies in the tail of low HI masses, i.e. $M_{\rm HI}<10^9\,M_{\odot}$, fail to replenish their gas reservoirs and rebuild a new disk due to the effect of AGN heating their hot halo (see the dot-dashed line in the top panel of Fig.~\ref{MorphoSel2}). In {\tt GALFORM}, AGN feedback acts in halos where the cooling time is larger than the free-fall time at the cooling radius (`hot accretion' mode; \citealt{Fanidakis10b}). In these halos, the AGN power is compared with the cooling luminosity and, if it is greater, the cooling flows are switched off (see \citealt{Bower06}). 
This means that in central galaxies under the action of AGN feedback, there will be no further gas accretion onto the galaxy, driving low HI and H$_2$ gas contents. Note that, in the model, the close connection between bulge fraction and gas depletion naturally arises in galaxies where AGN feedback operates. This is because the black hole grows together with the bulge (\citealt{Fanidakis10b}). The consequence of this is that large black hole masses, hosted by large bulges, are capable of large mechanical luminosities which can more easily affect their hot halo. These large bulges are also connected to large bulge-to-total stellar mass ratios due to the impeded disk regrowth. \begin{figure} \begin{center} \includegraphics[trim = 0.1mm 0.3mm 1mm 0.45mm,clip,width=0.49\textwidth]{Figs/HItoMstarScaling_Mbulge_z0_GALFORM.eps} \caption{{HI gas fraction as a function of the bulge-to-total stellar mass ratio for galaxies in the Lacey14+RP model with $L_{\rm K}>6\times 10^9L_{\odot}$. The solid lines and errorbars represent the median and 10 and 90 percentiles of the distribution. In colours we show the mean bulge mass in $2$-dimensional bins of HI gas fraction and $B/T$, in an arbitrary region encompassing the errorbars. The mean bulge masses are as labelled in the bar at the top of the figure. The wiggly features in the coloured region are an artifact of the binning.}} \label{BTMbulge} \end{center} \end{figure} {We show the HI gas fraction as a function of $B/T$ in Fig.~\ref{BTMbulge}, with the background colour scheme showing the mean bulge mass in $2$-dimensional bins of HI gas fraction and $B/T$. There is a clear anti-correlation between the HI gas fraction and $B/T$. In addition, at a fixed $B/T$ there is a trend of increasing HI gas fraction with decreasing bulge mass. The latter is related to the stronger AGN feedback in higher mass bulges that leads to gas depletion. Recent high redshift observations show evidence for the strong connection between quenching and bulge mass \citep{Lang14}. Lang et al. point to the bulge mass as a fundamental property related to star formation quenching, rather than the bulge fraction; in our model this is also understood as a consequence of AGN feedback.} \begin{figure} \begin{center} \includegraphics[trim = 0.1mm 0.3mm 1mm 0.45mm,clip,width=0.49\textwidth]{Figs/HhaloMFz0_SatCen_PdRGALFORM_Lagos12.eps} \caption{Host halo mass distribution for ETGs ($B/T>0.5$) in the Lacey14+RP model that are satellites (dashed line), centrals under the action of AGN feedback (dot-dashed line) and centrals without AGN feedback (solid line).} \label{Mhalos} \end{center} \end{figure} Focusing on the population of galaxies with $B/T>0.5$ (bottom panel of Fig.~\ref{MorphoSel2}), one finds similar trends. Most of the galaxies with $M_{\rm HI}<10^8\,M_{\odot}$ correspond to satellite galaxies, while the central galaxy population is responsible for the massive end of the HI mass function. The tail of central galaxies with $M_{\rm HI}<5\times 10^8\,M_{\odot}$ is under the influence of AGN feedback, which explains their low HI contents. There is a subpopulation of ETGs with large HI masses and gas fractions, $M_{\rm HI}>10^9\,M_{\odot}$ and $M_{\rm HI}/L_{\rm K}\gtrsim 0.1\,M_{\odot}/L_{\odot}$, respectively. The latter population is also the one living in low mass haloes. Fig.~\ref{Mhalos} shows the distribution of masses of the halos hosting ETGs in the Lacey14+RP model. 
ETGs with the highest {HI gas contents, which correspond to centrals without AGN feedback (see Fig.~\ref{MorphoSel2}),} are hosted by low mass halos, with a median host halo mass of $3\times 10^{11}\,M_{\odot}$; centrals with AGN feedback on are hosted by higher mass halos, with median masses of $2\times 10^{12}\,M_{\odot}$, while satellites are distributed throughout a wider range of halo masses, with a median mass of $3\times 10^{13}\,M_{\odot}$. Similar trends are seen in the observations. \citet{Young11} show that the ETGs with the highest H$_2$ masses also reside in the lowest density environments, {which in our model correspond to ETGs that are central galaxies and are not undergoing AGN feedback}. Similarly, \citet{Serra12} reported that the HI mass as well as the ratio $M_{\rm HI}/L_{\rm K}$ decrease with increasing density of the environment. We find that these trends are simply a reflection of the halo masses in which either AGN feedback switches on or environmental quenching acts more effectively (where ram pressure stripping of the hot gas is effective enough to remove significant amounts of hot gas). Our conclusion is that most ETGs with low gas fractions correspond to satellite galaxies, which, due to environmental quenching (partial ram pressure stripping of the hot gas), are unable to regrow a significant disk. The rest of the ETGs with low gas fractions are central galaxies under the action of AGN feedback. The break at the high end of the HI gas fraction distribution, traced by HIPASS, is due to a population of ETGs with large HI masses that are still forming stars at the same level as spiral galaxies. \section{The competing sources of the neutral gas content of early-type galaxies}\label{CompetingSources} One of the aims of this paper is to answer the question: what is the source of the HI and H$_2$ gas contents in ETGs? We follow all the gas sources throughout the star formation history of galaxies identified as ETGs today: radiative cooling from hot halos (which we refer to as `cooling'), mass loss from old stars (which we refer to as `recycling') and galaxy mergers (see Appendix~\ref{GasSourcesContribution} for details of how we do this). Note that in our model the recycled mass from old stars is not incorporated into the hot halo and can therefore be distinguished from the gas coming from cooling. We study the sources of the gas in ETGs in two of the models shown in $\S$~\ref{Sec:ModelComparison} to find general trends present in the different models as well as variations between them. We focus here on the Lagos12+RP and Lacey14+RP models as, after including partial ram pressure stripping of the hot gas, they give the lowest and highest number densities of ETGs, respectively. 
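Operationally, labelling each ETG by its dominant gas supplier reduces to comparing the cumulative masses contributed by each channel along its star formation history; a minimal sketch of this bookkeeping step is given below, with hypothetical input names (the actual tracking is described in Appendix~\ref{GasSourcesContribution}).
\begin{verbatim}
# Minimal sketch: classify the dominant neutral gas source of a
# galaxy from the cumulative masses contributed by radiative
# cooling, stellar mass loss (recycling) and galaxy mergers.
def dominant_source(m_cooling, m_recycling, m_mergers):
    sources = {'cooling': m_cooling,
               'recycling': m_recycling,
               'mergers': m_mergers}
    return max(sources, key=sources.get)
\end{verbatim}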
\begin{table*} \begin{center} \caption{The percentage of ETGs with $L_{\rm K}>6\times 10^9\,L_{\odot}$ in the Lacey14+RP and Lagos12+RP models under different selection criteria, which we group into four categories: neutral gas content, neutral gas sources, mergers and disk instabilities.}\label{Contributions} \begin{tabular}{l c c} \\[3pt] \hline Selection & ETGs ($L_{\rm K}>6\times 10^9\,L_{\odot}$) & ETGs ($L_{\rm K}>6\times 10^9\,L_{\odot}$) \\ & Lacey14+RP & Lagos12+RP\\ \hline Neutral gas content \\ \hline ETGs with neutral gas content $M_{\rm HI}+M_{\rm H_2}>10^7\,M_{\odot}$ & $58$\% & $65$\%\\ \hline Neutral gas sources of the sample of ETGs with $M_{\rm HI}+M_{\rm H_2}>10^7\,M_{\odot}$\\ \hline ETGs with current gas content dominated by mergers & $7.5$\% & $17$\%\\ ETGs with current gas content dominated by recycling & $1.5$\% & $0.8$\%\\ ETGs with current gas content dominated by cooling & $91$\% & $82$\%\\ \hline Mergers of ETGs with $M_{\rm HI}+M_{\rm H_2}>10^7\,M_{\odot}$\\ \hline ETGs that had a merger in the last $1$~Gyr & $11$\% & $25$\%\\ ETGs that had a merger-driven starburst in the last $1$~Gyr & $1$\% & $1$\%\\ Mergers in ETGs that took place in $M_{\rm halo}<10^{14}\,h^{-1}\,M_{\odot}$ & $95$\% & $94$\% \\ Mergers in ETGs that increased the neutral gas content by a factor of $>2$ & $69$\% & $66$\%\\ \hline Disk instabilities in ETGs with $M_{\rm HI}+M_{\rm H_2}>10^7\,M_{\odot}$\\ \hline ETGs that had a disk instability in the last $1$~Gyr & $4$\% & $2$\%\\ \hline \end{tabular} \end{center} \end{table*} We first estimate the fraction of ETGs that have neutral gas masses ($M_{\rm HI}+M_{\rm H_2}$) $>10^7\,M_{\odot}$. We find that $58$\% of ETGs with $K$-band luminosities $L_{\rm K}>6\times 10^9\,L_{\odot}$ in the Lacey14+RP model and $65$\% in the Lagos12+RP model have $M_{\rm HI}+M_{\rm H_2}>10^7\,M_{\odot}$. We analyse these sub-samples of ETGs and estimate the fraction of the ETGs with neutral gas contents supplied mainly by mergers, recycling or cooling (summarised in Table~\ref{Contributions}). Most ETGs have neutral gas contents supplied predominantly by cooling. A smaller percentage have neutral gas contents supplied mainly by mergers ($\approx 8$\% for the Lacey14+RP and $17$\% for the Lagos12+RP model) or by recycling ($\approx 1.5$\% for the Lacey14+RP and $0.8$\% for the Lagos12+RP model). The latter percentages are not sensitive to the $K$-band luminosity or stellar mass of ETGs. However, they are sensitive to the current neutral gas content and halo mass. In order to gain insight into the properties of ETGs that have different gas suppliers, we show in Table~\ref{Contributions} the fraction of ETGs that had a minor merger in the last $1$~Gyr, the fraction of these that increased the neutral gas content by at least a factor of $2$, and the fraction of ETGs that had a starburst driven by either a galaxy merger or a disk instability in the last $1$~Gyr. The main conclusions we draw from Table~\ref{Contributions} are: \begin{itemize} \item In the Lacey14+RP model, a tenth of the ETG population with $L_{\rm K}>6\times 10^9\,L_{\odot}$ and $M_{\rm HI}+M_{\rm H_2}>10^7\,M_{\odot}$ experienced a minor merger in the last $1$~Gyr. Only $10$\% of these resulted in a starburst, although none of these starbursts made a significant contribution to the stellar mass build-up (mass-weighted stellar ages are usually $>7$~Gyr). 
The large percentage of minor mergers that did not drive starbursts in the last $1$~Gyr is due to the very low mass ratios between the accreted galaxy and the ETG, which are on average $\approx 0.05$. Such small mass ratios are not considered to drive starbursts in {\tt GALFORM} unless the mergers are very gas rich (see $\S$~\ref{BuildUpBulges}). For the Lagos12+RP model the fraction of galaxies that had a minor merger in the last $1$~Gyr is higher, $25$\%, with a smaller fraction ($\approx 5$\%) of these driving starbursts. The mass ratios of these minor merger events are also very small, which explains the small percentage of merger-driven starbursts. \item $\approx 68$\% of minor mergers in ETGs in both the Lacey14+RP and Lagos12+RP models increased the neutral gas content significantly (at least by a factor of $2$). The frequency of minor mergers times the percentage of those which increased the gas content significantly explains the percentages of ETGs with neutral gas contents supplied by minor merger accretion in the models. \item Of these minor merger accretion episodes, $\approx 95$\% in both the Lacey14+RP and Lagos12+RP models took place in halos with masses $<10^{14}\,M_{\odot}$, which implies that this source of neutral gas accretion is negligible in cluster environments. This agrees with the observations of \citet{Davis11}. \item There is only a small percentage, $\approx 3$\%, of ETGs with $L_{\rm K}>6\times 10^9\,L_{\odot}$ and $M_{\rm HI}+M_{\rm H_2}>10^7\,M_{\odot}$, that had a starburst driven by disk instabilities in the last $1$~Gyr. These galaxies have very small disks (usually the bulge half-mass radius is larger than the disk half-mass radius) and $B/T\gtrsim 0.9$. In the model we use the properties of galaxy disks to determine whether they are unstable under small perturbations (see Eq.~\ref{DisKins}). In reality, one would expect such large bulges to dominate over the gravity of the disk, stabilizing it. \citet{Martig13} show this to happen in hydrodynamical simulations of individual galaxies: the self-gravity of the disk is reduced when it is embedded in a bulge, preventing gas fragmentation. This results in an overall lower efficiency of star formation in ETGs. Our model does not capture this physics, showing that it needs improvement to account for these cases. \end{itemize} \begin{figure} \begin{center} \includegraphics[trim = 0.9mm 0.3mm 1mm 0.45mm,clip,width=0.43\textwidth]{Figs/FractionsOFGasSources_RecycleOnly_MhaloOnly_Lagos12.RP_z1.eps} \caption{Fraction of ETGs ($B/T>0.5$) with $L_{\rm K}>6\times 10^9\,L_{\odot}$ and $M_{\rm HI}+M_{\rm H_2}>10^7\,M_{\odot}$ at $z=0$ in the Lacey14+RP and Lagos12+RP models that have most of their cold gas supplied by mass loss from intermediate- and low-mass stars, as a function of the host halo mass.} \label{FracGasContributions} \end{center} \end{figure} The largest differences found between the two models are in the fraction of ETGs with $M_{\rm HI}+M_{\rm H_2}>10^7\,M_{\odot}$ and the percentage of those with current neutral gas contents dominated by merger accretion. These differences are due to a combination of the different threshold values used to examine disk instabilities and the different dynamical friction prescriptions used by the models ($\S$~\ref{BuildUpBulges}). In the Lacey14+RP model more disk instabilities take place due to the higher $\epsilon_{\rm disk}$, which drives a more rapid gas exhaustion in the galaxies that are prone to disk instabilities. 
Many of the galaxies that go through disk instabilities in the Lacey14+RP model do not do so in the Lagos12+RP model due to the lower value of $\epsilon_{\rm disk}$ in the latter. In the case of the dynamical friction, the prescription used by the Lagos12+RP model produces more minor mergers at lower redshifts than the prescription used in the Lacey14+RP model. \begin{figure} \begin{center} \includegraphics[trim = 0.9mm 0mm 1mm 0.45mm,clip,width=0.43\textwidth]{Figs/LBT_to_last_latetype_HaloMass_Lacey13.RP.eps} \includegraphics[trim = 0.9mm 0mm 1mm 0.45mm,clip,width=0.43\textwidth]{Figs/LBT_to_last_latetype_StellarMass_Lacey13.RP.eps} \caption{{\it Top Panel:} Look-back time to the last time that ETGs ($B/T>0.5$), selected with $L_{\rm K}>6\times 10^9\,L_{\odot}$ and $M_{\rm HI}+M_{\rm H_2}>10^7\,M_{\odot}$ at $z=0$, had $B/T<0.5$ (a late-type morphology), expressed in $1+z$, as a function of the current host halo mass, for the Lacey14+RP and Lagos12+RP models. Lines with errorbars show the median and 10 and 90 percentiles of the distributions, respectively. For clarity, errorbars are only shown for the Lacey14+RP model, but the ones in the Lagos12+RP model are of similar magnitude. {\it Bottom panel:} {As in the top panel but here the look-back time to the last time ETGs had $B/T<0.5$ is shown as a function of stellar mass.}} \label{Misalignment2} \end{center} \end{figure} Another interesting dichotomy between cluster environments and lower mass halos is that the fraction of ETGs that have neutral gas contents supplied mainly by mass loss from intermediate- and low-mass stars increases with increasing halo mass. This is shown in Fig.~\ref{FracGasContributions} for the models Lagos12+RP and Lacey14+RP. In cluster environments, we expect $\approx 7$\% of ETGs to have neutral gas contents mainly supplied by recycling in the Lacey14+RP model and $3$\% in the Lagos12+RP model, while that percentage drops dramatically at halo masses $M_{\rm halo}<10^{14}\,M_{\odot}$. Note that both models show a minimum contribution from recycling at $M_{\rm halo}\approx 2\times 10^{12}\,M_{\odot}$, which is connected to the halo mass at which feedback, either stellar feedback at lower halo masses or AGN feedback at higher halo masses, is least effective. At this halo mass the accretion of newly cooled gas is most efficient, which minimises the contribution from mass loss from old stars. The higher frequency of ETGs with neutral gas contents dominated by gas of internal origin in higher mass halos takes place together with the aging of the bulge. In Fig.~\ref{Misalignment2}, we show the look-back time to the last time ETGs had a bulge-to-total stellar mass ratio $<0.5$ (a late-type morphology), {as a function of the current halo and stellar mass. There is a positive correlation between the current halo and stellar mass and the last time these ETGs were late-type: ETGs residing in high mass halos have had an early-type morphology for a longer time than those residing in lower mass halos, and similarly, the most massive ETGs tend to have had an early-type morphology for longer than lower mass ETGs}. Note, however, that the dispersion around these relations is very high, showing that there is no single path for the formation of spheroids and that the star formation history of ETGs can be quite complex (see also \citealt{Naab13}). 
The Lagos12+RP model predicts systematically lower look-back times at halo masses $M_{\rm halo}<10^{13}\, M_{\odot}$ than the Lacey14+RP model; this is due to the higher disk instability threshold in the latter model, which drove ETGs to undergo disk instabilities on average earlier than in the Lagos12+RP model. \section{Conclusions}\label{conclusion} We have studied the current neutral gas content of ETGs and its origin in the context of hierarchical galaxy formation. We first used the HIPASS and ATLAS$^{\rm 3D}$ surveys to quantify observationally the HI and H$_2$ gas fraction distribution functions for the overall galaxy population and for ETGs. We then explored the predictions for the neutral gas content of galaxies in three flavours of the {\tt GALFORM} semi-analytic model of galaxy formation, the Lagos12, Gonzalez-Perez14 and Lacey14 models, and performed a thorough comparison with observations. For quiescent star formation, the three models use the pressure-based SF law of \citet{Blitz06}, in which the ratio between the surface density of H$_2$ and HI is derived from the radial profile of the hydrostatic pressure of the disk. The SFR is then calculated from the surface density of H$_2$. The advantage of this SF law is that the atomic and molecular gas phases of the ISM of galaxies are explicitly distinguished, which allows us to compare the predictions for the HI and H$_2$ contents of ETGs directly with observations. Other physical ingredients differ between the three models, such as the adopted IMF and the strength of both the SNe and the AGN feedback, as well as the cosmological parameters. We also tested the importance of the modelling of the processing of the hot gas of galaxies once they become satellites. The original {\tt GALFORM} flavours include a strangulation treatment of the hot gas: once galaxies become satellites they immediately lose all of their hot gas reservoir. We ran the three {\tt GALFORM} flavours with a different hot gas treatment: partial ram pressure stripping of the hot gas of satellite galaxies, which depends upon the orbit followed by the satellite galaxy, with cooling continuing onto the satellite galaxies. Our conclusions are: (i) The three flavours of {\tt GALFORM} predict overall HI and H$_2$ mass functions in good agreement with the observations, regardless of the treatment of the hot gas of galaxies once they become satellites. However, when focusing exclusively on the ETG population, the inclusion of partial ram pressure stripping of the hot gas (as opposed to the strangulation scenario) results in the models predicting ETGs with higher contents of HI and H$_2$, improving the agreement with the observations. This shows that the HI and H$_2$ gas contents of ETGs are a stringent test of the modelling of hot gas stripping in simulations of galaxy formation. Moreover, the gas fraction distribution is a statistical measurement which is particularly good for placing constraints on models. (ii) The presence of a bulge in galaxies is strongly correlated with depleted HI and H$_2$ gas contents in the three {\tt GALFORM} models tested. This close correspondence between the bulge fraction and the depleted neutral gas contents in ETGs has been observed, and here we provide a physical framework to understand it. We show that this is due to AGN feedback in central galaxies, and to environmental quenching due to partial ram pressure stripping of the hot gas in satellite galaxies. 
In the former, the black hole mass is correlated with the bulge mass, which implies that feedback can be stronger in larger bulges (as the Eddington luminosity increases with black hole mass). Galaxies experiencing AGN feedback do not accrete significant amounts of newly cooled gas, which impedes the regeneration of a prominent disk. In the case of satellites, the lower accretion rates due to depletion of the hot gas prevent the regrowth of a substantial disk, or drive the exhaustion of the gas in the disk, leaving a (close to) gas-free disk. There is, however, a fraction of ETGs that are experiencing neither AGN feedback nor environmental quenching, and that have normal HI and H$_2$ contents, comparable to those obtained for late-type galaxies of the same mass. (iii) We find that $\approx 90$\% of ETGs accreted most of their neutral gas from the hot halo through radiative cooling. A lower fraction have current HI and H$_2$ contents supplied by accretion from minor galaxy mergers (ranging from $8$\% to $17$\%, depending on the model). An even smaller fraction ($0.5-2$\%) have their neutral gas content supplied by mass loss from intermediate- and low-mass stars. Interestingly, most of those galaxies are hosted by high mass halos ($M_{\rm halo}>10^{14}\,M_{\odot}$; clusters of galaxies), while most of those dominated by minor merger accretion are in non-cluster environments ($M_{\rm halo}<10^{14}\,M_{\odot}$). We find that the source of the HI and H$_2$ gas in ETGs has strong consequences for the expected alignment between the gas disk and the stellar component, which we discuss in depth in paper II (Lagos et al. in prep.). (iv) We find a general trend of increasing look-back time to the last time ETGs were late-types ($B/T<0.5$) with increasing {host halo mass and stellar mass}. However, these trends are characterised by a very large dispersion around the median, suggesting that the paths for the formation of ETGs of a given stellar mass are variable and not self-similar. The latter is due to the stochastic nature of galaxy mergers and disk instabilities. Our analysis shows the power of studying the gas contents of galaxies, and how sub-samples of them are affected by different physical processes. In particular, our work points to the need for improved modelling of ram pressure stripping of the hot gas, which has an important effect in a wide range of environments. Although we show that the model we include, originally developed by \citet{Font08}, works well with the observational constraints we currently have, it may be too simplistic. For example, this model does not explicitly take into account the three-dimensional positions and velocities of galaxies in the simulation, which means that we do not consider the specific position of each satellite in the halo when calculating the ram pressure stripping throughout its transit. This will be possible with high resolution simulations, given that for such a detailed analysis it is necessary to resolve all halos with at least a few hundred particles. In the future, we suggest that the study of the HI and H$_2$ gas contents of galaxies classified as ``passive'' will provide stringent constraints on the details of the ram pressure stripping modelling. \section*{Acknowledgements} We thank Martin Meyer, Diederik Kruijssen, Andreas Schruba and Paolo Serra for very motivating discussions. 
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 229517 and the Science and Technology Facilities Council grant number ST/F001166/1. This work used the DiRAC Data Centric system at Durham University, operated by the Institute for Computational Cosmology on behalf of the STFC DiRAC HPC Facility ({\tt www.dirac.ac.uk}). This equipment was funded by BIS National E-infrastructure capital grant ST/K00042X/1, STFC capital grant ST/H008519/1, and STFC DiRAC Operations grant ST/K003267/1 and Durham University. DiRAC is part of the National E-Infrastructure. VGP acknowledges support from a European Research Council Starting Grant (DEGAS-259586). \bibliographystyle{mn2e_trunc8}
\section{Introduction} Several experiments, from solar and atmospheric neutrinos to laboratory oscillation experiments \cite{cald98}, indicate that neutrinos oscillate and leptonic flavors are not conserved. This is in contrast with the minimal standard model (SM). It is quite possible that the energy scale of breaking of the three lepton numbers is comparable to or even smaller than the Fermi scale. Furthermore, if they are spontaneously broken by the expectation values of some scalar fields, then a few massless bosons should exist, the Nambu-Goldstone (NG) bosons, one per broken global symmetry. It was recently pointed out \cite{bent98} that these NG bosons (called Majorons or familons when associated with lepton numbers) couple to the time rate of creation of the respective lepton numbers carried by the matter particles, and therefore coherent NG fields are produced whenever lepton number violating processes occur simultaneously. That is the case if neutrinos change flavor on their way out from stars, as seems to happen with solar neutrinos. Once the NG fields are generated (the triggering process may be a normal Mikheyev-Smirnov-Wolfenstein (MSW) resonant conversion \cite{mikh86}), they in turn change the relative potentials of the different neutrino species and so acquire a life of their own. The numbers show that if the scale of symmetry breaking is below 1 TeV, then the Majoron fields are important enough to play a role in supernova neutrino oscillations. They are, however, too small in the case of the Sun unless the scale of symmetry breaking lies below 1 keV. As a result, supernova neutrinos may exhibit oscillation patterns in contradiction with the observations of solar, atmospheric and terrestrial neutrinos. We ought to be prepared, in the event of a nearby supernova explosion, for the possible kinds of effects caused by NG fields. In the previous paper \cite{bent98}, the example studied was that of Majoron fields generated by the conversion $\nu _{e}\rightarrow \nu _{\tau }$, assumed to take place in a certain resonance shell, which yield neutrino potentials that become competitive with the standard electroweak potentials at larger radii and therefore affect the other flavor transitions, characterized by smaller $\Delta m^{2}$. The effects can be as dramatic as the resonant oscillation $\bar{\nu}_{e}\leftrightarrow \bar{\nu}_{\mu }$ in a context of $m_{\nu_e}<m_{\nu_{\mu}}$ hierarchy, where the resonance is otherwise possible for $\nu_{e} \leftrightarrow \nu_{\mu }$ but not for the anti-neutrinos, if they interact only via the standard $W$ and $Z$ bosons. In the present paper, I want to discuss what happens if the Majoron potentials are already significant in the very region where the oscillations they are generated from occur. Then, a back-reaction effect takes place, yielding an interesting flavor dynamics. Suppose that in the Sun the electron-neutrino oscillates into the muon-neutrino with the parameters of the non-adiabatic, small mixing angle solution \cite{haxt86,rose86}. Then, the $\nu _{e}\leftrightarrow \nu _{\mu }$ transitions are also non-adiabatic in a supernova and only a fraction of each neutrino species is converted into the other. Furthermore, since the level crossing probability is an increasing function of the energy, the hotter $\nu _{\mu }$s have larger survival probabilities than the cooler $\nu _{e}$s. 
It will be shown that the back reaction of the NG fields improves the adiabaticity of the neutrino transitions, thus yielding a hotter energy spectrum for the outgoing $\nu _{e}$s. That kind of effect can in principle be traced in detectors such as Super-Kamiokande and SNO, which are capable of detecting supernova electron-neutrinos \cite{burr92}. \section{Majoron and neutrino flavor dynamics} In the following it is assumed that the partial lepton number $L_{e}$ is conserved at the Lagrangian level but the global symmetry associated with it, U(1)$_{e}$, is spontaneously broken by the expectation values of one or more scalar iso-singlets $\sigma _{i}$. Then, an NG boson $\xi _{e}$ exists with zero mass. The neutrino mass matrix violates in principle the three lepton numbers, but for simplicity I will ignore the other possibly existing Majorons. This may be interpreted as meaning that the respective scales of symmetry breaking are slightly higher. It is well established that the Nambu-Goldstone bosons only interact through derivative couplings \cite{chen89,wein96} (related to the soft-pion low-energy theorems). This is clarified \cite{gelm83,bent98} by changing variables from the original fields with definite $L_{e}$ charges, namely fermions $\chi ^{a}$ and scalars $\sigma _{i}$, as follows: \begin{eqnarray} \chi ^{a} &=&\exp (-i\xi _{e}L_{e}^{a})\,\psi ^{a}\ , \label{psi} \\ && \nonumber \\ \sigma _{i} &=&\exp (-i\xi _{e}L_{e}^{i})\,\left( \left\langle \sigma _{i}\right\rangle +\rho _{i}\right) \ . \label{sigma} \end{eqnarray} The so-defined {\em physical} weak eigenstates, the fermions $\psi ^{a}=e,\,\nu _{e},\,...$\ and massive bosons $\rho _{i}$, are invariant under the group U(1)$_{e}$. In terms of these fields the symmetry is simply realized as translations of the Majoron field, $\xi _{e}\rightarrow \xi _{e}+\alpha $, and consequently the Lagrangian can only depend on the derivatives $\partial _{\mu }\xi _{e}$. The non-standard interactions relevant to this work are contained in the expression \begin{equation} {\cal L}=-{\nu _{L}^{a}}^{T}\,C\,m_{ab}\,\nu _{L}^{b}+{\rm h.c.}+\frac{1}{2}\Omega _{e}^{2}\,\partial _{\mu }\xi _{e}\,\partial ^{\mu }\xi _{e}+{\cal L}_{{\rm int}}(\partial _{\mu }\xi _{e})\ , \label{LbSM} \end{equation} where $\Omega _{e}^{2}=2\sum_{i}\left| L_{e}^{i}\,\left\langle \sigma _{i}\right\rangle \right| ^{2}$ and $\nu _{L}^{a}$ denote the standard $\nu _{L}^{e},\nu _{L}^{\mu },\nu _{L}^{\tau }$ or any extra neutrino singlets. The $\xi _{e}$ equation of motion, \begin{equation} \partial _{\mu }\partial ^{\mu }\,\xi _{e}=-\partial _{\mu }J_{e}^{\mu }/\Omega _{e}^{2}\ , \label{ddfi} \end{equation} identifies with the conservation law of the Noether current associated with the symmetry $\xi _{e}\rightarrow \xi _{e}+\alpha $. All the Majoron interactions are cast in the current $J_{e}^{\mu }$ obtained from ${\cal L}_{{\rm int}}(\partial _{\mu }\xi _{e})$. Its leading terms do not depend on the particular model and are derived from the $\chi ^{a}$, $\sigma _{i}$ kinetic Lagrangian using Eqs.\ (\ref{psi}), (\ref{sigma}). The result is the following: \begin{mathletters} \begin{eqnarray} {\cal L}_{{\rm int}} &=&(\partial _{\mu }\xi _{e})\, \left( {\bar{e}\,}\gamma ^{\mu }\,e+{\bar{\nu}_{e}}\gamma ^{\mu }\nu _{e}+\cdots \right) \ , \label{Lint} \\ & & \nonumber \\ J_{e}^{\mu } &=&\,{\bar{e}\,}\gamma ^{\mu }\,e+{\bar{\nu}_{e}}\gamma ^{\mu }\nu _{e}+\cdots \ . 
\label{Je} \end{eqnarray} \end{mathletters} The dots represent scalar boson terms and model dependent radiative corrections that are not relevant for this work and will be ignored in the following. The expressions above are typical of one kind of model, the Abelian singlet Majorons \cite{chik81,vall82,grin85,bent98}. Singlet means that the scalar fields that spontaneously break the lepton number symmetries are all singlets under the SM gauge group SU(2)$\times $U(1), thus complying with the LEP results on the $Z^{0}$ invisible width, unlike the triplet-Majoron model \cite{gelm81}. By Abelian I mean U(1) symmetry groups, not horizontal in flavor space, associated with $L_{e},$ $L_{\mu },$ $L_{\tau }$ or any linear combinations of them. As the respective currents do not change flavor, these models are not bound by the laboratory limits on the familon models \cite{pdg96}. In addition, of all interactions involving SM particles, the neutrino masses are the most important source of lepton number violation. Thus, in single collision or decay reactions, the effective strength of the Majoron couplings, resulting from $\partial _{\mu }J^{\mu }/\left\langle \sigma \right\rangle ,$ is proportional to the neutrino masses and suppressed by the symmetry breaking scale: $g\sim m_{\nu }/\left\langle \sigma \right\rangle $. For that reason, one does not expect observable neutrinoless double beta decays accompanied by Majoron emission, as shown in \cite{hirs96}. With a sensitivity to neutrino-Majoron couplings of the order of $10^{-5}$ \cite{bern92}, such searches cannot even probe relatively low symmetry breaking scales. This kind of model also escapes present astrophysical bounds on the couplings of Nambu-Goldstone bosons. Neutrino-Majoron couplings with strengths $g\sim m_{\nu }/\left\langle \sigma \right\rangle $ are too far from $10^{-4}$ to change supernova collapse dynamics \cite{kolb82}, and even below the threshold of $\sim 10^{-8.5}$ for supernova cooling through singlet Majoron emission \cite{choi90}. Finally, the pseudo-scalar couplings to electrons, which could be responsible for energy loss in stars \cite{raff97}, only arise through radiative corrections and are thus further suppressed. In the $\xi _{e}$ equation of motion, the source term $\partial _{\mu }J_{e}^{\mu }$ is nothing but the time rate of {\em creation} of $L_{e}$-number carried by {\em matter} particles per unit volume. If the neutrinos $\nu _{e}$ and $\nu _{\mu }$ oscillate into each other outside the supernova neutrinospheres, but not their anti-particles, the net variation of $L_{e}$ is given by the difference between the numbers of converted neutrinos $N(\nu _{\mu }\to \nu _{e})$ and $N(\nu _{e}\to \nu _{\mu })$. In a stationary regime the fluxes are constant in time and Eq.\ (\ref{ddfi}) reduces to a Poisson equation with a Coulomb-like solution for $\xi _{e}$ \cite{bent98}. The gradient $\vec{A}_{e}=-\vec{\nabla}\xi _{e}$ obeys a Gauss law. In a spherically symmetric configuration it only has a radial component, \begin{equation} A_{e}(r)=-\frac{1}{\Omega _{e}^{2}}\frac{\dot{L}_{e}(r)}{4\pi \,r^{2}}\ , \label{Aei} \end{equation} that is determined by the integral of the source term over the volume of radius $r$, in this case, the $L_{e}$-number created per unit time in that volume, $\dot{L}_{e}(r)=\smallint d^{3}x\ \partial _{\mu }J_{e}^{\mu }$.
It can also be expressed as \cite{bent98} \begin{equation} A_{e}(r)=-\frac{1}{\Omega _{e}^{2}}\left[ j({\nu _{\mu }\to \nu _{e})}-j({\nu _{e}\to \nu _{\mu })}\right] \ , \label{Ae} \end{equation} where $j({\nu _{e}\to \nu _{\mu })}$ denotes the flux of $e$-neutrinos converted to $\mu $ flavor and $j({\nu _{\mu }\to \nu _{e})}$ the reciprocal. Both these quantities are functions of the radius $r$. Taking into consideration the interactions specified by Eqs.\ (\ref{LbSM}), (\ref{Lint}), the equation of motion of the neutrino wave function is, including flavor space, \begin{equation} \left( i\,\partial \hspace{-0.22cm}/\,-\gamma ^{0}V_{0}-\vec{\gamma}\!\cdot \!\vec{A}_{e}\,\hat{L}_{e}\right) \psi _{L}=m\,\psi _{R}\ , \label{empsi} \end{equation} where $m$ is the $\nu $ mass matrix (real for simplicity), $\hat{L}_{e}$ is the flavor-valued quantum number (1 for ${\nu _{e}}$ and 0 for ${\nu _{\mu }}$, ${\nu _{\tau }}$) and $V_{0}$ designates the flavor conserving SM potential in a medium at rest. One derives, in the same way as in the case of a scalar potential $V_{0}$, the equations governing flavor oscillations \cite{kuo89} (see also \cite{bent98}), \begin{equation} i\frac{\partial }{\partial \,r}\nu =\left( \frac{m^{2}}{2E}+V_{0}+A_{e}\,\hat{L}_{e}\right) \nu \ , \end{equation} which give (after absorbing a flavor universal term in the wave function) \begin{equation} i\frac{\partial }{\partial \,r}\left( \begin{array}{c} {\nu _{e}} \\ \\ {\nu _{\mu }} \end{array} \right) =\frac{1}{2E}\left( \begin{array}{cc} 2E(V_{W}+A_{e}) & \quad \frac{1}{2}\Delta m^{2}\sin 2\theta \\ & \\ \frac{1}{2}\Delta m^{2}\sin 2\theta & \quad \Delta m^{2}\cos 2\theta \end{array} \right) \left( \begin{array}{c} {\nu _{e}} \\ \\ {\nu _{\mu }} \end{array} \right) \;. \end{equation} $V_{W}=\sqrt{2}\,G_{{\rm F}}\,n_{e}$ is the charged-current potential of $\nu _{e}$ in a medium with electron number density $n_{e}$ \cite{wolf78} and $\theta $ is the mixing angle. It is worth noticing that the results do not change if one considers alternatively a NG boson associated with the breaking of $L_{\mu }$ or $L_{e}-L_{\mu }$. The reason is that $\nu _{e}\leftrightarrow \nu _{\mu }$ oscillations only violate $L_{e}-L_{\mu }$ (the non-conservation of $L_{e}+L_{\mu }$ is suppressed by $m_{\nu }^{2}/E^{2}$) and only care about the difference between the $\nu _{e}$ and $\nu _{\mu }$ potentials. In order to calculate $A_{e}(r)$ one needs to know the fluxes of $\nu _{\mu }$ and $\nu _{e}$ as functions of the radius, with and without oscillations. I make the simplification of neglecting the oscillation length by saying that a neutrino with energy $E_{R}$ is converted to the other flavor at the position where the resonance condition, \begin{equation} E_{R}=\frac{\Delta m^{2}\cos 2\theta }{2(V_{W}+A_{e})}\ , \label{ER} \end{equation} is fulfilled. In addition, having in mind that in a non-adiabatic regime only a fraction $1-P_{c}$ changes flavor (small mixing angle), the level crossing probability $P_{c}$ is calculated using the Landau-Zener approximation \cite{haxt86,kuo89}, \begin{equation} P_{c}(E)=\exp \left\{ -\frac{\pi }{4}\left| \frac{\Delta m^{2}}{{\rm d}E_{R}/{\rm d}r}\frac{\sin ^{2}2\theta }{\cos 2\theta }\right| \right\} _{\!\!R}\ . \label{Pc} \end{equation} I believe that these approximations change the numbers but not the lesson drawn from comparing the results with and without a Majoron field.
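As an illustration (not part of the original calculation), a minimal numerical sketch of Eq.\ (\ref{Pc}) is given below for the pure-SM case: since $V_{W}\propto r^{-3}$ (see Eq.\ (\ref{VW}) below), the resonance condition gives $E_{R}\propto r^{3}$ and hence ${\rm d}E_{R}/{\rm d}r=3E_{R}/r$ at resonance; the factor $\hbar c$ renders the Landau-Zener exponent dimensionless. The mixing parameters are those adopted later in the text, and all numerical choices are illustrative assumptions.

\begin{verbatim}
import numpy as np

# Hedged sketch: Landau-Zener P_c(E) for the SM-only case V_W = C_W / r^3.
HBARC = 1.97327e-5           # eV cm; converts eV^2/(eV/cm) to a pure number
DM2 = 7.0e-6                 # Delta m^2 [eV^2]
SIN2_2TH = 4.0e-8 / DM2      # from Delta m^2 sin^2(2theta) = 4e-8 eV^2
COS_2TH = np.sqrt(1.0 - SIN2_2TH)
C_W = 0.76 * 0.5 * 4.0 * 1e-12   # V_W = C_W * r_10^{-3} [eV], Y_e=1/2, Mtilde=4e31 g

def crossing_probability(E_MeV):
    """Landau-Zener P_c at the resonance radius of a neutrino of energy E."""
    E = E_MeV * 1e6                                          # eV
    r10 = (2.0 * C_W * E / (DM2 * COS_2TH)) ** (1.0 / 3.0)   # r_res / 1e10 cm
    dER_dr = 3.0 * E / (r10 * 1e10)                          # eV/cm, since E_R ~ r^3
    gamma = (np.pi / 4.0) * DM2 * SIN2_2TH / COS_2TH / dER_dr / HBARC
    return np.exp(-gamma)

for E in (5.0, 10.0, 20.0, 30.0):
    print(f"E = {E:5.1f} MeV  ->  P_c = {crossing_probability(E):.2f}")
\end{verbatim}

As expected, $P_{c}$ grows with energy (roughly as $\exp (-\gamma _{0}E^{-2/3})$ for this density profile), reproducing the trend of the dashed curve in Fig.\ 1.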
Let the number of emitted particles per unit time and energy be specified by the distribution functions $f_{\nu _{e}}(E)={\rm d}\dot{N}_{\nu _{e}}/{\rm d}E$ and $f_{\nu _{\mu }}(E)={\rm d}\dot{N}_{\nu _{\mu }}/{\rm d}E$. They are normalized by the relation between the number luminosity $\dot{N}_{\nu }$, the energy luminosity $L_{\nu }$ and the average energy $\bar{E}_{\nu }$ of each $\nu $ flavor, namely $\dot{N}_{\nu }=L_{\nu }/\bar{E}_{\nu }$. $L_{\nu }$ and $\dot{N}_{\nu }$ will be given below in units of ergs/s and ergs/s/MeV respectively. Equation (\ref{ER}) gives the resonance energy as a function of the radius: particles with lower energies reach the resonance position at higher density regions and the hottest neutrinos oscillate at the largest radii. The statement that the number of converted neutrinos is the fraction $1-P_{c}$ of the number of particles with resonance energy translates into an equation for the time rate ${\rm d}\dot{L}_{e}={\rm d}\dot{N}({\nu _{\mu }\to \nu _{e})-{\rm d}}\dot{N}({\nu _{e}\to \nu _{\mu })}$ of $L_{e}$-number creation in a shell with depth ${\rm d}r$: \begin{equation} {\rm d}\dot{L}_{e}=(1-P_{c})\,\left[ f_{\nu _{\mu }}(E_{R})-f_{\nu _{e}}(E_{R})\right] \frac{{\rm d}E_{R}}{{\rm d}r}{\rm d}r\ . \label{dLe} \end{equation} This establishes a differential equation for $\dot{L}_{e}(r)$. Notice that the derivative of $E_{R}$ is not independent of the derivative of $\dot{L}_{e}$, because the Majoron potential $A_{e}(r)$ that enters in $E_{R}$ depends also on $\dot{L}_{e}(r)$, as Eq.\ (\ref{Aei}) shows. In addition, the probability $P_{c}$ depends on the $E_{R}$ and $\dot{L}_{e}$ derivatives as well, which makes Eqs.\ (\ref{Aei}), (\ref{ER}) - (\ref{dLe}) a non-linear differential equation for $\dot{L}_{e}(r)$. \begin{figure}[t] \centering \epsfig{file=fig1.eps,width=80mm} \vspace{15pt} \caption{Level crossing probability $P_{c}$ as a function of the $\nu $ energy. The dashed curve holds for the SM with constants $\tilde{M}=4\times 10^{31}\,{\rm g}$ and $Y_{e}=1/2$. In the dotted and bold curves the Nambu-Goldstone field $\xi _{e}$ exists with $G_{e}=G_{{\rm F}}$ and $G_{e}=4\,G_{{\rm F}}$ respectively, and the luminosities are $10^{52}\,{\rm ergs/s}$ for $\nu _{e}$ and $7\times 10^{51}\,{\rm ergs/s}$ for $\nu _{\mu }$.} \end{figure} It remains to specify the density profile of the medium. In the regions of a supernova star with densities typical of the Sun the mass density goes as $1/r^{3}$, with the constant $\tilde{M}=\rho \,r^{3}$ lying between $10^{31}{\rm g}$ and $15\times 10^{31}{\rm g}$ depending on the star \cite{wils86}. Then, in terms of the electron abundance $Y_{e}\approx 1/2$, the electroweak potential ($\sqrt{2}{\rm \,}G_{{\rm F}}\,n_{e}$) is \begin{equation} V_{W}=0.76\ Y_{e}\,\frac{\tilde{M}}{10^{31}\,{\rm g}}\,r_{10}^{-3}\times 10^{-12}\,\,{\rm eV}\;, \label{VW} \end{equation} where $r_{10}=r/10^{10}\,{\rm cm}$. This is to be compared with \begin{equation} A_{e}=1.48\ \frac{G_{e}}{G_{{\rm F}}}\frac{-\dot{L}_{e}}{10^{51}\,{\rm ergs/s/MeV}}\,r_{10}^{-2}\times 10^{-12}\,\,{\rm eV\ ,} \label{Ae2} \end{equation} where $G_{{\rm F}}=11.66\,{\rm TeV}^{-2}$ is the Fermi constant and $G_{e}=1/\Omega _{e}^{2}$.
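For orientation (an illustrative evaluation, not the full non-linear solution of Eqs.\ (\ref{Aei}), (\ref{ER})-(\ref{dLe})), the sketch below evaluates Eqs.\ (\ref{VW}) and (\ref{Ae2}) for a constant net conversion rate $-\dot{L}_{e}=10^{51}\,{\rm ergs/s/MeV}$; it shows why $A_{e}$, falling only as $r^{-2}$, overtakes $V_{W}\propto r^{-3}$ around $r\sim 10^{10}\,{\rm cm}$ for the parameters used in the figures.

\begin{verbatim}
# Hedged sketch: compare V_W (Eq. VW) with A_e (Eq. Ae2) as functions of radius.
# Illustrative parameters: Y_e = 1/2, Mtilde = 4e31 g, G_e = G_F, and a constant
# -Ldot_e = 1e51 ergs/s/MeV (in the real problem Ldot_e depends on r).
def V_W(r10, Ye=0.5, Mtilde31=4.0):
    return 0.76 * Ye * Mtilde31 / r10**3 * 1e-12          # eV

def A_e(r10, Ge_over_GF=1.0, minus_Ldot51=1.0):
    return 1.48 * Ge_over_GF * minus_Ldot51 / r10**2 * 1e-12   # eV

for r10 in (0.5, 1.0, 2.0, 5.0):
    print(f"r = {r10:4.1f}e10 cm :  V_W = {V_W(r10):.2e} eV,  A_e = {A_e(r10):.2e} eV")
# V_W = A_e at r_10 = (0.76*Ye*Mtilde31)/(1.48*Ge_over_GF*minus_Ldot51) ~ 1,
# i.e. around 1e10 cm for these choices; beyond that radius A_e dominates.
\end{verbatim}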
Clearly, if the neutrino luminosities are sufficiently high, say $10^{52}\,{\rm ergs/s}$ for $10{\rm \ MeV}$ neutrinos, and the scale of lepton symmetry breaking is around or below the Fermi scale, the Majoron potential $A_{e}$ becomes competitive with $V_{W}$ at radii where the resonance occurs for $\Delta m^{2}$ values interesting for solar neutrinos ($10^{-5}-10^{-4}\,{\rm eV}^{2}$). More generally, at large enough distances the Majoron potentials decay as the inverse square radius and necessarily dominate over the local interactions. \begin{figure}[t] \centering \epsfig{file=fig2.eps,width=80mm} \vspace{15pt} \caption{Total potential $V_{e}=V_{\nu _{e}}-V_{\nu _{\mu }}$ as a function of the radius. The dashed curve stands for the SM potential, the dotted and bold curves for the Majoron case with the same parameters as for the corresponding curves of Fig. 1.} \end{figure} Let us examine the $\nu _{e}\leftrightarrow \nu _{\mu }$ oscillations with the mixing parameters of the non-adiabatic solar neutrino solution (for a recent update see \cite{hata97}), choosing in particular the values $\Delta m^{2}=7\times 10^{-6}\,{\rm eV}^{2}$, $\Delta m^{2}\sin ^{2}2\theta =4\times 10^{-8}\,{\rm eV}^{2}$. In a supernova the resonance is non-adiabatic as well and, as Eq.\ (\ref{Pc}) indicates, the survival probability $P_{c}$ increases with the $\nu $ energy. This was studied in detail in the framework of the SM \cite{mina88}. In Fig. 1, $P_{c}$ in the Landau-Zener approximation is plotted against the energy. The dashed curve holds for the SM potential with a constant $\tilde{M}=4\times 10^{31}{\rm g}$. The aggravation of the non-adiabaticity with energy is manifest. To study the Majoron case one has to specify the energy spectra and luminosities. I used Fermi-Dirac distributions with the following values of temperature and chemical potential \cite{burr90}: for $\nu _{e}$, $T=2.4{\rm \ MeV}$ and $\mu =3.2\,T$; for $\nu _{\mu }$, $T=5.1{\rm \ MeV},\ \mu =4.1\,T$. This gives average energies of $10$ and $23{\rm \ MeV}$ respectively. The luminosities are in turn $10^{52}\,{\rm ergs/s}$ for $\nu _{e}$ and $7\times 10^{51}\,{\rm ergs/s}$ for $\nu _{\mu }$, which amount to particle emission rates of $10^{51}$ and $3\times 10^{50}\,{\rm ergs/s/MeV}$ respectively. Because the $e$-neutrinos are more numerous, the $\nu _{e}\leftrightarrow \nu _{\mu }$ oscillations produce a net destruction of $L_{e}$-number and a positive Majoron potential $A_{e}$. The dynamics is the following: the less energetic $\nu _{e}$ oscillate to $\nu _{\mu }$ at the smallest radii; this conversion produces a positive $A_{e}(r)$ which attenuates the fall of the total potential $V_{e}=V_{W}+A_{e}$ with the radius; as a consequence, the adiabaticity improves at larger $r$ and the most energetic neutrinos change flavor with higher probabilities. \begin{figure}[t] \centering \epsfig{file=fig3.eps,width=3.375in} \vspace{15pt} \caption{The dotted curves are the assumed luminosity distributions for $\nu _{e}$ and $\nu _{\mu }$ as emitted from the neutrinospheres. The dashed curve represents the $\nu _{e}$ luminosity after electroweak neutrino oscillations whereas in the bold curve a $\xi _{e}$ field exists with $G_{e}=4\,G_{{\rm F}}$.} \end{figure} Figure 2 shows the total potential $V_{e}$ as a function of the radius. The dashed curve stands for $V_{W}$, the SM potential, with $\tilde{M}=4\times 10^{31}\,{\rm g}$ and $Y_{e}=1/2$.
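As a consistency check (not in the original text), the quoted average energies follow from the Fermi-Dirac form $f(E)\propto E^{2}/[\exp (E/T-\eta )+1]$ with $\eta =\mu /T$, via $\bar{E}=T\,F_{3}(\eta )/F_{2}(\eta )$, where $F_{k}$ are the Fermi integrals; a minimal numerical sketch:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Hedged sketch: average energy of a Fermi-Dirac spectrum, <E> = T F_3/F_2.
def mean_energy(T_MeV, eta):
    F = lambda k: quad(lambda x: x**k / (np.exp(x - eta) + 1.0), 0.0, 60.0)[0]
    return T_MeV * F(3) / F(2)

print(f"nu_e : <E> = {mean_energy(2.4, 3.2):.1f} MeV   (quoted: 10 MeV)")
print(f"nu_mu: <E> = {mean_energy(5.1, 4.1):.1f} MeV   (quoted: 23 MeV)")
\end{verbatim}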
In the dotted and bold curves the Nambu-Goldstone field associated with $L_{e}$ symmetry breaking operates with a constant $G_{e}=G_{{\rm F}}$ and $G_{e}=4\,G_{{\rm F}}$, respectively. The potential falls more slowly as the field $\xi _{e}$ grows. The level crossing probability is plotted in Fig.\ 1 for both cases (dotted and bold curves) and the effect is clear: the stronger the Majoron field, the more efficient is the flavor conversion. It is worth mentioning that if the $\mu $-neutrinos were more numerous than the $e$-neutrinos the effect would be the opposite, because the Majoron potential would be negative ($\dot{L}_{e}>0$). That is actually reflected in the rapid rise of $P_{c}$ at the $\nu _{\mu }$ energy band around $20{\rm \ MeV}$. Figure 3 shows the implications for the outgoing $\nu _{e}$ energy spectrum. The dotted curves are the assumed luminosity distributions for the emitted $\nu _{e}$s and $\nu _{\mu }$s. The dashed curve represents the luminosity of the $e$-neutrinos that come out of the star after standard MSW oscillations and the bold curve is the same but with a Majoron field ($G_{e}=4\,G_{{\rm F}}$). The improvement of adiabaticity makes more $\nu _{\mu }$s convert into $\nu _{e}$ and fewer $\nu _{e}$ survive, and because the $\mu $-neutrinos are more energetic, the outgoing $\nu _{e}$ spectrum is harder than if there were no Majoron field. The average energy of the outgoing $e$-neutrinos is $13{\rm \ MeV}$ if $G_{e}=0$ but rises to $17{\rm \ MeV}$ if $G_{e}=G_{{\rm F}}$ and $21{\rm \ MeV}$ if $G_{e}=4\,G_{{\rm F}}$. \section{Conclusions and discussion} To summarize, if the explanation of the solar neutrino deficit is the MSW non-adiabatic oscillation $\nu _{e}\rightarrow \nu _{\mu }\ $(or $\nu _{e}\rightarrow \nu _{\tau }$), then the standard model of electroweak interactions predicts that in a supernova the $\nu _{e}\leftrightarrow \nu _{\mu }$ transitions are also non-adiabatic. It means that, to a large extent, the $e$-neutrinos preserve their lower energy spectrum, unless $\nu _{e}$ also mixes with another flavor whose $\Delta m^{2}$ is too high or too low to show up in solar neutrinos. If however $L_{e}$ is a spontaneously broken quantum number, the associated Nambu-Goldstone boson, $\xi _{e}$, will acquire a classical field configuration which may be strong enough to produce a back reaction with the net effect of improving the adiabaticity of the $\nu _{e}\leftrightarrow \nu _{\mu }$ transitions. The final result is a $\nu _{e}$ energy spectrum harder than expected. In 1987, the existing detectors were only able to detect electron anti-neutrinos, but the now operating Super-Kamiokande and SNO experiments will be capable of detecting supernova $\nu _{e}$ events. The analysis of the energy distribution can in principle reveal or put limits on that kind of effect. The scenarios of neutrino mixing change considerably if one considers the evidence from atmospheric and terrestrial neutrino experiments (for a review see \cite{cald98}). The atmospheric neutrino anomaly and the zenith angle dependence observed by Super-Kamiokande can be explained by $\nu _{\mu }\rightarrow \nu _{\tau }$ oscillations, with best fit \cite{suzu98} $\Delta m^{2}=5\times 10^{-3}\,{\rm eV}^{2}$, $\sin ^{2}2\theta =1$. The alternative $\nu _{\mu }\rightarrow \nu _{e}$ is excluded by the CHOOZ limits \cite{CHOOZ97}.
This can still accommodate $\nu _{e}\rightarrow \nu _{\mu }$ or $\nu _{e}\rightarrow \nu _{\tau }$ as solar neutrino solutions, but that is no longer true if one takes into consideration the evidence from the Liquid Scintillation Neutrino Detector (LSND) for $\bar{\nu}_{\mu }\rightarrow \bar{\nu}_{e}$ \cite{LSND96} and $\nu _{\mu }\rightarrow \nu _{e}$ \cite{LSND97} oscillations. The very different $\Delta m^{2}$ scales involved in LSND ($\Delta m_{e\mu }^{2}>0.2\,{\rm eV}^{2}$), atmospheric and solar neutrinos call for a fourth flavor - a sterile neutrino. In that picture solar $\nu _{e}$s oscillate into the sterile $\nu _{s}$. We now examine the consequences for supernova neutrinos, always assuming the non-adiabatic solar neutrino solution and ignoring for definiteness the possible mixing between $\nu _{s}$ and $\nu _{\mu }$ or $\nu _{\tau }$. The oscillation pattern is the following: 1) MSW conversion of $\nu _{e}\rightarrow \nu _{\mu }$ with the LSND $\Delta m^{2}$; 2) sequential oscillation $\nu _{\mu }\rightarrow \nu _{e}\rightarrow \nu _{s}$, the first a LSND transition, the second a solar $\nu $ process. The outcome is a hard spectrum for $\nu _{e}$ depleted by $\nu _{e}\rightarrow \nu _{s}$, but only partially because of the non-adiabaticity of this transition. If, alternatively, a NG field $\xi _{e}$ exists (created by $\nu _{e}\rightarrow \nu _{\mu }$, $\nu _{\mu }\rightarrow \nu _{e}$ and $\nu _{e}\rightarrow \nu _{s}$), it improves the adiabaticity of $\nu _{e}\rightarrow \nu _{s}$, causing a $\nu _{e}$ depletion stronger than predicted by SM interactions. If one repeats the analysis with other scenarios of $\nu $ mixing, the effects will be different in detail but with one thing in common: the signature of NG fields is a {\em surprise}, {\em i.e.}, an oscillation pattern not consistent with the $\nu $ mixing derived from solar, atmospheric and terrestrial $\nu $ experiments. It should be kept in mind that the situation becomes more complex and rich if there is mixing between the three NG bosons associated with the three lepton flavors, a very natural feature if they are all spontaneously broken. This was explored in \cite{bent98}. A point that cannot be overstressed is that the Nambu-Goldstone fields are proportional to the rate of charge violating processes and therefore to the very reaction rates. In the case of neutrino oscillations this manifests itself as a strong dependence of the Majoron fields on the magnitudes of the neutrino luminosities. In the numerical simulation I chose $10^{52}\,{\rm ergs/s}$ for $\nu _{e}$ and $7\times 10^{51}\,{\rm ergs/s}$ for $\nu _{\mu }$, values produced and even exceeded in the roughly half a second that lasts between the neutronisation $\nu _{e}$ burst and the supernova explosion \cite{burr92,mayl87,burr90}. The neutrino luminosities decay afterwards on a time scale of 1 second, or rather 4 seconds \cite{burr90,blud88}, as indicated by the SN 1987A events \cite{hira87}. The highest luminosity happens during the first $\nu _{e}$ burst - above $10^{53}\,{\rm ergs/s}$ in the peak \cite{burr92,mayl87,burr90} - and the Majoron field may be even stronger then. However, the time scale of the rise and fall of the $\nu _{e}$ signal is about $5\,{\rm ms}$, too short to justify a stationary approximation in the calculation of $\xi _{e}$: light travels only $c\times 5\,{\rm ms}\approx 1.5\times 10^{8}\,{\rm cm}$ in such a period of time, so distances of the order of $10^{10}\,{\rm cm}$ are beyond the light cone and a special study is required.
The effects of the Majoron fields on the neutrino spectra, if any, will be observed over a shorter or longer interval of time depending on the actual scale of lepton number symmetry breaking. The observation of such a correlation with the flux magnitudes, by itself a signature of the NG fields, would thus provide a measurement of the scale of spontaneous symmetry breaking. \acknowledgements This work was supported in part by the project ESO/P/PRO/1127/96.
\section{Introduction} A considerable enhancement in molecular deuteration is observed towards the early stages of star formation. Despite the cosmic deuterium abundance relative to hydrogen being $\sim$1.5$\times$10$^{-5}$ \citep{linsky03}, the abundances of deuterated molecules relative to the corresponding main species range from $\sim$20\% for N$_2$H$^+$ towards pre-stellar cores to 10\% for methanol and formaldehyde in protostellar cores \citep{parise06}. These enhancements are a consequence of the exothermic reaction \begin{equation} \rm H_3^+ + HD \rightleftarrows H_2D^+ + H_2 + 230 K, \end{equation} and successive deuterations up to the formation of D$_3^+$ (see \citealt{ceccarelli14} and references therein). The deuterated isotopologues of H$_3^+$ can also dissociatively recombine with electrons, enhancing the atomic D/H ratio in the gas phase \citep{roberts03}. While H$_2$D$^+$, and the multiply deuterated isotopologues of H$_3^+$, deuterate molecules in the gas phase, the enhanced atomic D/H ratio is transferred to grains, where it deuterates the molecules on the surface. Isotopic fractionation, and in particular deuterium fractionation, is a powerful tool to study the evolution of material during the process of star and planetary system formation. Observations of ortho and para H$_2$D$^+$, for example, have made it possible to derive the age of a core forming a Sun-like star \citep{bruenken14}. Furthermore, observing and modelling water and its deuterated isotopologues has established that a substantial fraction of the water in the Solar System is inherited from the pre-stellar core where the Sun formed \citep{cleeves14, vandishoeck21}. The case of water has highlighted how powerful it is to use deuteration as a probe of inheritance, and it is important to note that the conclusive evidence was found in the high abundances of the doubly deuterated water. Molecules with the possibility of multiple deuteration in fact provide crucial constraints not only on the deuteration processes involved, but also on the formation of the main species. This is particularly essential in the case of molecules like H$_2$CO and H$_2$CS that are formed through an interplay of gas-phase and grain-surface chemistry. H$_2$CS, like H$_2$CO, is formed both in the gas phase (mainly from atomic S) and on the surface of dust grains by addition of hydrogen atoms on CS. H$_2$CS has been observed towards cold molecular clouds, protostellar cores, hot cores, circumstellar envelopes and protoplanetary disks \citep{sinclair73, vastel18, agundez08,legal19, drozdovskaya18}. Its deuterated isotopologues have also been observed towards starless and protostellar cores \citep{marcelino05, drozdovskaya18}. The low-mass pre-stellar core L1544, located in the Taurus molecular cloud at 170 pc \citep{galli19}, is a well-studied object. It is centrally concentrated \citep{wt99} and shows signs of contraction motion. Its central density is between $\sim$10$^6$ and $\sim$10$^7$ cm$^{-3}$ and the central temperature is $\sim$6 K \citep{Keto10a,crapsi07}. The core exhibits a high degree of CO freeze-out, and a high level of deuteration towards its center \citep{crapsi05}. It is chemically rich, showing spatial inhomogeneities in the distribution of molecular emission \citep{spezzano17, redaelli19, chacon19}.
The sulfur chemistry towards the dust peak of the pre-stellar core L1544 has been presented in \cite{vastel18}, where it is shown that the sulfur-bearing species in L1544 emit from an external layer ($\sim$ 10$^4$ AU from the core centre). Furthermore, only a fraction of a percent of the cosmic sulfur fractional abundance w.r.t. total H nuclei (1.5$\times$10$^{-5}$) is needed to reproduce the observations, confirming that sulfur is highly depleted in the dense interstellar medium \citep{laas19}. In this paper we present the first deuteration maps of H$_2$CS towards the pre-stellar core L1544. In Section 2 we present the observations. The analysis is described in Section 3, and the chemical modelling is presented in Section 4. We discuss the results in Section 5 and summarise our conclusions in Section 6. \begin{table*}{} \caption{Spectroscopic parameters of the observed lines} \label{table:parameters} \begin{tabular}{ccccc} \hline\hline \\[-2ex] Molecule & Transition & Rest frequency & $E_\text{up}$ & $n_\text{crit}$ (at 10\,K) \\ & $J_{K_a,K_c}$ &(MHz) & (K) &(cm$^{-3}$) \\[0.5ex] \hline \\[-2ex] H$_2$CS & $3_{0,3}-2_{0,2}$ & 103040.447(1) & 9.9 & $1.5\times 10^5$ \\ HDCS & $3_{0,3}-2_{0,2}$ & 92981.60(2) & 8.9 & -- \\ D$_2$CS & $3_{0,3}-2_{0,2}$ & 85153.92(5) & 8.1 & -- \\ \hline \end{tabular} \tablefoot{Numbers in parentheses denote $1\sigma$ uncertainties in units of the last quoted digit. $n_\text{crit}$ is the critical density of the transition.} \end{table*} \section{Observations} The emission maps of H$_2$CS, HDCS, and D$_2$CS towards L1544 were obtained using the IRAM 30m telescope (Pico Veleta, Spain) in two different observing runs in 2013 and 2015, and are shown in Figure~\ref{fig:integrated_intensity}. The D$_2$CS map shown in Figure~\ref{fig:integrated_intensity} is smaller than the H$_2$CS and HDCS maps because, given the weakness of the D$_2$CS line, in 2015 a deeper integration for this line was performed towards the inner region of L1544. The spectra of the three isotopologues extracted towards the dust peak of L1544 are shown in Figure~\ref{fig:spectra}. We performed a 2.5$^\prime$ $\times$2.5$^\prime$ on-the-fly (OTF) map centred on the source dust emission peak ($\alpha _{2000}$ = 05$^h$04$^m$17$^s$.21, $\delta _{2000}$ = +25$^\circ$10$'$42$''$.8). We used position switching with the reference position set at a (-180$^{\prime \prime}$,180$^{\prime\prime}$) offset with respect to the map centre. The observed transitions are summarised in Table \ref{table:parameters}. The EMIR E090 receiver was used with the Fourier transform spectrometer backend (FTS) at a spectral resolution of 50 kHz. The mapping was carried out in good weather conditions ($\tau_{225GHz}$ $\sim$ 0.3) with a typical system temperature of T$_{sys}$ $\sim$ 90-150 K. The data processing was done using the GILDAS software \citep{pet05}. The emission maps of H$_2$CS and HDCS have been smoothed to the beam size of D$_2$CS (30.5$^{\prime\prime}$). All maps have been gridded to a pixel size of 6$''$ with the CLASS software in the GILDAS package, which corresponds to 1/5 of the beam size. The integrated intensity maps shown in Figure~\ref{fig:integrated_intensity} have been computed in the 6.9-7.6 km s$^{-1}$ velocity range. To compute the column densities, we used the forward efficiency $F_{eff}$=0.95 and main beam efficiency B$_{eff}$=0.76 to convert the T$_A^*$ temperature scale into the T$_{MB}$ temperature scale.
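As a minimal illustration of this last step (in practice it is done inside GILDAS/CLASS; the snippet below is only a sketch of the stated relation), the efficiency correction is a simple rescaling:

\begin{verbatim}
# Hedged sketch: main-beam temperature from antenna temperature,
# T_MB = (F_eff / B_eff) * T_A*, with the efficiencies quoted above.
F_EFF, B_EFF = 0.95, 0.76

def ta_star_to_tmb(t_a_star_K):
    return (F_EFF / B_EFF) * t_a_star_K   # factor F_eff/B_eff ~ 1.25

print(ta_star_to_tmb(0.100))   # 0.100 K on T_A* -> ~0.125 K on T_MB
\end{verbatim}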
The velocity rest frame used in this work is the local standard of rest (lsr). \begin{figure*} \begin{center} \includegraphics[width=19cm]{Fig1.pdf} \end{center} \caption{Integrated intensity maps of the 3$_{0,3}$-2$_{0,2}$ transitions of H$_2$CS, HDCS, and D$_2$CS towards the inner 2.5$^\prime$ $\times$2.5$^\prime$ of L1544. All maps have been smoothed to 30.5$^{\prime\prime}$, and the beam is shown at the bottom left of each map. The solid white contours are the 30\%, 60\% and 90\% of the peak intensity of the N(H$_2$) map of L1544 computed from {\em Herschel}/SPIRE data \citep{spezzano16}. The dashed white contours indicate the 3$\sigma$ integrated emission with steps of 3$\sigma$ ({\normalfont rms}$_{H_2CS}$= 10 mK km s$^{-1}$, {\normalfont rms}$_{HDCS}$=12 mK km s$^{-1}$, {\normalfont rms}$_{D_2CS}$= 9 mK km s$^{-1}$).} \label{fig:integrated_intensity} \end{figure*} \section{Analysis} \subsection{Deuteration maps} The column density maps of each H$_2$CS isotopologue (shown in Figure~\ref{fig:H2CS_cd}) have been computed using the formula reported in \cite{Mangum15}, assuming that the source fills the beam and that the excitation temperature T$_{ex}$ is constant: \begin{equation} N_{tot} = \frac{8\pi\nu^3Q_{rot}(T_{ex})W}{c^3A_{ul}g_u}\frac{e^{\frac{E_u}{kT_{ex}}}}{J(T_{ex}) - J(T_{bg})},\\ \end{equation} \noindent where $J(T) = {\frac{h\nu}{k}}(e^{\frac{h\nu}{kT}}-1)^{-1}$ is the source function in Kelvin, $k$ is the Boltzmann constant, $\nu$ is the frequency of the line, $h$ is the Planck constant, $c$ is the speed of light, $A_{ul}$ is the Einstein coefficient of the transition, $W$ is the integrated intensity, $g_u$ is the degeneracy of the upper state, $E_u$ is the energy of the upper state, and $Q_{rot}$ is the partition function of the molecule at the given temperature $T_{ex}$. $T_{bg}$ is the background temperature (2.7 K). Following the results of the MCMC analysis reported in \cite{vastel18}, we assumed $T_{ex}$ = 12.3, 6.8 and 9.3 K for H$_2$CS, HDCS and D$_2$CS, respectively. While computing the column density across the core, the excitation temperature was kept constant for each isotopologue. The error introduced by using a constant excitation temperature to calculate the column density map across a pre-stellar core has been found to be negligible in a previous study of L1544 (see the Appendix of \citealt{redaelli19}). \\ The deuteration maps are shown in Figure~\ref{fig:H2CS_ratio}. Towards the dust peak N(HDCS)/N(H$_2$CS)$\sim12\pm2\%$, N(D$_2$CS)/N(H$_2$CS)$\sim12\pm2\%$ and N(D$_2$CS)/N(HDCS)$\sim100\pm16\%$, consistent with the values reported in \cite{vastel18}, where the column densities have been derived assuming constant excitation temperature with the MCMC as well as with the rotational diagram method, using three rotational transitions for each of the H$_2$CS isotopologues. The column density ratios involving D$_2$CS have been detected with a signal-to-noise ratio larger than 3 only towards a 30$''$ by 60$''$ region around the center of L1544, and show an increase of deuteration towards the dust peak. However, given the small coverage, it is difficult to draw conclusions on spatial variations of the efficiency of the second deuteration of H$_2$CS. The N(HDCS)/N(H$_2$CS) map shows that the deuteration peak for H$_2$CS in L1544 is located towards the north-east, at a distance of about 10000 AU ($\sim$60$^{\prime\prime}$) from the dust peak, where N(HDCS)/N(H$_2$CS)$\sim27\pm7\%$.
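A minimal sketch of how the formula above can be evaluated in practice is given below (not the code used for the maps); the Einstein coefficient, upper-state degeneracy and partition function in the example call are placeholders to be taken from a spectroscopic catalogue.

\begin{verbatim}
import numpy as np
from scipy import constants as const

# Hedged sketch of the column density formula, assuming a beam-filling
# source and constant T_ex.
def J(nu, T):
    """Source function in Kelvin."""
    return (const.h * nu / const.k) / np.expm1(const.h * nu / (const.k * T))

def column_density(W_K_kms, nu, A_ul, g_u, E_u_K, Q_rot, T_ex, T_bg=2.7):
    W = W_K_kms * 1.0e3                       # K km/s -> K m/s
    prefac = 8.0 * np.pi * nu**3 * Q_rot * W / (const.c**3 * A_ul * g_u)
    N_m2 = prefac * np.exp(E_u_K / T_ex) / (J(nu, T_ex) - J(nu, T_bg))
    return N_m2 * 1.0e-4                      # m^-2 -> cm^-2

# Example: H2CS 3(0,3)-2(0,2) at 103040.447 MHz, E_u = 9.9 K, T_ex = 12.3 K.
# A_ul, g_u and Q_rot are placeholders, to be replaced by catalogue values.
N = column_density(W_K_kms=0.5, nu=103040.447e6, A_ul=1.5e-5, g_u=7,
                   E_u_K=9.9, Q_rot=50.0, T_ex=12.3)
print(f"N(H2CS) ~ {N:.2e} cm^-2")
\end{verbatim}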
To test the effects of using different excitation temperatures for the three isotopologues on our results, we have computed the column density maps and deuteration maps assuming $T_{ex}$ = 9.3 K for all isotopologues, see Figure~\ref{fig:H2CS_cd9.4}. The difference in the corresponding column densities is rather small, with N(H$_2$CS) showing the largest variation ($\sim$20$\%$). However, the deuteration maps do not show significant changes, and most importantly the single deuteration peak is still located in the same position as in the map shown in Figure~\ref{fig:H2CS_ratio}. \begin{figure*} \begin{center} \includegraphics[width=18cm]{Fig2.pdf} \end{center} \caption{Deuteration maps of H$_2$CS towards L1544. The column density ratio has been computed only in pixels where both molecules have been observed at least at a 3$\sigma$ level. The solid white contours are the 30\%, 60\% and 90\% of the peak intensity of the N(H$_2$) map of L1544 computed from {\em Herschel}/SPIRE data. The dotted white contours indicate the 3$\sigma$ integrated emission contour of HDCS in the left panel, and of D$_2$CS in the central and right panels.} \label{fig:H2CS_ratio} \end{figure*} \begin{table} \centering \caption{Initial abundances (with respect to $n_{\rm H} \approx 2\,n({\rm H_2})$) used in the chemical modeling.} \begin{tabular}{l|l} \hline \hline Species & Abundance\\ \hline $\rm H_2$ & $5.00\times10^{-1}\,^{(a)}$\\ $\rm He$ & $9.00\times10^{-2}$\\ $\rm C^+$ & $1.20\times10^{-4}$\\ $\rm N$ & $7.60\times10^{-5}$\\ $\rm O$ & $2.56\times10^{-4}$\\ $\rm S^+$ & $8.00\times10^{-8}$\\ $\rm Si^+$ & $8.00\times10^{-9}$\\ $\rm Na^+$ & $2.00\times10^{-9}$\\ $\rm Mg^+$ & $7.00\times10^{-9}$\\ $\rm Fe^+$ & $3.00\times10^{-9}$\\ $\rm P^+$ & $2.00\times10^{-10}$\\ $\rm Cl^+$ & $1.00\times10^{-9}$\\ \hline \end{tabular} \label{tab:initialabundances} \tablefoot{$^{(a)}$ The initial $\rm H_2$ ortho/para ratio is $1 \times 10^{-3}$.} \end{table} \begin{table*} \caption{Column density of the normal, singly, and doubly deuterated isotopologues of $c$-C$_3$H$_2$, H$_2$CO, and H$_2$CS, and their deuteration ratios towards the dust peak of L1544.} \label{table:ratios} \scalebox{0.95}{ \begin{tabular}{cc|cc|cc} \hline\hline \\[-2ex] \multicolumn{6}{c}{Column densities (10$^{12}$ cm$^{-2}$)}\\ \hline \\[-2ex] $c$-C$_3$H$_2$ &37(1)&H$_2$CO&36(23)&H$_2$CS&6.9(6)\\ $c$-C$_3$HD &6.2(3)&HDCO&1.30(9)&HDCS&0.8(1)\\ $c$-C$_3$D$_2$ &0.66(2)&D$_2$CO&1.5(3)&D$_2$CS&0.80(8)\\ \hline \\[-2ex] \multicolumn{6}{c}{Column density ratios}\\ \hline \\[-2ex] $c$-C$_3$HD/ $c$-C$_3$H$_2$&17(1)\%&HDCO/H$_2$CO&4(2)\%&HDCS/H$_2$CS&12(2)\% \\ $c$-C$_3$D$_2$/ $c$-C$_3$H$_2$&1.7(1)\%&D$_2$CO/H$_2$CO&4(3)\%&D$_2$CS/H$_2$CS&12(2)\%\\ $c$-C$_3$D$_2$/ $c$-C$_3$HD&10(1)\%&D$_2$CO/HDCO&115(10)\%&D$_2$CS/HDCS&100(16)\%\\ \hline \end{tabular} } \tablefoot{The column densities of H$_2$CO and $c$-C$_3$H$_2$ have been calculated from the column density of the $^{13}$C isotopologues, assuming a $^{12}$C/$^{13}$C ratio of 68. Numbers in parentheses denote $1\sigma$ uncertainties in units of the last quoted digit. References: $c$-C$_3$H$_2$ and isotopologues from Spezzano et al. 2013, H$_2$CO and isotopologues from Chac\'on-Tanarro et al. 2019, H$_2$CS and isotopologues from this work.
\\ } \end{table*} \section{Comparison with chemical models} \label{chemical_models} To investigate whether the observed trends in the deuteration of H$_2$CS and H$_2$CO can be understood in the context of the current knowledge of deuterium chemistry in the ISM, we have run a set of gas-grain chemical simulations attempting to reproduce the column densities and column density ratios observed toward L1544. For this, we used our chemical model, which includes an extensive description of deuterium and spin-state chemistry; the main features of the chemical code and the chemical networks are described in detail in \citet{Sipila15a, Sipila15b, Sipila19b}, and are omitted here for brevity. We assume monodisperse spherical grains with a radius of 0.1\,$\mu$m and use the initial abundances displayed in Table~\ref{tab:initialabundances}. The simulation results discussed below correspond to a two-phase chemical model, i.e., one where the ice on the grain surface is treated as a single active layer. For the present work, we have run a set of single-point chemical simulations to check the effect of the volume density on the deuterium fractionation. In these simulations, the temperature and visual extinction are set to ``standard'' values for starless cores: $T_{\rm gas} = T_{\rm dust} = 10\,\rm K$, $A_{\rm V} = 10 \, \rm mag$. We have also run a core simulation using the physical model for L1544 presented by \citet{Keto10a}; the physical model was divided into concentric shells and chemical simulations were run in each shell to produce time-dependent, radially varying abundance profiles. Essentially the same modeling procedure was recently used in \citet{redaelli21} to investigate the $r$-dependence of the cosmic-ray ionization rate in L1544. It was found in that paper that observations of the line profiles of several species are well matched by the ``low'' model of \citet{Padovani18}; we adopt that model here as well. We also employ the new description of cosmic-ray-induced desorption presented in \citet{Sipila21}. Figure~\ref{fig:dh_ratios} shows the simulated gas-phase and grain-surface deuterium fractions of $\rm H_2CO$ and $\rm H_2CS$ in the single-point chemical models. The values of the various ratios depend on the volume density, but it is evident that in all cases the simulated doubly-to-singly deuterated ratios are clearly below unity, lying between 0.05 and 0.4. In particular, the ratios are well below unity also on the grain surfaces, which shows that the low gas-phase ratios (as compared to the observations) are not due to inadequate desorption. The results of the L1544 simulation are shown in Figure~\ref{fig:L1544}, which displays the simulated column density ratios toward the center of the model core, i.e., toward the dust peak in L1544. The column densities have been convolved to the appropriate beam sizes. Comparing the simulated ratios to the observations tabulated in Table~3 shows that the model reproduces well the HDCX/$\rm H_2CX$ ratios (X=O or S), while the amount of double deuteration is again underestimated by the model by an order of magnitude. Note that in the core model, the $\rm D_2CO/HDCO$ ratio is enhanced with respect to the $\rm HDCO/H_2CO$ ratio when compared to the results of the single-point simulations (Fig.\,\ref{fig:dh_ratios}). In the core model, the temperature ranges from $\sim$6 K in the centre to $\sim$20 K in the outer core.
The efficiency of deuterium chemistry is sensitive to the temperature, and hence the spatial variations in temperature affect deuteration across the core. The ratios shown in Figure~\ref{fig:L1544} are the result of a line-of-sight integration of the column density which includes these spatial variations. For this reason, the results of the core model cannot be compared one-to-one to the single-point models, which adopt a constant temperature of 10\,K. The deuterium fractions of $c$-C$_3$H$_2$ are slightly underestimated by the model, but lie within a factor of about two of the observed values. In this case too the model predicts an enhancement of the doubly-to-singly deuteration ratio over the singly-deuterated-to-normal ratio\footnote{And the opposite in the single-point models (not shown in Fig.\,\ref{fig:dh_ratios}).}. The simulation results are naturally sensitive to the adopted model parameters. We have tried out several variations of our models in an effort to boost the doubly-to-singly deuterated ratios: 1) gas-phase chemistry only; 2) multilayer ice chemistry; 3) modifications to the branching ratios of surface reactions to promote the formation of doubly-deuterated molecules; 4) decreased activation energies for the formation of D-bearing molecules on the grain surface; 5) switching from complete scrambling to proton hop as the main deuteration mechanism \citep{Sipila19b}. Some of these schemes are able to boost the doubly-to-singly deuterated ratios of $\rm H_2CO$ and $\rm H_2CS$ to a level of $\sim$0.5, but always at the associated cost of increasing the HDCX/$\rm H_2CX$ ratios as well. It remains unknown why the doubly-to-singly deuterated ratios in L1544 are boosted for $\rm H_2CO$ and $\rm H_2CS$, but not for $c$-C$_3$H$_2$ (see Table~3). It is however important to note that the main difference between $c$-C$_3$H$_2$ and H$_2$CO or H$_2$CS is that towards cold cores $c$-C$_3$H$_2$ is formed and deuterated only by gas-phase reactions \citep{spezzano13}, while H$_2$CO and H$_2$CS need a combination of reactions in the gas phase and on the surface of dust grains. \begin{figure*} \centering \includegraphics[width=1.8\columnwidth]{dh_ratios-eps-converted-to.pdf} \caption{Simulated deuterium fraction ratios of $\rm H_2CO$ and $\rm H_2CS$ as functions of time in the single-point model (0D). The top row displays gas-phase abundance ratios, while the grain-surface ratios are displayed on the bottom row. From left to right, the columns correspond to a volume density of $n({\rm H_2}) = 10^4\,\rm cm^{-3}$, $n({\rm H_2}) = 10^5\,\rm cm^{-3}$, or $n({\rm H_2}) = 10^6\,\rm cm^{-3}$, respectively. The asterisks denote grain-surface molecules.} \label{fig:dh_ratios} \end{figure*} \begin{figure} \centering \includegraphics[width=1\columnwidth]{columnDensityRatios_L1544.pdf} \caption{Simulated column density deuterium fractions of $\rm H_2CO$, $\rm H_2CS$, and $c$-C$_3$H$_2$ as functions of time in the L1544 model (1D).} \label{fig:L1544} \end{figure} \section{Discussion} \subsection{Deuteration maps} The N(HDCS)/N(H$_2$CS) deuteration map shown in the left panel of Figure~\ref{fig:H2CS_ratio} presents a peak towards the north-east of L1544. The column density ratio error map is shown in Figure~\ref{fig:error}. H$_2$CS is the first molecule whose deuteration peak is not coincident with the dust peak, or in its vicinity within 5000 AU. The deuteration peak of H$_2$CS towards the north-east seems to suggest that its deuteration is more efficient in the outer layers of the core.
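A minimal sketch of the map operation behind Figure~\ref{fig:H2CS_ratio} (a reconstruction under stated assumptions, not the actual pipeline) is the pixel-wise ratio with the 3$\sigma$ masking described in the caption, together with standard error propagation:

\begin{verbatim}
import numpy as np

# Hedged sketch: pixel-wise deuteration map N(HDCS)/N(H2CS) with 3-sigma
# masking and error propagation.  The column density maps and their 1-sigma
# maps are assumed to be 2-D arrays on the common 6" grid.
def ratio_map(N_num, sig_num, N_den, sig_den, snr_cut=3.0):
    mask = (N_num > snr_cut * sig_num) & (N_den > snr_cut * sig_den)
    R = np.where(mask, N_num / N_den, np.nan)
    # relative errors add in quadrature for a ratio
    sig_R = R * np.sqrt((sig_num / N_num) ** 2 + (sig_den / N_den) ** 2)
    return R, np.where(mask, sig_R, np.nan)

# Toy 1-pixel example with the dust-peak column densities of this paper:
R, sig = ratio_map(np.array([[0.8e12]]), np.array([[0.1e12]]),
                   np.array([[6.9e12]]), np.array([[0.6e12]]))
print(f"N(HDCS)/N(H2CS) = {R[0, 0]:.2f} +/- {sig[0, 0]:.2f}")   # ~0.12 +/- 0.02
\end{verbatim}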
The normal and deuterated isotopologues of H$_2$CS do not necessarily trace the same regions within pre-stellar cores: we expect H$_2$CS to be present also in the external layers of L1544, while HDCS and D$_2$CS will be efficiently produced only in the inner 6000 AU of the core, the so-called deuteration zone \citep{caselli12}. When moving from the dust peak towards the N(HDCS)/N(H$_2$CS) peak in the north-east, the column density of H$_2$CS shows a steeper decrease than the HDCS column density. While the H$_2$CS column density drops by 50\%, the HDCS column density drops by only 20\%, suggesting that the deuteration peak towards the north-east of L1544 might be a consequence of the steeper drop of H$_2$CS in the outer layers of L1544, and not the result of a local enhancement in the deuteration of the molecule. The chemistry in the outer layers of L1544, where we expect only H$_2$CS and not its deuterated isotopologues to be present, is more affected by the uneven illumination onto the core than that of the inner layers; in turn, the distribution of H$_2$CS is not expected to be even either, but to peak towards the south, the $c$-C$_3$H$_2$ peak, and decrease towards the north-east, the CH$_3$OH peak \citep{spezzano16, spezzano17}. Since the column density measures the column of molecules along the line of sight, it takes into account the molecules present in the outer as well as in the inner layers of the core, and the column density ratios that we measure at different offsets are affected by the inhomogeneous distribution of the molecule in the different layers of the core. The illumination does not have an effect on the D/H ratio itself; it only dilutes the N(HDCS)/N(H$_2$CS) ratio towards the south of L1544, where H$_2$CS is more efficiently formed in the outer layers of the core. This is due to the fact that more efficient formation of H$_2$CS is expected in regions where C atoms are not mainly locked in CO (as in the southern region of L1544, rich in carbon-chain molecules; \citealt{spezzano17}). The same behaviour as for H$_2$CS is seen also in CH$_3$OH and H$_2$CO \citep{chacon19}, although H$_2$CS shows the largest distance of the deuteration peak from the dust peak ($\sim$10000 AU). The deuteration peaks of methanol and formaldehyde are shifted by a few thousand AU towards the south-west and north-west of the dust peak, respectively, in agreement with the direction of steepest decrease of the column density of the main isotopologue. The situation is slightly different for N$_2$H$^+$ and HCO$^+$, whose deuteration maps have been studied in \cite{redaelli19}. N$_2$H$^+$ forms directly from molecular nitrogen, N$_2$, a late-type molecule that is not very abundant in the outer layers of L1544, and as a consequence N$_2$H$^+$ is more abundant towards the center of the core \citep{caselli99, hily-blant10}. N$_2$H$^+$ shows signs of depletion only at the very center of starless cores \citep{bergin02, caselli02, redaelli19}, where N$_2$ also starts to freeze out. Both N$_2$H$^+$ and N$_2$D$^+$ are centrally concentrated in starless cores, and consequently the N(N$_2$D$^+$)/N(N$_2$H$^+$) map also peaks at the centre of L1544. HCO$^+$ and DCO$^+$, instead, are also quite abundant in the cloud surrounding the core, and their column density maps and deuteration map reflect the overall gas distribution. In the case of L1544, N(HCO$^+$) and N(DCO$^+$), as well as their ratio map, peak towards the north-west of the dust peak (see the bottom panel of Figure 8 and Figure 10 in \citealt{redaelli19}).
It is important to note that the N(HDCS)/N(H$_2$CS) ratio is 0.12$\pm$0.02 at the dust peak and 0.27$\pm$0.07 at the deuteration peak towards the north-east, so the difference in the deuteration at the two peaks is significant only at the 2$\sigma$ level. Furthermore, the position of the N(HDCS)/N(H$_2$CS) peak is at the border of the 3$\sigma$ contour of the HDCS column density map. Maps with higher sensitivity and angular resolution are needed in order to draw quantitative conclusions on the possible local increase of the deuteration fraction of H$_2$CS towards the north-east. \subsection{Deuteration towards the dust peak} With the D$_2$CS map not extending much across the core, it is not possible to draw conclusions about the N(D$_2$CS)/N(H$_2$CS) and N(D$_2$CS)/N(HDCS) deuteration maps. However, we can compare the single and double deuteration of H$_2$CS with those of cyclopropenylidene, $c$-C$_3$H$_2$, and H$_2$CO, previously observed in their singly and doubly deuterated isotopologues towards the dust peak of L1544 \citep{spezzano13,chacon19}. The column densities and deuteration ratios are summarised in Table \ref{table:ratios}. Three main conclusions can be drawn from the numbers reported in Table \ref{table:ratios}: i) in the case of $c$-C$_3$H$_2$, N($c$-C$_3$HD) / N($c$-C$_3$H$_2$) $\sim$ N($c$-C$_3$D$_2$) / N($c$-C$_3$HD) $\sim$ 10$\%$; ii) H$_2$CS is more efficiently deuterated than H$_2$CO; iii) the column densities of the singly and doubly deuterated isotopologues of H$_2$CS and H$_2$CO are the same within error bars, leading to D$_2$CX/HDCX $\sim 100\%$ (with X= S or O). As discussed in Section~\ref{chemical_models}, the deuteration of $c$-C$_3$H$_2$ is reproduced fairly well (to within a factor of $\sim$2) by the chemical models with gas-phase reactions. $c$-C$_3$HD is mainly formed by the reaction of $c$-C$_3$H$_2$ with H$_2$D$^+$ or other deuterated isotopologues of H$_3^+$, followed by dissociative recombination with electrons. $c$-C$_3$D$_2$ is formed in the same fashion from the reaction of $c$-C$_3$HD with H$_2$D$^+$. The formation of H$_2$CO and H$_2$CS instead occurs both in the gas phase and on the surface of dust grains (by hydrogenation of CO and CS). Despite the similar formation pathways, H$_2$CS is more efficiently deuterated than H$_2$CO as a consequence of the longer time spent on the surface because of its higher binding energy. Another noticeable difference among the deuteration of these three molecules is that the column densities of the singly and doubly deuterated H$_2$CO and H$_2$CS are, within error bars, the same, while the column density of the doubly deuterated $c$-C$_3$H$_2$ is 10$\%$ of the column density of the singly deuterated one. A larger D$_2$CO/HDCO ratio (12$\%$) with respect to the HDCO/H$_2$CO ratio (6$\%$) has been observed towards the protostellar core IRAS 16293-2422 B with ALMA observations \citep{persson18}. The results of \cite{persson18} can be reproduced with the gas-grain chemical model described in \cite{taquet14}, where the physical and chemical evolution of a collapsing core is followed until the end of the deeply-embedded protostellar stage (Class 0). The time step that best reproduces the observations of \cite{persson18} is 1$\times$10$^5$ yr, at the beginning of the Class 0 stage, suggesting that the deuterium fractionation observed for H$_2$CO in IRAS 16293-2422 B is mostly inherited from the pre-stellar core phase.
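As a point of reference (our illustration, not part of the original analysis), it is useful to compare the observed ratios with the purely statistical expectation for two equivalent H sites deuterated independently with probability $p$, for which D$_2$CX/HDCX $=\frac{1}{4}\,$HDCX/H$_2$CX; the observed D$_2$CX/HDCX $\sim 100\%$ then corresponds to a roughly 30-fold enhancement over this baseline:

\begin{verbatim}
# Hedged sketch: binomial (purely statistical) deuteration of two equivalent
# H sites, each replaced by D with probability p.
def statistical_ratios(p):
    h2, hd, d2 = (1 - p) ** 2, 2 * p * (1 - p), p ** 2
    return hd / h2, d2 / hd

hd_h2, d2_hd = statistical_ratios(p=0.06)   # p chosen so HD/H2 ~ 0.12
print(f"statistical: HDCS/H2CS = {hd_h2:.2f}, D2CS/HDCS = {d2_hd:.3f}")
print("observed   : HDCS/H2CS = 0.12,  D2CS/HDCS = 1.00  (Table 3)")
\end{verbatim}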
It is interesting to note that towards the protostellar core IRAS 16293-2422 B, as well as towards the pre-stellar core L1544, the HDCS/H$_2$CS ratio is larger than the HDCO/H$_2$CO ratio, suggesting an inheritance of H$_2$CO, H$_2$CS and their deuterated isotopologues from the pre-stellar to the protostellar phase. We were however unable to reproduce the D$_2$CX/HDCX ratios (with X= S or O) observed towards the pre-stellar core L1544, despite using the same reaction schemes used in \cite{taquet14}, which include the abstraction and substitution reactions studied in the laboratory by \cite{hidaka09}. The deuteration efficiency is affected when the chemical model is coupled with a hydrodynamical description of core collapse, instead of using a static physical model as is done here, but even in such a case a D$_2$CX/HDCX = 100\% ratio cannot be reached (see Fig. 12 in \citealt{sipila18}). The deuteration on the surface is boosted significantly when using a three-phase model with respect to the two-phase model, but it still fails to reproduce the D$_2$CX/HDCX and HDCX/H$_2$CX ratios observed for H$_2$CO and H$_2$CS towards the centre of L1544. Our chemical models can reproduce fairly well the HDCX/H$_2$CX ratios for both H$_2$CO and H$_2$CS, suggesting that the reaction networks for the formation of the doubly deuterated H$_2$CS and H$_2$CO are not yet complete. Additionally, the exothermicity of the formation of D$_2$CX could be larger than for HDCX, leading to a more efficient reactive desorption for the doubly deuterated isotopologues. Different reactive desorption rates for the different isotopologues are not implemented in our chemical models, where all molecules desorb from the grains with a constant efficiency (1\%). However, as pointed out in Section~\ref{chemical_models}, a higher desorption rate for D$_2$CO and D$_2$CS alone will not be able to reproduce the observations. \cite{marcelino05} observed H$_2$CS, HDCS and D$_2$CS towards Barnard 1 and found N(HDCS)/N(H$_2$CS)$\sim$N(D$_2$CS)/N(HDCS)$\sim$0.3. The authors used steady-state gas-phase chemical models and were able to reproduce the observed column density ratios. However, at the time many surface processes were still poorly known, and they were not included. Efficient H-D substitution reactions on the surfaces may play an important role, and more laboratory work is needed to quantify their rates. \section{Conclusions} We carried out a comprehensive observational study of the deuteration of H$_2$CS towards the pre-stellar core L1544, and compared its deuteration with that of other molecules observed towards the same source. We find that the N(HDCS)/N(H$_2$CS) deuteration map presents a peak towards the north-east of L1544. Given that the column density of the main species towards the north-east of the core drops faster than that of the deuterated isotopologue, we suggest that the deuteration peak of H$_2$CS towards the north-east could be a consequence of the steeper drop of H$_2$CS in the outer layers of L1544. However, deeper integrations and observations at higher angular resolution are needed to draw conclusions on the H$_2$CS deuteration peak towards the north-east of L1544. The present results imply that the large deuteration of H$_2$CO and H$_2$CS observed in protostellar cores as well as in comets could be inherited from the pre-stellar phase, as suggested by previous works.
We compared the single and double deuteration of $c$-C$_3$H$_2$, H$_2$CS and H$_2$CO and found that while for $c$-C$_3$H$_2$ both deuteration ratios are $\sim$10\%, for H$_2$CS and H$_2$CO the second deuteration is more efficient than the first one, leading to similar column densities for the singly and doubly deuterated isotopologues. We used state-of-the-art chemical models to reproduce the observed column density ratios and found that the deuteration of $c$-C$_3$H$_2$ can be very well reproduced for both the single and the double deuteration, but this is not the case for H$_2$CO and H$_2$CS. Our models can reproduce well the column densities of H$_2$CO, H$_2$CS, HDCO and HDCS, but fail to reproduce the observed large D$_2$CO and D$_2$CS column densities, suggesting that the reaction networks for the formation of the doubly deuterated H$_2$CS and H$_2$CO are not yet complete. More laboratory work should be dedicated to the study of H-D substitution reactions on the surface of dust grains. A more efficient reactive desorption for the doubly deuterated isotopologues with respect to the singly deuterated and main isotopologues might also play a role. \section*{Acknowledgments} S.S. thanks the Max Planck Society for the Independent Max Planck Research Group funding. AF and GE thank the Spanish MICINN for funding support from PID2019-106235GB-I00.
\section{Introduction} This lecture is concerned with the three questions raised by the Mayor of Toledo at the reception held during this conference: {\bf ?`De d\'onde venimos?} Namely, what is the origin of the structure we see in the Universe? {\bf ?`Qu\'e somos?} Namely, what is the nature of the Dark Matter around us? and {\bf ?`Ad\'onde vamos?} Namely, what lies beyond the Standard Model? \section{On the Origin of Structure in the Universe} \subsection{How Much Dark Matter?} Naturalness and inflation \cite{infl} suggest that the density averaged over the universe as a whole should be very close to the critical density, which marks the boundary between a universe that expands forever and one which eventually collapses, i.e. $\Omega\equiv\rho/\rho_c\simeq$1. On the other hand, the matter we can see shining in stars, in dust, etc. amounts only to $\Omega \simeq 0.003$ to $0.01$ \cite{copi}, as seen in fig.~1. The commonly-agreed concordance between big bang nucleosynthesis calculations \cite{Copietal} and the observed abundances of light elements suggests that $\Omega_{baryons} \lsim 0.1$, as also seen in fig.~1. This concordance has recently been questioned, and it has even been suggested that big bang nucleosynthesis may be in crisis \cite{Hata}. I do not share this view (see also \cite{OS},\cite{Copietal},\cite{OlSc}). For one thing, I have long believed that all the systematic errors in the relevant physical quantities were not taken fully into account \cite{eens},\cite{subir},\cite{highz}. For another, I have less faith than some \cite{Hata} in models of the chemical evolution of the galaxy (see also \cite{Copietal,OlSc}). Finally, I would prefer not to treat systematic errors as ``top hats'', as was done in \cite{Hata}, which cuts off the tails of the distributions and leads to estimates of confidence levels that are difficult to interpret. Also shown in fig.~1 is an estimate of $\Omega_{baryons}$ from X-ray observations of rich clusters \cite{Xray}, which tends to lie somewhat higher than the big bang nucleosynthesis estimate. However, the original rich cluster estimate was made in a pure cold dark matter model. It is modified in the type of mixed dark matter model to be discussed later \cite{newrich}, and could also be reduced if the clusters are not virialized. In any case, the possible discrepancy in fig.~1 is not very significant for values of $H_0$ in much of the favoured range discussed below. The big bang nucleosynthesis estimate of $\Omega_{baryons}$ is comparable to the amount $\Omega_{halo}$ of matter that is suggested by observations of rotation curves \cite{rotcurv} and the virial theorem to be contained in galactic haloes: $\Omega_{halo} \simeq 0.1$. In principle, the galactic haloes could be purely baryonic, although they seem unlikely to be made out of gas, dust or ``snow balls'' \cite{hegyi}. As you know, there has recently been considerable interest in the possibility that haloes might be largely composed of ``brown dwarfs'' weighing less than $\simeq 1/10$ of the solar mass.
The searches for such ``failed stars'' in our galactic halo via microlensing \cite{Pacz} of stars in the Large Magellanic Cloud in fact indicate \cite{lmc} that only a fraction \begin{equation} f = 0.20^{+0.33}_{-0.14}\,{\rm [MACHO]},\, <0.5\,{\rm [EROS]} \label{E71} \end{equation} is composed of brown dwarfs \cite{Masso},\cite{newMACHO}, assuming a simple spherical halo model, which would have a local density \begin{equation} \rho_{halo} = 0.3 \ \hbox{GeV/cm}^3 \times 1.5^{0\pm1} \label{E72} \end{equation} The possibility has recently been reconsidered \cite{flat} that our halo is in fact significantly flattened, in which case the estimate (\ref{E72}) of the local density should be increased, and the brown dwarf fraction (\ref{E71}) correspondingly decreased. To be on the conservative side when discussing cold dark matter detection rates later in this talk, I will retain the spherical halo estimate (\ref{E72}). It should be emphasized that, in the standard theory of structure formation reviewed in the next section, our halo {\it must} contain a large fraction of non-baryonic cold dark matter \cite{GatTurn}. On the other hand, conventional infall models of galaxy formation suggest \cite{EllSik} that our halo is unlikely to be composed mainly of massive neutrinos, at least if their mass is chosen to yield $\Omega_{hot} \simeq 0.2$ as suggested in the next section. These observations follow from the need for cold dark matter to boost galaxy formation, whereas the phase-space density of neutrinos is severely restricted \cite{TremaineGunn}. The dominant component of our galactic halo (\ref{E72}) should therefore be some form of cold dark matter. Before addressing in more detail the nature of the non-baryonic dark matter, I will first comment on the age and Hubble expansion rate of the Universe, which have recently been the subject of some controversy \cite{contro}. Globular clusters seem to be at least $14 \pm 3$ Gyr old, and nucleocosmochronology suggests an age of $13 \pm 3$ Gyr \cite{copi}. Taken together, these constraints suggest that the Universe cannot be younger than $10$ Gyr, and that a greater age would be more comfortable. The question is whether such an age is compatible with current estimates of the Hubble constant $H_0$ (in km/s/Mpc), some of which are listed in Table~1. These may be combined \cite{mrr} to yield the estimate \begin{equation} H_0 = 66 \pm 13 \label{hzero} \end{equation} where the central value is statistical, and the error is supposed to be realistic, particularly in view of the fact that any determination of $H_0$ involves the combination of many steps. For example, there has recently been a second determination based on Hubble Space Telescope observations \cite{Leo} of Cepheid variables (which have their own intrinsic uncertainties), which must rely on other rungs in the cosmic distance ladder, such as the distance to the Large Magellanic Cloud, as well as the extrapolation from Leo to the Coma cluster. Errors in all of these must be combined in order to arrive at the total uncertainty in $H_0$. \begin{center} \[ \left. \begin{array}{l} 55 \pm 8 \\ 67 \pm 7 \end{array}~ \right\} {\rm Type ~IA~ supernovae} \begin{array}{l} ~~{\rm (Sandage~ et~ al.)} \\ ~~{\rm (Riess ~et~ al.)} \end{array}\] \[ \begin{array}{l} 73 \pm 13 \end{array}~~ {\rm Type ~II~ supernovae} \begin{array}{l} ~~~~{\rm (Schmidt~ et~ al.)} \end{array}\] \[ \left.
\begin{array}{l} 60 \pm 10 \\ 70 \pm 25 \end{array} \right\} {\rm Gravitational ~Lensing} \begin{array}{l} {\rm (Lehar~ et~ al.)} \\ {\rm (Wilkinson ~et~ al.)} \end{array}\] \[ \begin{array}{l} 55 \pm 17 \end{array}~~ {\rm Sunyaev-Zeldovich} \begin{array}{l} ~~{\rm (Birkinshaw~ et~ al.)} \end{array}\] \[ \begin{array}{l} 80 \pm 17 \end{array}~~ {\rm Virgo ~Cepheids} \begin{array}{l} ~~~~~~~~~~{\rm (Freedman~ et~ al.)} \end{array}\] \[ \begin{array}{l} 69 \pm 8 \end{array}~~ ~~{\rm Leo ~I~ Cepheids} \begin{array}{l} ~~~~~~~~~~{\rm (Tanvir~ et~ al.)} \end{array}\] Table 1 - Recent determinations of $H_0$ (in km/s/Mpc) \end{center} The range (\ref{hzero}) is shown on the vertical axis of fig.~1, where we see that there is no incompatibility between the age of the Universe being $10$ Gyr old and $\Omega = 1$ as wanted by inflation \cite{infl}, as long as $H_0$ is in the lower part of the range (\ref{hzero}). Therefore, I see no immediate need for inflation theorists to explore models in which $\Omega$ is significantly below unity \cite{lowO}, which do not, in any case, look very natural to me. Assuming that indeed $\Omega \simeq 1$, at least $90 \%$ of the matter in the Universe must be unseen non-baryonic dark matter. \subsection{Hot or Cold Dark Matter?} In addition to the above arguments based on contributions to $\Omega$, non-baryonic dark matter is required for structure formation, because it enables density perturbations to grow via gravitational instability even before recombination, while perturbations in the conventional baryonic matter density are still restrained by the coupling to radiation. Which structures form when depends whether the non-baryonic dark matter was relativistic or non-relativistic at the cosmological epoch when structures such as galaxies and clusters began to form, which is the distinction between ``hot" and ``cold". Whether you favour hot or cold dark matter depends on your favourite theory of structure formation. If you believe that its origins lie in an approximately scale-invariant Gaussian random field of density perturbations, as suggested by inflationary models \cite{perts}, then you should favour cold dark matter. This is because it enables perturbations to grow on all distance scales, whereas relativistic hot dark matter escapes from small-scale perturbations, whose growth via gravitational instabilities is thereby stunted \cite{sformation}. Thus galaxies form later in a scenario based on Gaussian fluctuations and hot dark matter than they would in a scenario with cold dark matter, indeed, too late. For this reason, the combination of Gaussian perturbations with cold dark matter has come to be regarded as the ``standard model" of structure formation. However, if you believe that structures originated from seeds such as cosmic strings \cite{strings}, then you should prefer hot dark matter, because cold dark matter would then give too much power in perturbations on small distance scales. Fig. 2 shows a compilation \cite{comp} of data on the power spectrum of astrophysical perturbations, as obtained from earlier COBE \cite{COBE} and other observations of the cosmic microwave background radiation, and direct astronomical observations of galaxies and clusters. Subsequent to this compilation, data from the full 4 years of COBE DMR data have been made available \cite{COBE4}. These show no indications of non-Gaussian correlations \cite{COBEGau}, and are consistent with a scale-invariant spectrum \cite{COBEflat}, in agreement with inflationary models. 
However, models of structure formation based on cosmic strings would not predict non-Gaussian correlations observable in the present data, and would also yield a flat spectrum. Therefore, such models cannot yet be excluded, though I will not address them further in this talk \cite{strings}. The overall normalization of the perturbation spectrum is of interest to inflation theorists, since it specifies the scale of the inflationary potential in field-theoretical models. Parametrizing this by $V = \mu^4 \bar V$, where $\bar V$ is a dimensionless function of order unity, one finds that \begin{equation} \delta \rho / \rho \simeq \mu^2 G_N \label{delrho} \end{equation} Taking the normalization of $\delta \rho / \rho$ from the COBE data \cite{COBE}, one may estimate \begin{equation} \mu \simeq 10^{16}\ {\rm GeV} \label{inflscale} \end{equation} which is eerily close to the usual estimate \cite{susygut} of the scale of supersymmetric grand unification. A related quantity of physical interest is the mass of the quantum of the inflationary field, the inflaton: \begin{equation} m_{infl} \simeq 10^{13}\ {\rm GeV} \label{inflmass} \end{equation} which may have implications for baryogenesis and neutrino masses, as discussed later. The scale (\ref{inflscale}) also determines the reheating temperature at the end of the inflationary epoch, which is of relevance to calculations of the potentially-dangerous relic gravitino abundance \cite{gravitino} in supersymmetric models. The perturbations discovered by COBE and other experiments may in general be a combination of density (scalar) and gravity wave (tensor) fluctuations $A_{S,G}$, whose ratio depends on details of the inflationary potential: \begin{equation} A_S / A_G = \sqrt{4 \pi G_N} H / |H'| \label{ratio} \end{equation} The ratio (\ref{ratio}) exceeds unity if the inflaton field accelerates during inflation, as expected, but the COBE experiment is sensitive to the combination $\simeq 25~A_G^2 /2~A_S^2$, so gravity waves could be important. Nevertheless, one usually assumes, as above, that scalar perturbations are dominant. A goal for future experiments is to disentangle the scalar and tensor contributions, and to measure the possible `tilts' of their spectra: \begin{equation} n_{S,G} \equiv 1\,-\,({\rm d}/{\rm d}\ln\lambda)\, A_{S,G} \label{tilt} \end{equation} so as to map out the inflaton potential \cite{map}. Fig.~3 compiles data on fluctuations in the cosmic microwave background radiation \cite{Tegmark}, and provides the basis for a discussion of the issues arising in these future measurements. The original COBE measurements at scales larger than the horizon at recombination are conventionally interpreted as due to the Sachs-Wolfe effect: \begin{equation} \delta T / T \simeq - \delta \phi / 3 \label{SachsWolfe} \end{equation} where $\phi$ is the gravitational potential. There are by now many detections in the region within the horizon at recombination, where the first D\"oppler peak is expected to appear, with \begin{equation} \delta T / T \simeq v, \label{Doppler} \end{equation} where $v$ is the baryonic matter velocity. The existence of this first D\"oppler peak cannot yet be regarded as confirmed, but the outlook for models which do not predict it does not look very bright \cite{KogMin}.
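Parenthetically, it may be worth making the arithmetic behind the estimate (\ref{inflscale}) explicit (a rough back-of-the-envelope step, ignoring factors of order unity and the details of the slow-roll dynamics): writing $G_N = 1/M_P^2$ with $M_P \simeq 1.2\times 10^{19}$ GeV, the COBE normalization $\delta\rho/\rho \sim 10^{-5}$ inserted in (\ref{delrho}) gives \[ \mu \,\sim\, \left({\delta\rho\over\rho}\right)^{1/2} M_P \,\sim\, 10^{-5/2}\times 1.2\times 10^{19}\ {\rm GeV} \,\sim\, 4\times 10^{16}\ {\rm GeV} \] which is the origin of the statement that $\mu$ lies close to the supersymmetric grand unification scale.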
The COBE data alone yield an error of $\pm0.3$ on the effective spectral index \cite{COBEflat}, whereas a combined fit to the available data indicates \cite{fit} the following range: \begin{equation} n \simeq 1.1 \pm 0.1 \label{index} \end{equation} Cosmic string models \cite{strings} are consistent with this and the apparently Gaussian nature of the fluctuations seen at large scales: a key test will be whether they still look Gaussian in the region of the first D\"oppler peak. Within the standard model of structure formation, the height of this peak will be a measure of $\Omega_{baryons}$, and its location on the horizontal axis is sensitive to the total $\Omega$: $l\sim 220/\sqrt{\Omega}$. Cold dark matter and related models predict further D\"oppler peaks, which can only be resolved with a higher-resolution experiment. Recall that the COBE resolution of a few degrees includes a comoving volume that will later contain several hundred clusters of galaxies: future experiments are aiming at resolutions of a fraction of a degree. The drop-off in fig.~3 visible at still smaller scales is due to the thickness of the last scattering surface. The solid line which does not quite pass through all the points in fig.~2 is one calculated in the above-mentioned standard model of Gaussian scalar fluctuations and cold dark matter, assuming there is no tilt in the initial spectrum. Crucial tests of this and other models of structure formation will be provided by future measurements of the cosmic microwave background radiation and of larger-scale structure in the region of the bump in fig.~2, e.g., by the proposed COBRAS/SAMBA satellite and the ongoing Sloan Digital Sky Survey. The present discrepancies from this curve indicate that there is less perturbation power at small distances than would be expected in this theory, compared with the COBE normalization at large distance scales. This and other observations have suggested that it may be necessary to modify the pure cold dark matter model. Several suggestions have been offered, including a non-zero cosmological constant and a tilt in the spectrum of Gaussian perturbations away from scale invariance. However, the preferred scenario seems to be an admixture of hot dark matter together with the cold, resulting in the following cocktail recipe for the Universe \cite{mdm}: \begin{equation} \Omega_{cold} \simeq 0.7\,,~~ \Omega_{hot} \simeq 0.2,\, ~~ \Omega_{baryons} \mathrel{\mathpalette\@versim<} 0.1 \label{E73} \end{equation} The way in which this scenario works is illustrated in fig.~4. Hot dark matter alone would give a spectrum of perturbations that dies out at small scales, whereas cold dark matter does not. Combining the two, one can reconcile the relatively high COBE normalisation at large scales with the relatively small perturbations seen at small scales. Fig.~5, which is adapted from \cite{Caldwell}, illustrates the performances of various dark matter models of structure formation, as compared to measurements on various different distance scales. We see that a pure cold dark matter model has severe problems at smaller scales. These may be somewhat alleviated by the introduction of biasing or a cosmological constant $\Lambda$, but there are still problems on galactic scales. A mixed dark matter model (\ref{E73}) with the hot dark matter provided by a single neutrino species of mass $\simeq 5$ eV works quite well, except possibly for the density of clusters.
The authors of \cite{Caldwell} prefer for this reason a model with two neutrino species each weighing $\simeq 2.5$ eV, which is also motivated by their interpretation of the LSND experiment \cite{LSND}. However, I do not see the necessity for this embellishment of the mixed dark matter model, and remain to be convinced by the LSND data, as I discuss in the next section. \section{On the Nature of the Dark Matter} \subsection{Neutrino Masses and Oscillations} Theorists have been saying for years that there is no fundamental reason why neutrino masses should vanish, and oscillations are inevitable if they are non-zero. However, for the time being, we only have the following upper limits on neutrino masses: \begin{equation} m_{\nu_e} < 4.5\,\hbox{eV},\,m_{\nu_{\mu}} < 160\,\hbox{keV},\, m_{\nu_{\tau}} < 23\,\hbox{MeV} \label{numasses} \end{equation} Cosmology in the form of big bang nucleosynthesis is close to strengthening the above upper limit on $m_{\nu_{\tau}}$ to a fraction of an MeV \cite{nutaumass}, if the $\nu_{\tau}$ is a long-lived Majorana particle. There are many models for neutrino masses \cite{Valle}, which I will not discuss here. Instead, I will be inspired by the simplest see-saw mass matrix \cite{seesaw}: \begin{equation} (\nu_L, \bar\nu_R) \left(\matrix{m_M&m_D\cr m_D & M_M}\right)~~ \left(\matrix{\nu_L\cr \bar\nu_R}\right) \label{seesaw} \end{equation} where $m_M$ and $M_M$ are Majorana masses for the left- and right-handed neutrinos $\nu_{L,R}$, respectively, and $m_D$ is a Dirac mass coupling $\nu_L$ and $\nu_R$. All of $m_{M,D},M_M$ are to be understood as matrices in flavour space. We expect $M_M$ to be comparable (on a logarithmic scale) with the grand unification scale $M_X$, and the Dirac masses $m_D$ to be comparable with the corresponding charge 2/3 quark masses $m_{2/3}$. We know from the experimental absence to date of neutrinoless double-$\beta$ decay that \begin{equation} \langle m_{\nu_e}\rangle_M~\lsim~1/2~\hbox{eV} \label{double} \end{equation} and diagonalization of (\ref{seesaw}) naturally suggests that \begin{equation} m_{\nu}~\simeq~m_D^2/M_M \label{small} \end{equation} for the known light neutrinos. If indeed $m_D \simeq m_{2/3}$, we may expect that for the three light neutrino flavours \begin{equation} m_{\nu_i} \simeq {m_{{2/3}_i}^2 \over M_{M_i}} \label{nuhier} \end{equation} The heavy Majorana masses $M_{M_i}$ are not necessarily universal, but (\ref{nuhier}) nevertheless suggests that \begin{equation} m_{\nu_e} \ll m_{\nu_{\mu}} \ll m_{\nu_{\tau}} \label{much} \end{equation} and the $\nu_{\tau}$ mass could be in the range of interest to hot or mixed dark matter models if $M_{M_3} \simeq 10^{12}$ GeV. To my mind, the most serious evidence for neutrino oscillations, and, by extension, for neutrino masses, is the persistent and recurrent solar neutrino deficit seen by four experiments \cite{solarnu}, as compared with standard solar model calculations \cite{Bahcall}. There was much heated debate at this meeting on the interpretation of these data \cite{debate}, and on the uncertainties in the theoretical calculations. It seems to me that the crispest way of posing the dilemma is to plot the data in more than one dimension, for example in the two-dimensional representation of fig.~6 \cite{nuplane}. Taken at face value, all the experiments indicate a strong suppression of the Beryllium neutrinos, and a weaker suppression of the Boron neutrinos, compared with the predictions of \cite{Bahcall}.
This major feature is very difficult to explain in plausible modifications of this standard solar model, which tend to suppress the Boron neutrinos more than the Beryllium neutrinos, as also seen in fig.~6 \cite{nuplane}. Note in particular the dotted curve which corresponds to simply changing the central temperature of the Sun, as happens in low-opacity and simple mixing models. These cannot explain the data, if the latter are taken at face value. Some models which use different input nuclear cross sections \cite{DS} even fall below the low-temperature curve: they may explain the Boron deficit, but {\it a fortiori} they cannot explain simultaneously the Beryllium deficit, as seen clearly in this two-dimensional plot. As we heard at this meeting \cite{helios}, the helioseismologists are now making it very difficult even to reduce the central temperature of the Sun. They are able to verify that the sound speed, which is closely related to the temperature, agrees with the standard solar model to within about $1 \%$ down to about $5 \%$ of the solar radius. Indeed, the helioseismological talk here \cite{helios} included revised estimates of the Boron neutrino flux that were {\it even higher} than the standard solar model. One other possibility that was mentioned at this meeting was that slow convection could alter significantly the standard solar model predictions \cite{haxton}, but this remains to be demonstrated. Two major new solar neutrino experiments will start taking data next year, namely Superkamiokande \cite{SK} and SNO \cite{SNO}. The former will provide mind-boggling statistics for solar neutrinos, and for any supernova explosion inside our galaxy. The SNO experiment should be able to tell us whether the Beryllium neutrinos are really absent, or ``merely'' converted into another $\nu$ species, thanks to its aim of measuring both the charged- and neutral-current reactions. Many other solar neutrino experiments are on the drawing boards, including Borexino \cite{Borexino}, which is also aimed at the Beryllium neutrinos, as well as HELLAZ \cite{HELLAZ} and HERON \cite{HERON}. In my view, there {\it is} a solar neutrino problem, and novel neutrino physics is the most likely explanation, though certainly not yet established. The above-mentioned experiments may resolve the issue. To make a useful theoretical contribution to the continuing debate, I believe it is insufficient to point to some possible modification of the standard solar model without giving good reason to think that (a) it is consistent with {\it all} the constraints, including, e.g., those from helioseismology, and (b) it modifies the standard solar model predictions outside the quoted errors. To support any such claim, a valuable contribution should include some solid calculations. If one does indeed take the solar neutrino deficit as an indication for novel neutrino physics, I am inclined to plump for mass mixing and oscillations rather than magnetic effects \cite{Okunetal} (the required magnetic dipole moment seems very high, and I am not impressed by claims of a time dependence in the Homestake data), and prefer the MSW scenario to vacuum oscillations.
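Before quoting the preferred oscillation parameters, a quick see-saw estimate may help to orient the numbers that follow (an illustrative order-of-magnitude exercise only, with no pretence of precision): taking $m_D \simeq m_c \simeq 1.5$ GeV for the second generation in (\ref{nuhier}), a heavy Majorana mass $M_{M_2} \simeq 2\times 10^{12}$ GeV gives \[ m_{\nu_\mu} \,\simeq\, {m_c^2\over M_{M_2}} \,\simeq\, {(1.5\ {\rm GeV})^2\over 2\times 10^{12}\ {\rm GeV}} \,\simeq\, 10^{-3}\ {\rm eV} \] which is just the mass scale invoked below for the MSW interpretation, while a comparable Majorana mass scale with $m_D \simeq m_t \simeq 175$ GeV would yield $m_{\nu_\tau} = {\cal O}(10)$ eV, in the range relevant to hot or mixed dark matter.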
In view of the prejudice (\ref{much}), the most likely interpretation then becomes $\nu_e \rightarrow \nu_{\mu}$ oscillations, with \begin{equation} \Delta m^2 \simeq 10^{-5}\,\hbox{eV}^2\,\hbox{and} \,\sin^2\theta \simeq 10^{-2} \label{MSW} \end{equation} provided by \begin{equation} m_{\nu_e} \ll m_{\nu_{\mu}} \simeq 10^{-3}\,\hbox{eV} \label{MSW2} \end{equation} Scaling $m_{\nu_{\mu}}$ (\ref{MSW2}) up by $m_t^2/m_c^2$ (\ref{nuhier}), it is easy to imagine that there could be $\nu_{\tau}$ dark matter with a mass around $10$ eV. One of the most appealing aspects of this scenario is that it may soon be tested in accelerator $\nu_{\mu} \rightarrow \nu_{\tau}$ oscillation experiments, since it suggests that $\delta m^2 \simeq 100$ eV$^2$, and many models for $\nu$ masses further suggest a value of $\sin^2 \theta$ in the range accessible to the CHORUS \cite{CHORUS} and NOMAD \cite{NOMAD} experiments that are already taking data at CERN, or the planned COSMOS experiment \cite{COSMOS} at Fermilab. If needed, increased sensitivity could in principle be attained with a next-generation detector \cite{NAUSICAA}. This interpretation of the solar neutrino data also offers the possibility of an appealing scenario for cosmological baryogenesis \cite{FY}. After an inflationary epoch at the scale (\ref{inflscale}), the inflaton with mass (\ref{inflmass}) can decay into a massive $\nu_R$ state, which then decays out of thermal equilibrium. Diagrams of the type shown in fig.~7 may produce a lepton asymmetry \cite{FY,ELNO} \begin{equation} \epsilon = {1\over 2\pi (\lambda^\dagger_L\lambda_L)_{ii}} \sum_j \left({\rm Im} \left[ (\lambda^\dagger_L\lambda_L)_{ij}\right]^2\right) f\left({M^2_j\over M^2_i}\right) \label{las} \end{equation} where the $\lambda_L$ are Yukawa couplings, $(i,j)$ are generation indices, and $f$ is a kinematic function of the $\nu_{R_i}$ masses $M_i$. The asymmetry $\epsilon$ is subject to subsequent reprocessing by electroweak sphalerons \cite{sphal} to produce a baryon asymmetry. This scenario can be valid only if $m_{infl} > m_{\nu_R}$, which imposes a lower limit on the inflation scale (\ref{inflscale}) and/or a lower limit on the light neutrino mass (\ref{nuhier}). In the longer term, ideas are afoot for long baseline $\nu$ oscillation experiments between Fermilab and Soudan II, between KEK and Superkamiokande, and between CERN and the Gran Sasso laboratory and/or the NESTOR underwater detector now under construction \cite{Cav}. In the case of the possible CERN-based experiments, the $\nu$ beam would be produced by a $120$ GeV proton beam extracted from the SPS and directed along the planned transfer line to the LHC. This type of experiment would address principally the question of possible atmospheric $\nu$ oscillations raised by the Kamiokande experiment \cite{atmo}. I am not yet convinced of the reality of this effect, and would like to see it confirmed with convincing statistics by a large experiment using a completely different experimental approach, as well as in Superkamiokande. Just for fun, let me mention the idea for what would surely be the ultimate earth-based $\nu$ oscillation experiment, namely to send a beam from CERN or Fermilab to Superkamiokande. This would involve digging a beam and decay tunnel inclined downwards at some $40$ degrees, which would certainly amuse the civil engineers! There is one final possibility of novel neutrino physics that I would like to address, namely the suggestion that neutrino oscillations may provide an explanation of the LSND data \cite{LSND}.
As you see in fig.~8, there is not much room for this explanation, given the constraints imposed by other experiments. Here again, I would like to see confirmation from a different experiment, as well as more data from the LSND experiment itself, particularly in view of the fact that there is no consensus yet on its interpretation \cite{Hill}. Fortunately, we may not have to wait long, as the LSND group promises us more information in the near future \cite{Caldwell}, and reactor experiments should soon be able to explore the region of interest. \subsection{Lightest Supersymmetric Particle} My favourite candidate for cold dark matter \cite{ehnos}, the lightest supersymmetric particle (LSP), is expected to be stable in many models, and hence present in the Universe as a cosmological relic from the Big Bang. This is because supersymmetric particles possess a multiplicatively-conserved quantum number called $R$ parity \cite{Fayet}, which takes the values $+1$ for all conventional particles and $-1$ for all their supersymmetric partners. Its conservation is a consequence of baryon and lepton number conservation, since \begin{equation} R = (-1)^{3B + L + 2 S} \label{E74} \end{equation} There are three important consequences of $R$ conservation: \begin{enumerate} \item Sparticles should always be produced in pairs. \item Heavier sparticles should decay into lighter ones. \item The LSP should be stable, since it has no legal decay mode. \end{enumerate} In order to avoid condensation into galaxies, stars and planets such as ours, where it could in principle be detected in searches for anomalous heavy isotopes \cite{isotopes}, it was argued in \cite{ehnos} that any supersymmetric relic LSP should be electromagnetically neutral and possess only weak interactions. Scandidates in the future sparticle data book include the sneutrino $\tilde\nu$ of spin $0$, some form of ``neutralino'' of spin $1/2$, or the gravitino $\tilde G$ of spin $3/2$. The sneutrino is essentially excluded by the LEP measurements of $Z^0$ decay into invisible particles, which have counted the number of light neutrino species, $2.991 \pm 0.016$ \cite{Renton}: this does not leave space for any sneutrino species weighing less than ${1\over2}M_{Z}$. A large range of heavier sneutrino masses is excluded by the underground experiments to be discussed in the next section \cite{xsneutrino}. Since the gravitino is probably impossible to discover, and is anyway theoretically disfavoured as the LSP, we concentrate on the neutralino \cite{ehnos}. The neutralino $\chi$ is a mixture of the photino $\tilde \gamma$, the two neutral higgsinos $\tilde H_{1,2}^0$ expected in the minimal supersymmetric extension of the Standard Model, and the zino $\tilde Z$. This mixture is characterized essentially by three parameters: the unmixed gaugino mass $m_{1/2}$, the Higgs mixing parameter $\mu$, and the ratio of Higgs vacuum expectation values $\tan\,\beta$. The phenomenology of the lightest neutralino is quite complicated in general, but simplifies in the limit $m_{1/2} \rightarrow 0$, where $\chi$ is approximately a photino state \cite{Goldberg}, and in the limit $\mu\rightarrow 0$, where it is approximately a higgsino state. As seen in fig.~9, experimental constraints from LEP and the Fermilab collider in fact exclude these two extreme limits \cite{erz}, so that \begin{equation} m_\chi\mathrel{\mathpalette\@versim>} (10\ \,\hbox{to}\ \,20) \,\hbox{GeV} \label{E77} \end{equation} Fig.
9 also indicates that there are generic domains of parameter space where the LSP may have an ``interesting'' cosmological relic density \cite{density}, namely \begin{equation} 0.1 \mathrel{\mathpalette\@versim<} \Omega_\chi h^2 \mathrel{\mathpalette\@versim<} 1 \label{E78} \end{equation} (where $h$ is the Hubble constant in units of 100 km/s/Mpc) for some suitable choice of supersymmetric model parameters. Fig.~10 displays the calculated LSP density in a sampling of phenomenological models \cite{BergGond,sample}, where we see that an interesting cosmological density is quite plausible for LSP masses \begin{equation} 20\ \hbox{GeV} \mathrel{\mathpalette\@versim<} m_\chi \mathrel{\mathpalette\@versim<} 300 \ \hbox{GeV} \label{E79} \end{equation} This and most calculations have made the simplifying assumption of universality in the spectrum of sparticles, and have also assumed that CP violation in the LSP couplings can be neglected. Studies exploring the relaxation of these assumptions have appeared recently. As seen in fig.~11a, it is much easier for the LSP to be a higgsino-like state if the universality assumption is relaxed \cite{Pok,Bott}, and, as seen in fig.~11b, CP violation can relax the upper limit on the LSP mass \cite{FOS}. \section{On Searches for Cold Dark Matter Particles} In this section we first review some of the strategies that have been proposed to search for relic neutralinos, and then discuss cosmological axions. In considering interaction rates for any given relic $\chi$, one should keep in mind the correlation between the overall cosmological density $\Omega_{\chi}$ and the local halo density $\rho_{\chi}$. The most reasonable assumption is that \begin{equation} \rho_{\chi} = (\Omega_{\chi}/\Omega_{cold})(1-f)\rho_{halo} \label{reasonable} \end{equation} where $f, \rho_{halo}$ and $\Omega_{cold}$ are taken from equations (\ref{E71}, \ref{E72}) and (\ref{E73}). It is not in general worthwhile calculating detection rates for relics with uninteresting cosmological densities $\Omega_{\chi} \ll 1$, and certainly not if one assumes the local density to be $\rho_{halo}$. \subsection{Annihilation in the Galactic Halo} The first neutralino search strategy that we discuss is that for the products of their annihilations in our galactic halo \cite{haloann}. Here the idea is that two self-conjugate $\chi$ particles may find each other while circulating in the halo, and have a one-night stand and annihilate each other: $\chi\chi\rightarrow\ell\ell,\bar q q$, leading to a flux of stable particles such as $\bar p, e^+,\gamma,\nu$ in the cosmic rays. Several experiments have searched for cosmic-ray antiprotons \cite{previous}, with the results shown in fig.~12. At low energies there are only upper limits, but there are several positive detections at higher energies, which are comparable with the flux expected from secondary production by primary matter cosmic rays \cite{Gaisser}. As also seen in fig.~12, relic LSP annihilation in our galactic halo might produce an observable flux of low-energy cosmic ray antiprotons somewhat below the present experimental upper limits \cite{Freese}. This calculation was made fixing the supersymmetric model parameters so that $\Omega_{\chi} = 1$, and assuming that the local halo density (\ref{E72}) is dominated by neutralinos $\chi$, and is subject to uncertainties associated with the length of time that the $\bar p$'s spend in our galactic halo.
Fluxes higher than those in \cite{Freese} may be obtained if one considers neutralinos with $\Omega_{\chi} < 1$, because they have larger annihilation cross sections. This is a logical possibility, though it would mean that neutralinos would not be the only (or even dominant) cold dark matter component, if one retains the cocktail recipe (\ref{E73}). This would have the corollary that the assumed local density should be correspondingly reduced to some fraction of (\ref{E72}), with a quadratic effect on the $\bar p$ flux, which is proportional to $\rho_{halo}^2$. In any case, it is mathematically impossible for the halo density (\ref{E72}) to be saturated by neutralinos if the annihilation cross section is so large that $\Omega_{\chi} < \Omega_{halo} \simeq 0.1$. As already mentioned, in my view one should be careful when quoting rates and limits on neutralino parameters to check consistency with reasonable postulates on $\Omega_{\chi}$ and $\Omega_{halo}$. The flux estimates of \cite{Freese} may be interpreted as suggesting that \begin{equation} \rho_\chi \mathrel{\mathpalette\@versim<} 10\,\rho_{halo} \label{E80} \end{equation} and I do not believe it is possible to be much more precise at the present time. NASA and the DOE have recently approved a satellite experiment called AMS \cite{AMS}, which should be able to improve significantly the present upper limits on low-energy antiprotons, and may be able to start constraining supersymmetric models. Finally, I note that it is also possible to derive limits on supersymmetric models from the present experimental measurements of the cosmic-ray $e^+$ \cite{HEAT} and $\gamma$ fluxes \cite{Bergstrom}, but these are not yet very constraining. \subsection{Annihilation in the Sun or Earth} A second LSP detection strategy is to look for $\chi\chi$ annihilation inside the Sun or Earth. Here the idea is that a relic LSP wandering through the halo may pass through the Sun or Earth \cite{solar}, collide with some nucleus inside it, and thereby lose energy to the recoiling nucleus. This could convert it from a hyperbolic orbit into an elliptic one, with a perihelion (or perigee) below the solar (or terrestrial) radius. If so, the initial capture would be followed by repeated scattering and energy loss, resulting in a quasi-isothermal distribution within the Sun (or Earth). The resulting LSP population would grow indefinitely, \`a la Malthus, unless it were controlled either by emigration, namely evaporation from the surface, or by civil war, namely annihilation within the Sun (or Earth). Evaporation is negligible for $\chi$ particles weighing more than a few GeV \cite{Gould}, so the only hope is annihilation. The neutrinos produced by any such annihilation events would escape from the core, leading to a high-energy solar neutrino flux ($E_{\nu} \gsim 1$ GeV). This could be detected either directly in an underground experiment, or indirectly via a flux of upward-going muons produced by neutrino collisions in the rock. (By the way, LSPs in the core of the Sun do not affect significantly its temperature, and hence have no impact on the low-energy solar neutrino problem.)
The high-energy solar neutrino flux produced in this way is given approximately by the following general formula \cite{EFR}: \begin{eqnarray} R_\nu = &2.7\times 10^{-2} \,f\left(m_\chi/m_p\right)\, \left({\sigma (\chi p \rightarrow \chi p) \over 10^{-40}\, \hbox{cm}^2}\right) \nonumber \\ & \left({\rho_\chi\over 0.3\,\hbox{GeV cm}^{-3}}\right) \left({300 \ \hbox{km\,s}^{-1}\over \bar v_{\chi}}\right) \times F_\nu \label{E81} \end{eqnarray} assuming that proton targets dominate capture by the Sun. Here $f$ is a kinematic function, $\sigma (\chi\,p \rightarrow \chi\,p)$ is the elastic LSP-proton scattering cross section, $\rho_{\chi}$ and $\bar v_{\chi}$ are the local density and mean velocity of the halo LSPs, and $F_\nu$ represents factors associated with the neutrino interaction rate in the apparatus. There is an analogous formula for the production of upward-going muons originating from the collisions in rock of high-energy solar neutrinos, and rates in a sampling of supersymmetric models are shown in fig.~13. While some models are already excluded by unsuccessful searches, most are not \cite{sample}. We see in fig.~14 that searches for solar signals usually constrain models more than searches for terrestrial signals, though this is not a model-independent fact \cite{sample}. As seen in fig.~15, in the long run it seems that a search for upward-going neutrino-induced muons with a $1$ km$^2$ detector could almost certainly detect LSP annihilation \cite{Halzen}, if most of the cold dark matter is indeed composed of LSPs. Before leaving this subject, I would like to recall that MSW oscillations may also be important for high-energy solar neutrinos, as seen in fig.~16 \cite{EFM}. Until the possible neutrino mass and oscillation parameters are pinned down, this introduces another ambiguity into the above analysis. For the time being, it would be conservative to quote upper limits on fluxes assuming that the neutrinos arriving at the detector are those for which the detector has the smallest efficiency. \subsection{Dark Matter Search in the Laboratory} The third LSP search strategy is to look directly for LSP scattering off nuclei in the laboratory \cite{GW}. The typical recoil energy \begin{equation} \Delta E < m_\chi v^2 \simeq 10 \left({m_\chi\over 10\,\hbox{GeV}}\right)\,\hbox{keV} \label{E85} \end{equation} deposited by elastic $\chi$-nucleus scattering would probably lie in the range of $10$ to $100$ keV. Spin-dependent interactions mediated by $Z^0$ or $\tilde q$ exchange are likely to dominate for light nuclei \cite{Flores}, whereas coherent spin-independent interactions mediated by $H$ and $\tilde q$ exchange are likely to dominate scattering off heavy nuclei \cite{Griest}. The spin-dependent interactions on individual nucleons are controlled by the contributions of the different flavours $q$ of quark to the total nucleon spin, denoted by $\Delta q$. These have now been determined by polarized lepton-nucleon scattering experiments with an accuracy sufficient for our purposes. Translating the $\Delta q$ into matrix elements for interactions on nuclei depends on the contributions of the different nucleon species to the nuclear spin, which must be studied using the shell model \cite{Flores} or some other theory of nuclear structure \cite{nuclear}.
The spin-independent interactions on individual nucleons are related to the different quark and gluon contributions to the nucleon mass, which is also an interesting phenomenological issue related to the $\pi$-nucleon $\sigma$-term \cite{sigma}. Again, the issue of nuclear structure arises when one goes from the nucleon level to coherent scattering off a nuclear target. It is in particular necessary to understand the relevant nuclear form factors, which are expected to exhibit zeroes at certain momentum transfers \cite{zeroes}. We will not discuss here the details of such nuclear calculations, but present in figs.~17 and 10 the results of a sampling of different supersymmetric models \cite{sample}. We see in fig.~17 that the spin-independent contribution tends to dominate over the spin-dependent one in the case of Germanium, though this is not universally true, and would not be the case for scattering off Fluorine \cite{Flores}. In fig.~10 we plot the scattering rates off $^{73}$Ge, where we see that there are many models in which more than $0.01$ events/kg/day are expected, which may be observable \cite{BergGond}. The direct search for cold dark matter scattering in the laboratory may be a useful complement to the searches for supersymmetry at accelerators \cite{Flores}. Fig.~18 shows as solid lines upper limits from searches for elastic scattering in the laboratory, for spin-dependent rates: \begin{eqnarray} &&\sigma_p^{dep}\, \lsim 0.3\,\hbox{pb}\;\hbox{for} \nonumber \\ &&20\,\hbox{GeV}\, \lsim\,m_{\chi}\,\lsim\,300\,\hbox{GeV} \label{spindep} \end{eqnarray} in part (a), and for spin-independent scattering rates: \begin{eqnarray} &&\sigma_p^{ind}\,\lsim\,3\times 10^{-5}\,\hbox{pb}\;\hbox{for} \nonumber \\ &&20\,\hbox{GeV}\, \lsim\,m_{\chi}\,\lsim\,300\,\hbox{GeV} \label{spinindep} \end{eqnarray} in part (b) \cite{survey}. Also shown in fig.~18 as dashed lines are corresponding upper limits from indirect searches of the type discussed in the previous subsection. These may appear more stringent, but involve more uncertainties, as already discussed. Note also that limits may be obtained from studies of tracks in ancient Mica \cite{Mica}, and from searches for inelastic excitations by relic particles \cite{inelastic}. It should also be emphasized that the significance and relative importance of these different search strategies are sensitive to the usual assumption of universality in the sparticle masses. This point is made in fig.~19, whose panel (a) shows results in a sampling of ``universal'' models, whilst panel (b) shows what happens in a sampling of ``non-universal'' models \cite{Pok,Bott}. The latter have yet to be explored so systematically. \subsection{Axions} The axion \cite{axion} is my second-favourite candidate for the cold dark matter. As you know, it was invented to guarantee conservation of $P$ and $CP$ in the strong interactions. These would otherwise be violated by the QCD $\theta$ parameter, which is known experimentally to be smaller than about $10^{-9}$ \cite{RPP}. The $\theta$ parameter relaxes to zero in any extension of the Standard Model which contains the axion, whose mass and couplings to matter are scaled inversely with the axion decay constant $f_a$. The fact that no axion has been seen in any accelerator experiment tells us that \begin{equation} f_a \mathrel{\mathpalette\@versim>} 1\,\hbox{TeV} \label{E88} \end{equation} and hence that any axion must be associated with physics beyond the scale of the Standard Model.
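For orientation in what follows, it is useful to recall the standard relation between the axion mass and decay constant, which follows from current-algebra estimates of the type $m_a f_a \sim m_\pi f_\pi$ (the numerical coefficient is the conventionally quoted one, given here only as a rough guide): \[ m_a \,\simeq\, 0.6\times 10^{-5}\ {\rm eV}\ \left({10^{12}\ {\rm GeV}\over f_a}\right) \] so that the accelerator bound (\ref{E88}) corresponds to $m_a \lsim 6$ keV, while the cosmologically interesting range of $f_a$ discussed below corresponds to axion masses between roughly a $\mu$eV and a meV.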
Axions would have been produced in the early Universe in the form of slow-moving coherent waves that could constitute cold dark matter. The relic density of these waves has been estimated as \cite{adensity} \begin{eqnarray} \Omega_a \,\simeq\,& \left({0.6\times 10^{-5}\,\hbox{eV}\over m_a}\right)^{7/6} \nonumber \\ &\left({200\,\hbox{MeV}\over \Lambda_{QCD}}\right)^{3/4} \left({75\over H_0}\right)^2 \label{E89} \end{eqnarray} which is less than unity if \begin{equation} f_a \mathrel{\mathpalette\@versim<} 10^{12}\,\hbox{GeV} \label{E90} \end{equation} In addition to these coherent waves, there may also be axions radiated from cosmic strings \cite{radiation}, which would also be non-relativistic by now, and hence contribute to the relic axion density and strengthen the limit in equation (\ref{E90}). The fact that the Sun shines photons rather than axions, or, more accurately but less picturesquely, that the standard solar model describes most data, implies the lower limit \begin{equation} f_a \mathrel{\mathpalette\@versim>} 10^{7}\,\hbox{GeV} \label{E91} \end{equation} This has been strengthened somewhat by unsuccessful searches for the axio-electric effect \cite{Sun}, in which an axion ionizes an atom. More stringent lower bounds on $f_a$ are provided by the agreement of theories of Red Giant and White Dwarf stars with the observations \cite{stars}: \begin{equation} f_a \mathrel{\mathpalette\@versim>} 10^{9}\,\hbox{GeV} \label{E92} \end{equation} Between equations (\ref{E90}) and (\ref{E92}) there is an open window in which the axion could provide a relic density of interest to astrophysicists and cosmologists. Part of this window is closed by the observations of the supernova SN1987a. According to the standard theory of supernova collapse to form a neutron star, $99 \%$ of the binding energy released in the collapse escapes as neutrinos. This theory agrees \cite{neutrinos} with the observations of SN1987a made by the Kamiokande \cite{Kam} and IMB experiments \cite{IMB}, which means that most of the energy could not have been carried off by other invisible particles such as axions. Since the axion is a light pseudoscalar boson, its couplings to nucleons are related by a generalized Goldberger-Treiman relation to the corresponding axial-current matrix elements, and these are in turn determined by the corresponding $\Delta q$ \cite{GTaxion}. Specifically, we find for the axion couplings to individual nucleons that \begin{eqnarray} C_{ap} = &\,2[{-} 2.76\, \hbox{$\Delta u$} - 1.13\, \hbox{$\Delta d$} + 0.89\, \hbox{$\Delta s$} \nonumber \\ &-\cos 2 \beta \, (\hbox{$\Delta u$} - \hbox{$\Delta d$} - \hbox{$\Delta s$}) ]\,, \nonumber\\ \label{E93}\\ C_{an} = &\,2[{-} 2.76\, \hbox{$\Delta d$} - 1.13\, \hbox{$\Delta u$} + 0.89\, \hbox{$\Delta s$} \nonumber \\ &-\cos 2 \beta \, (\hbox{$\Delta d$} - \hbox{$\Delta u$} - \hbox{$\Delta s$}) ] \phantom{\,,} \nonumber \end{eqnarray} Evaluating the $\Delta q$ at a momentum scale around $1$ GeV, as is appropriate in the core of a neutron star, we estimate \cite{BjSRalphas} that \begin{eqnarray} C_{ap} &=& ({-}3.9 \pm 0.4) - (2.68 \pm 0.06) \cos 2 \beta \nonumber\\ \label{E94}\\ C_{an} &=& (0.19 \pm 0.4)\, + \,(2.35 \pm 0.06) \cos 2 \beta \phantom{,} \nonumber \end{eqnarray} which are plotted in fig.~20.
As in the case of LSP scattering, the uncertainties associated with polarized structure function measurements are by now considerably smaller than other uncertainties, in this case particularly those associated with the nuclear equation of state. A particular focus of attention here has been the suggestion \cite{Raffelt2} that nucleon spin fluctuations in the supernova core might suppress substantially the axion emission rate. In fact, sum-rule considerations \cite{Sigl} suggest that this suppression may be less important than first thought, though there is some shift in the open part of the axion window \cite{newsn}. The good news here is that an experiment \cite{axexp} is underway which should be able to detect halo axions if they live in at least part of this window. \section{Prospects} We have every reason to think that the near future will be a very exciting period for astroparticle physics. As seen in Table 2, many experiments are underway, under construction, or being actively planned, which will contribute to resolving the fundamental issues in this field. On the side of astrophysics and cosmology, we have every reason to hope that a verified ``Standard Model" of structure formation will soon emerge, and that the nature of the invisible $90 \%$ or more of the matter in the Universe may soon be resolved. On the side of particle physics, we have every reason to hope that the resolution of these astrophysical and cosmological issues will take us beyond the current Standard Model of particle physics, a strait-jacket from which accelerators have not yet been able to extract us. As is seen in Table 2, it may in fact be the next generation of accelerator experiments that creates these twin revolutions in astroparticle physics. \section*{Acknowledgements} I thank Angel Morales, Mercedes Fatas and members of the Organizing Committee for creating such a stimulating meeting. \newpage \begin{figure*}[H] \epsfig{figure=9610T2.eps,width=16cm} \begin{center}Table~2 - Chronology of some possible future interesting experiments and \\ the astroparticle physics issues they may resolve. \end{center} \end{figure*} \newpage
\section{Introduction ($\lambda$-symmetries)} Let me briefly recall for the reader's convenience the basic definition of $\lambda$-symmetry (with lower case $\lambda$), originally introduced by C. Muriel and J.L. Romero in 2001 \cite{Cic:MR1,Cic:MR2}. Let me consider the simplest case of a single ODE $\Delta(t,u(t),\dot u,\ddot u\ldots)=0$ for the unknown function $u\,=\,u(t)$ (I will denote by $t$ the independent variable, since the applications I am going to propose concern the case of Dynamical Systems (DS), where the independent variable is precisely the time $t$, and $\dot u={\rm d}u/{\rm d} t$, etc.). Given a vector field \[ X\,=\,\varphi(u,t){\frac {\partial} {\partial u}}+ \tau(u,t){\frac \partial {\partial t}}\] the idea is to suitably {\it modify} its prolongation rules. The first $\lambda$-prolongation $X^{(1)}_\lambda$ is then defined by \begin{equation}\label{Cic:la1}X^{(1)}_\lambda\,=\, X^{(1)}+ \lambda (\varphi-\tau\dot u){\frac \partial {\partial \dot u}}\end{equation} where $\lambda=\lambda(u,\dot u,t)$ is a $C^\infty$ function, and $X^{(1)}$ is the standard first prolongation. Other modifications have to be introduced for higher prolongations, but in the present paper I need only the first one. \smallskip An $n$-th order ODE $\Delta=0$ is said to be $\lambda$-invariant under $X^{(n)}_\lambda$ if \[X^{(n)}_\lambda\Delta\Big|_{\Delta=0}\,=\,0\] where $X^{(n)}_\lambda$ is the $n$-th $\lambda$-prolongation of $X$. \smallskip It should be emphasized that $\lambda$-symmetries are not properly symmetries, because in general they do not transform solutions of a $\lambda$-invariant equation into solutions; nevertheless they share with standard Lie point-symmetries some important properties, namely: if an equation is $\lambda$-invariant, then \smallskip \noindent $\bullet$ the order of the equation can be lowered by one; \smallskip \noindent $\bullet$ invariant solutions can be found (notice that conditional symmetries do the same, but $\lambda$-symmetries are clearly {\it not} conditional symmetries); \smallskip \noindent $\bullet$ convenient new (``symmetry adapted'') variables can be suggested. \smallskip In the context of DS, which is the main object of this paper, the first two properties are not effective; the third one is instead one of my starting points. \smallskip Before considering the role of $\lambda$-symmetries in DS, let me recall that many applications and extensions of this notion have been proposed in these ten years: these include extensions to systems of ODE's, to PDE's, applications to variational principles and Noether-type theorems, the analysis of their connections with nonlocal symmetries, with symmetries of exponential type, with hidden, or ``lost'', symmetries, and with potential and telescopic symmetries as well. Other investigations concern their deep geometrical interpretation, with the introduction of a suitable notion of deformed Lie differential operators, the study of their dynamical effects in terms of changes of reference frames, and so on. Only the papers most directly related to the argument considered in this paper will be quoted; for a fairly complete list of references see e.g. \cite{Cic:CHam,Cic:Gatw,Cic:GC09}. A very recent application concerns discrete difference equations \cite{Cic:LRdde}. \section{$\Lambda$-symmetries for DS} I am going to consider the case of dynamical systems, i.e.
systems of first-order ODE's \[\dot u_a\,=\,f_a(u,t)\quad\quad\ (a=1,\ldots, m)\] for the $m>1$ unknowns $u_a\,=\,u_a(t)$. Let me start with a trivial (but significant) case: if the DS admits a rotation symmetry, then it is completely natural to introduce as new variables the radius $r$ and the angle $\theta$, and the DS immediately takes a simplified form, as well known. However, in general, symmetries of DS may be very singular, and/or difficult to detect. An example can be useful: the DS \[\dot u_1\,=\,u_1u_2 \quad\quad\ \dot u_2\,=\,-u_1^2\] admits the (not very useful or illuminating) symmetry generated by (with $r^2=u_1^2+u_2^2$) \[X\,=\,\Big({\frac {2u_1}{r^2}}-{\frac {u_1u_2}{r^3}}\log{\frac {u_2-r}{u_2+r}}\Big){\frac {\partial}{\partial u_1}}+ \Big({\frac {2u_2}{r^2}}-{\frac {u_1^2}{r^3}}\log{\frac {u_2-r}{u_2+r}}\Big){\frac {\partial}{\partial u_2}}\ .\] In this example the rotation (with a commonly accepted abuse of language, the same symbol $X$ denotes both the symmetry and its Lie generator) \[X\,=\,u_2{\frac {\partial} {\partial u_1}}-u_1{\frac \partial {\partial u_2}}\] is a {\it $\lambda$-symmetry}\ (its precise definition will be given in a moment), and {\it not} a symmetry in the ``standard'' sense; {\it nevertheless}, still introducing the same variables as before, i.e. $r$ and $\theta$, the DS takes the very simple form \[\dot r\,=\,0\quad\quad\ \dot\theta\,=\,-r\cos\,\theta\ .\] (One may check, using the invariance condition given below, that the rotation is indeed a $\Lambda$-symmetry of this DS with $\Lambda=u_2\,I$, i.e. $\lambda=u_2$.) This is just a first, simple example of the possible role of $\lambda$-symmetries in the context of DS. \bigskip \subsection{$\Lambda$-symmetries of general DS} The natural way to extend the definition (\ref{Cic:la1}) of the first $\lambda$-prolongation of the vector field \[X\,=\,\varphi_a(u,t){\frac \partial {\partial u_a}}+ \tau(u,t){\frac\partial {\partial t}}\,=\, \varphi\cdot{\nabla_u}+\tau\partial_t\] to the case of $m>1$ variables $u_a$ is the following (sum over repeated indices) \[X_\Lambda^{(1)}\,=\, X^{(1)}+\Lambda_{ab}(\varphi_b-\tau\dot u_b)\,\nabla_{\dot u_a}\] where now $\Lambda=\Lambda(t,u_a,\dot u_a)$ is a $(m\times m)$ matrix; accordingly, I denote by the upper case $\Lambda$ these symmetries in this context. To simplify, let me assume from now on $\tau\,=\,0$ (or use the evolutionary form of the vector field; this is not restrictive). Then the given DS is $\Lambda$-invariant under $X$ (or $X$ is a $\Lambda$-symmetry for the DS), i.e. $X_\Lambda^{(1)}(\dot u-f)|_{\dot u=f}=0$, if and only if \[ [\,f,\varphi\,]\,_a+\partial_t\varphi_a\,=\,-(\Lambda\varphi)_a \quad\qquad (a=1,\ldots,m)\, \] where \[ [\,f, \varphi\,]\,_a\equiv f_b\nabla_{u_b}\varphi_a-\varphi_b \nabla_{u_b}f_a \ .\] Given $X$, we now introduce the following $m+1$ new ``canonical'' (or {\it symmetry-adapted}) variables ({\it notice that they are independent of} $\Lambda$): precisely, $m-1$ variables $w_j=w_j(u)$ which, together with the time $t$, are $X$-invariant: \[X\,w_j\,=\,X\,t\,=\,0\quad\quad (j=1,\ldots,m-1)\] and the coordinate $z$, ``rectifying'' the action of $X$, i.e. \[ X\,=\,{\frac \partial {\partial z}}\ .\] Writing the given DS in these new variables, we obtain a ``reduced'' form of the DS, as stated by the following theorem \cite{Cic:Olv,Cic:MRVi,Cic:CLa}. \begin{theorem} Let $X$ be a $\Lambda$-symmetry for a given DS; once the DS is written in terms of the new variables $w_j,z,t$, i.e. \[ \dot w_j \,=\,W_j(w,z,t) \quad\quad\ \dot z \,=\,Z(w,z,t)\] the dependence on $z$ of the r.h.s.
$W_j\,,Z$ is controlled by the formulas \[{\frac{\partial W_j}{\partial z}}\,=\, {\frac{\partial w_j}{\partial u_a}}(\Lambda\varphi)_a\equiv M_j \quad\quad\ {\frac{\partial Z}{\partial z}}\,=\, {\frac{\partial z}{\partial u_a}}(\Lambda\varphi)_a\ \equiv M_{m}\ .\] One has: \smallskip \noindent $\bullet$ If $\Lambda=0$, then $M_j=M_m=0$ and $W_j\,,Z$ are independent of $z$; \smallskip \noindent $\bullet$ If $\Lambda\,=\,\lambda I$, then only $Z$ depends on $z$; \smallskip \noindent $\bullet$ Otherwise, a ``partial'' reduction is obtained: if some $M_k=0$, then $W_k$ is independent of $z$. \smallskip \noindent In terms of the new variables, the $\Lambda$-prolongation becomes \[ X_{\Lambda}^{(1)}\,=\,{\frac\partial {\partial z}}+M_{j}{\frac\partial {\partial\dot w_j}}+M_{m}{\frac\partial {\partial\dot z}}\ .\] \end{theorem} The first case ($\Lambda=0$) clearly means that $X$ is an {\it exact}, or standard Lie point-symmetry \cite{Cic:Olv}; the second one has been considered in detail by Muriel and Romero \cite{Cic:MRVi} (notice that actually it would be enough to require $\Lambda\varphi=\lambda\varphi$); the last case has been dealt with in \cite{Cic:CLa}: several situations may occur, depending on the number of vanishing $M_j$ (e.g., one may obtain triangular DS, or similar). \subsection{Hamiltonian DS} I now consider the special case in which the DS is a {\it Hamiltonian} DS. Obvious changes in the notations can be introduced: the $m$ variables $u=u_a(t)$ are replaced by the $m=2n$ variables $q_\alpha(t),p_\alpha(t)\ (\alpha=1,\ldots,n)$, and the DS is now the system of the Hamiltonian equations of motion for the given Hamiltonian $H=H(q,p,t)$: \[ \dot u\,=\,J\nabla H\equiv F(u,t)\quad\,,\quad \ \nabla\equiv \nabla_u\equiv(\nabla_q,\nabla_p)\] where $J$ is the standard symplectic matrix \[J\,=\,\pmatrix{0&I \cr -I&0}\ . \] A vector field $X$ can be written accordingly (with $a=1,\ldots,2n\,;\, \alpha=1,\ldots,n$) \[X=\varphi_\alpha(u,t){\frac\partial {\partial q_\alpha}}+ \psi_\alpha (u,t){\frac\partial {\partial p_\alpha}}\equiv{\bf \Phi}\cdot\nabla_u\quad\,,\quad\ {\bf \Phi}\equiv(\varphi_\alpha,\psi_\alpha)\] and all the above discussion clearly holds if $X$ is a $\Lambda$-symmetry for a Hamiltonian DS. Clearly, here $\Lambda$ is a $(2n\times 2n)$ matrix. But Hamiltonian problems certainly possess a {\it richer} structure with respect to general DS, which deserves to be exploited; a first instance is clearly provided by the notion of conservation rules, with its related topics. Let me then distinguish two cases: \smallskip\noindent $(i)$ $X$ admits a {\it generating function} $G(u,t)$ (then $X$ is often called a ``Hamiltonian symmetry''): \begin{equation}\label{Cic:Phi}{\bf \Phi}\,=\,J\nabla G\quad{\rm i.e.}\quad \varphi\,=\,\nabla_pG \ ,\ \psi\,=\,-\nabla_qG \end{equation} this implies (when $X$ is an exact symmetry) $\nabla D_tG=0$, where $D_t$ is the total derivative, i.e. $G$ is a constant of motion, $D_tG=0$, possibly apart from an additional time-dependent term, as well known. \smallskip\noindent $(ii)$ $X$ does not admit a generating function: also in this case, defining \begin{equation}\label{Cic:defS}S(u,t)\equiv \nabla\cdot{\bf \Phi} \end{equation} one has (again when $X$ is an exact symmetry) that $D_tS\,=\,0$, and therefore, if $S\not=$ const, then $S$ is a first integral (the examples known to me of first integrals of this form are rather tricky, being usually obtained by multiplying symmetries by first integrals; but they ``in principle'' exist, and their presence will be important for the following discussion, see subsect. 3.4).
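Before stating the general result, a trivial example may help to fix ideas (a toy case constructed purely for illustration): take $n=1$, $H=qp$, so that $F=(q,-p)$, and consider $X=\partial/\partial q$, i.e. ${\bf \Phi}=(1,0)$, which admits the generating function $G=p$ according to (\ref{Cic:Phi}). The invariance condition of subsect. 2.1 gives here $[\,F,{\bf \Phi}\,]_1=-1$ and $[\,F,{\bf \Phi}\,]_2=0$, so that $X$ is a $\Lambda$-symmetry with $\Lambda=I$; correspondingly, $G$ is not conserved, but satisfies the completely separated equation \[ D_tG\,=\,\dot p\,=\,-p\,=\,-G \qquad {\rm i.e.}\qquad G(t)\,=\,G(0)\,e^{-t} \] in agreement with the theorem and the corollary which follow.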
\smallskip Direct calculations can show the following: \begin{theorem} If the Hamiltonian equations of motion admit a $\,\Lambda$-symmetry $X$ with a matrix $\Lambda$, then: \noindent in case $(i)$ \[\nabla(D_tG)\,=\,J\,\Lambda\,{\bf \Phi}\,=\,J\,\Lambda\,J\,\nabla\,G\] \noindent in case $(ii)$ \[D_tS\,=\,-\nabla\cdot(\Lambda\,{\bf \Phi})\ .\] \end{theorem} \smallskip When this happens, $G$ (resp. $S$) will be called a ``{\it $\Lambda$-constant of motion}''. \bigskip If $\Lambda\!=\!0$, i.e. when $X$ is a ``standard'' (or ``exact'') symmetry, the above equations clearly become the usual conservation rules; $\Lambda$-symmetries can then be viewed as ``perturbations'' of the exact symmetries. More explicitly, the equations in Theorem 2 state the precise ``deviation'' from the conservation of $G$ (resp. of $S$) due to the fact that the invariance under $X$ is ``broken'' by the presence of a nonzero matrix $\Lambda$. As a special case of case $(i)$, the following Corollary may be of interest: \smallskip \begin{corollary} Under mild assumptions ($\Lambda\,{\bf \Phi}\!=\!\lambda\,{\bf \Phi}$, $\lambda\!=\!\lambda(G)$), the $\Lambda$-constant of motion $G$ satisfies a ``completely {\it separated} equation'', involving only $G(t)$: \[\dot G\,=\,\gamma(t,G)\ . \] \end{corollary} This equation expresses how much the conservation of $G(t)$ is ``violated'' along the time evolution. If $\Lambda$ is in some sense ``small'', then $G$ is ``almost'' conserved. \section{When a $\Lambda$-symmetry of the Hamiltonian equations is inherited by a $\Lambda$-invariant Lagrangian} \subsection{$\Lambda$-invariant Lagrangians, Noether theorem and\\ $\Lambda$-conservation rules} Let me consider (for simplicity) only first-order Lagrangians: \[{{\cal L}}\,=\,{{\cal L}}(q_\alpha,\dot q_\alpha,t)\quad\quad\quad (\alpha=1,\ldots,n) \] Such a Lagrangian is said to be $\Lambda^{({\cal L})}$-invariant \cite{Cic:MRO,Cic:CGNoe} under \[ X^{({{\cal L}})}\,=\, \varphi_\alpha(q,t){\frac\partial {\partial q_\alpha}}\,=\, \varphi\cdot\nabla_q\] if there is an $(n\times n)$ matrix \[\Lambda^{({{\cal L}})}\,=\,\Lambda^{({{\cal L}})}(q,\dot q,t)\] such that \[\Big(X_\Lambda^{({{\cal L}})}\Big)^{(1)}({{\cal L}})\,=\,0\] where $\Big(X_\Lambda^{({{\cal L}})}\Big)^{(1)}$ is the first $\Lambda^{({{\cal L}})}$-prolongation of $X^{({{\cal L}})}$ (the notation is rather heavy, to carefully distinguish the Lagrangian case from the Hamiltonian one, to be considered in the next subsection). We then have \cite{Cic:CGNoe} \begin{theorem} If the Lagrangian ${{\cal L}}$ is $\Lambda^{({{\cal L}})}$-invariant under $X^{({{\cal L}})}$ then, putting \[{{\cal P}}_{\alpha\beta}\,=\,\varphi_\alpha p_\beta \quad\quad {\rm with} \quad\quad p_\beta\,=\, {\frac{\partial{{\cal L}}}{\partial \dot q_\beta}}\] one has \[ D_t{\bf P}= -\Lambda^{({{\cal L}})}_{\alpha\beta}\varphi_\beta{\frac{\partial{{\cal L}}}{\partial \dot q_\alpha}} \,=\, -\big(\Lambda^{({{\cal L}})}\varphi\big)_\alpha p_\alpha\] where ${\bf P}={{\tt Tr}}({{\cal P}})\,=\,\varphi_\alpha p_\alpha$; or also, introducing a ``deformed derivative'' ${\widehat D}_{t}$ \[({\widehat D}_{t})_{\alpha\beta}\equiv D_t\delta_{\alpha\beta}+ \Lambda^{({{\cal L}})}_{\alpha\beta}\ ,\quad\quad {{\tt Tr}}({\widehat D}_t{{\cal P}})\,=\,0\ .\] \end{theorem} This result can be called the {\it ``Noether $\Lambda^{({{\cal L}})}$-conservation rule''}. Indeed, if $\Lambda^{({{\cal L}})}\!=\!0$, the standard Noether theorem is recovered.
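A minimal one-dimensional illustration may be useful here (again a toy case constructed purely for illustration, with $n=1$, $\varphi=1$ and a constant $\Lambda^{({{\cal L}})}=\lambda$): the Lagrangian \[ {{\cal L}}\,=\,{\frac 1 2}\,(\dot q-\lambda q)^2 \] satisfies $\big(X_\Lambda^{({{\cal L}})}\big)^{(1)}({{\cal L}})=\big(\partial_q+\lambda\,\partial_{\dot q}\big)\,{{\cal L}}=0$ for $X^{({{\cal L}})}=\partial/\partial q$, i.e. it is $\Lambda^{({{\cal L}})}$-invariant. Here ${\bf P}=\varphi\,p=\dot q-\lambda q$, and the rule of Theorem 3 gives $D_t{\bf P}=-\lambda{\bf P}$, i.e. \[ {\bf P}(t)\,=\,{\bf P}(0)\,e^{-\lambda t}\ , \] as is immediately checked from the Euler-Lagrange equation $D_t(\dot q-\lambda q)=-\lambda(\dot q-\lambda q)$: the Noether momentum is not conserved, but decays at a rate fixed by $\lambda$.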
\smallskip In the special case $\Lambda^{({{\cal L}})}\varphi=\lambda \varphi$, the above result becomes \[{\widehat D}_t{\bf P}=0\quad\quad {\rm where}\quad\quad {\widehat D}_t=D_t+\lambda\ .\] Theorem 3 can be extended \cite{Cic:CGNoe} to divergence symmetries and to generalized symmetries as well. Also, higher-order Lagrangians can be included: the $\Lambda^{({{\cal L}})}$-conservation rule has the same form, but ${{\cal P}}_{\alpha\beta}$ is different: for instance, for second-order Lagrangians one has \[{{\cal P}}_{\alpha\beta}\,=\, \varphi_\alpha{\frac{\partial {{\cal L}}}{\partial \dot q_\beta}} + (({\widehat D}_t)_{\alpha\gamma} \varphi_\gamma) {\frac{\partial {{\cal L}}}{\partial \ddot q_\beta}} - \varphi_\alpha D_t{\frac{\partial {{\cal L}}} {\partial\ddot q_\beta}} \ .\] \subsection{From Lagrangians to Hamiltonians} Assume now that we have a Lagrangian which is $\Lambda^{({{\cal L}})}$-invariant under a vector field \[X^{({{\cal L}})}\,=\,\varphi_\alpha{\frac\partial {\partial q_\alpha}}\] and introduce the corresponding Hamiltonian $H$ with its Hamiltonian equations of motion. The natural question is whether the $\Lambda^{({{\cal L}})}$-symmetry $X^{({{\cal L}})}$ of the Lagrangian is transferred to some $\Lambda^{(H)}$-symmetry $X^{(H)}$ of the Hamiltonian equations of motion. Two problems then arise: $i)$ to extend the vector field $X^{({{\cal L}})}$ to a suitable vector field $X^{(H)}$, and $ii)$ to extend the $(n\times n)$ matrix $\Lambda^{({{\cal L}})}$ to a suitable $(2n\times 2n)$ matrix $\Lambda^{(H)}$. First, the vector field $X^{(H)}$ is expected to have the form \begin{equation}\label{Cic:Lapq} X\equiv X^{(H)}\,=\, \varphi_\alpha{\frac\partial {\partial q_\alpha}}+ \psi_\alpha{\frac\partial {\partial p_\alpha}} \end{equation} where the coefficient functions $\psi$ must be determined. This can be done by observing that the variables $p$ are related to $\dot q$ (and then the first $\Lambda^{({{\cal L}})}$-prolongation of $X^{({{\cal L}})}$ is needed, where the ``effect'' of $\Lambda^{({{\cal L}})}$ is present). One finds, after some explicit calculations, \begin{equation}\label{Cic:XH} \psi_\alpha\,=\,{\frac\partial {\partial \dot q_\alpha}}\Big(D_t {\bf P}+\Lambda^{({{\cal L}})}_{\beta\gamma}\varphi_\gamma{\frac{\partial {{\cal L}}}{\partial\dot q_\beta}}\Big)-{\frac{\partial\Lambda^{({{\cal L}})}_ {\beta\gamma}}{\partial\dot q_\alpha}}\varphi_\gamma{\frac{\partial{{\cal L}}} {\partial\dot q_\beta}}- p_\beta{\frac{\partial\varphi_\beta}{\partial q_\alpha}}\ . \end{equation} But the term in parentheses vanishes if the Lagrangian is $\Lambda^{({{\cal L}})}$-invariant, thanks to Theorem 3; in addition, if $\Lambda^{({{\cal L}})}$ does not depend on $\dot q$ (as happens in most cases; otherwise a separate treatment is needed, see subsect. 3.4), then we are left with \begin{equation}\label{Cic:psiH} \psi_\alpha=-p_\beta{\frac{\partial\varphi_\beta}{\partial q_\alpha}}\ . \end{equation} This implies that $X$ admits a generating function, which is just \[G\,=\,\varphi_\alpha p_\alpha\equiv{\bf P} \] using the notations introduced in Theorem 3.
Second, let me now introduce the following $(2n\times 2n)$ matrix \[\Lambda\equiv\Lambda^{(H)}\,=\,\pmatrix{\Lambda^{({{\cal L}})} & 0\cr - {\frac{\partial\Lambda^{({{\cal L}})}}{\partial q_\alpha}}p_\gamma & \Lambda^{(2)}} \] where $\Lambda^{(2)}$ must satisfy ($\Lambda$ is not uniquely defined, as is well known) \[\Lambda^{(2)}_{\alpha\beta}\,{\frac{\partial\varphi_\gamma}{\partial q_\beta} }\,=\,\Lambda^{({{\cal L}})}_{\gamma\beta} {\frac{\partial\varphi_\beta} {\partial q_\alpha}}\ .\] It is well known that Euler-Lagrange equations coming from a $\Lambda^{({{\cal L}})}$-invariant Lagrangian do {\it not exhibit $\Lambda$-symmetry in general}. In contrast with this, it is not difficult to verify explicitly that the Hamiltonian equations of motion turn out to be $\Lambda^{(H)}$-symmetric under the vector field $X^{(H)}$ obtained according to the above prescriptions. \smallskip In conclusion, I have shown the following \begin{theorem} If ${{\cal L}}$ is a $\Lambda$-invariant Lagrangian under a vector field $X^{({{\cal L}})}$ with a matrix $\Lambda^{({{\cal L}})}$ (not depending on $\dot q$), one can extend $X^{({{\cal L}})}$ to a vector field\ $X\equiv X^{(H)}$ and the $(n\times n)$ matrix $\Lambda^{({{\cal L}})}$ to a $(2n\times 2n)$ matrix $\Lambda\equiv \Lambda^{(H)}$ in such a way that the resulting Hamiltonian equations of motion are $\Lambda$-symmetric under $X$; in addition, $G=\varphi_\alpha p_\alpha$ is a $\Lambda$-constant of motion. \end{theorem} \smallskip\smallskip \begin{example} The Lagrangian (with $n=2$) \[{{\cal L}}={\frac 1 2}\Big({\frac{\dot q_1} {q_1}}-q_1\Big)^2+{\frac 1 2}(\dot q_1-q_1\dot q_2)^2\exp(-2q_2)+q_1\exp(-q_2) \] is $\Lambda^{({{\cal L}})}$-invariant under \[X^{({{\cal L}})}\,=\,q_1{\frac\partial {\partial q_1}}+{\frac\partial {\partial q_2}}\] with \[\Lambda^{({{\cal L}})}\,=\,{\rm diag}\ (q_1,q_1)\ .\] It is easy to write the Hamiltonian equations of motion and to check that they are indeed $\Lambda$-symmetric under \[ X\,=\,q_1{\frac\partial {\partial q_1}}+ {\frac\partial {\partial q_2}}-p_1{\frac\partial {\partial p_1}}\] with \[ \Lambda\,=\,\Lambda^{(H)}\,=\,\pmatrix {q_1&0&0&0\cr 0&q_1&0&0\cr -p_1&-p_2&q_1&0\cr 0&0&0&0}\ . \] $X$-invariant coordinates are $w_1=q_1\exp(-q_2),\,w_2=q_1p_1,\,w_3=p_2$, and, as expected, the generating function $G=w_2+w_3$ satisfies the $\Lambda$-conservation rule \[\nabla_uD_tG\,=\,J\Lambda{\bf \Phi} \quad\ {\rm or}\quad D_tG\,=\,-q_1G\ .\] \end{example} \smallskip A special, but rather common, case is described by the following: \begin{corollary} If \[\Lambda^{({{\cal L}})}{\bf \varphi}=c\, \varphi\] where $c$ is a constant, then also $\Lambda{\bf \Phi}=c\,{\bf \Phi}$ and the ``most complete'' reduction of the Hamiltonian equations of motion is obtained: \[\dot G=\gamma(G,t)\quad\quad \dot w_j=W_j(w,G,t)\quad\quad \dot z=Z(w,G,z,t)\] \end{corollary} \subsection{Reduction of the Euler-Lagrange equations versus\\ the Hamiltonian equations} In this section I want to compare the reduction procedure which is provided by the presence of a $\Lambda$-symmetry of a Lagrangian (i.e. the reduction of Euler-Lagrange equations) with the analogous reduction of the Hamiltonian equations of motion. Let me start by recalling that any vector field\ $X=\varphi_\alpha\partial/\partial q_\alpha$ admits $n$ (0-order) invariants (as already said, see subsect.
2.1) \[w_j=w_j(q,t)\quad\ (j=1,\ldots,n-1) \quad {\rm and \ the\ time}\ t\] and $n$ first-order differential invariants $\eta_\alpha=\eta_\alpha(q,t,\dot q)$ under the first prolongation $X^{(1)}$ \[X^{(1)}\eta_\alpha\,=\,0 \quad\quad\quad\ (\alpha=1,\ldots,n)\ .\] {\it Both} if $X^{(1)}$ is standard and if it is a $\Lambda$-prolongation ({\it under the condition} $\Lambda\,\varphi=\lambda\,\varphi$), it is well known that $\dot w_j$ are $n-1$ first-order differential invariants (notice that this is an ``algebraic'' property, not related to dynamics). If one now chooses another independent first-order differential invariant $\zeta=\zeta(q,t,\dot q)$, then one has that any first-order $\Lambda^{({{\cal L}})}$-invariant Lagrangian is a function of the above $2n$ invariants \[t,w_j,\dot w_j\quad{\rm and}\quad \zeta \ .\] Writing the Lagrangian in terms of these variables, the Euler-Lagrange equation for $\zeta$ is then simply \[ {\frac{\partial {{\cal L}}}{\partial\zeta}}\,=\,0\ .\] This first-order equation provides in general a ``partial'' reduction, i.e., it produces only {\it particular solutions}, even considering the Euler-Lagrange equations for the other variables \cite{Cic:MRO,Cic:CHam} (notice that this is true both for exactly invariant and for $\Lambda^{({{\cal L}})}$-invariant Lagrangians). \smallskip I want to emphasize that, introducing $\Lambda$-symmetric Hamiltonian equations of motion along the lines stated in Theorem 4, a ``better'' reduction is obtained, and no solution is lost. The following example clarifies this point. \smallskip \begin{example} The Lagrangian ($n=2$) \[{{\cal L}}\,=\,{\frac 1 2}\Big({\frac{\dot q_1}{q_1}}-\log q_1\Big)^2+{\frac 1 2}\Big({\frac{\dot q_1}{q_1}}+{\frac{\dot q_2}{q_2}}\Big)^2\quad\quad (q_1>0)\] is $\Lambda^{({{\cal L}})}$-invariant under \[X^{({{\cal L}})}\,=\,q_1{\frac\partial {\partial q_1}}-q_2{\frac\partial {\partial q_2}}\] with $\Lambda^{({{\cal L}})}={\rm diag}\ (1,1)$. With \[ w\,=\,q_1q_2,\ \dot w\,=\,\dot q_1q_2+q_1\dot q_2,\ \zeta\,=\,{\frac{\dot q_1}{ q_1}}-\log q_1 \] the Lagrangian becomes \[ {{\cal L}}\,=\,{\frac 1 2}\zeta^2+{\frac1 2}{\frac{\dot w^2}{w^2}}\] and the Euler-Lagrange equation for $\zeta$ is \[{\partial {{\cal L}}/{\partial \zeta}}=\zeta=0\quad\quad\ {\rm or}\quad\ \dot q_1\,=\,q_1\log q_1\] with the particular solution \[q_1(t)\,=\,\exp(c\, e^t)\ .\] Indeed, $\log q_1=c\,e^t$, so that $\dot q_1=c\,e^t\,q_1=q_1\log q_1$. The corresponding Hamiltonian equations of motion are $\Lambda$-symmetric under \[ X\,=\, q_1{\frac\partial {\partial q_1}}- q_2{\frac\partial {\partial q_2}}-p_1{\frac\partial {\partial p_1}}+p_2{\frac\partial {\partial p_2}}\] with $\Lambda={\rm diag}\ (1,1,1,1)$. Invariants under this $X$ are \[w_1\,=\,q_1q_2,\ w_2\,=\, q_1p_1,\ w_3\,=\,q_2p_2\] and $X$ is generated by $G=w_2-w_3$. A ``complete'' reduction is obtained: with $z=\log q_1$, we get \[\dot w_1\,=\,w_1w_3 \quad\quad \dot w_2\,=\,w_3-w_2 \] \[ \dot G\,=\,-G \quad\quad\ \dot z\,=\,z+w_2-w_3\] The above ``partial'' (Lagrangian) solution $\zeta=0$ corresponds~to \[\dot z\,=\,z,\quad w_2=w_3=c= {\rm const},\quad \dot w_1=cw_1\ .\] From the Hamiltonian equations one obtains instead, e.g., \[q_1(t)\,=\,\exp(c\, e^t)+c_1\exp(-t)\quad\quad {\rm etc.}\] The reader can easily complete the calculations. \end{example} \subsection{When $\Lambda^{({{\cal L}})}$ depends on $\dot q$} If $\Lambda$ depends also on $\dot q$ (see eqs. (\ref{Cic:Lapq},\ref{Cic:XH})), the calculations performed in subsect.
3.2 cannot be repeated, the coefficient functions $\psi_\alpha$ cannot be expressed in the simple form (\ref{Cic:psiH}) and the vector field $X$ does not admit a generating function $G$. In this case one can resort to the other quantity $S$, introduced in (\ref{Cic:defS}), which provides a $\Lambda$-constant of motion. An example fully illustrates this situation. \begin{example} ($n=1$) \[{{\cal L}}\,=\,{\frac 1 2}\Big({\frac{\dot q}{q}}+1\Big)^2\exp(-2q)\] is $\Lambda^{({{\cal L}})}$-invariant under \[X^{({{\cal L}})}\,=\,q{\frac\partial {\partial q}}\quad\quad\ {\rm with}\quad\quad \Lambda^{({{\cal L}})}\,=\,q+\dot q\ .\] One finds $\psi=-qp-p$ and the resulting vector field \[ X\,=\,q{\frac\partial {\partial q}}-(qp+p){\frac\partial {\partial p}}\] does {\it not} admit a generating function. Nevertheless, the Hamiltonian equations of motion are $\Lambda$-symmetric under $X$ with \[\Lambda\,=\,\pmatrix{ q+\dot q & 0 \cr -p & q+\dot q}\ .\] Here \[S\,=\,-q\] satisfies $D_tS=-\nabla(\Lambda\Phi)$ and is a $\Lambda$-constant of motion. \end{example} \section{A digression: general $\Lambda$-invariant Lagrangians} The $\Lambda$-invariance of a Lagrangian ${{\cal L}}={{\cal L}}(q,\dot q,t)$ considered in subsect.~3.1 is a special case of a much more general situation. Instead of $n$ time-dependent quantities $q_\alpha(t)$, let me consider now $n$ ``fields'' \[u_\alpha(x_i)\quad (\alpha=1,\ldots,n\,;\,i=1,\ldots,s)\] depending on $s>1$ real variables $x_i$. Now, the Euler-Lagrange equations become a system of PDEs, and the notion of $\mu$-symmetry \cite{Cic:GM,Cic:CGMmu} extends and replaces that of $\lambda$-symmetry (or $\Lambda$-symmetry if $n>1$). \smallskip In this case, there are $s>1$ matrices $\Lambda_i$ ($n\times n$), which must satisfy the compatibility condition \begin{equation}\label{Cic:LaLa}D_i\Lambda_j-D_j\Lambda_i+[\Lambda_i,\, \Lambda_j]\,=\,0\quad \quad (D_i\equiv D_{x_i}) \end{equation} which can be rewritten, putting ${\widehat D}_i\,=\,D_i\,\delta+\Lambda_i$ (or, in explicit form: $({\widehat D}_i)_{\alpha\beta}\,=\,D_i\delta_{\alpha\beta}+ (\Lambda_i)_{\alpha\beta}$, with a notation extending the one introduced in Theorem 3), as \[ [ {\widehat D}_i , {\widehat D}_j ] \ = \ 0 \ .\] Then one has \cite{Cic:CGNoe,Cic:GM,Cic:CGMmu}: \begin{theorem} Given $s>1$ matrices $\Lambda_i$ satisfying (\ref{Cic:LaLa}), there exists (locally) an $(n\times n)$ nonsingular matrix $\Gamma$ such that \[\Lambda_i\,=\,\Gamma^{-1}(D_i\Gamma)\ .\] If a Lagrangian ${{\cal L}}$ is $\Lambda$-invariant under a vector field \[ X\,=\,\varphi_\alpha{\frac {\partial} {\partial u_\alpha}}\] then there is a matrix-valued vector \[{{\cal P}}_i\equiv({{\cal P}}_i)_{\alpha\beta}\] which is $\Lambda$-conserved; this $\Lambda$-conservation law holds in the form \[ {\tt {Tr}}\, \big[ \Gamma^{-1}D_i\big( \Gamma\,{{\cal P}}_i \big)\big] \,=\, 0 \] or in the equivalent forms \[ D_i{{\bf P}}_i\,=\,-(\Lambda_i)_{\alpha\beta}({{\cal P}}_i)_{\beta\alpha} \,=\,- {\tt {Tr}}(\Lambda_i{{\cal P}}_i)\ ,\ \ {\rm where} \quad {{\bf P}}_i=({{\cal P}}_i)_{\alpha\alpha}\,=\,{\tt Tr}\,{{\cal P}}_i \ ,\] \[{\tt {Tr}}({\widehat D}_i\,{{\cal P}}_i)\,=\,0\ .\] \end{theorem} \smallskip\noindent For first-order Lagrangians the $\Lambda$-conserved ``current density vector'' ${\cal P}_i$ is given by \[({{\cal P}}_i)_{\alpha\beta}\,=\,\varphi_\alpha{\frac{\partial {\cal L}} {\partial u_{\beta,i}}} \quad\quad \quad {\rm where}\quad \quad u_{\beta,i}\,=\,{\frac{\partial u_\beta} {\partial x_i}}\] and for second-order Lagrangians by \[ ({{\cal
P}}_i)_{\alpha\beta}\,=\, \varphi_\beta{\frac {\partial{{\cal L}}}{\partial u_{\alpha,i}}}+ (({\widehat D}_j)_{\beta \gamma}\varphi_\gamma) {\frac{\partial {{\cal L}}} {\partial u_{\alpha,ij}}}-\varphi_\beta D_j {\frac{\partial {{\cal L}}} {\partial u_{\alpha,ij}}}\ .\] \medskip \begin{example} Let $n=s=2$. Writing, for ease of notation, $x,y$ instead of $x_1,\, x_2$, and $u=u(x,y), \,v=v(x,y)$ instead of $u_1,\,u_2$, consider the vector field \begin{equation}\label{Cic:exe4}X\,=\,u{\frac \partial {\partial u}}+ {\frac \partial {\partial v}}\end{equation} and the two matrices \[\Lambda_1\,=\,\pmatrix{0 & 0\cr u_x & 0} \quad\quad \Lambda_2\,=\,\pmatrix { 0 & 0\cr u_y & 0 } \] and then \[\Gamma\,=\,\pmatrix{1 & 0 \cr u & 1}\ .\] It is easy to check that the Lagrangian \[{\cal L}\,=\, {\frac 1 2}\Big( u_x^2+u_y^2\Big)-{\frac 1 u} \big( u_xv_x+u_yv_y \big) + u^2 \exp(-2v) \] is $\Lambda$-invariant (or better, in this context, $\mu$-invariant) but not invariant under the above vector field $X$. The $\mu$-conservation law ${\tt Tr} ({\widehat D}_i {{\cal P}}_i)=0$ takes here the form \[ D_i {\bf P}^i \equiv D_x \big( uu_x-v_x-{\frac {u_x} u} \big)+ D_y\big(uu_y-v_y-{\frac {u_y} u}\big)\,=\, u_x^2+u_y^2\ .\] In agreement with Theorem 5, the r.h.s. of this expression is precisely equal to \[-{\tt Tr}(\Lambda_i{\cal P}_i)= -(\Lambda_i\varphi)_\alpha{\frac {\partial {\cal L}} {\partial u_{\alpha,i}}}\ .\] Notice that in this case the quantity $u_x^2+u_y^2$ is just the ``symmetry-breaking term'', i.e. the term which prevents the above Lagrangian from being exactly symmetric under the vector field (\ref{Cic:exe4}). \end{example} \smallskip It should be remarked that $\mu$-symmetries are actually strictly related to {\it standard} symmetries, or -- more precisely -- are {\it locally gauge-equivalent} to them (see for details \cite{Cic:CGNoe,Cic:Gatw,Cic:Ggau}). Given indeed the vector field $X=\varphi_\alpha\partial/\partial u_\alpha$ and the $s$ matrices $\Lambda_i$, let me denote by \[X_\Lambda^{(\infty)}\,=\,\sum_J\Psi^{(J)}_\alpha {\frac \partial {\partial u_{\alpha,J}}}\] the infinite $\Lambda$-prolongation of $X$, where the sum is over all multi-indices $J$ as usual, and $\Psi^{(0)}_\alpha=\varphi_\alpha$. Introducing now the other vector field $\widetilde X$ \[ \widetilde X\equiv\widetilde\varphi_\alpha{\frac\partial{\partial u_\alpha}}\ \quad {\rm with}\quad \ \widetilde\varphi_\alpha\equiv(\Gamma\,\varphi)_\alpha \] where $\Gamma$ is assigned in Theorem 5, and denoting by \[\widetilde X^{(\infty)}\,=\, \sum_J\widetilde\varphi^{(J)}_\alpha{\frac \partial {\partial u_{\alpha,J}}}\] the {\it standard} prolongation of $\widetilde X$, one has \cite{Cic:CGMmu,Cic:CGNoe} that the coefficient functions $\Psi^{(J)}_\alpha$ of the $\Lambda$-prolongation of $X$ are connected to the coefficient functions $\widetilde\varphi^{(J)}_\alpha$ of the standard prolongation of $\widetilde X$ by the relation \[\Psi^{(J)}_\alpha\,=\,\Gamma^{-1}\widetilde\varphi^{(J)}_\alpha \ .\] In the particularly simple case $n=1$ (i.e., a single ``field'' $u(x_i)$), the $s>1$ matrices $\Lambda_i$, and the matrix $\Gamma$ as well, become (scalar) functions $\lambda_i$ and $\gamma$; in this case, if a Lagrangian is $\mu$-invariant under the vector field $X$, then it is also invariant under the {\it standard} symmetry $\widetilde X=\gamma X$.
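The factorization $\Lambda_i=\Gamma^{-1}(D_i\Gamma)$ of Theorem 5 and the compatibility condition (\ref{Cic:LaLa}) are easy to check symbolically for the matrices of Example 4; the following is a minimal SymPy sketch (plain SymPy, written here only as an illustration):
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)

Gamma = sp.Matrix([[1, 0], [u, 1]])
Lam1 = sp.Matrix([[0, 0], [sp.diff(u, x), 0]])
Lam2 = sp.Matrix([[0, 0], [sp.diff(u, y), 0]])

# Factorization of Theorem 5: Lambda_i = Gamma^{-1} (D_i Gamma)
print(sp.simplify(Gamma.inv() * Gamma.diff(x) - Lam1))  # zero matrix
print(sp.simplify(Gamma.inv() * Gamma.diff(y) - Lam2))  # zero matrix

# Compatibility condition: D_x Lam2 - D_y Lam1 + [Lam1, Lam2] = 0
comm = Lam1 * Lam2 - Lam2 * Lam1
print(sp.simplify(Lam2.diff(x) - Lam1.diff(y) + comm))  # zero matrix
\end{verbatim}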
In addition, the $\mu$-conservation law can also be expressed as a {\it standard} conservation rule \[ D_i{\bf\widetilde P}^i\,=\,0 \] where ${\bf\widetilde P}^i=\gamma\,\varphi_\alpha\partial{\cal L}/\partial u_{\alpha,i}$ is the ``current density vector'' determined by the vector field $\widetilde X=\gamma X$. \begin{example} Let now $n=1,\,s=2$, and let me introduce, for convenience, the polar coordinates $r,\theta$ as independent variables. I am considering a single ``field'' $u=u(r,\theta)$ and the rotation vector field $X=\partial/\partial\theta$. The Lagrangian \[{\cal L}\,=\, {\frac 1 2} r^2\exp(-\epsilon\theta)u_r^2+{\frac 1 2} \exp(\epsilon\theta)u_\theta^2\] is clearly not invariant under the rotation symmetry (if $\epsilon\not=0$), but is $\mu$-invariant with $\lambda_1=0,\,\lambda_2=\epsilon$. The above Lagrangian is the Lagrangian of a perturbed Laplace equation; indeed, the Euler-Lagrange equation is the PDE \[r^2u_{rr}+2ru_r+\exp(2\epsilon\theta)(u_{\theta\theta}+\epsilon\, u_\theta)\,=\,0 \ .\] It is easy to check that the current density vector \[{\bf P}\equiv\big(-r^2\exp(-\epsilon\theta)u_ru_\theta\, , \, {\frac 1 2} r^2\exp(-\epsilon\theta)u_r^2-{\frac 1 2} \exp(\epsilon\theta)u_\theta^2\big)\] satisfies the $\mu$-conservation law \[D_i{\bf P}_i\,=\,-\epsilon {\bf P}_2 \ .\] According to the above remark on the (local) equivalence of the $\mu$-symmetry $X$ to the standard symmetry $\widetilde X\,=\,\gamma\,X\,=\,\exp(\epsilon\theta)\,{\partial/\partial\theta}$, the (standard) conservation law $D_i\widetilde{{\bf P}}^i=0$ also holds, with \[\widetilde{{\bf P}}\equiv \Big(-r^2 u_ru_\theta\, , \, {\frac 1 2} r^2 u_r^2- {\frac 1 2}\exp(2\epsilon\theta)u_\theta^2\Big)\ .\] \end{example} \section{Conclusions} I have shown that the notion of $\lambda$-symmetry, and the related procedures for studying differential equations, can be conveniently extended to the case of dynamical systems. The use and the interpretation of this notion become particularly relevant when the DS is a Hamiltonian system, and even more so if the symmetry is inherited from an invariant Lagrangian: in this context, indeed, the notions of $\Lambda$-constant of motion and of Noether $\Lambda$-conservation rule can be introduced in a natural way and compared with one another. Similarly, the symmetry properties of the Euler-Lagrange equations and of the Hamiltonian ones can be compared, and some reduction techniques for the equations can be conveniently introduced. Finally, I have shown that the $\Lambda$-invariance of the Lagrangians in the context of the DS is a special case of a more general and richer situation, where several independent variables are present and a $\Lambda$-conservation rule of very general form holds. Another interesting problem is the nontrivial relationship between $\lambda$ (or $\Lambda$, or $\mu$) symmetries and the standard ones. An aspect of this problem has been mentioned in the previous section of this paper. In different situations, this may involve the introduction of nonlocal symmetries and other concepts in differential geometry, as briefly indicated in the Introduction, which clearly go beyond the scope of the present contribution.
\section{Introduction} Mkn421 is a bright, nearby ($z$=0.03) BL Lac object classified as an HBL (High energy peaked BL Lac, Padovani \& Giommi 1996) since its Spectral Energy Distribution (SED) peaks (in a $\nu f(\nu)$~vs~$\nu$ representation) in the UV/X-rays. Mkn421 is one of a few extragalactic objects (with Mkn501, 1ES2344+514 and PKS2155-304, all HBLs) so far detected at TeV energies (e.g. Punch et al. 1992), where it shows tremendous variability, down to timescales of about 15 minutes (Gaidos et al. 1996). Mkn421 was repeatedly monitored with the Whipple, HEGRA and CAT Cherenkov telescopes and, whenever possible, simultaneously observed with X-ray satellites. The X-ray emission is also highly variable, with distinct differences between the soft and hard X-rays. Multi-wavelength campaigns have shown correlated flux changes between the X-ray and TeV regions (Takahashi et~al. 1996). In this paper we report the results of a {\it Beppo}SAX~ (Boella et al. 1997a) TOO observation of Mkn421 during a high intensity state. The source was hard and showed significant temporal and spectral variability on timescales down to 500-1000 seconds. A comparison with previous {\it Beppo}SAX~ observations carried out when Mkn421 was less luminous also shows remarkable spectral variations. As seen in two other HBL BL Lacs, namely Mkn501 (Pian et al. 1998) and 1ES2344+514 (Giommi et al. 1999), both observed by {\it Beppo}SAX~ during high states, in Mkn 421 too the peak of the synchrotron power moves significantly to higher energies, well into the X-ray band. These findings exploit the unique {\it Beppo}SAX~ spectral coverage (from 0.1 up to 300 keV), which allows the emission from the high energy tail of the emitting electron distributions to be observed. \section{Observation and Data analysis} The {\it Beppo}SAX~ Narrow Field Instruments (NFI) observed Mkn421 on 22 June 1998 as part of a Target Of Opportunity (TOO) program dedicated to the study of different types of AGN in high states. Mkn421 was also observed earlier by {\it Beppo}SAX~ in April and May 1997 as part of the normal observation program. The TOO observation was triggered when Mkn421 was detected in one of the {\it Beppo}SAX~ Wide Field Cameras (WFC, Jager et al. 1997) at a flux level of $\sim$20 mCrab, that is, in a high state compared with previous observations. The WFC trigger is very effective (as demonstrated by the {\it Beppo}SAX~ Gamma Ray Burst experience, e.g. Costa 1998) since it allows a much faster response compared to other triggering methods. Figure 1 shows the X-ray light curve of Mkn421 during June 1998 (day 880 = May 31 1998). In the top panel, filled circles represent one-day averaged WFC measurements in the 2-10 keV energy band (4-7 orbits, depending on the primary pointing). The error, as given by the rms between different orbits, is 20\%. The first point in the plot (open circle) indicates the flux measured by the WFC during a single available orbit, and the uncertainty here is correspondingly higher. The star indicates the flux seen during our observation of June 22, 1998. The dashed lines identify the flux levels recorded during two {\it Beppo}SAX~ NFI exposures on this source in 1997 (Guainazzi et al. 1998, Fossati et al. 1998). The bottom panel shows the RXTE-ASM one-day average light curve in the 2-10 keV energy range (RXTE-ASM public archive). Variations of the order of a factor of two are evident on time scales of a few days.
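For reference, the mCrab levels quoted in this section can be converted into approximate physical fluxes; a minimal sketch, assuming (this is our assumption, not a value from this paper) the commonly used 2-10 keV Crab flux of $\sim2.4\times10^{-8}$ erg cm$^{-2}$ s$^{-1}$:
\begin{verbatim}
CRAB_2_10_KEV = 2.4e-8  # erg cm^-2 s^-1, assumed reference value

def mcrab_to_flux(mcrab):
    """Convert a 2-10 keV intensity in mCrab to erg cm^-2 s^-1."""
    return mcrab * 1e-3 * CRAB_2_10_KEV

for level in (20, 40, 50):  # WFC trigger level and the June 1998 range
    print(f"{level} mCrab ~ {mcrab_to_flux(level):.1e} erg cm^-2 s^-1")
\end{verbatim}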
The peak near day 884 (= June 4 1998) in the ASM light curve coincides with the largest flux recorded by the {\it Beppo}SAX~ WFC. The flux of about 40-50 mCrab in June 1998 is by far the largest X-ray flux ever observed from Mkn 421. Note that in June 1998, Mkn421 was always detected in a rather high state. The {\it Beppo}SAX~ NFI used for the observation of Mkn 421 consist of a Low Energy Concentrator Spectrometer (LECS, Parmar et al. 1997) sensitive between 0.1 and 10 keV; three identical Medium Energy Concentrator Spectrometers (MECS, Boella et al. 1997b) covering the 1.5-10 keV band; and two co-aligned high energy instruments, the High Pressure Scintillator Proportional Counter (HPGSPC, Manzo et al. 1997) and the Phoswich Detector System (PDS, Frontera et al. 1997) operating in the 4-120 keV and 15-300 keV bands respectively. The MECS was composed at launch of three identical units. On 1997 May 6$^{\it th}$ a technical failure caused the switch-off of unit MECS1. All observations after this date are performed with two units (MECS2 and MECS3). The LECS is operated during spacecraft dark time only; therefore, LECS exposure times are usually shorter than the MECS ones by a factor of 1.5-3. The PDS and the HPGSPC are collimated instruments with a FWHM of about 1.4 degrees. The PDS consists of four phoswich units, and is normally operated with the two collimators in rocking mode, that is, with two phoswich units pointing at the source while the other two monitor the background. The two halves are swapped every 96 seconds. This default configuration was also used during our observation of Mkn421. The net source spectra have been obtained by subtracting the off-source counts from the on-source counts. The effective exposure time was 32516 seconds in the MECS, 11082 seconds in the LECS, 13912 in the PDS and 13750 in the HPGSPC. Standard data reduction was performed using the software package ``SAXDAS'' (see http://www.sdc.asi.it/software and the Cookbook for BeppoSAX NFI spectral analysis, Fiore, Guainazzi \& Grandi 1998). Data are linearized and cleaned from Earth occultation periods and unwanted periods of high particle background (satellite passages through the South Atlantic Anomaly). The LECS, MECS and PDS background is relatively low and stable (variations of at most 30\% during the orbit) thanks to the satellite's low-inclination orbit (3.95 degrees). Data have been accumulated for Earth elevation angles $>5$ degrees and magnetic cut-off rigidity $>6$. For the PDS data we adopted a fine energy- and temperature-dependent Rise Time selection, which decreases the PDS background by $\sim 40 \%$. This improves the signal to noise ratio of faint sources by a factor of about 1.5 (Frontera et al. 1997, Perola et al. 1997, Fiore, Guainazzi \& Grandi 1998). Data from the four PDS units and the two MECS units have been merged after equalization, and single MECS and PDS spectra have been accumulated. We extracted spectra from the LECS and MECS using 8 arcmin radius regions. LECS and MECS background spectra were extracted from blank sky fields, from regions of the same size in detector coordinates.
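The background subtraction described above can be sketched, per channel or band, as follows; this is a generic illustration with made-up numbers (it is not the actual SAXDAS procedure):
\begin{verbatim}
import numpy as np

def net_rate(on_counts, off_counts, t_on, t_off, scale=1.0):
    """Background-subtracted count rate with 1-sigma Poisson error.
    `scale` rescales the background (e.g. for different extraction
    areas; = 1 for the rocking on/off collimator configuration)."""
    rate = on_counts / t_on - scale * off_counts / t_off
    err = np.sqrt(on_counts / t_on**2 + scale**2 * off_counts / t_off**2)
    return rate, err

# Hypothetical PDS-like numbers over the 13912 s exposure
print(net_rate(on_counts=3.2e4, off_counts=2.6e4,
               t_on=13912.0, t_off=13912.0))
\end{verbatim}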
\begin{table*} \centering \caption{Mkn421 spectral fits} \begin{tabular}{lcccccc} \hline \hline model (0.1-100 keV) & $\alpha_E$ & $\alpha_H$ & $E_{0}^a$ & $\beta^b$ & $E_{f}^{a,c}$ & $\chi^2$ (d.o.f.)\\ \hline April 1997 & & & & & & \\ PL+ABS$^{\star}$ & 2.4 & -- & -- & -- & -- & 2086 (136) \\ CurvedPL+ABS$^{\star}$ & 1.4 & 2.7 & 3.91 & 0.32 & -- & 112.8 (136) \\ \hline May 1997 & & & & & & \\ PL+ABS$^{\star}$ & 2.4 & -- & -- & -- & -- & 921.8 (136) \\ CurvedPL+ABS$^{\star}$ & 2.0 & 2.8 & 3.89 & 0.84 & -- & 155.0 (136) \\ \hline June 1998 & & & & & & \\ PL+ABS$^{\star}$ & 2.2 & -- & -- & -- & -- & 4027 (201) \\ CurvedPL+ABS$^{\star}$ & 1.8 & 2.4 & 2.7 & 1.08 & -- & 291 (201) \\ CurvedPL+ABS$^{\star}$+Highecut & 1.8 & 2.3 & 1.72 & 1.1 & 30 & 239 (201) \\ \hline \hline \end{tabular} $^{\star}N_H$ = 1.6 $\times$ $10^{20}$ cm$^{-2}$ FIXED; $^a$ in keV; $^b$ = curvature radius; $^c$ Folding Energy \end{table*} \begin{figure} \epsfig{ file = mkn421_fig1.ps, height=8.5cm} \caption{{\it Beppo}SAX~-WFC light curve (top panel) compared with the RXTE-ASM one-day averages (bottom panel) during the {\it Beppo}SAX~ observations of June 1998.} \end{figure} \section{Results} \subsection{Temporal Analysis} \begin{figure} \epsfig{ file =mkn421_fig2.ps, height=8.5cm, angle=-90 } \caption{MECS light curve (1.3--10 keV) with a binning time of 500 s} \end{figure} Figure 2 shows the MECS 1.3--10 keV light curve in bins of 500 seconds. Variations of about 25\% are present on time-scales of 8-10 ks (see the rise from time = 24 ks to time = 32 ks). Variations of smaller amplitude are present down to 500-1000 second timescales (see for example the events at time = 24, 46 and 62 ks in Figure 2). These variations are accompanied by strong spectral variability. This is shown in Figure 3, where we plot the light curves in the PDS 13-50 keV band, the LECS 0.1-0.7 keV band and the MECS ``soft'' 1.3-3 keV and ``hard'' 5-10 keV bands, along with the MECS hard/soft hardness ratio (HR) (from top to bottom). The bin size used, 2850 seconds, roughly corresponds to half of the satellite orbit, and helps in illustrating how the LECS data are actually acquired for part of each orbit only. Therefore, they are not completely simultaneous with the MECS and PDS data. This can introduce an offset between the normalization of the LECS and that of the other instruments in spectral fits (see next section). The comparison of the light curves in different energy bands (in particular the two MECS light curves and the LECS light curve) shows that the source hardens while brightening, in agreement with the results of Giommi et al. (1990), Takahashi et al. (1996) and Sambruna et al. (1996). Unfortunately, the statistics in the PDS are not good enough to allow us to search for an extension of this trend at energies higher than 10 keV. The behaviour of the MECS HR suggests that the hard X-rays lead the soft X-rays. Figure 4 shows the 5-10 keV/1.3-3 keV hardness ratio HR plotted against the intensity (1.3-3 keV + 5-10 keV count rate), in bins of 5700 seconds (roughly one orbit). Numbers indicate progressive orbits. Starting from orbit number 1, the HR first decreases with decreasing count rate, and subsequently increases again, but at a higher rate, following a clockwise pattern. Takahashi et al. (1996) found a similar behaviour in an ASCA observation, when the source was at a flux level similar to or slightly higher than during our {\it Beppo}SAX~ TOO observation, indicating again that hard X-rays lead the soft X-rays.
To measure the lag time we have calculated the Discrete Correlation Function (Edelson \& Krolik 1988) between the MECS soft and hard light curves. We find a lag of 1700$\pm$600 seconds (90\% confidence interval). \begin{figure} \epsfig{ file =mkn421_fig3.ps, height=8.5cm, width=15cm ,angle=-90 } \caption{Light curves in the 13--50 keV (PD), 0.1--0.7 keV (LE), 1.3--3 keV (ME-S) and 5--10 keV (ME-H) bands, and the hardness ratio (ME-H/ME-S)} \end{figure} \begin{figure} \epsfig{ file =mkn421_fig4.ps, height=8.5cm} \caption{Hardness ratio (5-10 keV/1.3-3 keV) versus intensity (1.3-3 keV + 5-10 keV). The April 97 points are plotted in the bottom-left corner, while the TOO points are located in the top-right corner. The numbers beside our data points indicate progressive orbits (5700 s each)} \end{figure} \subsection{Spectral Analysis} Spectral fits were performed using the XSPEC 9.0 software package and public response matrices as of the November 1998 release. PI channels are rebinned so as to sample the instrument resolution with the same number of channels at all energies where possible, and to have at least 20 counts per bin. This guarantees the applicability of the $\chi^2$ method in determining the best-fit parameters, since the distribution in each channel can be considered Gaussian. Constant factors have been introduced in the fitting models in order to take into account the intercalibration systematic uncertainties between instruments (Cusumano et al. 1999, Fiore, Guainazzi \& Grandi 1998). The expected factor between LECS and MECS is about 0.9. In the fits we use the MECS as the reference instrument and constrain the LECS factor to vary in the small range 0.8-1. The expected factor between MECS and PDS is 0.8 and we constrain the PDS factor to vary in the range 0.7-0.9. The energy ranges used for the fits are: 0.1--4 keV for the LECS (channels 11--400), 1.65--10 keV for the MECS (channels 37--220) and 13--100 keV for the PDS. Many existing narrow-band X-ray spectra of blazars are sufficiently well described by a single power-law model. However, recent wide-band X-ray spectra of Mkn421 (Fossati et al. 1998, Guainazzi et al. 1998), PKS 2155-304 and other bright BL Lacs require more complex models like a broken power-law or a curved spectrum (e.g. Giommi et al. 1998, Wolter et al. 1998). Figure 5 shows the {\it Beppo}SAX~ LECS, MECS, HPGSPC, PDS, 0.1-100 keV spectrum of Mkn421 measured during the June 1998 observation and fitted to a simple power-law plus low energy absorption due to an $N_H$ column equal to the Galactic value along the line of sight. A single power-law model is clearly an unacceptable representation of the data ($\chi^2_{\nu}$ = 23). Figure 5 demonstrates that this is not due to a localized feature but to an incorrect modelling of the spectrum across the entire 0.1-100 keV energy range. In fact, the residuals plotted at the bottom of the figure show that a large convex spectral curvature is present. A gradual steepening with energy is in line with the Synchrotron Self Compton (SSC) mechanism, a widely accepted scenario to explain the SED of HBL objects (Ghisellini et al. 1998). We have thus fitted our data to a {\it curved spectrum} (Matt, private communication) defined as follows $$ F(E)=E^{-[(1-f(E))\,\alpha_E + f(E)\,\alpha_H]} $$ where $f(E)=[1-\exp(-E/E_{0})]^{\beta}$, $\alpha_E$ and $\alpha_H$ are the low and high energy asymptotic energy indices, $E_{0}$ is a break energy and $\beta$ is the curvature radius.
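The behaviour of this curved law (the energy index runs smoothly from $\alpha_E$ at $E\ll E_0$ to $\alpha_H$ at $E\gg E_0$) is easy to evaluate numerically; a minimal sketch of the functional form, with normalization and Galactic absorption omitted and the June 1998 best-fit values taken from Table 1:
\begin{verbatim}
import numpy as np

def curved_powerlaw(E, alpha_E, alpha_H, E0, beta, norm=1.0):
    """Curved power law: the energy index interpolates between
    alpha_E (E << E0) and alpha_H (E >> E0); beta sets how sharp
    the transition is.  E in keV."""
    f = (1.0 - np.exp(-E / E0))**beta
    return norm * E**(-((1.0 - f) * alpha_E + f * alpha_H))

E = np.logspace(-1, 2, 7)  # 0.1 - 100 keV
print(curved_powerlaw(E, alpha_E=1.8, alpha_H=2.4, E0=2.7, beta=1.08))
\end{verbatim}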
The column density has been fixed to the Galactic value along the line of sight of Mkn421 ($N_H = 1.6 \times 10^{20}$ cm$^{-2}$, Dickey \& Lockman 1990). This model has been successfully applied to the previous {\it Beppo}SAX~ observation of Mkn421 (Guainazzi et al. 1998) and to PKS2155-304 (Giommi et al. 1998). This curved model gives acceptable fits to the 1997 {\it Beppo}SAX~ observations. However, the June 1998 spectrum requires additional curvature; a good fit ($\chi^{2}$=189/153) can be obtained by adding a high energy cutoff to the model (see Figure 6). The analysis of the residuals in Figure 6 shows a deviation of the order of 30\% below 0.5 keV, probably due to the carbon-edge-like feature in the LECS energy range. Moreover, at the lowest energies (0.1-0.2 keV), a more curved model seems to be required. The results of our spectral analysis on all the observations considered in this paper are summarized in Table 1. Our results on the April 1997 observation of Mkn421 (taken from the {\it Beppo}SAX~ public archive) are in good agreement with the original analysis presented in Guainazzi et al. (1998). \begin{figure} \epsfig{ file = mkn421_fig5.ps, height=8.5cm, angle=-90} \caption{{\it Beppo}SAX~ broad band spectrum of Mkn421 during the June 1998 TOO, fitted with a simple power law. The filled circles and the triangles represent the HPGSPC and PDS data respectively.} \end{figure} \begin{figure} \epsfig{ file = mkn421_fig6.ps, height=8.5cm, angle=-90} \caption{{\it Beppo}SAX~ broad band spectrum of Mkn421 during the June 1998 TOO, fitted with a curved power law plus a high energy cutoff. The filled circles and the triangles represent the HPGSPC and PDS data respectively.} \end{figure} \section{Comparison with previous observations} {\it Beppo}SAX~ observed Mkn421 in several campaigns between 1997 and 1998; in particular, the April 1997 and May 1997 observations have been considered in the present work to study the spectral variations of the source. Figure 4 compares the MECS HR measured during the June 1998 observation with that measured during three April 1997 observations, when the source was in a quiescent state (Guainazzi et al. 1998). During the April 1997 observations the HR increases with increasing intensity until it saturates (Guainazzi et al. 1998), in line with what was found in a similar quiescent state by Giommi et al. 1990 and Sambruna et al. 1994. The HR in June 1998 is much higher than during the 1997 observations, indicating that the saturation possibly concerns single variability events only. The 1998 HR does not saturate, indicating either that the observation did not catch the source at the peak of a variability cycle, or that HR saturation does not apply to variability events in high source states. The comparison of the June 1998 TOO observation of Mkn421 with the 1997 {\it Beppo}SAX~ observations shows strong spectral variations. In Figure 7 we have plotted the ratio between the spectrum seen during our TOO observation and that during the April 97 (open squares) and May 97 (filled circles) observations. It is evident that the source hardened significantly when it brightened (up to a spectral ratio of 4-5 at 10 keV). This hardening is more pronounced above 1 keV. Below this energy the spectral ratio is less than a factor of 2, showing that most of the variability occurred at higher energies. In Figure 8 we report the 0.1-100 keV spectra of Mkn 421 during the three observations considered, multiplied by the frequency $\nu$.
The maxima in this plot identify the region where most of the synchrotron power is emitted. During the high state the peak is located at $\log\nu \sim 17.4$, or about 1 keV, while during the other observations the peak was below 1 keV. \section{Discussion and Conclusions} The spectral energy distribution of high-energy peaked BL Lacs, from radio to $\gamma$-ray energies, in the $\nu$-$\nu$F$_{\nu}$ representation, is characterized by two peaks: one in the UV/soft X-ray band and the second at GeV to TeV energies. This spectral energy distribution is generally interpreted as due to incoherent synchrotron radiation followed by inverse Compton emission (e.g. Ghisellini, Maraschi \& Dondi 1996). The radio to X-ray emission is produced by the synchrotron process, as strongly suggested by the connection between the X-ray and the IR, optical and UV spectra. Inverse Compton scattering is instead responsible for the $\gamma$-ray emission and the correlated flaring at X-ray and TeV energies (Takahashi et al. 1996). Many theoretical models have been proposed to explain the spectral and timing variability observed in high-energy peaked BL Lacs. The comparison between different spectral states can give us information on the source electron distribution. The 1997 and the 1998 {\it Beppo}SAX~ data show both an increase in power and a shift in the synchrotron peak. This suggests that during high states the number of energetic electrons is higher, and that a simple increase of the magnetic field or of the electron cutoff energy is not sufficient to explain the data. A possible explanation could be the injection of energetic electrons caused by shocks in a relativistic jet (Kirk et al. 1998). A behavior similar to that of Mkn421 has been observed in Mkn 501 (Pian et al. 1998, Ghisellini 1998) and 1ES2344+514 (Giommi et al. 1999). On the other hand, the study of the short term variability can provide detailed information on the cooling mechanisms and the source geometry. The characteristic short time scale change of the hardness ratio as a function of the count rate and the lags between the hard and soft photons have been interpreted by Takahashi et al. (1996) in terms of synchrotron cooling. They estimated a time lag between the hard (2-7.5 keV) and soft (0.5-1 keV) energy bands of the order of one hour during an ASCA observation when the source was in a state similar to that seen during our {\it Beppo}SAX~ TOO observation. Following Takahashi et al. (1996) and assuming $t_{sync} \sim 1.2 \times 10^{3}\, B^{-3/2} E_{keV}^{-1/2}\, \delta^{-1/2}$ s, where $E_{keV}$ is the observed energy, and using their values of $B \sim 0.2$ G and $\delta = 5$, we have calculated that the time lag between the hard (5-10 keV) and soft (1.3-3.5 keV) emission in our observation should be about 1500 seconds ($\sim$25 min), which is consistent with the value found with the DCF method (see Section 3.1). In a recent work Chiaberge and Ghisellini (1999) studied the time dependent behaviour of the electron distribution injected in the emitting region and proposed a comprehensive model to describe the evolution of the synchrotron and self Compton spectra. They pointed out that the cooling time can be shorter than the light crossing time ($R/c$); if this is the case, the particle distribution will evolve on timescales shorter than $R/c$, and the observer will see the contribution of the different spectra produced in each {\it slice} of the source.
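This back-of-the-envelope estimate is easy to reproduce; in the following minimal sketch the representative band energies are our assumption (geometric means of the band edges), so the exact number depends on this choice:
\begin{verbatim}
import numpy as np

def t_sync(E_keV, B=0.2, delta=5.0):
    """Observed synchrotron cooling time (s), using the scaling
    t_sync ~ 1.2e3 * B**(-3/2) * E_keV**(-1/2) * delta**(-1/2)."""
    return 1.2e3 * B**-1.5 * E_keV**-0.5 * delta**-0.5

E_soft = np.sqrt(1.3 * 3.5)   # ~2.1 keV (assumed representative energy)
E_hard = np.sqrt(5.0 * 10.0)  # ~7.1 keV (assumed representative energy)
lag = t_sync(E_soft) - t_sync(E_hard)
print(f"predicted soft-band delay: {lag:.0f} s")  # ~1.9e3 s
\end{verbatim}
The resulting delay ($\sim$1900 s with these assumptions) is of the same order as the quoted $\sim$1500 s and falls within the 1700$\pm$600 s DCF interval of Section 3.1.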
Taking into account light crossing time effects, the different cooling times of the electrons emitting at various frequencies can cause appreciable time delays, of the order of those observed by us and by Takahashi et al. (1996). \begin{figure} \epsfig{file=mkn421_fig7.ps, height=9.5cm} \caption{Spectral ratio of the June 1998 TOO observation to those of April 97 (open squares) and May 97 (filled circles). Both ratios increase with energy, reaching a maximum of 4-5 at 10 keV.} \end{figure} \begin{figure} \epsfig{file=mkn421_fig8.ps, height=9.5cm} \caption{0.01-100 keV spectra of Mkn421 during the three {\it Beppo}SAX~ observations, multiplied by the frequency.} \end{figure} \bigskip We thank G. Matt for providing the XSPEC gradually changing index power-law model used to fit the data in this paper.
\section{Introduction}\label{sec:intro} \IEEEPARstart{U}{nmanned} vehicles such as unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs) and unmanned surface vehicles (USVs) have been widely adopted in industrial applications due to reduced safety hazards for humans and potential cost savings \cite{doi:10.1142/S2301385020500089, GALCERAN20131258}. Tethered systems are commonly employed to extend working duration, enhance communication quality and prevent loss of unmanned vehicles. For autonomous tethered robots, it is important that the planning algorithms consider the risk of the tether being entangled with the surroundings, which will limit the reachable space of the robots and may even cause damage. In this work, we consider the trajectory planning problem for multiple tethered robots in a known workspace with static obstacles. Each robot is attached to one end of a cable that is flexible, not fully stretched, and allowed to lie on the ground and be driven over by the robots. The other end of the cable is attached to a fixed base station. We consider the cable to have a low-friction surface so that it can slide over the surfaces of the static obstacles or other cables. An entanglement occurs when the cables of the robots physically interact in such a way that the movement of at least one of the robots is restricted. Consider the scenario shown in Figure \ref{fig: entangle1g}, where two ground robots' cables cross each other. If the robots continue to move in the directions indicated by the arrows, the cables will be stressed and at least one robot's movement will be affected. Situations like this are more likely to happen when more robots operate in the same workspace. \begin{figure}[!t] \centering \includegraphics[width=0.8\linewidth]{figs/entangle1_g.png} \caption{\footnotesize Top-down view of a workspace to illustrate an entanglement situation. Note the Z-order of the cables (shown as blue and yellow curves) at two intersections between them.} \label{fig: entangle1g} \end{figure} While there exists an abundant path and trajectory planning literature for multi-robot navigation, most of it is not safe for direct application to a tethered multi-robot scenario. Among the works that address the tethered robot planning problem, many focus on the single-robot case and use a representation of homotopy to identify the path or cable configuration; feasible paths are found by searching in a graph augmented with the homotopy classes of the paths. However, the existing representation of homotopy lacks the capability of representing the interaction of multiple mobile robots efficiently, and slow online graph expansion means that the graph has to be built in advance of online planning. Those works that do consider multiple tethered robots present centralized and offline approaches that do not consider static obstacles. In this work, we present NEPTUNE, a decentralized and online trajectory generation framework for \underline{n}on-\underline{e}ntangling \underline{p}lanning for multiple \underline{t}ethered \underline{un}manned v\underline{e}hicles. Firstly, we present a novel multi-robot tether-aware representation of homotopy that encodes the interaction among the cables of the planning robot and its collaborating robots, and the static obstacles.
The benefit of using this representation is twofold: efficient evaluation of the risk of the planning robot getting entangled with other robots and static obstacles, and efficient estimation of the tether length needed to determine the reachability of a destination under the given tether length constraints. The trajectory planning consists of front-end trajectory finding and back-end optimization. The front end finds a feasible, collision-free, non-entangling and goal-reaching polynomial trajectory, using kinodynamic A* trajectory search in a graph augmented with the introduced multi-robot tether-aware representation. The back-end trajectory optimization refines the first few segments of the feasible trajectory to generate a trajectory with lower control effort while still satisfying the non-collision and non-entangling requirements. Each robot generates its own trajectory in a decentralized and asynchronous manner, and broadcasts its future trajectory through a local network for others to access. To the best of our knowledge, NEPTUNE is the first trajectory planner for multiple tethered robots that considers static obstacles and runs online in a decentralized manner. The main contributions of this paper are summarized as follows: \begin{itemize} \item A detailed procedure to construct a multi-robot tether-aware representation of homotopy, which enables efficient checks on the risk of entanglement, as well as efficient computation of the required cable length to reach a target; \item A complete tether-aware planning framework consisting of a kinodynamic trajectory finder and a trajectory optimizer; \item Comparisons with existing tethered robot planning approaches in their respective application scenarios (single-robot obstacle-rich and multi-robot obstacle-free), showing significant improvements in computation time; \item Simulations using up to $8$ robots in an environment with obstacles, showing an average computation time of less than $70$ ms and a high mission success rate; \item Flight experiments using three UAVs verifying the practicality of the approach. We open-source the algorithm for the benefit of the community. \end{itemize} In this paper, we mainly consider the mobile robots to be UAVs, but the approach presented is also applicable to other types of vehicles such as UGVs and USVs. \section{Related Works}\label{sec:related} \subsection{Tethered Robot Path Planning} Interestingly, most of the early works that consider the tethered robot planning problem focus on multiple robots instead of a single robot. Sinden \cite{sinden1990tethered} investigated the scheduling of tethered robots to visit a set of pre-defined locations in turn, so that the cables do not cross one another during robot motion. A bipartite graph is constructed, with colored edges representing the ordered cable configurations. In \cite{hert1996ties,hert1999motion}, the authors addressed the path planning of tethered robots to their target positions with specified final cable configurations. A directed graph is used to represent the motion constraints and the ordering of the movements; the output is a piece-wise linear path for each robot and waiting times at specified locations. Zhang \textit{et al.} \cite{zhang2019planning} extend the result of \cite{hert1996ties} by providing an analysis of a more efficient motion profile in which all robots move straight and concurrently.
These works are very different from our work in terms of problem formulation and approach: (1) they consider taut cables that form straight lines between robots and bases, whereas we consider slack cables that are allowed to slide over one another; (2) static obstacles are not considered in these works; (3) their approaches are offline and centralized while our work presents a decentralized online approach; (4) the outputs of these algorithms are piece-wise linear paths whereas our approach provides dynamically feasible trajectories. The development of planning algorithms for a single tethered robot typically focuses on navigating the robot around obstacles to reach the goal while satisfying the cable length constraint. Early work \cite{xavier1999shortest} and its recent derivative \cite{brass2015shortest} find shortest paths in a known polygonal environment by tracing back along the previous path to look for turning points in a visibility graph-like construction. Recent developments in homotopic path planning using graph-search techniques \cite{igarashi2010homotopic,bhattacharya2012topological,hernandez2015comparison,bhattacharya2018path} provide the foundation for a series of new works on tethered robot planning. In particular, Kim \textit{et al.} \cite{kim2014path} use a homotopy invariant (h-signature) to determine the homotopy classes of paths by constructing a word for each path, as will be described in Section \ref{sec: prelim}. A homotopy augmented graph is built with the graph nodes carrying not only a geometric location but also the homotopy class of the path leading to the location; graph search techniques can then be applied to find the optimal path subject to the grid resolution. Kim \textit{et al.} \cite{kim2015path} and Salzman \textit{et al.} \cite{salzman2015optimal} improve on the graph search and graph building processes of \cite{kim2014path}, respectively, by applying a multi-heuristic A* \cite{Aine2016} search algorithm and by replacing the grid-based graph with a visibility graph, and McCammon \textit{et al.} \cite{mccammon2017planning} extend the result to a multi-point routing problem. The homotopy invariant in these works, which is for a 2-D static environment, is insufficient to represent the complex interactions when multiple tethered robots are involved (justification will be provided in Section \ref{subsec: homotopy}). Furthermore, these works use a curve shortening technique to determine whether an expanded node satisfies the cable length constraint, which is computationally expensive and leads to slow graph expansion. It is thus a common practice to construct the augmented graph in advance. Bhattacharya \textit{et al.} \cite{bhattacharya2018path} proposed a homotopy invariant for multi-robot coordination, which can potentially be applied to centralized planning of tethered multi-robot tasks. However, the high dimensionality of the graph and the complexity of identifying homotopy-equivalent classes (the word problem) mean that even building the graph for a simple case is time-consuming (more than 30 s for $3$ robots in a $7\times7$ grid). In our approach, we also use augmented graph search techniques, but we develop an efficient representation of homotopy that records the cable interaction with other robots and facilitates the computation of the required cable length, so that the graph expansion and search can be executed online for on-demand targets.
Teshnizi \textit{et al.} \cite{Teshnizi2014computing} proposed an online decomposition of the workspace into a graph of cells for single-robot path searching, in which a new cell is created when an event of cable-cable crossing or cable-obstacle contact is detected. However, infinite friction between cable surfaces is assumed, which deviates from a practical cable model. Furthermore, a large number of cells can be expected during the graph search in an obstacle-rich environment, because a new cell can be created for each visible vertex and no heuristic for choosing the candidate cells is provided. It is also worth noting that several works in recent years leverage braid groups to characterize the topological patterns of robots' trajectories and facilitate the planning of multiple robots \cite{Diaz2017multirobot, Mavrogiannis2020, mavrogiannis2022analyzing}. Despite promising results, none of these works has been extended to the planning of tethered robots. \subsection{Decentralized Multi-robot trajectory planning} Most of the existing tethered robot planning works generate piece-wise linear paths. In this section, we review some smooth trajectory generation techniques for decentralized multi-robot planning, with a focus on methods applied to UAVs. In general, decentralized multi-robot trajectory generation includes synchronous and asynchronous methods. Synchronous methods such as \cite{chen2015decoupled,luis2019trajectory} require trajectories to be generated at the same planning horizon for all robots, whereas asynchronous methods do not have such a restriction and hence are more suitable for online application. In \cite{liu2018towards}, the authors presented a search-based multi-UAV trajectory planning method, where candidate polynomial trajectories are generated by applying discretized control inputs, and those that violate the collision constraint (i.e., result in a non-empty intersection between the robots' polygonal shapes) are removed. In \cite{zhou2021ego}, the collision-free requirement is enforced as a penalty function in the overall objective function of the nonlinear optimization problem. In \cite{zhou2021decentralized}, a similar approach to \cite{zhou2021ego} is taken, but a newly developed trajectory representation \cite{wang2021geometrically} is employed (as compared to the B-spline in \cite{zhou2021ego}) and the polynomial and time parameters are optimized concurrently. In \cite{tordesillas2021mader}, robots' trajectories are converted into convex hulls, and the collision-free constraint is guaranteed by optimizing a set of planes that separate the convex hulls of the collaborating robots from those of the planning robot. To save computational resources and ensure short-term safety, an intermediate goal is chosen, which is the closest point to the goal within a planning radius. {\color{blue}As pointed out by the comparisons in \cite{tordesillas2021mader,luis2019trajectory}}, centralized and offline trajectory generation approaches typically require much longer computation times to generate a feasible solution than the online decentralized approaches, and the difference grows with the number of robots involved. Furthermore, the absence of online replanning means that the robots cannot handle in-flight events, such as the addition of a new robot into the fleet or the appearance of non-cooperative agents.
Hence, we believe that a decentralized planner with online replanning is more suitable for practical applications, and the framework presented in this paper has the flexibility to integrate new features such as prediction and avoidance of dynamic obstacles and non-cooperative agents. Similar to \cite{tordesillas2021mader}, our approach uses an asynchronous planning strategy with a convex hull representation of trajectories. {\color{blue}However, the greedy strategy used by \cite{tordesillas2021mader} may result in an intermediate goal near a large non-traversable region, where the robot has to make sharp turns to avoid the obstacles, causing inefficient trajectories. In contrast, our approach selects intermediate goals derived from a feasible, goal-reaching and efficient (based on evaluation criteria such as length) trajectory generated by our front-end kinodynamic trajectory finder.} \section{Preliminaries}\label{sec: prelim} \subsection{Notation} In this paper, $\|\vb{x}\|$ denotes the $2$-norm of the vector $\vb{x} \in \mathbb{R}^{n}$, $\vb{x}^{(i)}$ denotes the $i$-th order derivative of vector $\vb{x}$ and $\myset{I}_{n}$ denotes the set consisting of integers $1$ to $n$, i.e., $\myset{I}_{n} = \{1,\dots,n\}$. The notation frequently used in this paper is shown in Table \ref{tab: notation}. More symbols will be introduced when they appear in the paper. \begin{table}[t] \def1.1{1.1} \caption{Notation} \begin{tabular}{| M{0.19\linewidth}| m{0.702\linewidth} |} \hline Symbol & Meaning \\ \hline n & Number of robots. \\ \hline m & Number of static obstacles. \\ \hline $\aug{\mathcal{W}}$, $\mathcal{W}$ & 3-dimensional workspace and its 2-D projection, i.e., $\aug{\mathcal{W}}=\{(x,y,z)|(x,y)\in{\mathcal{W}}\}$. \\ \hline {$\aug{\vb{p}}_i$, $\vb{p}_i$} & Position of robot $i$ and its 2-D projection, i.e., $\aug{\vb{p}}_i=[\vb{p}_i, p^{z}_{i}]^\top\in\mathbb{R}^3$, $\vb{p}_i=[p_{i}^{x}, p_{i}^{y}]^\top\in\mathbb{R}^2$. \\ \hline {$\aug{\vb{p}}^{\text{term}}_i$}, $\aug{\vb{p}}^{\text{inter}}$ & Terminal goal and intermediate goal position, $\aug{\vb{p}}^{\text{term}}_i=[p^{\text{term},\text{x}}_i,p^{\text{term},\text{y}}_i,p^{\text{term},\text{z}}_i]^\top\in\mathbb{R}^3$. \\ \hline $\init{\aug{\vb{p}}}$, $\init{\aug{\vb{v}}}$, $\init{\aug{\vb{a}}}$ & The position, velocity and acceleration of the robot at time $\init{t}$, $\in\mathbb{R}^3$ \\ \hline $\mathcal{P}(k)$ & Set of robots' 2-D positions at time $k$, i.e. $\mathcal{P}(k)\coloneqq\{\vb{p}_j(k)|j\in \myset{I}_n\}$. \\ \hline $\mathcal{P}_{j,l}$ & Set consisting of the positions of robot $j$ at discretized times, $\mathcal{P}_{j,l}\coloneqq\{\vb{p}_j(t)|t\in[\init{t}+lT, \init{t}+(l+\frac{1}{\sigma})T,\dots, \init{t}+(l+1)T]\}$ \\ \hline $\init{t}$ & The start time of the planned trajectory, referenced to a common clock. \\ \hline {$\maxop{\vb{v}}_i$, $\minop{\vb{v}}_i$, $\maxop{\vb{a}}_i$, $\minop{\vb{a}}_i$} & Upper and lower bounds of velocity and acceleration. \\ \hline {$O_j$} & 2-D projection of the obstacle $j$. \\ \hline {$\mathbf{O}$} & Set of obstacles, i.e., $\mathbf{O}\coloneqq\{O_j|j\in \myset{I}_m\}$. \\ \hline {$\vb{b}_i$} & 2-D base position, $\vb{b}_i=[b_i^{x},b_i^{y}]^\top$. \\ \hline {$\phi_i$} & Cable length of robot $i$. \\ \hline {$\vb{c}_{i,l}(t)$} & $l$-th contact point of robot $i$ at time $t$, $\in\mathbb{R}^2$ \\ \hline {$\numcon_i(t)$} & Number of contact points of robot $i$ at time $t$, $\in\mathbb{Z}$. \\ \hline $\mathcal{C}_i(k)$ & List of contact points of robot $i$ at time $k$, i.e.
$\mathcal{C}_i(k)=\{\vb{c}_{i,f}(k)|f\in\myset{I}_{\numcon_i(k)}\}$. \\ \hline $\beta$ & Order of polynomial trajectory. \\ \hline $\aug{\mathbf{E}}_l$, $\mathbf{E}_l$ & $\aug{\mathbf{E}}_l\in\mathbb{R}^{(\trajorder+1)\times3}$ consists of the coefficients of a 3-dimensional $\trajorder$-th order polynomial. ${\mathbf{E}}_l\in\mathbb{R}^{(\trajorder+1)\times2}$ consists of the coefficients for the X and Y dimensions. \\ \hline $\vb{g}(t)$ & The monomial basis, $\vb{g}(t)=[1, t, \dots,t^\trajorder]^\top$. \\ \hline $o_{j,0}$, $o_{j,1}$ & The virtual segments of obstacle $j$. \\ \hline $r_{j,0}$, $r_{j,1}$ & The virtual segments of robot $j$. \\ \hline $\gbf{\zeta}_{j,0}$, $\gbf{\zeta}_{j,1}$ & The two points on the surface of $O_j$ that define $o_{j,0}$, $o_{j,1}$. \\ \hline $h(k)$ & The homotopy representation of the robot, expressed as a word. \\ \hline size($h(k)$) & The total number of entries (letters) in the word $h(k)$. \\ \hline $h(k)[\,l\,]$ & The $l$-th entry in $h(k)$. \\ \hline index($\numcon$) & The position of the obstacle entry $ \textornot{o}_{j,f}$ in $h(k)$ whose surface point is the $\numcon$-th contact point, i.e. $h(k)[\text{index}(\numcon)]= \textornot{o}_{j,f}$, $\vb{c}_\numcon=\gbf{\zeta}_{j,f}$. \\ \hline \rule{0pt}{0.3cm} $\overbar{\vb{a}\vb{b}}$ & Line segment bounded by points $\vb{a}$ and $\vb{b}$. \\ \hline \rule{0pt}{0.4cm} $\overleftrightarrow{\vb{a} \vb{b}}$ & Line passing through $\vb{a}$ and $\vb{b}$. \\ \hline $\diamond$ & Concatenation operation. \\ \hline $\maxop{u}$ & The maximum magnitude of control input, $>0$. \\ \hline $\sigma_\text{u}$ & $2\sigma_\text{u}+1$ is the number of sampled control inputs for each axis in the kinodynamic search. \\ \hline $\sigma$ & Number of discretized points for evaluating the trajectory of each planning interval. \\ \hline $T$ & Duration of each piece of trajectory. \\ \hline $\front{\eta}$, $\back{\eta}$, $\maxop{\eta}$ & $\front{\eta}$ is the number of polynomial curves in the front-end output trajectory, $\back{\eta}$ is the number of polynomial curves to be optimized in the back end. $\maxop{\eta}$ is a user-chosen parameter such that $\back{\eta}=\text{min}(\front{\eta},\maxop{\eta})$. \\ \hline $\mathcal{Q}_{l}$ & Set of position control points for a trajectory with index $l$. \\ \hline \end{tabular} \label{tab: notation} \end{table} \subsection{Homotopy and Shortest Homotopic Path}\label{subsec: homotopy} \begin{figure}[!t] \centering \includegraphics[scale=0.5]{figs/hsig_orig1.png} \caption{\footnotesize Example of generating a homotopy invariant (h-signature) of a curve: the solid blue path has an initial word of $ o_2o_3o_4o_4^-o_3^-o_3o_5$ which can be reduced to $ o_2o_3o_5$. The dashed blue line is the shortened homotopy-equivalent path to the original path using the curve shortening technique in \cite{kim2014path}, which also represents the shortened cable configuration of a robot if it follows the original path and its base coincides with the start point.} \label{fig:hsig} \end{figure} We briefly review the concepts related to homotopy. Consider a workspace of arbitrary dimension that contains obstacles. Two curves in this workspace, sharing the same start and end points, are \emph{homotopic} (or belong to the same \emph{homotopy class}) if and only if one can be continuously deformed into the other without traversing any obstacles. In our case, a curve can represent a path or a cable in the workspace.
A \emph{homotopy invariant} is a representation of homotopy that uniquely identifies the homotopy class of a curve. In a 2-dimensional workspace consisting of $m$ obstacles, there exists a standard procedure to compute a homotopy invariant \cite{Allen2002Algebraic} (as illustrated in Figure \ref{fig:hsig}): first, construct a set of non-intersecting rays $o_1,o_2,\dots,o_m$, each emanating from a reference point $\gbf{\zeta}_i$ inside an obstacle; then, a representation (word) is constructed by tracing the curve from start to goal and appending the letter of each ray it crosses, distinguishing a right-to-left crossing by an extra superscript `-1'. The word is then reduced by cancelling consecutive crossings of the same ray in opposite directions. This reduced word is called the h-signature and is a homotopy invariant, i.e., two curves with the same start and goal belong to the same homotopy class if and only if they have the same h-signature. The h-signature introduced above records ray-crossing events that indicate robot-obstacle interactions; however, it cannot record the key cable-cable interaction events needed to identify entanglements. Suppose we construct an h-signature for the path taken by each robot in Figure \ref{fig: entangle1g}, treating each robot as an obstacle by attaching to it a ray emanating in the positive $y$ direction. The entanglement situation requires each robot to take a path that crosses the ray of the other robot once; hence, the resulting h-signature records only one ray-crossing event and does not indicate any potential entanglement. A tethered robot following a path from its base to a goal should have its cable configuration homotopic to the path. Hence, a shorter homotopy-equivalent curve may be computed to approximate the cable configuration and to estimate the required cable length to reach the goal. In \cite{kim2014path, kim2015path, salzman2015optimal}, a common curve shortening technique is used, which requires tracing back the original path to obtain a list of turning points, i.e., points where the shortened path changes direction and whose line of sight to the previous turning point does not intersect any obstacles. An example of this shortened curve is shown in Figure \ref{fig:hsig}. \section{Formulation and Overview}\label{sec: formulation} In this work, we consider a 3-dimensional simply connected and bounded workspace with constant vertical limits, $\aug{\mathcal{W}}=\{(x,y,z)|(x,y)\in{\mathcal{W}},z\in[\minop{z},\maxop{z}]\}$, where $\maxop{z}$ and $\minop{z}$ are the vertical bounds. The workspace contains $m$ obstacles, whose 2-D projections are denoted as $O_1, \dots,O_m$. Consider a team of $n$ robots. Each robot $i$ is connected to a cable of length $\phi_i$, and the end of the cable is attached to a base fixed on the ground at $\vb{b}_i$. We reasonably assume that the bases are placed at the boundary of the workspace, so that they do not affect the robots or the cables. As mentioned in Section \ref{sec:intro}, we consider flexible and slack cables that are pulled towards the ground by gravity. In the case of a UGV, most of the cable lies on the ground; in the case of a tethered UAV, the part of the cable near the UAV is lifted into the air. A robot is allowed to cross (move over) the cables of other robots, which results in its cable sliding over the cables of others.
We do not allow robots to move directly on top of the obstacles, as this would create an ambiguous cable configuration: we could not determine whether the cable of a robot stays on top of an obstacle or falls to the ground, and, in the latter case, to which side of the obstacle it falls. Such a restriction allows us to express most of the cable interactions in the 2-D plane, regardless of the types of robots involved. Our task is to compute trajectories for each robot in the team to reach its target $\aug{\vb{p}}^{\text{term}}_i$. The trajectories should be continuous up to acceleration ({\color{blue}to prevent aggressive attitude changes when controlling the UAVs}), satisfy the velocity and acceleration constraints $\maxop{\vb{v}}_i$, $\minop{\vb{v}}_i$, $\maxop{\vb{a}}_i$, $\minop{\vb{a}}_i\in\mathbb{R}^3$ and be free of collisions and entanglements. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figs/flowchart.png} \caption{\footnotesize Overview of the system. The blocks coloured in red are the core modules that will be introduced in detail. The state estimation module takes inputs from sensors such as IMUs, cameras and Lidars, which are not shown to save space. The controller module generates commands for the actuators of the robot, which are also omitted in this diagram.} \label{fig:flowchart} \end{figure} Figure \ref{fig:flowchart} illustrates the core components of our approach. In our approach, the robots share a communication network through which they can exchange information. Within each robot $i$, several modules run concurrently. The Homotopy Update module maintains an updated topological status of the robot based on other robots' latest information and the robot's current position. The output of this module is a representation of homotopy in the form of a word, and a list of contact points $\vb{c}_{i,1}(t),\vb{c}_{i,2}(t),\dots,\vb{c}_{i,\numcon_i(t)}(t)\in\mathbb{R}^2$, where $\numcon_i(t)\in\mathbb{Z}$ is the number of contact points at time $t$. The contact points are the estimated positions at which the robot's cable would touch the obstacles if the cable were fully stretched tight. Similar to the turning points described in Section \ref{sec: prelim}, the contact points form a shorter homotopy-equivalent cable configuration. The procedure for updating the representation and determining the contact points will be detailed in Section \ref{sec: homotopy}. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figs/asyplanning2.png} \caption{\footnotesize An illustration of decentralized and asynchronous planning using a common clock time. From time $t_0$ to $t_2$, robot $j$ computes its trajectory for iteration $k_j-1$ (we use the subscript $j$ in $k_j$ to differentiate the iteration number of robot $j$ from that of robot $i$, as they do not share the same number of planning iterations). At time $t_2$, robot $j$ finishes computing its trajectory and publishes it to the communication network. Between $t_2$ and $t_3$, robot $i$ receives robot $j$'s updated trajectory ({\color{blue}there is a difference between the times of publishing and receiving due to communication delay}), and it is able to compute an updated trajectory in iteration $k$ that avoids the future trajectories of robot $j$. } \label{fig:asyplanning} \end{figure} The trajectory handler stores the future trajectory of the robot in the form of $\eta_i$ pieces of polynomial curves, each expressed as polynomial coefficients $\aug{\mathbf{E}}_l\in\mathbb{R}^{(\trajorder+1)\times3}$ for a 3-dimensional $\trajorder$-th order polynomial. Hence, the future position of the robot at time $t$ can be predicted as \begin{align} \aug{\vb{p}}(t)=\aug{\mathbf{E}}_l^\top \vb{g}(t-t_{i,l}), \,\forall t\in[t_{i,l},t_{i,l+1}],\,l=0\dots\eta_i-1,\nonumber \end{align} where $t_{i,0}$ is the current time and $[t_{i,l},t_{i,l+1}]$ is the validity period of the $l$-th piece. {\color{blue}Each robot plans iteratively and online,} i.e., new trajectories are generated while the robot is moving towards the goal; each planning iteration is asynchronous with the planning iterations of the other robots, as illustrated in Figure \ref{fig:asyplanning}. At every planning iteration, the robot computes a trajectory starting at time $\init{t}$, which is ahead of the robot's current time by the estimated computation time for the trajectory. The inputs to the planning module are the updated topological status from the Homotopy Update module, the initial states of the robot, $\init{\aug{\vb{p}}}$, $\init{\aug{\vb{v}}}$, $\init{\aug{\vb{a}}}$ at time $\init{t}$, and the future trajectories of other robots. The planning module includes a front-end trajectory finder and a back-end trajectory optimizer, which are described in detail in Sections \ref{sec: search} and \ref{sec: opt} respectively. Once a planning iteration finishes, the output trajectory replaces the existing trajectory in the trajectory handler from time $\init{t}$ onwards. At every iteration, robot $i$ broadcasts the following information to the network: (1) its current position $\aug{\vb{p}}_{i}(t)$; (2) its future trajectory in the form of polynomial coefficients; and (3) its current list of contact points $\vb{c}_{i,1}(t),\dots,\vb{c}_{i,\numcon_i(t)}(t)$. This message is received by all other robots $j\in\myset{I}_n\backslash i$. The published trajectory is annotated with a common clock known to all robots, so that other robots are able to take into account both the spatial and temporal profiles of the trajectory in their subsequent planning iterations. In the following sections, we describe all computations and planning from the perspective of a robot $i$. All the other robots are called the collaborating robots of robot $i$. When no ambiguity arises, we omit the subscript $i$ in many expressions for simplicity. \section{Multi-robot Homotopy Representation}\label{sec: homotopy} We present a multi-robot tether-aware representation of homotopy that is constructed from the positions of the robots as well as their cable configurations. The procedure for updating the homotopy representation in every iteration is detailed in Algorithm \ref{alg: homotopyupdate}, which includes updating the word (Section \ref{subsec: lineseg}), reducing the word (Section \ref{subsec: reduce}) and updating the contact points (Section \ref{subsec: contact}).
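For concreteness, a letter of the word can be stored as a small tuple and the word itself as an ordered list, so that appending and removing letters are cheap operations. The following Python sketch shows one possible encoding; it is purely illustrative, and the field layout is our own choice rather than something prescribed by the algorithms below. \begin{verbatim}
# One letter of the word h(k), encoded as a tuple (kind, j, f):
#   kind: 'o' for an obstacle segment, 'r' for a robot segment
#   j   : index of the obstacle or robot owning the segment
#   f   : 0 or 1, selecting one of its two virtual segments
h = []                  # the word starts empty, h(0) = ' '
h.append(('o', 1, 0))   # crossed obstacle 1's segment o_{1,0}
h.append(('r', 2, 1))   # crossed robot 2's cable line r_{2,1}
\end{verbatim}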
\begin{algorithm} \DontPrintSemicolon \SetKwBlock{Begin}{function}{end function} \Begin($\text{homotopyUpdate} {(}h{(}k-1{)}, \mathcal{P}{(}k{)}, \mathcal{P}{(}k-1{)}, \{\mathcal{C}_j{(}k{)}\}_{j\in\myset{I}_n\backslash i}, \{\mathcal{C}_j{(}k-1{)}\}_{j\in\myset{I}_n}, \mathbf{O}{)}$) { $h(k)\leftarrow$UpdateWord$($$h(k-1)$, $\mathcal{P}\left(k-1\right)$, $\mathcal{P}\left(k\right)$, $\{\mathcal{C}_j(k-1)\}_{j\in\myset{I}_n}$, $\{\mathcal{C}_j{(}k{)}\}_{j\in\myset{I}_n\backslash i}$, $\mathbf{O}$$)$\; $h(k)$$ \leftarrow $reduction$($$h(k)$, $\{\mathcal{C}_j{(}k{)}\}_{j\in\myset{I}_n\backslash i}$$)$\; $\mathcal{C}_i(k)$ $\leftarrow$updateContactPoints$($$\vb{p}(k-1)$, $\vb{p}(k)$, $h(k)$, $\mathcal{C}_i(k-1)$$)$\; \Return{$h(k)$, $\mathcal{C}_i(k)$} } \caption{Homotopy update}\label{alg: homotopyupdate} \end{algorithm} \subsection{Construction of Virtual Line Segments and Updating the Word}\label{subsec: lineseg} For each robot, the construction of its representation of homotopy requires setting up a set of virtual line segments from the obstacles and the collaborating robots, so that crossing one of these segments indicates an interaction with the corresponding obstacle or robot. To obtain the virtual segments of the static obstacles, we first construct $m$ non-intersecting lines $o_j$, $\forall j\in \myset{I}_m$, such that each of them passes through the interior of an obstacle $O_j$. Each line $o_j$ is separated into three segments by two points on the surface of the obstacle, $\gbf{\zeta}_{j,0}$ and $\gbf{\zeta}_{j,1}$. The virtual segments are the two segments outside $O_j$, which are labelled as $o_{j,0}$ and $o_{j,1}$ respectively, as shown in Figure \ref{fig:hsig_new}. To build the virtual line segments of the collaborating robots, we use the positions and the contact points obtained from the communication network. The shortened cable configuration of robot $j$, labelled as $r_{j,1}$, is a collection of $\numcon_j+1$ line segments constructed by joining in sequence its base point $\vb{b}_j$, its contact points $\vb{c}_{j,1},\vb{c}_{j,2},\dots,\vb{c}_{j,\numcon_j}$ and finally its position $\vb{p}_j$. The extension of $r_{j,1}$ beyond the robot position $\vb{p}_j$ until the workspace boundary is labelled as $r_{j,0}$. We call $r_{j,1}$ and $r_{j,0}$ the cable line and extension line of robot $j$ respectively. Figure \ref{fig:hsig_new} shows the actual cable configuration and the corresponding line segments of robot $j$. Having constructed the virtual line segments of all robots and obstacles, a word for a path is obtained by appending, in sequence, the letter corresponding to each line segment crossed, regardless of the crossing direction. An example is shown in Figure \ref{fig:hsig_new}, where the word of the black dashed path for robot $i$ is $o_{1,0}o_{2,0}o_{2,0}o_{2,0}r_{j,1}o_{3,1}$. \begin{figure*}[!t] \centering \includegraphics[width=0.9\linewidth]{figs/hsig_new_long.png} \caption{\footnotesize {\color{blue}Two tethered robots in a workspace consisting of three static obstacles.} The yellow curve indicates the cable of robot $j$; the yellow double-dashed line indicates the line segments constructed by robot $i$ about robot $j$. In this case, the contact points of robot $j$ are the same as $\gbf{\zeta}_{2,1}$ and $\gbf{\zeta}_{3,0}$. If robot $j$ remains static at $\vb{p}_j$, the word of the black dashed path for robot $i$ is $o_{1,0}o_{2,0}o_{2,0}o_{2,0}r_{j,1}o_{3,1}$.} \label{fig:hsig_new} \end{figure*} In an online implementation, the word, denoted as $h(k)$, can be updated iteratively based on incremental movements of the robots. To ensure that each robot starts with an empty word $h(0)=$` ', we make the following assumption: \begin{ass}{(Initial Positions)}\label{ass: initialpos} The base positions and the initial positions of the robots are placed such that for each robot $i$, its initial simplified cable configuration $\overbar{\vb{p}_i(0)\vb{b}_i}$ does not intersect with any virtual segments $o_{l,f}$ and $r_{j,f}(0)$, $\forall l\in\myset{I}_m$, $j\in\myset{I}_n\backslash i$, $f\in\{0,1\}$, where $r_{j,1}(0)=\overbar{\vb{p}_j(0)\vb{b}_j}$. \end{ass} \begin{algorithm} \DontPrintSemicolon \SetKwBlock{Begin}{function}{end function} \Begin($\text{UpdateWord}{(}h{(}k-1{)},\mathcal{P}{(}k-1{)}, \mathcal{P}{(}k{)}, \{\mathcal{C}_j{(}k-1{)}\}_{j\in\myset{I}_n}, \{\mathcal{C}_j{(}k{)}\}_{j\in\myset{I}_n\backslash i}, \mathbf{O}{)}$) { $h(k)\leftarrow h(k-1)$\; $r_{j,f}(k),r_{j,f}(k-1) \leftarrow \text{getVirtualSegments}()$\; \uIf{$\overbar{\vb{p}_i(k-1)\vb{p}_i(k)}$ crosses $\textornot{o}_{j,f}$, $j\in \myset{I}_m, f\in\{0,1\}$, \label{ln: homo_normal_start}} { Append ${o}_{j,f}$ to $h(k)$ } \uIf{$r_{j,f}$ sweeps across $\vb{p}_i$ {\color{blue}during time $k-1$ to $k$}, $j\in \myset{I}_n\backslash i, f\in\{0,1\}$,} { Append ${r}_{j,f}$ to $h(k)$ }\label{ln: homo_normal_end} \uIf{$r_{j,0}$ sweeps across $\vb{b}_i$ {\color{blue}during time $k-1$ to $k$}, $j\in \myset{I}_n\backslash i$,\label{ln: homo_spe_start}} { Append ${r}_{j,0}$ to $h(k)$ }\label{ln: homo_spe_end} return $h(k)$ } \label{updateword} \caption{{\color{blue}Word updating}}\label{alg: crossing} \end{algorithm} The procedure for updating the word at every discretized time $k$ is shown in Algorithm \ref{alg: crossing}. There are two cases where a letter can be appended to the word: \begin{enumerate} \item an active crossing case (lines \ref{ln: homo_normal_start}-\ref{ln: homo_normal_end}), where robot $i$ crosses a virtual line segment during its movement from $\vb{p}(k-1)$ to $\vb{p}(k)$, or, when a collaborating robot $j$ is moving, robot $i$ is swept across by one of robot $j$'s virtual segments $r_{j,f}$, $f\in\{0,1\}$, as shown in Figure \ref{fig: crossing}; \item a passive crossing case (lines \ref{ln: homo_spe_start}-\ref{ln: homo_spe_end}), where the extension line of a collaborating robot, $r_{j,0}$, $j\in \myset{I}_n\backslash i$, has swept across robot $i$'s base point, $\vb{b}_i$. \end{enumerate} The second case is needed to ensure that the representation is consistent across different valid starting positions of the robots. As shown in Figure \ref{fig: crossing}, both positions $\vb{p}_j(k-2)$ and $\vb{p}_j(k)$ are valid starting positions of robot $j$, i.e., they can be chosen as $\vb{p}_j(0)$ because they satisfy Assumption \ref{ass: initialpos} with $\vb{p}_i(k)$ chosen as the initial position of robot $i$. Therefore, both positions should induce an empty word for robot $i$. The only way to ensure this is to append the letter $r_{j,0}$ twice when robot $j$ moves from $\vb{p}_j(k-2)$ to $\vb{p}_j(k)$: once when $r_{j,0}$ sweeps across robot $i$ and again when $r_{j,0}$ sweeps across $\vb{b}_i$.
A reduction procedure, described in Section \ref{subsec: reduce}, then cancels the consecutive identical letters, resulting in an empty word. \begin{figure}[!t] \centering \includegraphics[width=1.0\linewidth]{figs/crossing2.png} \caption{\footnotesize {\color{blue}An illustration of robot $i$ crossing the extension line of robot $j$. Between time $k-2$ and time $k-1$, robot $i$ has crossed segment $r_{j,0}$, because $\vb{p}_i(k-2)$ and $\vb{p}_i(k-1)$ lie on different sides of $r_{j,0}(k-2)$ and $r_{j,0}(k-1)$, respectively. } It can also be viewed as robot $i$ being swept across by $r_{j,0}$. Hence, $r_{j,0}$ should be appended to $h(k-1)$. At time $k$, robot $i$'s base is swept across by $r_{j,0}$, hence $r_{j,0}$ should be appended to $h(k)$. } \label{fig: crossing} \end{figure} \begin{rem} In practice, Assumption \ref{ass: initialpos} can always be satisfied by adjusting the initial positions and the base positions of the robots so that intersections with the virtual segments are avoided. For a given set of initial positions and base positions, valid virtual segments $o_j$ can be constructed by selecting a reference point from the interior of each obstacle (the reference points should not coincide), and then sampling (uniformly or randomly) directions for extending the points into sets of parallel lines. The set of lines that satisfies Assumption \ref{ass: initialpos} is chosen. This procedure only needs to run at the initialization stage, and $o_{j,0}$ and $o_{j,1}$ can be saved for online use. \end{rem} \subsection{Reduced Form of Homotopy Representation}\label{subsec: reduce} A critical step in establishing a homotopy invariant for a topological space is to identify a set of words that are topologically equivalent to the empty word (corresponding to the elements of the fundamental group that can be mapped to the identity element \cite{bhattacharya2018path}), and remove them from the original homotopy representation. For example, in \cite{Allen2002Algebraic}, consecutive identical letters, corresponding to crossing and then un-crossing the same segment, can be removed from the original word to obtain a reduced word. In our setting, not only consecutive identical letters but also combinations of letters representing paths or sub-paths that loop around intersections among virtual segments should be removed. For example, in Figure \ref{fig:hsig_new}, the dashed blue path that loops around the intersection of $r_{j,1}$ and $o_{1,1}$ can be topologically contracted to a point (it is null-homotopic), hence its word $o_{1,1}r_{j,1}o_{1,1}r_{j,1}$ can be replaced with an empty string (for further understanding of the topological meaning of null-homotopic loops, readers may refer to Appendix \ref{apd: homotopy3d}). The reduction procedure is shown in Algorithm \ref{alg: reduction}, {\color{blue}and consists of the following steps}: \begin{enumerate} \item given an unreduced homotopy representation $h(k)$ at time $k$, we check if it contains any pairs of identical letters $\chi_{j,f}$, $\chi\in\{o, r\}$, $f\in\{0,1\}$, and extract the string $\chi_{j,f}\chi^1\chi^2\dots\chi^\omega\chi_{j,f}$ from $h(k)$, where $\omega$ is the number of letters in between the pair; \item we check if the virtual segment $\chi_{j,f}$ intersects with all of the segments $\chi^1$, $\chi^2$, $\dots$, $\chi^{\omega}$ at time $k$; if so, we remove both letters $\chi_{j,f}$ from $h(k)$; \item if a removal happens, go to step 1) and start checking again. \end{enumerate} \begin{algorithm} \DontPrintSemicolon \SetKwBlock{Begin}{function}{end function} \Begin($\text{Reduction}{(}h{(}k{)},\{\mathcal{C}_j{(}k{)}\}_{j\in\myset{I}_n\backslash i}{)}$) { Reduced $\leftarrow$ True\; \While{Reduced} { Reduced $\leftarrow$ False\; \For{$\iota\in[1,\text{size}(h(k))-1]\cap\mathbb{Z}$} { \For{$j\in[\iota+1,\text{size}(h(k))]\cap\mathbb{Z}$} { \uIf{$h(k)[\iota]==h(k)[j]$} { remove $h(k)[\iota]$ and $h(k)[j]$\; Reduced $\leftarrow$ True\; break } \uElseIf{$h(k)[\iota]$ does not cross $h(k)[j]$} {break} } \uIf{Reduced}{break} } } return $h(k)$ } \label{reduction} \caption{{\color{blue}Word reduction}}\label{alg: reduction} \end{algorithm} The reasoning behind the procedure is that, in a static environment, the string $\chi_{j,f}\chi^1\chi^2\dots\chi^\omega\chi_{j,f}$ constitutes part of a loop if the segment $\chi_{j,f}$ intersects with all of the segments in between, $\chi^1$, $\dots$, $\chi^\omega$. The loop is of the form $\chi_{j,f}\chi^1\chi^2\dots\chi^\omega\chi_{j,f}\diamond\Lambda(\chi^1\chi^2\dots\chi^\omega)$, where $\Lambda(\cdot)$ denotes a particular ordering of the input letters. Figure \ref{fig: loop} shows two particular forms of loops for $\omega=3$. Knowing that the loop is an identity element in the fundamental group, we can write $\chi_{j,f}\chi^1\chi^2\dots\chi^\omega\chi_{j,f} = \Lambda(\chi^1\chi^2\dots\chi^\omega)^{-1}$. Assuming $\Lambda(\chi^1\dots\chi^\omega)$ takes the form in Figure \ref{fig: loopa}, i.e., $\Lambda(\chi^1\dots\chi^\omega) = \chi^\omega\chi^{\omega-1}\dots\chi^1$, we can establish the relation $\chi_{j,f}\chi^1\chi^2\dots\chi^\omega\chi_{j,f} = \chi^1\chi^2\dots\chi^\omega$, equivalent to an operation that removes both letters $\chi_{j,f}$. One special case exists in the procedure: although the virtual segments $r_{j,0}$ and $r_{j,1}$ share a common point at robot $j$'s position, we do not consider them to be intersecting; otherwise, any loop around the robot, e.g., the dashed green path in Figure \ref{fig:hsig_new}, would be reduced to a single letter and lose its topological meaning. \begin{figure} \centering \subcaptionbox{ \footnotesize \label{fig: loopa}}[0.45\linewidth]{\includegraphics[height=2.5cm]{figs/loop1.png}} \subcaptionbox{\footnotesize \label{fig: loopb}}[0.45\linewidth]{\includegraphics[height=2.5cm]{figs/loop2.png}} \caption{\footnotesize Two loops around the intersections among 4 segments. \\{\color{blue}(a) Loop expressed as $\chi_{j,f}\chi^1\chi^2\chi^3\chi_{j,f}\chi^3\chi^2\chi^1$. (b) Loop expressed as $\chi_{j,f}\chi^1\chi^2\chi^3\chi_{j,f}\chi^1\chi^2\chi^3$.} } \label{fig: loop} \end{figure} We note that the reduction rules introduced here are insufficient to generate a true homotopy invariant for the topological space considered in our scenario, which would require a much more involved construction in 4-dimensional (X-Y-Z-Time) space and is beyond the scope of this paper. However, we show that the reduced homotopy representation captures important topological information that allows us to check for entanglement of cables. The condition we use to identify entanglement can be stated as a two-entry rule: a robot is considered to be risking entanglement at time $k$ if its reduced homotopy representation contains two or more letters corresponding to the same collaborating robot, i.e., $h(k)$ contains two or more entries of $r_{j,f}$ for some robot $j$, $j\in\myset{I}_n\backslash i$, $f\in\{0,1\}$. We give the justification for using this rule in Proposition \ref{prop: twoentries} below.
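Once a word has been reduced, checking the two-entry rule amounts to counting the robot-related letters per collaborating robot. The following Python sketch illustrates the check, assuming the illustrative tuple encoding of letters introduced at the beginning of this section; it is not taken from our implementation. \begin{verbatim}
from collections import Counter

def risks_entanglement(h):
    """Two-entry rule: True if the reduced word h contains two
    or more letters r_{j,f} referring to the same robot j
    (entries with f = 0 and f = 1 count towards the same j)."""
    counts = Counter(j for (kind, j, f) in h if kind == 'r')
    return any(c >= 2 for c in counts.values())

# Example: a word with two entries related to robot 2
# triggers the rule.
assert risks_entanglement([('r', 2, 1), ('r', 3, 0), ('r', 2, 1)])
\end{verbatim}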
\begin{prop}\label{prop: twoentries} Consider two robots moving in a 2-D workspace $\myset{W}$ without any static obstacles, each tethered to a fixed base at $\vb{b}_1$ and $\vb{b}_2$. The starting positions of the robots satisfy Assumption \ref{ass: initialpos}. An entanglement occurs only if the reduced homotopy representation of at least one of the robots, $h_i(k)$, $i\in\myset{I}_2$, contains at least two entries related to the other robot, $\textornot{r}_{j,f}$, $j\in\myset{I}_2\backslash i$, $f\in\{0,1\}$. \end{prop} \begin{proof} Figure \ref{fig: entangle1g} illustrates the simplest entanglement scenario between two robots; any other 2-robot entanglement develops from this scenario by the robots circling around each other more times, and hence yields more entries in their homotopy representations. It is therefore sufficient to evaluate the simplest case. It is clear that a sequence of crossing actions is required to realize this case: robot $i$ crosses the cable line of robot $j$, $r_{j,1}$, after which robot $j$ crosses the cable line of robot $i$, $r_{i,1}$, $i\in\myset{I}_2$, $j\in\myset{I}_2\backslash i$. Due to the requirement on the initial positions (Assumption \ref{ass: initialpos}), before robot $i$ crosses the cable of robot $j$, its extension line $r_{i,0}$ will sweep across either robot $j$ or base $j$, causing the reduced word of robot $j$ to be $h_j={r}_{i,0}$. Hence, after robot $j$ crosses $r_{i,1}$, its word is guaranteed to have two entries, $h_j = r_{i,0}r_{i,1}$. \end{proof} \begin{figure*} \centering \subcaptionbox{ \label{fig: ent_pro1}}[0.24\linewidth]{\includegraphics[width=\linewidth]{figs/entangle2_1.png}} \subcaptionbox{ \label{fig: ent_pro2}}[0.24\linewidth]{\includegraphics[width=\linewidth]{figs/entangle2_2.png}} \subcaptionbox{ \label{fig: ent_pro3}}[0.24\linewidth]{\includegraphics[width=\linewidth]{figs/entangle2_3.png}} \subcaptionbox{ \label{fig: ent_pro4}}[0.24\linewidth]{\includegraphics[width=\linewidth]{figs/entangle2_4.png}} \caption{ \footnotesize Two UAVs follow a series of movements, resulting in an entanglement situation. The black dashed paths indicate the paths each robot will follow until the goals. The blue and yellow curves are the cables of robot $i$ and robot $j$ respectively. (a) Robot $i$ moves in the negative $x$ direction. (b) Robot $j$ follows the indicated path, which crosses the cable of robot $i$ twice and passes an obstacle. (c) Robot $i$ moves in the positive $x$ direction. (d) Robot $i$'s movement is restricted because its cable is tangled with robot $j$'s cable.} \label{fig: ent2} \end{figure*} \begin{figure}[!t] \centering \includegraphics[width=0.6\linewidth]{figs/3-robot_ent.png} \caption{\footnotesize A 3-robot entanglement scenario. The solid lines and curves show the cable configurations with their Z-ordering. The double dashed line is a virtual segment. } \label{fig: 3-rob ent} \end{figure} For operations with $3$ or more robots, any pair-wise entanglements that involve only two robots can be detected using the same argument as in Proposition \ref{prop: twoentries}. Figure \ref{fig: 3-rob ent} illustrates a more complicated $3$-robot entanglement, where robot $i$'s motion is hindered by the cables of both robots $j$ and $l$ (removing either robot $j$ or robot $l$ will release robot $i$ from the entanglement).
This scenario is realized by the sequence of movements where robot $i$ (1) crosses the cable line of robot $j$, (2) crosses the extension line $r_{l,0}$ of robot $l$, and (3) crosses the cable line of robot $j$ again. The homotopy representation of robot $i$ will be $h_i=r_{j,1}r_{l,0}r_{j,1}$, and because segments $r_{l,0}$ and $r_{j,1}$ do not intersect, $h_i$ cannot be reduced further; hence the entanglement can be detected using the two-entry rule. {\color{blue}Considering obstacle-ridden environments,} Figure \ref{fig: ent2} illustrates an entanglement caused by two robots following their intended paths sequentially, which can also be identified using the two-entry rule, as robot $j$ has an irreducible word $h_j=r_{i,1}o_{1,1}r_{i,1}$ after following the movement in Figure \ref{fig: ent_pro2}. Intuitively, the two-entry rule identifies cases where a robot partially circles around another robot, as in Figure \ref{fig: entangle1g}, or has circled around part of the cable of another robot in a topologically non-trivial way, as for robot $j$ in Figure \ref{fig: ent_pro3} and for robot $i$ in Figure \ref{fig: 3-rob ent}. \subsection{Determination of Contact Points}\label{subsec: contact} The list of contact points is updated at every iteration, along with the update and reduction of the word. Intuitively, a contact point addition (or removal) occurs when a new bend of the cable is created at (or an existing bend is released from) the surface of the obstacles that the robot has passed. For efficiency, we adopt a simplification of the obstacle shapes, such that each obstacle $j$ is represented only as a thin barrier $\overbar{\gbf{\zeta}_{j,0}\gbf{\zeta}_{j,1}}$. The detailed procedure is described in Algorithm \ref{alg: funnel}, in which the timestamp $(k)$ in the expressions $\vb{c}_1(k)$, \dots, $\vb{c}_{\numcon_i(k)}(k)$ is omitted for brevity. This algorithm checks if the robot has crossed any lines linking a contact point to the surface points of the obstacles, or any lines linking two consecutive contact points; if such a crossing happens, contact points are added or removed accordingly. Figure \ref{fig: funnel} illustrates how Algorithm \ref{alg: funnel} works. \begin{algorithm} \DontPrintSemicolon \SetKwBlock{Begin}{function}{end function} \Begin($\text{updateContactPoints}{(}\vb{p}{(}k-1{)}, \vb{p}{(}k{)}, h{(}k{)}, \mathcal{C}_i{(}k-1{)}{)} $) { $\mathcal{C}_i(k)\leftarrow \mathcal{C}_i(k-1)$\; \While{$\numcon_i\geq1$ and \label{ln:contact1}\\$\overline{\vb{p}(k-1) \vb{p}(k)}$ intersects with $ \overleftrightarrow{\vb{c}_{\numcon_i-1} \vb{c}_{\numcon_i}}$} { \tcp{$\vb{c}_0 = \vb{b}$ in this algorithm} remove $\vb{c}_{\numcon_i}$ from $\mathcal{C}_i(k)$\; $\numcon_i\leftarrow \numcon_i-1$\label{ln:contact2} } \ForAll{$l\in [\text{index}(\numcon_i)+1,\text{size}(h(k))]\cap\mathbb{Z}$\label{ln:contact3}} { \uIf{$\exists j\in \myset{I}_m, f\in\{0,1\}$, s.t. $h(k)[\,l\,]==\textornot{o}_{j,f}$,} { \uIf{$\overline{\vb{p}(k-1) \vb{p}(k)}$ intersects with $\overleftrightarrow{\vb{c}_{\numcon_i} \gbf{\zeta}_{j,f}}$} { $\numcon_i\leftarrow \numcon_i+1$\; add $\vb{c}_{\numcon_i}\leftarrow\gbf{\zeta}_{j,f}$ to $\mathcal{C}_i(k)$\label{ln:contact4}\; } } } return $\mathcal{C}_i(k)$ }\label{funn} \caption{Contact point update}\label{alg: funnel} \end{algorithm} \begin{figure*} \centering \subcaptionbox{ \footnotesize time $k$ \label{fig: fun1a}}[0.29\linewidth]{\includegraphics[height=5.5cm]{figs/funnel1_1.png}} \subcaptionbox{\footnotesize time $k+1$ \label{fig: fun1b}}[0.33\linewidth]{\includegraphics[height=5.5cm]{figs/funnel1_2.png}} \subcaptionbox{\footnotesize time $k+2$ \label{fig: fun1c}}[0.36\linewidth]{\includegraphics[height=5.5cm]{figs/funnel1_3.png}} \caption{\footnotesize A sequence of movements of robot $i$. Figures (a) and (b) only show part of the workspace. The orange line segments are the shortened cable configuration at each time. The blue dashed lines are the lines intersected by the robot's movements, resulting in the addition or removal of contact points. (a) At time $k$, the robot does not have any contact points. $h(k)=\textornot{o}_{2,1}\textornot{o}_{1,0}$. (b) In the interval between $k$ and $k+1$, the robot crosses the line linking $\vb{b}_i$ and $\gbf{\zeta}_{1,0}$, causing $\gbf{\zeta}_{1,0}$ to become a contact point. $h(k+1) = \textornot{o}_{2,1}\textornot{o}_{1,0}\textornot{o}_{3,0}\textornot{o}_{4,1}$. (c) In the interval between $k+1$ and $k+2$, the robot crosses the line linking $\vb{c}_1$ and $\gbf{\zeta}_{4,1}$, causing $\gbf{\zeta}_{4,1}$ to become the second contact point. $h(k+2) = \textornot{o}_{2,1}\textornot{o}_{1,0}\textornot{o}_{3,0}\textornot{o}_{4,1}\textornot{o}_{5,0}\textornot{o}_{6,1}\textornot{o}_{7,0}$. If the robot moves from $\vb{p}(k+2)$ to $\vb{p}(k+3)$ in one time step, $\gbf{\zeta}_{4,1}$ will be removed from the contact points, and $\gbf{\zeta}_{3,0}$ and $\gbf{\zeta}_{5,0}$ will be added to the contact points; however, $\gbf{\zeta}_{3,0}$ is a wrongly added contact point in this case. } \label{fig: funnel} \end{figure*} We make the following assumption to ensure the algorithm works correctly: \begin{ass}\label{ass: cross_one} Consider $\mathcal{S}_{j,f}$ to be the set of lines linking the obstacle surface point $\gbf{\zeta}_{j,f}$ to the surface points of different obstacles and the base, $\mathcal{S}_{j,f}=\big\{\overleftrightarrow{\gbf{\zeta}_{j,f}\gbf{\zeta}}|\gbf{\zeta}\in\vb{b}_i\cup\{\gbf{\zeta}_{l,\iota}\}_{l\in\myset{I}_m\backslash j, \iota\in\{0,1\}}\big\}$. In one iteration interval between time $k-1$ and $k$, only one distinct line in each $\mathcal{S}_{j,f}$ can be crossed by robot $i$, i.e., multiple lines in $\mathcal{S}_{j,f}$ can be crossed by the robot only if they are coincident, $\forall j\in\myset{I}_m$, $f\in\{0,1\}$. \end{ass} Assumption \ref{ass: cross_one} prevents surface points from being wrongly identified as contact points, as in the case shown in Figure \ref{fig: fun1c}, where the robot moving from $\vb{p}(k+2)$ to $\vb{p}(k+3)$ would result in $\gbf{\zeta}_{3,0}$ being wrongly added as a contact point.
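We remark that the crossing checks used in Algorithms \ref{alg: crossing} and \ref{alg: funnel} reduce to a single geometric primitive: testing whether the motion segment $\overbar{\vb{p}(k-1)\vb{p}(k)}$ crosses a given virtual segment or line. The following Python sketch shows a standard orientation-based implementation of this primitive; it is given for illustration only and is not taken from our implementation. \begin{verbatim}
def orient(a, b, c):
    """Sign of the cross product (b-a) x (c-a): positive if c
    lies to the left of the directed line a->b, negative if to
    the right, zero if collinear."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def crosses_segment(p0, p1, q0, q1):
    """True if the motion segment p0-p1 properly crosses the
    virtual segment q0-q1."""
    return (orient(q0, q1, p0) * orient(q0, q1, p1) < 0 and
            orient(p0, p1, q0) * orient(p0, p1, q1) < 0)

def crosses_line(p0, p1, a, b):
    """True if the motion segment p0-p1 crosses the infinite
    line through a and b, as needed for the line checks in the
    contact-point update."""
    return orient(a, b, p0) * orient(a, b, p1) < 0
\end{verbatim}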
Under Assumption \ref{ass: cross_one}, the only case where multiple contact points need to be added or removed in one iteration is when multiple coincident lines are crossed; such a case is handled by the iterative intersection checks in Algorithm \ref{alg: funnel}, following the sequence in which the obstacles were added to $h(k)$ (lines \ref{ln:contact3} to \ref{ln:contact4}) and the reverse of the sequence in which the contact points were added to $\mathcal{C}_i(k)$ (lines \ref{ln:contact1} to \ref{ln:contact2}). In practice, Assumption \ref{ass: cross_one} holds as long as the algorithm runs at a sufficient rate, so that only a small movement of the robot is executed in every interval between consecutive iterations. We have the following statement on the property of the shortened cable configuration obtained using Algorithm \ref{alg: funnel}: \begin{prop}\label{prop: shortest} Consider a robot $i$ tethered to a base $\vb{b}_i$ and moving in a 2-D environment consisting of only thin barrier obstacles $\overbar{\gbf{\zeta}_{j,0}\gbf{\zeta}_{j,1}}$, $\forall j\in\myset{I}_m$. Let Assumptions \ref{ass: initialpos} and \ref{ass: cross_one} hold. At a time $k$, the consecutive line segments formed by linking $\vb{b}_i$, $\vb{c}_{i,1}(k)$, \dots, $\vb{c}_{i,\numcon_i(k)}(k)$ and $\vb{p}_i(k)$, where $\vb{c}_{i,1}(k)$, \dots, $\vb{c}_{i,\numcon_i(k)}(k)$ are obtained from running Algorithm \ref{alg: funnel}, represent the shortest cable configuration of the robot that is homotopic to the actual cable configuration. \end{prop} \begin{proof}[Proof] See Appendix \ref{apd: proofshortest}. \end{proof} The length of the shortened cable configuration is a lower bound on the actual shortest cable length due to the use of simplified obstacle shapes. In Section \ref{sec: search}, this length is compensated with an extra distance to the surface of the obstacles, to better approximate the required cable length. As will be shown in Section \ref{sec: simulation}, the use of Algorithm \ref{alg: funnel} enables more efficient feasibility checking than the expensive curve shortening algorithm used in \cite{kim2014path, kim2015path}. \begin{rem} Algorithm \ref{alg: funnel} is inspired by the classic funnel algorithm \cite{HERSHBERGER199463}, which is widely used to find shortest homotopic paths in triangulated polygonal regions \cite{Teshnizi2021}. {\color{blue}Compared to the funnel algorithm, Algorithm \ref{alg: funnel} is more computationally efficient, because the use of simplified obstacle shapes with fewer vertices and a maintained list of crossed obstacles in $h(k)$ reduces the number of crossing checks needed; furthermore, using Algorithm \ref{alg: funnel}, the shortest path can be obtained trivially by linking all contact points, while in the funnel algorithm an additional procedure is required to determine the shortest path (because the apex of the funnel may not be the last contact point). Algorithm \ref{alg: funnel} also provides memory savings compared to the funnel algorithm, as the latter requires saving a funnel represented as a double-ended queue (deque) consisting of the boundary points of the funnel, whose size is always greater than or equal to that of the list of contact points saved in our algorithm.} As will be described in Section \ref{subsec: entanglement}, the homotopy update procedure is called frequently to incrementally predict and evaluate the homotopy status of the potential trajectories.
Therefore, the frequent updates of the contact points result in significant computational and memory savings compared with the funnel algorithm with full triangulation. \end{rem} \section{Kinodynamic Trajectory Finding}\label{sec: search} The front end uses the kinodynamic A* search algorithm \cite{zhou2019robust} to find a trajectory that leads to the goal while satisfying the dynamic feasibility, collision avoidance and non-entanglement requirements. A search-based algorithm is used instead of a sampling-based algorithm, such as RRT, to ensure better consistency of the trajectories generated between different planning iterations. To reduce the dimension of the problem, the kinodynamic A* algorithm searches for feasible trajectories in the X-Y plane only. For UAVs, a Z-axis trajectory is also required and affects the feasible region in the X-Y plane; hence, we design a procedure that generates dynamically feasible trajectories in the Z-axis only. Interested readers may refer to Appendix \ref{subsec: initialz} for details. The search is conducted in a graph imposed on the discretized 2-D space augmented with the homotopy representation of the robot. Each node in the graph is a piece of trajectory of fixed duration $T$, and it contains a structure to record the following information: (1) the 2-D trajectory coefficients $\mathbf{E}_l\in\mathbb{R}^{(\trajorder+1)\times2}$, where $l$ is the index, with the first trajectory indexed $0$; (2) the robot's states at the end of the trajectory $\en{\vb{p}}$, $\en{\vb{v}}$, $\en{\vb{a}}$; (3) the robot's homotopy-related information at the end of the trajectory, $\en{h}$ and $\en{\mathcal{C}}$; (4) the cost from start $g_\text{c}$ and the heuristic cost $g_\text{h}$; (5) a pointer to its parent trajectory. A node is located in the graph by its final position $\en{\vb{p}}$ and its homotopy representation $\en{h}$. Successive nodes are found by applying a set of sampled control inputs $\mathcal{U}=\{[u^\text{x},u^\text{y}]^\top|u^\text{x},u^\text{y}\in\{-\maxop{u}, -\frac{\sigma_\text{u}-1}{\sigma_\text{u}}\maxop{u}, \dots, \frac{\sigma_\text{u}-1}{\sigma_\text{u}}\maxop{u}, \maxop{u}\}\}$ to the end states of a node for a duration $T$, where $\maxop{u}$ is the maximum magnitude of control input. In our case, the control input is jerk and the degree of the trajectory is $3$. Given a control input $\vb{u}\in\mathcal{U}$ applied to a parent node, the coefficients of the successive trajectory can be obtained as \begin{align} \mathbf{E}_l = [\en{\vb{p}}_{l-1}\;\: \en{\vb{v}}_{l-1}\;\: \frac{1}{2}\en{\vb{a}}_{l-1} \;\:\frac{1}{6}\vb{u}]^\top\in\mathbb{R}^{4\times2}. \end{align} Then, each of the successive nodes is checked for its dynamic feasibility (Section \ref{subsec: feasibility}), collision avoidance (Section \ref{subsec: collision}) and non-entanglement requirements (Section \ref{subsec: entanglement}; the homotopy update is conducted together with the entanglement check). Only the nodes that satisfy the constraints and have not been added before are added to the open list. The search process continues until a node is found that ends sufficiently close to the goal while not risking entanglement (based on the two-entry rule).
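To make the node expansion concrete, the following Python sketch generates one successor from a parent node's end states under a sampled jerk input $\vb{u}$ by integrating the triple integrator over the duration $T$. It is a minimal NumPy illustration of the coefficient formula above, not the optimized implementation used in our experiments. \begin{verbatim}
import numpy as np

T = 0.5  # duration of one trajectory piece (s)

def expand(p, v, a, u):
    """Parent end states p, v, a and sampled jerk u (2-vectors).
    Returns the cubic coefficients E_l (rows correspond to the
    monomial basis [1, t, t^2, t^3]) and the successor's end
    states after duration T."""
    E = np.vstack([p, v, 0.5*a, u/6.0])        # E_l, shape (4, 2)
    p_end = p + v*T + 0.5*a*T**2 + u*T**3/6.0  # = E_l^T g(T)
    v_end = v + a*T + 0.5*u*T**2
    a_end = a + u*T
    return E, p_end, v_end, a_end
\end{verbatim}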
{\color{blue}The heuristic cost $g_\text{h}$ is chosen to be the Euclidean distance between $\en{\vb{p}}$ and the goal.} \subsection{Workspace and Dynamic Feasibility}\label{subsec: feasibility} We use the recently published MINVO basis \cite{tordesillas2021minvo} to convert the polynomial coefficients to control points, which form convex hulls that entirely encapsulate the trajectory and its derivatives. We denote by $\mathcal{Q}_l$, $\mathcal{V}_l$ and $\mathcal{A}_l$ the sets of 2-D position, velocity and acceleration control points of a candidate node with index $l$ respectively. To ensure feasibility, we check whether the following inequalities are satisfied: \begin{align} &\vb{q}\in\mathcal{W},\; \forall\vb{q}\in\mathcal{Q}_l,\label{eq: poscontrolpoint}\\ &[\minop{v}^\text{x}\;\minop{v}^\text{y}]^\top\leq\vb{v}\leq[\maxop{v}^\text{x}\;\maxop{v}^\text{y}]^\top,\; \forall\vb{v}\in\mathcal{V}_l,\label{eq: velcontrolpoint}\\ &[\minop{a}^\text{x}\;\minop{a}^\text{y}]^\top\leq\vb{a}\leq[\maxop{a}^\text{x}\;\maxop{a}^\text{y}]^\top,\; \forall\vb{a}\in\mathcal{A}_l.\label{eq: acccontrolpoint} \end{align} \begin{figure}[!t] \centering \includegraphics[width=0.8\linewidth]{figs/trailing.png} \caption{\footnotesize UAV $j$'s cable trails behind it, causing an expansion of the collision zone.} \label{fig: trailing} \end{figure} \subsection{Collision Avoidance} \label{subsec: collision} Due to friction, a UAV flying at a fast speed may have its cable trailing behind it, as shown in Figure \ref{fig: trailing}. We model the length of cable that may stay in the air as $\xi z$ for a UAV flying at altitude $z$, where $\xi$ is a constant to be determined empirically. Therefore, to guarantee non-collision with robot $j$ and its cable, the planning UAV needs to maintain a distance of more than $\rho_j+\rho_i+\xi \Delta z_l$, where $\rho_j$ and $\rho_i$ are the radii of the UAVs and $\Delta z_l$ is the maximum possible altitude difference between them during the planning interval $l$, which can be obtained from their Z-axis trajectories. Eventually, we need to check whether the following condition holds: \begin{align} \left(\text{hull}(\mathcal{Q}_{j,l})\oplus\mathbf{B}_{j,l}\right)\cap\text{hull}(\mathcal{Q}_l)=\emptyset,\label{eq: collision avoidance} \end{align} where $\text{hull}(\cdot)$ represents the convex hull enclosing the set of control points, $\mathcal{Q}_{j,l}$ is the minimum set of MINVO control points that contain the trajectory of robot $j$ during the planning interval $l$, $\mathbf{B}_{j,l}$ is the axis-aligned bounding box whose sides are of length $\rho_j+\rho_i+\xi \Delta z_l$, and $\oplus$ is the Minkowski sum. We use the Gilbert–Johnson–Keerthi (GJK) distance algorithm \cite{2083} to detect the collision between two convex hulls efficiently. Collision checking against static obstacles can be done similarly by checking intersections with the inflated convex hulls that contain the obstacles. \subsection{Cable Length and Non-Entanglement Constraints}\label{subsec: entanglement} A tethered robot should always operate within the length limit of its cable and never over-stretch it.
We check whether a trajectory with index $l$ satisfies this by approximating the required cable length at a position $\vb{p}_i$ using the list of active contact points: \begin{align} \phi_i\geq&\norm{\vb{b}_{i}-\vb{c}_{i,1}} +\sum_{\iota=1\dots\numcon_i-1}{\norm{\vb{c}_{i,\iota+1}-\vb{c}_{i,\iota}}}+\nonumber\\ &\sum_{\iota=1\dots\numcon_i}{\mu_\iota}+\norm{\vb{p}_i-\vb{c}_{i,\numcon_i}}+\maxop{z}_l,\label{eq: cablelength} \end{align} where $\maxop{z}_l$ is the maximum altitude of the robot during planning interval $l$, and $\mu_\iota>0$ is a length added to compensate for the underestimation of the cable length when using contact points only. For safety, we can set $\mu_\iota$ to twice the longest distance from the contact point to any surface point of the same obstacle. Algorithm \ref{alg: entangle} incrementally predicts the planning robot's homotopy representation by getting sampled states of the robots (line \ref{ln: sample}) and updating the representation using Algorithm \ref{alg: homotopyupdate} (line \ref{ln: homoupdate}). After each predicted update, it checks the cable length constraint (line \ref{ln: cablelength}) and the non-entanglement constraint (lines \ref{ln: startentanglecheck}-\ref{ln: endentanglecheck}). In this algorithm, we only discard trajectories that end up risking entanglement due to a crossing action incurred in the current prediction cycle (line \ref{ln: entcriteria}). In this way, we allow a planner that starts from an unsafe initial homotopy representation to continue searching and find a trajectory that escapes the unsafe situation. \begin{algorithm} \DontPrintSemicolon \SetKwBlock{Begin}{function}{end function} \Begin($\text{EntanglementCheck{(}node{)}} $) { $\mathcal{P}(0)\leftarrow\text{getSampledStates}(E_l,\{\mathcal{P}_{j,l}\}_{j\in \myset{I}_n\backslash i},0)$\; \For{$k=1\dots \sigma$} { $\mathcal{P}(k)\leftarrow\text{getSampledStates}(E_l,\{\mathcal{P}_{j,l}\}_{j\in \myset{I}_n\backslash i},k)\label{ln: sample}$\; $\en{h}, \en{\mathcal{C}}\leftarrow\text{homotopyUpdate} (\en{h}, \mathcal{P}(k), \mathcal{P}(k-1),\{\mathcal{C}_j{(}t^\text{in}{)}\}_{j\in\myset{I}_n\backslash i}, \{\mathcal{C}_j{(}t^\text{in}{)}\}_{j\in\myset{I}_n\backslash i}\cup\en{\mathcal{C}}, \mathbf{O}$)\label{ln: homoupdate}\; \uIf{$\text{Eqn.(\ref{eq: cablelength}) does NOT hold}$\label{ln: cablelength}} { return False\; } \ForAll{$j\in \myset{I}_n\backslash i$\label{ln: startentanglecheck}} { \uIf{number of entries of $\text{r}_{j,f}$, $f\in\{0,1\}$ in $\en{h}$ increases AND is $\geq2$\label{ln: entcriteria}} { return False\label{ln: endentanglecheck} } } $\mathcal{P}(k-1)\leftarrow\mathcal{P}(k)$ } return True }\label{endentangle} \caption{Cable length and entanglement check}\label{alg: entangle} \end{algorithm} \section{Trajectory Optimization}\label{sec: opt} \begin{figure*} \centering {\includegraphics[width=0.8\textwidth]{figs/planning2.png}} \caption{\footnotesize The dashed rainbow curve is the trajectory obtained from the front-end trajectory finder; the solid rainbow curve is the optimized trajectory. The colors on the rainbow curves indicate the time of the trajectory since $\init{t}$. The dotted rainbow curve is the optimization result if we do not add the non-crossing constraint to the optimization problem, which is an unacceptable shortcut because robot $i$ has already crossed $r_{j,0}$ and should not cross $r_{j,1}$. $\gbf{\pi}_{j,0}$ is the parameter of the line separating robot $i$'s trajectory from robot $j$'s trajectory.} \label{fig: planning} \end{figure*} The output of the front-end planner is $\front{\eta}$ pieces of consecutive polynomial curves, $\aug{\mathbf{E}}_{l}^{\text{in}}$, $ l\in[0,\front{\eta}-1]\cap\mathbb{Z}$. The optimization module extracts the first $\back{\eta}=\text{min}(\front{\eta},\maxop{\eta})$ trajectories, where $\maxop{\eta}$ is an integer chosen by the user, and optimizes the trajectory coefficients $\aug{\mathbf{E}}_l$, $\forall l\in\{0\dots\back{\eta}-1\}$, to obtain a short-term trajectory ending at the intermediate goal \begin{align} \aug{\vb{p}}^{\text{inter}}=\aug{\mathbf{E}}_{\back{\eta}-1}^{\text{in}\top} \vb{g}(T), \end{align} where $\aug{\mathbf{E}}^{\text{in}}_l$ represents the initial solution for $\aug{\mathbf{E}}_l$ obtained from the front end. Figure \ref{fig: planning} illustrates the trajectories before and after the optimization. The objective function is \begin{align} J=\sum_{l=0}^{\back{\eta}-1}\int_{t=0}^T\norm{\aug{\mathbf{E}}_l^\top \vb{g}^{(\trajorder)}(t)}^2dt + \kappa\norm{\aug{\mathbf{E}}_{\back{\eta}-1}^\top \vb{g}(T)-\aug{\vb{p}}^\text{inter}}^2,\label{eq: objective} \end{align} in which the first term aims to reduce aggressiveness by penalizing the magnitude of the control input (the $\trajorder$-th order derivative of the trajectory), and the second term penalizes not reaching the intermediate goal. $\kappa$ is the weight of the second term. Setting the intermediate goal as a penalty instead of a hard constraint relaxes the optimization problem and improves the success rate of the optimization. The constraints for the initial states and zero terminal higher-order states are enforced as \begin{align} &\aug{\mathbf{E}}^\top_{0}\vb{g}^{(\iota)}(0)=\aug{\vb{p}}^{\text{in}\lrangle{\iota}}, \forall\iota\in\{0,1,2\},\label{eq: initialcond}\\ &\aug{\mathbf{E}}^\top_{\back{\eta}-1}\vb{g}^{(\iota)}(T)=\vb{0}, \forall\iota\in\{1,2\},\label{eq: terminalcond} \end{align} where $\aug{\vb{p}}^{\text{in}\lrangle{\iota}}=\init{\aug{\vb{p}}},\init{\aug{\vb{v}}},\init{\aug{\vb{a}}}$ respectively for $\iota=0,1,2$. We would like the robot to come to rest at the intermediate goal, so that if it cannot find a feasible trajectory in the next few iterations, it can decelerate and stop at a temporarily safe location until a feasible solution is found again. We also need to ensure the states are continuous between consecutive trajectories: \begin{align} \aug{\mathbf{E}}^\top_{l}\vb{g}^{(\iota)}(T)=\aug{\mathbf{E}}^\top_{l+1}\vb{g}^{(\iota)}(0),\forall \iota\in\{0,1,2\},l\in\{0,\dots\back{\eta}-2\}.\label{eq: continuity} \end{align} The dynamic feasibility constraints are enforced using the control points, since the velocity and acceleration control points can be expressed as linear combinations of the trajectory coefficients. The constraint equations take the same forms as Eqns. (\ref{eq: poscontrolpoint}-\ref{eq: acccontrolpoint}), but are enforced in three dimensions. For non-collision constraints, we apply the plane separation technique introduced in \cite{tordesillas2021mader} to the 2-D case.
Denoting the minimum set of vertices of the inflated convex hull of the collaborating robot/obstacle $j$ at the planning interval $l$ as $\Theta_{j,l}$ (for obstacles the vertices are constant with respect to $l$), a line parameterized by $\gbf{\pi}_{j,l}\in\mathbb{R}^2$ and $d_{j,l}\in\mathbb{R}$ is used to separate the vertices of the obstacle or robot $j$ from those of the planning robot using the inequalities \begin{align} &\gbf{\pi}_{j,l}^\top\gbf{\theta}+d_{j,l}>0,\; \forall\gbf{\theta}\in\Theta_{j,l},\label{eq: separate1}\\ &\gbf{\pi}_{j,l}^\top\vb{q}+d_{j,l}<0,\;\forall\vb{q}\in\mathcal{Q}_{l},\label{eq: separate2} \end{align} $\forall j\in\myset{I}_{m+n}\backslash i$, $l\in\{0,\dots, \back{\eta}-1\}$. In some cases, it is also necessary to add non-entanglement constraints to the optimization to prevent the robot from taking an unallowable shortcut, as shown in Figure \ref{fig: planning}, where the dotted rainbow curve is an unallowable shortcut because robot $i$ should not cross $r_{j,1}$ in this case. A non-crossing constraint can be added in a similar way to the non-collision constraints (\ref{eq: separate1}-\ref{eq: separate2}), if we use a line to separate the vertices of the virtual segment that cannot be crossed from those of the planning robot's trajectory. The overall optimization problem is a nonconvex quadratically constrained quadratic program (QCQP) if we optimize over both the trajectory coefficients $\aug{\mathbf{E}}_l$ and the separating line parameters, $\gbf{\pi}_{j,l}$ and $d_{j,l}$. To reduce the computational burden, we fix the values of the line parameters by solving (\ref{eq: separate1}-\ref{eq: separate2}) using the trajectory coefficients obtained from the front-end planner, $\aug{\mathbf{E}}^{\text{in}}_l$. The resulting optimization problem with only the trajectory coefficients as the decision variables is \begin{align} &\!\min_{\aug{\mathbf{E}}_l}(\ref{eq: objective}),\nonumber\\ &\text{s.t.}\; (\ref{eq: initialcond}-\ref{eq: separate2}),\;(\ref{eq: poscontrolpoint}-\ref{eq: acccontrolpoint}),\nonumber \end{align} which is a quadratic program with much lower complexity. \section{Complexity Analysis} \subsection{Complexity of Homotopy Update} \label{subsec: complexity homotopy} We analyze the time complexity of running the homotopy update procedure (Algorithm \ref{alg: homotopyupdate}). We define $\maxop{\omega}$ to be the maximum possible number of obstacle-related entries, $o_{j,f}$, in $h(k)$ at any time $k$. $\maxop{\omega}$ depends on the cable length and the obstacle shapes (the minimum distance between any two vertices of the obstacles), and is generally independent of the number of obstacles in the workspace. Given that the robots avoid entanglements based on the two-entry rule, the maximum number of robot-related entries, $r_{j,f}$, $j\in\myset{I}_n$, $f\in\{0,1\}$, in $h(k)$ should be $n$. The time complexity of updating the word (Algorithm \ref{alg: crossing}) is dominated by the number of crossing checks. Since the virtual segments of each robot are defined by at most $\maxop{\omega}$ contact points, and crossings of the $m$ static obstacles' segments can be checked in $\mathcal{O}(m)$ time, the update of the word can be conducted in $\mathcal{O}(n\maxop{\omega}+m)$ time. The implementation of the reduction procedure (Algorithm \ref{alg: reduction}) requires three nested loops to inspect each entry of $h(k)$, hence the time complexity of the reduction procedure is $\mathcal{O}((n+\maxop{\omega})^3)$.
Similarly, the contact point update procedure (Algorithm \ref{alg: funnel}) can be run in $\mathcal{O}(n+\maxop{\omega})$ time. Hence, the time complexity of running the entire homotopy update procedure is $\mathcal{O}(m+(n+\maxop{\omega})^3)$. The memory complexity of the homotopy update depends on the storage of all the virtual segments, hence it is $\mathcal{O}(n\maxop{\omega}+m)$. \subsection{Complexity of a Planning Iteration} \label{subsec: search complexity} We analyze the time complexities of both the front-end kinodynamic planning (Section \ref{sec: search}) and the back-end optimization (Section \ref{sec: opt}). The time complexity of the kinodynamic search is the product of the total number of candidate trajectories (successive nodes) generated and the complexity of evaluating each trajectory. In the worst case, a total of $\mathcal{O}(\epsilon(n+\maxop{\omega})\sigma_\text{u})$ candidate trajectories are generated in the homotopy-augmented graph, where $\epsilon$ is the total number of grid cells in the workspace $\myset{W}$; $\epsilon$ depends on the total area of the workspace and the grid size used for discretization. For each candidate trajectory, checking the workspace and dynamic feasibility (\ref{eq: poscontrolpoint})-(\ref{eq: acccontrolpoint}) can be done in $\mathcal{O}(1)$ time. The collision avoidance requirement (\ref{eq: collision avoidance}) can be checked in $\mathcal{O}(m+n)$ time. The time complexity of checking the cable-related requirements (Algorithm \ref{alg: entangle}) is dominated by the homotopy update procedure (line \ref{ln: homoupdate}), which has to be executed $\sigma$ times; therefore, the time complexity of Algorithm \ref{alg: entangle} is $\mathcal{O}(\sigma (m+(n+\maxop{\omega})^3))$. Hence, the worst-case time complexity of running the kinodynamic search is $\mathcal{O}(\epsilon\sigma_\text{u}\sigma (n+\maxop{\omega})(m+(n+\maxop{\omega})^3))$. The time complexity of solving the optimization problem using the interior point method is $\mathcal{O}((\maxop{\eta}\trajorder)^3)$ \cite{ye1989extension}, as there are $3\maxop{\eta}(\trajorder+1)$ optimization variables. Therefore, the time complexity of one planning iteration is $\mathcal{O}((\maxop{\eta}\trajorder)^3+\epsilon\sigma_\text{u}\sigma (n+\maxop{\omega})(m+(n+\maxop{\omega})^3))$. Given that the independent parameters $\epsilon$, $\maxop{\omega}$, $\sigma$, $\sigma_\text{u}$, $\maxop{\eta}$ and $\trajorder$ are fixed, the time complexity simplifies to $\mathcal{O}(nm+n^4)$, which is linear in the number of obstacles and quartic in the number of robots. The memory complexity of planning is dominated by the memory to store the valid graph nodes for the kinodynamic search. Each node has a memory complexity of $\mathcal{O}(n+\maxop{\omega}+\beta)$ due to the need to store the trajectory coefficients and the homotopy representation. Adding the memory to store a copy of the virtual segments, the memory complexity of a planning iteration is $\mathcal{O}(\epsilon(n+\maxop{\omega})(n+\maxop{\omega}+\beta)+m)$, which can be simplified to $\mathcal{O}(n^2+m)$. \subsection{Communication Complexity} In the message transmitted by each robot per iteration, the position of the robot has a length of $\mathcal{O}(1)$, the list of contact points has a length of $\mathcal{O}(\maxop{\omega})$ and the future trajectory of the robot has a length of $\mathcal{O}(\maxop{\eta}\beta)$.
Considering a planning frequency of $f_\text{com}$, the amount of data transmitted by each robot is $\mathcal{O}(f_\text{com}(\maxop{\omega}+\maxop{\eta}\beta))$ per unit time, while the data received by each robot per unit time is $\mathcal{O}(nf_\text{com}(\maxop{\omega}+\maxop{\eta}\beta))$. \section{Simulation}\label{sec: simulation} The proposed multi-robot homotopy representation and the planning framework are implemented in the C++ programming language. During the simulations and experiments, each robot runs an independent Robot Operating System (ROS) program, and the communication between robots is realized by the publisher-subscriber utility of the ROS network. The trajectory optimization described in Section \ref{sec: opt} is solved using the commercial solver Gurobi\footnote{https://www.gurobi.com/}. The processor used for the simulations in Sections \ref{subsec: single} and \ref{subsec: multi without} is an Intel i7-8750H, while that used in Section \ref{subsec: multi with} is an Intel i7-8550U. The codes for the compared methods are also implemented by ourselves in C++ and optimized to the best of our ability. In all simulations, the parameters are chosen to be $T=0.5$s, $\maxop{\vb{v}}=[2.0, 2.0, 2.0]^\top$m/s, $\maxop{\vb{a}}=[3.0, 3.0, 3.0]^\top$$\text{m/}\text{s}^2$, $\maxop{u}=5.0$$\text{m/}\text{s}^3$, $\minop{\vb{v}}=-\maxop{\vb{v}}$, $\minop{\vb{a}}=-\maxop{\vb{a}}$, and the grid size for the kinodynamic graph search is $0.2\times0.2$m. A video of the simulations can be viewed in the supplementary material or online\footnote{https://youtu.be/8b1RlDvQsi0}. \subsection{Single Robot Planning}\label{subsec: single} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figs/computeTime.png} \caption{\footnotesize Comparison of computation time for single-robot planning.} \label{fig: timecompare} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figs/length.png} \caption{\footnotesize Comparison of path/trajectory lengths for single-robot planning.} \label{fig: lengthcompare} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.8\linewidth]{figs/pathcompare2.png} \caption{\footnotesize The cyan line shows the shortest cable configuration at the start. The blue path is generated by Kim's method; the green curve is the trajectory generated from the proposed front-end trajectory searching; and the orange curve is the actual trajectory executed using the proposed planning framework.} \label{fig: pathcompare} \end{figure} We first apply the presented approach to the planning problem for a single tethered robot in an obstacle-rich environment. Specifically, the front-end kinodynamic trajectory search technique introduced in Section \ref{sec: search} can be used for point-to-point trajectory generation, and the entire framework described in Section \ref{sec: formulation} can be used for online trajectory replanning. We compare our approach with Kim's method \cite{kim2014path}, which generates piece-wise linear paths using homotopy-augmented grid-based A*. The simulation environment is a $30\times30$m 2-D area with different numbers of randomly placed obstacles. The grid size for the graph planner in Kim's method is set to two times that of the proposed kinodynamic planner, so that the number of graph nodes expanded is comparable for both planners given the same problem (if we use the same grid size, the kinodynamic planner usually expands far fewer nodes than a purely grid-based A* because the successive trajectories do not necessarily fall in neighbouring nodes).
We randomly generate $100$ target points in the environment and use both the proposed front-end trajectory searching and Kim's method to generate paths that transit between target points (running each method once for every target point). Figure \ref{fig: timecompare} shows the average computation time for both approaches and the ratio of the average number of nodes expanded in Kim's method to that in the proposed method. Although the numbers of nodes expanded are comparable for both approaches, the computation time for Kim's method is at least an order of magnitude longer than that for our method, except when there is no obstacle present (which is equivalent to planning without a tether). In Kim's approach, a large proportion of time is spent on checking the cable length requirement for a robot position, which uses the expensive curve shortening technique discussed in Section \ref{sec: prelim}. In comparison, our contact point determination procedure consumes much less time and hence contributes to the efficiency of the proposed method. The average path/trajectory lengths for both approaches and the average lengths of trajectories actually executed by the robot using the proposed planning framework with online trajectory replanning are shown in Figure \ref{fig: lengthcompare}. We observe that both kinodynamic trajectory searching and grid-based graph planning generate paths with comparable lengths. The lengths of the paths generated using Kim's method are optimal subject to the grid resolution, but the trajectories generated by our kinodynamic planner have the advantage of being in the continuous space (not having to pass through the grid center) and hence can be shorter in length. The fast computation of the trajectory search enables frequent online replanning, which further refines the trajectory to be shorter and smoother. Figure \ref{fig: pathcompare} shows the result for one planning instance, where the actual trajectory (orange curve, $27.7$m) is shorter and smoother than the paths generated from Kim's method (blue line, $28.3$m) and kinodynamic search (green curve, $28.7$m). \subsection{Multi-Robot Planning without Static Obstacles}\label{subsec: multi without} None of the existing works on tethered multi-robot planning considers the presence of static obstacles. Hence, we consider an obstacle-free 3-D environment where multiple UAVs are initially equally spaced on a circle of radius $10$m, and $100$ sets of random goals are sent to the robots. A mission is considered successful if all robots are able to reach their goals. We compare our planning framework with Hert's centralized method \cite{hert1999motion}, which generates piecewise-linear paths for each robot. In Hert's original approach, point robots are considered, but we modify the approach to handle non-zero radii of the robots for collision avoidance; we also change the shortest path finding algorithm from a geometric approach \cite{PAPADIMITRIOU1985259} to grid-based A* for efficiency. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figs/mtlp_compare.png} \caption{\footnotesize Comparison of computation time and success rate for multi-robot planning.} \label{fig: mtlp_compare} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.75\linewidth]{figs/deadlockpng.png} \caption{\footnotesize Illustration of a deadlock situation. Robot $i$'s cable has been crossed by both robot $j$ and robot $l$. The word of robot $i$ is $h_i=r_{l,0}r_{j,0}$.
Unless robot $j$ and robot $l$ move, any route planned by robot $i$ to reach its goal will risk entanglement based on the two-entry rule.} \label{fig: deadlock} \end{figure} Figure \ref{fig: mtlp_compare} plots the average computation time and the success rates of both approaches. The computation time for our approach refers to the time taken by a robot to generate a feasible trajectory in one planning iteration. We can observe that, for more than $2$ robots, the computation time for Hert's approach is at least an order of magnitude longer than that for our approach. While only a small increase in computation time is observed in our approach, the computation time for Hert's approach increases significantly from $30.7$ms for $2$ robots to $8.49$s for $8$ robots. We note that our implementation of Hert's approach has already achieved significant speedup compared to their results in \cite{hert1999motion} (more than $1000$s for $5$ robots), likely due to the use of modern computational geometry libraries, a more efficient path finding algorithm and a faster processor. Both our and Hert's approaches have $\mathcal{O}(n^4)$ time complexity; however, due to the consideration of a tight and straight cable model, Hert's method requires checking intersections among cables for all potential paths, which is computationally expensive. The success rate of Hert's approach is consistently $100\%$, while our approach fails occasionally for more than $5$ robots. However, this does not imply that our approach is less effective, because the cable models and the types of entanglements considered are different in the two approaches; the solution from one approach may not be a feasible solution for the other. The failures of our method are due to the occurrence of deadlocks. A common deadlock is illustrated in Figure \ref{fig: deadlock}, where any route for robot $i$ to reach its goal is considered to risk entanglement based on the two-entry rule. This is a drawback of decentralized planning, in which the robots only plan their own trajectories and do not consider whether there are feasible solutions left for the others. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figs/mtlp_length_compare.png} \caption{\footnotesize Comparison of path/trajectory lengths for multi-robot planning.} \label{fig: mtlp_length_compare} \end{figure} The average length of the path followed by each robot is shorter for Hert's approach, as shown in Figure \ref{fig: mtlp_length_compare}. The cable model considered in Hert's work allows a robot to move vertically below the cables of other robots while avoiding contacts, whereas our approach restricts such paths and requires the robots to make a detour on the horizontal plane when necessary. This difference is illustrated in Figure \ref{fig: mtlp}, where the blue path is generated using Hert's method while the orange curve is the trajectory generated using our approach. In practice, it is difficult to ensure that a cable is fully tight and straight, hence moving below a cable presents a greater risk of collision than moving horizontally. \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{figs/mtlp.png} \caption{\footnotesize Robot $i$ starts at the green point and reaches the magenta point. The blue path is generated using Hert's method while the orange curve is the trajectory of the robot using our approach. The red line simulates the cable of robot $j$ if the cable is fully tight. 
The light blue and orange sections are the areas swept by the cable of robot $i$ during its movement from start to goal, considering a tight cable model. In this case, both approaches are able to generate paths that avoid intersection between straight cables, but Hert's approach requires robot $i$ to move below the cable of robot $j$. } \label{fig: mtlp} \end{figure} \subsection{Multi-Robot Planning with Static Obstacles}\label{subsec: multi with} \begin{figure*} \centering \subcaptionbox{ \label{fig: cir_1}}[0.32\linewidth]{\includegraphics[width=\linewidth]{figs/multicircle1.png}} \subcaptionbox{ \label{fig: cir_2}}[0.32\linewidth]{\includegraphics[width=\linewidth]{figs/multicircle2.png}} \subcaptionbox{ \label{fig: cir_3}}[0.32\linewidth]{\includegraphics[width=\linewidth]{figs/multicircle3.png}} \caption{ \footnotesize 5 UAVs planning in a workspace with $9$ static obstacles. (a) UAVs take off from their starting positions. (b) UAVs reach their targets opposite to their starting positions on the circle. (c) UAVs return to their starting positions without entanglements.} \label{fig: multicircle} \end{figure*} We conduct simulations of multiple tethered UAVs working in an environment with $9$ square obstacles, as shown in Figure \ref{fig: multicircle}. The starting positions of the UAVs are evenly distributed on a circle of radius $10$m, and each UAV is tasked to move to the opposite point on the circle and then move back to its starting point, hence crossing other UAVs' cables and passing through obstacles are inevitable. The mission is considered successful if all robots are able to return to their starting points at the end of the mission. UAV and cable dynamics are simulated using the Unity game engine with the AGX Dynamics physics plugin\footnote{https://www.algoryx.se/agx-dynamics/}; collisions among cables, UAVs and static obstacles are simulated to detect contacts and entanglements. Force and torque commands for each UAV are computed in ROS using the trajectory output from the proposed planner and the UAV states obtained from Unity. We conduct $30$ simulation runs for numbers of robots ranging from $2$ to $8$ and record the computation time of each execution of front-end planning and back-end optimization, and the total time taken for each simulation. The results are shown in Table \ref{tab: multiple}. We have the following observations: \begin{table} \centering \begin{tabular}{|c|c|c|c|c|} \hline \begin{tabular}[c]{@{}c@{}}Num. \\ of \\ Robots\end{tabular} & \begin{tabular}[c]{@{}c@{}}Avg. Comp. \\ Time \\ Front End \\ (ms)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Avg. Comp. \\ Time \\ Back End \\ (ms)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Avg. Time\\ per \\ Mission \\ (s)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Mission\\ Success \\ Rate \\ (\%)\end{tabular} \\ \hline $2$ & $11.11$ & $19.40$ & $32.91$ & $100$ \\ \hline $3$ & $10.74$ & $24.15$ & $29.86$ & $100$ \\ \hline $4$ & $18.61$ & $24.84$ & $38.26$ & $100$ \\ \hline $5$ & $20.30$ & $22.91$ & $43.37$ & $100$ \\ \hline $6$ & $30.31$ & $21.22$ & $55.85$ & $83.3$ \\ \hline $7$ & $30.50$ & $21.82$ & $61.62$ & $93.3$ \\ \hline $8$ & $38.79$ & $22.87$ & $69.81$ & $80.0$ \\ \hline \end{tabular} \caption{Results for Multiple Tethered UAVs Planning} \label{tab: multiple} \end{table} \begin{itemize} \item For front-end trajectory finding, the computation time increases with an increasing number of robots. 
This is mainly due to the increasing possibility that the routes are blocked by other robots or the cables of other robots (a route can be virtually blocked by a cable if crossing this cable risks entanglement), causing the planner to take a longer time to find a detour and a feasible trajectory. However, the increase in computation time is small ($<30$ms) and the planner still satisfies the real-time requirement for $8$ robots. \item The back-end optimization has a relatively consistent computation time ($\sim 22$ms), because the number of polynomial trajectories to be optimized is fixed regardless of the number of robots. \item The average computation time for one planning iteration (including both the front end and the back end) is well below $100$ms and is suitable for real-time replanning during flights. \item The time to complete the mission increases with the number of robots because more time is spent on waiting for other robots to move so that the cables no longer block the only feasible route. \item The planner achieves a $100\%$ mission success rate for $5$ or fewer robots. As the number of robots increases, the success rate drops but remains above $80\%$ for $8$ robots. Similar to the cases in Section \ref{subsec: multi without}, the failures are due to deadlocks, which are more likely to occur in a cluttered environment. \end{itemize} In all of the simulation runs, no entanglements are observed, showing the effectiveness of the proposed homotopy representation and the two-entry rule in the detection and prevention of entanglements. However, the proposed two-entry rule is conservative in evaluating the risk of entanglements. For example, in reality, robot $i$ in Figure \ref{fig: deadlock} may be able to reach its goal if its cable is long enough; however, such a route is prohibited by the two-entry rule because it has to cross the cables of robots $j$ and $l$. In essence, the proposed method sacrifices the success of a mission to guarantee safety, which is reasonable in many safety-critical applications. {\color{blue}Additional features may be implemented to improve the success rate and resolve deadlocks. For example, in Figure \ref{fig: deadlock}, robot $i$ may request other robots to `uncross' its cable before moving to the goal. Alternatively, feasible trajectories of the robots can be computed in a centralized manner, before they are sent to each robot for optimization. However, such an approach will inevitably be more computationally expensive.} Overall, the proposed planning framework is shown to be capable of real-time execution and effective in preventing entanglement for different numbers of robots. \section{Flight Experiments} We conduct flight experiments using $3$ self-built small quadrotors in a $6$m$\times6$m indoor area; each quadrotor is attached to a $7$m cable that is connected to a ground power supply. Each quadrotor is equipped with an onboard computer with an Intel i7-8550U CPU, running the same ROS program as described in Section \ref{sec: simulation}. All onboard computers are connected to the same local network through Wi-Fi. A robust tracking controller \cite{Cai2011} is used to generate attitude and thrust commands from the target trajectory, which are sent to the low-level flight controller using the DJI Onboard SDK. 
The parameters for the planner are chosen to be $T=0.3$s, $\maxop{\vb{v}}=[0.7, 0.7, 0.7]^\top$m/s, $\maxop{\vb{a}}=[2.0, 2.0, 2.0]^\top$$\text{m/}\text{s}^2$, $\maxop{u}=3.5$$\text{m/}\text{s}^3$, $\maxop{\eta} = 8$, $\sigma_\text{u}=2$, $\sigma = 3$, and the grid size for the kinodynamic graph search is $0.05\times0.05$m. The rate of planning is $10$Hz. The quadrotors are commanded to shuttle between two positions in the workspace, as illustrated in Figure \ref{fig: exp}, which resembles an item transportation task in a warehouse scenario. The supplementary video shows an experiment in which each robot completes $15$ back-and-forth missions without incurring any entanglement. The benefit of a tethered power supply is demonstrated by the mission duration, which is much longer than the $2$-minute flight time the quadrotors can achieve under battery power. Figure \ref{fig: exp_plot} shows the command trajectories and the actual positions of a UAV during part of the experiment. The generated command trajectories show high smoothness in both the X and Y axes, which enables good tracking performance of the robots. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figs/neptune_exp.png} \caption{\footnotesize Experiment using $3$ tethered UAVs.} \label{fig: exp} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=\linewidth,trim={0.6cm 0.2cm 0.2cm 0cm}]{figs/exp_plot.png} \caption{\footnotesize Plot of command trajectories and actual positions of a UAV during the flight experiment.} \label{fig: exp_plot} \end{figure} \section{Conclusion} In this work, we presented NEPTUNE, a complete solution for trajectory planning of multiple tethered robots in an environment with static obstacles. Central to the approach is a multi-robot tether-aware representation of homotopy, which encodes the interaction among the robots and the obstacles in the form of a word, and facilitates the computation of contact points to approximate the shortened cable configuration. The front-end trajectory finder leverages the homotopy representation to discard trajectories that risk entanglements or exceed the cable length limits, while the back-end trajectory optimizer refines the initial feasible trajectory from the front end. Simulations in single-robot obstacle-rich and multi-robot obstacle-free environments have shown the improvement of NEPTUNE over existing approaches in terms of computation time. Simulations of challenging tasks in multi-robot obstacle-rich scenarios have shown its effectiveness in entanglement prevention and its real-time capability. Flight experiments have highlighted the potential of NEPTUNE in practical applications using real tethered systems. Future work will focus on improving the success rate by introducing deadlock-resolving features, and on applications in realistic warehouse scenarios. \begin{appendices} \section{Explanation of Homotopy Induced by Obstacles in a 3-dimensional Space} \label{apd: homotopy3d} \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figs/3dhomotopy.png} \caption{\footnotesize A 3-D workspace consisting of a robot $j$ and static obstacle $1$. The solid magenta curve is the cable attached to the drone.} \label{fig: 3dhomotopy} \end{figure} In a 2-D space, different homotopy classes are created due to the presence of obstacles (or punctures) in the space, e.g., paths that turn left at an obstacle are topologically different from paths that turn right at the obstacle to reach the same goal. 
However, in a 3-D space, not all obstacles can induce different homotopy classes; only those that contain one or more holes (those with genus greater than or equal to $1$) are able to do so. In our case, both the static obstacles and the cables attached to the drones are 3-D obstacles, but they generally do not contain any holes. Therefore, we manually close those 3-D obstacles using the following procedure. Since we restrict the planning robot to move above any other robots or obstacles, the space above them can be considered as virtual obstacles. We further extend the virtual obstacles along the workspace boundaries until they reach the respective bases or static obstacles. In Figure \ref{fig: 3dhomotopy}, the magenta and grey dashed lines outline the virtual obstacles created for robot $j$ and obstacle $1$, respectively. Each 3-D obstacle, joined with its virtual obstacle, contains a hole, so that different homotopy classes can be induced by paths passing through the hole and paths passing outside the hole. Virtual 2-D manifolds can be created, as shown by the blue and green planes in Figure \ref{fig: 3dhomotopy}, and the sequence of the manifolds being intersected by a path can be recorded to identify the homotopy class of the path. We can observe the similarity between such a construction in 3-D space and the 2-D method discussed in Section \ref{subsec: lineseg}: given a 3-D space where all obstacles remain static, the two methods produce the same word for a path that avoids passing above the obstacles and maintains a safe distance to the obstacles. We also gain an understanding of the loops in the 2-D case by looking at the 3-D case: the loops at the intersection between two 2-D manifolds do not physically wrap around any obstacles, hence they can be contracted to a point topologically. As shown in Figure \ref{fig: 3dhomotopy}, the red circle that cuts through manifolds $o_{1,1}$, $r_{j,1}$ alternately is null-homotopic. \section{Sketch of Proof of Proposition \ref{prop: shortest}}\label{apd: proofshortest} The proof is based on the fact that, for a path that lies in the universal covering space of a workspace consisting of polygonal obstacles, the shortest homotopic path can be constructed from the vertices of the obstacles, and the start and end points \cite{HERSHBERGER199463,lee1984euclidean}. In our case where only thin barrier obstacles are considered, the vertices are the surface points $\gbf{\zeta}_{j,f}$, $j\in\myset{I}_m$, $f\in\{0,1\}$. Under Assumption \ref{ass: initialpos}, a surface point $\gbf{\zeta}_{j,f}$ can become a point on the shortest path only when the corresponding virtual segment $o_{j,f}$ has been crossed, hence it is sufficient to check only the surface points of the obstacles that lie in $h(k)$. {\color{blue} \section{Computing Initial Z Axis Trajectories}\label{subsec: initialz} } The generation of trajectories in the Z-axis uses the properties of a clamped uniform b-spline. A clamped uniform b-spline is defined by its degree $\trajorder$, a set of $\lambda+1$ control points $\{q_0, q_1, \dots,q_{\lambda}\}$ and $\lambda+\trajorder+2$ knots $\{t_0, t_1, \dots, t_{\lambda+\trajorder+1}\}$, where $t_0=t_1=\dots=t_{\trajorder}$, $t_{\lambda+1}=t_{\lambda+2}=\dots=t_{\lambda+\trajorder+1}$, and the internal knots $t_\trajorder$ to $t_{\lambda+1}$ are equally spaced. It uniquely determines $\lambda-\trajorder+1$ pieces of polynomial trajectories, each of a fixed time interval. 
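For concreteness, the following short Python sketch (our own illustration; function and variable names are not from the released implementation) constructs such a clamped uniform knot vector and checks the knot and piece counts:
\begin{verbatim}
import numpy as np

def clamped_uniform_knots(degree, num_ctrl_pts, T=1.0):
    """Knot vector of a clamped uniform b-spline.

    degree       : p (cubic -> 3)
    num_ctrl_pts : lambda + 1 control points
    T            : time interval of each polynomial piece
    The first and last p+1 knots are repeated (clamped), and the
    internal knots t_p, ..., t_{lambda+1} are equally spaced by T.
    """
    lam = num_ctrl_pts - 1
    num_pieces = lam - degree + 1             # lambda - p + 1 pieces
    internal = T * np.arange(num_pieces + 1)  # t_p, ..., t_{lambda+1}
    return np.concatenate([
        np.full(degree, internal[0]),   # t_0 = ... = t_p repeated
        internal,
        np.full(degree, internal[-1]),  # t_{lambda+1} = ... repeated
    ])

knots = clamped_uniform_knots(degree=3, num_ctrl_pts=10)  # lambda = 9
assert len(knots) == 9 + 3 + 2   # lambda + p + 2 = 14 knots in total
\end{verbatim}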
The clamped uniform b-spline has the following properties of interest \cite{zhou2019robust}: (1) the trajectory defined by a clamped uniform b-spline is guaranteed to start at $q_0$ and end at $q_{\lambda}$; (2) the derivatives of order $0$ to $\trajorder-1$ at the start and the end of the trajectory uniquely determine the first and last $\trajorder$ control points, respectively; (3) the $\iota$-th order derivative of the trajectory is contained within the convex hull formed by the $\iota$-th order control points, which can be obtained as \begin{align} q_l^{\lrangle{\iota+1}}=\frac{(\trajorder-\iota)(q^{\lrangle{\iota}}_{l+1}-q^{\lrangle{\iota}}_l)}{t_{l+\trajorder+1}-t_{l+\iota+1}}, \forall l\in[0,\lambda-\iota]\cap\mathbb{Z},\label{eq: bspline} \end{align} $\forall\iota\in[0,\trajorder-1]\cap\mathbb{Z}$, where $q_l^{\lrangle{\iota}}$ denotes the $\iota$-th order control point and $q_l^{\lrangle{0}}=q_l$. Using the above properties, we design an incremental control point adjustment scheme to obtain a feasible Z-axis trajectory. In our case, $\trajorder=3$ and we would like to obtain $\maxop{\eta}$ pieces of trajectories, where $\maxop{\eta}$ is user-defined, each of time interval $T$ to be consistent with the kinodynamic search; hence $\lambda=\maxop{\eta}+\trajorder-1$. We first determine the first $3$ control points of the b-spline from the initial states $p^{\text{in},\text{z}}$, $v^{\text{in},\text{z}}$ and $a^{\text{in},\text{z}}$, and set the last $3$ control points equal to the terminal target altitude $p^{\text{term},\text{z}}$. Then, we set the middle points $q_3,\dots,q_{\lambda-3}$ such that they are equally spaced between $q_2$ and $q_{\lambda-2}$. This setting corresponds to an initial trajectory that starts at the given states and reaches $p^{\text{term},\text{z}}$ with zero velocity and acceleration. Next, we check the dynamic feasibility of this trajectory by computing the velocity and acceleration control points using equation (\ref{eq: bspline}), and adjust the lower-order control points if the higher-order points exceed the bound. For example, given that the acceleration exceeds the bound, $q_l^{\lrangle{2}}>\maxop{a}^{\text{z}}$, we adjust the velocity and position control points \begin{align} &q_{l+1}^{\lrangle{1}}=q_{l}^{\lrangle{1}}+\frac{T}{\trajorder-1}\maxop{a}^{\text{z}},\\ &q_{l+2}=q_{l+1}+\frac{T}{\trajorder}q_{l+1}^{\lrangle{1}}, \end{align} so that the updated acceleration is within the bound, $q_l^{\lrangle{2}}=\maxop{a}^{\text{z}}$. The output trajectory satisfies all dynamic constraints while reaching as close to $p^{\text{term},\text{z}}$ as possible. Finally, we convert the b-spline control points into polynomial coefficients using the basis matrices described in \cite{qin2000general}. \end{appendices} \bibliographystyle{ieeetr}
\section{Introduction} It is rather trivial for a human to follow the instruction \textit{``Walk beside the outside doors and behind the chairs across the room. Turn right and walk up the stairs...''}, but teaching robots to navigate with such instructions is a very challenging task. The complexities arise not just from the linguistic variations of instructions, but also from the noisy visual signals of real-world environments with rich dynamics. Robot navigation via visual and language grounding is also a fundamental goal in computer vision and artificial intelligence, and it is beneficial for many practical applications as well, such as in-home robots, hazard removal, and personal assistants. Vision-and-Language Navigation (VLN) is the task of training an embodied agent, which has a first-person view as humans do, to carry out natural language instructions in the real world~\cite{anderson2017vision}. Figure~\ref{fig:example} demonstrates an example of the VLN task, where the agent moves towards the destination by analyzing the visual scene and following the natural language instructions. This is different from some other vision \& language tasks where the visual perception and natural language input are usually fixed (\textit{e.g.} Visual Question Answering). For VLN, the agent can interact with the real-world environment, and the pixels it perceives are changing as it moves. Thus, the agent must learn to map its visual input to the correct action based on its perception of the world and its understanding of the natural language instruction. \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{figures/R2R_example.pdf} \caption{An example of our task. The embodied agent learns to navigate through the room and arrive at the destination (\textbf{{\color{mygreen} green}}) by following the natural language instructions. \textbf{{\color{red} Red}} and \textbf{{\color{blue} blue}} arrows match the orientations depicted in the pictures to the corresponding sentence. } \label{fig:example} \end{figure} Although steady progress has been made on the natural language command of robots~\cite{beattie2016deepmind,kempka2016vizdoom,zhu2017target,misra2017mapping}, it is still far from perfect. Previous methods mainly employ \textit{model-free} reinforcement learning (RL) to train the intelligent agent by directly mapping raw observations into actions or state-action values. But model-free RL does not consider the environment dynamics and usually requires a large amount of training data. Besides, most of them are evaluated only in synthetic rather than real-world environments, which significantly simplifies the noisy visual \& linguistic perception problem, and the subsequent reasoning process, in the real world. It is worth noting that when humans follow instructions, however, they do not rely solely on the current visual perception, but also imagine what the environment would look like and plan ahead in their minds before actually performing a series of actions. For example, in baseball, the catcher and the outfield players often predict the direction and speed at which the ball will travel, so they can plan ahead and move to the expected destination of the ball. Inspired by this fact, we seek the help of recent advances in \textit{model-based} RL~\cite{oh2017value,weber2017imagination} for this task. Model-based RL attempts to learn a model that can be used to simulate the environment and do multi-step lookaheads for planning. 
With an internal environment model to predict the future and plan ahead, the agent can benefit from the planning while avoiding some trial-and-error in the real environment. Therefore, in this paper, we propose a novel approach which improves the vision-and-language navigation task performance by Reinforced Planning Ahead (which we refer to as RPA). More specifically, our method, for the first time, endows the intelligent VLN agent with an environment model to simulate the world and predict the future visual perception. Thus the agent can both map directly from the current real observation and plan over the predicted future observations at the same time, and then perform an action based on both. Furthermore, we choose the real-world Room-to-Room (R2R) dataset as the testbed of our method. Our model-free RL model significantly outperforms the baseline methods reported on the R2R dataset. Moreover, being equipped with the look-ahead module, our RPA model further improves the results and achieves the best performance on the R2R dataset. Hence, our contributions are three-fold: \begin{itemize} \item We are the first to combine model-free and model-based DRL for vision-and-language navigation. \item Our proposed RPA model significantly outperforms the baselines and achieves the best results on the real-world R2R dataset. \item Our method is more scalable, and its strong generalizability allows it to be better transferred to unseen environments than the model-free RL methods. \end{itemize} \section{Related Work} \subsubsection{Vision, Language and Navigation} Recently, the intersection of vision and language research has attracted a lot of attention. Much work~\cite{xu2015show,vinyals2015show,karpathy2015deep,chen2015mind,yu2016video,wang2018watch,wang2018video,AREL2018} has been done in language generation conditioned on visual inputs. There is also another line of work~\cite{huang2016visual,antol2015vqa} that tries to answer questions from images. The task of vision-language grounding~\cite{thomason2017guiding,alomari2017learning,alomari2017natural} is more relevant to our task, which requires the ability to connect the language semantics to the physical properties of the environment. Our task requires the same ability but is more task-driven. 
The agent in our task needs to sequentially interact with the environment and finish a navigation task specified by a language instruction. Early approaches~\cite{kim1999symbolic,borenstein1989real,borenstein1991vector,oriolo1995line} on robot navigation usually require a prior global map or need to build an environment map on-the-fly. The navigation goal in these methods is usually directly annotated in the map. In contrast to these works, the VLN task is more challenging in the sense that no global map is required and the goal is not directly annotated but described by natural language. Under this setting, several methods have been proposed recently. Mei \emph{et al.}~\cite{mei2016listen} proposed a sequence-to-sequence model to map the language to navigation actions. Misra \emph{et al.}~\cite{misra2017mapping} formulate navigation as a sequential decision process and propose to use reward shaping to effectively train the RL agent. In the same environment, Xiong \emph{et al.}~\cite{xiong2018scheduled} propose a scheduled training mechanism which yields more efficient exploration and achieves better results. However, these methods still operate in synthetic environments and consider either simple discrete observation inputs or an unrealistic top-down view of the environment. \subsubsection{Model-based Reinforcement Learning} Using model-based RL for planning is a long-standing problem in reinforcement learning. Recently, the great computational power of neural networks has made it more realistic to learn a neural model to simulate environments~\cite{watter2015embed,lenz2015deepmpc,finn2017deep}. But for more complicated environments where the simulator is not exposed to the agent, model-based RL usually suffers from the mismatch between the learned and real environments~\cite{gu2016continuous,talvitie2015agnostic}. In order to combat this issue, RL researchers are actively working on combining model-free and model-based RL~\cite{sutton1990integrated,yao2009multi,tamar2016value,silver2016predictron}. Most recently, Oh \textit{et al.}~\cite{oh2017value} propose a Value Prediction Network whose abstract states are trained to make predictions of future values rather than of future observations, and Weber \textit{et al.}~\cite{weber2017imagination} introduce an imagination-augmented agent to construct implicit plans and interpret predictions. Our algorithm shares the same spirit and is derived from these methods. But instead of testing on games, we, for the first time, adapt the combination of model-based and model-free RL for the real-world vision-and-language task. Another related work by Pathak \textit{et al.}~\cite{pathak2017curiosity} also learns to predict the next state during roll-out. An intrinsic reward is calculated based on the state prediction. Instead of inducing an extra reward, we directly incorporate the state prediction into the policy module. \section{Method} \subsection{Task Definition} As shown in Figure~\ref{fig:example}, we consider an embodied agent that learns to follow natural language instructions and navigate in realistic indoor environments. 
Specifically, given the agent's initial pose $p_0 = (v_0, \phi_0, \theta_0)$, which includes the spatial position, heading and elevation angles, and a natural language instruction (sequence of words) $\mathcal{X} = \{x_1,x_2, ..., x_n\}$, the agent is expected to choose a sequence of actions $\{a_1,a_2, ..., a_T\} \in \mathcal{A}$ and arrive at the target position $v_{target}$ specified by the language instruction $\mathcal{X}$. The action set $\mathcal{A}$ consists of six unique actions, \emph{i.e.} \textit{turn left, turn right, camera up, camera down, move forward}, and \textit{stop}. In order to figure out the desired action $a_t$ at each time step, the agent needs to effectively associate the language semantics with its visual observation $o_t$ about the environment. Here the observation $o_t$ is the raw RGB image captured by the mounted camera. The performance of the agent is evaluated by both the success rate $P_{succ}$ (the percentage of test instructions that are correctly followed by the agent) and the final navigation error $E_{nav}$ (average final distance from the target position). \begin{figure}[t] \centering \includegraphics[width=1\textwidth]{figures/model_diagram.pdf} \caption{The overview of our method.} \label{fig:overview} \end{figure} \subsection{Overview} In consideration of the sequential decision-making nature of the VLN task, we formulate VLN as a reinforcement learning problem, where the agent sequentially interacts with the environment and learns by trial and error. Once an action is taken, the agent receives a scalar reward $r(a_t,s_t)$ from the environment. The agent's action $a_t$ at each step is determined by a parametrized policy function $\pi(o_t;\theta)$. The training objective is to find the optimal parameters $\theta$ that maximize the discounted cumulative rewards: \begin{equation} \max_{\theta} \mathcal{J^{\pi}} = \mathbb{E} \Big[ \sum_{t=1}^{T} \gamma^{t-1} r(a_t,s_t) | \pi(o_t;\theta) \Big] \quad, \end{equation} where $\gamma \in (0,1)$ is the discount factor that reflects the significance of future rewards. 
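To make the objective concrete, a minimal Python sketch (our own illustration, not from the released code) of the discounted cumulative reward of one sampled episode:
\begin{verbatim}
def discounted_return(rewards, gamma=0.95):
    """Discounted cumulative reward: sum over t of gamma^(t-1) * r_t."""
    ret, discount = 0.0, 1.0
    for r in rewards:       # r(a_1, s_1), ..., r(a_T, s_T)
        ret += discount * r
        discount *= gamma
    return ret

# e.g. a 3-step episode: 1.0 + 0.5 * 0.0 + 0.25 * 2.0 = 1.5
assert abs(discounted_return([1.0, 0.0, 2.0], gamma=0.5) - 1.5) < 1e-9
\end{verbatim}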
We model the policy function as a sequence-to-sequence neural network that encodes both the language sequence $\mathcal{X} = \{x_1,x_2, ...,x_n\}$ and image frames $\mathcal{O} = \{o_1,o_2, ...,o_T\}$ and decodes the action sequence $\{a_1,a_2, ..., a_T\}$. The basic model consists of a \textbf{language encoder} that encodes the instruction $\mathcal{X}$ as word features $\{w_1,w_2, ...,w_n\}$, an \textbf{image encoder} that extracts high-level visual features, and a \textbf{recurrent policy network} that decodes actions and recurrently updates its internal state, which is supposed to encode the history of previous actions and observations. To reinforce the agent by planning ahead and further improve the model's capability, we equip the agent with \textbf{look-ahead modules}, which employ the \textbf{environment model} to take into account the future predictions. As illustrated in Figure~\ref{fig:overview}(a), at each time step $t$, the recurrent policy model takes as input the word features $\{w_i\}$ and the state $s_t$ and produces the information for the final decision making, which forms a \textit{model-free path} by itself. In addition, the \textit{model-based path} exploits multiple look-ahead modules to realize look-ahead planning and imagine the possible future trajectories. The final action $a_t$ is chosen by the \textbf{action predictor}, based on the information from both the model-free and model-based paths. Therefore, our RPA method seamlessly integrates model-free and model-based reinforcement learning. \subsection{Look-Ahead Module} The core component of the RPA method is the look-ahead module, which is used to imagine the consequences of planning ahead multiple steps from the current state $s_t$. In order to augment the agent with imagination, we introduce the \textit{environment model} that makes a prediction about the future based on the state of the present. Since directly predicting the raw RGB image $o_{t+1}$ is very challenging, our environment model, instead, attempts to predict the abstract-state representation $s_{t+1}$ that represents the high-level visual feature. Figure~\ref{fig:overview}(b) showcases the internal process of the look-ahead module, which consists of an environment model, a look-ahead policy, and a trajectory encoder. Given the abstract-state representation $s_t$ of the real world at step $t$, the look-ahead policy\footnote{We adopt the recurrent policy used in the model-free path as the look-ahead policy in all our experiments.} first takes $s_t$ as input and outputs an imagined action $a'_t$. Our environment model receives the state $s_t$ and the action $a'_t$, and predicts the corresponding reward $r'_t$ and the next state $s'_{t+1}$. Then the look-ahead policy will take a further action $a'_{t+1}$ based on the predicted state $s'_{t+1}$. The environment model will make a new prediction $\{r'_{t+1}, s'_{t+2}\}$. This look-ahead planning goes $m$ steps, where $m$ is the preset trajectory length. We use an LSTM to encode all the predicted rewards and states along the look-ahead trajectory and output its representation $\tau'_{j}$. As shown in Figure~\ref{fig:overview}(a), at every time step $t$, our model-based path operates $J$ look-ahead processes and we obtain a look-ahead trajectory representation $\tau'_{j}$ for each ($j = 1,...,J$). These $J$ look-ahead trajectories are then aggregated together (by concatenation) and passed to the action predictor as the information of the model-based path. 
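To illustrate, the following minimal Python sketch (our own, with illustrative names; \texttt{policy}, \texttt{env\_model} and \texttt{traj\_encoder} stand for the learned look-ahead policy, environment model, and trajectory encoder) outlines one look-ahead roll-out:
\begin{verbatim}
def look_ahead_rollout(s_t, policy, env_model, traj_encoder, m):
    """Imagine m steps ahead from the abstract state s_t.

    policy(s)       -> imagined action a'
    env_model(s, a) -> predicted reward r' and next state s'
    traj_encoder    -> encodes the (r', s') sequence, e.g. an LSTM
    """
    s, trajectory = s_t, []
    for _ in range(m):
        a = policy(s)            # look-ahead policy picks an action
        r, s = env_model(s, a)   # environment model predicts (r', s')
        trajectory.append((r, s))
    return traj_encoder(trajectory)  # roll-out representation tau'

# The model-based path launches J such roll-outs (one per initial
# action); the J encodings are concatenated for the action predictor.
\end{verbatim}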
\begin{figure}[t] \centering \includegraphics[width=6.5cm]{figures/env_model.pdf} \caption{The environment model.} \label{fig:env} \end{figure} \subsection{Models} Here we further discuss the architecture designs of the learnable models in our methods that are not specified above, including the environment model, the recurrent policy model, and the action predictor. \subsubsection{Environment Model} Given the current state $s_t$ and the action $a_t$ taken by the agent, the environment model predicts the next state $s'_{t+1}$ and the reward $r'_t$. As is shown in Figure~\ref{fig:env}, the projection function $f_{proj}$ first concatenates $s_t$ and $a_t$ and then projects them into the same feature space. Its output is then fed into the transition function $f_{transition}$ and the reward function $f_{reward}$ to obtain $s'_{t+1}$ and $r'_t$, respectively. In formula, \begin{align} s'_{t+1} &= f_{transition}(f_{proj}(s_t, a_t)) \\ r'_t &= f_{reward}(f_{proj}(s_t, a_t)) \quad, \end{align} where $f_{proj}$, $f_{transition}$, and $f_{reward}$ are all learnable neural networks. Specifically, $f_{proj}$ is a linear projection layer, $f_{transition}$ is a multilayer perceptron with sigmoid output, and $f_{reward}$ is also a multilayer perceptron but directly outputs the scalar reward. \subsubsection{Recurrent Policy Model} Our recurrent policy model is an attention-based LSTM decoder network (see Figure~\ref{fig:policy}). At each time step $t$, the LSTM decoder produces the action $a_t$ by considering the context of the word features $\{w_i\}$, the environment state $s_t$, the previous action $a_{t-1}$, and its internal hidden state $h_{t-1}$. Note that one may directly take the encoded word features $\{w_i\}$ as the input of the LSTM decoder. We instead adopt an attention mechanism to better capture the dynamics in the language instruction and dynamically put more attention to the words that are beneficial for the current action selection. \begin{figure}[t] \centering \includegraphics[width=11cm]{figures/policy.pdf} \caption{An example of the unrolled recurrent policy model (from $t$ to $t+5$). The left-side yellow region demonstrates the attention mechanism at time step $t$.} \label{fig:policy} \end{figure} The left-hand side of Figure~\ref{fig:policy} demonstrates the attention module for the LSTM decoder. At each time step $t$, the context vector $c_t$ is computed as a weighted sum over the encoded word features $\{w_i\}$ \begin{equation} c_t = \sum \alpha_{t,i} w_i \quad . \end{equation} These attention weights $\{\alpha_{t,i}\}$ act as an alignment mechanism by giving higher weights to certain words which match the decoder's current status, and are defined as \begin{equation} \label{eq:att1} \alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{k=1}^n \exp(e_{t,k})} \quad, \quad \text{where}~ e_{t,i} = h_{t-1}^\top w_i \quad. \end{equation} $h_{t-1}$ is the decoder's hidden state at the previous step. Once the context vector $c_t$ is obtained, the concatenation of $[c_t, s_t, a_{t-1}]$ is fed as the input of the decoder to produce the intermediate model-free feature for the action predictor's use. Formally, \begin{align} h_t &= LSTM(h_{t-1}, [c_t, s_t, a_{t-1}]) \quad . \end{align} Then the output feature is the concatenation of the LSTM's output $h_t$ and the context vector $c_t$, which will be passed to the action predictor for making the decision. 
But if the recurrent policy model is employed as an individual policy (\textit{e.g.} the look-ahead policy), then it directly outputs the action $a_t$ based on $[h_t; c_t]$. Note that in our model, we feed the context vector $c_t$ to both the LSTM and the output posterior, which boosts the performance compared with solely feeding it into the input. \subsubsection{Action Predictor} The action predictor is a multilayer perceptron with a SoftMax layer as the last layer. Given the information from both the model-free and model-based paths as the input, the action predictor generates a probability distribution over the action space $\mathcal{A}$. \subsection{Learning} The training of the whole system is a two-step process: learning the environment model first and then learning the enhanced policy model, which is equipped with the look-ahead module. It is worth noting that the environment model and the policy model have their own language encoders and are trained separately. The environment model will be fixed during policy learning. \subsubsection{Environment Model Learning} Ideally, the look-ahead module is expected to provide the agent with accurate predictions of future observations and rewards. If the environment model is noisy itself, it can actually provide misleading information and make the training even more unstable. In view of this, before we plug in the look-ahead module, we pretrain the environment model using a randomized teacher policy. Under this policy, the agent will decide whether to take the human demonstration action or a random action based on a Bernoulli meta-policy with $p_{human} = 0.95$. Since the agent's policy will get closer to the demonstration (optimal) policy during training, the environment model trained by the demonstration policy will help it better predict the transitions close to the optimal trajectories. On the other hand, in reinforcement learning methods, the agent's policy is usually stochastic during training. Making the agent take a random action with probability $1 - p_{human}$ simulates this stochastic training process. We define two losses to optimize this environment model: \begin{align} \mathit{l}_{transition} &= \mathbb{E}[(s'_{t+1} - s_{t+1})^2] \\ \mathit{l}_{reward} &= \mathbb{E}[(r'_{t+1} - r_{t+1})^2] \quad . \end{align} The parameters are updated by jointly minimizing these two losses. \subsubsection{Policy Learning} With the pretrained environment model, we can incorporate the look-ahead module into the policy model. We first discuss the general pipeline of training the RL agent and then describe how to train the proposed RPA model. In the VLN task, two distinct supervisions can be used to train the policy model. First, we can use the demonstration actions provided by the simulator to do pure supervised learning. The training objective in this case is to simply maximize the log-likelihood of the demonstration action: \begin{equation} \mathcal{J}_{sl} = \mathbb{E} [ \log(\pi(a_h|o;\theta)) ] \quad , \end{equation} where $a_h$ is the demonstration action. This agent can quickly learn a policy that performs relatively well on seen scenes. However, pure supervised learning only encourages the agent to imitate the demonstration paths. This potentially limits the agent's ability to recover from erroneous actions in an unseen environment. To also encourage the agent to explore the state-action space outside the demonstration path, we utilize the second supervision, \emph{i.e.} the reward function. 
The reward function depends on the environment state $s$ and the agent's action $a$, and is usually not differentiable in terms of $\theta$. As the objective of the VLN task is to successfully arrive at the target position, we define our reward function based on the distance metric. We denote the distance between a state $s$ and the target position $v_{target}$ as $\mathcal{D}_{target}(s)$. Then the reward after taking action $a_t$ at state $s_t$ is defined as: \begin{equation} \label{i_r} r(s_t,a_t) = \mathcal{D}_{target}(s_{t}) - \mathcal{D}_{target}(s_{t+1}) \quad . \end{equation} It indicates whether the action reduces the agent's distance from the target. Obviously, this reward function only reflects the immediate effect of a particular action but ignores the action's future influence. To account for this, we reformulate the reward function in a discounted cumulative form: \begin{equation} \label{r} R(s_t,a_t) = \sum_{t'=t}^{T} \gamma^{t'-t}r(s_{t'},a_{t'}) \quad . \end{equation} Besides, the success of the whole trajectory can also be used as an additional binary reward. Further details on the reward setting are discussed in the experiment section. With the reward function, the RL objective then becomes: \begin{equation} \mathcal{J}_{rl} = \mathbb{E}_{a \sim \pi(\theta)} [ \sum_t{R(s_t,a_t)} ] \quad . \end{equation} Using the likelihood-ratio estimator in the REINFORCE algorithm, the gradient of $\mathcal{J}_{rl}$ can be written as: \begin{equation} \nabla_{\theta}\mathcal{J}_{rl} = \mathbb{E}_{a \sim \pi(\theta)} [\nabla_{\theta} \log \pi(a|s;\theta) R(s,a)] \quad . \end{equation} With these two training objectives, we can either use a mixed loss function as in~\cite{ranzato2015sequence} to train the whole model, or use supervised learning to warm-start the model and use RL to do fine-tuning. In our case, we find the mixed loss converges faster and achieves better performance. \begin{algorithm}[t] \caption{RL training with planning ahead} \label{alg:training} \begin{algorithmic}[1] \State $\theta_p$: policy parameters to be learned, $\theta_e$: environment model parameters \State Initialize the R2R environment \While{not converged} \State Roll-out a trajectory $(<s_1,a_1,r_1>,<s_2,a_2,r_2>,...,<s_T,a_T,r_T>)$ \State Update $\theta_e$ using $ g \propto \nabla_{\theta_e}(\mathit{l}_{transition} + \mathit{l}_{reward})$ \EndWhile \For{iteration=0,M-1} \State Initialize the weight for the supervised loss $w_{SLloss} \leftarrow 1$ \State Sample a batch of training instructions \State $s_0 \leftarrow $ initial state \For{$t$ = 0, MAX\_EPISODE\_LEN-1} \State Perform depth-bounded ($depth=2$) roll-outs using the environment model \State Use the roll-out encoder to encode all these simulated roll-outs \State Sample actions under the current policy in parallel \State Save immediate rewards $r(s_t,a_t)$ and performed actions $a_t$ \If{All Ended} \State Break \EndIf \EndFor \State Compute the discounted cumulative reward $R(s_t,a_t)$ \State Total loss $\mathit{l}_{policy} = - w_{SLloss} * \mathcal{J}_{sl} - (1 - w_{SLloss}) * \mathcal{J}_{rl}$ \State Decrease $w_{SLloss}$: $w_{SLloss}\leftarrow 0.1 + 0.9 * \exp(-iteration / \mathcal{T})$ \State Update $\theta_p$ using $g \propto \nabla \mathit{l}_{policy}$ \EndFor \end{algorithmic} \end{algorithm} To jointly train the policy model and the look-ahead module, we first freeze the pretrained environment model. Then at each step, we perform simulated depth-bounded roll-outs using the environment model. 
Since we have five unique actions besides the \textit{stop} action, we perform the corresponding five roll-outs. Each path is first encoded using an LSTM. The last hidden states of all paths are concatenated and then fed into the action predictor. Now the learnable parameters come from three components: the original model-free policy model, the roll-out encoder, and the action predictor. The pseudo-code of the algorithm is shown in Algorithm~\ref{alg:training}. \section{Experiments} \subsection{Experimental Settings} \subsubsection{R2R Dataset} The Room-to-Room (R2R) dataset~\cite{anderson2017vision} is the first dataset for the vision-and-language navigation task in real 3D environments. The R2R dataset is built upon the Matterport3D dataset~\cite{chang2017matterport3d}, which consists of 10,800 panoramic views constructed from 194,400 RGB-D images of 90 building-scale scenes (many of the scenes can be viewed in the Matterport 3D spaces gallery\footnote{\url{https://matterport.com/gallery/}}). The R2R dataset further samples 7,189 paths capturing most of the visual diversity in the dataset and collects 21,567 navigation instructions with an average length of 29 words (each path is paired with 3 different instructions). As reported in \cite{anderson2017vision}, the R2R dataset is split into training (14,025 instructions), seen validation (1,020), unseen validation (2,349), and test (4,173) sets. Both the unseen validation and test sets contain environments that are unseen in the training set, while the seen validation set shares the same environments with the training set. \subsubsection{Implementation Details} We develop our algorithms on the open source code of the Matterport3D simulator\footnote{\url{https://github.com/peteanderson80/Matterport3DSimulator}}. ResNet-152 CNN features~\cite{he2016deep} are extracted for all the images without fine-tuning. In the model-based path, we perform one look-ahead planning process for each possible action in the environment. The $j$-th look-ahead process corresponds to the $j$-th action of the action set $\mathcal{A}$, and the subsequent actions are executed by the shared look-ahead policy. In our experiments, we use the same policy model trained in the model-free path as the look-ahead policy. All the other hyperparameters are tuned on the validation set. More training details can be found in the supplementary material. \subsubsection{Evaluation Metrics} Following the conventional wisdom, the R2R dataset mainly evaluates the results by three metrics: \textit{navigation error}, \textit{success rate}, and \textit{oracle success rate}. We also report the \textit{trajectory length}, though it is not an evaluation metric. The navigation error is defined as the shortest path distance in the navigation graph between the agent's final position $v_T$ and the destination $v_{target}$. The success rate calculates the percentage of the result trajectories whose navigation errors are less than 3m. The oracle success rate is also reported: the distance between the closest point on the trajectory and the destination is used to calculate the error, even if the agent does not stop there. \subsubsection{Baselines} In the R2R dataset, there exists a ground-truth shortest-path trajectory (\textit{Shortest}) for each instruction sequence from the starting location $v_0$ to the target location $v_{target}$. This shortest-path trajectory can be further used for supervised training. 
\textit{Teacher-forcing}~\cite{luong2015effective} uses cross-entropy loss to train the model at each time step to maximize the likelihood of the next ground-truth action given the previous ground-truth action. Instead of feeding the ground-truth action back to the recurrent model, one can sample an action based on the output probabilities over the action space (\textit{Student-forcing}). In our experiments, we list the results of these two models as reported in \cite{anderson2017vision} as our baselines. We also include the results of a random agent (\textit{Random}), which randomly takes an action at each step. \begin{table}[t] \setlength{\tabcolsep}{2pt} \begin{center} \begin{tabular}{ l | c c c c | c c c c | c c c c} & \multicolumn{4}{c|}{\textbf{Val Seen}} & \multicolumn{4}{c|}{\textbf{Val Unseen}} & \multicolumn{4}{c}{\textbf{Test (unseen)}} \\ \hline \textbf{Model} & \begin{tabular}{@{}c@{}} TL \\ (m) \end{tabular} & \begin{tabular}{@{}c@{}} NE \\ (m) \end{tabular} & \begin{tabular}{@{}c@{}} SR \\ (\%) \end{tabular} & \begin{tabular}{@{}c@{}} OSR \\ (\%) \end{tabular} & \begin{tabular}{@{}c@{}} TL \\ (m) \end{tabular} & \begin{tabular}{@{}c@{}} NE \\ (m) \end{tabular} & \begin{tabular}{@{}c@{}} SR \\ (\%) \end{tabular} & \begin{tabular}{@{}c@{}} OSR \\ (\%) \end{tabular} & \begin{tabular}{@{}c@{}} TL \\ (m) \end{tabular} & \begin{tabular}{@{}c@{}} NE \\ (m) \end{tabular} & \begin{tabular}{@{}c@{}} SR \\ (\%) \end{tabular} & \begin{tabular}{@{}c@{}} OSR \\ (\%) \end{tabular} \\ \hline Shortest & 10.19 & 0.00 & 100 & 100 & 9.48 & 0.00 & 100 & 100 & 9.93 & 0.00 & 100 & 100 \\ Random & 9.58 & 9.45 & 15.9 & 21.4 & 9.77 & 9.23 & 16.3 & 22.0 & 9.93 & 9.77 & 13.2 & 18.3 \\ Teacher-forcing & 10.95 & 8.01 & 27.1 & 36.7 & 10.67 & 8.61 & 19.6 & 29.1 & - & - & - & -\\ Student-forcing & 11.33 & 6.01 & 38.6 & 52.9 & 8.39 & 7.81 & 21.8 & 28.4 & 8.13 & 7.85 & 20.4 & 26.6 \\ \hline \textbf{Ours} & \multicolumn{4}{c|}{} & \multicolumn{4}{c|}{} & \\ XE & 11.51 & 5.79 & 40.2 & \textbf{54.1} & 8.94 & 7.97 & 21.3 & 28.7 & 9.37 & 7.82 & 22.1 & 30.1\\ Model-free RL & 10.88 & 5.82 & 41.9 & 53.5 & 8.75 & 7.88 & 21.5 & 28.9 & 8.83 & 7.76 & 23.1 & 30.2\\ RPA & 8.46 & \textbf{5.56} & \textbf{42.9} & 52.6 & 7.22 & \textbf{7.65} & \textbf{24.6} & \textbf{31.8} & 9.15 & \textbf{7.53} & \textbf{25.3} & \textbf{32.5}\\ \end{tabular} \end{center} \caption{Results on both the validation sets and test set in terms of four metrics: Trajectory Length (TL), Navigation Error (NE), Success Rate (SR), and Oracle Success Rate (OSR). We list the best results as reported in \cite{anderson2017vision}, of which Student-forcing performs the best. Our RPA method significantly outperforms the previous best results, and it is also noticeable that we gain a larger improvement on the unseen sets, which proves that our RPA method is more generalizable.} \label{table:result} \vspace*{-3ex} \end{table} \subsection{Results and Analysis} Table~\ref{table:result} shows the result comparison between our models and the baseline models. We first implement our own recurrent policy model trained with the cross-entropy loss (\textit{XE}). Note that our XE model performs better than the Student-forcing model on the test set. By switching to the model-free RL, the results are slightly improved. Then our RPA learning method further boosts the performance consistently on the metrics and achieves the best results on the R2R dataset, which validates the effectiveness of combining model-free and model-based RL for the VLN task. 
An important fact revealed here is that our RPA method brings a notable improvement on the unseen sets, and the improvement is even larger than that on the seen set (the relative success rates are improved by 6.7\% on Val Seen, 15.5\% on Val Unseen, and 14.5\% on Test over XE), while the model-free RL method gains only a very small performance boost on the unseen sets. This supports our claim that the look-ahead module allows data to be collected and utilized in a scalable way for decision making. Moreover, our RPA method turns out to be more generalizable and transfers better to unseen environments. \subsection{Ablation Study} \subsubsection{Learning Curves of the Environment Model} To realize our RPA method, we first need to train an environment model to predict the future state given the present state, which is then plugged into the look-ahead module. So it is important to guarantee the effectiveness of the pretrained environment model. In Figure~\ref{fig:env_loss}, we plot both the transition loss and the reward loss of the environment model during training. Evidently, both losses converge to a stable point after around 500 iterations. But it is also noticeable that the learning curve of the reward loss is much noisier than that of the transition loss. This is because of the sparse nature of rewards. Unlike the state transitions, which are usually more continuous, the rewards within trajectory samples are very sparse and of high variance, so predicting the exact reward with a mean-squared-error loss is noisier. \begin{figure}[t] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=3.5cm]{figures/env_tran_loss.png} \end{subfigure} ~ \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=3.5cm]{figures/env_reward_loss.png} \end{subfigure} \caption{Learning curves of the environment model.} \label{fig:env_loss} \end{figure} \subsubsection{Effect of Different Rewards} We test four different reward functions in our experiments. The results are shown in Table~\ref{table:reward}. The \textit{Global Distance} reward function is defined per path by assigning the same reward to all actions along this path. This reward measures how far the agent approaches the target by finishing the path. The \textit{Success} reward is a binary reward: if the path is correct, then all actions are assigned a reward of $1$, and otherwise a reward of $0$. The \textit{Discounted} reward is defined as in Equation~\ref{r}. Finally, the \textit{Discounted} \& \textit{Success} reward, which is used by our final model, adds the \textit{Success} binary reward to the immediate reward (see Equation~\ref{i_r}) of the final action. Then the discounted cumulative reward is calculated using Equation~\ref{r}. In the experiments, the first two rewards are much less effective than the discounted reward functions, which assign different rewards to different actions. We believe the discounted reward calculated at every time step can better reflect the true value of each action. As the final evaluation is based not only on the navigation error but also on the success rate, we also observe that incorporating the success information into the reward can further boost the performance in terms of success rate.
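For clarity, the reward used by our final model can be sketched in a few lines (a sketch, ours; the function name and the discount factor value are illustrative, and the immediate rewards are assumed to be those of Equation~\ref{i_r}):
\begin{verbatim}
# Sketch (ours) of the "Discounted & Success" reward: a binary success
# bonus is added to the immediate reward of the final action, and the
# discounted cumulative rewards are then computed backward in time.
def discounted_success_rewards(immediate_rewards, success, gamma=0.95):
    rewards = list(immediate_rewards)
    rewards[-1] += 1.0 if success else 0.0   # success bonus on last action
    returns, running = [0.0] * len(rewards), 0.0
    for t in reversed(range(len(rewards))):  # backward accumulation
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns
\end{verbatim}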
\begin{table}[t] \small \setlength{\tabcolsep}{3pt} \begin{center} \begin{tabular}{ l | c c c | c c c } & \multicolumn{3}{c|}{\textbf{Val Seen}} & \multicolumn{3}{c}{\textbf{Val Unseen}} \\ \hline \textbf{Reward} & \begin{tabular}{@{}c@{}} Navigation \\ Error \\ (m) \end{tabular} & \begin{tabular}{@{}c@{}} Success \\ (\%) \end{tabular} & \begin{tabular}{@{}c@{}} Oracle \\ Success \\ (\%) \end{tabular} & \begin{tabular}{@{}c@{}} Navigation \\ Error \\ (m) \end{tabular} & \begin{tabular}{@{}c@{}} Success \\ (\%) \end{tabular} & \begin{tabular}{@{}c@{}} Oracle \\ Success \\ (\%)\end{tabular} \\ \hline \textit{Global Distance} & 6.17 & 35.5 & 45.1 & 8.20 & 19.0 & 25.6 \\ \textit{Success} & 6.21 & 37.8 & 43.2 & 8.17 & 21.3 & 26.7 \\ \textit{Discounted} & \textbf{5.79} & 40.5 & 52.8 & \textbf{7.74} & 20.4 & 28.5 \\ \textit{Discounted} $\&$ \textit{Success} & 5.82 & \textbf{41.9} & \textbf{53.5} & 7.88 & \textbf{21.5} & \textbf{28.9} \\ \end{tabular} \end{center} \caption{Results of the model-free RL with different reward definitions.} \label{table:reward} \vspace*{-3ex} \end{table} \subsubsection{Case Study} For a more intuitive view of the decision-making process in the VLN task, we show a test trajectory performed by our RPA agent in Figure~\ref{fig:case_study}. The agent starts from position (1) and takes a sequence of actions by following the natural language instruction until it reaches the destination (11) and stops there. We observe that although the actions include \textit{Forward, Left, Right, Up, Down,} and \textit{Stop}, the actions \textit{Up} and \textit{Down} appear very rarely in the resulting trajectories. In most cases, the agent can still reach the destination even without moving the camera up or down, which indicates that the R2R dataset is limited in its action distribution. \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{figures/case_study_2.pdf} \caption{An example trajectory executed by our RPA agent. Given the instruction and the starting position (1), the agent produces one action per time step. In this example we show all the 11 steps of this trajectory.} \label{fig:case_study} \vspace*{-2ex} \end{figure} \section{Conclusion} Through experiments, we demonstrate the superior performance of our proposed RPA approach, which also tackles the common generalization issue of model-free RL when applied to unseen scenes. Besides, equipped with the look-ahead module, our method can simulate the environment and incorporate the imagined trajectories, making the model more scalable than the model-free agents. In the future, we plan to explore the potential of model-based RL to transfer across different tasks, \textit{e.g.,} Vision-and-Language Navigation, Embodied Question Answering~\cite{embodiedqa}, etc. \bibliographystyle{splncs04}
\section{Introduction} The characterization of the behavior of a passive but diffusing scalar advected by a prescribed, smooth velocity field has been the subject of intensive research going back at least as far as Batchelor \cite{Batchelor:1959}. Above and beyond the obvious practical importance in applications ranging from micro-mixers to global climate dynamics, 'scalar turbulence' as exhibited by solutions of the linear advection-diffusion equation also provides an avenue for insight into the structure of the Navier-Stokes equations \cite{Shraiman:2000}. Despite the linearity of the governing equation, complete characterization of scalar solutions, especially the asymptotic decay of such solutions for vanishing diffusivity, continues to pose considerable difficulties even when restricted to the case of planar flows. Here we are concerned with the so-called Batchelor regime, where the spatial scale of the velocity ($l_v$) is assumed to be much larger than the diffusive length scale ($l_\kappa$). For the scalar turbulence problem with periodic velocity fluctuations in time and space, rigorous homogenization techniques \cite{majda-kramer:1999,Bonn:2001,pavliotis:2002} can be applied to compute effective, renormalized diffusivities on large time and space ($L \gg l_v$) scales, assuming that the initial distribution of the scalar field satisfies the scale separation $l_s \ll l_v$. On the other hand, the situation for scalar fields with variations commensurate with both the velocity field and the domain size ($l_s \sim l_v \sim L \gg l_\kappa$) requires different techniques. The present paper is motivated in part by the phenomenon of persistent patterns, termed 'strange eigenmodes' \cite{pierrehumbert:1994}, that occur under the action of periodic stirring. Such patterns, characterized by exponential decay of the scalar variance and self-similar evolution of scalar density functions, have been observed both numerically and experimentally \cite{Voth2003, Camassa:2007}. Theoretical predictions of the decay rates of the scalar and the connection between the observed eigenmodes and the phase space of the underlying advection dynamics have been investigated \cite{Antonsen:1996, Sukhatme:2002, Giona:2004, Fereday:2004, Popovych:2007}, most often in the context of non-linear maps. In the strange eigenmode regime, the decay of the scalar contrast can be studied via Floquet theory. While well established for ODEs, the existing theory for parabolic PDEs requires that the PDE satisfy a restrictive spectral gap condition, which the advection-diffusion equation fails to satisfy for vanishing diffusivity (see \cite{Kuchment:1993}). An alternate approach, shown by Chow \textit{et al.} \cite{Chow:1994} for one-dimensional parabolic equations, is to prove the existence of an inertial manifold for the system and then apply ordinary Floquet theory to the inertial form. Our goal here is less ambitious, but a potential first step in this direction. Following Krol \cite{krol:1991}, we propose a formal averaging procedure for the advection-diffusion equation when the velocity field has zero mean and possesses time-periodic stream lines. The approach is perturbative, making explicit use of the disparity of time-scales between the advective and diffusive operators. Advection fields of the form $u=u(\xi,t)=\bar u(\xi)f(t)$ guarantee that, in the case of vanishing diffusion, the time-dependent system can be solved using action-angle variables, and that tracer trajectories will be time-periodic.
The explicit solution in action-angle coordinates allows the original equation to be written in a form suitable for averaging. By applying Lie transform techniques, we derive an approximate averaged equation. The fact that the resulting equation, in contrast to the original problem, has time-independent coefficients facilitates the theoretical and numerical analysis tremendously. The use of the Lie transform also allows relatively straightforward computation of higher order corrections. The paper is organized as follows. In section 2 we consider the transformation which places the advection-diffusion equation in a form suitable for averaging. Lie transform techniques are used to average the equation in section 3. A proof of the convergence of solutions of the averaged equations to those of the original time-dependent problem is given in section 4. An application to a specific flow field, a periodically modulated, regularized vortex, along with numerical comparisons of the solutions, is given in sections 5 and 6. \section{Action-angle variables} We consider the advection-diffusion equation in the following form \begin{equation} \label{advec} c_t +(u\cdot \nabla)\,c - \kappa \,\nabla^2\,c = 0. \end{equation} All functions depend on the spatial variable $\xi=(x,y)^T$ and the time $t$. We look at (\ref{advec}) as an initial value problem, assuming that \begin{equation} \label{init_advec} c(0,\xi)=c^{(s)}(\xi) \end{equation} is a known function. Incompressibility $\nabla\cdot u = 0$ implies that the given velocity field $u$ is derived from a stream function $\Psi$ such that \begin{equation} u(\xi,t) = \nabla^{\perp}\Psi(\xi,t) \end{equation} where $\nabla^{\perp}\equiv (\partial_y,-\partial_x)$. We assume that the stream function $\Psi$ is of the particular form \begin{equation} \Psi(\xi,t) = \bar \Psi(\xi)f(t) \end{equation} where the function $f$ is periodic in time with period $T$. For consistency in the averaging which follows, we also require that $\langle f \rangle = \frac{1}{T} \int_{0}^{T} f(t) dt = 0$. A standard non-dimensionalization of (\ref{advec}) with velocity, length and time scales given respectively by $(U,L,T)$ gives \begin{equation} \label{scaleadvec} c_t + \frac{1}{St} (u\cdot \nabla)\,c - \epsilon \,\nabla^2\,c = 0 \end{equation} where the dimensionless groups are the Strouhal number, $St = U T/ L$, the ratio of the forcing period to the advective time-scale, and $ \epsilon = T \kappa / L^2$, the ratio of the forcing period to the diffusive time-scale. Throughout, we assume $St = {\mathcal{O}}(1)$ and $\epsilon \ll 1$, and, to simplify notation, we absorb the Strouhal number into the velocity, taking $u = u/St$. Our aim is to derive an equation of the form \begin{equation} \bar c_t + {\mathcal{L}}\,\bar c = 0 \end{equation} with a time-independent local linear operator ${\mathcal{L}}$ that can be used to construct approximate solutions to the initial value problem given by (\ref{scaleadvec}) with initial condition (\ref{init_advec}), in the limit of small diffusivity $\kappa \ll 1$. Because the leading-order evolution is given by the periodically varying advection operator, we cannot apply averaging techniques directly. Instead, we seek a transformation to the Lagrangian frame which results in a new equation where the coefficients of both the advective and diffusive operators are zero-mean periodic functions of time. The transformed equation is then in a form suitable for averaging.
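As a quick symbolic check of this setup (a minimal sketch, ours; the particular $\bar\Psi$ anticipates the regularized vortex of section 5), any stream function of the assumed form yields a divergence-free velocity field:
\begin{verbatim}
# Sketch (ours): verify symbolically that u = (dPsi/dy, -dPsi/dx) is
# divergence-free for a stream function Psi = Psi_bar(x, y) * f(t).
import sympy as sp

x, y, t = sp.symbols("x y t")
a = sp.symbols("a", positive=True)
Psi_bar = sp.log(sp.sqrt(a**2 + x**2 + y**2))  # vortex used in section 5
f = sp.sin(t)                                  # any zero-mean periodic f
Psi = Psi_bar * f
u1, u2 = sp.diff(Psi, y), -sp.diff(Psi, x)     # u = grad_perp Psi
assert sp.simplify(sp.diff(u1, x) + sp.diff(u2, y)) == 0  # div u = 0
\end{verbatim}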
For the restricted class of flow fields considered, the proper transformation is simply to action-angle coordinates of the underlying, conservative advection equation. Introduce the function $F$ as \begin{equation} F(t) = \int_0^t f(t')\,dt' \end{equation} and write the tracer coordinate $\xi$ as a function of $F$. We then obtain the autonomous Hamiltonian system \begin{equation} \frac{d\xi}{dF} = \nabla^{\perp}\bar\Psi(\xi)\,. \end{equation} Since this system is integrable, there exists a canonical transformation \cite{arnold:1989} \begin{equation} {\mathcal{C}}:(x,y)\rightarrow(J,\theta) \end{equation} such that the advection-diffusion equation (\ref{advec}) becomes \begin{equation} \label{advec_can} c_t - f(t)\omega(J)c_{\theta} - \epsilon\left(\Gamma:\nabla\nabla + \delta\cdot\nabla\right)c = 0 \end{equation} with a matrix $\Gamma=\Gamma(\theta,J)$ and a vector $\delta = \delta(\theta,J)$, both solely determined by the canonical transformation ${\mathcal{C}}$. The advantage of the representation (\ref{advec_can}) lies in the fact that the evolution of the unperturbed problem is linear and given by \begin{equation} J = J_0, \qquad \theta = \theta_0-\omega(J)F(t)\,. \end{equation} Therefore, we can now use these stream lines as coordinates via the transformation \begin{equation} c(t,J,\theta) = v\left(t,J,\tilde\theta=\theta-\omega(J)F(t)\right) \end{equation} With the transformation rules \begin{eqnarray*} c_J &=& - \omega'Fv_{\tilde\theta}+v_J, \\ c_{JJ} &=& (\omega')^2F^2v_{\tilde\theta\tilde\theta}-2\omega'Fv_{\tilde\theta J}-\omega''Fv_{\tilde\theta}+v_{JJ} \end{eqnarray*} and the rescaling of time $\tau=\epsilon t$, the equation for $v$ takes the form \begin{equation} \label{advec_ready} v_{\tau} = \left(\tilde \Gamma:\nabla\nabla + \tilde \delta\cdot\nabla\right)v\,. \end{equation} Since the only explicit time dependence in the coefficients $\tilde\Gamma$ and $\tilde\delta$ is given in terms of the $T$-periodic function $F$, the equation (\ref{advec_ready}) is now suitable for averaging. \section{Lie transform averaging} In order to average (\ref{advec_ready}), we use a technique based on Lie transforms first developed in the finite-dimensional context \cite{nayfeh:1973} and then applied to cases involving an infinite number of degrees of freedom \cite{kodama:1985,gabitov-schaefer-etal:2000}. The basic idea of a Lie transform is to use a near-identity transform of the type \begin{equation} \label{lie_trafo} v = \exp(\phi\cdot\nabla_L)V \;\;. \end{equation} The linear operator, $\phi\cdot\nabla_L$, is chosen to eliminate the explicit time dependence of the coefficients of an equation \begin{equation} \label{time_lie} v_{\tau} = X(v,\tau) \end{equation} in order to obtain an equation with time-independent coefficients of the form \begin{equation} \label{notime_lie} V_{\tau} = Y(V)\,. \end{equation} Since the functionals $X$ and $Y$ depend on $v$ and all its spatial derivatives, the operator $\phi\cdot\nabla_L$ will be defined in our case as \begin{equation} \phi\cdot\nabla_L = \sum_{n,m}\phi_{nx,my}\frac{\partial^{(n+m)}}{\partial V_{nx,my}} \end{equation} where $\phi_{nx,my}=\partial^{(n+m)}\phi/\partial x^n \partial y^m$ and $V_{nx,my}=\partial^{(n+m)}V/\partial x^n \partial y^m$ respectively. The subscript at $\nabla_L$ distinguishes this operator from the usual $\nabla$. The generating function $\phi$ also depends on $V$ and all its derivatives.
The idea is that the explicit time dependence will be kept in $\phi$ rather than in the equation for $V$; hence $\phi$ will also depend periodically on $\tau$. The general transformation rule under which (\ref{time_lie}) transforms to (\ref{notime_lie}) using (\ref{lie_trafo}) is \cite{hasegawa-kodama:1995} \begin{equation} Y\cdot\nabla_L + \left(\frac{\partial}{\partial \tau}\mathrm{e}^{\phi\cdot\nabla_L}\right) {\mathrm{e}}^{-\phi\cdot\nabla_L} = \mathrm{e}^{\phi\cdot\nabla_L}(X\cdot\nabla_L){\mathrm{e}}^{-\phi\cdot\nabla_L} \end{equation} Both terms can be conveniently expanded using the Campbell-Baker-Hausdorff formulae \begin{equation} \left(\frac{\partial}{\partial \tau}\mathrm{e}^{\phi\cdot\nabla_L}\right) =\left(\phi_t + \frac{1}{2!}[\phi,\phi_t]_L+\frac{1}{3!}[\phi,[\phi,\phi_t]_L]_L+...\right)\cdot\nabla_L \end{equation} \begin{equation} \mathrm{e}^{\phi\cdot\nabla_L}(X\cdot\nabla_L){\mathrm{e}}^{-\phi\cdot\nabla_L} = \left(X+[\phi,X]_L+\frac{1}{2!}[\phi,[\phi,X]_L]_L+...\right)\cdot\nabla_L \end{equation} where the Lie commutator is defined through $[A,B]_L = (A\cdot\nabla_L)B-(B\cdot\nabla_L)A$ and again the subscript distinguishes the Lie commutator from the usual commutator. We now expand both $Y$ and $\phi$ in a series in the small parameter $\epsilon$ as \begin{equation} Y = Y_0+Y_1+...,\qquad \phi = \phi_1+\phi_2+... \end{equation} where ${\mathcal{O}}(Y_n)={\mathcal{O}}(\phi_n)=\epsilon^n$ and differentiation by $\tau$ lowers the order of $\phi_n$ by one \begin{equation} {\mathcal{O}}(\partial \phi_n/\partial\tau) = \epsilon^{n-1}\,. \end{equation} The equation for $Y$ can then be solved order by order. At the leading order, we find \begin{equation} Y_0 + \frac{\partial\phi_1}{\partial \tau} = X \end{equation} and averaging this equation yields directly $Y_0=\langle X \rangle$ due to the periodicity of $\phi_1$. The transformed advection-diffusion equation (\ref{advec_ready}) can be written in the form \begin{equation} \label{advec_ready_L} v_{\tau} = \tilde L v, \qquad \tilde L = \tilde \Gamma :\nabla\nabla + \tilde \delta \cdot\nabla\,. \end{equation} Averaging this equation immediately yields at the leading order \begin{equation}\label{av_eqn} V_{\tau} = \langle \tilde L \rangle V\,. \end{equation} Here, and in what follows, $\langle...\rangle$ denotes averaging over one period. The averaged equation is then \begin{equation} V_{\tau} = \left(\langle \tilde \Gamma\rangle :\nabla\nabla + \langle \tilde \delta\rangle \cdot\nabla\right)V\,. \end{equation} Thus, the time-dependent coefficients are simply replaced by their time averages. At leading order, the generating function $\phi_1$ is found to be \begin{equation} \phi_1 = L_1\,V \equiv \left( \int_0^{\tau}\tilde L-\langle \tilde L \rangle \right)V\,. \end{equation} Introducing $\Gamma_1$ and $\delta_1$ as \begin{equation} \label{int_functions} \frac{d\Gamma_1}{d\tau} = \tilde \Gamma - \langle \tilde \Gamma \rangle, \qquad \frac{d\delta_1}{d\tau} = \tilde \delta - \langle \tilde \delta \rangle \end{equation} we can write $L_1$ explicitly as \begin{equation} L_1 = \Gamma_1 :\nabla\nabla + \delta_1 \cdot\nabla\,. \end{equation} Higher-order corrections can be calculated in an elegant way using the Campbell-Baker-Hausdorff formulae.
For the second-order term in the expansion of $Y$, for example, we find \begin{eqnarray} Y_1 &=& \frac{1}{2}\langle[L_1V,L_{1\tau}V]_L\rangle + [\langle L_1 \rangle V, \langle \tilde L \rangle V]_L \\ \nonumber &=& \left(\langle \tilde L L_1\rangle - \langle L_1 \rangle \langle \tilde L \rangle\right)V \end{eqnarray} where the last equality follows after using the definition of the Lie commutator and integration by parts. Collecting the first- and second-order terms, we obtain the averaged equation for $V$ \begin{equation} \label{average_2nd_order} V_{\tau} = \left(\langle \tilde L \rangle + \epsilon \left(\langle \tilde L L_1\rangle - \langle L_1 \rangle \langle \tilde L \rangle\right)\right)V \end{equation} \section{An averaging theorem for parabolic differential equations} In the previous section, we applied a technique based on Lie transforms to average the equation (\ref{advec_ready}), and we arrived at the equation (\ref{av_eqn}). Here, we state and prove rigorously a theorem on averaging of parabolic partial differential equations which is due to Krol \cite{krol:1991}. We assume that the differential operators in (\ref{advec_ready}) are given by \[ \tilde\Gamma =\epsilon[ a_{ij}(x, y,t)]_{i,j=1}^2 \] and \[ \tilde\delta =\epsilon( b_1(t,x,y), b_2(t,x,y)), \] where $a_{i,j}, b_i\in C^{\infty}(\overline{R^2\times [0, \infty)})$, and $[a_{ij}]$ is symmetric and uniformly positive definite, i.e., there exists $\theta>0$ such that for all $\xi=(\xi_1,\xi_2)\in R^2$ and $(x,y,t)\in R^2\times [0,\infty)$ we have \[ \sum_{i,j=1}^2a_{ij}(x,y,t)\xi_i\xi_j\geq\theta |\xi|^2. \] In the following, let $\tau_0=\mathcal{O}(1/\epsilon)$, and let $\|\cdot\|_\infty$ denote the usual supremum norm on either $R^2$ or $R^2\times[0,\tau_0]$, depending on the context. \begin{theorem} Let $v$ and $V$ be solutions to the Cauchy problems $v(0)=V(0)=v_0\in C^{\infty}(\overline{R^2})$ for the equations (\ref{advec_ready}) and (\ref{av_eqn}), respectively. Then \[ \|v-V\|_\infty=\mathcal{O}(\epsilon). \] \end{theorem} {\bf Proof:} First note that the existence and the uniqueness of bounded solutions $v$, $V$ on $C^2(R^2\times[0,\tau_0])$ are well established (see \cite{friedman:1964}). Also, since (\ref{av_eqn}) is autonomous, the derivatives $V_{\bf \alpha}$ of $V$ also satisfy an autonomous parabolic differential equation with the same second-order differential operator, however with different smooth and bounded first- and zero-order coefficients: \[ V_{{\bf \alpha}\tau} =\left(\langle \tilde \Gamma\rangle :\nabla\nabla + \epsilon (b_1({\bf \alpha})(x,y), b_2({\bf \alpha})(x,y)) \cdot\nabla + \epsilon f({\bf \alpha})(x,y) \right)V_{\bf \alpha}\,. \] The Phragm\`en-Lindel\"of principle for parabolic partial differential equations implies that \[ \|V_\alpha\|_\infty\leq \|(v_0)_\alpha\|_\infty + \tau_0\epsilon \|f(\alpha)\|_\infty=\mathcal{O}(1)\,. \] Let us now define a near-identity transformation \[ \hat V(x,y,\tau)=V(x,y,\tau)+ \left[\int_0^\tau (\tilde L(s)-\langle\tilde L\rangle)\ ds\right] V(x,y,\tau). \] Since the integrand is $T$-periodic with zero average, the equation actually reads \[ \hat V(x,y,\tau)=V(x,y,\tau)+ \left[\int_{[\tau/T]T}^\tau (\tilde L(s)-\langle\tilde L\rangle)\ ds\right] V(x,y,\tau), \] and it is an easy observation that $\|\hat V-V\|_\infty =\mathcal{O}(\epsilon)$.
On the other hand, one easily verifies that $\hat V$ satisfies the equation \[ \hat V_\tau=\tilde L(\tau) \hat V+ \tilde M(\tau)V, \] where $\tilde M(\tau)=\int_0^\tau (\tilde L(s)-\langle\tilde L\rangle)\langle\tilde L\rangle-\tilde L(\tau)(\tilde L(s)-\langle\tilde L\rangle)\ ds$, and the initial-value condition $\hat V(0)=v_0$. Notice that $\tilde M$ is a $T$-periodic fourth-order operator with smooth bounded coefficients of order $\mathcal{O}(\epsilon^2)$. Consequently, the difference $\hat V-v$ satisfies the equation \[ (\hat V-v)_\tau=\tilde L(\tau)(\hat V-v) + \tilde M(\tau) V, \] and the initial condition $(\hat V-v)(0)=0$. By the Phragm\`en-Lindel\"of principle for parabolic partial differential equations, we conclude $\|\hat V-v\|_\infty\leq \|\tilde M(\cdot) V\|_\infty\tau_0=\mathcal{O}(\epsilon)$. This concludes the proof. We remark that similar techniques can be applied to solutions of the second-order Lie-averaged equations, leading to ${\mathcal{O}}(\epsilon^2)$ error estimates. \section{Regularized vortical flow field} To illustrate the theory and to give a comparison to numerical simulations, we consider the particular case of a regularized vortical flow field whose stream function is given by \begin{equation} \Psi(t,x,y) = \ln\left(\sqrt{a^2+x^2+y^2}\right)\,f(t) \;. \end{equation} This flow represents perhaps the simplest example in which to study the interplay between diffusion and nonlinear, time-periodic advection. Since $r^2=x^2+y^2$ is a constant of motion, the unperturbed stream lines are given in Cartesian coordinates as \begin{eqnarray*} x(t) &=& \cos\left(\omega(r)F(t)\right)x_0 + \sin\left(\omega(r)F(t)\right)y_0 \\ y(t) &=& -\sin\left(\omega(r)F(t)\right)x_0 + \cos\left(\omega(r)F(t)\right)y_0 \end{eqnarray*} where \begin{equation} \omega(r) = \frac{1}{a^2+r^2}\,. \end{equation} As the canonical transformation to action-angle variables we can simply take the usual transformation to polar coordinates $(x,y)\rightarrow (r,\theta)$; in these coordinates, the advection-diffusion equation (\ref{advec}) for this particular flow field reads \begin{equation} c_t - \omega(r)f(t)c_{\theta} = \epsilon\left(\frac{1}{r}c_r + c_{rr}+\frac{1}{r^2}c_{\theta\theta}\right)=\epsilon\Delta\,c\,. \end{equation} The transformed equation (\ref{advec_ready}) becomes (using $t$ instead of the rescaled $\tau$) \begin{equation} v_t = \epsilon\left(\Delta v + F\left(\left(\frac{\omega'}{r}+\omega''\right)v_{\tilde \theta}+2\omega'v_{\tilde \theta r}\right)+F^2(\omega')^2v_{\tilde\theta\tilde\theta}\right) \end{equation} Using the previous results, we obtain at leading order \begin{equation} \label{advec_vortex_av} V_{t} = \epsilon\left(\Delta\,V + \langle F \rangle \left(\left(\frac{\omega'}{r}+\omega''\right) V_{\tilde\theta} +2\omega' V_{\tilde\theta r}\right) + \langle F^2\rangle(\omega')^2 V_{\tilde\theta\tilde\theta}\right) \end{equation} As shown in the Appendix, the leading-order contributions in Cartesian coordinates produce a time-independent advection field with spatially dependent rotation and source-like terms. The averaged diffusivity tensor is usually full and symmetry-breaking. The relative importance of the symmetry-breaking terms depends upon the explicit form of the time dependence through the ratio of $\langle F\rangle $ and $\langle F^2\rangle$. For this flow field, it is not difficult to compute corrections at second order as well.
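Before turning to the second-order corrections, we note that the closed-form stream lines above are easily verified numerically (a minimal sketch, ours; the parameter values are illustrative):
\begin{verbatim}
# Sketch (ours): integrate the tracer ODE for the modulated vortex and
# compare with the closed-form rotating solution given above.
import numpy as np
from scipy.integrate import solve_ivp

a, x0, y0, t_end = 1.0, 0.5, 0.2, np.pi
f = np.sin                              # modulation f(t)
F = lambda t: 1.0 - np.cos(t)           # F(t) = int_0^t f(t') dt'
omega = lambda r2: 1.0 / (a**2 + r2)

def rhs(t, xi):                         # u = f(t) * grad_perp(Psi_bar)
    x, y = xi
    w = f(t) * omega(x**2 + y**2)
    return [w * y, -w * x]

sol = solve_ivp(rhs, [0.0, t_end], [x0, y0], rtol=1e-10, atol=1e-12)
th = omega(x0**2 + y0**2) * F(t_end)    # r^2 is conserved along paths
exact = np.array([np.cos(th) * x0 + np.sin(th) * y0,
                  -np.sin(th) * x0 + np.cos(th) * y0])
print(np.abs(sol.y[:, -1] - exact).max())  # agrees to solver tolerance
\end{verbatim}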
In order to make our notation more efficient, we introduce the two operators ${\mathcal{L}}_1$ and ${\mathcal{L}}_2$ \begin{eqnarray} {\mathcal{L}}_1 &\equiv& \left(\frac{\omega'}{r}+\omega''\right)\partial_{\tilde\theta}+ 2\omega'\partial_{\tilde\theta}\partial_{r} \\ {\mathcal{L}}_2 &\equiv& (\omega')^2\partial_{\tilde\theta}\partial_{\tilde\theta} \end{eqnarray} and introduce $F_1\equiv F$ and $F_2\equiv F^2$. The first-order averaged equation (\ref{advec_vortex_av}) then becomes \begin{displaymath} V_{t} =\epsilon\left( \Delta\,V + \langle F_1 \rangle {\mathcal{L}}_1 V + \langle F_2 \rangle {\mathcal{L}}_2 V \right) \end{displaymath} Applying now (\ref{int_functions}), we introduce the functions $G_1$ and $G_2$ that can be found explicitly from $F_1$ and $F_2$ as \begin{equation} G_j(t_0) = \int_0^{t_0}(F_j(\tau)-\langle F_j \rangle)d\tau, \qquad j=1,2 \;\;. \end{equation} At second order in $\epsilon$ the equation is \begin{eqnarray} V_{t} &=& \epsilon\left( \Delta\,V + \langle F_1 \rangle {\mathcal{L}}_1 V + \langle F_2 \rangle {\mathcal{L}}_2 V \right) +\epsilon^2 \left([\Delta,\langle G_1 \rangle {\mathcal{L}}_1]V \right. + [\Delta,\langle G_2 \rangle {\mathcal{L}}_2]V \nonumber \\ && + (\langle F_1G_1 \rangle -\langle F_1 \rangle \langle G_1 \rangle){\mathcal{L}}_1^2V + (\langle F_2G_2 \rangle -\langle F_2 \rangle \langle G_2 \rangle){\mathcal{L}}_2^2V \nonumber \\ && + (\langle F_1G_2 \rangle -\langle F_2 \rangle \langle G_1 \rangle){\mathcal{L}}_1{\mathcal{L}}_2 V + \left. (\langle F_2G_1 \rangle -\langle F_1 \rangle \langle G_2 \rangle){\mathcal{L}}_2{\mathcal{L}}_1 V\right) \label{second_order} \end{eqnarray} where $[A,B]\equiv AB-BA$ denotes the usual commutator. \section{Numerical Simulations} We can now integrate both the original problem (\ref{advec}) and the first-order averaged equation (\ref{advec_vortex_av}) numerically and compare their solutions. In our numerical simulations, we use a standard Adams-Bashforth-Moulton method in Fourier space; the Fourier transformations are done using FFTW. We work back in Cartesian space; for this purpose, we compare solutions at Poincar\'e sections where $F$ is zero and hence $\theta = \tilde \theta$, and (\ref{advec_vortex_av}) is written in Cartesian coordinates. The explicit form of (\ref{advec_vortex_av}) in Cartesian coordinates is given in the appendix. Throughout, we consider a single initial condition of the form \begin{equation} c^{(s)}(x,y)= x\,\exp(-b\,r^2) \end{equation} where the constants $a$ and $b$ were chosen as $a=1.0$ and $b=4.0$. As shown in (\ref{advec_vortex_av}), the form of the averaging depends explicitly on the nature of $f(t)$. For the particular case of $f(t)=\sin(t)$, we find $\langle F \rangle = 1$ and $\langle F^2 \rangle = 3/2$. Fig. \ref{fig:advec} shows both the initial condition and the evolution after ten periods for this choice of $f(t)$. \begin{figure}[htb] \centering \begin{minipage}{1.0\textwidth} \begin{center} \includegraphics[width=0.489\textwidth, angle=0]{psi_start_CLIP.jpg} \hfill \includegraphics[width=0.489\textwidth, angle=0]{psi_end_CLIP.jpg} \caption{{\small Evolution of tracers in the time-periodically modulated vortical flow field.
The figure on the left shows the initial condition; the figure on the right shows the result of the tracer motion after 10 periods.}} \label{fig:advec} \end{center} \end{minipage} \end{figure} In the absence of the time-dependent advection field, the tracer field will obey the purely diffusive equation \begin{equation} \label{diff_eq} c^{(\mathrm{vis})}_t = \epsilon \Delta c^{({\mathrm{vis}})}, \qquad c^{(\mathrm{vis})}(0,\xi)=c^{(s)}(\xi) \end{equation} and the diffusion will simply spread the initial distribution out and preserve its symmetries. In the presence of the time-dependent vortical field, however, the particles will move back and forth within one period, and the interplay of the time-dependent trajectories with the diffusion will give rise to a breaking of the symmetry of the initial distribution, resulting in a ``twist'' that can be clearly seen in the right panel of Fig. \ref{fig:advec}. In order to determine how well the averaged equation (\ref{advec_vortex_av}) captures the differences between the purely diffusive case and the case with both time-dependent advection and diffusion, (a) the difference between the purely viscous solution and the solution of (\ref{advec}), hence $c-c^{(\mathrm{vis})}$, and (b) the difference between the purely viscous solution and the approximation $c^{(\mathrm{av})}$ constructed using (\ref{advec_vortex_av}), hence $c^{(\mathrm{av})}-c^{(\mathrm{vis})}$, are plotted in Fig. \ref{fig:advec_comp}. The first-order approximation accurately captures the overall dynamics of the full, time-dependent problem. \begin{figure}[htb] \centering \begin{minipage}{1.0\textwidth} \begin{center} \includegraphics[width=0.485\textwidth, angle=0]{psi_comp_CLIP.jpg} \hfill \includegraphics[width=0.485\textwidth, angle=0]{psi_av_comp_CLIP.jpg} \caption{{\small Comparison of the prediction of the averaged equation (\ref{advec_vortex_av}) to the evolution of the time-dependent equation (\ref{advec}). We plot the difference between the purely viscous solution and the solution incorporating the effects of the time-dependent advection field. The figure on the left shows $c-c^{(\mathrm{vis})}$ where $c$ is the solution of (\ref{advec}). The figure on the right shows $c^{(\mathrm{av})}-c^{(\mathrm{vis})}$, where $c^{(\mathrm{av})}$ has been found using the averaged equation (\ref{advec_vortex_av}). The averaged equation is obviously able to capture the leading-order impact of the time-dependent velocity field.}} \label{fig:advec_comp} \end{center} \end{minipage} \end{figure} To quantify the accuracy of the approximation, we consider the time-evolution of the canonical $L^2$-norm of the differences by defining \begin{equation} \|c-c^{(av)}\| = \left(\frac{\int_{\mathbb{R}^2}|c-c^{(av)}|^2\;dx\,dy}{\int_{\mathbb{R}^2}|c|^2\;dx\,dy}\right)^{1/2} \end{equation} Figure \ref{fig:comp-L2}a shows that this error is small and approximately constant over the first 10 periods, whereas the corresponding error between $c$ and $c^{(\mathrm{vis})}$ is comparatively large and grows exponentially in time. Figure \ref{fig:comp-L2}b indicates that, at least for the case where $f(t) = \sin(t)$, the solutions of the averaged equation converge to those of the full equation faster than $\epsilon$.
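On the computational grid, this norm reduces to a few lines (a minimal sketch, ours, for fields sampled on a uniform Cartesian grid with spacings $dx$, $dy$):
\begin{verbatim}
# Sketch (ours): the relative L2 difference norm defined above.
import numpy as np

def relative_l2(c, c_av, dx, dy):
    num = np.sum(np.abs(c - c_av) ** 2) * dx * dy
    den = np.sum(np.abs(c) ** 2) * dx * dy
    return np.sqrt(num / den)
\end{verbatim}
The cell area cancels in the ratio; it is kept only to mirror the integrals above.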
\begin{figure}[htb] \centering \begin{minipage}{1.0\textwidth} \begin{center} \includegraphics[width=0.5\textwidth, angle=90]{Sine_Error_Time_CLIP.pdf} \includegraphics[width=0.5\textwidth, angle=90]{Sine_Error_Diff_CLIP.pdf} \caption{{\small (a) Difference norms for the case $f(t) = \sin(t)$ and $\epsilon = 0.005$. (b) Average error between the first-order averaged equation and the full solution versus $\epsilon$ for $f(t) = \sin(t)$. The error scales like $\epsilon^{1.8}$. }} \label{fig:comp-L2} \end{center} \end{minipage} \end{figure} We examine the role of $f(t)$ in the averaged dynamics by setting $f(t) = \cos(t)$. This choice implies that $\langle F\rangle = 0$, leading to near degeneracy of the first-order corrections. In this case, the time-independent advection terms produced by the averaging procedure do not contribute to 'twisting' the scalar evolution. The symmetry-breaking terms in the averaged diffusivity tensor are identically zero at first order. The effect of this near-degeneracy for $\langle F \rangle = 0$ is clearly seen in the comparison of panels (a) and (b) in Fig. \ref{fig:cosine_comp}. A comparison of the difference norms, shown in Fig. \ref{fig:comp-L2-cos}(a), indicates that the first-order averaged equation is only a marginal improvement on the purely viscous solution at short times. However, as shown in Fig. \ref{fig:comp-L2-cos}(b), solutions to the first-order averaged equation continue to converge to the true solution with decreasing $\epsilon$, although the convergence rate, $\sim \epsilon^{1.2}$, is considerably slower than that observed when $f(t)=\sin(t)$. \begin{figure}[htb] \centering \begin{minipage}{1.0\textwidth} \begin{center} \includegraphics[width=0.85\textwidth, angle=0]{comp_2nd_order_CLIP.jpg} \caption{{\small Solutions for the case $f(t) = \cos(t)$ and $\epsilon = 0.010$ after 10 periods. (a) Full solution, (b) first-order averaged solution, (c) second-order averaged solution, (d) difference between the first- and second-order averaged solutions. }} \label{fig:cosine_comp} \end{center} \end{minipage} \end{figure} For $\langle F \rangle =0$, second-order contributions are clearly important. Referring back to (\ref{second_order}), this situation also leads to a relatively simple form for the second-order expression. The coefficient in front of ${\mathcal{L}}_1$ vanishes, and in this particular case for $f(t)=\cos(t)$, (\ref{second_order}) simplifies to \begin{equation} V_{t} = \epsilon \left(\Delta\,V+ \frac{1}{2} {\mathcal{L}}_2\,V\right) + \epsilon^2\left([\Delta,{\mathcal{L}}_1]V - \frac{1}{2}[{\mathcal{L}}_1,{\mathcal{L}}_2]V\right) \end{equation} Evaluating this equation explicitly we find \begin{eqnarray} V_{t} &=& \epsilon \left(\Delta\,V + \frac{1}{2}(\omega')^2V_{\tilde\theta\tilde\theta}\right)+\epsilon^2\left(\left(\frac{\omega'}{r^3}-\frac{\omega''}{r^2}+2\frac{\omega'''}{r}+\omega^{(iv)}\right)V_{\tilde\theta}\right.\nonumber \\ && +4\left(\frac{\omega''}{r}+\omega'''\right)V_{{\tilde\theta r}} +\left.\left(4\frac{\omega'}{r^3}-2(\omega')^2\omega''\right)V_{{\tilde\theta}{\tilde\theta}{\tilde\theta}}+4\omega''V_{\tilde\theta r r}\right) \label{eq:av_cos_polar} \end{eqnarray} As shown in the Appendix, this leads to ${\mathcal{O}}(\epsilon)$ symmetry-breaking contributions to both the averaged advection terms and the diffusivity tensor, as well as contributions in the form of higher, third-order, spatial derivatives. Numerically, such terms are easily computed spectrally.
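For instance, on a grid that is uniform in the periodic angle $\tilde\theta$, an $n$-th $\tilde\theta$-derivative is a multiplication by $(ik)^n$ in Fourier space (a minimal sketch, ours; our solver uses FFTW, but \texttt{numpy.fft} illustrates the idea):
\begin{verbatim}
# Sketch (ours): spectral evaluation of theta-derivatives for fields
# v[theta, r] sampled uniformly on the periodic interval [0, 2*pi).
import numpy as np

def theta_derivative(v, n=1):
    """n-th derivative along the (periodic) first axis of v."""
    N = v.shape[0]
    k = np.fft.fftfreq(N, d=1.0 / N)     # integer angular wavenumbers
    return np.real(np.fft.ifft((1j * k) ** n * np.fft.fft(v, axis=0),
                               axis=0))
\end{verbatim}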
An example of a second-order solution is shown in panel (c) of Fig. \ref{fig:cosine_comp}. The second-order solution is a clear improvement on the first-order approximation, and the difference between the two, shown in panel (d), demonstrates the restoration of the advective twist at higher order. Figure \ref{fig:comp-L2-cos} indicates both the increase in accuracy and the expected increase in convergence rate for the second-order approximation. \begin{figure}[htb] \centering \begin{minipage}{1.0\textwidth} \begin{center} \includegraphics[width=0.5\textwidth, angle=90]{Cosine_Error_Time_CLIP.pdf} \includegraphics[width=0.5\textwidth, angle=90]{Cosine_Error_Diff_CLIP.pdf} \caption{{\small (a) Difference norms for the case $f(t) = \cos(t)$ and $\epsilon = 0.005$ for both first- and second-order averaging. (b) Average error between the averaged equation and the full solution versus $\epsilon$ for $f(t) = \cos(t)$. The first-order solution is shown by the solid curve, the second-order by the dashed curve. For comparison, the first-order results for $f(t) = \sin(t)$ are shown in the light dot-dashed line. }} \label{fig:comp-L2-cos} \end{center} \end{minipage} \end{figure} \section{Discussion} We have proposed a scheme for formally transforming the advection-diffusion equation, in the limit of small diffusivity, into a form suitable for averaging. We have given an explicit means of averaging the transformed equation and proven the convergence of solutions of the averaged, time-independent approximation to the full dynamics. Throughout, however, we deal only with a restricted class of advecting fields, namely those which are zero-mean and possess time-periodic stream lines. While such fields are inherently integrable, and hence explicitly non-chaotic, the results presented are of interest in the study of 'strange eigenmodes' of the advection-diffusion equation in the Batchelor regime. First, the emergence of strange eigenmodes is independent of the integrability or non-integrability of the underlying flow \cite{Camassa:2007}, and indeed, the simple example considered here produces non-trivial periodic patterns. Secondly, the results shown for even these extremely simple cases point to the delicate relationship between the non-linearity of the conservative advection operator and small diffusivity. The analysis points to fundamental differences between the dynamics of continuous time flows and discrete time maps. Indeed, for the periodically modulated vortex considered, the averaged dynamics depends strongly on the phase of the single-frequency periodic modulation, a fact completely lost when considering the Poincar{\'e} map of the flow, which is simply the identity. In the case of a mean-free advection field with periodic stream lines, the transform to action-angle variables of the Lagrangian flow was found to be the appropriate transformation for deriving accurate time-averaged dynamics in the limit of small diffusivity. Finding transformations with similar properties for other classes of advection fields will likely provide a means for understanding the interplay between advection and small diffusion. \vspace{1.in} \noindent \textbf{Acknowledgment} ACP, TS and JV were supported, in part, by a grant from the City University of New York PSC-CUNY Research Award Program. The authors gratefully acknowledge the support of the CUNY High Performance Computing Facility and the Center for Interdisciplinary Applied Mathematics and Computational Sciences.
Also, JV was supported in part by the NSF grant DMS-0733126. \vspace{0.25in}
\section{Introduction}\label{s:intro} Supernovae (SNe) are the result of several different kinds of explosions from different stellar progenitor systems. The practice of separating SNe into different classes (or ``types'') goes back over 70 years \citep{Minkowski41}. Despite the recent discovery of several new varieties of SNe \citep[e.g.,][]{Foley13:iax}, most SNe discovered can be placed into the two broad categories of ``core-collapse'' SNe (those with a massive star progenitor, corresponding to the Ib, Ic, II, IIb, and IIn classes) and Type Ia SNe (SNe~Ia). \citet{Filippenko97} reviews the observational characteristics of these more common classes. There remain a few additional SNe that do not fall into these two categories, but these are a small fraction of the SNe discovered \citep{Li11:rate2}. Current transient surveys discover many more SNe than can be spectroscopically classified. Future surveys will have even lower rates of spectroscopic classification. Because of this limitation, there has been a significant amount of effort to photometrically classify SNe \citep[see][for a review]{Kessler10}. Providing large and pure samples of individual classes of SNe is important for many studies. In particular, large samples of SNe~Ia are required to make progress in determining the nature of dark energy \citep[e.g.,][]{Campbell13}, which drives cosmic acceleration and was originally discovered through measurements of (mostly) spectroscopically confirmed SNe~Ia \citep{Riess98:Lambda, Perlmutter99}. Additionally, having some preliminary classification to aid in spectroscopic follow-up can be useful for studies of all classes of SNe. Until now, all effort has focused on classification using only the light curves of the SNe. Specifically, different SN classes tend to have different rise times, decline rates, and colors. Additionally, all efforts have focused on separating SNe~Ia from all other types of SNe. This problem becomes more difficult with low signal-to-noise ratio data, sparse sampling, a sample that extends over a large redshift range, and limited filters. Nonetheless, the best photometric classification method applied to a simulated SN sample produces a sample that is 96\% pure while recovering 79\% of all SNe~Ia \citep{Kessler10, Sako11, Campbell13}. Our approach here is to classify SNe without using any SN photometry. This method uses the known correlations between host-galaxy properties and SN classes. Since core-collapse SNe (SNe~Ibc and II) have massive star progenitors and SNe~Ia have white dwarf (WD) progenitors, several host-galaxy properties are correlated with SN type. For decades, we have known that core-collapse SNe explode almost exclusively in late-type galaxies and are associated with spiral arms and \ion{H}{2} regions. On the other hand, SNe~Ia explode in all types of galaxies and have no preference for exploding near spiral arms. These basic facts drive the majority of our exploration. Most of the host-galaxy data we use should be available for all transient surveys. We do not attempt to combine this classification technique with photometric classifications (or attempt hybrid approaches), and we leave such implementations to future studies. The manuscript is structured in the following way. Section~\ref{s:fom} describes a figure of merit for determining the quality of classification. We introduce our SN samples in Section~\ref{s:data} and discuss host-galaxy properties in Section~\ref{s:galaxy}. Our method is described in Section~\ref{s:galsnid}, and we test the method in Section~\ref{s:tests}.
We discuss our results, additional applications, and future prospects in Section~\ref{s:disc} and conclude in Section~\ref{s:conc}. \section{Figure of Merit}\label{s:fom} When evaluating non-spectroscopic identification techniques, one needs a metric for comparison. \citet{Kessler10} presents a figure of merit (FoM) for such a comparison, with the focus on producing a large sample of SNe~Ia with low contamination. The FoM, $\mathcal{C}_{\rm Ia}$, is the product of the efficiency and the pseudopurity of a sample of SNe classified as SNe~Ia. The efficiency is defined as \begin{equation} \epsilon_{\rm Ia} = N^{\rm Sub}_{\rm Ia} / N^{\rm Tot}_{\rm Ia}, \end{equation} where $N^{\rm Tot}_{\rm Ia}$ is the total true number of SNe~Ia in the full sample and $N^{\rm Sub}_{\rm Ia}$ is the true number of SNe~Ia in a subsample classified as SNe~Ia under some criterion. The pseudopurity is defined as \begin{equation} PP_{\rm Ia} = \frac{N^{\rm Sub}_{\rm Ia}}{N^{\rm Sub}_{\rm Ia} + W^{\rm False}_{\rm Ia} N^{\rm Sub}_{\rm Non-Ia}}, \end{equation} where $N^{\rm Sub}_{\rm Non-Ia}$ is the number of objects misclassified as SNe~Ia in the selected subsample which are not truly SNe~Ia, and $W^{\rm False}_{\rm Ia}$ is the weight given to adjust the importance of purity on the FoM. If $W^{\rm False}_{\rm Ia} = 1$, the pseudopurity is simply the true purity, which is equivalent to the probability of a SN in that subsample being a SN~Ia. \citet{Campbell13} suggests that $W^{\rm False}_{\rm Ia} = 5$ is the preferred value for creating a sample of SNe~Ia for the purpose of measuring cosmological parameters; we will use that value throughout this paper. If one can perfectly reject non-transient sources from the subsample, then $N^{\rm Sub}_{\rm Non-Ia} \approx N^{\rm Sub}_{\rm CC}$, where $N^{\rm Sub}_{\rm CC}$ is the number of core-collapse SNe in the subsample (there will perhaps be a few peculiar thermonuclear SNe in the subsample, but their numbers will likely be orders of magnitude smaller than the core-collapse population). Of course this FoM is not the only way to compare classification methods, and it is particularly focused on selecting a relatively pure sample of SNe~Ia. But since we merely hope to provide practical and useful methods of classification for various surveys, there is no urgent need to define a different FoM. Regardless of whether one is trying to select SNe~Ia or core-collapse SNe or wants to somehow weight efficiency and/or purity differently, the \citet{Kessler10} FoM will likely still be informative. \section{Data}\label{s:data} \subsection{Supernova Samples} When testing various classification schemes, one would ideally have a large, unbiased, spectroscopically complete sample. Unfortunately, such a sample does not exist. Instead, previous studies have typically simulated large samples of SNe with the simulated sample properties matching those believed to be representative of a particular survey \citep[e.g.,][]{Kessler10}. This approach is reasonable when generating samples of SN light curves since there is a significant amount of light-curve data available and sufficient understanding of the relative rates of various SN (sub)types and their luminosity functions \citep{Li11:rate2}. However, simulations are not necessarily appropriate when looking at host-galaxy properties of SN samples. Some observables have not been examined in detail, and the correlations between properties are not well understood.
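Returning briefly to Section~\ref{s:fom}, the FoM reduces to a few lines of arithmetic (a minimal sketch, ours):
\begin{verbatim}
# Sketch (ours) of the Kessler et al. (2010) figure of merit defined
# above: efficiency times pseudopurity of the Ia-classified subsample.
def fom_ia(n_sub_ia, n_tot_ia, n_sub_non_ia, w_false=5.0):
    efficiency = n_sub_ia / n_tot_ia
    pseudopurity = n_sub_ia / (n_sub_ia + w_false * n_sub_non_ia)
    return efficiency * pseudopurity
\end{verbatim}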
For the purposes of this examination, there is a large, almost spectroscopically complete sample of SNe that is relatively free of bias: the Lick Observatory Supernova Search (LOSS) sample \citep{Leaman11}. LOSS is a SN search that has run for over a decade monitoring nearby galaxies with a cadence of a few nights to a couple of weeks. The ``full'' LOSS sample contains 929 SNe, while the ``optimal'' LOSS sample contains 726 SNe, of which 98.3\% have a spectroscopic classification \citep{Leaman11}. The LOSS detection efficiency is very high ($\sim\!\!$~ 90\%) with the vast majority of missed objects being in the nuclear regions of bright, compact galaxies \citep{Leaman11}. The three major biases for the sample are (1) the missed objects in nuclear regions, (2) that luminous galaxies are over-represented in the sample, an effect that increases with distance, and (3) that the Hubble-type distribution changes towards earlier galaxy types with distance. However, those biases can be somewhat mitigated and do not affect certain measurements. After constructing SN luminosity functions for each subtype, \citet{Li11:rate2} determined that the LOSS sample is relatively complete to distances of 80 and 60~Mpc for SNe~Ia and core-collapse SNe, respectively. Within 60~Mpc, the $K$-band luminosity function of the LOSS galaxy sample matches that of a complete sample for galaxies with $M_{K} < -23$~mag \citep{Leaman11}. At fainter magnitudes, the LOSS sample is incomplete. Similarly, the average $K$-band luminosity for E and Scd galaxies in the LOSS sample increases by factors of 4 and 20, respectively, from 15 to 175~Mpc (while the average galaxy increases by a factor of $\sim\!\!$~ 2 between 15 and 60~Mpc regardless of Hubble type; \citealt{Leaman11}). We will examine two subsamples of the LOSS sample. The ``Full'' sample is nearly equivalent to the ``full'' LOSS sample as defined by \citet{Leaman11}. We add classifications for 3 SNe in this sample. SN~2000cc was observed by \citet{Aldering00:00ca}, who noted that it had a featureless spectrum consistent with a blackbody. Although that is not a definitive classification, it is consistent with a core-collapse SN and inconsistent with a SN~Ia. We also classify SN~2000ft as a SN~II. This SN has no optical spectrum, but its radio light curve is consistent with a SN~II \citep{Alberdi06}. Finally, \citet{Blondin12} classified SN~2004cu as a SN~Ia. To generate the ``Full'' sample, we remove 24 SNe from the ``full'' LOSS sample. Of these 24, 6 are similar to SN~2005E \citep{Perets10:05e} and 7 are SNe~Iax \citep{Foley13:iax}. Although there is evidence that these SNe are peculiar thermonuclear SNe \citep[e.g.,][]{Li03:02cx, Foley09:08ha, Foley10:08ha, Foley10:08ge, Foley13:iax, Perets10:05e}, there is still some controversy \citep[e.g.,][]{Valenti09}; as a result, we remove these SNe from this analysis. We also remove SN~2008J, which appears to be a SN~Ia interacting with circumstellar hydrogen \citep{Taddia12}. After all alterations, the ``Full'' sample has 905 SNe, of which 368 are SNe~Ia and 537 are core-collapse SNe (137 SNe~Ibc and 400 SNe~II). In Section~\ref{s:tests}, we will examine SN samples from the Sloan Digital Sky Survey (SDSS) SN survey \citep{Frieman08} and Palomar Transient Factory \citep[PTF;][]{Law09}. Both surveys are large-area untargeted surveys; the SN samples are not biased toward those in luminous galaxies. The SDSS SN survey was performed over three seasons with the SDSS telescope.
Spectroscopic follow-up was performed with a variety of telescopes \citep{Zheng08, Konishi11:subaru, Ostman11, Foley12:sdss}. The first cosmological results based on a spectroscopic sample were presented by \citet{Kessler09:cosmo}. A photometric sample was presented by \citet{Sako11}, and a cosmological analysis of a photometric SN~Ia sample was performed by \citet{Campbell13}. PTF has been running a transient survey since 2009 using the 48-inch telescope at Palomar Observatory. Although there has not been an official spectroscopic data release yet, PTF has publicly announced several hundred spectroscopically classified SNe. Despite being untargeted surveys, neither SDSS nor PTF are close to being spectroscopically complete. Spectroscopically complete subsamples are much smaller than LOSS. We therefore choose to focus on the LOSS sample initially and test our method with the SDSS and PTF samples. \subsection{Host-galaxy Observables} The simplest, although also the least quantitative, metric for determining bulk host-galaxy properties is the Hubble type. The Hubble types for the LOSS galaxies have been determined in a consistent way and presented by \citet{Leaman11}. Similarly, the Galaxy Zoo project has determined visual morphological classifications for a large number of SDSS galaxies \citep{Lintott11}. They use the individual classifications of many volunteers to determine a probability that a galaxy has an elliptical or spiral morphology. The probabilities take into account biases associated with redshift. In addition to Hubble type, one can easily measure the color of the host galaxy. Particular colors correlate well with star-formation rate, and should therefore correlate with the types of SNe produced. \citet{Leaman11} present $B$, $B_{0}$, and $K$ band measurements for the LOSS galaxy sample, where $B_{0}$ is the $B$ magnitude corrected for Galactic extinction, internal extinction, and $K$-corrections. Since the $B$ band straddles the 4000~\AA\ break, the $B_{0} - K$ color is a reasonable proxy for the star formation rate. Galaxy morphology is correlated with both color and luminosity. More luminous galaxies tend to be ellipticals, gas poor, and lack recent star formation. One can generally cleanly separate star-forming and passive galaxies using a color-magnitude diagram, with both dimensions providing information. We will also examine SN populations as a function of host-galaxy luminosity. Specifically, we examine $M_{K}$, which is highly correlated with stellar mass. Since core-collapse SNe are associated with star-forming regions within a galaxy, while SNe~Ia are not, we will examine the proximity of SN locations to bright regions of the host galaxy. We use the ``pixel-based'' method of \citet{Fruchter06}, which compares SN locations to the intensity map of a galaxy, with the brightest pixels corresponding to a value of 1 and the faintest pixels corresponding to a value of 0. We use the values provided by \citet{Kelly08}, which cover a subsample of the LOSS sample. We refer to this derived quantity as the ``pixel rank.'' We also examine the offset of the SN relative to the nucleus. Both the underlying stellar population and the progenitor metallicity should correlate with the offset. For this measurement, we use the effective offset, $R$, of \citet{Sullivan06}, which is a dimensionless parameter describing the separation of the SN from its host galaxy. A value of $R = 3$ corresponds roughly to the isophotal limit of the galaxy. 
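To make the pixel-based statistic concrete, one common implementation consistent with the description above assigns each SN the fraction of the total host-galaxy light contained in pixels fainter than (or as faint as) its own pixel, so that the faintest regions map to 0 and the brightest to 1 (a minimal sketch, ours; the function and argument names are illustrative):
\begin{verbatim}
# Sketch (ours) of the pixel-based statistic described above: the
# fraction of total host-galaxy light in pixels fainter than (or as
# faint as) the pixel containing the SN.
import numpy as np

def pixel_rank(image, sn_index):
    values = image.ravel()
    sn_value = image[sn_index]       # sn_index = (row, col) of the SN
    return values[values <= sn_value].sum() / values.sum()
\end{verbatim}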
\section{Galaxy Properties of the LOSS SN Sample}\label{s:galaxy} We now examine how host-galaxy properties can predict SN types in the LOSS sample. Figure~\ref{f:frac} displays the fraction of SNe~Ia in the LOSS sample, and in the subset of SNe~I, as a function of host-galaxy properties (morphology, color, luminosity, effective offset, and pixel rank, respectively). Figure~\ref{f:frac} also shows the cumulative distribution functions (CDFs) for SNe~Ia, SNe~II, and SNe~Ibc for each host-galaxy property. \begin{figure*} \begin{center} \epsscale{0.55} \rotatebox{90}{ \plotone{frac.ps}} \caption{Top panels display the fraction of SNe~Ia in the LOSS sample (black) and the subset of SNe~I (red) as a function of host-galaxy parameters. The bottom panels display the CDFs for each host-galaxy observable for SNe~Ia (black), SNe~II (blue), and SNe~Ibc (red). The first and third (second and last) rows display the results for the Full ($D < 60$~Mpc) LOSS sample. The host-galaxy parameters are, from left to right, morphology, color, absolute magnitude, effective offset, and pixel rank.}\label{f:frac} \end{center} \end{figure*} Consistently, we see that SNe~Ia are more frequently found in galaxies with properties consistent with older populations than those of the core-collapse comparison sample. Specifically, the fraction of SNe that are of Type~Ia with early-type, red, or luminous host galaxies is larger than the fraction with late-type, blue, or faint host galaxies. Therefore, the probability that a particular SN is a SN~Ia is higher if its host galaxy is a luminous, red, early-type galaxy. For instance, 98\% of all SNe with elliptical host galaxies are SNe~Ia, while only 10\% of all SNe with irregular host galaxies are SNe~Ia. Clearly, host-galaxy information alone can be useful for classifying SNe~Ia. This trend continues even within galaxies, where SNe~Ia tend to be found at larger offsets and in fainter regions of the galaxy than core-collapse SNe. Because of the small number of LOSS SNe with pixel-ranking data, the uncertainties are especially large for that property. This metric should be re-examined when more data become available. Figure~\ref{f:frac} displays results for both the ``Full'' LOSS sample and the volume-limited LOSS sample. There are no significant differences between the samples, and most importantly, the fractions are consistent for the same bins. This indicates that whatever biases the larger LOSS sample has, they have little effect on the fraction of SNe~Ia from host galaxies that are very similar in one of these properties. This result is especially important for transferring results to other surveys, where the galaxy population will not be the same as that of the LOSS survey. \section{Galsnid}\label{s:galsnid} Using the data above, we can create a metric for determining the probability that a given SN is of Type~Ia. Specifically, this probability can be expressed using Bayes' Theorem. Here, we consider the case where we only wish to distinguish between two choices, `Ia' and `Core-collapse' (`CC'). That is, from a classification point of view, we consider all SNe to have a type, $T \in \{{\rm Ia, CC}\}$. For a given observable, $D_{i}$, we estimate the probability that a SN is of Type Ia, $P({\rm Ia} | D_{i})$. We seek to compute the probability that a SN is of a given type given multiple observables, \begin{equation} P({\rm Ia} | \boldsymbol{D}), \end{equation} where $\boldsymbol{D}$ is the vector of its host-galaxy data.
Since we are only considering two classes, we have \begin{equation} 1 - P({\rm Ia} | \boldsymbol{D}) = P({\rm CC} | \boldsymbol{D}) \label{e:eq1} \end{equation} and \begin{equation} 1 - P({\rm Ia}) = P({\rm CC}), \label{e:eq2} \end{equation} where $P({\rm T})$ is the overall probability that a given SN in the sample is of Type $T$ (the prior). Bayes' Theorem is \begin{equation} P({\rm Ia} | \boldsymbol{D}) = k^{-1} P(\boldsymbol{D} | {\rm Ia}) P({\rm Ia}), \end{equation} where $k^{-1}$ is a normalization factor depending on $\boldsymbol{D}$, set by requiring the class probabilities to add to unity, and $P(\boldsymbol{D} | {\rm Ia})$ is the probability density of a set of observables given that the object is a SN Ia. The likelihood $P(\boldsymbol{D} | {\rm Ia})$ is difficult to model directly since $\boldsymbol{D}$ is multi-dimensional. It is convenient to neglect the correlations among the galaxy observables and make the approximation that their joint probability factors as the product of the individual one-dimensional likelihoods of each galaxy property. We can do this by invoking the Naive Bayes assumption that the data are conditionally independent given the class\footnote{This assumption is typically not true; see discussion of this limitation in Section~\ref{ss:improve}.}, which gives us \begin{equation} P({\rm Ia} | \boldsymbol{D}) = k^{-1} P({\rm Ia}) \prod_{i = 1}^{n} P(D_{i} | {\rm Ia}), \label{e:galsnid} \end{equation} where $D_{i}$ are the individual $n$ observables. From Equations~\ref{e:eq1}, \ref{e:eq2}, and \ref{e:galsnid}, \begin{equation} k = P({\rm Ia}) \prod_{i = 1}^{n} P(D_{i} | {\rm Ia}) + (1 - P({\rm Ia}))\prod_{i = 1}^{n} P(D_{i} | {\rm CC}). \end{equation} The underlying populations of the SNe and host galaxies are somewhat important in the determination of the probability. As an example, we consider a single observable. In that case, we have \begin{equation} P({\rm Ia} | \boldsymbol{D}) = k^{-1} P({\rm Ia}) P(D_{1} | {\rm Ia}). \end{equation} With some algebraic manipulation, we find that \begin{equation} P({\rm Ia} | \boldsymbol{D}) = \frac{f_{{\rm Ia}/{\rm CC}, x}}{1 + f_{{\rm Ia}/{\rm CC}, x}}, \end{equation} where $f_{{\rm Ia}/{\rm CC}, x}$ is the odds ratio of SN~Ia to core-collapse SN for a particular value of $D_{1}$ (the relative fraction of SNe~Ia to core-collapse SNe with $D_{1} = x$). This implies that biased samples can still be useful for determining probabilities for {\it all} other samples, both biased and unbiased, as long as the samples retain the same relative fraction of SNe~Ia and core-collapse SNe for a particular value of each observable. Specifically, the known biases of the LOSS sample should not affect our ability to apply its results to other low-redshift samples. However, using the LOSS sample for high-redshift SNe, where, for example, we know that the relative fraction of SNe~Ia to core-collapse SNe in spirals is different, will bias the results somewhat. From the LOSS data, we have determined $P(D_{i} | {\rm Ia})$ and $P(D_{i} | {\rm CC})$ for all relevant values of each observable; this is simply the fraction of SN Ia (or core-collapse SN) host galaxies that have a particular value of a galaxy observable, $D_{i}$. We present these data in Table~\ref{t:prop}. Since some bins do not contain many SNe, the uncertainties can be large for some values of particular parameters.
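As a minimal illustration of Equation~\ref{e:galsnid}, the following Python sketch evaluates the posterior for a single SN from pre-tabulated likelihoods; the function name and dictionary layout are our own, and only the two example likelihood values (taken from Table~\ref{t:prop}) and the prior $P({\rm Ia}) = 0.41$ come from the data.
\begin{verbatim}
# Per-observable likelihoods P(D_i | T) -> (P_Ia, P_CC); the two
# entries shown are taken from the likelihood table in the text,
# the layout is illustrative.
likelihoods = {
    "morphology": {"E": (0.141, 0.002)},
    "B0_minus_K": {"3.75-4.0": (0.168, 0.048)},
}

def galsnid_probability(observables, prior_ia=0.41):
    """Naive Bayes posterior P(Ia | D), with the normalization k
    enforcing P(Ia | D) + P(CC | D) = 1."""
    p_ia, p_cc = prior_ia, 1.0 - prior_ia
    for obs, bin_label in observables.items():
        l_ia, l_cc = likelihoods[obs][bin_label]
        p_ia *= l_ia
        p_cc *= l_cc
    return p_ia / (p_ia + p_cc)

# An SN in a red elliptical host is almost certainly a SN Ia:
print(galsnid_probability({"morphology": "E",
                           "B0_minus_K": "3.75-4.0"}))  # ~0.994
\end{verbatim}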
To avoid potential biases associated with large statistical uncertainties, we perform a Monte Carlo simulation for each SN where we determine $\tilde{P}(D_{i} | T)$, a single realization of the probability for variable $i$ using $P(D_{i} | T)$ and its uncertainty. This Monte Carlo is performed for all observables simultaneously, resulting in several realizations of the overall probability that a given SN is of Type~Ia, $\tilde{P} ({\rm Ia} | \boldsymbol{D})$. From the Monte Carlo simulation, we have a distribution of posterior probabilities that each SN is of Type Ia. We then assign the final probability, $P ({\rm Ia} | \boldsymbol{D})$ to be the median value of the distribution of $\tilde{P} ({\rm Ia} | \boldsymbol{D})$. \begin{deluxetable}{lll} \tabletypesize{\scriptsize} \tablewidth{0pt} \tablecaption{Probability of Host Properties Given Type\label{t:prop}} \tablehead{ \colhead{Bin} & \colhead{$P(D_{i} | {\rm Ia})$} & \colhead{$P(D_{i} | {\rm CC})$}} \startdata \multicolumn{3}{c}{Morphology} \\ \tableline E & 0.141 \err{0.021}{0.018} & 0.002 \err{0.003}{0.001} \\ S0 & 0.217 \err{0.026}{0.023} & 0.017 \err{0.007}{0.005} \\ Sa & 0.149 \err{0.022}{0.019} & 0.142 \err{0.017}{0.015} \\ Sb & 0.177 \err{0.023}{0.021} & 0.188 \err{0.020}{0.018} \\ Sbc & 0.117 \err{0.019}{0.017} & 0.231 \err{0.022}{0.020} \\ Sc & 0.120 \err{0.019}{0.017} & 0.218 \err{0.021}{0.019} \\ Scd & 0.076 \err{0.016}{0.013} & 0.188 \err{0.020}{0.018} \\ Irr & 0.003 \err{0.004}{0.002} & 0.015 \err{0.006}{0.004} \\ \tableline \multicolumn{3}{c}{$B_{0}-K$} \\ \tableline 0.0 -- 1.75 & 0.026 \err{0.010}{0.007} & 0.044 \err{0.010}{0.008} \\ 1.75 -- 2.25 & 0.023 \err{0.010}{0.007} & 0.075 \err{0.013}{0.011} \\ 2.25 -- 2.5 & 0.037 \err{0.012}{0.009} & 0.069 \err{0.013}{0.011} \\ 2.5 -- 2.75 & 0.043 \err{0.013}{0.010} & 0.115 \err{0.016}{0.014} \\ 2.75 -- 3.0 & 0.111 \err{0.019}{0.016} & 0.181 \err{0.020}{0.018} \\ 3.0 -- 3.25 & 0.131 \err{0.021}{0.018} & 0.185 \err{0.020}{0.018} \\ 3.25 -- 3.5 & 0.154 \err{0.022}{0.020} & 0.137 \err{0.018}{0.016} \\ 3.5 -- 3.75 & 0.125 \err{0.020}{0.018} & 0.115 \err{0.016}{0.014} \\ 3.75 -- 4.0 & 0.168 \err{0.023}{0.021} & 0.048 \err{0.011}{0.009} \\ 4.0 -- 4.25 & 0.103 \err{0.019}{0.016} & 0.022 \err{0.008}{0.006} \\ 4.25 -- 6.25 & 0.080 \err{0.017}{0.014} & 0.010 \err{0.006}{0.004} \\ \tableline \multicolumn{3}{c}{$M_{K}$} \\ \tableline $-26.5$ -- $-25.5$ & 0.079 \err{0.016}{0.014} & 0.018 \err{0.007}{0.005} \\ $-25.5$ -- $-25.1$ & 0.108 \err{0.019}{0.016} & 0.029 \err{0.009}{0.007} \\ $-25.1$ -- $-24.7$ & 0.164 \err{0.023}{0.020} & 0.125 \err{0.017}{0.015} \\ $-24.7$ -- $-24.3$ & 0.190 \err{0.025}{0.022} & 0.146 \err{0.018}{0.016} \\ $-24.3$ -- $-23.9$ & 0.153 \err{0.022}{0.019} & 0.125 \err{0.017}{0.015} \\ $-23.9$ -- $-23.5$ & 0.130 \err{0.021}{0.018} & 0.136 \err{0.017}{0.015} \\ $-23.5$ -- $-23.1$ & 0.040 \err{0.012}{0.009} & 0.133 \err{0.017}{0.015} \\ $-23.1$ -- $-22.7$ & 0.048 \err{0.013}{0.010} & 0.115 \err{0.016}{0.014} \\ $-22.7$ -- $-22.3$ & 0.031 \err{0.011}{0.008} & 0.058 \err{0.012}{0.010} \\ $-22.3$ -- $-21.5$ & 0.037 \err{0.012}{0.009} & 0.060 \err{0.012}{0.010} \\ $-21.5$ -- $-17.1$ & 0.020 \err{0.009}{0.006} & 0.055 \err{0.011}{0.009} \\ \tableline \multicolumn{3}{c}{Effective Offset} \\ \tableline 0.0 -- 0.05 & 0.043 \err{0.012}{0.010} & 0.032 \err{0.009}{0.007} \\ 0.05 -- 0.1 & 0.098 \err{0.018}{0.015} & 0.086 \err{0.014}{0.012} \\ 0.1 -- 0.15 & 0.130 \err{0.020}{0.018} & 0.091 \err{0.014}{0.012} \\ 0.15 -- 0.2 & 0.090 \err{0.017}{0.014} & 0.091 \err{0.014}{0.012} \\ 0.2 
-- 0.25 & 0.071 \err{0.015}{0.013} & 0.114 \err{0.016}{0.014} \\ 0.25 -- 0.3 & 0.057 \err{0.014}{0.011} & 0.084 \err{0.013}{0.012} \\ 0.3 -- 0.35 & 0.068 \err{0.015}{0.012} & 0.058 \err{0.011}{0.009} \\ 0.35 -- 0.4 & 0.060 \err{0.014}{0.011} & 0.054 \err{0.011}{0.009} \\ 0.4 -- 0.45 & 0.038 \err{0.012}{0.009} & 0.067 \err{0.012}{0.010} \\ 0.45 -- 0.5 & 0.052 \err{0.013}{0.011} & 0.063 \err{0.012}{0.010} \\ 0.5 -- 0.6 & 0.060 \err{0.014}{0.011} & 0.089 \err{0.014}{0.012} \\ 0.6 -- 0.75 & 0.062 \err{0.014}{0.012} & 0.048 \err{0.010}{0.009} \\ 0.75 -- 1.0 & 0.073 \err{0.016}{0.013} & 0.045 \err{0.010}{0.008} \\ 1.0 -- 1.4 & 0.052 \err{0.013}{0.011} & 0.041 \err{0.010}{0.008} \\ 1.4 -- 5.25 & 0.046 \err{0.013}{0.010} & 0.037 \err{0.009}{0.007} \\ \tableline \multicolumn{3}{c}{Pixel Rank} \\ \tableline 0.0 -- 0.2 & 0.206 \err{0.094}{0.064} & 0.113 \err{0.047}{0.033} \\ 0.2 -- 0.4 & 0.294 \err{0.109}{0.079} & 0.282 \err{0.070}{0.056} \\ 0.4 -- 0.6 & 0.088 \err{0.068}{0.038} & 0.211 \err{0.062}{0.048} \\ 0.6 -- 0.8 & 0.324 \err{0.113}{0.084} & 0.296 \err{0.072}{0.058} \\ 0.8 -- 1.0 & 0.088 \err{0.068}{0.038} & 0.099 \err{0.045}{0.031} \enddata \end{deluxetable} Following the convention of {\it psnid} \citep[photometric supernova identification;][]{Sako11}, we call this procedure {\it galsnid} (galaxy-property supernova identification). For the rest of the manuscript, we define the posterior probability from {\it galsnid} as $p \equiv P({\rm Ia} | \boldsymbol{D})$. Having performed this procedure for the Full LOSS sample, we arrive at a distribution of probabilities from 0 to 1. We display the results in Figure~\ref{f:hist}. In this figure, we present histograms for the probability that a SN is of Type~Ia for the spectroscopically confirmed subsets of SNe~Ia, II, and Ibc. For the LOSS sample, 30\% have $p > 0.5$; of those 71\% are SNe~Ia. This compares favorably to the prior of $P({\rm Ia}) = 0.41$. For the same sample, 21\% have $p < 0.1$, of which 84\% are core-collapse SNe. \begin{figure} \begin{center} \epsscale{1.0} \rotatebox{90}{ \plotone{prob_hist_cc.ps}} \caption{Histogram of {\it galsnid} probability for different spectroscopic SN classes in the LOSS sample. The filled black, red, and blue histograms represent SNe~Ia, Ibc, and II, respectively.}\label{f:hist} \end{center} \end{figure} Again, this information can be used both by itself and in combination with SN photometry for classification. We test the utility of using only the {\it galsnid} method with the FoM defined in Section~\ref{s:fom}. Figure~\ref{f:fom} presents the efficiency, the purity ($W^{\rm False}_{\rm Ia} = 1$), and the FoM (assuming $W^{\rm False}_{\rm Ia} = 5$) for subsamples including only objects with {\it galsnid} probability $p$ greater than a threshold value. The FoM peaks at $p = 0.97$ at a value of 0.269. The full sample has a FoM of 0.121, so implementing {\it galsnid} improves the FoM by a factor of 2.23. As a comparison, \citet{Campbell13} performed {\it psnid} on a simulated sample of SNe~Ia and obtained an improvement of 2.60. \begin{figure} \begin{center} \epsscale{1.0} \rotatebox{90}{ \plotone{prob_cc.ps}} \caption{Efficiency, purity, and FoM (blue, red, and black curves, respectively) for subsamples of the LOSS sample defined by a particular {\it galsnid} probability or larger.
The FoM peaks at $p = 0.97$ at a value that is 2.23 times larger than the FoM for the entire sample.}\label{f:fom} \end{center} \end{figure} \section{Tests}\label{s:tests} In this section, we perform a variety of tests on the {\it galsnid} method for classifying SNe. Specifically, we examine the reliability of the method, test the importance of each galaxy observable for classification, and apply the method to additional SN samples. \subsection{Cross-validation} Having shown that host-galaxy information is useful for SN classification, we now test the robustness of the above results. Specifically, we cross-validate the method again using the LOSS sample. We split the sample in half, assigning every other SN (to mitigate possible biases in the SN search or classification with time) to the training and comparison samples. Using the ``evenly-indexed'' sample as the training set, we find the {\it galsnid} probability that results in the highest FoM for the training set; we consider this the threshold value above which a SN will enter our final sample. We then apply the probabilities and this threshold {\it galsnid} probability found from the training set to the ``oddly-indexed'' sample and determine the efficiency and pseudo-purity of the sample. Doing this, we find that the FoM improves by a factor of 1.4 compared to not using the {\it galsnid} procedure. Performing the same procedure but switching the training and testing samples, we find that the FoM improves by a factor of 2.4. Therefore, the method appears to be robust within a given sample, although clearly the amount of improvement depends on the training sample. \subsection{Importance of Each Observable} To assess the importance of each host-galaxy observable for classification, we first re-analyze the data and compute {\it galsnid} probabilities using a single observable at a time. We then compute the {\it galsnid} probabilities using all observables, but excluding a single observable at a time. A summary of the results is presented in Table~\ref{t:imp}, where we list the peak FoM, the improvement factor over the baseline FoM, and the difference in the median {\it galsnid} probability for the spectroscopically confirmed SNe~Ia and core-collapse SN classes. The latter is a measure of the difference between the distributions of {\it galsnid} probabilities for different spectroscopic classes, and thus an additional (and different) indication of the importance of the observable beyond the improvement in the FoM.
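For reference, a minimal sketch of how such threshold curves can be computed is given below; it assumes the FoM of Section~\ref{s:fom} factors as efficiency times pseudo-purity with a false-positive weight $W^{\rm False}_{\rm Ia}$, and the function name and array layout are ours.
\begin{verbatim}
import numpy as np

def fom_curve(p, is_ia, w_false=5.0):
    """Efficiency, pseudo-purity, and FoM for cuts p >= threshold.
    p: array of galsnid probabilities; is_ia: boolean array of
    true (spectroscopic) classifications."""
    n_ia_total = np.sum(is_ia)
    curve = []
    for t in np.sort(np.unique(p)):
        sel = p >= t
        n_ia = np.sum(sel & is_ia)    # true SNe Ia passing the cut
        n_cc = np.sum(sel & ~is_ia)   # contaminating core-collapse SNe
        eff = n_ia / n_ia_total
        purity = n_ia / (n_ia + w_false * n_cc) if (n_ia + n_cc) else 0.0
        curve.append((t, eff, purity, eff * purity))
    return curve
\end{verbatim}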
\begin{deluxetable*}{lcccccc} \tabletypesize{\scriptsize} \tablewidth{0pt} \tablecaption{Importance of Each Observable\label{t:imp}} \tablehead{ & \multicolumn{3}{c}{Exclusively Using Observable} & \multicolumn{3}{c}{Excluding Observable} \\ & \multicolumn{3}{c}{---------------------------------------------} & \multicolumn{3}{c}{---------------------------------------------} \\ \colhead{} & \colhead{Peak} & \colhead{Improvement} & \colhead{Difference} & \colhead{Peak} & \colhead{Improvement} & \colhead{Difference} \\ \colhead{Observable} & \colhead{FoM} & \colhead{Factor} & \colhead{in Medians} & \colhead{FoM} & \colhead{Factor} & \colhead{in Medians}} \startdata Baseline\tablenotemark{a} & 0.121 & N/A & N/A & 0.121 & N/A & N/A \\ Using All Galaxy Data\tablenotemark{b} & 0.269 & 2.23 & 0.34 & 0.269 & 2.23 & 0.34 \\ Morphology & 0.262 & 2.18 & 0.15 & 0.157 & 1.30 & 0.22 \\ Color & 0.128 & 1.06 & 0.10 & 0.273 & 2.26 & 0.26 \\ Luminosity & 0.135 & 1.12 & 0.07 & 0.273 & 2.27 & 0.24 \\ Effective Offset & 0.122 & 1.02 & 0.03 & 0.261 & 2.16 & 0.30 \\ Pixel Rank & 0.123 & 1.02 & 0.00 & 0.269 & 2.23 & 0.33 \enddata \tablenotetext{a}{The ``Baseline'' category classifies the entire SN sample as SN~Ia without using host-galaxy information.} \tablenotetext{b}{This category is for the nominal {\it galsnid} procedure, as defined in Section~\ref{s:galsnid}, using all host-galaxy data.} \end{deluxetable*} Unsurprisingly, the pixel ranking data was not particularly useful, and excluding it made no significant difference in the results. The vast majority of SNe in the sample do not have pixel ranking data, and thus it can only affect a small number of objects. Additionally, pixel ranking does not appear to be as discriminating as other observables. The color and luminosity are both somewhat important. The median {\it galsnid} probability when just using color (luminosity) was 0.43 (0.47) and 0.33 (0.40) for SNe~Ia and core-collapse SNe, respectively. However, just using color or luminosity results in only a modest improvement in the peak FoM with ratios of 1.06 and 1.12, respectively. Using both quantities together (but excluding all other observables) results in a maximum FoM improvement ratio of 1.17. Removing either color or luminosity results in only modest changes in the maximum FoM from 0.269 to 0.273 (a net increase). We do not consider this change in the FoM significant. However, removing these data results in more smearing of the populations, with the difference in the median {\it galsnid} probabilities for SNe~Ia and core-collapse SNe decreasing from 0.34 to 0.26 and 0.24, respectively. Removing both color and luminosity continues this trend with a difference in medians for the two populations of only 0.16. Therefore, although color and luminosity do not significantly affect the peak FoM presented here, they could be particularly important for other applications or different FoMs. Using only offset information results in no significant improvement in maximum FoM, although it is slightly helpful with classification; the median {\it galsnid} values for the SN Ia and core-collapse populations are 0.44 and 0.41, respectively. However, removing the relative offset decreases the maximum FoM to 0.261. This is a somewhat surprising result and may not be significant. By far, the most important parameter is morphology. Morphology alone results in a maximum FoM of 0.262, a factor of 2.18 improvement over not using any host-galaxy information.
Removing morphology information decreases the maximum FoM to 0.157. Without these data, the maximum improvement is only a factor of 1.30 over not using any host-galaxy data. Nonetheless, {\it galsnid} is still effective without morphology. \subsection{SDSS}\label{ss:sdss} We also examined the largest photometry-selected SN~Ia sample: the SDSS-II SN survey compilation \citep{Sako11, Campbell13}. This sample, which we call the ``SDSS sample,'' was taken from the SDSS-II SN survey, and various cuts were made based on the photometric properties of the SNe to determine a relatively pure subsample of SNe~Ia (see \citealt{Campbell13} for details). This sample is a subset of the full photometric-only sample of SNe from SDSS-II \citep{Sako11}. All SNe in the SDSS sample have host-galaxy redshifts. Using simulations, \citet{Campbell13} showed that the SDSS sample should have an efficiency of 71\% and a contamination of 4\%. This sample only includes SNe photometrically classified as Type Ia. Using the SDSS imaging data of the host galaxies, we apply the {\it galsnid} procedure to the SDSS sample. This sample is already supposedly quite pure, and an ideal method would provide a criterion to sift out the 4\% contamination without a large loss of true SNe~Ia. Of course, this is not a perfect test of a given method since increased efficiency with minimal decrease in purity is also a net gain. Nonetheless, a qualitative assessment can be made. For this test, we focused on the SNe with $z < 0.2$. This subsample is likely to have some contamination from core-collapse SNe; Malmquist bias will remove many low-luminosity core-collapse SNe from the higher-redshift sample. The efficiency for this subsample is also expected to be higher, providing a more representative SN~Ia sample. For these redshifts, one may also expect that the fractions of different SN types for a given galaxy property have not evolved much. For the SDSS sample, the SN parameters (light-curve shape and color) do not evolve much for this redshift range. Additionally, a significant number of SNe in this sample are spectroscopically confirmed as SNe~Ia. Finally, \citet{Campbell13} shows that simulations predict several large Hubble-diagram outliers from core-collapse SNe at $z < 0.2$ that remain in the sample. Although there is no direct evidence of that contamination, there are also several Hubble-diagram outliers at $z < 0.2$ in the data. Understanding this potential contamination and identifying a solution would be useful. This subsample contains 143 SNe, but only 131 have matches for the listed galaxy ID in DR8/9 \citep{Ahn12, Aihara12}. We require host-galaxy photometry and a host-galaxy redshift for this analysis. The overall fractions of SNe~Ia in the Full and volume-limited LOSS samples are 40\% and 27\%, respectively. \citet{Bazin09} found that only 18\% of SNe at $z < 0.4$ in the Supernova Legacy Survey were SNe~Ia. However, using the volumetric rates as a function of redshift from \citet{Dilday08} and \citet{Bazin09} (for SNe~Ia and core-collapse SNe, respectively), we find that the SN~Ia fraction should be 22\% at $z = 0.15$. Of course, this is the {\it volumetric} fraction. SDSS is relatively complete to $z = 0.2$, but still suffers from some Malmquist bias. The magnitude-limited SN~Ia fraction in the LOSS sample was 79\% \citep{Li11:rate2}. For the SDSS sample, we take an intermediate value of 50\%. This number essentially provides a normalization for the probability and does not affect relative results.
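Since the prior enters Equation~\ref{e:galsnid} only through the class odds, a posterior computed under one prior can be rescaled to another without re-running the classifier; a minimal sketch is given below (the function name is ours), which also reproduces the worked prior-change example discussed in Section~\ref{s:disc}.
\begin{verbatim}
def rescale_posterior(p, prior_old, prior_new):
    """Map a two-class posterior computed under prior_old to the
    posterior under prior_new by rescaling the odds; exact when the
    per-observable likelihoods are unchanged."""
    odds = (p / (1.0 - p)) * ((prior_new / (1.0 - prior_new)) /
                              (prior_old / (1.0 - prior_old)))
    return odds / (1.0 + odds)

# Shifting the prior from P(Ia) = 0.5 to 0.4:
print([round(rescale_posterior(p, 0.5, 0.4), 3)
       for p in (0.99, 0.95, 0.9, 0.8, 0.6)])
# [0.985, 0.927, 0.857, 0.727, 0.5]
\end{verbatim}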
Taking the DR8 imaging data, we determined the \protect\hbox{$ugriz$}\ magnitudes for each host galaxy. Cross-checking with earlier data releases, we verified that no measurements were significantly affected by SN light. Using the {\it kcorrect} routine \citep[version 4\_1\_4;][]{Blanton03, Blanton07}, we calculated rest-frame $B$ and $K$ magnitudes. This method extrapolates galaxy templates into the NIR to estimate the $K$ magnitudes, but since these galaxies have 5-band photometry, including $z$ band, this extrapolation should be relatively robust. We tested this by comparing the observer-frame $z$ and extrapolated observer-frame $K$ photometry; the two values were highly correlated. From the derived photometry, we were able to measure $B-K$ and $M_{K}$ for each SDSS host galaxy. We also used morphological classifications from the Galaxy Zoo survey \citep{Lintott11}. Only 33 of the 131 SN host galaxies have morphological classifications from Galaxy Zoo. Using the LOSS data, we redetermined the probabilities of a SN being of Type Ia when only using the coarse bins of ``elliptical'' and ``spiral.'' These probabilities are used for the SDSS galaxies with morphology information. Since the effective offset and pixel ranking are not particularly effective at classifying SNe, we do not use those measurements. For comparison, we also chose 10,000 random galaxies in the SDSS-II SN survey footprint with $z < 0.2$. We expect that these galaxies will have average properties that are different from both the average SN~Ia host galaxy and the average SN host galaxy. As a result, performing the {\it galsnid} procedure on these galaxies should provide a baseline for any potential improvement. The {\it galsnid} probabilities are shown for these two samples in Figure~\ref{f:sdss_hist}. We find that the median {\it galsnid} probabilities for the SDSS sample and the random comparison sample are 0.82 and 0.66, respectively. Galaxies in the SDSS sample are more likely to host SNe~Ia (relative to core-collapse SNe) than those in the random sample. This is further validation of the {\it galsnid} procedure. However, the probabilities are more evenly distributed than for the LOSS sample. We attribute this mainly to the lack of morphology information for the majority of the sample. \begin{figure} \begin{center} \epsscale{1.0} \rotatebox{90}{ \plotone{sdss_prob_hist_cc.ps}} \caption{Histogram of {\it galsnid} probability for the SDSS (black and blue; 131 galaxies) and control (red; 10,000 galaxies) samples. The black and blue histograms show the distributions of probabilities when using morphology classifications from Galaxy Zoo and one of the authors (RJF), respectively. The control sample histogram has been scaled by a factor of 0.093 to roughly match the SDSS histograms.}\label{f:sdss_hist} \end{center} \end{figure} We examined SDSS images for each host galaxy in the SDSS sample, and one of us (RJF) visually classified each galaxy's morphology as elliptical or spiral. We were able to assign a morphology to 87 of the 131 galaxies, of which 36 and 51 were ellipticals and spirals, respectively. These classifications are not as robust as the Galaxy Zoo measurements, but are helpful for assessing how morphology data can improve our classifications. After including these new morphology measurements (and ignoring all Galaxy Zoo measurements), we find that the median {\it galsnid} probability of the SDSS sample is 0.85, slightly above the median of the previous analysis.
Moreover, the number of SNe with $p > 0.97$, the peak of the FoM from the prior analysis, more than tripled from 11 to 36 SNe. Using the best-fit cosmology of \citet{Campbell13}, we are also able to measure the Hubble residual for each SN in the sample. Notably, there are 6 (9) SNe with a Hubble residual $>$0.5 (0.4)~mag from zero. Figure~\ref{f:galsnid_resid} displays the absolute value of the Hubble residual as a function of {\it galsnid} probability. Interestingly, 3 of these SNe, including the most discrepant outlier, have $p < 0.2$. There are only 18 SDSS SNe with $p < 0.2$, and thus the outliers make up 17\% of the low-probability subset. Splitting the SDSS sample by $p = 0.85$ (the median value), we can examine the characteristics of SNe with low/high {\it galsnid} probability. The average and median redshifts for the two samples are nearly identical, with the high-probability subsample being at slightly higher redshift (by 0.01). \begin{figure} \begin{center} \epsscale{0.75} \rotatebox{90}{ \plotone{sdss_resid.ps}} \caption{Absolute value of Hubble residuals (assuming the preferred \citealt{Campbell13} cosmology) for the SDSS sample as a function of {\it galsnid} probability. The black diamonds (with hats) and blue dashes (without hats) represent probabilities using morphology classifications from Galaxy Zoo and one of the authors (RJF), respectively.}\label{f:galsnid_resid} \end{center} \end{figure} Looking at the subsamples in detail, we see a correlation between Hubble residual (not the absolute value) and {\it galsnid} probability. SNe with large {\it galsnid} probability tend to have negative Hubble residuals, while those with small {\it galsnid} probability tend to have positive Hubble residuals. We display these residuals in Figure~\ref{f:sdss_resid}. The medians for these subsamples are $-0.138$ and 0.041~mag, respectively. The weighted means are $-0.085 \pm 0.011$ and $0.034 \pm 0.012$~mag, which differ by 7.3$\sigma$. Performing a Kolmogorov-Smirnov test on the two samples results in a $p$-value of $4.4 \times 10^{-6}$, indicating that the Hubble residuals are drawn from different populations. \begin{figure} \begin{center} \epsscale{0.75} \rotatebox{90}{ \plotone{sdss_resid2.ps}} \caption{Histograms of Hubble residuals (assuming the preferred \citealt{Campbell13} cosmology) for the SDSS sample. The solid black and hashed red histograms represent the samples with $p > 0.85$ and $p < 0.85$, respectively.}\label{f:sdss_resid} \end{center} \end{figure} Considering that most of the SNe in the $z < 0.2$ subsample are spectroscopically confirmed as Type~Ia, the difference in Hubble residuals is probably not the result of contamination. Rather, the difference is likely related to the known correlation between Hubble residuals and host-galaxy properties \citep{Kelly10, Lampeitl10:host, Sullivan10}. \citet{Campbell13} chose not to include this correction. Since {\it galsnid} probability correlates strongly with these host-galaxy properties, this is likely the cause of the difference in Hubble residuals. However, not accounting for this effect prevents some analysis of the correlations between Hubble residuals and {\it galsnid} probability. Excluding the outlier SNe (Hubble residuals $>$0.5~mag), we fit Gaussians to the residuals.
Splitting the sample by $p = 0.85$, we find that the standard deviations of the residuals are 0.128 and 0.114~mag for the subsamples with $p \le 0.85$ and $p > 0.85$, respectively; the subsample with higher {\it galsnid} probability has smaller scatter. It is unclear if the difference is the result of different amounts of contamination in the subsamples or because SNe~Ia in redder, more luminous, earlier-type galaxies tend to produce a more standard sample. The host galaxies of the six outlier SNe (Hubble residual $>$0.5~mag) are somewhat varied: two large spiral galaxies, an incredibly small and low-luminosity galaxy, a modest disk galaxy, a small red galaxy with no signs of star formation, and a small starburst galaxy with a potential tidal tail. There is no obvious trend with the host-galaxy properties investigated here. Nonetheless, {\it galsnid} provides some handle on these outliers. \subsection{PTF}\label{ss:ptf} To further test the {\it galsnid} method on another independent sample, we investigate the relatively large sample of publicly classified SNe from PTF. PTF is a low-redshift (typically $z < 0.15$) SN survey that has spectroscopically classified almost 2000 SNe (as of 1 June 2013). Many of these SNe are publicly announced with coordinates, redshifts, and classifications. PTF provides a relatively large sample of SNe for which we can attempt classification through host-galaxy properties. Using the WISeREP database \citep{Yaron12}, we obtained a list of 555 PTF SNe with classifications. We visually cross-referenced this list with SDSS images to determine the host galaxy for each SN. Many SNe were not in the SDSS footprint, had no obvious host galaxy, or had ambiguous host identifications. After removing these objects, 384 SNe remained. We further restricted the sample to SNe with host galaxies that have SDSS spectroscopy, leaving a total of 151 SNe. This sample contains 118 SNe~Ia and 33 core-collapse SNe. We again match the Galaxy Zoo morphology classifications to this sample. For the PTF sample, 131 host galaxies had classifications. Using the same method as described in Section~\ref{ss:sdss}, except using the LOSS prior on the SN~Ia fraction, we applied the {\it galsnid} technique to the PTF sample. Histograms of the resulting probabilities are presented in Figure~\ref{f:ptf_hist}. Again, the SNe~Ia typically have a much higher {\it galsnid} probability than the core-collapse SNe. The median probabilities for the SNe~Ia and core-collapse SNe are 0.74 and 0.27, respectively. This is empirical evidence that simply using the LOSS parameters is useful for classifying SNe in an untargeted survey. \begin{figure} \begin{center} \epsscale{1.0} \rotatebox{90}{ \plotone{prob_ptf_hist_cc.ps}} \caption{Histogram of {\it galsnid} probability for different spectroscopic SN classes in the PTF sample. The filled black and red histograms represent SNe~Ia and core-collapse SNe, respectively.}\label{f:ptf_hist} \end{center} \end{figure} \section{Discussion}\label{s:disc} \subsection{Additional Applications} Although we have focused on separating SNe~Ia from core-collapse SNe, the {\it galsnid} algorithm can be used for a variety of purposes. As an example, we have used the {\it galsnid} algorithm to separate SNe~Ibc from SNe~II in the LOSS sample (Figure~\ref{f:cc_hist}). Although the two samples do not separate as cleanly as SNe~Ia from core-collapse SNe, one can use {\it galsnid} to prioritize SNe for follow-up.
The method can be particularly useful when a SN is young, before any light-curve fitting can be performed. \begin{figure} \begin{center} \epsscale{1.0} \rotatebox{90}{ \plotone{prob_hist_ii.ps}} \caption{Histogram of {\it galsnid} probability for SNe~Ibc (filled black histogram) and SNe~II (dashed blue histogram) in the LOSS sample. The {\it galsnid} probability lists the probability that a given SN is of Type Ibc. The median {\it galsnid} probability for all SNe is relatively low, reflecting the relatively low prior probability (0.26) of a SN being a SN~Ibc.}\label{f:cc_hist} \end{center} \end{figure} Similarly, we can even separate different subclasses of SNe. As an example, {\it galsnid} was used to separate ``peculiar'' SNe~II from ``normal'' SNe~II (Figure~\ref{f:ii_hist}). Specifically, we were able to separate SNe classified as SNe~IIb or SNe~IIn from those classified as SNe~IIP or simply SNe~II. Again, {\it galsnid} is useful for selecting SNe for follow-up. In this case, it could be particularly useful since early spectra of all types of SNe~II can be relatively featureless, and thus, there could be epochs where the host-galaxy information is more discriminating than a spectrum. \begin{figure} \begin{center} \epsscale{1.0} \rotatebox{90}{ \plotone{prob_hist_iio.ps}} \caption{Histogram of {\it galsnid} probability for SNe~IIb/IIn (filled black histogram) and SNe~II/IIP (dashed blue histogram) in the LOSS sample. The {\it galsnid} probability lists the probability that a given SN is of Type IIb or IIn. The median {\it galsnid} probability for all SNe is relatively low, reflecting the relatively low prior probability (0.19) of a SN being a SN~IIb/IIn.}\label{f:ii_hist} \end{center} \end{figure} Another potential application is classifying the small number of unclassified SNe in the LOSS sample. For these SNe, we applied the {\it galsnid} procedure using the LOSS priors and probabilities. Since these SNe were part of the LOSS sample, it is reasonable to assume that the absolute probabilities are correct. That is, a SN with $p > 0.5$ is likely a SN~Ia. We list the results in Table~\ref{t:unk}. \begin{deluxetable}{lcc} \tabletypesize{\scriptsize} \tablewidth{0pt} \tablecaption{Spectroscopically Unclassified LOSS SNe\label{t:unk}} \tablehead{ \colhead{SN} & \colhead{{\it galsnid} $p$} & \colhead{Classification}} \startdata 1999gs & 0.639 & Ia \\ 2000fu & 0.044 & CC \\ 2000fv & 0.154 & CC \\ 2003bq & 0.066 & CC \\ 2003cm & 0.067 & CC \\ 2004bt & 0.102 & CC \\ 2005lv & 0.162 & CC \\ 2006A & 0.497 & CC \\ 2006dz & 0.611 & Ia \\ 2008hl & 0.213 & CC \enddata \end{deluxetable} Of the 10 unclassified SNe, 8 are classified as core-collapse (although SN~2006A, with $p = 0.497$, is effectively undetermined). Interestingly, SN~2006dz, which {\it galsnid} classifies as a SN~Ia, was originally identified in template-subtracted images of another SN, SN~2006br \citep{Contreras06}. The SN was identified after maximum brightness and appears to be heavily reddened by dust. Nonetheless, the light curves are consistent with being a SN~Ia. Summing the {\it galsnid} $p$ values for the SNe classified as core-collapse and $1-p$ for those classified as SNe~Ia, we can estimate the number of incorrectly classified SNe in this sample. For the spectroscopically unclassified LOSS sample, we expect incorrect classifications for 2.055 of the 10 SNe. \subsection{Further Improvements to Galsnid}\label{ss:improve} The current {\it galsnid} method is presented mainly as a proof of concept.
There are several improvements one should make before using {\it galsnid} for robust scientific results. The current methodology of {\it galsnid} presumes that all host-galaxy properties are uncorrelated. This is clearly incorrect. As a result, a single measurement carries some information about the other parameters. Specifically, color, luminosity, and morphology are correlated. Taking these correlations into account should improve our inference. There are a number of additional parameters that one could measure for a host galaxy. For the LOSS sample, we simply use morphology, a single color, a single luminosity, an effective offset, and a pixel ranking. Adding additional photometry in several bands should improve classification. Deriving physical quantities such as star-formation rates and masses from such data may also provide more robust classifications. Adding data from spectroscopy should also improve classification. Specifically, emission line luminosity \citep[to measure star-formation rates; e.g.,][]{Meyers12}, line diagnostics (to determine possible AGN contribution to photometry), velocity dispersion (to measure a mass), and metallicity could all be important discriminants. Additional data such as \ion{H}{1} measurements could be useful, but perhaps difficult to obtain for large samples. One could also possibly include other environmental information in a classifier. The density of the galactic environment may affect the relative rates of SNe. Perhaps close companions are a good indication of a recent interaction that triggered a burst of star formation. Future investigations should attempt to provide a broader set of observations from which classifications can be made. \subsection{Combining Classifications} Using host-galaxy properties to classify SNe has several distinct advantages over light-curve analyses. It does not use any light-curve information, so samples will not be biased based on expectations of light-curve behavior. Almost all host-galaxy data can be obtained after the SN has faded. In particular, high-resolution imaging, spectroscopy, or additional photometry can be obtained post facto. Since {\it galsnid} is independent of any light-curve classifier, one can use SN~Ia samples defined by both techniques to examine systematics introduced by either method. Host-galaxy data can also be combined with photometry-only classification, and one can implement hybrid approaches. Since {\it galsnid} produces a probability density function for each SN, it would be trivial to naively combine the output of {\it galsnid} with any other similar output. However, some SN properties are correlated with host-galaxy properties. For instance, SNe~Ibc tend to come from brighter galactic positions than SNe~II, and SNe~Ia hosted in ellipticals tend to be less luminous than those hosted in spirals. Therefore, a more careful approach to combining different methods should be used. In addition to classifications made purely on host-galaxy or light-curve properties, one could use hybrid measurements. For instance, the peak luminosity of a SN compared to the luminosity of its host galaxy or the relative colors of a SN and its host galaxy could be useful indicators. \subsection{Redshift Evolution} The relative fraction of SN classes changes with redshift, with the SN~Ia fraction decreasing with redshift to at least $z \approx 1$. Similarly, galaxy properties change, on average, over redshift ranges of interest. It is not known if the fractions within a small parameter range change.
For instance, it is reasonable to assume that the SN~Ia fraction in ellipticals has relatively little evolution. The fractions in other small bins might also stay the same while the underlying galaxy population is changing with redshift. The assumption that there is little evolution in the relative fractions for small parameter ranges should be tested with data. However, even with this assumption, one needs to account for the overall evolution of the galaxy population for high-redshift samples. A simple approach is to have a prior for the overall (observed) fraction as a function of redshift. Such a prior can be determined both observationally and through simulations. However, if one uses a threshold of a particular {\it galsnid} probability to separate classes, the effect of the prior is minimized. That is, the classification of a particular SN is only affected if the different prior would cause the {\it galsnid} probability of that object to cross the threshold. For example, if there are objects with $p = 0.99$, 0.95, 0.9, 0.8, and 0.6 with $P({\rm Ia}) = 0.5$, changing the prior to $P({\rm Ia}) = 0.4$ will result in probabilities of $p = 0.985$, 0.93, 0.86, 0.73, and 0.5, respectively. If the threshold were $p = 0.94$, only one of the example SNe would have had its classification changed by the different prior. This example also demonstrates that objects with $p \approx 0.5$ are more affected by the prior than those close to zero or one. \subsection{When Not to Use Galsnid} The {\it galsnid} method can produce relatively clean samples of particular SN classes. However, these samples can be highly biased subsamples of the underlying SN class. For instance, when choosing a SN~Ia sample, SNe with elliptical hosts will be much more likely to be included than those in spirals. But since SN~Ia properties such as luminosity are correlated with host-galaxy properties \citep[e.g.,][]{Hicken09:lc}, a {\it galsnid}-defined sample will likely be biased toward lower-luminosity SNe. Similar biases are also introduced by light-curve classifiers (SNe~Ia with light curves more like the templates and less like core-collapse SNe are more likely to be included); however, the {\it galsnid} biases may be harder to properly model. Similarly, as seen in Section~\ref{ss:sdss}, cosmological analyses {\it could} be biased if correlations between host-galaxy properties and Hubble residuals are not removed. Again, similar biases apply to light-curve classifiers, which are more likely to select SNe with particular light-curve properties as SNe~Ia; this matters if those properties (e.g., color) correlate with Hubble residuals \citep{Scolnic13}. As a result, one must be careful in choosing appropriate applications for {\it galsnid} samples. Clearly, investigations of host properties of a given class should not be performed on a {\it galsnid} sample. If one were to use a {\it galsnid} sample to determine SN rates as a function of redshift, careful attention to the prior is required. Additionally, SNe with particularly large offsets could have misidentified host galaxies. Although this should not affect many SNe, {\it galsnid} may not provide representative samples for studies specifically designed to identify such objects. \section{Conclusions}\label{s:conc} We have introduced a method for classifying SNe using only galaxy data. This method relies on the fact that different SN classes come from different stellar populations.
Using the LOSS sample, we estimate the probabilities that particular SN classes have specific host-galaxy properties and the probabilities that galaxies with particular properties have particular SN classes. We define an algorithm, {\it galsnid}, that combines the host-galaxy data to determine the Bayesian posterior probability that a given SN is of a particular class. We have tested {\it galsnid} in a variety of ways, and have determined that it provides robust, reliable classifications under many different scenarios. We find that of the quantities examined here, morphology has the most discriminating power. We have shown that {\it galsnid} is effective at building relatively pure samples of particular SN classes, and can be helpful for building samples for SN~Ia cosmology. We also demonstrated some additional applications for {\it galsnid}, including separating various subclasses. Past (SDSS), current (Pan-STARRS; PTF), and future SN surveys (Dark Energy Survey; the Large Synoptic Survey Telescope) should have deep imaging of all SN host galaxies as a result of their nominal survey operations. These data could be used for classification with {\it galsnid} without additional observations. However, a relatively small spectroscopic campaign could provide detailed information that should improve classifications beyond those presented here. Moreover, high-resolution adaptive-optics or {\it Hubble Space Telescope} imaging could significantly improve any classification by allowing a precise morphological classification. Additional improvements to {\it galsnid} could be achieved by taking into account additional galaxy data, properly handling correlated data, joining galaxy and SN data, and combining {\it galsnid} results with those of photometry-based SN classifiers. \begin{acknowledgments} \bigskip We thank D.\ Scolnic, R.\ Kessler, M.\ Sako, K.\ Barbary, and the anonymous referee for useful discussions and comments. Supernova research at Harvard is supported in part by NSF grant AST-1211196. Funding for SDSS-III has been provided by the Alfred P.\ Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S.\ Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. This paper uses data accessed through the Weizmann Interactive Supernova data REPository (WISeREP) -- www.weizmann.ac.il/astrophysics/wiserep. \end{acknowledgments} \bibliographystyle{fapj}
\section{INTRODUCTION} Galaxy interactions and mergers are thought to play an important role in galaxy evolution, impacting their morphologies, gas kinematics, and star formation rates \citep{toom72, sand88, barn96, cons06}. Cold dark matter models predict that galaxies have accreted their mass through hierarchical mergers \citep{delu06}. Mergers are also suspected to trigger luminosity increases, from active galactic nuclei (AGNs, \citealt{sand96}) and starbursts \citep{barn96,miho96,cox06}. Enhanced star formation can result from the tidal interactions of the galaxies that compress/shock the gas, causing it to collapse and form stars \citep{barn04,kim09,sait09}. These merger-induced starbursts are sometimes observed as luminous/ultra-luminous infrared galaxies (LIRGs/ULIRGs), which have extreme far-infrared (FIR) luminosities exceeding 10$^{11}$ L$_{\odot}$ and 10$^{12}$ L$_{\odot}$, respectively \citep{sand96}. Several studies have shown that the infrared luminosity of galaxies is statistically correlated with mergers \citep{elli13,lars16}. These can be seen as morphological disturbances in LIRGs and ULIRGs \citep{elbaz03,hwang07,hwang09,kart10,elli13}. However, the merger contribution to starbursts in general is still unclear. At high redshifts, average star formation rates are higher. LIRGs and ULIRGs are often found on the ``main sequence'', that is, obeying the correlation between SFR and stellar mass of typical galaxies at a given redshift \citep{dadd07,elbaz07}. Thus, we will define starbursts by comparison with the star formation of other galaxies of a given stellar mass in the same redshift bin. Previous studies have defined starbursts as galaxies experiencing star formation three or four times above the median SFR of main-sequence star-forming galaxies \citep{elbaz11,rodi11,schr15}. In a similar way, we divide galaxies into three different star formation ``modes'' (i.e. starbursts, main sequence, and quiescent galaxies) in each redshift bin. It is challenging to identify a large number of merger galaxies out to large redshifts. There are two main methods to identify mergers -- selecting close pairs \citep{bund09,dera09,man16,dunc19} or using morphological disturbances \citep{cons03,lotz08,lotz11}. For example, \cite{kado18} used the Subaru/HSC images to select merger galaxies at 0.05 $< z <$ 0.45 using visually identified tidal features (e.g. shell or stream features). However, to use close pairs, spectroscopic velocities for both members of each pair are needed. Because spectroscopic observations are expensive, merger studies based on them will suffer from incompleteness. Deep and high-resolution imaging (e.g. {\it Hubble Space Telescope}) could avoid that incompleteness, but the merger fraction from galaxy imaging alone may be ambiguous. Morphological disturbances, which trace the late stage of mergers, can be identified either by visual inspection or by selecting quantitative outliers in morphological measurements. Visual inspection is subjective and time-consuming. High-redshift galaxies can also be easily misclassified because of wavelength-dependent morphology and surface brightness effects \citep{bohl91,kuch00,wind02,kamp07}. In general, morphological types of galaxies are classified by their light profiles. The measured profile is the average intensity of a galaxy as a function of radius, and can be fitted mathematically (e.g. with a S{\'e}rsic profile, \citealt{sers63}).
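For reference, the S{\'e}rsic profile takes the form \begin{equation} I(r) = I_{e} \exp \left\{ -b_{n} \left[ \left( \frac{r}{r_{e}} \right)^{1/n} - 1 \right] \right\}, \end{equation} where $I_{e}$ is the intensity at the effective radius $r_{e}$, $n$ is the S{\'e}rsic index, and $b_{n}$ is a constant defined such that $r_{e}$ encloses half of the total light.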
These parametrizations have historically been used to classify galaxies into Ellipticals, Spirals, and Irregulars. There are also non-parametric measures for galaxy classification, such as concentration, asymmetry, and clumpiness (CAS, \citealt{cons00,cons03,mena06}). In addition, there have been many studies of morphological classification using the Gini coefficient and M$_{20}$ parameters \citep{lotz04,lotz08}. The Gini coefficient is a measure of whether the flux of a galaxy is concentrated or spread out (to be formally defined in Section 3), and M$_{20}$ is the second-order moment of the brightest 20 percent of a galaxy \citep{lotz04,lotz08}. The Gini coefficient was originally used in economics to statistically describe the distribution of wealth within a society. This coefficient was later applied to astronomical images to quantify the spread of galaxy light \citep{abra03,lotz08}, and it is now widely used for morphological analysis in astronomy. Between these two approaches (parametric vs. non-parametric), non-parametric measurements may be less impacted by redshift. Concentration, asymmetry, and clumpiness are used for merger finding, but the asymmetry measurement is less sensitive to the late stage of mergers \citep{lotz08}. The Gini coefficient and M$_{20}$ are used for wide-area surveys \citep{lotz04,lotz10a,lotz10b}. These parameters are more effective in classifying galaxies and identifying late-stage mergers than concentration and asymmetry, and also more robust for galaxies with low signal-to-noise ratios \citep{lotz04,lotz08}. Therefore, we use the Gini coefficient and M$_{20}$ for identifying mergers, for quantitative comparison with other studies, and for securing a large sample from our data. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fig1.png} \caption{(Top) L$_{IR}$(MIR-FIR) versus L$_{TIR}$ from the AKARI 9 \micron\ band. L$_{IR}$(MIR-FIR) is fitted by CIGALE including at least one FIR band detection \citep{wan20}. Diamonds and triangles represent AGNs classified with spectra and the WISE colour cut, respectively. (Middle) The ratio of L$_{TIR}$ to L$_{IR}$(MIR-FIR) as a function of L$_{TIR}$. (Bottom) The ratio of L$_{TIR}$ to L$_{IR}$(MIR-FIR) as a function of redshift. Colour codes represent galaxies in different redshift ranges.} \label{fig-lirtot} \end{figure} In this paper, we examine the evolution of merger fractions of galaxies with redshifts up to $\it{z}$ = 0.6 in the AKARI North Ecliptic Pole (NEP)--Wide field, and the variations in merger fractions of galaxies in three different star-formation modes (i.e. starbursts, main sequence, and quiescent galaxies). We also study how the FIR detection affects the merger fractions. In Section 2, we summarise our observational data and sample selection. Section 3 describes the morphological analysis to classify the galaxies using the Gini and M$_{20}$. We present the results and discussion in Sections 4 and 5, respectively. \section{DATA} \subsection{Optical images}\label{sec:opt} We use deep optical images taken with the Hyper Suprime-Cam (HSC) on the Subaru 8-m telescope in the AKARI NEP--Wide field covering 5.4 deg$^2$ (\citealt{goto17}; Oi et al. accepted). The HSC has a 1.5 deg field of view (FoV) covered with 104 red-sensitive CCDs, and the pixel scale is 0.17 arcsec. It is the largest FoV among the 8-m telescopes, and it covers the AKARI NEP--Wide field with only four pointings (see Figure 2 in \citealt{goto17} and Figure 1 in Oi et al. accepted).
The observations of the NEP--Wide field were performed on June 30th 2014 and August 7--10th 2015. The 5$\sigma$ detection limits are 28.6, 27.3, 26.7, 26.0, and 25.6 AB mag, and the median seeings are 0.68, 1.26, 0.84, 0.76, and 0.74 arcsec for the $\it{g, r, i, z}$ and $\it{y}$-band, respectively. The total number of identified sources in the 5 bands is 3.5 million, and more detailed information on the data set is described in Oi et al. (accepted). \subsection{Multi-wavelength Data} We have used the multi-wavelength data set based on the catalogue of AKARI mid-IR (MIR) galaxies newly identified by an optical survey by Subaru/HSC \citep{oi21}. The infrared galaxies detected by AKARI's NEP--Wide survey \citep{kim12} were cross-matched against deep HSC optical data, and thereafter all available supplementary data over the NEP--Wide field were merged together \citep{kim21}. Data merging of these two catalogues was carried out by positional matching with the matching radii defined by 3$\sigma$ positional offsets, which are more rigorous than using PSF sizes \citep{kim21}. This band-merged catalogue has 91,000 objects, including $\sim$ 70,000 objects detected in the N2, N3, N4 bands and $\sim$ 20,000 objects detected in the S7, S9, S11, L15, L18, L24 bands, and is the reference catalogue for our sample selection in this study. Optical to submillimeter (submm) photometry for the AKARI sources is also added. The original AKARI NEP--Wide field catalogue \citep{kim12} includes CFHT/MegaCam ${\it u^*, g', r', i', z'}$ \citep{hwan07}, Maidanak observatory/SNUCAM B, R, I-band data \citep{jeon10}, and KPNO/FLAMINGOS J and H band data \citep{jeon14}. In addition, the observed data from the CFHT/MegaPrime $\it{u}$-band \citep{huan20}, CFHT/MegaCam ${\it u^*, g', r', i', z'}$ \citep{oi14,goto18}, and WIRCam Y, J, and Ks bands \citep{oi14} are added to the main catalogue. The main catalogue is also cross-matched with the $\it{WISE}$ catalogue \citep{jarr11}, $\it{Spitzer}$/IRAC \citep{nayy18}, and $\it{Herschel}$/PACS and SPIRE \citep{pear17,pear19}. This band-merged catalogue adopted spectroscopic redshifts for objects from several observations, which include Keck/DEIMOS \citep{shog18,kim18} and MMT/Hectospec and WIYN/Hydra \citep{shim13}. Subaru/FMOS \citep{oi17}, GTC (Miyaji et al. in preparation), and the SPICY survey \citep{ohya18} are also included. Photometric redshifts are determined \citep{ho21} using 26 bands from the optical to the NIR with the public code LePhare \citep{arno99,ilbert06}, and the photo-z accuracy is $\sigma$ = 0.053. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{fig2.png} \end{center} \caption{Distribution of radii for galaxies. The dotted line represents the seeing size, 0.6 arcsec, in the HSC $\it{i}$-band image. Blue and red histograms represent the galaxies with photo-z and spec-z, respectively.} \label{fig-rhalf} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{fig3.png} \end{center} \caption{Total infrared luminosity distribution as a function of redshift for 9-$\micron$ selected galaxies. Black circles and red crosses represent the galaxies with spec-z and photo-z, respectively.} \label{fig-z} \end{figure} \subsection{Physical Parameters of Galaxies} We derive the total infrared luminosity ($L_{\rm TIR}$, 8--1000 \micron) using a set of template spectral energy distributions (SEDs) of main sequence galaxies in \citet{elbaz11} with each of the AKARI bands, S7, S9, S11, L15, L18, and L24.
They defined a typical IR SED for main sequence galaxies using $\it{Herschel}$ data; this SED can be used to extrapolate the total IR luminosity for galaxies for which only one measurement exists. Although $L_{\rm TIR}$ derived with FIR data could be more accurate than that without FIR data, the latter case enables us to secure a large sample \citep{calz10,gala13}. To assess the validity of the $L_{\rm TIR}$ from one band, we compare $L_{\rm TIR}$ with the infrared luminosity ($L_{\rm IR}$) derived from MIR--FIR bands \citep{wan20}. They calculated $L_{\rm IR}$ using the SED modeling code CIGALE \citep{burg05,noll09} with 36 bands ranging from the optical to the submm, where the emission is represented as a sum of dust and AGN contributions. As seen in Figure \ref{fig-lirtot}, we compared the difference between $L_{\rm TIR}$ from one band and $L_{\rm IR}$ from MIR--FIR bands as functions of $L_{\rm TIR}$ and redshift. $L_{\rm TIR}$ and $L_{\rm IR}$ up to around $L_{\rm TIR}$ = 10$^{13}$ L$_{\odot}$ show good agreement below $\it{z}$ $\sim$ 0.8; the deviation at higher redshifts may originate from the effect of galaxy evolution on the SED over cosmic time. The relation between the total IR luminosity and the one-band IR luminosity can show a discrepancy as redshift increases, depending on which IR band is used (e.g. 24 vs. 8 \micron, \citealt{elbaz11}). Since such a trend is commonly found in every AKARI MIR band (7--24 \micron), we used the 9 $\micron$ band (S9) for sample selection because it provides the largest sample. In the middle panel, the standard deviation in $L_{\rm IR}$/$L_{\rm TIR}$ of galaxies excluding AGNs is 0.71. The detailed sample selection is described in Section 2.4 and Table \ref{tab:sam}. Since $L_{\rm TIR}$ is usually used as a good star formation indicator, we calculate SFRs from $L_{\rm TIR}$ adopting an initial mass function (IMF). We adopt the \citet{salp55} IMF, which is also used for calculating $L_{\rm IR}$ with CIGALE in \citet{wan20}. The star formation rates are calculated from formula (12) in \citet{kenn12}, which is \begin{equation} \log \dot{M}_{*} \ ({\rm M}_{\odot}\ {\rm yr}^{-1}) = \log L_{x} - \log C_{x}, \end{equation} where $L_{x}$ is in units of erg s$^{-1}$ and $\log C_{x} = 43.41$, adopting the calibration factors from \citet{hao11,murp11}. The stellar mass ($M_*$) is derived from SED fitting with LePhare \citep{arno99,ilbert06} using 13 multi-wavelength bands: CFHT/MegaCam $\it{u'}$, Subaru/HSC $\it{g, r, i, z, Y}$, CFHT/Wircam $\it{J, Ks}$, and AKARI N2, N3, N4, S7, S9. We convert $M_*$ from LePhare based on the \citet{chab03} IMF to $M_*$ based on the \citet{salp55} IMF by dividing by a factor of 0.63 to compare fairly with other studies (e.g. \citealt{schr15,pear18}). The star-forming galaxies in the main sequence follow an empirical power-law relation between the SFR and stellar mass. However, \citet{smer18,drai20} showed that quiescent galaxies have considerable scatter on this relation when they compared different indicators such as L$_{TIR}$, H$\alpha$, neon lines, etc. Thus, adopting a separate conversion factor for deriving the SFR of each star formation mode might improve the SFR estimates. However, because we have restricted galaxies to the range 9.0 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5, which includes relatively massive quiescent galaxies, we do not apply a separate conversion factor for quiescent galaxies in this paper. Star-forming galaxies show a tight correlation between L$_{TIR}$ and SFR (e.g., \citealt{kenn12,hwang10,hwang12}).
However, this tight L$_{TIR}$--SFR correlation can break down for quiescent galaxies, especially for those with low infrared luminosities (e.g., \citealt{smer18,drai20}). Thus, it is conceivable that this difference might affect our results. However, because the infrared luminosities of our sample galaxies (even of the quiescent ones) are generally high enough, the impact of this different correlation is insignificant. \begin{table} \centering \caption{Number of galaxies with 5$\sigma$ detections in different AKARI/IRC bands} \begin{tabular}{c|cccccc} \hline Band & S7 & S9 & S11 & L15 & L18 & L24 \\ \hline Total & 5007& 9076 & 9099 & 8592 & 10133 & 2384 \\ spec-z & 1022 & 1417 & 1388 & 1117 & 1186 & 532 \\ photo-z & 5003 & 9072 & 9096 & 8589 & 10130 & 2382 \\ \hline Total (z$<$0.8) & 3702 & 7236 & 7377 & 4640 & 5068 & 1349 \\ spec-z & 861 & 1239 & 1220 & 927 & 971 & 443 \\ photo-z & 3659 & 7173 & 7317 & 4580 & 5010 & 1320 \\ \hline Total (z$<$0.6) & 3407 & 6425 & 6200 & 3392 & 3805 & 1150 \\ spec-z & 820 & 1169 & 1137 & 849 & 893 & 413 \\ photo-z & 3348 & 6331 & 6107 & 3307 & 3718 & 1108 \\ Herschel detection & 739 & 1048 & 1051 & 805 & 853 & 467 \\ \hline \end{tabular} \label{tab:sam} \end{table} \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{fig4.png} \caption{Examples of the morphological measurements performed by $\texttt{statmorph}$. Panels (a), (b) and (c) are example images of merger, spiral and elliptical galaxies, respectively. Black solid contours represent segmentation maps, and text labels give the values of the Gini, M$_{20}$ and CAS statistics. F(G,M$_{20}$) and S(G,M$_{20}$) denote the bulge statistic and the merger statistic, respectively. Detailed parameter descriptions are given in \citet{rodr19}. } \label{fig-statm} \end{figure*} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{fig5.png} \caption{The Gini--M$_{20}$ diagram for galaxies at different redshift ranges. Red and black dots represent galaxies in the $\it{r}$ and $\it{i}$ bands, respectively. Dashed lines separate the regimes according to morphological type: mergers above the dashed line, ellipticals in the right regime, and spirals below the dashed line. } \label{fig-class} \end{figure} \subsection{Sample Selection} \label{sample} Considering the total number of detected sources in each AKARI band in the band-merged catalogue, we selected the S9 (9 $\micron$) band for our study (see Table \ref{tab:sam}). The total number of galaxies in Table \ref{tab:sam} is the number of galaxies with redshift information estimated from either spectroscopic or photometric measurements. Although the total number of galaxies detected at L18 is the largest over the whole redshift range, we select 9 $\micron$ detected galaxies as our sample because their number is the largest below $\it{z}$ $=$ 0.6, where we finally analyse the data. Note that the total number of 9 $\micron$ detected galaxies is 9,076, which reduces to 7,236 and 6,425 at $\it{z}$ $<$ 0.8 and $\it{z}$ $<$ 0.6, respectively. To examine the contribution from AGNs in our analysis, we overplot in Figure \ref{fig-lirtot} 190 AGNs identified by Baldwin-Phillips-Terlevich (BPT) emission-line ratio diagrams \citep{shim13}. In addition, 30 IR-bright AGNs are found through $\it{WISE}$ W1$-$W2 and W2$-$W3 colour-colour diagrams with the criteria of \cite{jarr11} and \cite{mate12}. Figure \ref{fig-lirtot} shows that AGNs are significantly off the linear correlation of $L_{\rm TIR}$ and $L_{\rm IR}$.
This is because AGN-dominated galaxies have higher MIR luminosities than star-forming galaxies \citep{spin95}. Because the adopted templates are based on star-forming galaxies, we remove these 219 AGNs (73 AGNs at $\it{z}$ $<$ 0.6) from the further analysis, and end up with 6,352 galaxies at $\it{z}$ $<$ 0.6. Figure \ref{fig-rhalf} shows the normalised histograms of the seeing-corrected half-light radius ($R_h$) for galaxies in each redshift bin, from top to bottom. We separate the redshift range into bins of 0.2 and find that most galaxies are larger in size than the 0.6 arcsec seeing of the HSC $\it{i}$-band in all redshift bins. Figure \ref{fig-z} shows the distribution of the sample galaxies in redshift and total infrared luminosity. Red and black circles represent the sample with spectroscopic and photometric redshifts, respectively. The upper and right panels show histograms of redshift and $L_{\rm TIR}$, respectively. \section{Measurement of morphological parameters} The morphological parameters allow us to classify galaxy types. In order to quantify galaxy morphologies, we use the Gini coefficient and M$_{20}$ classification method \citep{lotz04}. The Gini coefficient is a statistical measure of the distribution of income in a population in economics, and has recently been applied to astronomy as well \citep{abra03,lotz04}. The Gini coefficient can be computed by sorting the pixel values $\it{f_i}$ in increasing order as \begin{equation} G= \frac{1}{|\overline{f}| \, n(n-1) }\sum_{i=1}^{n} (2i -n -1)|f_i| , \end{equation} where $\it{\overline{f}}$ is the mean over the pixel values and $\it{n}$ is the number of pixels. If all the flux of a galaxy is concentrated in one pixel, G = 1, whereas if a galaxy has a homogeneous surface brightness, G = 0 \citep{glas62}. M$_{20}$ is the second-order moment of the brightest regions of a galaxy. The second-order moment of the brightest 20$\%$ of the light is normalised by the total second-order central moment, M$_{tot}$ \citep{lotz04}. These are defined as \begin{equation} M_{tot} = \sum_{i}^{n} M_i = \sum_{i}^{n} f_i\left[(x_i - x_c)^2 + ( y_i - y_c)^2\right], \end{equation} \begin{equation} M_{20} = \log_{10} \frac{\sum_{i} M_i}{M_{tot}}, \ \ \mathrm{while} \ \sum_{i} f_i < 0.2 f_{tot}, \end{equation} where $\it{f_i}$ is the pixel flux value and ($\it{x_c, y_c}$) is the galaxy centre. The centre is the point where $\it{M_{tot}}$ is minimised. M$_{20}$ is anti-correlated with concentration; a low M$_{20}$ represents a highly concentrated galaxy. We derive the non-parametric Gini and M$_{20}$ using the $\texttt{statmorph}$ Python code \citep{rodr19} on cutouts of the $\it{r}$- and $\it{i}$-band images of the galaxies. It constructs a segmentation map so that the Gini measurement is insensitive to the dimming of surface brightness for distant galaxies \citep{lotz04}. The image of a galaxy is convolved with a Gaussian kernel of $\sigma$ = r$_{petro}$/5, where r$_{petro}$ is the Petrosian radius. The mean surface brightness within r$_{petro}$ is used to define a flux threshold, and the pixels with values above the threshold are assigned to the galaxy in the segmentation map. Both the Gini and M$_{20}$ are calculated on the segmentation map. Figure \ref{fig-statm} shows examples of segmentation maps of three galaxies with measured Gini and M$_{20}$. It should be noted that a high column density of dust could impact the morphological classification of galaxies in the Gini--M$_{20}$ space \citep{lotz08}.
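For reference, the two estimators defined above can be sketched as follows. This is a simplified stand-alone version, not the $\texttt{statmorph}$ implementation: the flux-weighted centroid stands in for the moment-minimising centre, and the pixels are assumed to come from a non-negative segmentation map.
\begin{verbatim}
import numpy as np

def gini(flux):
    """Gini coefficient of the segmentation-map pixels,
    with |f_i| sorted in increasing order."""
    f = np.sort(np.abs(np.asarray(flux, dtype=float).ravel()))
    n = f.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * f) / (f.mean() * n * (n - 1))

def m20(flux, x, y):
    """Moment of the brightest 20% of the light, normalised by the
    total second-order central moment (simplified centre choice)."""
    f = np.asarray(flux, dtype=float)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc = np.average(x, weights=f)     # centroid as a stand-in centre
    yc = np.average(y, weights=f)
    m_i = f * ((x - xc) ** 2 + (y - yc) ** 2)
    bright = np.argsort(f)[::-1]      # brightest pixels first
    cum = np.cumsum(f[bright])
    sel = bright[cum <= 0.2 * f.sum()]
    return np.log10(m_i[sel].sum() / m_i.sum())
\end{verbatim}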
To briefly test this dust effect, we examine the distributions of Gini and M$_{20}$ for $\it{Herschel}$ detected and non-detected galaxies. Because the $\it{Herschel}$ detection requires larger submm flux densities (i.e. a larger amount of dust than in galaxies with similar M$_*$/SFRs/T$_{\rm{dust}}$; \citealt{hild83}), this comparison can show the impact of dust on the morphological measurements. The comparison does not show any systematic differences in the Gini and M$_{20}$ estimates between the two samples (not shown here), which is supported by the Kolmogorov-Smirnov test (p $<$ 0.35). We therefore do not think that dust introduces a systematic bias into our measurements of the Gini and M$_{20}$ parameters. As \cite{lotz08} proposed criteria to separate galaxies into three galaxy types on the Gini--M$_{20}$ diagram using galaxies at 0.2 $<$ $\it{z}$ $<$ 1.2, we adopt the classification criteria from the equation (4) of \citet{lotz08} to divide galaxies into mergers, spirals and ellipticals on the Gini--M$_{20}$ diagram: mergers: G $>$ $-$0.14 M$_{20}$ + 0.33; E/S0/Sa: G $<$ $-$0.14 M$_{20}$ + 0.33 and G $>$ 0.14 M$_{20}$ + 0.80; Sb/Sc/Irr: G $\leq$ $-$0.14 M$_{20}$ + 0.33 and G $\leq$ 0.14 M$_{20}$ + 0.80. Figure \ref{fig-class} shows the G--M$_{20}$ distribution of our sample in the $\it{r}$- and $\it{i}$-band images for different redshift bins. We find that the distributions of these two morphological parameters for galaxies in the $\it{r}$- and $\it{i}$-band images are not significantly different. We derive morphological parameters for both $\it{r}$- and $\it{i}$-band images so as to select the band that probes similar rest-frame wavelengths when comparing galaxies at different redshifts. Therefore, we adopt the parameters from the $\it{r}$-band images for the galaxies at $\it{z}$ $<$ 0.2, and from the $\it{i}$-band images for those at $\it{z}$ $>$ 0.2. To verify our morphological classification based on the measurements of Gini and M$_{20}$, we also conduct a visual inspection of the optical images of all the galaxies in our sample. We find that only 1$\%$ of the galaxies classified as mergers in our sample turn out to be spirals, and that 1.7$\%$ of the ellipticals and 0.2$\%$ of the spirals based only on the G--M$_{20}$ classification are mergers. This contamination is small enough to have no significant impact on our results. We therefore decided to keep the results of the automated classification based on the estimates of Gini and M$_{20}$, to avoid possible subjective misclassification in visual classification, especially for faint galaxies. Also, since the FWHM of a point source in the $\it{i}$-band images is 0.84 arcsec, corresponding to $\sim$ 3.8 kpc at our median redshift (i.e. z $\sim$ 0.3), we could hardly find patchy features of star formation in the galaxy images that could affect the Gini coefficient during the visual inspection of galaxies. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fig6.png} \caption{Merger fractions as a function of L$_{TIR}$ at different redshifts. Filled red circles, open green triangles and open blue rectangles represent galaxies at 0.0 $< \it{z} <$ 0.2, 0.2 $< \it{z} <$ 0.4 and 0.4 $< \it{z} <$ 0.6, respectively.} \label{fig-add} \end{figure} \begin{figure*} \centering \includegraphics[width=0.6\textwidth]{fig7.png} \caption{ Star formation rates for 9-$\micron$ selected galaxies as a function of stellar mass at different redshift ranges.
The solid line represents the average SFR of main sequence galaxies \citep{schr15}, and the upper and lower dashed lines represent a factor of two above and below this fit at each redshift range. The dash-dotted line represents log(M$_*$) = 10.5. The blue solid line shows the best fit for the main sequence galaxies in SDSS \citep{elbaz07}. Red and black dots represent the galaxies with spec-z and photo-z, respectively.} \label{fig-sfmode} \end{figure*} \section{Results} It is well known that the infrared luminosity of galaxies is closely related to the merger activity of galaxies \citep{hwang07, elli13,lars16}. However, the situation can differ if we consider a wide range of redshift. For example, Figure \ref{fig-add} shows the merger fraction of galaxies in our sample as a function of infrared luminosity at different redshift ranges. As expected, the merger fraction increases with L$_{TIR}$ for a given redshift range. However, because the merger fraction can differ with redshift despite similar L$_{TIR}$, we examine the merger fraction focusing on the star formation mode. \subsection{Merger fractions of galaxies at different star formation modes} The relation between the star formation rate and stellar mass of galaxies is tightly related to the star formation mode. To investigate the cosmic evolution out to $\it{z}$ $\sim$ 1 over the star formation modes, we divide our sample at each redshift bin (see Figure \ref{fig-sfmode}). We adopt the average SFR of main sequence galaxies as a function of stellar mass and redshift from the equation (9) of \citet{schr15} to resolve the star formation modes. They presented an analysis of the statistical properties of star-forming galaxies using the $\it{Herschel}$ and $\it{Hubble}$ $\it{H}$-band images in the redshift range $\it{z}$ $>$ 0.3. We extrapolate their relation to the 0 $<$ $\it{z}$ $<$ 0.2 bin, but find that the extrapolated SFRs are higher than those of previous studies \citep{brin04,elbaz07}. Therefore, we use the relation derived from galaxies at low redshifts (i.e. SDSS galaxies at z $\approx$ 0.1, \citealt{elbaz07}) to adjust the extrapolated relation; we set the average SFR of the main sequence to be equal to that of SDSS at log(M$_*$/M$_{\odot}$) = 10.0 in the redshift range 0.0 $< \it{z} <$ 0.2, as shown in the top left panel of Figure \ref{fig-sfmode}. We define the galaxies within 2 and 0.5 times the average SFRs (dashed lines in Figure \ref{fig-sfmode}) as main sequence (MS) galaxies. Galaxies above the upper dashed line are considered starbursts (SB, SFR $>$ 2$\times$SFR$_{MS}$), and galaxies below the lower dashed line quiescent galaxies (QS, SFR $<$ 0.5$\times$SFR$_{MS}$); the mode assignment is sketched below. Our sample is distributed over the three star formation modes at 0.0 $<$ $\it{z}$ $<$ 0.2; however, there are fewer quiescent galaxies as redshift increases because of the MIR detection limit. Note that quiescent galaxies become much fainter in the MIR at higher redshift. Therefore, we constrain the galaxies to the mass range 9.0 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5 as the total sample, to avoid extreme mass ranges and to select a uniform sample over the star formation modes. To better understand the overall star formation activity of galaxies while minimising the mass effects, we plot the starburstiness (R$_{SB}$) distribution in Figure \ref{fig-sbn}.
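A minimal sketch of this mode assignment is given below; the main-sequence SFR is supplied externally (e.g. from the adjusted relation of \citet{schr15}), and the function names are ours.
\begin{verbatim}
import numpy as np

def sf_mode(sfr, sfr_ms):
    """Assign a star formation mode relative to the main sequence:
    'SB' if SFR > 2*SFR_MS, 'QS' if SFR < 0.5*SFR_MS, else 'MS'."""
    sfr = np.asarray(sfr, dtype=float)
    sfr_ms = np.asarray(sfr_ms, dtype=float)
    return np.where(sfr > 2.0 * sfr_ms, "SB",
                    np.where(sfr < 0.5 * sfr_ms, "QS", "MS"))
\end{verbatim}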
Starburstiness is a measure of the excess in the specific star formation rate of a galaxy compared to that of a main sequence galaxy with the same stellar mass, and is defined as R$_{SB}$ = sSFR/sSFR$_{MS}$ \citep{elbaz11}. Figure \ref{fig-sbn} displays the R$_{SB}$ of galaxies in the total mass range 9.0 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5 (black solid histogram) and of those at 10.5 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5 (blue dashed histogram). The galaxies with R$_{SB}$ $<$ 0.5 and R$_{SB}$ $>$ 2 represent quiescent and starburst systems, respectively. As expected, both samples show peaks around R$_{SB}$ = 1. However, the bin of 0.6 $<$ $\it{z}$ $<$ 0.8 has fewer quiescent and main sequence galaxies than the other bins because of the detection limit. Therefore, we remove the sample at 0.6 $<$ $\it{z}$ $<$ 0.8 from further analysis. Because the quiescent galaxies could still be affected by detection limits in all redshift bins except 0.0 $<$ $\it{z}$ $<$ 0.2, it should be noted that the merger fractions of quiescent galaxies are upper limits. Figure \ref{fig-merg} shows the evolution of the merger fractions for starburst, main sequence and quiescent galaxies as a function of redshift. We define the merger fraction as the ratio of the number of merging galaxies to the total number of galaxies in each star formation mode within the redshift range. To minimise the mass effects on the comparisons of merger fractions between the samples, we examine the trends for galaxies in the total (9.0 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5) and narrow (10.5 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5) mass ranges in the left and right panels, respectively. We find that the merger fractions of all three modes of galaxies marginally increase with redshift in both panels of different mass ranges. The merger fractions of galaxies in the total mass range at 0.0 $<$ $\it{z}$ $<$ 0.2 are higher than those of galaxies in the narrow mass range. This is because there are more sample galaxies with log(M$_*$/M$_{\odot}$) $<$ 10.5, as shown in Fig. \ref{fig-sfmode}. We also find that the merger fractions of galaxies differ between the three star formation modes, and the merger fractions of starbursts are higher than those of main sequence and quiescent galaxies in both panels. We fit the merger fraction evolution with a power law \citep{patt02,cons09}, which is given by f$_{m} = \alpha (1+z)^m + \rm{C}$. We use six merger fraction points with a redshift bin size of 0.1. For starbursts and main sequence galaxies, we obtain the indices $\it{m}$ = 0.90 $\pm$ 0.18 and 2.04 $\pm$ 0.13 in the total mass range, respectively, and $\it{m}$ = 1.81 $\pm$ 0.12 and 1.21 $\pm$ 0.08 in the narrow mass range, respectively. These are similar to or lower than those of other studies \citep{cons03,lotz08,qu17}. To examine whether our results are robust against different main sequence selections, we also use the evolutionary trend of the main sequence locus in \citet{pear18}. Following the single power law they used, S = $\alpha$ [log(M$_*$) + 10.5] + $\beta$ \citep{pear18,whit12}, where $\alpha$ and $\beta$ are the slope and the normalisation, respectively, we calculate the fit of the SFR and $M_*$ of the galaxies. We fix $\alpha$ at 0.5 and interpolate $\beta$ using the parameters from table 2 of \citet{pear18}, and identify starbursts, main sequence and quiescent galaxies.
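The power-law fit used above can be reproduced with a standard least-squares routine; the following is a minimal sketch, in which the starting guess \texttt{p0} is our assumption.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def f_m(z, alpha, m, c):
    """Merger fraction model f_m = alpha * (1 + z)^m + C."""
    return alpha * (1.0 + z) ** m + c

def fit_merger_fraction(z, frac, frac_err):
    """Least-squares fit of the binned merger fractions;
    returns the best-fitting parameters and 1-sigma errors."""
    popt, pcov = curve_fit(f_m, z, frac, sigma=frac_err,
                           absolute_sigma=True, p0=[0.05, 1.5, 0.0])
    return popt, np.sqrt(np.diag(pcov))
\end{verbatim}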
We analyse the merger fractions of galaxies and find that the increasing trends of the merger fractions with redshift for galaxies in the different star formation modes are consistent, whether we use the average SFRs of \citet{schr15} or of \citet{pear18}. \begin{figure*} \centering \includegraphics[width=0.6\textwidth]{fig8.png} \caption{Starburstiness (R$_{SB}$) distribution of galaxies in each redshift range. Black solid and blue dashed lines represent the galaxies in the stellar mass ranges 9.0 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5 and 10.5 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5, respectively. Dotted and dash-dotted lines represent the borders between starbursts and main sequence galaxies, and between main sequence and quiescent galaxies, respectively.} \label{fig-sbn} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.7\textwidth]{fig9.png} \caption{Evolution of the merger fractions for starburst, main sequence, and quiescent galaxies. Filled red circles, open green triangles and open blue rectangles represent starbursts, main sequence and quiescent galaxies, respectively. The left panel shows galaxies in the entire stellar mass range, and the right panel shows the galaxies at 10.5 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5. } \label{fig-merg} \end{figure*} \subsection{Merger fraction of galaxies with and without $\it{Herschel}$ detections} It is well known that FIR-bright galaxies tend to be found as mergers at low redshifts. However, this is not always true for high redshift galaxies; isolated disk galaxies at high redshifts can have high infrared luminosities without any merger events because of their large amount of gas \citep{drew20}. This suggests that the infrared luminosity may not reflect the genuine physical conditions of galaxies when comparing galaxies at different redshifts. Instead, it is important to distinguish galaxies based on more physically motivated parameters, including the star formation mode, which is the main driver of this study. To better justify this point, we further compare the merger fraction of FIR detected galaxies, depending on the redshift and star formation mode, with that of FIR non-detected galaxies. Here, the FIR detection means that the galaxies are detected in at least one band of $\it{Herschel}$/SPIRE at 250, 350 and 500 $\micron$. Because $\it{Herschel}$/PACS covers only the NEP-Deep field, unlike $\it{Herschel}$/SPIRE (see Fig. 1 in \citealt{kim21}), we use only the $\it{Herschel}$/SPIRE data to reduce the selection effect in the comparison. We separate our 9 $\micron$ detected sample into $\it{Herschel}$ non-detected and detected subsamples, and show their starburstiness at different redshift bins in Figure \ref{fig-herschelsbn}. Black dashed and blue solid lines represent the $\it{Herschel}$ detected and non-$\it{Herschel}$ detected samples, respectively. We find that the $\it{Herschel}$ detected samples have higher R$_{SB}$ than the non-$\it{Herschel}$ detected samples in all redshift bins, as expected. Figure \ref{fig-herschel} shows the evolution of the galaxy merger fraction for the non-$\it{Herschel}$ detected and $\it{Herschel}$ detected samples in the upper and bottom panels, respectively. The left and right panels show the total and narrow mass ranges, respectively. In the total mass range, we find that the merger fraction of starburst galaxies with $\it{Herschel}$ detections increases more steeply with redshift than that of the non-$\it{Herschel}$ detected galaxies.
Also, the merger fraction of galaxies with $\it{Herschel}$ detections is higher than that of the non-$\it{Herschel}$ detected galaxies, because the $\it{Herschel}$ detected galaxies have higher FIR luminosities. We fit the merger fraction evolution with the power law \citep{patt02,cons09} given by f$_{m} = \alpha (1+z)^m + \rm{C}$. For the non-$\it{Herschel}$ detected galaxies, we obtain the indices $\it{m}$ = 0.18 $\pm$ 0.62 and 0.81 $\pm$ 0.19 in the total mass range, and $\it{m}$ = 0.46 $\pm$ 2.29 and 1.44 $\pm$ 2.12 in the narrow mass range, for starbursts and main sequence galaxies, respectively. For the $\it{Herschel}$ detected galaxies, we obtain the indices $\it{m}$ = 2.22 $\pm$ 0.72 and 1.09 $\pm$ 0.78 in the total mass range, and $\it{m}$ = 2.71 $\pm$ 2.46 and 1.78 $\pm$ 2.04 in the narrow mass range, for starbursts and main sequence galaxies, respectively. The indices for the merger fractions of starbursts in the $\it{Herschel}$ detected galaxies are significantly different from those in the non-$\it{Herschel}$ detected galaxies: $\it{m}$ = 0.18 $\pm$ 0.62 vs. $\it{m}$ = 2.22 $\pm$ 0.72 in the total mass range, and $\it{m}$ = 0.46 $\pm$ 2.29 vs. $\it{m}$ = 2.71 $\pm$ 2.46 in the narrow mass range. The differences for main sequence galaxies are relatively weak compared to those for starbursts: $\it{m}$ = 0.81 $\pm$ 0.19 vs. $\it{m}$ = 1.09 $\pm$ 0.78 in the total mass range, and $\it{m}$ = 1.44 $\pm$ 2.12 vs. $\it{m}$ = 1.78 $\pm$ 2.04 in the narrow mass range. We also find that the $\it{Herschel}$ detected samples have a large range of SFRs at low redshift, and that the SFRs of those galaxies become higher as the redshift increases. Thus, the increasing tendency of the merger fractions for the $\it{Herschel}$ detected samples may include the cosmic evolution. To better compare the merger fractions between the $\it{Herschel}$ detected and non-$\it{Herschel}$ detected galaxies while minimising the mass effects, we compare the results in the narrow mass range in the right panels of Figure \ref{fig-herschel}. Although the errors are large, we find that the fraction for the same star formation mode does not differ with the $\it{Herschel}$ detection, considering the errors. Of course, the merger fractions of different star formation modes are still different for both samples. This comparison shows the importance of the star formation mode in determining the merger fraction, regardless of the FIR luminosity. \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{fig10.png} \caption{Distribution of starburstiness for galaxies at each redshift range. The black dashed line represents non-$\it{Herschel}$ detected galaxies and the blue solid line represents $\it{Herschel}$ detected galaxies. } \label{fig-herschelsbn} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.70\textwidth]{fig11.png} \caption{Evolution of the merger fraction of starburst, main sequence, and quiescent galaxies for non-$\it{Herschel}$ detected galaxies (upper) and $\it{Herschel}$ detected galaxies (bottom). The left and right panels show galaxies in the total and fixed mass ranges of 9.0 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5 and 10.5 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5, respectively. Coloured symbols are the same as in Fig. \ref{fig-merg}. } \label{fig-herschel} \end{figure*} \section{DISCUSSION}\label{discuss} \subsection{Merger fractions over the star formation modes and their evolution} The evolution of the galaxy merger fraction over cosmic time has been examined through numerical simulations and observational analyses.
Some simulations assuming a cold dark matter Universe predicted a decreasing merger fraction of galaxies with cosmic time \citep{fakh08,rodr15}, while other simulations show the merger fraction increasing to $z$ $\sim$ 1.5 and then remaining constant as redshift increases \citep{kavi15,qu17,snyd17}. In observations, \citet{cons08} showed that the merger fraction of very massive galaxies with log(M$_*$/M$_{\odot})>$ 10 appears to increase up to $\it{z}$ $\sim$ 3, while the merger fraction of less massive galaxies peaks at $\it{z}$ $\sim$ 1.5 -- 2.5 and decreases towards higher redshift. \citet{vent17} also showed that the merger fractions for galaxies with log(M$_*$/M$_{\odot}) >$ 9.5 increase to around $\it{z}$ $\sim$ 2 and slowly decrease after that. Even in the relatively low redshift range, some authors found an increasing merger fraction with redshift \citep{lope09,vent17}; however, a constant merger fraction at $\it{z}$ $<$ 0.6 has also been suggested \citep{cons09,joge09}. Our results are in agreement with the observations suggesting that the merger fraction of galaxies slightly increases with redshift at $\it{z}$ $<$ 0.6 \citep{lope09,man16,vent17}. However, the absolute values of the merger fractions can differ owing to different methods of sample selection depending on luminosity, mass or the definition of a merger, which will be discussed in Section \ref{method}. As shown in Figure \ref{fig-merg}, the merger fractions of starburst, main sequence and quiescent galaxies are dependent on the star formation mode. Although some authors suggested a dependence of the merger fraction of galaxies on the distance from the main sequence \citep{cibi19,pear19}, the effect of the star formation modes has not been evaluated quantitatively. For a fair comparison, we investigate the evolution of the merger fraction for galaxies with similar star formation activities. Merger galaxies selected by morphology are mainly late-stage and disturbed systems \citep{pear19}. Because our merger galaxy samples are also selected by morphology, the higher merger fractions in this study than in other studies can suggest that the star formation enhancement is prominent at the late stage of merging \citep{sand96,haan11,cox06,hwang12}, and that the earlier stages of merging cause only a mild increase of the SFRs for close pairs \citep{lin07}. Regarding the evolution of the merger fraction, \citet{cons09} showed diverse results through fitting with a power-law function. They found that the power-law slope changes from 1.5 to 3.8 depending on the sample selection and on the merger fraction at $\it{z}$ = 0. The slope tends to be higher for more massive galaxies and lower for less massive galaxies \citep{cons03}. \cite{qu17} used an exponential power-law function for the simulation predictions; they found that the power-law slope, $\it{m}$, for close pairs at $\it{z}$ $<$ 4 changes from 2.8 to 3.7 depending on the mass limits. On the other hand, a study that used morphological disturbances for merger selection over the redshift range $\it{z}$ $<$ 1.2 \citep{lotz08} obtained the milder slope of $\it{m}$ = 1.26. Considering the similar merger selection and redshift range, our results on the power-law slope of $\it{m}$ = 0.18 -- 2.71 are consistent with those of \cite{lotz08}. However, they showed that the slope of the merger fraction can be easily affected by the morphological diagnostics and the timescales used to determine merger fractions.
\subsection{Mergers for $\it{Herschel}$ detected galaxies} Galaxy merging is expected to drive star formation episodes \citep{barn96,miho96}; however, UV/optical light is dimmed and sources appear redder due to absorption and scattering by dust. Since considerable amounts of the energy from star formation and AGNs are absorbed by gas and dust and re-emitted at FIR wavelengths \citep{puge96,dole06}, FIR data sets are well suited for the study of star formation activity \citep{pei99,chary01}. Some results for LIRGs and ULIRGs showed that FIR-bright galaxies are ongoing mergers with disturbed morphologies, which is evidence for merger activity \citep{sand88,clem96,hopk06,hwang10}. While these studies were mainly focused on FIR-bright galaxies, our sample selected from MIR detections covers a wider range of L$_{TIR}$. As shown in Figures \ref{fig-merg} and \ref{fig-herschel}, the merger fractions are strongly dependent on the star formation mode, irrespective of the $\it{Herschel}$ detection. We also find that the increasing slope of the merger fraction for starbursts detected by $\it{Herschel}$ is steeper than that of the non-$\it{Herschel}$ detected starbursts. The difference in the slope for main sequence galaxies is not significant compared to that for starbursts. Note that quiescent galaxies show the steepest slope; however, their merger fractions at $\it{z}$ $>$ 0.2 are upper limits due to the lack of samples. These results support the idea that $\it{Herschel}$ detected galaxies with high FIR luminosities, such as LIRGs/ULIRGs, are more likely to be in the merging stage. Although it is difficult to compare with other results regarding the effect of the FIR detection, this can be interpreted as meaning that the merger fractions of galaxies are determined not only by the IR luminosity, but also by the star formation mode of galaxies at a fixed redshift range. \subsection{Comparison to other studies}\label{method} To study the merger fraction of galaxies, one has to define a galaxy sample along with a redshift/mass range and a galaxy classification method \citep{lotz08,cons09,bund09,man16,dela17,wats19}. Therefore, it is important to understand the sample selection, including the merger identification scheme, to make fair comparisons with other studies. Although it is difficult to directly compare our results with other studies because of these differences, we describe the similarities and differences between our study and other studies in this section. Methodologically, merger galaxies can be identified by using galaxy pairs or morphological disturbances. As a morphological case, \citet{lotz08} used Gini and M$_{20}$ to select merger galaxies at 0.2 $<$ $\it{z}$ $<$ 1.2 in a volume-limited sample with B-band luminosity limits assuming luminosity evolution. They found weak evolution of the merger fractions of galaxies in this redshift range. \citet{cons09} derived an increasing merger fraction using asymmetry and clumpiness with galaxies from COSMOS and EGS at 0.2 $<$ $\it{z}$ $<$ 1.2. Although there are differences between our sample and theirs, such as the redshift range and the availability of MIR data, our result is consistent with the previous ones \citep{lotz08,cons09} in that the merger fractions of galaxies mildly increase as redshift increases when a morphological selection of merger galaxies is used. In addition to the morphological method, there are studies of merger fractions using galaxy pairs.
\citet{bund09} and \citet{dera09} used mass-selected pairs and projected separations (R$_{proj}$) for selecting merger galaxies, respectively. \citet{wats19} showed that the merger fraction for paired galaxies in clusters is higher than that in field environments \citep{bund09,dera09}. \citet{cibi19} compared the results from the morphological classification with those from the pair identification. They found that most of the starburst galaxies are morphologically disturbed, but that, for galaxy pairs, the merger fractions were small in starburst galaxies. This can suggest that the merger fractions of this study could be higher than those in other studies based on galaxy pairs. The relatively high merger fractions of our results can also be explained by the sample selection criteria. \citet{lotz08} used luminosity-size limits for the selection of massive galaxies. \citet{cons09,lope09} used galaxies with M$_{*} >$ $10^{10}$ M$_{\odot}$. These criteria secure a more limited set of galaxies than our sample, which covers the mass range 9.0 $<$ log(M$_*$/M$_{\odot}$) $<$ 11.5. However, the largest difference in the sample selection between ours and others is the use of the MIR detection in our study, which can significantly affect the star formation activity. The number of sample galaxies can then be limited, and this small total number of the sample, which is the denominator of the merger fraction, could make the merger fractions of galaxies high compared to others. The method can also affect the results; because Gini--M$_{20}$ is sensitive to the features of minor mergers \citep{lotz11}, our method may select more merger candidates than other studies based on CAS or asymmetry criteria. \citet{pear19} reported elevated merger fractions for galaxies at 0 $< \it{z} <$ 4 based on CANDELS data compared to other studies. Such a result may arise because the pixel size of galaxies within the images becomes smaller and galaxies become fainter as redshift increases, so that galaxies with suppressed features are counted as merger galaxies. Therefore, the direct comparison of the absolute values of merger fractions between different studies is difficult. \section{SUMMARY}\label{sum} We used the galaxy sample detected in the MIR band (9 \micron) of AKARI in the NEP--Wide field. In order to identify the merging galaxies, morphological analyses were carried out relying on the Gini and M$_{20}$ coefficients derived from deep Subaru/HSC ($\it{r}$-, $\it{i}$-band) images. Using the spectroscopic and photometric redshifts, we derived the total infrared luminosity and SFR from the AKARI 9 $\micron$ detections. We compare the merger fractions between three different star formation modes at $\it{z} <$ 0.6: starburst, main sequence and quiescent galaxies. Our main results are as follows: \begin{enumerate} \item The merger fractions for starbursts, main sequence, and quiescent galaxies slightly increase with redshift at $\it{z} <$ 0.6. \item The galaxy merger fractions differ depending on the star formation mode. The starbursts show higher merger fractions than the main sequence and quiescent galaxies. \item The increasing slope of the merger fractions for $\it{Herschel}$ detected starbursts is slightly steeper than that of the non-$\it{Herschel}$ detected starbursts.
\end{enumerate} Our results are in line with the idea that the merger fraction increases with redshift in the local Universe \citep{lotz08,cons09,lope09}, and that galaxies in different star formation modes, such as starbursts, main sequence and quiescent galaxies, show different merger fractions \citep{cibi19,pear19}. Regardless of the FIR detection, the increasing trends of the merger fraction over the local Universe are consistent for all the different star formation modes. These results underscore the importance of the star formation mode in the study of the evolution of the galaxy merger fraction. To better understand the merger fraction evolution with different star formation activities, it is important to secure a larger, unbiased sample of high-$\it{z}$ galaxies that does not suffer from observational selection effects on the star formation mode. \section*{Acknowledgements} W-SJ, EK and Y-SJ acknowledge support from the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (MSIT) of Korea (NRF-2018M1A3A3A02065645). HSH was supported by the New Faculty Startup Fund from Seoul National University. HShim acknowledges the support from the National Research Foundation of Korea grant No. 2018R1C1B6008498. TH is supported by the Centre for Informatics and Computation in Astronomy (CICA) at National Tsing Hua University (NTHU) through a grant from the Ministry of Education of the Republic of China (Taiwan). \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author.
\section{Introduction} \label{secI} Dynamic earthquake slip processes are frictional slips inside the Earth. They show various qualitatively different behaviors, and discontinuous transitions between them are widely observed. For example, several earthquakes show pulse-like slips, whereas some generate crack-like slips as their slip profiles \cite{Amp}. The existence of slow earthquakes with slip velocity and rupture velocity (the propagation velocity of the fault tip) much smaller than those for ordinary earthquakes is widely known \cite{Oba}. There is also a variety of stress drops; the stress drop is defined as the difference between the shear stress acting on the fault plane and the residual shear stress. Almost all earthquakes have scale-independent stress drops; however, some are considered to have enormously large ones \cite{Dup}. Whether intermediate cases exist or not is a controversial problem. If we refer to each aspect as a phase, these behaviors may be understood in terms of phase transitions in a unified manner. To explain such transitions, it is insufficient to assume cracks in classical elastic bodies. Several researchers have therefore considered various aspects of the interior of the Earth, such as fault rock melting \cite{Spr} and chemical effects \cite{DiT}. For example, the slow slip velocity and rupture velocity for slow earthquakes are impossible to reproduce with the classical crack model; the generation of slow earthquakes is considered to be promoted by certain factors, e.g., migration of the fluid (water) \cite{Suz14, Seg}. Some studies have treated interactions among these effects. For example, frictional heating and fluid pressure are considered to interact as thermal pressurization \cite{Lac}. This interaction describes the elevation of fluid pressure due to frictional heating. Such fluid pressure elevation induces a decrease in the normal stress acting on the fault plane, leading to frictional stress decrease and slip acceleration. Many studies, including a sequence of studies by the author, have considered another interaction, between thermal pressurization and dilatancy (referred to as ITPD below) \cite{Suz14, Seg, Ric06, Suz07, Suz08, Suz09, Suz10}. Dilatancy is slip-induced inelastic pore creation, leading to fluid pressure decrease and slip deceleration. This interaction enables us to treat slip acceleration and deceleration in a single framework, with which we can understand both ordinary and slow earthquakes. However, some problems remain unsolved in the treatment of ITPD. For example, it should be emphasized that the porosity evolution laws are not yet firmly understood \cite{Bart, Teu, Rud, Mar, Sle}. Laboratory experiments under conditions deep in the Earth are so hard to perform that exact reproduction of porosity evolution is considered difficult. A unified treatment independent of the details of the law is therefore required to understand the behavior of the system in the presence of ITPD. Additionally, geophysical studies have solely focused on explaining geophysical phenomena, and mathematically and physically important characteristics have not been investigated, as indicated below. The studies of ITPD mentioned above employed nonlinear governing equation systems. In fact, the behaviors of solution orbits for nonlinear equation systems have been studied widely.
The competitive Lotka-Volterra (LV) equation system, a model describing competition between two species, is an example of such a nonlinear equation system and has been a frequently treated topic recently \cite{Chen, Fer, Fel}. For such a system, isolated fixed points on the phase space have been found, and the features of solution orbits crossing the points are well understood. The points are attractors, a saddle node and a repeller: the attractors describe the extinction of one species, the saddle node corresponds to the coexistence of both species, and the origin is the repeller. A discontinuous change of the solution behaviors depending on the initial value is observed there, similar to the discontinuous behaviors observed in the dynamic earthquake slip process. Nonetheless, two characteristics need to be emphasized in the ITPD model. First, continuous non-isolated fixed points appear for the ITPD model as attractors, as shown in this paper. Second, the initial values of the variables construct a continuous geometry in the ITPD model; the set of initial values forms a line in the phase space. We can therefore conclude that the initial and final values construct continuous geometries in the ITPD model. We can expect a universal relation between these continuous geometries, although a framework to treat such a relation has not yet been constructed. This paper is organized as follows. The model setup and governing equation system are clarified in Sec. \ref{secMS}. The slip velocity and inelastic porosity are the variables governing the system behavior. Geometrically different attractors generated by the nullclines common to both variables are found mathematically in Sec. \ref{secATT}. The discontinuity of the solution behaviors can be regarded as phase transition behavior. The criticality in the vicinity of the phase transition point is found in Sec. \ref{secPT}. The criticality is universal, and is unaffected by the assumed details of the porosity evolution law. Physical and seismological application of the results obtained is carried out in Sec. \ref{secApp}. Dynamic earthquake slip behavior is concluded to be a phase transition phenomenon. In particular, predicting which phase emerges and the extent of the final slip amount is difficult. The paper is summarized, and a nonlinear mathematical application is performed, in Sec. \ref{secDisCon}. \section{MODEL SETUP} \label{secMS} We consider a system consisting of a homogeneous and isotropic thermoporoelastic medium, i.e. the medium has pores whose volume ratio to the whole volume (porosity) is initially homogeneous. The initial porosity is referred to as the elastic porosity, $\phi_e$. The pores are assumed to be filled with fluid (water). We also assume that the thermal pressurization and dilatancy effects emerge in the slip zone located at $-w_h/2 < y < w_h/2$ along the $x$-axis (Fig. \ref{FigMS}); the relative movement between the opposite surfaces is assumed to be accommodated entirely within the slip zone, which is regarded macroscopically as a one-dimensional (1D) mode III slip plane. The slip zone can be considered as a boundary of the medium from a macroscopic viewpoint. Thermal pressurization describes the elevation in fluid pressure based on the frictional heating associated with the dynamic fault slip \cite{Lac}. If frictional slip occurs, frictional heating increases the temperature, inducing expansion of the solid and fluid phases.
However, because it is easier to expand the fluid phase than the solid phase, the fluid pressure rises. On the other hand, dilatancy represents the inelastic porosity increase due to the fracturing of fault rocks by the fault slip, which reduces the fluid pressure \cite{Seg}. It should be emphasized that when the fluid pressure increases (decreases), the frictional stress decreases (increases) due to a reduction (increment) in the effective normal stress acting on the fault plane, inducing an increase (decrease) in the slip velocity. The competition between thermal pressurization and dilatancy induces complex feedback in the slip behavior, which can explain many aspects of the dynamic earthquake slip process (e.g., Suzuki and Yamashita (2014) \cite{Suz14}, referred to as SY14 below). However, this system has not yet been studied from the viewpoint of nonlinear mathematics, particularly with regard to the behaviors of attractors. This research therefore primarily aims at understanding the system with regard to the behaviors of attractors. \begin{figure}[tbp] \centering \includegraphics[width=8.5cm]{Fig1.eps} \caption{Model setup. The thermoporoelastic medium moves as the symbols show. See also Figure 1 in SY14.} \label{FigMS} \end{figure} We have the following system of governing equations within the slip zone (SY14): \begin{equation} \frac{1}{M} \frac{\partial p_D}{\partial t} = ((b-\phi_t) \alpha_s + \phi_t \alpha_f) \frac{\partial T_D}{\partial t} -\frac{\partial \phi_d}{\partial t}, \label{eqP} \end{equation} \begin{equation} ((1-\phi_t) \rho_s C_s + \phi_t \rho_f C_f) \frac{\partial T_D}{\partial t} = \frac{\sigma_{\mathrm{res}} v_D}{w_h}, \label{eqT} \end{equation} \begin{equation} v_D=\frac{2 \beta_v}{\mu} (\sigma_s^0 -\sigma_{\mathrm{res}}), \label{eqEOM} \end{equation} \begin{equation} \sigma_{\mathrm{res}}=-\mu_{\mathrm{slid}} (\sigma_n^0+p_D). \label{eqSres} \end{equation} Table I summarizes the meanings of the parameters. We neglected the advection effect and the adiabatic expansion of the solid phase; the validity of this assumption has been demonstrated in our previous study \cite{Suz10}. We also neglected the diffusion of fluid and heat, such that the Heaviside unit step function used in SY14 is not required and the variables do not depend on $y$, because the slip zone becomes an isolated system under these assumptions. \begin{table*} \caption{Properties, their meanings and values. Values are based on SY14.
However, the values are solely used for estimating $t^f_{\mathrm{ch}}$ and $t^h_{\mathrm{ch}}$, and are not adopted in the normalized equation system.} \begin{tabular}{lll} \hline\hline Properties & Physical Meanings & Values \\ \hline $b=1-\frac{K_v}{K_s} $ & & 0.2 \\ $C_s$ & Specific heat for the solid phase & $9.2 \times 10^2 \ \mathrm{J} \ \mathrm{kg}^{-1} \ \mathrm{K}^{-1}$ \\ $C_f$ & Specific heat for the fluid phase & $4.2 \times 10^3 \ \mathrm{J} \ \mathrm{kg}^{-1} \ \mathrm{K}^{-1}$ \\ $K_s$ & Bulk modulus of the solid phase & $3 \times 10^4$ MPa \\ $K_f$ & Bulk modulus of the fluid phase & $3.3 \times 10^3$ MPa \\ $K_v$ & Bulk modulus of the medium & $2.4 \times 10^4$ MPa \\ $M=\left( \frac{b-\phi_t}{K_s} +\frac{\phi_t}{K_f} \right)^{-1}$ & & $2.97 \times 10^4$ MPa \\ $p_D$ & Fluid pressure & -\footnotemark[1] \\ $T_D$ & Temperature & -\footnotemark[1] \\ $u_D$ & Slip & -\footnotemark[1] \\ $v_D$ & Slip velocity & -\footnotemark[1] \\ $w_h$ & Slip zone thickness & 3 mm to 3 cm \\ $\alpha_s$ & Thermal expansion coefficient of the solid phase & $1 \times 10^{-5}$ $\mathrm{K}^{-1}$ \\ $\alpha_f$ & Thermal expansion coefficient of the fluid phase & $2.1 \times 10^{-4}$ $\mathrm{K}^{-1}$ \\ $\beta_v=\sqrt{\frac{\mu}{(1-\phi_t)\rho_s +\phi_t \rho_f}}$ & Shear wave speed & $2.39 \times 10^3 \ \mathrm{m} \ \mathrm{s}^{-1}$ \\ $\eta$ & Fluid phase viscosity & $2.82 \times 10^{-4}$ Pa s \\ $\mu$ & Shear modulus of the medium & $1.44 \times 10^4$ MPa \\ $\mu_{\mathrm{slid}}$ & Sliding frictional coefficient & 0.6 \\ $\rho_s$ & Solid phase density & $2.7 \times 10^3 \ \mathrm{kg} \ \mathrm{m}^{-3}$ \\ $\rho_f$ & Fluid phase density & $1 \times 10^3 \ \mathrm{kg} \ \mathrm{m}^{-3}$ \\ $\sigma_n^0$ & Normal stress acting on the fault & $-2.5 \times 10^2$ MPa \\ $\sigma_s^0$ & Shear stress acting on the fault & $1 \times 10^2$ MPa \\ $\sigma_{\mathrm{res}}$ & Residual frictional stress & -\footnotemark[1] \\ $\phi_d$ & Inelastic porosity & -\footnotemark[1] \\ $\phi_e$ & Elastic porosity & 0.1 \\ $\phi_t$ & Total porosity ($=\phi_d+\phi_e$) & Assumed to be 0.1 \\ \hline\hline \footnotetext[1]{Functions of time} \end{tabular} \end{table*} We now consider the characteristic fluid diffusion time, $t^f_{\mathrm{ch}}$, and roughly estimate it here to show that neglecting fluid diffusion can be valid in the interior of the Earth. For the estimation, we should first mention that the slip zone thickness is known to be on the order of $\mu$m-cm \cite{Ric06, Ches, Hee, Sib, Uji07, Uji08, Kam, Row, Pla, Ric14}. For subduction thrust faults, the order of mm-cm seems to be reasonable \cite{Uji07, Uji08, Kam, Row}. On the other hand, some researchers insist that the zones have width on the order of $\mu$m \cite{Pla, Ric14}. However, significant along-strike variability may be observed in the localized zone thickness \cite{Ric14}, and the fault zones accommodating displacements are considered to have complex geometrical structures \cite{Ben}. We therefore assume an order of a few cm or less in this study. Additionally, the permeability $k$ is known to be on the order of $10^{-14}$-$10^{-21} \ \mathrm{m^2}$ \cite{Bra}. Let us estimate $t^f_{\mathrm{ch}}$ from the relation $w_h =\sqrt{kMt^f_{\mathrm{ch}}/\eta}$, where $\eta$ is the fluid phase viscosity. This relation is obtained because fluid diffusion can be described by adding a term $(k/\eta) \partial^2p_D/\partial y^2$ to the right hand side of Eq. (\ref{eqP}). 
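The estimates quoted below follow directly from the relations above and the parameter values in Table I; the short sketch below (ours) reproduces them.
\begin{verbatim}
# Characteristic diffusion times: fluid t_f = w_h^2 * eta / (k * M),
# heat t_h = rho_C * w_h^2 / lambda, with values from Table I.
eta = 2.82e-4        # fluid phase viscosity [Pa s]
M = 2.97e10          # modulus M [Pa]
lam = 1.0            # thermal conductivity [J/(m K s)]
phi_t = 0.1
rho_C = (1 - phi_t) * 2.7e3 * 9.2e2 + phi_t * 1.0e3 * 4.2e3

for w_h in (3e-3, 3e-2):            # slip zone thickness [m]
    for k in (1e-21, 1e-14):        # permeability [m^2]
        t_f = w_h ** 2 * eta / (k * M)
        print(f"w_h = {w_h} m, k = {k} m^2: t_f ~ {t_f:.3g} s")
    t_h = rho_C * w_h ** 2 / lam
    print(f"w_h = {w_h} m: t_h ~ {t_h:.3g} s")
\end{verbatim}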
When $k$ is $10^{-21} \ \mathrm{m}^2$, $w_h=3 \ \mathrm{mm}$ and $3 \ \mathrm{cm}$ give $t^f_{\mathrm{ch}} \sim 8.54 \times 10 \ \mathrm{s}$ and $8.54 \times 10^3 \ \mathrm{s}$, respectively, whereas when $k$ is $10^{-14} \ \mathrm{m}^2$, $w_h=3 \ \mathrm{mm}$ and $3 \ \mathrm{cm}$ lead to $t^f_{\mathrm{ch}} \sim 8.54 \times 10^{-6} \ \mathrm{s}$ and $8.54 \times 10^{-4} \ \mathrm{s}$, respectively (the values of $M$ and $\eta$ are listed in Table I). We then consider the characteristic heat diffusion time, $t^h_{\mathrm{ch}}$, defined as $t^h_{\mathrm{ch}}=((1-\phi_t) \rho_s C_s +\phi_t \rho_f C_f) w_h^2 /\lambda$, where $\lambda$ is the thermal conductivity of the medium. This definition is reasonable because the term $\lambda \partial^2 T_D/\partial y^2$ added to the right hand side of Eq. (\ref{eqT}) describes the heat diffusion. Using the values shown in Table I and $\lambda \sim 1 \ \mathrm{J/mKs}$ \cite{Zhe}, we have $t^h_{\mathrm{ch}} \sim 2.39 \times 10 \ \mathrm{s}$ and $2.39 \times 10^3 \ \mathrm{s}$ for $w_h=3 \ \mathrm{mm}$ and $3 \ \mathrm{cm}$, respectively. We can state the conditions for neglecting fluid and heat diffusions based on these results for $t^f_{\mathrm{ch}}$ and $t^h_{\mathrm{ch}}$. If we consider time scales $\le$ 10 s, neglecting heat diffusion is considered to be valid because $t^h_{\mathrm{ch}} > 10$ s. Additionally, if we consider events with this time scale, we can conclude that when $k$ is near the lower limit, fluid diffusion can be neglected (i.e., the system is undrained), whereas when $k$ is near the upper limit, the diffusion effect should be taken into account (the system is drained). We can thus insist that the undrained assumption corresponds to lower $k$, and such low $k$ is assumed henceforth. To summarize, the approximation of neglecting fluid and heat diffusions has been found to be valid for slip durations $\le 10 \ \mathrm{s}$ with $k$ near the lower limit. A duration below 10 s is the characteristic time scale of earthquakes with a moment magnitude $\le 7$; hence, earthquakes with a moment magnitude $\le 7$ will be considered below. However, note that the model developed here is 1D; therefore, the moment magnitude cannot be exactly defined. This will be stated again in Sec. \ref{secApp}. Equations (\ref{eqP}) and (\ref{eqT}) govern the temporal evolution of the fluid pressure and temperature, respectively. Equation (\ref{eqEOM}) provides a solution for the equation of motion (EOM) of the medium, with the boundary condition that the difference between the applied shear stress and the frictional stress (the stress drop, $\Delta \sigma$) is given by $\sigma_s^0-\sigma_{\mathrm{res}}$ \cite{Bru}. In SY14, the displacement appearing in the EOM was that of the solid phase; nonetheless, the displacements of the solid and fluid phases are exactly the same in the present system because fluid diffusion is neglected. Equation (\ref{eqSres}) provides the definition of $\sigma_{\mathrm{res}}$. Note that the normal stress acting on the fault, $\sigma_n^0$, is negative because compression stress is defined as negative. In particular, the first and second terms on the right hand side of Eq. (\ref{eqP}) stand for the thermal pressurization and dilatancy effects, respectively. We also assumed that $\phi_e \gg \phi_d$ and $\phi_t=\phi_e +\phi_d \sim \phi_e$ (constant) in Eqs. (\ref{eqP}-\ref{eqEOM}), which was confirmed to be reasonable from the viewpoint of laboratory experiments \cite{Mar} and numerical simulations (SY14, \cite{Suz10}).
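As a quick numerical check of Eq. (\ref{eqEOM}) with the Table I values (the worked example below is ours, not from SY14), a stress drop of 1 MPa corresponds to a slip velocity of roughly 0.3 m/s.
\begin{verbatim}
# Slip velocity from v_D = (2 * beta_v / mu) * (sigma_s0 - sigma_res).
beta_v = 2.39e3      # shear wave speed [m/s]
mu = 1.44e10         # shear modulus [Pa]
for dsigma in (1e6, 1e7):            # stress drop [Pa]
    v_D = 2.0 * beta_v / mu * dsigma
    print(f"stress drop {dsigma:.0e} Pa -> v_D = {v_D:.2f} m/s")
\end{verbatim}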
The framework here is constructed based on SY14. In fact, researchers have not reached agreement on the mathematical treatment of the motion of a thermoporoelastic medium, even without the dilatancy effect \cite{Bio56-a, Bio56-b, Pri92, Pri93}. Despite the lack of agreement, the differences emerging in each framework do not cause any qualitative change in the system behavior, because only the analytical forms of the coefficients appearing in the governing equations differ from one another. We can therefore use the framework here as a general framework for treating ITPD. To close the governing equation system, we also need the equation governing the inelastic porosity evolution, which will be referred to as the porosity evolution law. Some forms of the law have been suggested based on many laboratory experiments, although there exists no agreement concerning its analytical form \cite{Bart, Teu, Rud, Mar, Sle}. Therefore, we should derive robust results independent of the details of the porosity evolution law. For consistency with previous studies, we only assume that $\phi_d$ is a function of the slip, $u_D$, and is initially zero, i.e., $\phi_d=\phi_d (u_D)$ and $\phi_d (0)=0$. However, we require some conditions on the form of $\phi_d$ from the physical viewpoint. First, the function $\phi_d(u_D)$ is assumed to monotonically increase with increasing $u_D$, since we do not consider pore healing during a single slip event; this implies that $\partial \phi_d/\partial u_D$ is always nonnegative. Second, we also assume that $\displaystyle{ \lim_{u_D \to \infty} \partial \phi_d/ \partial t =0 }$, because the relation $0 \le \phi_d \le 1$ must be satisfied and $\phi_d$ must have an upper limit, which we call $\phi_{\mathrm{UL}}$ henceforth. Note that $\phi_{\mathrm{UL}}$ need not be equal to unity. From these statements, we have the porosity evolution law \begin{equation} \frac{\partial \phi_d}{\partial t} =\frac{\partial \phi_d}{\partial u_D} v_D, \label{eqPEL} \end{equation} for which the condition $\displaystyle{ \lim_{u_D \to \infty} \partial \phi_d/\partial u_D=0 }$ must be satisfied. We can obtain the normalized governing equation system from Eqs. (\ref{eqP}-\ref{eqPEL}) as \begin{equation} \dot{v}=v(1-v)- \beta f(u) v, \label{eqGovG1} \end{equation} \begin{equation} \dot{\phi}= f(u) v, \label{eqGovG2} \end{equation} where $v$ $(0 \le v \le 1)$ and $u$ are the normalized slip velocity and slip, respectively, $\phi$ is the normalized inelastic porosity $(0 \le \phi \le 1)$, $\beta$ is a positive constant, and $f(u)$ is a function of $u$. The overdot stands for differentiation with respect to the normalized time, $\tau$. To derive Eqs. (\ref{eqGovG1}) and (\ref{eqGovG2}), note that $v_D$ and $p_D$ are linearly related (see Eqs. (\ref{eqEOM}) and (\ref{eqSres})), allowing us to rewrite $p_D$ in terms of $v_D$.
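Explicitly, combining Eqs. (\ref{eqEOM}) and (\ref{eqSres}) yields the linear relation \begin{equation*} p_D=-\sigma_n^0-\frac{\sigma_{\mathrm{res}}}{\mu_{\mathrm{slid}}} =-\sigma_n^0-\frac{1}{\mu_{\mathrm{slid}}} \left( \sigma_s^0-\frac{\mu}{2 \beta_v} v_D \right), \end{equation*} so that $\dot{p}_D$ is proportional to $\dot{v}_D$; substituting this relation and Eq. (\ref{eqT}) into Eq. (\ref{eqP}) and normalizing leads to Eq. (\ref{eqGovG1}).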
The forms of $u$, $v$, $\phi$, $\beta$, $f(u)$ and $\tau$ are given by \begin{equation} u = \frac{((b-\phi_t) \alpha_s + \phi_t \alpha_f) M \mu_{\mathrm{slid}} }{((1-\phi_t) \rho_s C_s +\phi_t \rho_f C_f) w_h } u_D \equiv \frac{u_D}{U_{\mathrm{ref}}}, \label{eqNu} \end{equation} \begin{equation} v=\frac{\mu}{2 \beta_v \sigma_s^0} v_D, \label{eqNv} \end{equation} \begin{equation} \phi =\frac{\phi_d}{\phi_{\mathrm{UL}}}, \end{equation} \begin{equation} \beta =\frac{M \mu_{\mathrm{slid}} \phi_{\mathrm{UL}}}{\sigma_s^0}, \label{eqParBeta} \end{equation} \begin{equation} f(u)=\frac{\partial}{\partial u} \phi (U_{\mathrm{ref}} u), \end{equation} \begin{equation} \tau = \frac{2 \beta_v ((b-\phi_t) \alpha_s + \phi_t \alpha_f) \sigma_s^0 M \mu_{\mathrm{slid}} }{((1-\phi_t) \rho_s C_s +\phi_t \rho_f C_f) w_h \mu} t, \label{eqParTau} \end{equation} respectively. Based on the assumption for $\partial \phi_d/\partial u_D$, $f(u)$ is a nonnegative function. We also have the condition $\displaystyle{ \lim_{u \to \infty} f(u)=0 }$, owing to the condition $\displaystyle{ \lim_{u_D \to \infty} \partial \phi_d/\partial u_D=0 }$. Moreover, the parameter $\beta$ describes the contribution of inelastic porosity increase to the slip velocity change, because from Eqs. (\ref{eqGovG1}) and (\ref{eqGovG2}) one can obtain $\dot{v}=v(1-v)-\beta \dot{\phi}$. In fact, the temporal evolution equation for the normalized temperature can be given by \begin{equation} \dot{T}=v(1-v), \end{equation} where $T=T_D ((b-\phi_t) \alpha_s + \phi_t \alpha_f) M \mu_{\mathrm{slid}}/\sigma_s^0$ is the normalized temperature. Nonetheless, this will not be handled in the investigation below because the governing equation system (\ref{eqGovG1}) and (\ref{eqGovG2}) is closed in terms of $v$ and $\phi$ (note that $\displaystyle{ u=\int v d\tau }$), and the temperature can be determined by calculating $\displaystyle{ T=\int v(1-v) d \tau }$. We now describe $\phi$ in terms of $u$ for convenience during later analytical treatments. From Eq. (\ref{eqGovG2}), we have $\displaystyle{ \phi =\int f(u) du \equiv F(u) }$, so \begin{equation} u=F^{-1} (\phi), \label{eqRelUPhi} \end{equation} where $F^{-1}$ is an inverse function of $F$. Because $f(u)$ is a nonnegative function, $\displaystyle{ F(u) = \int f(u) du }$ is a monotonically increasing function in terms of $u$. Therefore, $F(u)$ clearly has its inverse function. Using Eq. (\ref{eqRelUPhi}), we have the following governing equations \begin{equation} \dot{v}=v(1-v)- \beta f(F^{-1}(\phi)) v, \label{eqGovG3} \end{equation} \begin{equation} \dot{\phi}= f(F^{-1}(\phi)) v. \label{eqGovG4} \end{equation} Equations (\ref{eqGovG3}) and (\ref{eqGovG4}) form the general framework treating ITPD in the 1D model, and our analytical treatment will be based on these equations henceforth. In particular, we will derive a kind of phase transition and universal criticality emerging near the phase transition point. \section{ATTRACTORS OBSERVED IN THE GOVERNING EQUATION SYSTEM} \label{secATT} \subsection{Geometrically different attractors} \label{secGEO} We mathematically demonstrate that two geometrically different attractors emerge within the present framework including $v$ and $\phi$. To show this, we consider the qualitative behavior of the solution orbit in $\phi-v$ space (Fig. \ref{FigSO}). We first consider nullclines, which are obtained by the conditions $\dot{v}=0$ and $\dot{\phi}=0$. 
\section{ATTRACTORS OBSERVED IN THE GOVERNING EQUATION SYSTEM} \label{secATT} \subsection{Geometrically different attractors} \label{secGEO} We mathematically demonstrate that two geometrically different attractors emerge within the present framework in terms of $v$ and $\phi$. To show this, we consider the qualitative behavior of the solution orbit in $\phi-v$ space (Fig. \ref{FigSO}). We first consider the nullclines, which are obtained from the conditions $\dot{v}=0$ and $\dot{\phi}=0$. For $\dot{v}=0$, the straight line $v=0$ and the curve $v=1-\beta f(F^{-1}(\phi)) (\equiv g(\phi))$ are nullclines from Eq. (\ref{eqGovG3}). The curve $v=g(\phi)$ in the $\phi-v$ space will be referred to as $C^{\mathrm{crit}}$ henceforth. For $\dot{\phi}=0$, the straight line $v=0$ and the curve $f(F^{-1}(\phi))(=(1-g(\phi))/\beta)=0$ are nullclines. Clearly, the line $v=0$ is a nullcline for both equations. Moreover, we show here that the curve $f(F^{-1}(\phi))=0$ is described by the straight line $\phi=1$ in $\phi-v$ space. Note that the condition $f(F^{-1}(\phi))=0$ corresponds to $F^{-1}(\phi) \to \infty$ based on the assumption $\displaystyle{ \lim_{u \to \infty} f(u)=0 }$; therefore, $\displaystyle{ \phi =\lim_{u \to \infty} F(u) }$ must be satisfied on such a nullcline. It should also be emphasized that (a) $\phi$ is a monotonically increasing continuous function of $u$, and (b) $\phi$ is normalized to have a maximum value of unity. These statements suggest that the nullcline $f(F^{-1}(\phi))=0$ is given by the straight line $\displaystyle{ \phi=\lim_{u \to \infty}F(u)=1 }$. From these statements, we can also conclude that $C^{\mathrm{crit}}$ terminates at the point $(1, 1)$. Additionally, we impose here an important condition on $C^{\mathrm{crit}}$. We assume the relation \begin{equation} g(0)<0. \label{eqCon1} \end{equation} This assumption and the fact that $C^{\mathrm{crit}}$ terminates at the point $(1, 1)$ allow us to conclude that $C^{\mathrm{crit}}$ crosses the $\phi$-axis in the region $0<\phi<1$, i.e., at least one positive $\phi_c$ satisfying \begin{equation} g(\phi_c)=0 \ \ \ \ \ (0 < \phi_c < 1) \label{eqPhic} \end{equation} is assumed to exist (see Fig. \ref{FigSO}). The same condition has been treated in several previous studies (SY14, \cite{Ric06}). \begin{figure}[tbp] \centering \includegraphics[width=8.5cm]{Fig2.eps} \caption{Solution orbits. The blue and red curves are absorbed into the line and point attractors, respectively. Small arrows denote the directions of solution movement with increasing time. Points $(0, v_{01})-(0, v_{04})$ are those where the solution orbits cross the $v$-axis.} \label{FigSO} \end{figure} We require another important assumption for the solution orbit, namely that the orbits cross the $v$-axis in the region $0<v<1$; that is, we use the condition $0<v_0<1$, where $v_0$ is the value of $v$ at $\phi=0$, for the sake of the physical and seismological applications in the latter part of this paper (the physically meaningful regions for $\phi$ and $v$ are $0 \le \phi \le 1$ and $0 \le v \le 1$, respectively). However, because the treatment in the present subsection is mathematical, the orbit can pass through the region $v<0$. First, we assume that $g'(\phi) \ge 0$ is satisfied for $0 \le \phi \le 1$, where the prime denotes differentiation with respect to $\phi$. We also assume that if points satisfying $g'(\phi)=0$ exist, they are isolated. With these assumptions, Eq. (\ref{eqPhic}) has a single positive root (which can be a multiple one). In this case, $C^{\mathrm{crit}}$ is clearly right-upward and crosses the $\phi$-axis once (Fig. \ref{FigSO}). The sign of the gradient at a given point on the solution orbit depends on whether the point is on the upside or the underside of $C^{\mathrm{crit}}$, because $dv/d \phi=0$ on $C^{\mathrm{crit}}$. If the point is on the upside of $C^{\mathrm{crit}}$, the gradient is negative, whereas if it is on the underside of $C^{\mathrm{crit}}$, the gradient is positive. 
The orbit becomes horizontal at a point where it crosses $C^{\mathrm{crit}}$. Moreover, it should be noted that the orbit is neither horizontal nor vertical at a point where it crosses the line $v=0$, even though the line is a nullcline. This occurs because $v=0$ is a nullcline for both equations; the condition $\dot{v}=\dot{\phi}=0$ is satisfied on the line $v=0$, enabling $dv/d \phi$ to be nonzero and finite there. From these statements and Fig. \ref{FigSO}, the orbit is found to connect the point $(0, v_0)$ with $(1, 1)$. In fact, the point $(1,0)$ is also a fixed point of Eqs. (\ref{eqGovG3}) and (\ref{eqGovG4}), but no orbit connecting $(0,v_0)$ with $(1,0)$ exists, as shown in Appendix \ref{secAA}. Figure \ref{FigSO} also shows arrows indicating the directions of evolution of the solutions with increasing time. These arrows are easily obtained from the relation $\dot{\phi}=f(F^{-1}(\phi)) v$: because $f(F^{-1}(\phi))$ is always positive except on the nullcline $\phi=1$, as noted in Sec. \ref{secMS}, the direction of the solution is determined only by the sign of $v$. If $v>0$, the solution moves rightward, whereas if $v<0$, it moves leftward with increasing time. The solution orbits and the arrows shown in Fig.~\ref{FigSO} illustrate that we have an attractor and a repeller on the $\phi$-axis; $\{ (\phi_a, 0) | \ 0 \le \phi_a \le \phi_c \}$ is an attractor, and $\{ (\phi_r, 0) | \ \phi_c \le \phi_r \le 1 \}$ is a repeller, where $\phi_a$ and $\phi_r$ are real numbers. In particular, note that $\phi_a$ and $\phi_r$ take continuous values. These non-isolated fixed points appear because the line $v=0$ is a common nullcline for both equations, and this is a noteworthy behavior of the present system. In addition, the point $(1, 1)$ is also an attractor, because $C^{\mathrm{crit}}$ and all the orbits are absorbed into the point $(1, 1)$, and the solutions move rightward with increasing time where $v>0$. We can therefore categorize the attractors into two geometrically different groups, given by the line $ \{ (\phi_a, 0) | \ 0 \le \phi_a \le \phi_c \}$ or the point $(1,1)$ (see the green line and point in Fig. \ref{FigSO}). We will refer to the former and latter attractors as line and point attractors, respectively. An exact analytical investigation determining which attractor appears will be carried out in Sec. \ref{secPT}. In Fig. \ref{FigSO}, we assumed that $g'(\phi) \ge 0$ for $0 \le \phi \le 1$ and that the curve $C^{\mathrm{crit}}$ was right-upward. Here, we allow $g'(\phi)$ to be negative at certain values of $\phi$. Transitions from right-upward (right-downward) to right-downward (right-upward) with increasing $\phi$ (referred to as up$-$down (down$-$up) transitions henceforth) emerge under this condition. These cases correspond to the condition that $g(\phi)$ has maximal (for an up$-$down transition) and minimal (for a down$-$up transition) values. First, let us consider $C^{\mathrm{crit}}$ with a single up$-$down transition and a single down$-$up transition. Under this condition, the case where $C^{\mathrm{crit}}$ crosses the $\phi$-axis once is shown in Fig. \ref{FigSOmm}(a). In Fig. \ref{FigSOmm}(a), although an orbit crossing $C^{\mathrm{crit}}$ three times appears, the attractors are the same as those observed in Fig. \ref{FigSO}. Additionally, the case where $C^{\mathrm{crit}}$ crosses the $\phi$-axis more than once is shown in Fig. \ref{FigSOmm}(b). When there exist two or more $\phi_c$, we refer to the $m$th smallest one as $\phi_c^m$, where $m$ is a positive integer. 
When $\phi_c$ is a multiple root, it is regarded as a single $\phi_c^m$. In this case, the area where $v < g(\phi)$ (i.e., the underside of $C^{\mathrm{crit}}$) cannot include attractors, for the following reasons. The solution must move away from the $\phi$-axis there because (a) $dv/d \phi >0$ is satisfied and (b) a solution describing an infinitesimal perturbation from the $\phi$-axis in the positive (negative) $v$ direction moves rightward (leftward) with increasing time. Therefore, there exist two line attractors on the $\phi$-axis, and the point $(1,1)$ is also an attractor. We can thus conclude that even if $C^{\mathrm{crit}}$ does not rise monotonically with increasing $\phi$, two geometrically different attractors emerge, and only the number of line attractors changes. \begin{figure}[tbp] \centering \includegraphics[width=8.5cm]{Fig3a.eps} \includegraphics[width=8.5cm]{Fig3b.eps} \caption{Solution orbits. The blue and red curves are absorbed into the line and point attractors, respectively. (a) Case wherein the nullcline $C^{\mathrm{crit}}$ crosses the $\phi$-axis once. (b) Case wherein the nullcline $C^{\mathrm{crit}}$ crosses the $\phi$-axis more than once. Points $(0, v_{01})-(0, v_{07})$ are those where the solution orbits cross the $v$-axis. The dotted rectangles A and B describe the examples treated in Figs. \ref{FigSOcc}(d) and \ref{FigSOcc}(a), respectively.} \label{FigSOmm} \end{figure} We now investigate the behavior of the line attractors near the transition points in detail. Let us first consider the case where $C^{\mathrm{crit}}$ has a down$-$up transition. If the transition occurs under the $\phi$-axis, $C^{\mathrm{crit}}$ crosses the $\phi$-axis and the line attractor appears (Fig. \ref{FigSOcc}(a)). However, if the $\phi$-axis becomes tangent to $C^{\mathrm{crit}}$, the attractor converges to a point (Fig. \ref{FigSOcc}(b)). This converged line attractor is a special case of the line attractor, and we do not refer to it as a point attractor. If the down$-$up transition occurs above the $\phi$-axis, the attractor vanishes (Fig. \ref{FigSOcc}(c)). We now treat the case in which $C^{\mathrm{crit}}$ has an up$-$down transition. If such a transition occurs above the $\phi$-axis, $C^{\mathrm{crit}}$ crosses the $\phi$-axis and two line attractors divided by $C^{\mathrm{crit}}$ emerge in the vicinity of the transition point (Fig. \ref{FigSOcc}(d)). However, if the $\phi$-axis becomes tangential to $C^{\mathrm{crit}}$, the line attractors coalesce and turn into a single line attractor (Fig. \ref{FigSOcc}(e)). If the up$-$down transition occurs under the $\phi$-axis, a single line attractor will be observed (Fig. \ref{FigSOcc}(f)). \begin{figure*}[tbp] \centering \begin{tabular}{ccc} \begin{minipage}[t]{5.cm} \includegraphics[width=5.cm]{Fig4a.eps} \end{minipage} & \begin{minipage}[t]{5.cm} \includegraphics[width=5.cm]{Fig4b.eps} \end{minipage} & \begin{minipage}[t]{5.cm} \includegraphics[width=5.cm]{Fig4c.eps} \end{minipage} \end{tabular} \\ \begin{tabular}{ccc} \begin{minipage}[t]{5.cm} \vspace{-5.3cm} \includegraphics[width=5.cm]{Fig4d.eps} \end{minipage} & \begin{minipage}[t]{5.cm} \vspace{-5.3cm} \includegraphics[width=5.cm]{Fig4e.eps} \end{minipage} & \begin{minipage}[t]{5.cm} \includegraphics[width=5.cm]{Fig4f.eps} \end{minipage} \end{tabular} \caption{Down$-$up and up$-$down transitions of $C^{\mathrm{crit}}$. Green lines describe line attractors, and a point indicated in green represents a converged line attractor. 
(a) The down$-$up transition emerges on the underside of the $\phi$-axis, and $C^{\mathrm{crit}}$ crosses the $\phi$-axis. (b) The $\phi$-axis is tangential to the down$-$up transition. (c) The down$-$up transition occurs above the $\phi$-axis. (d) The up$-$down transition emerges on the upside of the $\phi$-axis, and $C^{\mathrm{crit}}$ crosses the $\phi$-axis. (e) The $\phi$-axis is tangential to the up$-$down transition. (f) The up$-$down transition occurs on the underside of the $\phi$-axis.} \label{FigSOcc} \end{figure*} \begin{figure*}[tbp] \centering \begin{tabular}{cc} \begin{minipage}[t]{5cm} \includegraphics[width=5.cm]{Fig5a.eps} \end{minipage} & \begin{minipage}[t]{5cm} \includegraphics[width=5.cm]{Fig5b.eps} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}[t]{5cm} \includegraphics[width=5.cm]{Fig5c.eps} \end{minipage} & \begin{minipage}[t]{5cm} \vspace{-3.25cm} \includegraphics[width=5.cm]{Fig5d.eps} \end{minipage} \end{tabular} \caption{Cases wherein the $\phi$-axis is tangential to $C^{\mathrm{crit}}$. Green lines describe line attractors, and a point indicated in green represents a converged line attractor. The values $n$ and $g^{(n)}(\phi_c^m) (\neq 0)$ are (a) odd and negative, (b) odd and positive, (c) even and positive, and (d) even and negative. The points $(\phi_c^m, 0)$ are (a) the left end point of the line attractor, (b) the right end point of the line attractor, (c) the converged line attractor, and (d) neither the left nor the right end point of the line attractor.} \label{FigSOtan} \end{figure*} \begin{figure}[tbp] \centering \includegraphics[width=8.5cm]{Fig6.eps} \caption{General form of $C^{\mathrm{crit}}$. The $m$th smallest positive solution of $g(\phi)=0$ is denoted by $\phi_c^m$. The equation is assumed to have $N$ different solutions, where $N$ is a natural number. In this example, $\phi_c^3$ is a multiple root of $g(\phi)=0$.} \label{FigSOgen} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=8.5cm]{Fig7.eps} \caption{Physically prohibited region. The light blue curves cross $(\phi_c^m, 0)$. The solid parts of the orbits are physically meaningful, whereas the dotted parts represent intervals of the mathematical solution which do not have a real physical meaning. The physically meaningful orbits cannot approach the region labeled PPR, even though the region is on the line attractor.} \label{FigPPR} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=8.5cm]{Fig8.eps} \caption{Solution orbits whose configuration is an enlarged version of Fig. \ref{FigSO} in the vicinity of $(\phi_{\mathrm{right}}^1, 0)$. The light blue curve represents the critical manifold. The blue curves illustrate the solution orbits absorbed into the first line attractor. The red curve indicates the solution orbit absorbed into the point attractor. Short black arrows show the direction of solution movement with increasing time. Two variables, $\delta v_+^1$ and $\delta \phi_+^1$, for two orbits are also shown. See details in the text.} \label{FigSOen} \end{figure} We can summarize the behaviors of the line attractors in terms of $g$ and its derivatives. Suppose that $g^{(j)}(\phi_c^m)=0$ for every nonnegative integer $j$ satisfying $0 \le j \le n-1$ and that $g^{(n)}(\phi_c^m) \neq 0$, where $g^{(j)}$ stands for the $j$th derivative of $g$ with respect to $\phi$ and $n$ is a positive integer (note that $g^{(0)}=g$). 
If $n$ is odd and $g^{(n)}(\phi_c^m)<0$, the point $(\phi_c^m, 0)$ is the left end point of a line attractor, as for $\phi_c^2$ in Fig. \ref{FigSOmm}(b) and $\phi_c^m$ in Fig. \ref{FigSOtan}(a). If $n$ is odd and $g^{(n)}(\phi_c^m)>0$, the point $(\phi_c^m, 0)$ is the right end point of a line attractor, as for $\phi_c^1$ and $\phi_c^3$ in Fig. \ref{FigSOmm}(b) and $\phi_c^m$ in Fig. \ref{FigSOtan}(b). If $n$ is even and $g^{(n)}(\phi_c^m)>0$, a converged line attractor is generated at $(\phi_c^m, 0)$, as shown in Figs. \ref{FigSOcc}(b) and \ref{FigSOtan}(c). If $n$ is even and $g^{(n)}(\phi_c^m)<0$, the coalesced line attractor is observed, as in Figs. \ref{FigSOcc}(e) and \ref{FigSOtan}(d). The general form of the line attractors is shown in Fig. \ref{FigSOgen}. The value of $\phi_c^m$ at the left (right) end of the $i$th line attractor from the left will be referred to as $\phi_{\mathrm{left}}^i$ ($\phi_{\mathrm{right}}^i$) henceforth (by definition, $\phi_{\mathrm{left}}^1=0$), where $i$ is a positive integer. When a converged line attractor emerges at $(\phi_c^m, 0)$, the point is regarded as both the left and the right end point, whereas when line attractors coalesce at $(\phi_c^m, 0)$, the point is neither the left nor the right end point. Finally, we emphasize that the emergence of two geometrically different attractors does not depend on the details of $g(\phi)(=1-\beta f(F^{-1}(\phi)))$, i.e., on the porosity evolution law; only the number of line attractors varies with these details. \subsection{Important concepts from the physical viewpoint} \label{secPI} Here, we introduce some physically meaningful concepts associated with the current mathematical treatment. First, we introduce ``phases'' of the attractors for the later discussion of the phase transition. The line and point attractors physically correspond to $\displaystyle{\lim_{\tau \to \infty} v=0 }$ and $1$, respectively. Hence, we refer to the former and latter cases as the cessation and high-speed phases, respectively, from the physical viewpoint. The physical elementary process realizing these phases will be explained in Sec. \ref{secApp}. We now clarify the physically meaningful solution orbits. For the cases shown in Fig. \ref{FigSO}, the orbit physically starts on the $v$-axis ($\phi=0$). Along the orbits beginning with $v_0=v_{01}$ or $v_{02}$ in Fig. \ref{FigSO}, only the parts before absorption into the green line are physical; the other parts are purely mathematical and unphysical. Additionally, for the cases shown in Fig. \ref{FigSOmm}(b), the solution orbits crossing the green lines are physical before absorption, as in Fig. \ref{FigSO}. However, a physically prohibited region (PPR) can emerge near $\phi_{\mathrm{left}}^{i+1}$ (Fig. \ref{FigPPR}). We define the point $(\phi_{\mathrm{PPR}}^i, 0)$ as the point where the orbit crossing $(\phi_{\mathrm{right}}^i, 0)$ again crosses the $\phi$-axis. If such an orbit is absorbed into the point attractor, we define $\phi_{\mathrm{PPR}}^i=1$. The solutions cannot be absorbed into the region $\{ (\phi, 0)| \ \phi_{\mathrm{left}}^{i+1} \le \phi \le \phi_{\mathrm{PPR}}^i \}$ for any physically meaningful initial condition, even after an infinitely long time, and this region will be referred to as the PPR. The left end points of the line attractors other than the origin $(\phi_{\mathrm{left}}^1, 0)$ must be accompanied by the PPR. If $\phi_{\mathrm{right}}^{i+1} < \phi_{\mathrm{PPR}}^i$, the solutions cannot be absorbed into the $(i+1)$th line attractor. 
To summarize, although all green lines in Fig. \ref{FigSOgen} represent line attractors in a mathematical sense, not all points on the attractors describe physically meaningful solutions. \section{PHASE TRANSITION AND UNIVERSALITY} \label{secPT} From the investigation above, we can conclude that a system including ITPD generates a phase transition between the cessation and the high-speed phases. We now show that universal (scale-independent) behavior emerges in the vicinity of the phase transition point in the present model by considering perturbations in $v_0$; this is similar to other phase transitions, e.g., the susceptibility in phase transitions of the second kind and the spanning probability in percolation \cite{Flo, Sta}. Such universality has many implications for the behavior of the final slip amount, which plays an important role in our understanding of the dynamic earthquake slip process, as shown in Sec. \ref{secApp}. To derive the universality, consider the region of Fig. \ref{FigSO} near the point $(\phi_{\mathrm{right}}^1, 0)$, which is enlarged in Fig. \ref{FigSOen}. In this case, $C^{\mathrm{crit}}$ is assumed to cross the $\phi$-axis once at $(\phi_c^1, 0)$ with $g'(\phi_c^1)>0$, and we have the single solution $\phi_c=\phi_c^1=\phi_{\mathrm{right}}^1$ of Eq. (\ref{eqPhic}). Note that we have a manifold dividing the point attractor and the line attractor (i.e., the basin boundary), drawn as the light blue curve in Fig. \ref{FigSOen}. We refer to this manifold as the critical manifold; it begins at $(0, v_{\mathrm{right}}^1)$ and ends at $(\phi_{\mathrm{right}}^1, 0)$, where $(0, v_{\mathrm{right}}^i)$ is the point where the manifold crossing $(\phi_{\mathrm{right}}^i, 0)$ crosses the $v$-axis. If $v_0>v_{\mathrm{right}}^1$, the point attractor emerges, whereas if $v_0<v_{\mathrm{right}}^1$, the line attractor is realized. Let us consider here the orbits in the neighborhood of the critical manifold. We assume that these orbits are absorbed into the line attractor, i.e., the assumption $v_0<v_{\mathrm{right}}^1$ is used here. In the $\phi-v$ space, if the initial point of the orbit is given by $(0, v_{\mathrm{right}}^1-\delta v_+^1)$, its end point is expected to be described by $(\phi_{\mathrm{right}}^1 -\delta \phi_+^1, 0)$, where $\delta v_+^1$ and $\delta \phi_+^1$ are positive amounts that satisfy $\delta v_+^1 \ll 1$ and $\delta \phi_+^1 \ll 1$. We will show that $\delta \phi_+^1$ and $\delta v_+^1$ are related via a simple power law, \begin{equation} \delta \phi_+^1 \propto (\delta v_+^1)^{\alpha}, \label{eqPgen} \end{equation} and obtain the value of the critical exponent $\alpha$ in the remainder of this section. Moreover, we will show that the details of the porosity evolution law do not affect the critical exponent. \subsection{Derivation of the power law} \label{secDPL} To show the power law (\ref{eqPgen}), note that Eqs. (\ref{eqGovG3}) and (\ref{eqGovG4}) yield \begin{equation} \frac{d v}{d \phi} =\frac{1-v-\beta f(F^{-1}(\phi))}{f(F^{-1}(\phi))}=\beta \frac{1-v}{1-g(\phi)} -\beta. \label{eqGovl} \end{equation} From Eq. (\ref{eqGovl}), we obtain the solution for $v$ in terms of $\phi$ by the method of variation of constants, which leads to \begin{equation} v = -\beta e^{-\beta A(\phi)} (B(\phi)-B(0)) +e^{-\beta (A(\phi)-A(0))} (v_0-1)+1, \label{eqSolPhi} \end{equation} where $\displaystyle{ A(\phi) \equiv \int^{\phi} d \phi^{\ast} /(1-g(\phi^{\ast})) }$ and $\displaystyle{ B(\phi) \equiv \int^{\phi} e^{\beta A(\phi^{\ast})} d \phi^{\ast} }$. The initial condition $v|_{\tau=0}=v_0$ at $\phi|_{\tau=0}=0$ is used. Equation (\ref{eqSolPhi}) gives the solution orbit in $\phi-v$ space. 
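As an elementary consistency check of Eq. (\ref{eqSolPhi}) (our own sketch, not part of the original derivation), one can verify symbolically that it satisfies Eq. (\ref{eqGovl}); for concreteness we take $g(\phi)=1-S_u(1-\phi)$, the form that reappears in Sec. \ref{secApp}, for which $A$ and $B$ have the closed forms used below.
\begin{verbatim}
# Symbolic check that Eq. (eqSolPhi) solves Eq. (eqGovl) for the
# concrete choice g(phi) = 1 - S_u (1 - phi), with beta = S_u/T_a.
import sympy as sp

phi, v0, Su, Ta = sp.symbols('phi v0 S_u T_a', positive=True)
beta = Su/Ta
g = 1 - Su*(1 - phi)
A = -sp.log(1 - phi)/Su                  # satisfies A'(phi) = 1/(1 - g)
B = -Ta*(1 - phi)**(1 - 1/Ta)/(Ta - 1)   # satisfies B'(phi) = exp(beta*A)
v = (-beta*sp.exp(-beta*A)*(B - B.subs(phi, 0))
     + sp.exp(-beta*(A - A.subs(phi, 0)))*(v0 - 1) + 1)
residual = sp.diff(v, phi) - (beta*(1 - v)/(1 - g) - beta)
print(sp.simplify(residual))             # expected output: 0
\end{verbatim}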
We then obtain the value of $\phi_{\mathrm{right}}^1$ from Eq. (\ref{eqPhic}). This equation yields \begin{equation} \phi_{\mathrm{right}}^1=g^{-1} (0), \label{eqSolPhic} \end{equation} where $g^{-1}$ is the inverse function of $g$. The value $g^{-1}(0)$ is uniquely defined within the current framework. Note that we keep the symbol $\phi_{\mathrm{right}}^1$ in the analytical treatment below to simplify the notation, because writing $g^{-1}(0)$ would make the expressions cumbersome; where $\phi_{\mathrm{right}}^1$ is not used explicitly, solution (\ref{eqSolPhic}) should be employed. Next, we obtain $v_{\mathrm{right}}^1$ from Eq. (\ref{eqSolPhi}). This value can be derived from the condition $v=0$ with $v_0=v_{\mathrm{right}}^1$ and $\phi=\phi_{\mathrm{right}}^1$ in Eq. (\ref{eqSolPhi}): \begin{eqnarray} &-&\beta e^{-\beta A(\phi_{\mathrm{right}}^1)} (B(\phi_{\mathrm{right}}^1)-B(0)) \nonumber \\ &+&e^{-\beta (A(\phi_{\mathrm{right}}^1)-A(0))} (v_{\mathrm{right}}^1-1)+1=0, \end{eqnarray} which reads \begin{equation} v_{\mathrm{right}}^1=\beta e^{-\beta A(0)} (B(\phi_{\mathrm{right}}^1)-B(0))-e^{\beta (A(\phi_{\mathrm{right}}^1)-A(0))} +1. \label{eqvc} \end{equation} For the existence of a line attractor, the condition $v_{\mathrm{right}}^1>0$ must be satisfied, which is guaranteed if Eq. (\ref{eqCon1}) holds, as shown in Appendix \ref{secAc}. We now consider the behavior of a solution orbit that lies near the critical manifold and is absorbed into the line attractor. We assume $v_0=v_{\mathrm{right}}^1 -\delta v_+^1$ and $\phi_{\infty}=\phi_{\mathrm{right}}^1 -\delta \phi_+^1$, where $\displaystyle{ \phi_{\infty} \equiv \lim_{\tau \to \infty} \phi }$ is the final inelastic porosity. From Eqs. (\ref{eqSolPhi}) and (\ref{eqvc}), we have \begin{eqnarray} &-&\beta e^{-\beta A(\phi_{\mathrm{right}}^1-\delta \phi_+^1)} (B(\phi_{\mathrm{right}}^1-\delta \phi_+^1)-B(0)) \nonumber \\ &+&e^{-\beta (A(\phi_{\mathrm{right}}^1-\delta \phi_+^1)-A(0))} ( \beta e^{-\beta A(0)} (B(\phi_{\mathrm{right}}^1)-B(0)) \nonumber \\ &-&e^{\beta (A(\phi_{\mathrm{right}}^1)-A(0))}-\delta v_+^1) +1=0. \label{eqEx1} \end{eqnarray} Note here that the expansion of $B(\phi_{\mathrm{right}}^1-\delta \phi_+^1)$ is given by \begin{eqnarray} B(\phi_{\mathrm{right}}^1-\delta \phi_+^1) = &B&(\phi_{\mathrm{right}}^1) \nonumber \\ &-&e^{\beta A(\phi_{\mathrm{right}}^1)} \delta \phi_+^1 +\frac{\beta}{2} e^{\beta A(\phi_{\mathrm{right}}^1)} (\delta \phi_+^1)^2 \nonumber \\ &+&O((\delta \phi_+^1)^3), \label{eqEx2} \end{eqnarray} because we have the relations \begin{equation} \frac{dB}{d \phi} \Big|_{\phi=\phi_{\mathrm{right}}^1} =e^{\beta A(\phi_{\mathrm{right}}^1)}, \end{equation} and \begin{eqnarray} \frac{d^2 B}{d \phi^2} \Big|_{\phi=\phi_{\mathrm{right}}^1} &=& \frac{d}{d \phi} e^{\beta A(\phi)} \Big|_{\phi=\phi_{\mathrm{right}}^1} \nonumber \\ &=&\beta \frac{e^{\beta A(\phi)}}{1-g(\phi)} \Big|_{\phi=\phi_{\mathrm{right}}^1}=\beta e^{\beta A(\phi_{\mathrm{right}}^1)} \label{eqA2} \end{eqnarray} (see also Eq. (\ref{eqSolPhic})). Equations (\ref{eqEx1}) and (\ref{eqEx2}) lead to \begin{eqnarray} \beta \delta \phi_+^1 -\frac{\beta^2}{2} (\delta \phi_+^1)^2 -1 -e^{-\beta( A(\phi_{\mathrm{right}}^1)-A(0))} \delta v_+^1 \nonumber \\ +e^{\beta( A(\phi_{\mathrm{right}}^1-\delta \phi_+^1)-A(\phi_{\mathrm{right}}^1))} +O((\delta \phi_+^1)^3) =0. 
\label{eqEx3} \end{eqnarray} Here, we have multiplied both sides of Eq. (\ref{eqEx1}) by $\exp(\beta(A(\phi_{\mathrm{right}}^1-\delta \phi_+^1)-A(\phi_{\mathrm{right}}^1)))$. Additionally, we have another important expansion in terms of $\delta \phi_+^1$: \begin{eqnarray} &A&(\phi_{\mathrm{right}}^1-\delta \phi_+^1)-A(\phi_{\mathrm{right}}^1) \nonumber \\ &=&-\frac{d A(\phi)}{d \phi} \Big|_{\phi=\phi_{\mathrm{right}}^1} \delta \phi_+^1 +\frac{1}{2} \frac{d^2 A(\phi)}{d \phi^2} \Big|_{\phi=\phi_{\mathrm{right}}^1} (\delta \phi_+^1)^2 \nonumber \\ &+&O((\delta \phi_+^1)^3) \nonumber \\ &=&-\frac{\delta \phi_+^1}{1-g(\phi_{\mathrm{right}}^1)} +\frac{g'(\phi_{\mathrm{right}}^1)}{2 (1-g(\phi_{\mathrm{right}}^1))^2} (\delta \phi_+^1)^2 +O((\delta \phi_+^1)^3) \nonumber \\ &=&- \delta \phi_+^1 +\frac{g'(\phi_{\mathrm{right}}^1)}{2} (\delta \phi_+^1)^2 +O((\delta \phi_+^1)^3). \label{eqEx4} \end{eqnarray} Using Eqs. (\ref{eqEx3}) and (\ref{eqEx4}) and expanding the exponential function in Eq. (\ref{eqEx3}), we find that the terms of orders $(\delta \phi_+^1)^0$ and $(\delta \phi_+^1)^1$ vanish, leaving us with \begin{equation} \frac{\beta g'(\phi_{\mathrm{right}}^1)}{2} (\delta \phi_+^1)^2 -e^{-\beta( A(\phi_{\mathrm{right}}^1)-A(0))} \delta v_+^1 +O((\delta \phi_+^1)^3) =0. \end{equation} Neglecting the terms of order $(\delta \phi_+^1)^3$ and higher, we obtain a simple power law between $\delta \phi_+^1$ and $\delta v_+^1$: \begin{equation} \delta \phi_+^1 = (\delta v_+^1)^{1/2} \sqrt{ \frac{2 e^{\beta( A(0)-A(\phi_{\mathrm{right}}^1))}}{\beta g' (\phi_{\mathrm{right}}^1)} }. \label{eqP2} \end{equation} Note that we have assumed $g'(\phi_{\mathrm{right}}^1) >0$. The symbol $\phi_{\mathrm{right}}^1$ can be eliminated using Eq. (\ref{eqSolPhic}), which leads to a power-law relation expressed in terms of $\beta$ and $g(\phi)$. We should emphasize that the exponent 1/2 is universal and does not depend on either $\beta$ or the details of $g(\phi)$. We also have the other right end points, $\phi_{\mathrm{right}}^{i_1}$ $(i_1 \ge 2)$. Furthermore, we should treat the case $g'(\phi_c^m) \le 0$. However, as noted in Sec. \ref{secPI}, the end points other than $\phi_{\mathrm{right}}^1$ and $\phi_{\mathrm{left}}^1$ can be in the PPR, depending on the form of $C^{\mathrm{crit}}$, and physically natural solutions may not approach their vicinity. In addition, the possibility that the $\phi$-axis actually becomes tangential to $C^{\mathrm{crit}}$ (i.e., $g'(\phi_c^m)=0$) in natural faults is considered to be negligibly low. We will therefore consider only the region near $(\phi_{\mathrm{right}}^1, 0)$ from the physical and seismological viewpoints in Sec. \ref{secApp}; mathematical discussions of the other $\phi_c^m$ will be given in Sec. \ref{secDisCon}. \subsection{Other important suggestions about the phase transition and the power law} \label{secPLS} We have another important conclusion about the parameters governing the phase transition, based on the results obtained in this section. Note that the sign of $v_0-v_{\mathrm{right}}^1$ determines the phase transition: if it is positive, the high-speed phase emerges, whereas if it is negative, the cessation phase is realized. Based on Eq. 
(\ref{eqvc}), we have the relation \begin{eqnarray} v_0-v_{\mathrm{right}}^1=v_0 &-& \beta e^{-\beta A(0)} (B(\phi_{\mathrm{right}}^1)-B(0)) \nonumber \\ &+& e^{\beta (A(\phi_{\mathrm{right}}^1)-A(0))} -1, \label{eqGS} \end{eqnarray} which includes the parameter $\beta$ and the functions $A(\phi_{\mathrm{right}}^1)$ and $B(\phi_{\mathrm{right}}^1)$. These functions can, by definition, be expressed in terms of $g(\phi)$, and $\phi_{\mathrm{right}}^1$ can be written in terms of $g^{-1}$ (see Eq. (\ref{eqSolPhic})). The parameters $\beta$ and $v_0$ as well as the function $g(\phi)$ are thus found to govern the phase transition, and the governing function is given by the right-hand side of Eq. (\ref{eqGS}). We can also obtain a relation similar to Eq. (\ref{eqP2}) by considering the value of $\displaystyle{ u=\int v d \tau }$. We consider the region near $(\phi_{\mathrm{right}}^1,0)$ here. Let us assume that $u_{\infty}=u_{\mathrm{right}}^1-\delta u_+^1$ for the cessation phase, where $\displaystyle{ u_{\infty} \equiv \lim_{\tau \to \infty} u }$ is the final slip amount, $u_{\mathrm{right}}^1=u_{\infty}|_{v_0=v_{\mathrm{right}}^1}$ and $\delta u_+^1$ is a positive amount satisfying $\delta u_+^1 \ll 1$. The relation $\delta \phi_+^1 = (1-g(\phi_{\mathrm{right}}^1)) \delta u_+^1/\beta =\delta u_+^1 /\beta$ (derived from Eqs. (\ref{eqGovG4}) and (\ref{eqSolPhic})) and Eq. (\ref{eqP2}) give the simple power law between $\delta u_+^1$ and $\delta v_+^1$: \begin{equation} \delta u_+^1 = (\delta v_+^1)^{1/2} \sqrt{ \frac{2 \beta e^{\beta(A(0)-A(\phi_{\mathrm{right}}^1))}}{g' (\phi_{\mathrm{right}}^1)} }. \label{eqP3} \end{equation} Thus, we have a power law with the same critical exponent 1/2 as observed in Eq. (\ref{eqP2}). Equations (\ref{eqP2}) and (\ref{eqP3}), derived from the mathematical viewpoint, imply that, in the phase space, the values on two line segments, one for the initial values and the other for the final values, are related via a simple power law from a physical viewpoint. Physical and seismological implications associated with this statement are given in Sec. \ref{secApp}. Note that below we consider only the region in the vicinity of the point $(\phi_{\mathrm{right}}^1,0)$, and we write $\phi_c$, $v_c$, $u_c$, $\delta v$, and $\delta u$ instead of $\phi_{\mathrm{right}}^1$, $v_{\mathrm{right}}^1$, $u_{\mathrm{right}}^1$, $\delta v_+^1$, and $\delta u_+^1$, respectively. \section{APPLICATION TO NATURAL FAULTS} \label{secApp} From the conclusions of Sec. \ref{secPT}, dynamic earthquake slip processes exhibit a phase transition when $\phi_{\infty}$ or $u_{\infty}$ is regarded as the order parameter. This has some implications for the diversity observed in natural dynamic earthquake slip behavior. For example, the dependence of stress drops on earthquake size can be explained. The stress drop $\Delta \sigma$ is the difference between the applied shear stress acting on the fault plane and the residual frictional stress, as mentioned in Sec. \ref{secMS}. Some researchers insist that large earthquakes sometimes have larger dynamic stress drops than other earthquakes \cite{Dup}. This mechanism can be understood within the present framework. For such large earthquakes, the point attractor (high-speed phase) may be realized, because $\Delta \sigma \equiv \sigma_s^0-\sigma_{\mathrm{res}} =\mu v/2 \beta_v=\sigma_s^0 v$ (see Eqs. (\ref{eqEOM}) and (\ref{eqNv})) remains nonzero. 
This corresponds to the case wherein the acceleration caused by the fluid pressure increment due to the thermal pressurization effect completely governs the system behavior, and the shear stress acting on the fault plane is completely released owing to thermal pressurization. The fluid pressure approaches $-\sigma_n^0$ in this case. On the other hand, other ordinary earthquakes may be realizations of the line attractor (cessation phase), because we clearly have $\Delta \sigma=0$ in the cessation phase. This behavior corresponds to a situation in which the deceleration due to the fluid pressure reduction induced by the dilatancy effect completely governs the slip behavior, and spontaneous slip cessation ($v=0$) is realized. Note that the 1D system utilized here is an approximate one, and $\Delta \sigma$ remains nonzero near the fault tips in real three-dimensional systems. However, the near-tip area becomes negligibly small compared with the whole fault area as the tip propagates, and the 1D approximation is expected to work well for natural faults. The slip behavior from the onset to the attainment of one of the two phases can be interpreted in terms of two physical processes, the thermal pressurization and dilatancy effects. Note that the dominant physical process can alternate during the slip. If the solution orbit is on the upside (underside) of $C^{\mathrm{crit}}$, the gradient of the orbit is negative (positive) and $v$ decreases (increases) with increasing time. The deceleration (acceleration) of the slip is physically observed, and the dilatancy (thermal pressurization) effect is dominant. The curve $C^{\mathrm{crit}}$, i.e., the function $g(\phi)$, determines which physical process is dominant. However, it should be emphasized that the high-speed and cessation phases are completely governed by the thermal pressurization and dilatancy effects, respectively, and an intermediate state at $\tau \to \infty$ does not exist, as mentioned above. Although this phase transition was also suggested in SY14, only a single form of the porosity evolution law was assumed, and the order parameter ($\phi_{\infty}$ or $u_{\infty}$) was not clarified there. The question here is whether we can predict which phase appears from the physical viewpoint. As noted in Sec. \ref{secPLS}, $\beta$, $v_0$, and $g(\phi)$ govern which phase emerges via the sign of relation (\ref{eqGS}). Mathematically, the slip behavior can therefore be predicted completely. Nonetheless, $g(\phi)=1-\beta f(F^{-1}(\phi))=1-\beta f(u) =1-(\beta/\phi_{\mathrm{UL}}) \partial \phi_d (U_{\mathrm{ref}} u)/\partial u$ depends upon the porosity evolution law, which has not been firmly established, as mentioned in Sec. \ref{secI}. Thus, the law is so uncertain that we cannot predict which phase emerges from the physical viewpoint. Though predicting which phase emerges is difficult, it is physically meaningful to assume that the cessation phase emerges, because earthquakes with enormously large stress drops are rare \cite{Dup}. This observational result implies that $v_0-v_{\mathrm{right}}^1<0$ is satisfied for many earthquakes. The question arising from the assumption of the cessation phase is whether we can predict the final slip amount. Note here that the final slip amount $u_{\infty}=u_c-\delta u$ is a measure of earthquake magnitude (which cannot be defined exactly here because the model is 1D), and that studying the behavior of $u_{\infty}$ is important for understanding the dynamic earthquake slip process. 
We adopt the model of SY14 as an example and perform numerical calculations. We will derive implications independent of $\beta$ and $g(\phi)$. The governing equations in SY14 are given by \begin{equation} \dot{v}=v(1-v)-S_u (1-\phi)v, \label{eqGovSY14-1} \end{equation} \begin{equation} \dot{\phi}=T_a (1-\phi) v, \label{eqGovSY14-2} \end{equation} where $S_u$ and $T_a$ are nondimensional positive parameters. Applying the notation of the present study to this equation system, we have $f(F^{-1}(\phi))=T_a (1-\phi)$, $g(\phi)=1-S_u (1-\phi)$, and $\beta=S_u/T_a$. With these conditions and Eqs. (\ref{eqRelUPhi}), (\ref{eqSolPhic}), and (\ref{eqvc}), we can write $v_c$, $\phi_c$ and $u_c$ for SY14: \begin{equation} v_c = 1-\frac{S_u^{1/T_a} T_a - S_u}{T_a-1}, \label{eqSolvcEx} \end{equation} \begin{equation} \phi_c = 1-\frac{1}{S_u}, \label{eqSolPhicEx} \end{equation} \begin{equation} u_c=\frac{\ln S_u}{T_a}, \label{eqSolucEx} \end{equation} respectively. Finally, we can apply Eq. (\ref{eqP3}) to the model of SY14 to obtain \begin{equation} \delta u = \delta v^{1/2} \sqrt{\frac{2}{S_u^{1/T_a} T_a}} . \label{eqExm1} \end{equation} The values of $S_u$ and $T_a$ used are those concluded in SY14 to be appropriate for ordinary earthquakes. The fourth-order Runge-Kutta method is adopted. The calculated $u$ was found to approach $u_{\infty}$ within the numerical accuracy in finite time. Figure \ref{FigPower}(a) shows the numerically obtained relation between $\delta v$ and $\delta u$, which agrees with the result of Eq. (\ref{eqExm1}). In addition, Figs. \ref{FigPower}(b) and (c) show that the relative disturbance in the final slip amount, $\delta u/u_c$, is several to ten times larger than that in the initial slip velocity, $\delta v/v_c$. This occurs because of the universal critical exponent 1/2. Since the uncertainty in $\delta v$ is significantly amplified in $\delta u$, predicting $\delta u$ is almost impossible, and thus the final slip amount of the earthquake is hard to predict. This non-predictability is newly suggested here and was not studied in previous works, including SY14. \begin{figure*}[tbp] \centering \begin{minipage}[t]{8.cm} \includegraphics[width=8.cm]{Fig9a.eps} \end{minipage} \begin{tabular}{cc} \begin{minipage}[t]{7.5cm} \includegraphics[width=7.5cm]{Fig9b.eps} \end{minipage} & \begin{minipage}[t]{7.5cm} \includegraphics[width=7.5cm]{Fig9c.eps} \end{minipage} \end{tabular} \caption{Power law. (a) Relations between $\delta v$ and $\delta u$, and their dependence on the values of $(S_u, T_a)$. The values of $(S_u, T_a)$ are $(1.5, 2)$ (red), $(2, 3)$ (blue) and $(2.5, 6)$ (green). The solid lines are numerically obtained relations, whereas the dotted lines describe Eq. (\ref{eqExm1}). The numerical curves nearly overlap the analytical ones. (b) Relations between $\delta v/v_c$ and $\delta u/u_c$. Values of $v_c$ and $u_c$ are calculated based on Eqs. (\ref{eqSolvcEx}) and (\ref{eqSolucEx}), respectively. (c) Log-log scale version of (b).} \label{FigPower} \end{figure*} Equation (\ref{eqP3}) shows that $\delta u$ is proportional to $\delta v^{1/2}$ and that the factor of proportionality depends on $\beta$ and $g(\phi)$. Hence the non-predictability is a universal conclusion, independent of the uncertainties in $g(\phi)$, and is not specific to the model of SY14. 
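For the reader's convenience, a minimal sketch of such a computation is given below (our own reimplementation; the step size, the stopping tolerance and the integration horizon are illustrative choices, not those of the original calculation).
\begin{verbatim}
# Fourth-order Runge-Kutta integration of Eqs. (eqGovSY14-1)-(eqGovSY14-2)
# together with du/dtau = v; the numerically obtained delta_u is compared
# with the analytical prediction of Eq. (eqExm1).
import math

def rk4_step(y, dt, S_u, T_a):
    def f(y):
        v, phi, u = y
        return (v*(1.0 - v) - S_u*(1.0 - phi)*v, T_a*(1.0 - phi)*v, v)
    def shift(y, k, h):
        return tuple(yi + h*ki for yi, ki in zip(y, k))
    k1 = f(y)
    k2 = f(shift(y, k1, dt/2))
    k3 = f(shift(y, k2, dt/2))
    k4 = f(shift(y, k3, dt))
    return tuple(yi + dt/6.0*(a + 2*b + 2*c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

def final_slip(S_u, T_a, v0, dt=1e-2, t_max=5e3, v_tol=1e-9):
    y = (v0, 0.0, 0.0)               # (v, phi, u) at tau = 0
    for _ in range(int(t_max/dt)):
        y = rk4_step(y, dt, S_u, T_a)
        if y[0] < v_tol:             # cessation phase: u has converged
            break
    return y[2]

S_u, T_a = 2.0, 3.0                  # one of the pairs used in Fig. 9
v_c = 1.0 - (S_u**(1.0/T_a)*T_a - S_u)/(T_a - 1.0)   # Eq. (eqSolvcEx)
u_c = math.log(S_u)/T_a                              # Eq. (eqSolucEx)
for dv in (1e-3, 1e-4, 1e-5):
    du_num = u_c - final_slip(S_u, T_a, v_c - dv)
    du_ana = math.sqrt(2.0*dv/(S_u**(1.0/T_a)*T_a))  # Eq. (eqExm1)
    print(f"dv={dv:.0e}  du_num={du_num:.3e}  du_analytic={du_ana:.3e}")
\end{verbatim}
Initial values $v_0>v_c$ would instead run to the point attractor ($v \to 1$), so no final slip amount would be obtained in that case.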
The parameter $\delta v=v_c-v_0$ depends on $\beta$, $g(\phi)$ and $v_0$ (see Eq. (\ref{eqGS})), so it is hard to evaluate from the physical viewpoint. Moreover, $\delta v$ has another uncertainty. Note that $v_0= 1+\mu_{\mathrm{slid}}(\sigma_n^0+p_{D0})/\sigma_s^0$, where $p_{D0}$ is the initial fluid pressure. Among the parameters $\mu_{\mathrm{slid}}$, $\sigma_s^0$, $\sigma_n^0$, and $p_{D0}$, the parameter $p_{D0}$ is considered to be the most susceptible to the surrounding environment. The fluid pressure within the fault rocks may be changed by chemical processes such as the dehydration of hydrous minerals \cite{Dob, Jun}; since these are microscopic phenomena, their quantitative evaluation is very difficult. It can be concluded that the final slip amount may reflect the fluid pressure profile and that its exact prediction is significantly difficult. \section{DISCUSSION AND CONCLUSIONS} \label{secDisCon} The present study provides a unified framework to treat the thermal pressurization and dilatancy effects simultaneously. In particular, the nullclines common to the two equations are important in this framework. Such a system generates geometrically different attractors: the point attractor (high-speed phase in a physical sense) and the line attractor (cessation phase). The transition behavior between the two phases is observed near the basin boundary, and the universality in the vicinity of the transition point is represented by the power law between $\delta \phi$ and $\delta v$, regardless of the details of the porosity evolution law: $\delta \phi \propto \delta v^{1/2}$. Dynamic earthquake slip processes can be regarded as phase transition phenomena, with the final inelastic porosity or the final slip amount as the order parameter. The emergent phase can be predicted completely in mathematical terms, whereas predicting it physically is difficult, mainly because of the uncertainties in the porosity evolution law. Moreover, even if we assume that the cessation phase emerges, the universality suggests the non-predictability of the final slip amounts of natural earthquakes. If a complete framework for the porosity evolution law could be developed, we might predict the phase emergence by identifying the parameters used to construct $v_0=1+\mu_{\mathrm{slid}} (\sigma_n^0+p_{D0})/ \sigma_s^0$, $\beta=M \mu_{\mathrm{slid}} \phi_{\mathrm{UL}}/\sigma_s^0$ and $U_{\mathrm{ref}}=((1-\phi_t) \rho_s C_s +\phi_t \rho_f C_f)w_h/((b-\phi_t) \alpha_s + \phi_t \alpha_f)M \mu_{\mathrm{slid}}$, because $g(\phi)$ is given by $g(\phi)=1-(\beta/\phi_{\mathrm{UL}}) \partial \phi_d (U_{\mathrm{ref}} u)/\partial u$. A more detailed study of the porosity evolution law is important for understanding the phase emergence. However, it should be noted that, as mentioned in Sec. \ref{secApp}, evaluating $v_0$ is difficult. A stochastic approach could eventually make it easier, and the prediction would then have a stochastic character. We should also emphasize that even if the cessation phase emerges with high probability, the final slip amount cannot be predicted, because of the universality independent of $\beta$ and $g(\phi)$. Nonetheless, it is meaningful to investigate the effect of $\beta$, because it is the single parameter appearing in the governing equation system. As mentioned in Sec. \ref{secMS}, $\beta$ is a measure of the contribution of the inelastic porosity increase to the slip velocity change. 
Considering the region $0 \le \phi \le 1$ and $0 \le v \le 1$, and assuming that $f$ is completely understood, we find that a larger $\beta$ generates a larger area where $v > g$ within this region, since $g=1- \beta f$; in other words, $C^{\mathrm{crit}}$ shifts downward while still passing through the point $(1,1)$. Therefore, it can be concluded that a larger $\beta$ (e.g., a larger $M$ or a smaller $\sigma_s^0$) is more likely to generate the cessation phase. This means that if the porosity evolution law is understood, the tendency for the cessation phase to emerge can be estimated from $\beta$. We can also pursue nonlinear mathematical applications of the framework constructed in this paper. In particular, we now discuss the case of $g'(\phi_c^m)=0$ mathematically. Let us assume again that $g^{(j)} (\phi_c^m)=0$ for all $j$ with $0 \le j \le n-1$, $g^{(n)}(\phi_c^m) \neq 0$, and $n \ge 2$. Defining $\delta \phi_{(m)} \equiv |\phi_c^m-\phi_{\infty}|$ and $\delta v_{(m)} \equiv |v_c^m-v_0|$, where $v_c^m$ is the $v$ value at which the manifold crossing $(\phi_c^m, 0)$ crosses the $v$-axis, we can show the relation \begin{equation} \delta \phi_{(m)} =(\delta v_{(m)})^{1/(n+1)} \Big| \frac{(n+1)! e^{\beta(A(0)-A(\phi_c^m))}}{\beta g^{(n)} (\phi_c^m)} \Big|^{1/(n+1)} \label{eqPn} \end{equation} (see details in Appendix \ref{secAg}). In this case, the universal critical exponent is $1/(n+1)$, which decreases with increasing $n$. In particular, if $n$ is even and $g^{(n)}(\phi_c^m)<0$, $\phi_c^m-\phi_{\infty}$ and $v_c^m-v_0$ can take both positive and negative values, because the point $(\phi_c^m, 0)$ is on a line attractor and is not its end point. In this case, Eq. (\ref{eqPn}) predicts that the region on the $\phi$-axis near the point $(\phi_c^m, 0)$ is harder for the solution to approach for larger $n$, because the disturbance in $\delta v_{(m)}$ is amplified in $\delta \phi_{(m)}$ (note that $\delta v_{(m)}<1$ and $\delta \phi_{(m)}<1$) and the amplification becomes stronger with larger $n$, even though the region is on an attractor. These treatments are important from the viewpoint of nonlinear mathematics. Finally, we show that the treatments performed in this study can be extended to other systems, such as competition between two species (the competitive Lotka-Volterra (LV) model). The competitive LV model is a simple model of the population dynamics of species competing for some common resource, and a special case of it can be described by the framework constructed in the present article. To show this, first note that the competitive LV model (in the absence of diffusion terms) is given by the following equations: \begin{equation} \frac{d x_1}{dt}=r_1 x_1 \left( 1-\frac{1}{K_1} \cdot x_1 -\frac{a_{12}}{K_1} \cdot x_2 \right), \label{eqLV1} \end{equation} \begin{equation} \frac{d x_2}{dt}=r_2 x_2 \left( 1-\frac{a_{21}}{K_2} \cdot x_1 -\frac{1}{K_2} \cdot x_2 \right), \label{eqLV2} \end{equation} where $x_1$ ($x_2$) is the population size of species 1 (2), $r_1$ ($r_2$) is the inherent per-capita growth rate of species 1 (2), $K_1$ ($K_2$) is the carrying capacity of species 1 (2), and $a_{12}$ ($a_{21}$) represents the effect that species 2 (1) has on the population of species 1 (2). Let us consider the system under the conditions that (A) the growth rate of species 2 is negligibly small compared with that of species 1, and (B) when species 1 consumes species 2, species 1 also dies, e.g., species 2 is poisonous. 
If we consider the limit $r_2 \to 0$ while keeping $r_2 a_{21}/K_2$ constant, this situation can be described, and the governing equation system becomes \begin{equation} \dot{X}_1=X_1 (1 -X_1) -\frac{a_{12} K_2}{K_1} \cdot X_1 X_2, \label{eqMLV1} \end{equation} \begin{equation} \dot{X}_2=-\frac{r_2 a_{21} K_1}{r_1 K_2} \cdot X_1 X_2, \label{eqMLV2} \end{equation} where $X_1 \equiv x_1/K_1$ and $X_2 \equiv x_2/K_2$, and the temporal differentiation is performed with respect to $\tau^{LV} \equiv r_1 t$. This system can be described by equations of the same form as Eqs. (\ref{eqGovG3}) and (\ref{eqGovG4}). If we introduce the variables $v \equiv X_1$ and $\phi \equiv 1-X_2$, Eqs. (\ref{eqMLV1}) and (\ref{eqMLV2}) become exactly Eqs. (\ref{eqGovSY14-1}) and (\ref{eqGovSY14-2}), respectively, after replacing $S_u$ and $T_a$ with $a_{12}K_2/K_1$ and $r_2 a_{21} K_1/r_1 K_2$, respectively. Which species survives is determined by two important values. The first one is $v_0^{LV}$, the value of $v$ at the point where the solution orbit crossing the point $(\phi_{\mathrm{init}}, v_{\mathrm{init}})$ crosses the $v$-axis in the phase space, where $v_{\mathrm{init}}$ and $\phi_{\mathrm{init}}$ are the initial values of $v$ and $\phi$, respectively (note that $v_{\mathrm{init}}$ and $v_0^{LV}$ are different). The variables $(v, \phi, v_0)=(v_{\mathrm{init}}, \phi_{\mathrm{init}}, v_0^{LV})$ must satisfy Eq. (\ref{eqSolPhi}), and we have the relation \begin{eqnarray} v_{\mathrm{init}} = 1 &-&\frac{\frac{a_{12} K_2}{K_1} (1-\phi_{\mathrm{init}})}{1-\frac{r_2 a_{21} K_1}{r_1 K_2}} \nonumber \\ &+& \left( v_0^{LV} -1+\frac{\frac{a_{12} K_2}{K_1}}{1-\frac{r_2 a_{21} K_1}{r_1 K_2}} \right) \nonumber \\ &\times& (1-\phi_{\mathrm{init}})^{r_1 K_2/r_2 a_{21} K_1}. \end{eqnarray} Solving this equation for $v_0^{LV}$ gives \begin{eqnarray} v_0^{LV}&=&(1-\phi_{\mathrm{init}})^{-r_1 K_2/r_2 a_{21} K_1} \nonumber \\ &\times& \left( v_{\mathrm{init}}-1+\frac{\frac{a_{12}K_2}{K_1} (1-\phi_{\mathrm{init}})}{1-\frac{r_2 a_{21} K_1}{r_1 K_2}} \right) \nonumber \\ &+&1-\frac{\frac{a_{12}K_2}{K_1}}{1-\frac{r_2 a_{21} K_1}{r_1 K_2}}. \label{eqv0LV} \end{eqnarray} The other value is $v_c^{LV}$, the value of $v$ at the point where the critical manifold crosses the $v$-axis. Replacing $S_u$ and $T_a$ with $a_{12}K_2/K_1$ and $r_2 a_{21} K_1/r_1 K_2$, respectively, in Eq. (\ref{eqSolvcEx}) gives this value, which leads to \begin{equation} v_c^{LV} = 1-\frac{\left( \frac{a_{12}K_2}{K_1} \right)^{r_1 K_2/r_2 a_{21} K_1} \frac{r_2 a_{21} K_1}{r_1 K_2} - \frac{a_{12}K_2}{K_1}}{\frac{r_2 a_{21} K_1}{r_1 K_2}-1}. \label{eqvcLV} \end{equation} Comparison between $v_0^{LV}$ and $v_c^{LV}$ allows an exact evaluation of which species survives: if $v_0^{LV}>v_c^{LV}$, species 1 ($x_1$) survives, whereas if $v_0^{LV}<v_c^{LV}$, species 2 ($x_2$) survives.
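The comparison can be coded directly from Eqs. (\ref{eqv0LV}) and (\ref{eqvcLV}); the sketch below is our own illustration, with hypothetical parameter values chosen only to exercise the formulas (it assumes $r_2 a_{21} K_1/r_1 K_2 \neq 1$).
\begin{verbatim}
# Decide which species survives by comparing v0_LV (Eq. (eqv0LV))
# with vc_LV (Eq. (eqvcLV)).
def survivor(r1, r2, K1, K2, a12, a21, x1_init, x2_init):
    Su = a12*K2/K1               # counterpart of S_u
    Ta = r2*a21*K1/(r1*K2)       # counterpart of T_a (must differ from 1)
    v_init = x1_init/K1          # v = X_1
    phi_init = 1.0 - x2_init/K2  # phi = 1 - X_2
    v0 = ((1.0 - phi_init)**(-1.0/Ta)
          * (v_init - 1.0 + Su*(1.0 - phi_init)/(1.0 - Ta))
          + 1.0 - Su/(1.0 - Ta))                   # Eq. (eqv0LV)
    vc = 1.0 - (Su**(1.0/Ta)*Ta - Su)/(Ta - 1.0)   # Eq. (eqvcLV)
    return "x1 survives" if v0 > vc else "x2 survives"

# Hypothetical values consistent with r_2 small and r_2*a_21/K_2 finite:
print(survivor(r1=1.0, r2=0.01, K1=1.0, K2=1.0, a12=2.0, a21=150.0,
               x1_init=0.1, x2_init=0.9))          # -> x2 survives
\end{verbatim}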
\section{Introduction} We consider {\em hyperbolic polynomials (HPs)}, i.e. real univariate polynomials with all roots real. We assume the leading coefficient to be positive and all coefficients to be nonvanishing. Descartes' rule of signs applied to such a degree $d$ HP with $c$ sign changes and $p$ sign preservations in the sequence of its coefficients, $c+p=d$, implies that the HP has $c$ positive and $p$ negative roots counted with multiplicity. In what follows we consider the {\em generic} case when the moduli of all roots are distinct. \begin{defi}\label{defiSPMO} {\rm (1) A {\em sign pattern (SP)} of length $d+1$ is a sequence of $d+1$ $(+)$- and/or $(-)$-signs. We say that the polynomial $Q:=x^d+\sum _{j=0}^{d-1}a_jx^j$ defines (or realizes) the SP $\sigma (Q):=(+,{\rm sgn}(a_{d-1}),\ldots ,{\rm sgn}(a_0))$. (2) A {\em moduli order (MO)} of length $d$ is a formal string of $c$ letters $P$ and $p$ letters $N$ separated by signs of inequality $<$. These letters indicate the relative positions of the moduli of the roots of the HP on the real positive half-line. E.g. for $d=6$, to say that a given HP $Q$ defines (or realizes) the MO $N<N<P<N<P<N$ means that for $\sigma (Q)$, one has $c=2$ and $p=4$ and that for the positive roots $\alpha _1<\alpha _2$ and the negative roots $-\gamma _j$ of $Q$, one has $\gamma _1<\gamma _2<\alpha _1<\gamma _3<\alpha _2<\gamma _4$. (3) We say that a given MO {\em realizes} a given SP if there exists a HP which defines the given MO and the given~SP.} \end{defi} \begin{ex}\label{ex1} {\rm For $d=1$, if the SP defined by a HP with a nonzero root equals $(+,+)$ (resp. $(+,-)$), then this root is negative (resp. positive). For $d=2$, a HP with roots of opposite signs and different moduli defines the SP $(+,+,-)$ with MO $P<N$ or the SP $(+,-,-)$ with MO $N<P$.} \end{ex} \begin{rem}\label{remconcat} {\rm Suppose that the MO $r$ is realizable by a HP $Q$. Denote by $rP$ and $rN$ (resp. $Pr$ and $Nr$) the MOs obtained from $r$ by adding to the right the inequality $<P$ or $<N$ (resp. by adding to the left the inequality $P<$ or $N<$). For $\varepsilon >0$ sufficiently small, the product $(x-\varepsilon )Q(x)$ (resp. $(x+\varepsilon )Q(x)$) defines the MO $Pr$ (resp. $Nr$). Indeed, the modulus of the root $\pm \varepsilon$ is much smaller than any of the moduli of the roots of~$Q$. In the same way, the product $-(1-\varepsilon x)Q(x)$ (resp. $(1+\varepsilon x)Q(x)$) defines the MO $rP$ (resp. $rN$), because the modulus of the root $\pm 1/\varepsilon$ is much larger than any of the moduli of the roots of~$Q$. When several products of the form $(x\pm \varepsilon )Q(x)$ and/or $\pm (1\pm \varepsilon x)Q(x)$ are used, they are performed with different numbers $\varepsilon _j$ for which one has $0<\cdots \ll \varepsilon _{j+1}\ll \varepsilon _j$. } \end{rem} \begin{defi}\label{defirigid} {\rm A MO is {\em rigid} if all HPs realizing this MO define one and the same SP, i.e. if the MO realizes only one~SP.} \end{defi} The aim of the present paper is to characterize all rigid MOs. From now on we assume that $c\geq 1$ and $p\geq 1$. Indeed, when all roots are of the same sign, there is a single SP corresponding to such a MO (this is either the all-pluses SP when the roots are negative or $(+,-,+,-,+,\ldots )$ when they are positive), so according to our definition this MO should be considered as rigid. However, as this case excludes the question of how the moduli of negative roots are placed with respect to the positive roots on the real positive half-line, it should be considered as trivial. 
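Both notions are elementary to compute. The following short sketch (our own illustration; the helper names are ours) determines the SP and the MO of a HP from its roots and reproduces the degree $2$ cases of Example~\ref{ex1}:
\begin{verbatim}
# Sign pattern and moduli order of a monic hyperbolic polynomial,
# given its (real, nonzero, pairwise distinct in modulus) roots.
import numpy as np

def sign_pattern(roots):
    coeffs = np.poly(roots)       # coefficients of prod_j (x - root_j)
    return tuple('+' if c > 0 else '-' for c in coeffs)

def moduli_order(roots):
    letters = ('P' if r > 0 else 'N' for r in sorted(roots, key=abs))
    return '<'.join(letters)

print(sign_pattern([-2.0, 1.0]), moduli_order([-2.0, 1.0]))  # (+,+,-)  P<N
print(sign_pattern([2.0, -1.0]), moduli_order([2.0, -1.0]))  # (+,-,-)  N<P
\end{verbatim}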
\begin{nota}\label{nota1} {\rm (1) We introduce the following four MOs:} $$\begin{array}{lcl} r_{PN}:P<N<P<N<\cdots <N~,&&r_{PP}:P<N<P<N<\cdots <P~,\\ \\ r_{NP}:N<P<N<P<\cdots <P&{\rm and}&r_{NN}:N<P<N<P<\cdots <N~. \end{array}$$ {\rm The orders $r_{PN}$ and $r_{NP}$ (resp. $r_{PP}$ and $r_{NN}$) correspond to even (resp. to odd) degree $d$. In the case of $r_{PN}$ and $r_{NP}$ there are $d/2$ positive and $d/2$ negative roots, in the case of $r_{PP}$ there are $(d+1)/2$ positive and $(d-1)/2$ negative roots, and vice versa in the case of $r_{NN}$. The MOs $r_{PN}$, $r_{NP}$, $r_{PP}$ and $r_{NN}$ are the only ones in which no two consecutive moduli belong to roots of one and the same sign; hence for $d\geq 3$, they are precisely the MOs which contain no (sub)string of the form $P<P<N$, $N<N<P$, $N<P<P$ or $P<N<N$. (2) We are particularly interested in the following two SPs:} $$ \Sigma _{+}:=(+,+,-,-,+,+,-,-,\ldots )~~~\, {\rm and}~~~\, \Sigma _{-}:=(+,-,-,+,+,-,-,+,\ldots )~.$$ \end{nota} The main result of the paper is the following theorem: \begin{tm}\label{tmmain} (1) For $d\geq 3$, a MO different from $r_{PN}$, $r_{NP}$, $r_{PP}$ and $r_{NN}$ is not rigid. (2) For $d\geq 1$, the MOs $r_{PP}$, $r_{PN}$, $r_{NP}$ and $r_{NN}$ are rigid. When the roots of a HP define one of these MOs, the SP of the HP is one of the SPs $\Sigma _{\pm}$. The exact correspondence is given by the following table (its fourth and seventh columns contain the last three signs of the SP; the degree $d$ is considered modulo~$4$): $$\begin{array}{ccccccccc} d\mod(4)&&{\rm MO}&{\rm SP}&&~~~\, \, &{\rm MO}&{\rm SP}&\\ \\ 0&&r_{NP}&\Sigma _-&-~+~+&&r_{PN}&\Sigma _+&-~-~+\\ \\ 1&&r_{PP}&\Sigma _-&+~+~-&&r_{NN}&\Sigma _+&-~+~+\\ \\ 2&&r_{NP}&\Sigma _-&+~-~-&&r_{PN}&\Sigma _+&+~+~-\\ \\ 3&&r_{PP}&\Sigma _-&-~-~+&&r_{NN}&\Sigma _+&+~-~-\end{array}$$ \end{tm} The theorem is proved in Section~\ref{secprtmmain}. Our next step is to consider the possibility of having equalities between moduli of roots and zeros among the coefficients. \begin{rem} {\rm We recall that a HP $Q$ with nonvanishing constant term cannot have two consecutive vanishing coefficients. Indeed, if $Q$ is hyperbolic, then its nonconstant derivatives are also hyperbolic, and the {\em reverted polynomial} $x^dQ(1/x)$ is also hyperbolic. Suppose that $Q$ is hyperbolic and has two or more consecutive vanishing coefficients. Then, applying differentiation and reversion to $Q$, one can obtain a polynomial of the form $Ax^s+B$, $s\geq 3$, $A$, $B\in \mathbb{R}^*$, which must be hyperbolic. This, however, is impossible.} \end{rem} \begin{defi} {\rm A {\em sign pattern admitting zeros (SPAZ)} of length $d+1$ is a sequence of $d+1$ $(+)$- and/or $(-)$-signs and possibly zeros. The first element of the sequence must be a $(+)$-sign. To determine the number of sign changes and sign preservations of a SPAZ, one has to erase the zeros. A {\em moduli order admitting equalities (MOAE)} of length $d$ is a formal string of letters $P$ and $N$ separated by signs of inequality $\leq$. E.g. 
for $d=6$, saying that the HP $Q$ defines the MOAE $N\leq N\leq P\leq N\leq P\leq N$ means that the SPAZ defined by $Q$ and the one defined by $Q(-x)$ have at least two and four sign changes, respectively, and the constant term of $Q$ is nonvanishing; for the moduli of its roots (with the notation from Definition~\ref{defiSPMO}), one has $\gamma _1\leq \gamma _2\leq \alpha _1\leq \gamma _3\leq \alpha _2\leq \gamma _4$.} \end{defi} \begin{ex}\label{ex2} {\rm For $d=2$, a HP with nonvanishing constant term and two opposite roots is of the form $F:=x^2-a^2$, $a\in \mathbb{R}^*$. It defines the SPAZ $(+,0,-)$ which has one sign change and no sign preservation. One has $F(-x)=F(x)$.} \end{ex} \begin{nota} {\rm We denote by $r_{PN}^0$, $r_{PP}^0$, $r_{NN}^0$ and $r_{NP}^0$ the MOsAE obtained from the respective MOs $r_{PN}$, $r_{PP}$, $r_{NN}$ and $r_{NP}$ (see Notation~\ref{nota1}) by replacing the inequalities $<$ by inequalities~$\leq$.} \end{nota} \begin{tm}\label{tm2} (1) Suppose that $d\geq 1$ is odd. If a HP with nonvanishing constant term defines the MOAE $r_{PP}^0$ or $r_{NN}^0$, then this HP has no vanishing coefficient and defines the SP as claimed by part (2) of Theorem~\ref{tmmain}. (2) If $d\geq 2$ is even and a HP with nonvanishing constant term defines the MOAE $r_{PN}^0$ or $r_{NP}^0$, then either (i) this HP is even, hence of the form $A\prod _{j=1}^{d/2}(x^2-a_j^2)$, where $A>0$ and the $a_j\in \mathbb{R}^*$ are not necessarily distinct, and the HP defines the SPAZ $(+,0,-,0,+,0,-,0,\ldots )$, or (ii) this HP has no vanishing coefficient, it defines the SP as claimed by part (2) of Theorem~\ref{tmmain}, and it is not possible to represent the set of its roots as a union of $d/2$ couples of the form $\{ a_j, -a_j\}$. \end{tm} The theorem is proved in Section~\ref{secprtm2}. In the next section we compare the problem of characterizing rigid MOs with other problems arising in the theory of real univariate polynomials. \section{Other related problems} A rigid MO is one which uniquely defines the SP. One could ask the inverse question: whether there exist SPs which uniquely define the corresponding MOs. This question is treated in \cite{KoSe} and \cite{KoPuMaDe}. \begin{defi}\label{deficanon} {\rm Given a SP of length $d+1$, we define the {\em canonical} MO corresponding to it as follows. The SP is read from the back, and to each encountered couple of equal (resp. different) consecutive signs one puts in correspondence the letter $N$ (resp. $P$) in the MO. E.g. for $d=8$, to the SP $(+,+,-,-,+,-,+,+,-)$ there corresponds the canonical MO $P<N<P<P<P<N<P<N$. The canonical MO is obtained when one constructs a HP realizing the given SP using consecutive products of the form $(x\pm \varepsilon )Q(x)$, see Remark~\ref{remconcat}. Each SP is realizable by its canonical MO, see \cite{KoSe}. A SP is called {\em canonical} if it is realizable only by its canonical~MO.} \end{defi} For SPs one can use the notation $\Sigma _{p_1,p_2,\ldots ,p_s}$, where $p_i$ are the lengths of the maximal sequences of equal signs. E.g. the SP in Definition~\ref{deficanon} is $\Sigma _{2,2,1,1,2,1}$. 
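The reading procedure of Definition~\ref{deficanon} is algorithmic; a direct transcription (our own sketch) is:
\begin{verbatim}
# Canonical MO of a sign pattern: read the SP from the back; a couple of
# equal consecutive signs gives N, a couple of different signs gives P.
def canonical_mo(sp):
    rev = sp[::-1]
    return '<'.join('N' if a == b else 'P'
                    for a, b in zip(rev[1:], rev[:-1]))

print(canonical_mo('++--+-++-'))  # P<N<P<P<P<N<P<N, the d = 8 example above
\end{verbatim}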
\end{tm} \begin{rems} {\rm (1) Thus for $d\geq 3$, the SPs $\Sigma _{\pm}$ (corresponding to rigid MOs, see Notation~\ref{nota1} and Theorem~\ref{tmmain}) are not canonical. For $d=1$ and $2$, they are canonical, see Example~\ref{ex1}. (2) The SPs with $c=d$, $p=0$ and $c=0$, $p=d$, are canonical. They correspond to the trivial case when all roots are positive or negative, see the lines after Definition~\ref{defirigid}. (3) The MO corresponding to a canonical SP for which one does not have $s=1$ or $p_1=\cdots =p_s=1$, with at least one number $p_i$ larger than $2$ (or with $p_1=2$ or with $p_s=2$), is not rigid. Indeed, the presence of a number $p_i>2$ for $2\leq i\leq s-1$ (or of $p_1>1$ or of $p_s>1$) implies the presence of $p_i-1\geq 2$ (or of $p_1$ or of $p_s$) consecutive letters $P$ or $N$ in the MO, see Definition~\ref{deficanon}. Thus in and only in the trivial case does one have a rigid MO realizing a canonical~SP. (4) The SPs of the form $\Sigma _{1,p_2}$, $\Sigma _{p_1,1}$, $\Sigma _{1,p_2,1}$, $p_2\geq 3$, and $\Sigma _{p_1,1,p_3}$ are canonical, see~\cite{KoPuMaDe}.} \end{rems} The problems treated in the present paper are part of problems about real (not necessarily hyperbolic) univariate polynomials. For such a polynomial without vanishing coefficients, Descartes' rule of signs implies that the number $pos$ of its positive roots is not greater than the number $c$ of sign changes in the sequence of its coefficients, and the difference $c-pos$ is even. In the same way, for the number $neg$ of its negative roots, one has $neg\leq p$ and $p-neg\in 2\mathbb{Z}$, where $p$ is the number of sign preservations. The problem of deciding for which couples $(pos, neg)$, compatible with these requirements, one can find a real polynomial with prescribed signs of its coefficients seems to have been formulated for the first time in~\cite{AJS}. For $d=4$, D.~Grabiner has obtained the first nontrivial result, i.e.\ a compatible but not realizable couple $(pos, neg)$, see~\cite{Gr}. In the cases $d=5$ and $6$ the problem has been thoroughly studied in \cite{AlFu} while the exhaustive answer for $d=7$ and $8$ can be found in \cite{FoKoSh} and \cite{KoCzMJ}. For $d\leq 8$, all compatible but not realizable cases are ones in which either $pos=0$ or $neg=0$. For $d\geq 9$, there are examples of compatible and nonrealizable couples $(pos, neg)$ with $pos\geq 1$ and $neg\geq 1$, see \cite{KoMB} and~\cite{CGK}. Various problems about HPs are exposed in~\cite{Ko}. A tropical analog of Descartes' rule of signs is discussed in~\cite{FoNoSh}. \section{Proof of Theorem~\protect\ref{tmmain}\protect\label{secprtmmain}} \begin{proof}[Proof of part (1)] Suppose that for $d\geq 3$, a MO $r$ contains the string of inequalities $P<P<N$. Consider the two polynomials $$\begin{array}{llllll} P_1&:=&(x-1)(x-1.1)(x+3)&=&x^3+0.9x^2-5.2x+3.3&{\rm and}\\ \\ P_2&:=&(x-1)(x-3)(x+3.1)&=&x^3-0.9x^2-9.4x+9.3~.\end{array}$$ They define two different SPs: $\sigma (P_1)=(+,+,-,+)$ and $\sigma (P_2)=(+,-,-,+)$. Hence one can realize the whole MO $r$ by two different SPs starting with the polynomials $P_1$ and $P_2$ and using $d-3$ multiplications, at each step by one and the same polynomial $x\pm \varepsilon$ or $1\pm \varepsilon x$ for both, see Remark~\ref{remconcat}. After each multiplication one obtains again two polynomials defining different SPs. Hence $r$ is not rigid. 
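These sign patterns are straightforward to check numerically. The following short script (a minimal illustrative check using {\tt numpy}; it is not part of the proof) reproduces $\sigma (P_1)$ and $\sigma (P_2)$ and verifies that the two SPs remain different after several multiplications by $x+\varepsilon$: \begin{verbatim}
import numpy as np

def sign_pattern(coeffs):
    """Signs of the coefficients, read from the leading one down
    (assumes all coefficients are nonzero)."""
    return tuple('+' if c > 0 else '-' for c in coeffs)

P1 = np.polymul(np.polymul([1, -1], [1, -1.1]), [1, 3])   # (x-1)(x-1.1)(x+3)
P2 = np.polymul(np.polymul([1, -1], [1, -3]), [1, 3.1])   # (x-1)(x-3)(x+3.1)
print(sign_pattern(P1), sign_pattern(P2))   # (+,+,-,+) and (+,-,-,+)

eps = 0.01
for _ in range(3):                 # multiply both by x + eps (root -eps)
    P1 = np.polymul(P1, [1, eps])
    P2 = np.polymul(P2, [1, eps])
    assert sign_pattern(P1) != sign_pattern(P2)
\end{verbatim}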
If the MO contains a string of inequalities $N<N<P$, $N<P<P$ or $P<N<N$, then one can consider instead of the polynomials $P_j$, $j=1$, $2$, the polynomials $S_j:=-P_j(-x)$, $T_j:=x^3P_j(1/x)$ and $R_j:=x^3P_j(-1/x)$ respectively and perform a similar reasoning. The SPs defined by these polynomials are: $$\begin{array}{lll} \sigma (S_1)=(+,-,-,-)~,&\sigma (S_2)=(+,+,-,-)~,& \sigma (T_1)=(+,-,+,+)~,\\ \\ \sigma (T_2)=(+,-,-,+)~,&\sigma (R_1)=(+,+,+,-)~,&\sigma (R_2)=(+,+,-,-)~, \end{array}$$ hence $\sigma (S_1)\neq \sigma (S_2)$, $\sigma (T_1)\neq \sigma (T_2)$ and $\sigma (R_1)\neq \sigma (R_2)$. \end{proof} \begin{proof}[Proof of part (2)] We prove part (2) of the theorem by induction on $d$. For $d=1$ and $2$, the theorem is to be checked straightforwardly, see Example~\ref{ex1}. Suppose that part (2) of the theorem holds true for $d\leq d_0$, $d_0\geq 2$. Set $d:=d_0+1$. The sign of the constant term of a HP realizing the given MO depends only on the signs of the roots, not on the MO. So this sign is also to be checked directly. Consider a polynomial $Q:=x^{d_0+1}+\sum _{j=0}^{d_0}b_jx^j$ defining the given MO $\rho$ with $d=d_0+1$, with $\rho$ standing for $r_{PP}$, $r_{PN}$, $r_{NP}$ or $r_{NN}$. We represent it in the form $$Q:=(x-\varphi )(x-\psi )V~,~~~{\rm where}~~~V:=\prod _{j=1}^{d_0-1}(x-\xi _j)= x^{d_0-1}+\sum _{j=0}^{d_0-2}c_jx^j~.$$ Here $\varphi$ and $\psi$ are the two roots of $Q$ of least moduli, $|\varphi |<|\psi |$, and $\xi _j$ are its other roots. The signs of $\varphi$ and $\psi$ are opposite. Denote by $r$ the MO defined by the polynomial $V$. Using the notation of Remark~\ref{remconcat} one can say that the MO defined by the polynomial $R:=(x-\psi )V$ is either $Pr$ or $Nr$ depending on the sign of the root $\psi$, and the MO $\rho$ is either $NPr$ or $PNr$. We denote by $\Sigma$ the SP $\Sigma _+$ or $\Sigma _-$ according to the case and by $\Sigma '$ and $\Sigma ''$ the SPs obtained from $\Sigma$ by deleting its one or two last signs respectively. We include $Q$ into a one-parameter family of polynomials of the form $$Z_t:=(x+t\psi )(x-\psi )\prod _{j=1}^{d_0-1}(x-\xi _j)~~~,~~~t\in [0,1]~.$$ As $\varphi \cdot \psi <0$ and $|\varphi |<|\psi |$, there exists $t_*\in (0,1)$ such that $\varphi =-t_*\psi$, i.e. $Z_{t_*}=Q$. For $t=0$, one obtains $Z_0=xR$. The theorem being true for $d=d_0$ and $d=d_0-1$, the polynomial $R$ defines the SP $\Sigma '$, because $R$ defines the MO $Pr$ or~$Nr$, and $V$ defines the MO $r$ and the SP~$\Sigma ''$. For $t=1$, one has $Z_1=(x^2-\psi ^2)V$. Hence $$Z_1=x^{d_0+1}+c_{d_0-2}x^{d_0}+\left( \sum _{j=0}^{d_0-3}(c_j-\psi ^2c_{j+2})x^{j+2}\right) - \psi ^2(c_1x+c_0)~,$$ where we set $c_{d_0-1}:=1$. The signs of $c_j$ and $c_{j+2}$ are opposite (see Notation~\ref{nota1} for the definition of the SPs $\Sigma _{\pm}$), therefore sgn$(c_j-\psi ^2c_{j+2})=$sgn$(c_j)$. Thus the first $d_0$ coefficients of $Z_1$ have the signs given by the SP $\Sigma$. This holds for the last two coefficients as well, because sgn$(-\psi ^2c_1)=-$sgn$(c_1)$ and sgn$(-\psi ^2c_0)=-$sgn$(c_0)$. Hence $Z_1$ defines the SP~$\Sigma$. The coefficients of $Z_t$ are linear functions in $t\in [0,1]$. If their signs for $t=0$ and $t=1$ are the corresponding components of the SP $\Sigma$, then this is the case for any $t\in [0,1]$. (For the constant term, one has to consider its values for $t=1$ and for $t>0$ close to $0$.) In particular, for $t=t_*$, the signs are the ones of the SP~$\Sigma$. This proves part~(2) of the theorem. 
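As an illustration of the statement just proved, take $d=4$ and the roots $1$, $-2$, $3$, $-4$, which define the MO $r_{PN}$; indeed, $$(x-1)(x+2)(x-3)(x+4)=x^4+2x^3-13x^2-14x+24$$ realizes the SP $(+,+,-,-,+)=\Sigma _+$, in accordance with the line $d\equiv 0$ modulo $4$ of the table of Theorem~\ref{tmmain}. 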
\end{proof} \section{Proof of Theorem~\protect\ref{tm2}\protect\label{secprtm2}} Without loss of generality we limit ourselves to the case of monic HPs. We prove the theorem by induction on $d$. The cases $d=1$ and $2$ are considered in Examples~\ref{ex1} and \ref{ex2}. For $d=1$, no coefficient of the HP equals~$0$. Suppose now that $d\geq 3$. We assume that there is at least one equality between the modulus of a negative root and a positive root; otherwise one can apply Theorem~\ref{tmmain}. So suppose that the HP has roots $\pm a$, $a\neq 0$, and the HP is of the form $S:=(x^2-a^2)Q$, where $Q$ is a degree $d-2$ HP without root at~$0$. Thus the roots of $Q$ define one of the MOsAE $r_{PN}^0$, $r_{PP}^0$, $r_{NN}^0$ and $r_{NP}^0$, so one can use the inductive assumption. If $d$ is odd, then $Q$ has no vanishing coefficient and defines one of the SPs $\Sigma _{\pm}$. Set $Q:=\sum _{j=0}^{d-2}q_jx^j$. Then \begin{equation}\label{eqS} S=q_{d-2}x^d+q_{d-3}x^{d-1}+\left( \sum _{j=0}^{d-4}(q_j-a^2q_{j+2})x^{j+2} \right) -a^2q_1x-a^2q_0~. \end{equation} The first two and the last two of the coefficients of $S$ are obviously nonzero. For the others, one can observe that, as by the inductive assumption $Q$ defines one of the SPs $\Sigma _{\pm}$, one has $q_j\cdot q_{j+2}<0$ and hence $q_j-a^2q_{j+2}\neq 0$. This proves part (1) of the theorem. If $d$ is even, then $Q$ can have a vanishing coefficient, in which case $Q$ is of the form $\prod _{j=1}^{(d-2)/2}(x^2-a_j^2)$ hence $S$ is of the form $\prod _{j=1}^{d/2}(x^2-a_j^2)$ (with $a_{d/2}=a$) and defines the SPAZ $(+,0,-,0,+,0,-,0,\ldots )$. If $d$ is even and $Q$ has no vanishing coefficient, then the set of its roots is not representable as a union of couples $\{ a_j, -a_j\}$, $a_j\in \mathbb{R}^*$, so this is the case of $S$ as well. Moreover, using equality (\ref{eqS}) in the same way as for $d$ odd one concludes that $S$ has no vanishing coefficient. Part (2) of the theorem is proved.
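The even case is easily illustrated: \begin{ex} {\rm For $d=6$, the even HP $$(x^2-1)(x^2-4)(x^2-9)=x^6-14x^4+49x^2-36$$ defines both MOsAE $r_{PN}^0$ and $r_{NP}^0$ (all inequalities between the moduli of each couple of opposite roots become equalities) and the SPAZ $(+,0,-,0,+,0,-)$, in agreement with case (i) of part (2).} \end{ex}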
\section{Introduction} Recently, in a number of papers \cite{aref1,remb4,remb2,remb3,brzez1,ubr1,schw1,cab1} a variety of models realising the ideas of quantization of space-time and phase-space by means of non-commutative geometry and quantum group theory methods was considered. In particular, simple one-dimensional non-commutative dynamical systems were considered in \cite{aref1}. A very intriguing feature of those models is the non-commutativity of the inertial mass. However, the non-commutativity of the coordinates with the inertial mass is not so satisfactory a property because of the generically different physical meaning of the coordinates (geometrical objects) and the inertial mass (a dynamical object). In this paper we try to solve two problems simultaneously: (a) the indicated question of the non-commutativity of the inertial mass, (b) the rescaling of the $q$-Lorentz invariants related to the fact that the quantum determinant does not belong to the center of the $q$-Poincar\'e group \cite{remb4}. This can be done by introducing in the $q$-deformed Minkowski space-time a kind of ``quantum geometry'' by means of the notion of ``quantum metrics''. In the classical limit the standard Lobachevsky geometry is recovered. Moreover, in contrast to \cite{remb4}, we give a geometric interpretation of the momenta as translation generators defined via the corresponding quantum Cartan-Maurer forms. \section{Quantum Minkowski Space-Time and \newline the Quantum Poincar\'e-Weyl Group} We assume that the $q$-deformed Minkowski space-time is a {\em real form\/} of Manin's plane. Consequently the $q$-Minkowski space-time is generated by generators $x_\pm$ satisfying \begin{equation} x_+x_-=qx_-x_+, \end{equation} with $x^*_\pm=x_\pm$ and $|q|=1$. We will interpret $x_\pm$ as light-cone coordinates. The resulting Poincar\'e-Weyl group is a subgroup of the $IGL(2)_{q,s,\mu}$ quantum group introduced in \cite{remb1} with the co-module action of the form \begin{equation} \pmatrix{x'_+\cr x'_-\cr 1}=\delta\pmatrix{x_+\cr x_-\cr 1} =\pmatrix{\sigma\omega&0&u_+\cr0&\sigma\omega^{-1}&u_-\cr0&0&1} \otimes\pmatrix{x_+\cr x_-\cr 1}, \end{equation} where $\sigma^2 =\det_q\pmatrix{\sigma\omega&0&u_+\cr0&\sigma\omega^{-1}&u_-\cr0&0&1}$ and $\sigma^*=\sigma$, $\omega^*=\omega$, $u^*_\pm=u_\pm$. The generators of the above $q$-Poincar\'e-Weyl group fulfil the following algebraic rules \begin{eqnarray} \omega\sigma&=&\sigma\omega,\\ u_\pm\sigma&=&q^{\pm\frac{1}{2}}\sigma u_\pm,\\ \omega u_\pm&=&q^\frac{1}{2}u_\pm\omega,\\ u_+u_-&=&qu_-u_+. \end{eqnarray} We follow the standard Woronowicz form of the co-product \cite{wor1}. The antipode and co-unit have the form \begin{equation} g^{-1}=\pmatrix {\sigma^{-1}\omega^{-1}&0&-\sigma^{-1}\omega^{-1}u_+\cr 0&\sigma^{-1}\omega&-\sigma^{-1}\omega u_-\cr 0&0&1}; \end{equation} \begin{equation} \epsilon(\sigma)=\epsilon(\omega)=1,\qquad\epsilon(u_\pm)=0. 
\end{equation} \section{Bicovariant Differential Calculus on the Quantum Poincar\'e-Weyl Group} We can formulate a bicovariant differential calculus on the above quantum group \cite{wor2} with the help of the usual differentials ${\rm d}\sigma$, ${\rm d}\omega$, ${\rm d}u_\pm$ \begin{eqnarray} \sigma\,{\rm d}\sigma&=&{\rm d}\sigma\,\sigma,\\ \sigma\,{\rm d}\omega&=&{\rm d}\omega\,\sigma,\\ \omega\,{\rm d}\omega&=&{\rm d}\omega\,\omega,\\ \omega\,{\rm d}\sigma&=&{\rm d}\sigma\,\omega, \end{eqnarray} \begin{eqnarray} u_\pm\,{\rm d}\omega&=&q^{-\frac{1}{2}}{\rm d}\omega\,u_\pm,\\ u_\pm\,{\rm d}\sigma&=&q^{\pm\frac{1}{2}}{\rm d}\sigma\,u_\pm,\\ \omega\,{\rm d}u_\pm&=&q^\frac{1}{2}{\rm d}u_\pm\,\omega,\\ \sigma\,{\rm d}u_\pm&=&q^{\mp\frac{1}{2}}{\rm d}u_\pm\,\sigma,\\ u_\pm\,{\rm d}u_\pm&=&{\rm d}u_\pm\,u_\pm,\\ u_\pm\,{\rm d}u_\mp&=&q^{\pm1}{\rm d}u_\mp\,u_\pm, \end{eqnarray} \begin{eqnarray} {\rm d}u_\pm\,{\rm d}\omega&=&-q^{-\frac{1}{2}}\,{\rm d}\omega\,{\rm d}u_\pm,\\ {\rm d}u_\pm\,{\rm d}\sigma& =&-q^{\pm\frac{1}{2}}\,{\rm d}\sigma\,{\rm d}u_\pm,\\ {\rm d}u_+\,{\rm d}u_-&=&-q\,{\rm d}u_-\,{\rm d}u_+,\\ {\rm d}\omega\,{\rm d}\sigma&=&-{\rm d}\sigma\,{\rm d}\omega,\\ ({\rm d}\sigma)^2=({\rm d}\omega)^2&=&({\rm d}u_\pm)^2=0. \end{eqnarray} Now we introduce the differential forms \begin{equation} \Sigma=\sigma^{-1}\,{\rm d}\sigma,\quad \Omega=\omega^{-1}\,{\rm d}\omega,\quad T_\pm=\sigma^{-1}\omega^{\mp1}{\rm d}u_\pm, \end{equation} with the following commutation relations \begin{eqnarray} \Sigma\Omega&=&-\Omega\Sigma,\\ \Sigma T_\pm&=&-T_\pm\Sigma,\\ \Omega T_\pm&=&-T_\pm\Omega,\\ T_+T_-&=&-q^{-1}T_-T_+. \end{eqnarray} Then we can define, via the Cartan-Maurer form \begin{equation} g^{-1}\,{\rm d}g\equiv{\rm i}(D\Sigma+K\Omega+P_+T_++P_-T_-) \end{equation} the generators: the dilatation $D$, the Lorentz boost $K$ and the translations $P_\pm$. Consequently \begin{eqnarray} {}[D,K]&=&0,\\ {}[D,P_\pm]&=&-{\rm i}P_\pm,\\ {}[K,P_\pm]&=&\mp{\rm i}P_\pm,\\ {}[P_+,P_-]_{q^{-1}}&=&0.\label{qcommut} \end{eqnarray} Here $[P_+,P_-]_{q^{-1}}=P_+P_--q^{-1}P_-P_+$. \section{Covariant Differential Calculus on the $q$-Min\-kow\-ski Space-Time and the $q$-Kine\-mat\-ics} To construct a kind of relativistic kinematics it is necessary to have a covariant differential calculus on the $q$-Minkowski space. Taking into account the classification of differential calculi for the Manin plane \cite{brzez2}, we obtain \begin{eqnarray} x_\pm\,{\rm d}x_\pm&=&{\rm d}x_\pm\,x_\pm,\\ x_\pm\,{\rm d}x_\mp&=&q^{\pm1}\,{\rm d}x_\mp\,x_\pm,\\ {\rm d}x_+\,{\rm d}x_-&=&-q\,{\rm d}x_-\,{\rm d}x_+,\\ ({\rm d}x_\pm)^2&=&0. \end{eqnarray} Now we assume that $x_\pm$ depend on an affine parameter $\tau$, i.e.\ \begin{equation} x_\pm=x_\pm(\tau),\quad{\rm d}x_\pm=\dot x_\pm\,{\rm d}\tau. \end{equation} Consequently, assuming the existence of free motion (we admit solutions $\ddot x_\pm=0$), we obtain \begin{eqnarray} x_\pm\dot x_\mp&=&q^{\pm1}\dot x_\mp x_\pm,\\ x_\pm\dot x_\pm&=&\dot x_\pm x_\pm,\\ \dot x_+\dot x_-&=&q\dot x_-\dot x_+. \end{eqnarray} Momentum is defined in the standard way. We assume that the inertial mass $m$ commutes with $x_\pm$ (i.e.\ $m$ has no geometrical meaning). \begin{equation} m^*=m,\quad\dot m=0, \end{equation} \begin{equation} mx_\pm=x_\pm m. \end{equation} Now for a free particle \begin{equation} p_\pm=m\dot x_\pm, \end{equation} so \begin{equation} p^*_\pm=p_\pm, \end{equation} \begin{equation} p_+p_-=qp_-p_+,\label{momenta} \end{equation} according to the geometrical interpretation of $p_\pm$. 
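Let us note explicitly how Eq.~(\ref{momenta}) follows (a one-line check): differentiating $mx_\pm=x_\pm m$ with $\dot m=0$ gives $m\dot x_\pm=\dot x_\pm m$, so $$p_+p_-=m^2\dot x_+\dot x_-=qm^2\dot x_-\dot x_+=qp_-p_+~,$$ where we used $\dot x_+\dot x_-=q\dot x_-\dot x_+$. 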
\section{Quantum Geometry} The difference between the powers of $q$ in Eq.~(\ref{qcommut}) and Eq.~(\ref{momenta}) is related to the contravariant nature of $p_\pm=m\dot x_\pm$ and the covariant nature of $P_\pm$. This suggests the possible existence of an analogue of the metric in the $q$-Minkowski space-time. Let us define a ``quantum'' geometry in our Minkowski space-time by means of a ``quantum metrics''; namely \begin{equation} {\rm d}s^2=\pmatrix{{\rm d}x_+,&{\rm d}x_-}\pmatrix{0&1\cr1&0} \pmatrix{{\rm d}x_+\cr{\rm d}x_-} \end{equation} is replaced by \begin{equation} {\rm d}s^2_q=\pmatrix{{\rm d}x_+,&{\rm d}x_-} \pmatrix{0&q^\frac{1}{2}\Gamma\cr q^{-\frac{1}{2}}\Gamma&0} \pmatrix{{\rm d}x_+\cr{\rm d}x_-}, \end{equation} where the new generator $\Gamma$ satisfies \begin{equation} \Gamma^*=\Gamma,\quad\dot\Gamma=0,\quad\delta(\Gamma)=\sigma^{-2}\otimes\Gamma, \end{equation} \begin{eqnarray} \Gamma m&=&m\Gamma,\\ \Gamma x_\pm&=&q^{\pm1}x_\pm\Gamma. \end{eqnarray} The ``line'' element ${\rm d}s^2_q$ can be written in a compact form as \begin{equation} {\rm d}s^2_q={\rm d}x^\mu g^q_{\mu\nu}{\rm d}x^\nu \end{equation} with $[g^q_{\mu\nu}] =\pmatrix{0&q^\frac{1}{2}\Gamma\cr q^{-\frac{1}{2}}\Gamma&0}$, $\mu,\nu=\pm$. Notice that $[g^{\mu\nu}_q] =\pmatrix{0&q^\frac{1}{2}\Gamma^{-1}\cr q^{-\frac{1}{2}}\Gamma^{-1}&0}$. It is easy to see that ${\rm d}s^2_q$ belongs to the center of the space-time algebra. Consequently the square of the relativistic momentum reads \begin{equation} p^2=p^\mu g^q_{\mu\nu}p^\nu =q^\frac{1}{2}p_+\Gamma p_-+q^{-\frac{1}{2}}p_-\Gamma p_+=m^2C^2, \end{equation} where $C^2=q^\frac{1}{2}\dot x_+\Gamma\dot x_-+q^{-\frac{1}{2}}\dot x_-\Gamma\dot x_+$ is the square of the ``light velocity''. Both $p^2$ and $C^2$ also belong to the center of the space-time algebra. Note that the relation between the momenta satisfying Eq.~(\ref{qcommut}) and Eq.~(\ref{momenta}) is of the form $P_\mu=g^q_{\mu\nu}p^\nu$. Let us summarize the basic space-time algebra rules \begin{eqnarray} x_+x_-&=&qx_-x_+,\\ p_+p_-&=&qp_-p_+,\\ x_\pm p_\pm&=&p_\pm x_\pm,\\ x_\pm p_\mp&=&q^{\pm1}p_\mp x_\pm,\\ \Gamma x_\pm&=&q^{\pm1}x_\pm\Gamma,\\ \Gamma p_\pm&=&q^{\pm1}p_\pm\Gamma. \end{eqnarray} It is evident that the non-commutativity of coordinates and momenta implies a number of ``classical'' Heisenberg-like uncertainty relations. However, an appropriate analysis demands a knowledge of the representations of the above algebra in the (rigged) Hilbert space framework. \section*{Acknowledgements} We are grateful to K.~A.~Smoli\'nski for fruitful discussions. One of us (J.R.) thanks David Yetter for his kind hospitality.
\section{Introduction} An intriguing open problem in the theory of stationary axisymmetric vacuum black holes is that of the existence of regular solutions with disconnected components. Key progress on this was made recently by Hennig and Neugebauer~\cite{HennigNeugebauer3} (see also~\cite{Neugebauer:2003qe,Varzugin1}) concerning two-component solutions: \begin{enumerate} \item According to~\cite{Neugebauer:2003qe,HennigNeugebauer,HennigNeugebauer3,Varzugin1}, if a sufficiently regular multi-component solution exists, it must belong to the multi-Kerr family. \item There are \emph{no} double-Kerr solutions satisfying a \emph{``sub-extremality''} condition~\cite{HennigNeugebauer,HennigNeugebauer2,HennigNeugebauer3}. \end{enumerate} The sub-extremality condition of~\cite{HennigNeugebauer,HennigNeugebauer2,HennigNeugebauer3} appears as an undesirable restriction on the class of space-times considered. It is the purpose of this work to tie instead the analysis of Hennig and Neugebauer to stability properties of Killing horizons, and to point out that the stability condition is necessarily satisfied by, say, $I^+$-regular black hole space-times. More precisely, in Section~\ref{S28VII11.1} below we establish the following result: \begin{theorem} \label{T28VII11.1} {$I^+$-regular two-Kerr solutions do not exist}. \end{theorem} Some more comments on the issues arising might be in order. The key use of the sub-extremality condition in~\cite{HennigNeugebauer,HennigNeugebauer2,HennigNeugebauer3} is a result from \cite{HennigAnsorgCederbaum} which states that the horizon area $A$ and the horizon angular momentum $J$ satisfy $A > 8\pi|J|$. Interestingly, a recent result of~\cite{DainReiris} (see also~\cite{JRD}) asserts that an alternative condition on the horizon implies a weaker area-angular momentum inequality $A \geq 8\pi|J|$. This naturally raises the following questions: \begin{enumerate}[(a)] \item whether the sub-extremality condition in~\cite{HennigNeugebauer,HennigNeugebauer2,HennigNeugebauer3} is related to the conditions in~\cite{DainReiris}, \item and whether some natural global properties of the space-times considered imply that horizons satisfy the condition. \end{enumerate} These are the main issues addressed by this work. In fact, our proof may be viewed as a verification that~\cite{DainReiris} applies to the problem at hand, i.e. the answer to (b) is positive for $I^+$-regular double-Kerr black holes. Regarding (a), while the proofs of the area-angular momentum inequality in~\cite{HennigAnsorgCederbaum} and in~\cite{DainReiris} are different, they share the same intermediate step: the inequality (15) in~\cite{HennigAnsorgCederbaum} is the strict version of the inequality (34) in \cite{DainReiris}. In any event, we give in Appendix~\ref{A15VII11.1} a self-contained derivation of the inequality as needed for our purposes, using an argument closely related to, but not identical with, that in~\cite{HennigAnsorgCederbaum}. \section{The proof} \label{S28VII11.1} Arguing by contradiction, consider (regular) metrics of the form \bel{10VII11.1} g= f^{-1}( h (d\rho^2 + dz^2 ) + \rho^2 d\varphi^2 ) - f (dt+ a d\varphi)^2 \end{equation} on \bel{28VII11.1} \mbox{${\cal M}:=\{t,z\in \Bbb R$, $\varphi\in [0,2\pi] $, $\rho\in [0,\infty)\}$, } \end{equation} where $f$ takes the explicit form considered in~\cite{HennigNeugebauer3}. 
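For later reference we note that \eq{10VII11.1} gives directly $$g(\partial_t,\partial_t)=-f\;,\quad g(\partial_t,\partial_\varphi)=-fa\;,\quad g(\partial_\varphi,\partial_\varphi)=f^{-1}\rho^2-fa^2\;;$$ the last expression is the norm entering the causality condition imposed in point 2.\ below. 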
The potential $a$ can be obtained from~\cite[Eq.~(30)]{HennigNeugebauer3} (the auxiliary function $\chi$ needed for this can be found in Eq.~(92) there, while $\psi$ can be obtained from $\chi$ using Eq.~(26) there). The function $h$ can be calculated using \cite[Eq. (15)]{Kramer}.% \footnote{We are grateful to J.~Hennig for pointing out the relevant equations, and for making his {\sc Maple} files available to us.} This results in a manifestly stationary and axisymmetric metric: $\partial_t g_{\mu\nu}=0=\partial_\varphi g_{\mu\nu}$. We start by noting that double-Kerr solutions with vanishing surface gravity of both components (in the notation below, this corresponds to $K_1=K_2$ and $K_3=K_4$, whence $S=\emptyset$) have been shown to be singular in~\cite{HennigNeugebauer2} without supplementary hypotheses, and do not require further considerations. It remains to consider the case where both components have non-vanishing surface gravity or exactly one of the components has non-vanishing surface gravity. We assume that the metric \eq{10VII11.1} describes the \emph{domain of outer communications} of a well behaved vacuum space-time with an event horizon which has two components. More precisely, we \emph{assume} that there exists a choice of the parameters occurring in the metric functions such that the following conditions hold. First, we assume that \begin{enumerate}[1.] \item The metric functions $f$, $af$, $a^2f-f^{-1}\rho^2$ and $hf^{-1}$ are smooth for $\rho>0$. \item The Killing vector $\partial_\varphi$ is spacelike wherever non-vanishing. In particular $$ g(\partial_\varphi,\partial_\varphi) = f^{-1} \rho^2 - f a^2 > 0 \ \mbox{for} \ \rho>0 \;. $$ \end{enumerate} In case that both components of the horizon are non-degenerate, i.e. they have non-vanishing surface gravity, we supplement the above with an assumption that there exist $K_1 > K_2 > K_3 > K_4$ such that \begin{enumerate}[1.] \addtocounter{enumi}{2} \item On the boundary $$\cal A:=\{\rho=0\} \;, $$ the intervals defined by $z\not\in[K_1,K_2] \cup [K_3,K_4]$ can be mapped, by a suitable coordinate transformation, to a smooth axis of rotation for the Killing vector $\partial_\varphi$. \item There exists a coordinate system in which the manifold $\cal S_0:=\{t=0\}$ equipped with the metric % \bel{10VII11.2} f^{-1}( h (d\rho^2 + dz^2 ) + \rho^2 d\varphi^2 ) - f (a d\varphi)^2 \end{equation} is a smooth Riemannian manifold whose boundary is located exactly at the intervals $z \in[K_1,K_2]$ and $z \in [K_3,K_4]$ such that each of these intervals corresponds to a smooth sphere. \item Finally, the space-time manifold ${\cal M}$ defined in \eq{28VII11.1} can be extended so that the boundary spheres \bel{14VII11} \mbox{$S_1:=\{t=0,z \in[K_1,K_2]\}$ and $S_2:=\{t=0,z \in [K_3,K_4]\}$} \end{equation} % are bifurcation surfaces of bifurcate Killing horizons. \end{enumerate} If one of the components of the horizon is degenerate (i.e. it has vanishing surface gravity) and one is non-degenerate, we assume that there exist $K_1 > K_2 > K_3 = K_4$ such that \begin{enumerate}[1'.] \addtocounter{enumi}{2} \item On the boundary $\cal A:=\{\rho=0\}$, the intervals defined by $z\not\in[K_1,K_2] \cup \{K_3\}$ can be mapped, by a suitable coordinate transformation, to a smooth axis of rotation for the Killing vector $\partial_\varphi$. 
\item There exists a coordinate system in which the manifold $\cal S_0:=\{t=0\}$ equipped with the metric % \bel{10VII11.2asdf} f^{-1}( h (d\rho^2 + dz^2 ) + \rho^2 d\varphi^2 ) - f (a d\varphi)^2 \end{equation} is a smooth Riemannian manifold which has a boundary located at the interval $z \in[K_1,K_2]$ corresponding to a smooth sphere and an asymptotically cylindrical end located at $z = K_3$. \item Finally, the space-time manifold ${\cal M}$ defined in \eq{28VII11.1} can be extended so that the boundary sphere \bel{14VII11asdf} \mbox{$S_1:=\{t=0,z \in[K_1,K_2]\}$} \end{equation} % is a bifurcation surface of a bifurcate Killing horizon while $z = K_3$ corresponds to a smooth degenerate Killing horizon. \end{enumerate} In point 4', we adopt the following definition for a cylindrical end: An initial data set $([0,\infty)\times N,g,K)$ is called a \emph{cylindrical end} if $N$ is a compact manifold, if the metric $g$ approaches an $x$--independent Riemannian metric on $[0,\infty)\times N$ as the variable $x$ running along the $ [0,\infty)$--factor tends to infinity, and if the extrinsic curvature tensor $K$ approaches an $x$-independent symmetric tensor field as $x$ tends to infinity. The seemingly \emph{ad hoc} requirements set forth above can be derived from the hypothesis of \emph{$I^+$-regularity} of the two-component black-hole axisymmetric configuration under consideration, see \cite{ChCo,ChNguyenMass,ChNguyen} and references therein. Recall that, in vacuum, the \emph{stability operator} for a marginally outer trapped surface (MOTS) is defined as \bel{16VII11.21} -\Delta_S \phi + 2 K(\nu,{\mycal D} \phi) + \big(\mbox{\rm div}_S K(\nu, \cdot) - \frac 12 |\chi |^2 - |K(\nu, \cdot)|^2 + \frac 12 R_S \big)\phi \;, \end{equation} where $\phi$ is a smooth function on $S$, $\Delta_S$ is the Laplace operator on $S$, $\nu$ is a field of unit normals, ${\mycal D}$ is the Levi-Civita derivative of the metric on $S$, $\mbox{\rm div}_S$ is the divergence on $S$, and $R_S$ is the scalar curvature of the metric induced on $S$. Finally, if $W$ is the second fundamental form of $S$, then $\chi$ is defined as $W+K_S$, where $K_S$ is the pull-back of $K$ to $S$. Following~\cite{AndM2,AMS2}, a MOTS will be called \emph{stable} if the smallest real part of all eigenvalues of the stability operator is non-negative. It follows from Appendix~\ref{A14VII11.1} that, on $S$, $K(\nu, \cdot)$ is proportional to $d\varphi$, and hence $K(\nu, {\mycal D} \phi)$ vanishes for {axisymmetric} functions $\phi$. The divergence $\mbox{\rm div}_S K(\nu, \cdot)$ vanishes for similar reasons. Since all the metric functions are smooth functions of $\rho^2$ away from the axis of rotation, one easily finds that $W$ vanishes. So, on $\varphi$-independent functions, the stability operator reduces to \bel{16VII11.21asdf} -\Delta_S \phi +\frac 12 \big( R_S - |K|^2 \big)\phi \;, \end{equation} which, after using the vacuum constraint equation on the maximal hypersurface $\cal S_0$, is the familiar stability operator for minimal surfaces \emph{within the class of functions that are invariant under rotations}. 
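To make the last step explicit (a one-line computation): on the maximal hypersurface one has $\mbox{\rm tr}\, K=0$, so the vacuum scalar constraint $R_{{\cal S}_0}=|K|^2-(\mbox{\rm tr}\, K)^2$ gives $|K|^2=R_{{\cal S}_0}$, and \eq{16VII11.21asdf} takes the form $$-\Delta_S \phi +\frac 12 \big( R_S - R_{{\cal S}_0} \big)\phi \;,$$ which is the operator entering the stability inequality below. 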
We note the following theorem, pointed out to us by M.~Eichmair (private communication), which follows from the existence theory for marginally outer trapped surfaces~\cite{AEM} with an additional argument to accommodate the cylindrical ends: \begin{Theorem} \label{T13VII1.1} Consider a smooth initial data set $(M,g,K)$, where $M$ is the union of a compact set with several asymptotic ends in which the metric is either asymptotically flat or asymptotically cylindrical, with at least one asymptotically flat end. If $M$ has a non-empty boundary, assume that $\partial M$ is weakly inner-trapped, where ``inner'' refers to variations pointing towards $M$. If $|K|$ tends to zero as one recedes to infinity along the cylindrical ends (if any), then there exists an outermost smooth compact MOTS in $M$ which is \underline{stable}. \end{Theorem} Indeed, this theorem is one of the main results in \cite{AndM2,Eichmair} when there are only asymptotically flat ends. In the setting above one can deform each cylindrical end to an asymptotically flat end for $x\ge x_0$, and apply the asymptotically flat version of the theorem to the deformed data set. When $x_0$ has been chosen large enough one shows that the resulting stable smooth compact MOTS is contained in the region $x\le x_0$, and hence provides a stable MOTS within the original undeformed data set. This argument uses the asymptotic vanishing of $|K|$ in the cylindrical ends. Unfortunately this last condition is too strong to be useful for a direct argument which only uses the $\{t=0\}$ slice. An extension of Theorem~\ref{T13VII1.1} to cylindrical ends with tensors $K$ as described in Appendix~\ref{A14VII11.1} would immediately extend our analysis of the $\{t=0\}$ slice to configurations with one horizon degenerate and one not. (In fact, one would only need an extension of this theorem to axially symmetric MOTS, in which case various terms in the relevant equations vanish, possibly simplifying the analysis.) In any event, the version in \cite{AndM2,Eichmair} suffices for our subsequent analysis. Going back to our problem, let us first treat the case where the horizon has two non-degenerate components: we assume that the boundaries $S_1$ and $S_2$ of ${\cal S}_0 = \{t = 0\}$ defined in \eq{14VII11} are non-empty. Now, it is standard that each $S_a$ has vanishing null expansions, in particular it is weakly outer-trapped; by the latter we mean that the future outer null expansion of $S_a$ is non-negative. We want to show that each $S_a$ is \emph{stable} in the sense of~\cite{AndM2}. For this, suppose first that $S:=S_1\cup S_2$ is not \emph{outermost} as a MOTS. By Theorem~\ref{T13VII1.1} (without cylindrical ends) $\cal S_0$ contains a smooth compact MOTS, say $S'$, which must be distinct from $S$ since $S'$ is outermost and $S$ is not. In particular $S'$ contains a smooth compact component, say $S'_1$, which is distinct from both $S_a$'s. By~\cite[Theorem~6.1]{CGS} the MOTS $S_1'$ cannot be seen from $\scrip$, hence $\cal S_0$ is not contained in the domain of outer communications, which is a contradiction. Thus $S$ is a stable MOTS. On the other hand, we have just seen that the MOTS-stability operator of $S$ coincides with its minimal-surface stability operator. We deduce that $S$ is a minimal surface within $\cal S_0$ which is weakly stable, in the sense of minimal surfaces contained in $\cal S_0$, with respect to variations invariant under rotations around an axis of symmetry. 
In other words, one has \[ \int_{S} \Big\{|\nabla_S \phi|^2 + \frac{1}{2} (R_S - R_{{\cal S}_0})\phi^2\Big\}\,dv_S \geq 0 \text{ for all } \phi \in C^\infty(S), \partial_\varphi \phi = 0 \] (compare Equation (2.12) in~\cite{GallowaySchoen}). This allows us to invoke~\cite{DainReiris} (see also Appendix~\ref{A15VII11.1} below) to conclude that each component of $S$ satisfies \bel{26VII11.1} A_a \ge 8\pi |J_a| \;, \end{equation} where $A_a$ is the area of $S_a$ and $J_a$ its Komar angular momentum. But this inequality has been shown to be violated by at least one of the components% \footnote{Actually Hennig and Neugebauer draw contradictions from $A_a > 8\pi |J_a|$, but the inequality \eq{26VII11.1} suffices.} of $S$ by Hennig and Neugebauer~\cite{HennigNeugebauer}, which gives a contradiction, and proves Theorem~\ref{T28VII11.1}. Consider, finally, a two-Kerr metric with exactly one non-degenerate component, and assume that the space-time is $I^+$-regular. As already mentioned, Theorem \ref{T13VII1.1} does not apply directly to the hypersurface $\{t = 0\}$ which contains a cylindrical end. Instead we argue as follows. Let $\cal S$ be any hypersurface in the d.o.c. with compact boundary on a degenerate component of $\partial J^-(M_{\mathrm{ext}}) \cap I^+ (M_{\mathrm{ext}})$ (for notation, see~\cite{ChCo}), and which coincides with the surface $\{t=0\}$ near the non-degenerate component of the event horizon. One possible method to obtain such a hypersurface is to start with a spacelike, acausal hypersurface given by the $I^+$-regularity assumption and deform it near the bifurcate horizon to $\{t = 0\}$ using the construction in \cite{RaczWald2}. We also detail a different construction for one such hypersurface in Appendix \ref{AppSurfConstr}, which is more elementary in nature. Arguing exactly as before, both components of $\partial \cal S$ are stable in the sense of MOTSs. It follows as above that the non-degenerate component of the event horizon satisfies \eq{26VII11.1}. Now, for such configurations Hennig and Neugebauer show that the inequality $A_1 > 8\pi |J_1|$ implies that the total mass $m$ of the metric is strictly negative. One can check (J.~Hennig, private communication) that under \eq{26VII11.1} the conclusion still holds, which is incompatible with the positive energy theorem for black holes~\cite{Herzlich:bh}. Hence no such configurations are possible, and the theorem is established.
\section{Strange Matter and Hypermatter} Relativistic heavy ion collisions provide a promising tool for studying the physics of strange quark and strange hadronic matter (see recent review \cite{SM96}). Fig.~\ref{fig:grei1} shows schematically the phase diagram of hot, dense and strange matter. Perhaps the only unambiguous way to detect the transient existence of a quark gluon plasma (QGP) might be the experimental observation of exotic remnants, like the formation of strange quark matter (SQM) droplets. First studies in the context of the MIT-bag model predicted that sufficiently heavy strangelets might be metastable or even absolutely stable. The reason for the possible stability of SQM is related to a third flavour degree of freedom, the strangeness. As the mass of the strange quark is smaller than the Fermi energy of the quarks, the total energy of the system is lowered by adding strange quarks. According to this picture, the number of strange quarks is nearly equal to the number of massless up or down quarks and saturated SQM is nearly charge neutral. This simple picture does not hold for small baryon numbers. Finite size effects unavoidably shift the mass of strangelets to the metastable regime. Moreover strangelets can have very high charge to mass ratios for low baryon numbers. This behaviour is well known from normal nuclei. Therefore, instead of long-lived nearly neutral objects, strangelet searches in heavy ion experiments have to cope with short-lived highly charged objects \cite{Carsten93}! \begin{figure} \centerline{ \epsfysize=0.35\textheight \epsfbox{jsfig1.ps}} \caption{\protect\small Sketch of the nuclear matter phase diagram with its extensions to strangeness and to the antiworld.} \label{fig:grei1} \end{figure} On the other hand, metastable exotic multihypernuclear objects (MEMOs) consisting of nucleons and hyperons have been proposed \cite{Sch92} which extend the periodic system of elements into a new dimension. MEMOs have remarkably different properties compared with known nuclear matter, e.g.\ being negatively charged while carrying a positive baryon number! Even purely hyperonic matter has been predicted \cite{Sch93}. These rare composites would have a very short lifetime, of the order of the lifetime of the $\Lambda$. Central relativistic heavy ion collisions provide a prolific source of hyperons and hence, possibly, a way of producing MEMOs. Again, heavy ion experiments looking for these exotic composites have to deal with very short-lived highly charged objects! \section{Hyperon-rich Matter in Neutron Stars} Strangeness, in the form of hyperons, appears in neutron star matter at a moderate density of about $2-3$ times normal nuclear matter density $\rho_0=0.15$ fm$^{-3}$ as shown by Glendenning within the Relativistic Mean Field (RMF) model \cite{Glen87}. These new species have considerable influence on the equation of state and the global properties of neutron stars. On the other hand, much attention has been paid in recent years to the possible onset of kaon condensation as the other hadronic form of strangeness in neutron stars. Most recent calculations based on chiral perturbation theory \cite{Brown94} show that kaon condensation may set in at densities of $(3-4)\rho_0$. Nevertheless, these calculations do not take into account the presence of hyperons which may already occupy a large fraction of matter when the kaons possibly start to condense \cite{Sch94b}. 
Below I present new results from our recent paper \cite{Sch96}, where the properties of neutron matter with hyperons were studied in detail. We use the extended version of the RMF model and constrain our parameters to the available hypernuclear data and to the kaon nucleon scattering lengths. \subsection{The Model with Hyperons} The implementation of hyperons within the RMF approach is straightforward. SU(6)-symmetry is used for the vector coupling constants, and the scalar coupling constants are fixed to the potential depth of the corresponding hyperon in normal nuclear matter \cite{Sch93}. We choose \begin{equation} U_\Lambda^{(N)} = U_\Sigma^{(N)} = -30 \mbox{ MeV} \quad , \qquad U_\Xi^{(N)} = -28 \mbox{ MeV} \quad. \label{eq:potdep1} \end{equation} Note that a recent analysis \cite{Mar95} comes to the conclusion that the potential changes sign in the nuclear interior, i.e.\ it is repulsive instead of attractive. In this case, $\Sigma$ hyperons will not appear at all in our calculations. The observed strongly attractive $\Lambda\Lambda$ interaction is introduced by two additional meson fields, the scalar meson $f_0(975)$ and the vector meson $\phi(1020)$. The vector coupling constants to the $\phi$-field are given by SU(6)-symmetry and the scalar coupling constants to the $\sigma^*$-field are fixed by \begin{equation} U^{(\Xi)}_\Xi \approx U^{(\Xi)}_\Lambda \approx 2U^{(\Lambda)}_\Xi \approx 2U^{(\Lambda)}_\Lambda \approx -40 \mbox{ MeV} \quad . \label{eq:potdep2} \end{equation} Note that the nucleons are not coupled to these new fields. \subsection{Neutron Stars with Hyperons} Fig.~\ref{fig2} shows the composition of neutron star matter for the parameter set TM1 with hyperons including the hyperon-hyperon interactions. \begin{figure} \vspace{-0.5cm} \epsfysize=0.5\textheight \centerline{\epsfbox{jsfig2.ps}} \vspace{-1.3cm} \caption{ The composition of neutron star matter with hyperons, which appear abundantly in the dense interior. } \label{fig2} \end{figure} Up to the maximum density considered here all effective masses remain positive and no instability occurs. The proton fraction has a plateau at $(2-4)\rho_0$ and exceeds 11\%{}, which allows for the direct URCA process and a rapid cooling of a neutron star. Hyperons, first $\Lambda$'s and $\Sigma^-$'s, appear at $2\rho_0$, then $\Xi^-$'s are populated already at $3\rho_0$. The number of electrons and muons has a maximum here and decreases at higher densities, i.e.\ the electrochemical potential decreases at high densities. The fractions of all baryons show a tendency towards saturation; they asymptotically reach similar values corresponding to spin-isospin and hypercharge-saturated matter. Hence, a neutron star is more likely a giant hypernucleus! \subsection{Kaon Condensation?} In the following we adopt the meson-exchange picture for the KN-interaction simply because we use it also for parametrizing the baryon interactions. We start from the following Lagrangian \begin{equation} {\cal L}'_K = D^*_\mu \bar K D^\mu K - m_K^2 \bar K K - g_{\sigma K} m_K \bar{K}K \sigma - g_{\sigma^* K} m_K \bar{K}K \sigma^* \label{eq:modlagr} \end{equation} with the covariant derivative \begin{equation} D_\mu = \partial_\mu + ig_{\omega K} V_\mu + ig_{\rho K} \vec{\tau}\vec{R}_\mu + ig_{\phi K}\phi_\mu \quad . \end{equation} The coupling constants to the vector mesons are chosen from SU(3)-relations. The scalar coupling constants are fixed by the s-wave KN-scattering lengths. 
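For orientation, we note the zero-momentum dispersion relations which follow from the Lagrangian (\ref{eq:modlagr}) in static, uniform mean fields (the standard mean-field result, sketched here; $V_0$ denotes the net time-like vector field collecting the $\omega$-, $\rho$- and $\phi$-contributions felt by the kaon): $$\omega_{K^\pm}(k=0)=m_K^*\pm V_0~,\qquad m_K^{*2}=m_K^2+g_{\sigma K}m_K\sigma+g_{\sigma^* K}m_K\sigma^*~,$$ so that the $K$ and $\bar K$ branches are split symmetrically by $V_0$ and cross where $V_0$ changes sign. 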
We have found that this leads to an $\bar K$-optical potential around $U^{\bar K}_{\rm opt} = -(130\div 150)$ MeV at normal nuclear density for the various parameter sets used. This is between the two families of solutions found for kaonic atoms \cite{Fried93}. The onset of s-wave kaon condensation is now determined by the condition $- \mu_e = \mu_{K^-} \equiv \omega_{K^-} (k=0)$. \begin{figure} \vspace{-0.5cm} \epsfysize=0.45\textheight \centerline{\epsfbox{jsfig3.ps}} \vspace{-1.3cm} \caption{ The effective energies of the kaon and the antikaon, and the electrochemical potential. Kaon condensation does not occur over the whole density region considered. } \label{fig8} \end{figure} The density dependence of the K and $\bar K$ effective energies is displayed in Fig.~\ref{fig8}. The energy of the kaon is first increasing in accordance with the low density theorem. The energy of the antikaon is decreasing steadily at low densities. With the appearance of hyperons the situation changes dramatically. The potential induced by the $\phi$-field cancels the contribution coming from the $\omega$-meson. Hence, at a certain density the energies of the kaons and antikaons become equal to the kaon (antikaon) effective mass, i.e.\ the curves for kaons and antikaons cross at a sufficiently high density. At higher densities the energy of the kaon gets even lower than that of the antikaon! Since the electrochemical potential never reaches values above 160 MeV here, antikaon condensation does not occur at all. We have checked the possibility of antikaon condensation for all parameter sets and found that at least 100 MeV are missing for the onset of kaon condensation, in contrast to previous calculations disregarding hyperons \cite{Brown94}. \section{Strange Antiworld} There is evidence for strong scalar and vector potentials in nuclear matter. Already in 1956 D\"urr and Teller proposed a relativistic model with strong scalar and vector potentials to explain the saturation of nuclear forces \cite{Duerr56} and found a scalar potential of $U_s=a m_N\phi$ where $\phi$ is a scalar field and $a$ is a coupling constant. This was the first version of the RMF model discussed above, where $U_s=g_{\sigma} \sigma$; in its extension, the Relativistic Br\"uckner-Hartree-Fock (RBHF) calculations \cite{Cel92}, the scalar potential is the scalar part of the self-energy of the nucleon, $U_s=\Sigma_s(p_N)$. In the chiral $\sigma\omega$ model \cite{Bog83} one finds $U_s = m_N \sigma /f_\pi - m_N$ which incorporates the (approximate) chiral symmetry of the underlying QCD Lagrangian. Here $f_\pi$ is the pion decay constant. Besides these models based on a hadronic description there exist effective models dealing with constituent quarks, like the Nambu--Jona-Lasinio (NJL) model and models based on QCD sum rules \cite{Coh91}. These models can be linked to hadronic observables in the dense medium by using the low density expansion of the quark condensate which gives $U_s=-\frac{m_N \sigma_N}{m_\pi^2 f_\pi^2}\rho_N$, where $\sigma_N\approx 45$ MeV is the pion-nucleon sigma term. Astonishingly, {\em all} these approaches come to the same conclusion, namely that the scalar potential is as big as \begin{equation} U_s = -(350\div 400) \mbox{ MeV } \rho_N/\rho_0 \end{equation} for moderate densities! This strong scalar attraction has to be compensated by a strong repulsion to get the total potential depth of nucleons correct. 
Hence, one finds for the vector potential \begin{equation} U_v = (300\div 350) \mbox{ MeV } \rho_N/\rho_0 \quad . \end{equation} These big potentials are in fact needed to get a correct spin-orbit potential. The idea of D\"urr and Teller \cite{Duerr56} was that the antinucleons feel the difference of these two potentials, i.e. \begin{equation} U_{\bar N} = U_s - U_v = -(650\div 750) \mbox{ MeV } \rho_N/\rho_0 \end{equation} which is already comparable to the mass of the nucleon. Note that the extrapolation to high densities is quite dangerous as effects nonlinear in density might become important. It is already known from RMF models that the scalar potential saturates at high densities instead of growing steadily. RBHF calculations show that this might also be true for the vector potential. With this in mind one can extrapolate to higher densities and finds that the field potentials get overcritical at $\rho_c=(3-7)\rho_0$, which was first pointed out by Mishustin \cite{Mish90}. At this critical density the potential felt by the antinucleons is equal to $|U_{\bar N}|=2m_N$, the negative energy states are diving into the positive continuum, and this allows for spontaneous nucleon-antinucleon pair production. This has certain parallels to the spontaneous $e^+e^-$ production proposed by Pieper and Greiner \cite{Piep69}. Assuming SU(6)-symmetry one gets for $\Lambda$'s \begin{equation} U^\Lambda_v = (200\div 230) \mbox{ MeV } \rho_N/\rho_0 \end{equation} and, combining with hypernuclear data, this then gives for the total $\bar\Lambda$ potential \begin{equation} U_{\bar\Lambda} = U^\Lambda_s - U^\Lambda_v = U_\Lambda - 2U^\Lambda_v = -(430\div 500) \mbox{ MeV } \rho_N/\rho_0 \quad . \end{equation} In the hyperon-rich medium additional fields will enhance this potential. Assuming again SU(6)-symmetry one can estimate the vector potential coming from the $\phi$ meson \begin{equation} V^\Lambda_v = \frac{2}{9} \frac{m_\omega^2}{m_\phi^2} U_v \cdot f_s \approx 40 \mbox{ MeV } \rho_B/\rho_0 \cdot f_s \end{equation} where $f_s$ is the total strangeness fraction. The corresponding strange scalar potential is in principle unknown, but it must be stronger than the strange vector potential in order to explain the strongly attractive $\Lambda\Lambda$ interaction seen in double $\Lambda$ hypernuclei. Hence one gets at least an additional $\bar\Lambda$ potential of \begin{equation} V_{\bar\Lambda} = V^\Lambda_s - V^\Lambda_v \approx - 120 \mbox{ MeV } \rho_B/\rho_0 \cdot f_s \end{equation} in the hyperon-rich medium. These strong antibaryon potentials will have certain impacts on heavy ion reactions. Proposed signals for antiprotons are: enhanced subthreshold production \cite{Mish90}, change of the slope of the excitation function \cite{Mish90}, apparent higher temperatures \cite{Koch91}, which have indeed been measured at GSI \cite{gsianti}. Nevertheless, a recent analysis indicates that the antiproton potential might be quite shallow at normal nuclear density, around $U_{\rm\bar p}= -100$ MeV \cite{giessen}. Possible other signals include: enhanced antihyperon production \cite{Mish90}, strong antiflow of antibaryons \cite{Jahns}, cold baryons from tunnelling \cite{Mish90}, cold kaons from annihilation in the medium (the phase space of the reaction $\bar\Lambda + {\rm p}\to{\rm K}^+ + \pi$'s is reduced by $U_{\bar\Lambda} + U_N - U_K \approx -600 \mbox{ MeV } \rho_N/\rho_0$ compared to the vacuum), enhanced pion production due to the abundant annihilation processes, which would also enhance the entropy. 
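To keep the magnitudes used in this section in one place, the following back-of-the-envelope script collects them (a minimal sketch assuming the mid-range values quoted above and a linear density scaling which, as discussed, saturation effects correct at high density): \begin{verbatim}
# Rough estimates of the antibaryon potentials discussed above.
# All potentials in MeV at rho = rho_0; linear density scaling assumed.
m_N = 939.0                    # nucleon mass, MeV

U_s = -375.0                   # scalar potential, mid-range of -(350-400)
U_v = 325.0                    # vector potential, mid-range of (300-350)

U_antiN = U_s - U_v            # antinucleon potential: about -700 MeV
U_v_Lam = 2.0 / 3.0 * U_v      # SU(6): Lambda vector coupling = 2/3 of U_v
U_Lam = -30.0                  # Lambda potential depth in nuclear matter
U_antiLam = U_Lam - 2.0 * U_v_Lam   # about -460 MeV

# Naive critical density (units of rho_0) where |U_antiN| reaches 2 m_N;
# saturation of the fields pushes this up to the quoted (3-7) rho_0.
rho_c = 2.0 * m_N / abs(U_antiN)    # about 2.7
print(U_antiN, U_antiLam, rho_c)
\end{verbatim}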
Definitely, more elaborate work is needed to pin down the possible signals from the critical phenomenon of the antiworld. We conclude this section with a brief comment concerning the limitations of the RMF model. This is clearly an effective model which successfully describes nuclear phenomenology in the vicinity of the ground state. On the other hand, this model does not respect chiral symmetry and the quark structure of baryons and mesons. Also negative energy states of baryons and quantum fluctuations of meson fields are disregarded. These deficiencies may affect significantly the extrapolations to high temperatures, densities or strangeness contents. \section*{Acknowledgements} This paper is dedicated to Prof.\ Walter Greiner on the occasion of his 60$^{\rm th}$ birthday. I am indebted to him for guiding me to the fascinating field of hypermatter and antimatter and his continuous support. I thank my friends and colleagues A. Diener, C.B. Dover, A. Gal, Carsten Greiner, and especially I.N. Mishustin and H. St\"ocker for their help and collaboration which made this work possible.
\section*{Acknowledgments} L.M. acknowledges CINECA under the ISCRA initiative for providing high performance computational resources employed in this work. M.C. thanks GENCI for providing computational resources under the grant number 0906493, the Grands Challenge DARI for allowing calculations on the Joliot-Curie Rome HPC cluster under the project number gch0420. M.C., K.N., and S.S. thank RIKEN for providing computational resources of the supercomputer Fugaku through the HPCI System Research Project (Project ID: hp210038). K.N. acknowledges support from the JSPS Overseas Research Fellowships, from Grant-in-Aid for Early Career Scientists (Grant No.~JP21K17752), and from Grant-in-Aid for Scientific Research (Grant No.~JP21K03400). S.S. acknowledges support from MIUR, PRIN-2017BZPKSZ. This work was supported by the European Centre of Excellence in Exascale Computing TREX-Targeting Real Chemical Accuracy at the Exascale. This project has received funding from the European Union’s Horizon 2020 Research and Innovation program under Grant Agreement No.~952165. \include{scifile_SM} \clearpage \end{document} \section*{Supplementary materials} \paragraph*{Details on the Phase Diagram calculation and the role of anharmonicity and electronic correlations} In this Section, we describe in detail how the phase diagram is computed, and discuss how it changes when we adopt a different electronic theory, density functional theory (DFT) versus diffusion quantum Monte Carlo (DMC), with and without anharmonicity. We relaxed each structure by including quantum fluctuations and anharmonicity through the stochastic self-consistent harmonic approximation (SSCHA), optimizing the auxiliary force constants, centroid positions, and lattice vectors within the constraints of the symmetry group, at roughly every \SI{100}{\giga\pascal} (from \SI{250}{\giga\pascal} to \SI{650}{\giga\pascal}). In the SSCHA calculations, we employed the DFT framework with the BLYP\cite{BLYP} exchange-correlation functional to account for electronic energy and determine the Born-Oppenheimer (BO) potential energy surface. BLYP is one of the most accurate DFT functionals for phase-diagram calculations of high-pressure hydrogen, outperforming more refined techniques such as hybrid DFT\cite{Drummond2015,Clay_2014}. The full anharmonic energy is obtained within DFT by fitting with a parabola, for each phase, the difference between the BO energy and the SSCHA total energy at fixed volume. The anharmonic stress tensor is also employed in the fit to increase accuracy. We then add to the static BO energy-versus-volume curves, computed in DFT every \SI{5}{\giga\pascal}, the quantum anharmonic lattice vibrational contribution at the corresponding volume calculated from the fit. We finally perform the Legendre transform to get the enthalpy-vs-pressure curves and the resulting phase diagram. The static phase diagram simulated within DFT-BLYP is reported in \figurename~\ref{fig:staticDFT}, while in \figurename~\ref{fig:pd:qha:blyp} we show the DFT-BLYP phase diagram with \emph{harmonic} zero-point energy. We included the harmonic contributions only for the most relevant phases: C2/c-24, Cmca-12 and Cs-IV. The harmonic zero-point energy leaves the pressure of the C2/c-24 to Cmca-12 transition (phase III to VI) almost unchanged, while it substantially shifts the atomic transition down, to pressures even lower than those of the Cmca-12 transition. 
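The enthalpy construction just outlined can be condensed into a few lines; the following sketch (with placeholder energy-volume parabolas rather than the actual DFT/DMC data) shows how the enthalpy-versus-pressure curves follow from the energy-versus-volume curves via the Legendre transform: \begin{verbatim}
import numpy as np

def enthalpy_curve(V, E):
    """Pressure and enthalpy from an energy-versus-volume curve."""
    P = -np.gradient(E, V)      # P = -dE/dV
    H = E + P * V               # Legendre transform: H = E + P V
    order = np.argsort(P)
    return P[order], H[order]

# Placeholder E(V) parabolas standing in for two competing phases
# (energies in eV, volumes in Angstrom^3 per atom; illustrative only).
V = np.linspace(1.0, 2.0, 200)
E_A = 4.0 * (V - 1.40) ** 2 - 0.05
E_B = 5.0 * (V - 1.50) ** 2

P_A, H_A = enthalpy_curve(V, E_A)
P_B, H_B = enthalpy_curve(V, E_B)
# At each pressure, the thermodynamically stable phase is the one
# with the lower enthalpy H(P).
\end{verbatim}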
The results of the anharmonic phase diagram of both hydrogen ($^1\text{H}$ protium or H) and deuterium ($^2\text{H}$ or D) computed by DFT-BLYP and SSCHA are reported in \figurename~\ref{fig:pd:sscha}. It shows that anharmonicity strongly favors the molecular phases over the atomic one, shifting the atomic transition back to higher pressures. Between Cmca-12 and C2/c-24, anharmonicity favors the Cmca-12 crystal symmetry, moving the III-to-VI phase transition down by about \SI{150}{\giga\pascal}. In this case, the phase VI candidates P62/c-24, Cmca-12 and Cmca-4 are almost degenerate up to \SI{400}{\giga\pascal}, where the Cmca-4 starts to dominate over the other molecular phases. \begin{figure}[b!] \centering \includegraphics[width=.8\textwidth]{StaticPD.eps} \caption{DFT-BLYP static enthalpies (static lattice) of high-pressure hydrogen.} \label{fig:staticDFT} \end{figure} \begin{figure} \centering \includegraphics[width=.7\textwidth]{QHA_blyp.eps} \caption{DFT-BLYP enthalpies including nuclear zero-point energy within the harmonic approximation for hydrogen and deuterium, shown on the left and right side, respectively.} \label{fig:pd:qha:blyp} \end{figure} \begin{figure} \centering \includegraphics[width=.7\textwidth]{PD_sscha_blyp.eps} \caption{As in Fig.~\ref{fig:pd:qha:blyp}, but for the DFT-BLYP enthalpies including quantum anharmonic effects. } \label{fig:pd:sscha} \end{figure} Apart from Cmca-4, the DFT-BLYP phase diagram is in qualitative agreement with the one including electron correlation treated at the quantum Monte Carlo (QMC) level. \newpage Thanks to extensive DMC calculations performed at fixed structures for several phases and volumes, we have been able to correct the DFT-BLYP internal energies, and add the contribution coming from a nearly exact treatment of electron correlation on top of the static, harmonic and quantum anharmonic phase diagrams previously computed at the DFT-BLYP level. DMC corrections are added to the total energy-versus-volume curves of the corresponding DFT (and DFT+SSCHA) calculations. As in the DFT case, the enthalpy-versus-pressure curves are then obtained by Legendre transform. For the sake of completeness, we report the static DMC-corrected phase-diagram in \figurename~\ref{fig:qmcstatich}, the DMC-corrected enthalpies accounting for the nuclear zero-point energy within the harmonic approximation (\figurename~\ref{fig:QHA}) and the full anharmonic enthalpies with DMC corrections (\figurename~\ref{fig:pd:full}). The latter data provide the final phase diagram reported in the main text. \begin{figure}[b!] \centering \includegraphics[width=.7\textwidth]{QMC_staticpd_H.eps} \caption{Diffusion QMC static enthalpies (static lattice) of high-pressure hydrogen. Based on these enthalpies, we draw the static-nuclei phase diagram reported in the main text (\figurename~\ref{fig:PD}). } \label{fig:qmcstatich} \end{figure} \begin{figure} \centering \includegraphics[width=.65\textwidth]{QHA.eps} \caption{Diffusion QMC enthalpies including nuclear zero-point energy within the harmonic approximation for hydrogen (left panel) and deuterium (right panel). Based on these enthalpies, we draw the harmonic phase diagram reported in the main text (\figurename~\ref{fig:PD}). The harmonic zero-point energies are calculated at the DFT-BLYP level and then added to the DMC energies. 
} \label{fig:QHA} \end{figure} \begin{figure} \centering \includegraphics[width=.65\textwidth]{MetalPD.eps} \caption{Diffusion QMC enthalpies, including quantum anharmonic effects for hydrogen and deuterium, shown on the left and right side, respectively. Nuclear quantum effects are added based on SSCHA calculations performed at the DFT-BLYP level. The corresponding phase diagram is reported in the main text (\figurename~\ref{fig:PD}). } \label{fig:pd:full} \end{figure} \newpage \paragraph*{The P62/c-24 symmetry} Among other known structures, we also simulated the new P62/c-24 symmetry we discovered through the relaxation of phase III (C2/c-24) at the anharmonic level within DFT-BLYP. In particular, phase III becomes unstable at the DFT-BLYP level above \SI{310}{\giga\pascal}, when the free energy curvature becomes negative around an IR-active nuclear vibration at $\Gamma$. In \figurename~\ref{fig:c2ctop62c} we report the simulation of the C2/c-24 free energy Hessian as a function of pressure along with the unstable nuclear vibration. The free energy Hessian at the SSCHA level is computed with the full expression discussed in Ref.\cite{Bianco2017}, including non-perturbatively both three- and four-phonon scattering vertices. Interestingly, this is a very peculiar case where the four-phonon scattering is fundamental for obtaining a correct result, even at a qualitative level, as the C2/c-24 is unstable at all pressures if only three-phonon scattering processes are accounted for. \begin{figure}[b!] \centering \includegraphics[width=0.7\textwidth]{softening.eps} \caption{Frequency of the eigenvalue of the free energy Hessian along the unstable nuclear vibration, obtained with DFT-BLYP. On the negative axis we report imaginary values. Inset: the square of the frequency, which corresponds to the free energy curvature. A negative value indicates an instability. } \label{fig:c2ctop62c} \end{figure} The unstable mode breaks the C2/c symmetry down to a Cc group with just two symmetry operations and 24 atoms in the unit cell. We performed the full anharmonic relaxation of the new phase. The monoclinic cell becomes hexagonal, and two layers out of four in the primitive cell transform into perfectly graphene-like sheets, with alternating stacking. The other two layers keep their molecular character, and the \ch{H2} molecule reduces its bond length with respect to the C2/c geometry. The transformation of the graphene-like layer is reported in \figurename~\ref{fig:all_structures}b. The symmetry group of the new structure is P62/c, as identified through both the ISOTROPY\cite{ISOTROPY} and spglib\cite{SPGLIB} software. This phase is strongly unstable at the harmonic level (it has four degenerate imaginary frequencies at $\Gamma$ above \SI{2000i}{\per\centi\meter}) but it is stabilized by anharmonicity. As far as we know, this is the first example of a new structure discovered by a full quantum relaxation of the nuclear positions. This is only possible thanks to the simultaneous relaxation of auxiliary force constants, centroids, and lattice vectors. When more accurate DMC calculations are employed to evaluate its energy, this phase becomes unfavoured. Therefore, the instability of C2/c-24 towards P62/c is an artifact of the DFT-BLYP functional. \paragraph*{The atomic phase} In the atomic phase, the only free parameter is the c/a ratio of the primitive lattice vectors. In the following, we then present its main properties as a function of the c/a value. The structure is stable at the static level, as it has a well-defined minimum. 
However, if we compute phonons at the harmonic level and use the phonon dispersion to include the kinetic energy of the ions due to quantum zero-point motion, the total energy decreases with the c/a ratio until imaginary frequencies appear before the minimum is reached, and the system becomes unstable (see Fig.~\ref{fig:qha:ca}). The Cs-IV atomic phase is, therefore, unstable within the quasi-harmonic approximation. The SSCHA fixes this instability: the equilibrium c/a increases by only about 0.2 compared to the static value. This effect is, however, strongly size-dependent, and the shift becomes even smaller ($\approx 0.12$) when larger cells of 128 atoms are considered, strengthening the outcome of our analysis. We computed the free energy Hessian at the SSCHA-relaxed c/a value: the structure is stable by a significant margin, with anharmonicity shifting the highest-energy modes down by only about $\SI{250}{\per\centi\meter}$. Therefore, even though the structure itself is not strongly anharmonic, its stability is established only within a fully anharmonic calculation. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{csiv_2.1.eps} \caption{Free energy with static lattice (upper panel) and with harmonic lattice vibrations (lower panel) as a function of c/a. These quantities are computed with the BLYP functional at the volume of $\SI{1.067}{\angstrom}^3$ per atom. The vertical dashed line indicates the c/a value where imaginary phonons appear.} \label{fig:qha:ca} \end{figure} It turns out that the c/a equilibrium value is strongly functional-dependent. Nevertheless, by running SSCHA simulations with both BLYP and PBE, we verified that the effect of the SSCHA on the c/a parameter is additive with respect to the functional used and, thus, the shift with respect to the static equilibrium value is largely functional-independent. \begin{figure}[b!] \centering \includegraphics[width=0.49\textwidth]{dos_csiv_v1.067_2.424.eps} \includegraphics[width=0.49\textwidth]{dos_csiv_v1.067_2.625.eps} \caption{Electronic DOS for c/a values corresponding to the BLYP static equilibrium geometry (left panel) and to the BLYP equilibrium geometry including also nuclear fluctuations (right panel). The DOS increases with increasing c/a. However, the system moves further away from the Lifshitz transition for larger c/a values, and this happens for both the BLYP (blue lines) and PBE (red lines) functionals.} \label{fig:lifshifts} \end{figure} Interestingly, the atomic phase is in the proximity of a Lifshitz transition, signaled by a sudden jump of the DOS, which is indeed located very close to the Fermi level ($\epsilon_F$). This is reported in \figurename~\ref{fig:lifshifts}. In particular, although the DOS at $\epsilon_F$ steadily increases with c/a, the Lifshitz transition gets further away from $\epsilon_F$. The PBE and BLYP functionals predict the same electronic DOS at $\epsilon_F$ but slightly different locations for the Lifshitz transition, whose energy is systematically closer to the Fermi level in PBE. The same behavior is found also for the other volumes studied for the atomic phase, as reported in \figurename~\ref{fig:lifshifts:v}. Only at the largest volume taken into account, i.e. V=1.259 \AA$^3$/atom, does the PBE functional trigger the Lifshitz transition, as shown in the bottom-left panel of \figurename~\ref{fig:lifshifts:v}. However, the c/a value reported there corresponds to the BLYP equilibrium geometry of the static lattice. 
The PBE equilibrium geometry has a larger c/a value, which pushes the Lifshitz transition energy above the Fermi level. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{dos_csiv_v1.116_2.360.eps} \includegraphics[width=0.49\textwidth]{dos_csiv_v1.116_2.573.eps} \includegraphics[width=0.49\textwidth]{dos_csiv_v1.259_2.227.eps} \includegraphics[width=0.49\textwidth]{dos_csiv_v1.259_2.305.eps} \caption{DOS plotted for different volumes and c/a values corresponding to the BLYP static equilibrium geometry (left-hand side), and the BLYP equilibrium geometry with nuclear fluctuations (right-hand side). For these geometries, the DOS is computed with both the BLYP (blue lines) and PBE (red lines) functionals. } \label{fig:lifshifts:v} \end{figure} There is a strong linear correlation between the energy location of the Lifshitz transition and the c/a value, as revealed by plotting the distance of the Lifshitz transition energy from $\epsilon_F$ as a function of c/a for all volumes and functionals taken into account (\figurename~\ref{fig:lif:all}). This correlation allows us to estimate the exact c/a value at which the Lifshitz transition occurs for each volume, as obtained by fitting the results in \figurename~\ref{fig:lif:all} and extrapolating the Lifshitz transition energy to $\epsilon_F$ (\figurename~\ref{fig:lif}). \begin{figure} \centering \includegraphics[width=0.49\textwidth]{Lifshiftz_noca.eps} \includegraphics[width=0.49\textwidth]{Lifshiftz_ca.eps} \caption{Distance of the Lifshitz transition from the Fermi level. In the left panel, this is plotted as a function of c/a. In the right panel, the origin of the c/a axis is set to the equilibrium value of c/a for each volume and functional. The abscissa at which the curves cross the $y = 0$ line is the c/a value at which the system undergoes the Lifshitz transition. This value is universal in neither volume nor functional. } \label{fig:lif:all} \end{figure} \begin{figure} \centering \includegraphics[width=.56\textwidth]{Lifshitz_transition.eps} \caption{The c/a value at which the Lifshitz transition occurs, causing a sudden increase of the electronic DOS at the Fermi level. We compare the Lifshitz transition with the equilibrium c/a value as yielded by BLYP, PBE, and DMC. Quantum fluctuations move the equilibrium c/a ratio by about 0.1-0.2 towards larger values, independently of the choice of the DFT functional, driving the system away from the transition.} \label{fig:lif} \end{figure} Care must be taken in evaluating the distance of the Lifshitz transition energy from the Fermi level. Indeed, the Fermi level determination could be affected by the choice of the smearing parameter. In our analysis, we employed the so-called ``cold'' smearing (i.e., the Marzari-Vanderbilt scheme). We repeated the calculation for three volumes using Gaussian and Fermi-Dirac smearing, and compared the outcomes for the Fermi energy determination. It turns out that the difference in the Fermi level is at most \SI{20}{\milli\electronvolt}. The data are reported in \tablename~\ref{tab:smearing}. This Fermi energy uncertainty converts into an error on the c/a value of the Lifshitz transition of 0.008, much smaller than the variations plotted in Fig.~\ref{fig:lif}. 
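For concreteness, the fit-and-extrapolate step described above can be sketched in a few lines of code (a minimal illustration, not the script actually used; the input numbers below are hypothetical): for each volume and functional, the distance of the Lifshitz transition energy from $\epsilon_F$ is fitted linearly in c/a, and the zero crossing yields the critical c/a value.
\begin{verbatim}
# Minimal sketch: locate the Lifshitz transition by fitting
# (E_Lifshitz - E_F) linearly in c/a and finding the zero crossing.
import numpy as np

def lifshitz_ca(ca_values, e_lif_minus_ef):
    # linear fit (E_L - E_F) = m*(c/a) + q; the root is the critical c/a
    m, q = np.polyfit(ca_values, e_lif_minus_ef, 1)
    return -q / m

# hypothetical data for one volume and one functional (c/a, eV):
ca = np.array([2.30, 2.42, 2.55, 2.63])
de = np.array([-0.12, -0.05, 0.04, 0.09])
print(round(lifshitz_ca(ca, de), 3))
\end{verbatim}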
\begin{table}[hbtp] \centering \begin{tabular}{c|ccc} \textbf{Volume per \ch{H}} & \textbf{ Marzari-Vanderbilt} & \textbf{ Gaussian} & \textbf{ Fermi-Dirac}\\ \hline \SI{1.067}{\angstrom^3} & \SI{17.115}{\electronvolt} & \SI{17.161}{\electronvolt} & \SI{17.156}{\electronvolt} \\ \SI{1.116}{\angstrom^3} & \SI{16.403}{\electronvolt} & \SI{16.401}{\electronvolt} & \SI{16.387}{\electronvolt} \\ \SI{1.259}{\angstrom^3} & \SI{14.394}{\electronvolt} & \SI{14.402}{\electronvolt} & \SI{14.371}{\electronvolt} \end{tabular} \caption{Fermi level computed with different smearing schemes and volumes for the atomic Cs-IV phase. The electronic temperature is \SI{0.03}{\rydberg} and a $48 \times 48 \times 48$ grid has been used as the $\textbf{k}$-mesh. The error on the Fermi level that comes from the choice of the smearing scheme is lower than \SI{0.03}{\electronvolt}. } \label{tab:smearing} \end{table} The knowledge of the actual volume dependence of the c/a equilibrium value is of paramount importance, because it could have a strong impact on the electronic properties of the atomic metallic phase of hydrogen. Indeed, the sudden increase of the DOS yielded by the Lifshitz transition could affect the superconducting critical temperature. This is a fascinating scenario whose chance of occurrence needs to be addressed by a more accurate method such as QMC. With the aim of determining the exact c/a equilibrium value, we studied the energy-versus-c/a curves of the BLYP, PBE, and LDA functionals, and compared them with reference results obtained by DMC calculations (\figurename~\ref{fig:energy:ca}). \begin{figure}[b!] \centering \includegraphics[width=0.49\textwidth]{v_1.050.eps} \includegraphics[width=0.49\textwidth]{v_1.115.eps} \includegraphics[width=0.49\textwidth]{v_1.260.eps} \caption{Energy profile as a function of the c/a parameter for different volumes and electronic theories. Three of the c/a points taken in the DMC calculations are the static equilibrium value in BLYP, the SSCHA equilibrium value for deuterium, and the one for protium.} \label{fig:energy:ca} \end{figure} From the comparison with the DMC results, one can notice that the accuracy of BLYP in providing the equilibrium geometry deteriorates at small volumes, corresponding to pressures above \SI{560}{\giga\pascal}. The PBE curves show instead the opposite behavior, as they become more and more accurate as the volume shrinks. For the smallest volumes, the PBE results are the most accurate. Therefore, by looking at the right panel of Fig.~\ref{fig:lif}, which reports the PBE equilibrium c/a values compared with the critical c/a values for the Lifshitz transition, we can safely disregard the occurrence of this transition in the pressure range where the atomic metallic phase becomes favorable. Indeed, nuclear quantum effects move the atomic phase further away from the transition. \paragraph*{Optical properties} We report here additional information regarding the optical properties of phase III, phase VI and the atomic one. To complete the analysis presented in the main text about the comparison between phases III and VI, in \figurename~\ref{fig:reflect} we show the electronic density of states (DOS) and the reflectivity of both phases. As mentioned in the main text, phases III and VI are almost indistinguishable by reflectivity measurements in the visible range, but they present differences in the IR frequency range, where the reflectivity is enhanced in phase VI. 
Also, the DOS at the Fermi level of phase VI is higher, resulting in a better DC conductivity of this phase, as reported in \figurename~\ref{fig:optic} of the main text. \begin{figure}[hbtp] \centering \includegraphics[width=\textwidth]{refl_cmca.pdf} \caption{Reflectivity (upper row) and DOS (lower row) of phase III (C2/c-24) and phase VI (Cmca-12) at different pressures.} \label{fig:reflect} \end{figure} To compute the optical properties of phases III and VI, we used supercells containing 324 atoms, where phonons are accounted for as static disorder (adiabatic approximation). We evaluated the refractive index and the transmitted light through a \SI{1.5}{\micro\meter} thick sample. This is the typical thickness of experimental samples at the target pressures. To avoid the systematic underestimation of the empty-band energies in the DFT calculation, we employed the modified Becke-Johnson meta-GGA exchange-correlation functional\cite{Tran_2009}, which is known to perform as well as more established (and more computationally expensive) methods like the HSE06 hybrid functional or GW calculations\cite{Borlido_2019}. All the calculation details, the equations employed, and the software used are the same as those discussed in the Methods Section of Ref.~\cite{MonacelliNatPhys2020}. As far as the comparison between phase VI and the atomic phase is concerned, we complement the data reported in the main text by including the optical conductivity (real and imaginary part) computed for Cmca-12 and Cs-IV between \SI{460}{\giga\pascal} and \SI{660}{\giga\pascal} in \figurename~\ref{fig:conductivity}. \begin{figure}[b!] \centering \includegraphics[width=0.8\textwidth]{conductivity.pdf} \caption{Real and imaginary part of the conductivity of the Cmca-12 and Cs-IV phases.} \label{fig:conductivity} \end{figure} As mentioned in the paper, our results on the optical properties are at variance with those of Ref.~\cite{Gorelov2020}. This could be explained by the different approach used. Indeed, in Ref.~\cite{Gorelov2020} the authors computed the optical properties on a smaller cell of 96 atoms and measured the optical gap by accounting for electron-hole pair excitations at the same point of the Brillouin zone of the bands unfolded to the 12-atom cell. In this way, scattering processes involving phonon momenta $\textbf{q} \ne \Gamma$, where the excited electron-hole pair has a non-zero total momentum, are not included. In ordinary materials like silicon, these effects contribute as a small perturbation. However, this is not the case for hydrogen, where the electron-phonon interaction shifts the bands by \SI{2}{\electronvolt}. We carried out a thorough study of the convergence of the Cs-IV reflectivity with respect to the number of $\mathbf{k}$-points, the smearing (\figurename~\ref{fig:smearing:r}), and the electronic temperature (\figurename~\ref{fig:temperature:r}). The reflectivity shown in the main text has been obtained by employing a 729-point $\mathbf{k}$-mesh with a smearing of \SI{0.05}{\electronvolt}. The reflectivity depends weakly on the electronic temperature when its value is lower than the smearing. Thus, at \SI{300}{\kelvin}, the temperature used in our analysis, the reflectivity is fully converged in its temperature dependence. \begin{figure}[b!] 
\centering \includegraphics[width=0.85\textwidth]{conv.eps} \caption{Convergence of the reflectivity as a function of smearing and number of $\textbf{k}$-points in the Cs-IV phase at approximately \SI{660}{\giga\pascal}.} \label{fig:smearing:r} \end{figure} \begin{figure} \centering \includegraphics[width=0.85\textwidth]{temp.eps} \caption{Convergence of the reflectivity as a function of the electronic temperature and smearing in the Cs-IV phase at approximately \SI{660}{\giga\pascal}.} \label{fig:temperature:r} \end{figure} \newpage The most important dependence introduced by the finite smearing is the drop of reflectivity at low frequency. This is due to the strong but trivial smearing dependence of the Drude peak. In \figurename~\ref{fig:optic:smearing} we show this effect by plotting the real part of the conductivity as a function of smearing. \begin{figure}[b!] \centering \includegraphics[width=0.85\textwidth]{sigma.eps} \caption{Convergence of the real conductivity as a function of smearing and number of $\textbf{k}$-points in the Cs-IV phase at approximately \SI{660}{\giga\pascal}.} \label{fig:optic:smearing} \end{figure} \newpage \paragraph{Details on the DFT calculations} For the DFT calculations we employed the Quantum Espresso\cite{Giannozzi2009,Giannozzi2017} software suite, using a plane-wave basis set with a kinetic-energy cutoff of \SI{1088}{\electronvolt} (\SI{4353}{\electronvolt} for the electronic density). We employed a norm-conserving pseudopotential from the Pseudo Dojo library\cite{pseudodojo}. To sample nuclear fluctuations within the SSCHA, the supercell contains 96 atoms for all the molecular structures (54 atoms for atomic hydrogen, with finite-size convergence checked against a 128-atom supercell). The electronic $\textbf{k}$-mesh is reported for each structure in \tablename~\ref{tab:my_label}. In all cases, a Marzari-Vanderbilt smearing of \SI{0.41}{\electronvolt} has been employed. Convergence of the energy with smearing and \textbf{k}-points is reported in \figurename~\ref{fig:cmca4:kconv} and \ref{fig:csiv:kconv} for the Cmca-4 and Cs-IV structures, respectively. The Cmca-4 and Cs-IV phases are the ones with the most prominent metallic character, requiring the largest \textbf{k}-point sampling to converge. \begin{table}[h!] \centering \begin{tabular}{c|c} & $\textbf{k}$-mesh \\ \hline C2/c-24 & $12\times 12 \times 6$ \\ P62/c-24 & $12\times 12 \times 6$ \\ Cmca-12 & $12\times 12\times 12$ \\ Cs-IV & $48\times 48\times 48$ \\ Cmca-4 & $36 \times 24 \times 24$ \\ \end{tabular} \caption{\textbf{k}-mesh employed for the DFT simulation of each phase. The Cmca-4 mesh refers to the conventional unit cell containing 8 atoms.} \label{tab:my_label} \end{table} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{kpts_cmca4_conv.eps} \caption{Molecular metallic Cmca-4 phase. DFT energy versus smearing and number of \textbf{k}-points in the primitive unit cell.} \label{fig:cmca4:kconv} \end{figure} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{kpts_csiv_conv.eps} \caption{Atomic metallic Cs-IV phase. DFT energy versus smearing and number of \textbf{k}-points in the primitive unit cell.} \label{fig:csiv:kconv} \end{figure} \newpage \paragraph{Details on the DMC calculations} QMC calculations have been performed using the TurboRVB package\cite{nakano2020turborvb}. 
We carried out extensive DMC simulations in the lattice-regularized DMC (LRDMC) flavor\cite{casula2005diffusion}, to project the initial many-body wave function towards the ground state of the system within the fixed-node approximation (FNA)\cite{anderson1976quantum}, and compute its energy. As the starting many-body state, we employed a Jastrow-Slater variational wave function $\Psi^\textbf{k}(\textbf{R})=\exp\{-U(\textbf{R})\} \det\{\phi^\textbf{k}_j(\textbf{r}^\uparrow_i)\}\det\{\phi^\textbf{k}_j(\textbf{r}^\downarrow_i)\}$ for $i,j \in \{1,\ldots,N/2\}$, where $N$ is the number of electrons in the unpolarized supercell, $\textbf{k}$ is the twist belonging to a Monkhorst-Pack (MP) grid of the supercell Brillouin zone, and $\textbf{R}=\{\textbf{r}^\uparrow_1,\ldots,\textbf{r}^\uparrow_{N/2},\textbf{r}^\downarrow_1,\ldots,\textbf{r}^\downarrow_{N/2}\}$ is the $N$-electron coordinate. $U$ is the Jastrow function, which is split into electron-nucleus, electron-electron, and electron-electron-nucleus parts: $U=U_{en}+U_{ee}+U_{een}$. The electron-nucleus function has an exponential decay and reads $U_{en}=\sum_{iI} J_{1b}(r_{iI}) + U_{en}^\textrm{no-cusp}$, where the index $i$ ($I$) runs over electrons (nuclei), $r_{iI}$ is the electron-nucleus distance, and $J_{1b}(r)=\alpha (1-\exp\{-r/\alpha\})$, with $\alpha$ a variational parameter. $J_{1b}$ cures the nuclear cusp conditions and allows the use of the bare Coulomb potential in our QMC framework. The electron-electron function has a Pad\'e form and reads $U_{ee}=-\sum_{i \ne j} J_{2b}(r_{ij})$, where the indices $i$ and $j$ run over electrons, $r_{ij}$ is the electron-electron distance, and $J_{2b}(r)=0.5 r/(1+ \beta r)$, with $\beta$ a variational parameter. This two-body Jastrow term fulfills the cusp conditions for antiparallel electrons. The last term in the Jastrow factor is the electron-electron-nucleus function: $U_{een}=\sum_{(i \ne j) I} \sum_{\gamma\delta} M_{\gamma \delta I}\chi_{\gamma I}(r_{iI}) \chi_{\delta I}(r_{jI})$, with $M_{\gamma \delta I}$ a matrix of variational parameters, and $\chi_{\gamma I}(r)$ a $(2s,2p,1d)$ Gaussian basis set, with orbital index $\gamma$, centered on the nucleus $I$. Analogously, the electron-nucleus cusp-free contribution to the Jastrow function, $U_{en}^\textrm{no-cusp}$, is developed on the same Gaussian basis set, such that $U_{en}^\textrm{no-cusp}=\sum_{i I} \sum_{\gamma} V_{\gamma I}\chi_{\gamma I}(r_{iI})$, where $V_{\gamma I}$ is a vector of parameters. The $J_{1b}$ and $J_{2b}$ Jastrow functions have been periodized using an $\textbf{r} \rightarrow \textbf{r}^\prime$ mapping that makes the distances diverge at the border of the unit cell, as explained in Ref.~\cite{nakano2020turborvb}. For the inhomogeneous $U_{een}$ part, the Gaussian basis set $\chi$ has been made periodic by summing over replicas translated by lattice vectors. The one-body orbitals $\phi$ are expanded on a primitive $(4s,2p,1d)$ Gaussian basis set, which we contracted into 6 hybrid orbitals by using the geminal embedding orbitals (GEO) contraction scheme\cite{sorella2015geminal} at the $\Gamma$ point. The $\phi$'s are made periodic using the same scheme as for the $\chi$'s. We verified that this basis set yields a FN-LRDMC bias in the energy differences smaller than the target error of 1 meV per atom. 
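As an aid to the reader, the one- and two-body Jastrow terms defined above can be evaluated in a few lines of code. The following is a minimal illustration (not part of TurboRVB); the three-body term $U_{een}$ and the periodic mapping are omitted for brevity.
\begin{verbatim}
# Minimal sketch of U = U_en + U_ee from the definitions above
# (U_een and the periodization are omitted; open boundary conditions).
import numpy as np

def jastrow_U(r_el, r_nuc, alpha=2.5, beta=1.0):
    # electron-nucleus term: sum_{iI} J_1b(r_iI),
    # with J_1b(r) = alpha * (1 - exp(-r/alpha))
    d_en = np.linalg.norm(r_el[:, None, :] - r_nuc[None, :, :], axis=-1)
    u_en = np.sum(alpha * (1.0 - np.exp(-d_en / alpha)))
    # electron-electron term: -sum_{i != j} J_2b(r_ij),
    # with J_2b(r) = 0.5 * r / (1 + beta * r)
    d_ee = np.linalg.norm(r_el[:, None, :] - r_el[None, :, :], axis=-1)
    off = ~np.eye(len(r_el), dtype=bool)   # exclude i == j terms
    u_ee = -np.sum(0.5 * d_ee[off] / (1.0 + beta * d_ee[off]))
    return u_en + u_ee   # the wave function carries the factor exp(-U)
\end{verbatim}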
For each $\textbf{k}$ belonging to the MP grid of a given supercell, we performed independent DFT calculations in the local density approximation (LDA) to generate $\{\phi^\textbf{k}_j\}_{j=1,\ldots,N/2}$ for all occupied states. Note that these LDA calculations are done for an \emph{ab initio} Hamiltonian with the bare Coulomb potential for the electron-ion interactions. This is possible thanks to the one-body Jastrow factor included in the DFT wave function, with $\alpha \approx 2.5$. In the presence of a Coulomb divergence, fulfilling the ion cusp conditions enormously accelerates the basis-set convergence already at the DFT level. Before running LRDMC calculations, we optimized the $\alpha$, $\beta$ and $M_{\gamma \delta I}$ parameters by minimizing the variational energy of the wave function $\Psi$ within the QMC linear optimization method\cite{umrigar2007alleviation}, keeping the orbitals $\phi^\textbf{k}_i$ fixed. All $\textbf{k}$ twists belonging to the same system share the same set of optimal variational parameters for the Jastrow factor. The LRDMC projection is carried out at the lattice spacing $a=0.25 a_0$, which yields converged energy differences. The projection algorithm has been implemented with a fixed population of 256 walkers per twist for the largest system sizes. The population bias, falling within the error bars, has been corrected by the ``correcting factors'' scheme\cite{buonaura1998numerical}. \begin{figure}[th!] \centering \includegraphics[width=0.49\textwidth]{en_vs_N_c2c_Pall_hydro_lrdmc_simple_paper.eps} \includegraphics[width=0.49\textwidth]{en_vs_N_cmca_Pall_hydro_lrdmc_paper.eps} \includegraphics[width=0.49\textwidth]{en_vs_N_cmca4_Pall_hydro_lrdmc_paper.eps} \includegraphics[width=0.49\textwidth]{en_vs_N_csiv_Pall_final_paper.eps} \caption{QMC finite-size scaling and extrapolation to the thermodynamic limit. KZK-corrected LRDMC energies for 4 crystalline symmetries (C2/c-24, Cmca-12, Cmca-4, and Cs-IV) plotted as a function of $1/N$, where $N$ is the number of atoms, with respect to their value at $N=96$, taken as reference. The energies are twist-averaged in the canonical ensemble over a $\textbf{k}$-grid that has been rescaled according to the size of the supercell, as explained in the text. Note that despite the KZK correction and the canonical $\textbf{k}$-average, there is a residual size dependence beyond $N=96$, larger than the target accuracy of 1 meV per atom, that needs to be extrapolated. As expected, this residual dependence is stronger in the atomic metallic phase and in the molecular phases under high pressure, where the metallic character is enhanced.} \label{fig:qmc:finite-size-extrapolation} \end{figure} For each lattice symmetry and volume $V$, we performed a size-scaling analysis to extrapolate the energies to the thermodynamic limit (see Fig.~\ref{fig:qmc:finite-size-extrapolation}). Let $N_x\times N_y\times N_z$ be the electronic $\textbf{k}$-mesh yielding converged DFT results. In QMC, we used the same $\textbf{k}$-meshes reported in Tab.~\ref{tab:my_label}, except for the Cs-IV and Cmca-4 symmetries, where we used slightly smaller $24\times24\times24$ and $18\times12\times12$ meshes, respectively. To further reduce finite-size errors, the $\textbf{k}$-mesh of the metallic Cs-IV symmetry has been centered at $(\pi, \pi, \pi)$, while the other $\textbf{k}$-grids are centered at $\Gamma$. We then took supercells with volume $V_s= L_x L_y L_z V$, where $L_i$ is the number of unit-cell replicas in the $i$-th direction. 
Accordingly, the twists have been taken as belonging to the $M_x\times M_y\times M_z$ MP $\mathbf{k}$-mesh with $M_i= \textrm{int} [ N_i / L_i ]$, where $\textrm{int}$ is the integer-part function. The ground state energies have been extrapolated by using supercells as large as $N=768$ for the molecular phases, while for the atomic I4$_1$/amd-2 (Cs-IV) symmetry we used supercells as large as $N=1024$. The final extrapolations have been performed by a linear fit in $1/N$, computed with Kwee-Zhang-Krakauer (KZK)-corrected energies\cite{kwee2008finite}. We also investigated the role of the FNA in DMC by exploiting the capabilities of TurboRVB for optimizing the Slater orbitals $\phi^\textbf{k}_i$. Due to the increased cost of these simulations, we performed the nodal optimization by variational Monte Carlo energy minimization at the special $\textbf{k}$-point only\cite{dagrada2016exact}. The corresponding DMC energies, computed by projecting wave functions with relaxed nodes, prove that the FN bias does not affect the relative energies between molecular phases. However, optimizing the nodes with respect to the LDA ones in the QMC wave function shifts the atomic-phase energy upwards by about \SI{3}{\milli\electronvolt} per atom (see Tab.~\ref{tab:FN_label}), also shifting the transition pressure by \SI{20}{\giga\pascal} towards higher pressures (the correct value is accounted for in the phase diagram of \figurename~\ref{fig:PD}). \begin{table}[h!] \centering \begin{tabular}{c | c | c | c} symmetry & 1.416 \AA$^3$ & 1.259 \AA$^3$ & 1.115 \AA$^3$ \\ \hline C2/c-24 & 6.2 ($\pm$ 1.7) & - & - \\ P62/c-24 & 6.0 ($\pm$ 1.7) & - & - \\ Cmca-4 & 5.9 ($\pm$ 1.5) & - & - \\ Cmca-12 & 7.2 ($\pm$ 1.1) & 6.5 ($\pm$ 1.1) & 6.1 ($\pm$ 1.0)\\ Cs-IV & - & 3.4 ($\pm$ 0.8) & 3.2 ($\pm$ 0.8) \\ \end{tabular} \caption{Fixed-node LRDMC energy gain (in meV/H) at different volumes per atom with respect to the LDA nodes after full wave function optimization at the special $\textbf{k}$ point. The energy optimization has been performed at the variational Monte Carlo level for supercells up to $N=288$. It turns out that the LRDMC energy gain due to the nodal optimization has a very weak system-size dependence. $N=96$ gives already converged results for the FN energy gain.} \label{tab:FN_label} \end{table} We ran all LRDMC calculations long enough to reach a stochastic error bar around the target accuracy of \SI{1}{\milli\electronvolt} per atom. \newpage \paragraph*{DMC correction of the DFT exchange-correlation energy} To correct for the DFT exchange-correlation error, we computed DMC energies at each centroid structure. For each phase, we fitted the energy difference between DFT and DMC, evaluated on the same structures, as a function of the density, and added it on top of the DFT energy-volume relationship, computed on a much denser volume grid thanks to the cheaper cost of DFT. With this procedure, we do not rely on any phenomenological definition of the equation of state (EOS), such as the Birch-Murnaghan or Vinet EOS, in order to get QMC-interpolated energy-versus-volume curves. Fitting the DMC \emph{corrections} with respect to an underlying \emph{ab initio} theory is easier than fitting the DMC total energies directly. Indeed, total energies show a much stronger volume dependence than the energy corrections. The plot of the QMC corrections is reported in \figurename~\ref{fig:qmcshift}. In this plot, the QMC corrections are obtained from DMC energies computed within the fixed-node approximation (FNA) and with DFT-LDA nodes. 
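A minimal sketch of this correction-plus-Legendre-transform scheme follows (our illustration, not the actual workflow; we assume a linear fit of the correction in the density $\rho = 1/V$, a cubic-spline interpolation of $E(V)$, and volumes in ascending order):
\begin{verbatim}
# Minimal sketch: fit the DMC - DFT energy difference vs density,
# add it to a dense DFT E(V) curve, and Legendre-transform to H(P),
# with P = -dE/dV and H = E + P*V (all quantities per atom).
import numpy as np
from scipy.interpolate import CubicSpline

def corrected_H_of_P(v_dmc, e_dft_dmc, e_dmc, v_dense, e_dft_dense):
    rho = 1.0 / np.asarray(v_dmc, float)          # density ~ 1/V
    m, q = np.polyfit(rho, np.asarray(e_dmc) - np.asarray(e_dft_dmc), 1)
    v = np.asarray(v_dense, float)
    e_corr = np.asarray(e_dft_dense, float) + m / v + q   # corrected E(V)
    spl = CubicSpline(v, e_corr)
    p = -spl(v, 1)               # pressure from the first derivative
    h = spl(v) + p * v           # enthalpy via Legendre transform
    i = np.argsort(p)
    return p[i], h[i]
\end{verbatim}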
\begin{figure} \centering \includegraphics[width=\columnwidth]{QmcShift.eps} \caption{QMC energy corrections. Electronic energy differences between DFT-BLYP and DMC calculations at hydrogen centroid positions obtained from SSCHA nuclear quantum fluctuations evaluated at the DFT-BLYP level. The DMC energies are computed within the FNA with DFT-LDA nodes. The fit is a straight line for all phases. } \label{fig:qmcshift} \end{figure} Fig.~\ref{fig:qmcshift} shows that the difference between phases smears out as the density increases, pointing toward a better DFT description at densities corresponding to pressures above \SI{800}{\giga\pascal}, where the electronic behavior of the system is closer to the jellium model. This regime, however, sets in at pressures above the range of interest for this work. It is worth noting that the smallest absolute values of the DMC correction with respect to DFT-BLYP are found for the atomic and Cmca-4 phases. This is the reason why accounting for DMC corrections is fundamental to correctly reproduce the hydrogen atomization: all molecular phases, except for Cmca-4, are lowered in energy, with a consequent shift of the atomization transition toward higher pressures. We have mentioned that the QMC corrections reported in \figurename~\ref{fig:qmcshift} are based on DMC calculations within the FNA and with DFT-LDA nodes. As explained in the QMC calculation details, we assessed the quality of the DFT-LDA wave-function nodes that are kept fixed during the DMC projection in the FNA. This is the only bias present in the DMC energies, which would otherwise be exact. We have been able to relax the nodes at the variational Monte Carlo level, and then use the improved nodes in DMC. According to Tab.~\ref{tab:FN_label}, a systematic gain of 3 meV/H is found after nodal optimization in the DMC energies of the molecular phases with respect to the atomic one. It appears that the gain is the same for all molecular structures (within the error bars) and is volume-independent in the range of pressures explored in this work. This 3 meV/H shift is added to the QMC corrections in Fig.~\ref{fig:qmcshift} to yield the final corrections, used to compute the QMC phase diagram of \figurename~\ref{fig:PD}. It further disfavors the atomic phase, whose stability is pushed up to higher pressures in the phase diagram. \paragraph*{DMC+SSCHA systematic errors} Our approach of combining DMC energy corrections and SSCHA anharmonic vibrational contributions in an additive way relies upon the hypothesis that DMC corrections depend mainly on the phase and pressure (or volume), and very weakly on the particular atomic displacement around the centroids within a given phase. To test this hypothesis and give an estimate of the systematic error introduced by this approximation, we repeated the same calculations by choosing a different reference structure to compute the DMC shifts. Therefore, we replaced the SSCHA centroids of hydrogen with those of deuterium to change the reference structure, and went through all steps again, this time using the DMC correction computed on the D centroids, for validation and error quantification. In \figurename~\ref{fig:qmcshift:96}, we compare the DMC corrections computed for the SSCHA centroids of $\text{H}$ and $\text{D}$, respectively, employing supercells of 96 nuclei in the DMC calculations. In this way, we can check whether the DMC shifts are independent of the specific ionic distortion, retaining a dependence only on the overall arrangement, i.e. 
on the crystalline symmetry. The plots in panels a) and b) show that the shift between C2/c-24 and Cmca-12 depends only slightly on the centroid position, while the atomic phase is more sensitive to the choice of the centroid. This discrepancy is mainly due to the difference in the c/a equilibrium values of Cs-IV between DMC and DFT-BLYP (see the Section on the Cs-IV phase). \begin{figure} \centering \includegraphics[width=\textwidth]{QmcShift_n96.eps} \caption{Electronic energy difference per atom between DMC and DFT calculations on the same BLYP structures. Here, we used the KZK-corrected LRDMC energies computed at $N=96$ for all phases reported in the plot, before full thermodynamic extrapolation. This allows for a direct comparison with deuterium, for which we did not perform an explicit finite-size extrapolation as we did for hydrogen (see Fig.~\ref{fig:qmc:finite-size-extrapolation}). Panel a: the energy shift per atom is computed on the SSCHA centroid positions with mass equal to that of hydrogen. Panel b: the energy shift is computed on structures with SSCHA centroid positions for deuterium.} \label{fig:qmcshift:96} \end{figure} \newpage In \figurename~\ref{fig:pd:full:D}, we show the phase diagram obtained by considering the DMC correction on the $\text{D}$ centroids. Here, the DMC energies computed on the $\text{D}$ centroids are extrapolated to the thermodynamic limit by assuming the same $1/N$ dependence as found in protium. Therefore, the most accurate DMC correction of the BLYP exchange-correlation energy is the one obtained for the H-centroid geometries, where an explicit, computationally expensive extrapolation to the thermodynamic limit has been performed. Nonetheless, the phase diagram in \figurename~\ref{fig:pd:full:D} is valuable for estimating the systematic errors. This can be done by direct comparison with the phase diagram drawn in \figurename~\ref{fig:pd:full}, obtained instead by relying upon the H-centroids DMC correction. \begin{figure} \centering \includegraphics[width=.7\textwidth]{MetalPD_D.eps} \caption{Full phase diagram, considering DMC corrections and nuclear quantum effects. DMC corrections are evaluated using the SSCHA centroids of deuterium. } \label{fig:pd:full:D} \end{figure} Based on the phase diagrams plotted in Figs.~\ref{fig:pd:full} and \ref{fig:pd:full:D}, we report the phase-transition pressures for hydrogen and deuterium in Tabs.~\ref{tab:PD:H} and \ref{tab:PD:D}, respectively. The discrepancies between the results are used as an estimate of the systematic error in the final phase diagram, shown in \figurename~\ref{fig:PD}. \begin{table} \centering \begin{tabular}{c|c|c} &III $\rightarrow$ VI & VI $\rightarrow$ ATOMIC\\ \hline $\text{H}$-centroids DMC correction & \SI{422}{\giga\pascal} & \SI{575}{\giga\pascal} \\ $\text{D}$-centroids DMC correction & \SI{404}{\giga\pascal} & \SI{616}{\giga\pascal} \end{tabular} \caption{The final phase-transition pressures for H obtained with the DMC energy correction estimated from either H or D centroids. The results obtained from the H-centroids-based DMC correction are taken as the most accurate in the main text. The ones obtained from the D-centroids-based DMC correction provide an estimate of the systematic error introduced by the approximation. 
\label{tab:PD:H}} \end{table} \begin{table} \centering \begin{tabular}{c|c|c} &III $\rightarrow$ VI & VI $\rightarrow$ ATOMIC\\ \hline $\text{H}$-centroids DMC correction & \SI{452}{\giga\pascal} & \SI{646}{\giga\pascal} \\ $\text{D}$-centroids DMC correction & \SI{432}{\giga\pascal} & \SI{683}{\giga\pascal} \end{tabular} \caption{Same transition pressures as in Tab.~\ref{tab:PD:H}, but for deuterium. The results obtained from the H-centroids-based DMC correction are taken as the most accurate in the main text. The ones obtained from the D-centroids-based DMC correction provide an estimate of the systematic error introduced by the approximation.\label{tab:PD:D}} \end{table}
\section{Introduction} The {\it IJCAI--22 Proceedings} will be printed from electronic manuscripts submitted by the authors. These must be PDF ({\em Portable Document Format}) files formatted for 8-1/2$''$ $\times$ 11$''$ paper. \subsection{Length of Papers} All paper {\em submissions} must have a maximum of six pages, plus at most one for references. The seventh page cannot contain {\bf anything} other than references. The length rules may change for final camera-ready versions of accepted papers and will differ between tracks. Some tracks may allow only references on the last page, whereas others allow any content on all pages. Similarly, some tracks allow you to buy a few extra pages should you want to, whereas others don't. If your paper is accepted, please carefully read the notifications you receive, and check the proceedings submission information website\footnote{\url{https://proceedings.ijcai.org/info}} to know how many pages you can finally use. That website holds the most up-to-date information regarding paper length limits at all times. Please notice that if your track allows for a special references-only page, the {\bf references-only page(s) cannot contain anything other than references} (i.e., do not write your acknowledgments on that page or you will be charged for it). \subsection{Word Processing Software} As detailed below, IJCAI has prepared and made available a set of \LaTeX{} macros and a Microsoft Word template for use in formatting your paper. If you are using some other word processing software, please follow the format instructions given below and ensure that your final paper looks as much like this sample as possible. \section{Style and Format} \LaTeX{} and Word style files that implement these instructions can be retrieved electronically. (See Appendix~\ref{stylefiles} for instructions on how to obtain these files.) \subsection{Layout} Print manuscripts two columns to a page, in the manner in which these instructions are printed. The exact dimensions for pages are: \begin{itemize} \item left and right margins: .75$''$ \item column width: 3.375$''$ \item gap between columns: .25$''$ \item top margin---first page: 1.375$''$ \item top margin---other pages: .75$''$ \item bottom margin: 1.25$''$ \item column height---first page: 6.625$''$ \item column height---other pages: 9$''$ \end{itemize} All measurements assume an 8-1/2$''$ $\times$ 11$''$ page size. For A4-size paper, use the given top and left margins, column width, height, and gap, and modify the bottom and right margins as necessary. \subsection{Format of Electronic Manuscript} For the production of the electronic manuscript, you must use Adobe's {\em Portable Document Format} (PDF). A PDF file can be generated, for instance, on Unix systems using {\tt ps2pdf} or on Windows systems using Adobe's Distiller. There is also a website with free software and conversion services: \url{http://www.ps2pdf.com}. For reasons of uniformity, use of Adobe's {\em Times Roman} font is strongly suggested. In \LaTeX2e{} this is accomplished by writing \begin{quote} \mbox{\tt $\backslash$usepackage\{times\}} \end{quote} in the preamble.\footnote{You may also want to use the package {\tt latexsym}, which defines all symbols known from the old \LaTeX{} version.} Additionally, it is of utmost importance to specify the {\bf letter} format (corresponding to 8-1/2$''$ $\times$ 11$''$) when formatting the paper. When working with {\tt dvips}, for instance, one should specify {\tt -t letter}. 
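Putting the font and format advice above together, a minimal preamble sketch could look as follows (we assume the {\tt ijcai22} style file distributed with the author kit; adapt the name to your kit's version):
\begin{quote}
\begin{verbatim}
\documentclass{article}
\usepackage{ijcai22}  % macros from the author kit (assumed name)
\usepackage{times}    % Adobe Times Roman, as suggested above
\usepackage{latexsym} % symbols from the old LaTeX version
\end{verbatim}
\end{quote}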
\subsection{Title and Author Information} Center the title on the entire width of the page in a 14-point bold font. The title must be capitalized using Title Case. Below it, center author name(s) in 12-point bold font. On the following line(s) place the affiliations, each affiliation on its own line using 12-point regular font. Matching between authors and affiliations can be done using numeric superscripts. Optionally, a comma-separated list of email addresses follows the affiliation(s) line(s), using 12-point regular font. \subsubsection{Blind Review} In order to make blind reviewing possible, authors must omit their names and affiliations when submitting the paper for review. In place of names and affiliations, provide a list of content areas. When referring to one's own work, use the third person rather than the first person. For example, say, ``Previously, Gottlob~\shortcite{gottlob:nonmon} has shown that\ldots'', rather than, ``In our previous work~\cite{gottlob:nonmon}, we have shown that\ldots'' Try to avoid including any information in the body of the paper or references that would identify the authors or their institutions. Such information can be added to the final camera-ready version for publication. \subsection{Abstract} Place the abstract at the beginning of the first column 3$''$ from the top of the page, unless that does not leave enough room for the title and author information. Use a slightly smaller width than in the body of the paper. Head the abstract with ``Abstract'' centered above the body of the abstract in a 12-point bold font. The body of the abstract should be in the same font as the body of the paper. The abstract should be a concise, one-paragraph summary describing the general thesis and conclusion of your paper. A reader should be able to learn the purpose of the paper and the reason for its importance from the abstract. The abstract should be no more than 200 words long. \subsection{Text} The main body of the text immediately follows the abstract. Use 10-point type in a clear, readable font with 1-point leading (10 on 11). Indent when starting a new paragraph, except after major headings. \subsection{Headings and Sections} When necessary, headings should be used to separate major sections of your paper. (These instructions use many headings to demonstrate their appearance; your paper should have fewer headings.) All headings should be capitalized using Title Case. \subsubsection{Section Headings} Print section headings in 12-point bold type in the style shown in these instructions. Leave a blank space of approximately 10 points above and 4 points below section headings. Number sections with arabic numerals. \subsubsection{Subsection Headings} Print subsection headings in 11-point bold type. Leave a blank space of approximately 8 points above and 3 points below subsection headings. Number subsections with the section number and the subsection number (in arabic numerals) separated by a period. \subsubsection{Subsubsection Headings} Print subsubsection headings in 10-point bold type. Leave a blank space of approximately 6 points above subsubsection headings. Do not number subsubsections. \paragraph{Titled paragraphs.} You should use titled paragraphs if and only if the title covers exactly one paragraph. Such paragraphs should be separated from the preceding content by at least 3pt, and no more than 6pt. The title should be in 10pt bold font and end with a period. After that, a 1em horizontal space should follow the title before the paragraph's text. 
In \LaTeX{} titled paragraphs should be typeset using \begin{quote} {\tt \textbackslash{}paragraph\{Title.\} text} . \end{quote} \subsubsection{Acknowledgements} You may include an unnumbered acknowledgments section, including acknowledgments of help from colleagues, financial support, and permission to publish. If present, acknowledgements must be in a dedicated, unnumbered section appearing after all regular sections but before any appendices or references. Use \begin{quote} {\tt \textbackslash{}section*\{Acknowledgements\}} \end{quote} to typeset the acknowledgements section in \LaTeX{}. \subsubsection{Appendices} Any appendices directly follow the text and look like sections, except that they are numbered with capital letters instead of arabic numerals. See this document for an example. \subsubsection{References} The references section is headed ``References'', printed in the same style as a section heading but without a number. A sample list of references is given at the end of these instructions. Use a consistent format for references. The reference list should not include publicly unavailable work. \subsection{Citations} Citations within the text should include the author's last name and the year of publication, for example~\cite{gottlob:nonmon}. Append lowercase letters to the year in cases of ambiguity. Treat multiple authors as in the following examples:~\cite{abelson-et-al:scheme} or~\cite{bgf:Lixto} (for more than two authors) and \cite{brachman-schmolze:kl-one} (for two authors). If the author portion of a citation is obvious, omit it, e.g., Nebel~\shortcite{nebel:jair-2000}. Collapse multiple citations as follows:~\cite{gls:hypertrees,levesque:functional-foundations}. \nocite{abelson-et-al:scheme} \nocite{bgf:Lixto} \nocite{brachman-schmolze:kl-one} \nocite{gottlob:nonmon} \nocite{gls:hypertrees} \nocite{levesque:functional-foundations} \nocite{levesque:belief} \nocite{nebel:jair-2000} \subsection{Footnotes} Place footnotes at the bottom of the page in a 9-point font. Refer to them with superscript numbers.\footnote{This is how your footnotes should appear.} Separate them from the text by a short line.\footnote{Note the line separating these footnotes from the text.} Avoid footnotes as much as possible; they interrupt the flow of the text. \section{Illustrations} Place all illustrations (figures, drawings, tables, and photographs) throughout the paper at the places where they are first discussed, rather than at the end of the paper. They should be floated to the top (preferred) or bottom of the page, unless they are an integral part of your narrative flow. When placed at the bottom or top of a page, illustrations may run across both columns, but not when they appear inline. Illustrations must be rendered electronically or scanned and placed directly in your document. They should be cropped outside \LaTeX{}, otherwise portions of the image could reappear during the post-processing of your paper. When possible, generate your illustrations in a vector format. When using bitmaps, please use at least 300dpi resolution. All illustrations should be understandable when printed in black and white, although you can use colors to enhance them. Line weights should be 1/2-point or thicker. Avoid screens and superimposing type on patterns, as these effects may not reproduce well. Number illustrations sequentially. Use references of the following form: Figure 1, Table 2, etc. Place illustration numbers and captions under illustrations. 
Leave a margin of 1/4-inch around the area covered by the illustration and caption. Use 9-point type for captions, labels, and other text in illustrations. Captions should always appear below the illustration. \section{Tables} Tables are considered illustrations containing data. Therefore, they should also appear floated to the top (preferably) or bottom of the page, and with the captions below them. \begin{table} \centering \begin{tabular}{lll} \hline Scenario & $\delta$ & Runtime \\ \hline Paris & 0.1s & 13.65ms \\ Paris & 0.2s & 0.01ms \\ New York & 0.1s & 92.50ms \\ Singapore & 0.1s & 33.33ms \\ Singapore & 0.2s & 23.01ms \\ \hline \end{tabular} \caption{Latex default table} \label{tab:plain} \end{table} \begin{table} \centering \begin{tabular}{lrr} \toprule Scenario & $\delta$ (s) & Runtime (ms) \\ \midrule Paris & 0.1 & 13.65 \\ & 0.2 & 0.01 \\ New York & 0.1 & 92.50 \\ Singapore & 0.1 & 33.33 \\ & 0.2 & 23.01 \\ \bottomrule \end{tabular} \caption{Booktabs table} \label{tab:booktabs} \end{table} If you are using \LaTeX, you should use the {\tt booktabs} package, because it produces better tables than the standard ones. Compare Tables \ref{tab:plain} and~\ref{tab:booktabs}. The latter is clearly more readable for three reasons: \begin{enumerate} \item The styling is better thanks to using the {\tt booktabs} rulers instead of the default ones. \item Numeric columns are right-aligned, making it easier to compare the numbers. Make sure to also right-align the corresponding headers, and to use the same precision for all numbers. \item We avoid unnecessary repetition, both between lines (no need to repeat the scenario name in this case) as well as in the content (units can be shown in the column header). \end{enumerate} \section{Formulas} IJCAI's two-column format makes it difficult to typeset long formulas. A usual temptation is to reduce the size of the formula by using the {\tt small} or {\tt tiny} sizes. This doesn't work correctly with the current \LaTeX{} versions, breaking the line spacing of the preceding paragraphs and title, as well as the equation number sizes. The following equation demonstrates the effects (notice that this entire paragraph looks badly formatted): \begin{tiny} \begin{equation} x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i \end{equation} \end{tiny}% Reducing formula sizes this way is strictly forbidden. We {\bf strongly} recommend that authors split formulas into multiple lines when they don't fit in a single line. This is the easiest approach to typeset those formulas and provides the most readable output:% \begin{align} x =& \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \nonumber\\ + & \prod_{i=1}^n \sum_{j=1}^n j_i \end{align}% If a line is just slightly longer than the column width, you may use the {\tt resizebox} command on that equation. The result looks better and doesn't interfere with the paragraph's line spacing: % \begin{equation} \resizebox{.91\linewidth}{!}{$ \displaystyle x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i $} \end{equation}% This last solution may have to be adapted if you use different equation environments, but it can generally be made to work. 
Please notice that in any case: \begin{itemize} \item Equation numbers must be in the same font and size as the main text (10pt). \item Your formula's main symbols should not be smaller than {\small small} text (9pt). \end{itemize} For instance, the formula \begin{equation} \resizebox{.91\linewidth}{!}{$ \displaystyle x = \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j + \prod_{i=1}^n \sum_{j=1}^n j_i + \prod_{i=1}^n \sum_{j=1}^n i_j $} \end{equation} would not be acceptable because the text is too small. \section{Examples, Definitions, Theorems and Similar} Examples, definitions, theorems, corollaries and similar must be written in their own paragraph. The paragraph must be separated by at least 2pt and no more than 5pt from the preceding and succeeding paragraphs. They must begin with the kind of item written in 10pt bold font followed by their number (e.g.: Theorem 1), optionally followed by a title/summary between parentheses in non-bold font and ended with a period. After that the main body of the item follows, written in 10pt italics font (see below for examples). In \LaTeX{} we strongly recommend that you define environments for your examples, definitions, propositions, lemmas, corollaries and similar. This can be done in your \LaTeX{} preamble using \texttt{\textbackslash{newtheorem}} -- see the source of this document for examples. Numbering for these items must be global, not per-section (e.g.: Theorem 1 instead of Theorem 6.1). \begin{example}[How to write an example] Examples should be written using the example environment defined in this template. \end{example} \begin{theorem} This is an example of an untitled theorem. \end{theorem} You may also include a title or description using these environments as shown in the following theorem. \begin{theorem}[A titled theorem] This is an example of a titled theorem. \end{theorem} \section{Proofs} Proofs must be written in their own paragraph separated by at least 2pt and no more than 5pt from the preceding and succeeding paragraphs. Proof paragraphs should start with the keyword ``Proof.'' in 10pt italics font. After that the proof follows in regular 10pt font. At the end of the proof, an unfilled square symbol (qed) marks the end of the proof. In \LaTeX{} proofs should be typeset using the \texttt{proof} environment. \begin{proof} This paragraph is an example of how a proof looks like using the \texttt{proof} environment. \end{proof} \section{Algorithms and Listings} Algorithms and listings are a special kind of figure. Like all illustrations, they should appear floated to the top (preferably) or bottom of the page. However, their caption should appear in the header, left-justified and enclosed between horizontal lines, as shown in Algorithm~\ref{alg:algorithm}. The algorithm body should be terminated with another horizontal line. It is up to the authors to decide whether to show line numbers or not, how to format comments, etc. In \LaTeX{} algorithms may be typeset using the {\tt algorithm} and {\tt algorithmic} packages, but you can also use one of the many other packages for the task. \begin{algorithm}[tb] \caption{Example algorithm} \label{alg:algorithm} \textbf{Input}: Your algorithm's input\\ \textbf{Parameter}: Optional list of parameters\\ \textbf{Output}: Your algorithm's output \begin{algorithmic}[1] \STATE Let $t=0$. \WHILE{condition} \STATE Do some action. \IF {conditional} \STATE Perform task A. 
\ELSE \STATE Perform task B. \ENDIF \ENDWHILE \STATE \textbf{return} solution \end{algorithmic} \end{algorithm} \section*{Acknowledgments} The preparation of these instructions and the \LaTeX{} and Bib\TeX{} files that implement them was supported by Schlumberger Palo Alto Research, AT\&T Bell Laboratories, and Morgan Kaufmann Publishers. Preparation of the Microsoft Word file was supported by IJCAI. An early version of this document was created by Shirley Jowell and Peter F. Patel-Schneider. It was subsequently modified by Jennifer Ballentine and Thomas Dean, Bernhard Nebel, Daniel Pagenstecher, Kurt Steinkraus, Toby Walsh and Carles Sierra. The current version has been prepared by Marc Pujol-Gonzalez and Francisco Cruz-Mencia. \section{Introduction}\label{sec1} \subfile{sections/1_introduction} \section{Preliminaries}\label{sec2} \subfile{sections/2_preliminaries} \section{Optimization Strategies}\label{sec3} \subfile{sections/3_solutions} \section{Performance Comparison}\label{sec4} \subfile{sections/4_comparison} \section{Future Directions}\label{sec5} \subfile{sections/5_futuredirections} \section{Conclusion}\label{sec6} \subfile{sections/6_conclusion} \section{Acknowledgement}\label{sec7} \subfile{sections/7_acknowledgement} \clearpage \subsection{Neural Architecture Search (NAS)} Neural Architecture Search\,(NAS) automatically finds an architecture for a deep learning task while satisfying accuracy constraints. The NAS process can be broken down into three parts: the search space, the search strategy, and the performance estimation strategy \cite{elsken2019neural}. \begin{itemize} \item \textbf{Search space.} The search space embodies the possible architectures for the given task. Introducing inductive bias when defining the search space is inevitable. However, works involving meta-learning \cite{lee2021rapid} construct an embedded search space that circumvents such human bias. The earliest example of a search space would be a sequential chain of operations. \item \textbf{Search strategy.} The search strategy determines how the architectures are sampled from the search space. There are two competing metrics here: architecture performance and search time. Finding high-performing architectures may take a long time. A potential navigation method for this would be probing the Pareto frontier: the set of solutions where improving one objective worsens another. Reinforcement learning\,\cite{zoph2016neural} and evolution\,\cite{real2017large} are other examples of search strategies. \item \textbf{Performance estimation strategy.} The performance estimation strategy seeks to measure how well the searched architecture does on new unseen data. The naive way to do this is to predict on the validation set. Of course, full training is computationally taxing and not the most practical way to do things. Works in the NAS literature have found many different proxies for performance estimation. \end{itemize} \subsection{Supernet Optimization} We remark that the terms supernet and weight-sharing are at times used interchangeably and circularly in the NAS literature. For the purposes of this survey, we will distinguish and define those terms. A \textbf{supernet} is a neural network that contains all possible architectures to be sampled, i.e., it serves as the search space. Architectures are found by taking subnets from the supernet. We can represent a supernet as a Directed Acyclic Graph\,(DAG) where the nodes are feature maps and the edges are operations\,(e.g. 
convolution, pool, zero). We refer to Figure~\ref{fig:cell} for an illustration of a supernet. \begin{figure}[ht] \centering \hbox{\hspace{2cm}\includegraphics[width=5cm,keepaspectratio]{images/cell.png}} \caption{An illustration of a supernet. The nodes denote feature maps and the colored edges represent candidate operations. This supernet embodies the search space, where a sampled architecture would be a subnet.} \label{fig:cell} \end{figure} A key feature of supernet-based NAS would be \textbf{weight-sharing} \cite{pham2018efficient}. Weight-sharing is done by first training a single large supernet. An RL-based controller \cite{pham2018efficient} then samples a subnet from the supernet. As a result, all sampled child networks share weights from the supernet. The introduction of weight-sharing has reduced the cost of the search process from thousands of GPU days down to a few GPU days\,\cite{zoph2016neural,real2017large,zoph2018learning}. The one-shot approach \cite{bender2018understanding} extends the weight-sharing technique. The authors train a single large supernet, which is just a neural network trained using SGD with momentum. During training, a linearly scheduled dropout strategy \cite{hinton2012improving} is used to prevent co-adaptation among operations (edges). The dropout rate starts at 0 and then linearly increases to $r^{1/k}$, where $0<r<1$ is a hyperparameter and $k$ is the number of incoming paths to a given operation. When applying dropout, a random subset of the operations is zeroed out. After training, subnets are randomly sampled from the trained supernet following a uniform distribution (other search methods such as evolution or reinforcement learning can be used instead). The sampled subnet is then evaluated on a held-out validation set. The resulting output is a list of candidate architectures ranked by one-shot accuracy (based on the validation set). The best-performing architectures can be selected to be retrained from scratch. \subsection{Data-Side Optimization} Many works on supernet optimization focus on improving the architecture or the search phase itself. Works such as OFA\,\cite{cai2019once} and BigNAS\,\cite{yu2020bignas} seek to yield architectures that can be instantly deployed for tasks\,(these are mentioned in section \ref{tran}). Methods such as ENAS\,\cite{pham2018efficient} and DARTS\,\cite{liu2018darts} focus on searching architectures efficiently (quickly). However, the actual architecture and search phase are not the only avenues for improving a machine learning algorithm. Data is another route for optimization. The issue of data is essential to any machine learning task. In this section, we survey supernet-based NAS with respect to data-side optimization. Given that the task is image classification, the image resolution is of concern. We cover High Resolution NAS\,(HR-NAS)\,\cite{ding2021hr}, which is designed for high-resolution images. For supervised learning, labels are integral. However, labelling high volumes of data is laborious and can be a bottleneck in the machine learning pipeline. We explore Random Label NAS\,(RLNAS)\,\cite{zhang2021neural}, which removes ground-truth labels and only uses random labels for the search. \iffalse In the case where data is limited, we discuss Contrastive NAS\,(CTNAS)\,\cite{chen2021contrastive} that does not estimate architecture performance but rather computes the probability that the sampled architecture will outperform a given baseline. 
In addition, most NAS methods are fed input signals with clean data. There is a need for NAS methods that are robust against noisy data. We also look into NAS that is robust against Out-of-Distribution\,(OOD) data\,\cite{bai2021ood}. \subsubsection{High Resolution Images} Image resolution sizes in image classification tasks tend to be on the smaller side (about $224\times224$). Computational requirements naturally increase as image resolution goes up, so a natural concern is to develop methods that handle high resolution images. High Resolution NAS (HR-NAS) searches for architectures designed to take in high resolution images. HR-NAS introduces a light-weight Vision Transformer (ViT) \cite{dosovitskiy2020image} that also incorporates convolution. The ViT consists of a projector, an encoder, and a decoder. For the projector, instead of using a sinusoidal encoding, the authors project the input feature with a simple normalized 2D positional map. For the encoder, the input feature is transformed into a set of $n$ tokens where each is a lower-dimensional semantic embedding with positional information. Then the tokens are fed as queries, keys, and values into the ViT. The authors introduce a multi-branch search space inspired by HRNet \cite{wang2020deep}. This search space contains both multi-scale features and global contexts while maintaining high resolution representations throughout the network. The supernet in HR-NAS is a multi-branch network where each branch is a chain of search blocks operating at different resolutions. Each search block combines a MixConv and a Transformer\,\cite{vaswani2017attention}. The search is based on progressive shrinking that discards some convolutional channels and Transformer queries during training. \subsubsection{Randomly Labeled Data} Random Label NAS\,(RLNAS)\,\cite{zhang2021neural} abandons ground truth labels and adopts random labels during searching. The authors of RLNAS decouple the search into two convergence-based optimization steps. First, they train a supernet with random labels: \begin{equation} W^*=\argmin_{W}{\mathbb{E}_{\alpha\sim\Gamma(\mathcal{A})}}{\mathcal{L}(\alpha,W,R)}, \end{equation} where $W,\,\alpha,\,\mathcal{A},\,R,\,\mathcal{L},\,\Gamma(\mathcal{A})$ denote the weights, architecture, architecture space, random labels, loss function, and a uniform distribution of $\alpha\in\mathcal{A}$, respectively. Second, the authors run an evolution algorithm that searches for the optimal architecture based on a convergence-based metric Convergence($\cdot$) as fitness: \begin{equation} \alpha^*=\argmax_{\alpha\in\mathcal{A}}{\text{Convergence}(\alpha,W^*)}. \end{equation} The random labels follow a uniform distribution, and the convergence-based metric is an angle-based model evaluation metric that measures the angle between the initial model weights and the trained ones. \subsubsection{Out-of-Distribution Generalization} NAS for image classification mostly works with clean datasets such as CIFAR10/100, ImageNet, etc. However, clean data is scarce in real-world settings, and data distribution shifts are common. For example, a dog breed identifier might be given a photo of a cat due to user error. Thus, there is a need for supernet optimization to be robust to Out-of-Distribution\,(OOD) cases.
Out-of-Distribution refers to the case where a machine learning algorithm is fed data that is not from the distribution it was trained on. I.e., there is a distribution shift. There are mainly two kinds of OOD shifts: covariate shift and semantic shift. Covariate shift refers to a change in the input distribution. Semantic shift refers to a change in the output/label distribution. The authors of NAS-OOD\,\cite{bai2021ood} aim to develop a method that is robust to OOD data. They carry this out by jointly optimizing a conditional generator to synthesize OOD examples during training. To elaborate, let $\alpha,\,\omega,\,\theta_G$ denote the parameters for the architecture topology, the supernet, and the conditional generator $G(\cdot,\cdot)$, respectively. The conditional generator $G(\cdot,\cdot)$ takes data $x$ and domain labels $\widetilde{k}$ as inputs. NAS-OOD is carried out by a minimax optimization. First, the conditional generator $G$ is learned to generate novel domain data by maximizing the validation loss, serving as an adversarial attack. Next, the validation loss is minimized by optimizing the architecture variables in $\alpha$ on the generated OOD images. The constrained minimax optimization problem can be formulated as: \begin{equation} \begin{aligned} &\min_{\alpha}\max_{G}\mathcal{L}_{val}(\omega^*(\alpha),\alpha,G(x,\widetilde{k}))\\ &\text{s.t. }\,\omega^*(\alpha)=\argmin_{\omega}\mathcal{L}_{train}(\omega,\alpha,x), \end{aligned} \end{equation} where $G(x,\widetilde{k})$ is the generated data from the original input data $x$ on the domain labels $\widetilde{k}$. The optimization is carried out by gradient ascent for $\theta_G$ and gradient descent for $\omega$ and $\alpha$. In addition, the authors apply gradient descent to $\theta_G$ with respect to an auxiliary loss in order to improve the generator's consistency and semantic information preservation during training. \subsection{Poor Rank Correlation Alleviation} In this section, we survey methods that aim to alleviate poor rank correlation. After supernet optimization, a subnet needs to be sampled and evaluated. This process repeats and returns a list of ranked architectures. Unfortunately, supernet-based NAS suffers from poor rank correlation. I.e., model accuracy predictions are poorly correlated with the standalone accuracy. \cite{bender2018understanding} attributes this to the co-adaptation among the weights. I.e., when the shared weights have highly correlated behavior. Rank correlation is measured by three metrics: Kendall Tau\,($\tau$)\,\cite{kendall1938new}, Spearman Rho\,($\rho$), and Pearson R\,(R). Poor rank correlation alleviation is usually accomplished by navigating or factorizing the search space. Navigation is carried out by selecting a path\,\cite{bender2018understanding,guo2020single,chu2021fairnas,chu2020mixpath} in the supernet, while search space factorization is done by block-wise\,\cite{li2021bossnas} or hierarchical partitioning\,\cite{zhao2021few}. Disjoint from the aforementioned approaches, there is research that focuses on training consistency shift in the supernet\,\cite{peng2021pi}. \subsubsection{Path Selection} Since architectures are sampled by taking subnets\,(paths) of the supernet, a natural approach is to smartly choose paths. I.e., this avenue of research is focused on navigating through the supernet search space. \cite{bender2018understanding} utilizes a linearly scheduled dropout strategy during training, as in the sketch below.
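As a rough illustration of this schedule, consider the following minimal sketch (our own construction; the helper names and the list-based representation of candidate operations are hypothetical, not from \cite{bender2018understanding}):
\begin{verbatim}
import random

def dropout_rate(step, total_steps, r, k):
    # Linear ramp from 0 up to the final rate r**(1/k), where k is
    # the number of incoming paths to the operation and 0 < r < 1.
    return (step / total_steps) * (r ** (1.0 / k))

def sample_active_ops(candidate_ops, rate):
    # Zero out a random subset of the candidate operations, keeping
    # at least one so the node still receives an input.
    kept = [op for op in candidate_ops if random.random() > rate]
    return kept if kept else [random.choice(candidate_ops)]
\end{verbatim}
After training, uniform subnet sampling then amounts to keeping exactly one candidate operation per edge, chosen uniformly at random.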
After learning the parameters, subnets are sampled following a uniform distribution. Aiming to improve upon\,\cite{bender2018understanding}, Single-Path One-Shot NAS\,(SPOS)\,\cite{guo2020single} abstracts the supernet as a collection of choice modules. During training, SPOS randomly picks a subnet based on a uniform distribution and evaluates its validation accuracy. \begin{figure}[ht] \centering \hbox{\hspace{0.25cm}\includegraphics[width=8.25cm,keepaspectratio]{images/choice_module.PNG}} \caption{The choice module paradigm introduced by Single-Path One-Shot NAS. Each module is randomly chosen from a set of choices based on a uniform distribution.} \label{fig:choice_module} \end{figure} Under the choice module paradigm, FairNAS\,\cite{chu2021fairnas} raises the issue of biased supernets, i.e., parameters that are not trained evenly. FairNAS aims to make sure the parameters of each choice module are updated the same number of times. For example, if there are $m$ choice modules for each layer, FairNAS takes a uniform sample without replacement of the choice modules, i.e., the $m$ choice module indices are permuted. Then, a model is built and evaluated based on the sampled index permutation. The search is accomplished via a genetic algorithm, NSGA-II\,\cite{deb2002fast}. The aforementioned methods fall under single-path sampling, i.e., methods that sample only one architecture\,(subnet). An extension to this would be multi-path sampling, i.e., supernet optimization methods that sample multiple architectures. The authors of MixPath\,\cite{chu2020mixpath} find that a simple superposition of feature vectors sampled from multiple paths incurs highly dynamic statistics, causing training instability. They find that feature vectors from multiple paths are nearly multiples of those from a single path. The solution is to scale the vectors down to the same magnitude, restoring stability, which the authors accomplish by proposing a novel Shadow Batch Normalization (SBN). MixPath is a two-stage pipeline: training the supernet with SBN, then searching for high-performing models via NSGA-II. \subsubsection{Block-Wise Partitioning} It has been found that the rank correlation can vary depending on the search space itself\,\cite{zhang2020does}. For example, on the NAS-Bench-201 search space\,\cite{dong2020bench}, the correlation can be as high as 0.7, whereas architectures sampled from DARTS-PTB\,\cite{liu2018darts} can perform worse than random search. In an attempt to alleviate the co-adaptation,\,\cite{zhang2020deeper} proposed to reduce the weight-sharing density\,(the number of architectures sharing the weight of one operator) by downscaling the search space. However,\,\cite{zhang2020does} finds that the performance improvement is only significant when the downscale factor is 64 on NAS-Bench-201\,(only 244 different architectures). Instead of shrinking the entire search space, other works\,\cite{li2020block,moons2021distilling} partition the search space into blocks to improve the rank correlation. This allows the original search space size to remain intact. The current state-of-the-art\,(SOTA) NAS architecture on ImageNet for image classification is BossNet-T1+\,\cite{li2021bossnas}. The authors of Block-wisely Self-supervised Neural Architecture Search\,(BossNAS) achieve SOTA results, alleviating the rank correlation issue by using block-wise weight-sharing. We give a brief primer on block-wise partitioning. Let $\mathcal{N}$ be the supernet.
We break $\mathcal{N}$ into $N$ blocks by the supernet's depth and have: \begin{equation} \mathcal{N}=\mathcal{N}_N\circ \dots \circ \mathcal{N}_{i+1}\circ\mathcal{N}_i\circ\dots\circ\mathcal{N}_1, \end{equation} where $\mathcal{N}_{i+1}\circ\mathcal{N}_i$ denotes that the $(i+1)$-th block is connected to the $i$-th block in the supernet. Then each block is learned separately by optimizing: \begin{equation} \mathcal{W}_i^*=\argmin_{\mathcal{W}_i}{\mathcal{L}_{train}(\mathcal{W}_i,\mathcal{A}_i;\textbf{X},\textbf{Y})},\,\,i=1,2,\dots,N, \end{equation} where $\mathcal{L}_{train},\,\mathcal{W}_i,\:\mathcal{A}_i,\:\textbf{X},\:\textbf{Y}$ denote the training loss, the weights of the $i$-th block, the search space of the $i$-th block, the input data, and the ground truth labels, respectively. The authors of BossNAS were inspired to use block-wise weight-sharing by works such as DNA\,\cite{li2020block} and DONNA\,\cite{moons2021distilling}, which achieve high correlation and high efficiency. However, the authors of BossNAS argue that those prior works have limited application on search spaces with disparate candidates, such as CNNs and Transformers\,\cite{li2021bossnas}. BossNAS trains each block separately before searching among all blocks in a linear combination of each block's validation loss: \begin{equation} \begin{aligned} &\alpha^*=\{\alpha_i\}^*=\argmin_{\forall\{\alpha_i\}\subset\mathcal{A}}{\sum_{i=1}^{N}{\lambda_i\mathcal{L}_{val}}\left(W_i^*,\alpha_i;\textbf{X}_i,\textbf{Y}_i\right)}\\ &\text{s.t. }\,W_i^*=\argmin_{W_i}{\mathcal{L}_{train}\left(W_i,\mathcal{A}_i;\textbf{X}_i,\textbf{Y}_i\right)} \end{aligned} \end{equation} where $\alpha^*$ is the optimal architecture, $\{\alpha_i\}^*$ is the same optimal architecture as a sequence of blocks, $\lambda_i$ is the weight of the $i$-th block, $\mathcal{A}$ is the search space, and $\textbf{X}_i$ and $\textbf{Y}_i$ are the training data and labels for the $i$-th block, respectively. On the MBConv search space, BossNAS produces strong rank correlations of $\tau=0.65$, $\rho=0.78$, and $R=0.85$. In contrast, DARTS\,\cite{liu2018darts}, an early supernet-based method, shows abysmal correlations of $\tau=0.08$, $\rho=0.14$, and $R=0.06$. \subsubsection{Hierarchical Partitioning} The authors of few-shot NAS \cite{zhao2021few} adopt a hierarchical partitioning approach. They argue that one-shot NAS using only one supernet may not be able to model the search space due to its limited capacity and co-adaptation of operations \cite{bender2018understanding}. The hierarchical partitioning is done by splitting the connections in the supernet. In a supernet, each connection is a compound edge. I.e., all predefined candidate operations connect a pair of nodes (feature maps). In one-shot NAS, only one subnet is sampled from the supernet. In the few-shot case, sub-supernets are collected hierarchically by splitting compound edges. After sampling $k$ sub-supernets, they are trained separately, as in the sketch below.
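To make the splitting concrete, here is a minimal sketch (our own toy representation of the supernet as a map from edges to candidate operations; not code from \cite{zhao2021few}):
\begin{verbatim}
def split_on_edge(supernet, edge):
    # Split a supernet into sub-supernets by fixing one compound edge:
    # each sub-supernet keeps a single candidate operation on `edge`
    # and leaves every other compound edge intact.
    subs = []
    for op in supernet[edge]:
        sub = dict(supernet)
        sub[edge] = [op]  # this edge is no longer compound
        subs.append(sub)
    return subs

# One round of hierarchical partitioning on a toy supernet:
supernet = {"e1": ["conv3x3", "conv5x5", "skip"],
            "e2": ["conv3x3", "pool"]}
sub_supernets = split_on_edge(supernet, "e1")  # 3 sub-supernets
\end{verbatim}
Repeating the split on further edges recovers the hierarchy of Figure~\ref{fig:fewshot}.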
\begin{figure}[ht] \centering \hbox{\hspace{0.25cm}\includegraphics[width=8.25cm,keepaspectratio]{images/fewshot.pdf}} \caption{Hierarchical partitioning introduced by few-shot NAS. (a) One-shot NAS trains the entire supernet with all compound edges intact. (b) Few-shot NAS carries out hierarchical partitioning by splitting compound edges. The split sub-supernets are then trained separately. (c) In principle, one can enumerate every possible subnet, which would be equivalent to doing brute-force vanilla NAS.} \label{fig:fewshot} \end{figure} Naturally, there is a trade-off between evaluation accuracy and inference time. The hierarchical splitting could be repeated so that the search space is partitioned exhaustively\,(i.e., brute-force search), but that would be computationally infeasible. The authors find that using only 7 sub-supernets establishes new state-of-the-art results on ImageNet: 80.5\% top-1 accuracy at 600 MFLOPs and 77.5\% top-1 accuracy at 238 MFLOPs. The co-adaptation is found to be alleviated when measuring the rank correlation. The authors compared the correlation between the model's predicted performance and the ground truth. The Kendall's Tau scores for 1 sub-supernet (one-shot NAS), 6 sub-supernets, 36 sub-supernets, 216 sub-supernets, and 1296 sub-supernets (the entire search space) are 0.013, 0.12, 0.26, 0.63, and 1.0, respectively. Naturally, more accurate predictions are made when more of the search space is incorporated. \subsubsection{Supernet Training Consistency Shift} The search space itself is not the only place to look for causes of poor rank correlation. The authors of\,\cite{peng2021pi} empirically demonstrate that supernet training consistency shift hurts architecture ranking correlation. They break the consistency shift into two components: feature shift and parameter shift. \textit{Feature shift} refers to network instability due to input image perturbation. Let $\textbf{X}_l$, $\textbf{Y}_l$, and $\textbf{W}_l$ denote the input, output, and weights of layer $l$, respectively. Network instability refers to the disturbance in the loss $\mathcal{L}$. By the chain rule for back propagation, we have: $\frac{\partial \mathcal{L}}{\partial \textbf{W}_l} = \frac{\partial \mathcal{L}}{\partial \textbf{Y}_l}\frac{\partial \textbf{Y}_l}{\partial \textbf{W}_l}=\frac{\partial \mathcal{L}}{\partial \textbf{Y}_l}\textbf{X}_l$. This shows that architecture ranking preservation is highly dependent on the inputs $\textbf{X}_l$. Since one-shot methods involve random path-sampling, the path that leads to layer $l$ varies, and thus the input $\textbf{X}_l$ varies. \textit{Parameter shift} refers to contradictory parameter updates for a given layer. During supernet training, a given layer $l$ is always present in different paths throughout all iterations. However, the weights might receive contradictory updates across iterations. Looking at the gradient descent update step $\textbf{W}_l^{t+1}\leftarrow\textbf{W}_l^t-\eta\frac{\partial \mathcal{L}^t}{\partial \textbf{W}_l^t}$ (with learning rate $\eta$), there are two ways the rapidly varying $\textbf{W}_l$ will hurt rank correlation. One, the loss depends on both the weights $\textbf{W}_l$ and the descent $\textbf{W}_l-\eta\frac{\partial \mathcal{L}}{\partial \textbf{W}_l}$. I.e., stable weights can ensure a correct loss descent and guarantee an accurate architecture ranking, while erratic parameters cannot achieve a correct ranking. Two, since the input $\textbf{X}_l$ is generated by the network weights of the previous layers, varying parameters can also result in a feature shift, which further hurts architecture rank correlation. The authors reduce the supernet training consistency shift by $\Pi$-NAS: a nontrivial supernet-$\Pi$ model. They design a supernet-$\Pi$ model and a nontrivial mean teacher model\,\cite{tarvainen2017mean} to address feature shift and parameter shift, respectively. \textit{Supernet-$\Pi$ model.}
To achieve a stable input distribution, the supernet-$\Pi$ model penalizes the inconsistency between predictions for the same input made through different sampled paths. By evaluating different augmentations of an input $x$, the authors define a cross-path consistency cost as follows: \begin{equation} \mathcal{L}_{Con}=-\mathbb{E}_{X}{\left[\mathcal{D}(z_i,z_j')+\mathcal{D}(z_j,z_i')\right]} \end{equation} where $X$ and $\mathcal{D}$ denote a training set and a consistency metric, respectively. $\{i,j\}$ and $\{z_i,z_j,z_i',z_j'\}$ denote paths and representations, respectively. The feature shift is reduced by minimizing the cross-path consistency cost. \textit{Nontrivial mean teacher model.} Parameter shift is reduced by smoothing parameter updates. The authors were inspired by the mean teacher\,\cite{tarvainen2017mean}, so they use exponentially moving-averaged weights for the teacher model. They denote $\mathcal{W}_t$ as the parameters of the student mapping function $f$ at training step $t$. Then, the weights of the mean teacher model $f'$ can be defined as: \begin{equation} \mathcal{W}_t'=\lambda\mathcal{W}_{t-1}'+(1-\lambda)\mathcal{W}_t \end{equation} where $\lambda\in[0,1]$ is a smoothing coefficient hyperparameter. \subsection{Transferable NAS}\label{tran} After supernet optimization, the sampled subnet requires post-processing: retraining, fine-tuning, etc. Training neural networks from scratch is computationally taxing, so there is interest in developing NAS methods that can yield architectures that can be instantly deployed for various tasks and hardware settings. \subsubsection{Specialized Networks for Specific Hardware Settings} The Once-For-All (OFA) network is a type of supernet that yields deployable sub-networks that require no further training for diverse tasks and hardware platforms \cite{cai2019once}. The authors use \emph{progressive shrinking} (PS) to train the OFA network. The idea is to initially train a large network and then progressively shrink it for fine-tuning with respect to 4 dimensions: width (channels), depth (layers), kernel size, and resolution. After training the full model, the authors set the filter size to be elastic, i.e., choose the kernel size to be 3, 5, or 7 at each layer while the depth and width remain at their maximum values. The depth and width are then sequentially set to be elastic in a similar fashion. The resolution size remains elastic throughout the whole training process. For elastic kernel size, all kernels of different sizes share the same center. The concern that arises when kernels share the same center is that they need to play multiple roles (independent kernel and part of a large kernel), which can degrade performance. To address this, the authors use separate kernel transformation matrices for different layers. For elastic depth, if one wants a $D$-layer unit from an $N$-layer unit (such that $D<N$), the authors keep the first $D$ layers and skip the last $N-D$ layers. For elastic width, the authors introduce a channel sorting operation to support partial widths. If the full width is $M$, then the top $K$ ($K<M$) important channels are retained. Importance is defined as the L1 norm of a channel's weights (a higher value meaning more important). The authors then use knowledge distillation \cite{hinton2015distilling} after training the largest network. The elastic depth and width operations are sketched below.
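As a rough sketch of these two elastic operations (our own minimal version, assuming a plain list of layers and a 2D weight matrix with one row per output channel; the function names are ours, not OFA's):
\begin{verbatim}
import numpy as np

def elastic_depth(layers, D):
    # Keep the first D layers of an N-layer unit (D < N)
    # and skip the remaining N - D layers.
    return layers[:D]

def elastic_width(weight, K):
    # Keep the top-K output channels ranked by the L1 norm of
    # their weights; rows of `weight` are output channels.
    importance = np.abs(weight).sum(axis=1)
    top = np.argsort(importance)[::-1][:K]
    return weight[np.sort(top)]  # preserve original channel order
\end{verbatim}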
After training the OFA network, the authors build neural-network twins to predict the latency and accuracy of a given neural network architecture. The authors randomly sample 16K sub-networks with different architectures and image resolutions, then measure their accuracy on 10K validation images. The goal is to build an accuracy predictor fed with architectures as an input signal and accuracy as labels. The latency for a given architecture is predicted by utilizing a latency lookup table based on various target hardware platforms. When given a target hardware and latency constraint, the authors conduct an evolutionary search based on the neural-network twins to yield a specialized sub-network. Mobile-setting performance is of great interest when it comes to OFA. The authors compared the ImageNet performance of OFA with MobileNetV3\,\cite{howard2019searching} on 6 different mobile platforms: Samsung S7 Edge, Note8, Note10, Google Pixel1 \& Pixel2, and LG G8. OFA consistently outperforms MobileNetV3 under the same latency constraints while training only once. Similar results were also found for specialized hardware accelerators: NVIDIA 1080TI \& V100, Intel Xeon CPU, Jetson TX2, Xilinx ZU9EG FPGA, \& ZU3EG FPGA. \subsubsection{BigNAS} BigNAS \cite{yu2020bignas} aims to train a single supernet that yields subnets whose sizes range from 200 to 1000 MFLOPs without extra training or post-processing. In other words, the weights can be directly deployed after training. The authors coin such a supernet a big \emph{single-stage model}. They select architectures using a simple \emph{coarse-to-fine} selection method to find the most accurate model under resource constraints such as FLOPs, memory footprint, and/or runtime latency budgets on different devices. When training the single-stage model, the authors use two techniques \cite{yu2019universally}: the sandwich rule and inplace distillation. In each training step, given a mini-batch of data, the sandwich rule samples the smallest child model, the biggest (full) child model, and $N$ randomly sampled child models. It then aggregates the gradients from all sampled child models before updating the weights of the single-stage model. Here, the ``smallest'' child means the smallest input resolution, thinnest width, shallowest depth, and smallest kernel size. The motivation is to improve all child models in the search space simultaneously, by pushing up both the performance lower bound (the smallest model) and the performance upper bound (the full model). One such training step is sketched below.
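As a rough sketch of one sandwich-rule training step (our own PyTorch-style simplification; \texttt{smallest\_child}, \texttt{full\_child}, and \texttt{sample\_child} are hypothetical helpers, and inplace distillation is omitted):
\begin{verbatim}
def sandwich_step(supernet, batch, loss_fn, optimizer, N=2):
    # Accumulate gradients from the smallest, the full, and N random
    # child models, then update the shared single-stage weights once.
    x, y = batch
    optimizer.zero_grad()
    children = [supernet.smallest_child(), supernet.full_child()]
    children += [supernet.sample_child() for _ in range(N)]
    for child in children:
        loss = loss_fn(child(x), y)
        loss.backward()  # gradients accumulate in the shared weights
    optimizer.step()
\end{verbatim}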
\section{Introduction} Deep Reinforcement Learning algorithms aim to learn a policy of an agent to maximize its cumulative rewards by interacting with environments and have demonstrated substantial success in a wide range of application domains, such as video games~\cite{mnih2015human}, board games~\cite{silver2016mastering}, and visual navigation~\cite{zhu2017target}. While these results are remarkable, one of the critical constraints is the prerequisite of carefully engineered dense reward signals, which are not always accessible. To overcome these constraints, researchers have proposed a range of intrinsic reward functions. For example, curiosity-driven intrinsic rewards based on the prediction error of the current~\cite{burda2018exploration} or future state~\cite{pathak2017curiosity} in latent feature spaces have shown promising results. Nevertheless, visual state prediction is a non-trivial problem, as visual state is high-dimensional and tends to be highly stochastic in real-world environments. The occurrence of physical events (\textit{e.g.} objects coming into contact with each other, or changing state) often correlates with both visual and auditory signals. Both sensory modalities should thus offer useful cues to agents learning how to act in the world. Indeed, classic experiments in cognitive and developmental psychology show that humans naturally attend to both visual and auditory cues, and their temporal coincidence, to arrive at a rich understanding of physical events and human activity such as speech~\cite{spelke1976infants,mcgurk1976hearing}. In artificial intelligence, however, much more attention has been paid to the ways visual signals (e.g., patterns in pixels) can drive learning. We believe this misses important structure learners could exploit. As compared to visual cues, sounds are often more directly or easily observable causal effects of actions and interactions. This is clearly true when agents interact: most communication uses speech or other nonverbal but audible signals. However, it is just as true in physics. Almost any time two objects collide, rub or slide against each other, or touch in any way, they make a sound. That sound is often clearly distinct from background auditory textures, localized in both time and spectral properties, and hence relatively easy to detect and identify; in contrast, specific visual events can be much harder to separate from all the ways high-dimensional pixel inputs are changing over the course of a scene. The sounds that result from object interactions also allow us to estimate underlying causally relevant variables, such as material properties (\textit{e.g.}, whether objects are hard or soft, solid or hollow, smooth or rough), which can be critical for planning actions. These facts raise a natural question: how can audio signals benefit policy learning in RL? In this paper, our main idea is to use sound prediction as an intrinsic reward to guide RL exploration. Intuitively, we want to exploit the fact that sounds are frequently made when objects interact or other causally significant events occur, treating them as cues to causal structure or as candidate subgoals an agent could discover and aim for. A na\"ive strategy would be to directly regress feature embeddings of audio clips and use feature prediction errors as intrinsic rewards. However, prediction errors on feature space do not accurately reflect how well the agent understands the underlying causal structure of events and goals.
It also remains an open problem how to perform appropriate normalization to address diminishing intrinsic rewards. To bypass these limitations, we formulate the sound-prediction task as a classification problem, in which we train a neural network to predict the auditory events that occur after applying an action to a visual scene. We use classification errors as an exploration bonus for deep reinforcement learning. Concretely, our pipeline consists of two exploration phases. In the beginning, the agent receives an incentive to actively collect a small amount of auditory data by interacting with the environment. Then we cluster the sound data into auditory events using K-means. In the second phase, we train a neural network to predict the auditory events conditioned on the embedding of visual observations and actions. States with wrong predictions are rewarded and encouraged to be visited more. We demonstrate the effectiveness of our intrinsic motivation module on 25 Atari games and a rolling-robot multi-modal physics simulation platform built on top of TDW~\cite{gan2020threedworld}. In summary, our work makes the following contributions: \setdefaultleftmargin{1em}{1em}{}{}{}{} \begin{compactitem} \item We introduce a novel and effective auditory event prediction (AEP) framework to make use of auditory signals as intrinsic rewards for RL exploration. \item Our system outperforms previous state-of-the-art vision-only curiosity-driven exploration agents on most of the Atari games. \item We show that our new intrinsic module is more stable in the 3D multi-modal physical world environment and can encourage interesting actions that involve physical interactions. \end{compactitem} \section{Related Work} \textbf{Audio-Visual Learning.} In recent years, audio-visual learning has been studied extensively. Leveraging audio-visual correspondences in videos helps learn powerful audio and visual representations through self-supervised learning~\cite{owens2016ambient,aytar2016soundnet,arandjelovic2017look,korbar2018co,owens2018audio}. Other interesting applications using audio-visual knowledge transfer include sounding object localization~\cite{senocak2018learning,arandjelovic2018objects}, sound source separation~\cite{gao2018object-sounds,gan2020music,Zhao_2018_ECCV,ephrat2018looking,zhao2019sound,afouras2018conversation}, biometric matching~\cite{nagrani2018seeing}, sound generation for videos \cite{owens2016visually,zhou2017visual,gao20182,morgado2018self,gan2020Foley}, audio-visual co-segmentation~\cite{rouditchenko2019self}, auditory vehicle tracking~\cite{gan2019self}, and action recognition~\cite{long2018attention,long2018multimodal,nagrani2020speech2action}. In contrast to the widely used correspondences between these two modalities, we take a step further by considering sound as a causal effect of actions. \textbf{RL Explorations.} The problem of exploration in Reinforcement Learning (RL) has been an active research topic for decades. Various solutions have been investigated for encouraging the agent to explore novel states, including rewarding information gain~\cite{little2013learning}, surprise~\cite{schmidhuber1991possibility,schmidhuber2010formal}, state visitation counts~\cite{tang2017exploration,bellemare2016unifying}, empowerment~\cite{klyubin2005empowerment}, curiosity~\cite{pathak2017curiosity,burda2018large}, disagreement~\cite{pathak2019self}, and so on.
A separate line of work~\cite{osband2017deep,osband2016deep} adopts parameter noise and Thompson sampling heuristics for exploration. For example, Osband \cite{osband2017deep} trains multiple value functions and makes use of the bootstraps for deep exploration. Here, we mainly focus on the problem of using intrinsic rewards to drive explorations. The most widely used intrinsic motivations can be roughly divided into two families. The first one is count-based approaches~\cite{strehl2008analysis,bellemare2016unifying,tang2017exploration,ostrovski2017count,martin2017count,burda2018exploration}, which encourage the agent to visit novel states. For example, Burda \cite{burda2018exploration} employs the prediction errors of state features extracted from a fixed and randomly initialized network as exploration bonuses, encouraging the agent to visit more previously unseen states. The other is the curiosity-based approach~\cite{stadie2015incentivizing,pathak2017curiosity,haber2018learning,burda2018large}, which is formulated as the uncertainty in predicting the consequences of the agent's actions. For instance, \cite{pathak2017curiosity,burda2018large} use the errors of predicting the next state in the latent feature space as rewards. The agent is then encouraged to improve its knowledge about the environment dynamics. In contrast to previous work that purely works on visual observations, we make use of sound signals as rewards for RL explorations. \textbf{Sounds and Actions.} Numerous works explore the associations between sounds and actions. For example, Owens \cite{owens2016visually} made the first attempt to collect an audio-video dataset through physical interaction with objects and trained an RNN model to generate sounds for silent videos. \cite{shlizerman2018audio,ginosar2019learning} explore the problems of predicting body dynamics from music and body gestures from speech. Gan~\cite{gan2019look} and Chen~\cite{chen2019audio} introduce interesting audio-visual embodied tasks in 3D simulation environments. More recently, Gandhi~\cite{Gandhi20} collected a large sound-action-vision dataset using Tilt-Bot and demonstrated that sound signals can provide valuable information for fine-grained object recognition, inverse model learning, and forward dynamics prediction. More related to us are the papers from \cite{aytar2018playing} and \cite{omidshafiei2018crossmodal}, which have shown that sound signals can provide useful supervision for imitation learning and reinforcement learning in Atari games. Concurrent to our work, \cite{dean2020see} uses novel associations of audio and visual signals as intrinsic rewards to guide RL exploration. Different from them, we mainly study whether sound signals alone can be utilized as intrinsic rewards for RL explorations. \section{Method} In this section, we first introduce some background knowledge on reinforcement learning and intrinsic rewards. Then we present the representations of auditory events. Finally, we elaborate on the pipeline of self-supervised exploration through auditory event predictions. The pipeline of our system is outlined in Figure~\ref{fig:framework}.
\input{teaser.tex} \subsection{Background} \noindent\textbf{MDPs.} We formalize the decision procedure in our context as a standard Markov Decision Process (MDP), defined as $(\mathcal{S}, \mathcal{A}, r, \mathcal{T}, \mu, \gamma)$. $\mathcal{S}$, $\mathcal{A}$, and $\mu(s): \mathcal{S} \rightarrow [0, 1]$ denote the state set, the action set, and the distribution of the initial state, respectively. The transition function $\mathcal{T}(s'|s,a): \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0, 1]$ defines the transition probability to the next-step state $s'$ if the agent takes action $a$ at current state $s$. The agent receives a reward $r$ after taking an action $a$ according to the reward function $\mathcal{R}(s, a)$, discounted by $\gamma \in (0, 1)$. The goal of reinforcement learning is to learn an optimal policy $\pi^{*}$ that maximizes the expected discounted rewards as \begin{equation} \pi^{*} = \mathop{\arg\max}_{\pi} E_{\zeta \sim \pi} \left[ \sum_{t} \gamma^{t} \mathcal{R}(s_t, a_t) \right] \end{equation} where $\zeta$ represents the agent's trajectory, namely $\{(s_0, a_0), (s_1, a_1), \cdots\}$. The agent chooses an action $a$ from a policy $\pi(a|s): \mathcal{S} \times \mathcal{A} \rightarrow [0,1]$ that specifies the probability of taking action $a\in \mathcal{A}$ under state $s$. In this paper, we concentrate on MDPs whose states are raw image-based observations as well as audio clips, whose actions are discrete, and where $\mathcal{T}$ is provided by the game engine. \noindent\textbf{Intrinsic Rewards for Exploration.} Designing intrinsic rewards for exploration has been widely used to resolve the sparse reward issues in the deep RL communities. One effective approach is to use the errors of a predictive model as exploration bonuses~\cite{pathak2017curiosity,haber2018learning,burda2018large}. The intrinsic rewards will encourage the agent to explore those states with less familiarity. In this paper, we aim to train a policy that can maximize the errors of auditory event predictions. \subsection{Representations of Auditory Events} Consider an agent that sees a visual observation $s_{v,t}$, takes an action $a_t$, and transitions to the next state with visual observation $s_{v,t+1}$ and sound effect $s_{s,t+1}$. The main objective of our intrinsic module is to predict the auditory events of the next state, given feature representations of the current visual observation $s_{v,t}$ and the agent's action $a_t$. We hypothesize that the agents, through this process, could learn the underlying causal structure of the physical world and use that to make predictions about what will happen next, as well as plan actions to achieve their goals. To better capture the statistics of the raw auditory data, we extract sound textures~\cite{mcdermott2011sound} $\Phi(s_{s,t})$ to represent each audio clip $s_{s,t}$. For the task of auditory event prediction, perhaps the most straightforward option is to directly regress the sound features $\Phi(s_{s,t+1})$ given the feature embeddings of the image observation $s_{v,t}$ and the agent's action $a_t$. Nevertheless, we find this approach not very effective.
The reasons are mainly twofold: 1) the sound textures do not explicitly capture high-level event information; 2) the distances between sound textures cannot accurately reflect how well the agents grasp the underlying causal structure of these auditory events. For example, from the position of an aircraft and the shooting action, we hope the agent can infer that a critical event like an explosion will happen, rather than the intricate rhythm of the bang. Therefore, we choose instead to define explicit auditory event categories and formulate this auditory event prediction problem as a classification task, similar to~\cite{owens2016ambient}. \input{events} \subsection{Auditory Events Prediction Based Intrinsic Reward} Our AEP framework consists of two stages: sound clustering and auditory event prediction. In the first stage, we collect a small set of diverse auditory data and use it to define the underlying auditory event classes. To achieve this goal, we first train an RL policy that rewards the agents based on sound novelty. Then, we run a K-means algorithm to group these data into several auditory events. In the second phase, we train a forward dynamics network that takes as input the embeddings of the visual observation and the action and predicts which auditory event will happen next. The prediction error is then utilized as an intrinsic reward to encourage the agent to explore those auditory events with more uncertainty, thus improving its ability to understand the consequences of its actions. We elaborate on the details of these two phases below. \noindent\textbf{Sound clustering.} The agents start to collect audio data by interacting with the environment. The goal of this phase is to gather diverse data that could be used to define auditory events. For this purpose, we train an RL policy by maximizing the occurrences of novel sound effects. In particular, we design an online clustering-based intrinsic motivation module to guide explorations. Assume we have a series of sound embeddings $\Phi(s_{s,t})$, $t \in \{1, 2, ..., T\}$, temporarily grouped into $K$ clusters. Given a newly arriving sound embedding $\Phi(s_{s, t+1})$, we compute its distance to the closest cluster center and use that as an exploration bonus. Formally, \begin{equation} r_{t} = \min_{i} ||\Phi(s_{s, t+1}) - c_{i}||_2 \end{equation} where $r_t$ denotes an intrinsic reward at time $t$, and $c_{i}, i \in \{1, 2, ..., K\}$ represents a cluster center. During this exploration, the number of clusters will grow, and each cluster's center will also be updated. Through this process, the agent is encouraged to collect novel auditory data that could enrich cluster diversity. After the number of clusters saturates, we then perform the K-means clustering algorithm~\cite{Lloyd1982LeastSQ} on the collected data to define the auditory event classes and use the center of each cluster for the subsequent auditory event prediction task. We visualize the corresponding visual states in two games (\textit{Frostbite} and \textit{Assault}) that belong to the same sound clusters, and it can be observed that each cluster always contains identical or similar auditory events (see Figure~\ref{fig:events}). \noindent\textbf{Auditory event predictions.} Since we have already explicitly defined the auditory event categories, the prediction problem can then be easily formulated as a classification task.
We label each sound texture with the index of the closest center, and then train a forward dynamics network $f(\Psi(s_{v,t}),a_{t}; \theta_p)$ that takes the embedding of the visual observation $\Psi(s_{v,t})$ and the action $a_{t}$ as input to predict which auditory event cluster the incurred sound $\Phi(s_{s,t+1})$ belongs to. The forward dynamics model is trained on the collected data using gradient descent to minimize the cross-entropy loss $L$ between the estimated class probabilities and the ground truth distribution $y_{t+1}$ as: \begin{equation} L = {\rm Loss}(f(\Psi(s_{v,t}),a_{t}; \theta_p), y_{t+1}) \end{equation} The prediction is expected to fail for novel associations of visual and audio data. We reward the agent at that state and encourage it to visit it more, since the agent is uncertain about this scenario. In practice, we do find that the agent can learn to avoid dying scenarios in the games, since these give similar sound effects that it has already encountered many times and can predict very well. By avoiding potential dangers and continually seeking novel events, agents can learn causal knowledge of the world for planning their actions to achieve the goal. A minimal sketch of both reward phases is given below.
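For concreteness, the two intrinsic rewards can be summarized as follows (our own simplification; the cluster list, the sound embedding, and the predictor outputs are placeholders, and the online cluster-growing rule is elided):
\begin{verbatim}
import numpy as np

def novelty_reward(phi, centers):
    # Phase 1: distance from the sound embedding phi to the closest
    # cluster center; larger means more novel audio.
    return min(np.linalg.norm(phi - c) for c in centers)

def event_label(phi, centers):
    # Label a sound texture with the index of its closest center.
    return int(np.argmin([np.linalg.norm(phi - c) for c in centers]))

def prediction_reward(pred_probs, phi, centers):
    # Phase 2: cross-entropy between the predicted event distribution
    # and the one-hot label; a high error yields a high bonus.
    y = event_label(phi, centers)
    return -np.log(pred_probs[y] + 1e-8)
\end{verbatim}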
For experiments on Atari games, we use gray-scale image observations of size 84$\times$84 and 60ms audio clips. We set the frame skip $S=4$ for all the experiments. We use a 4-layer CNN as the encoder of the policy network. As for the auditory prediction network, we choose a 3-layer CNN to encode the image observation and use a 2-layer MLP to predict the auditory events. \input{result_intrinsic.tex} \subsection{Explorations without Extrinsic Rewards} We first aim to compare how agents use different intrinsic rewards to explore the environment without extrinsic rewards. To quantify how well an exploration strategy performs, we use the extrinsic rewards it can achieve as an evaluation metric. It is important to note that the extrinsic reward is only used for evaluation, not for training. We consider four state-of-the-art intrinsic motivation modules for comparison: the Intrinsic Curiosity Module (ICM)~\cite{pathak2017curiosity}, Random Feature Networks (RFN)~\cite{burda2018large}, Random Network Distillation (RND)~\cite{burda2018large}, and Model Disagreement (DIS)~\cite{pathak2019self}. We run all the experiments for 10 million steps with 8 parallel environments. Figure~\ref{fig:intrinsic} summarizes the evaluation curves of mean extrinsic reward in 20 Atari games. For each method, we experiment with three different random seeds and report the standard deviation in the shaded region. As the figure shows, our module achieves significantly better results than previous vision-only intrinsic motivation modules in 15 out of 20 games and is comparable in the other 5 games. Another interesting observation is that the earned score keeps going up, even when only using intrinsic rewards. We observe that the agent can learn to avoid dangerous situations associated with death sounds, which frequently happen at the beginning of explorations. There are also some failure cases in the above Atari games. For example, our method falls short on the games with background music or noises that do not reflect any auditory events (\textit{e.g.} Freeway and Time Pilot). Advanced audio processing algorithms might help, and we leave this to future work. \textbf{More model analysis and demo videos could be found in supplementary materials.} \vspace{-3mm} \subsection{Combining Extrinsic and Intrinsic Rewards for Hard Explorations} The intrinsic rewards can serve as incentives that allow the agent to distinguish novel and fruitful states, but the lack of extrinsic rewards impedes the awareness of auditory events where agents can earn more rewards and need to visit again. In this section, we investigate whether the audio-driven intrinsic rewards can be further utilized to improve policy learning in hard exploration scenarios, where extrinsic rewards exist but are very sparse. We use five hard exploration environments in Atari games, including Venture, Solaris, Private Eye, Pitfall!, and Gravitar, for experiments. Following the strategy proposed by RND~\cite{burda2018large}, we use two separate value heads for the intrinsic and extrinsic rewards and then combine their returns. We also normalize the intrinsic rewards to compensate for the variances among different environments, as in the sketch below.
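A rough sketch of this combination (our own simplification; the mixing coefficients and the running normalizer are illustrative, not the exact values used in our experiments):
\begin{verbatim}
import numpy as np

class RewardCombiner:
    # Combine extrinsic rewards with intrinsic rewards that are
    # normalized by a running estimate of their standard deviation.
    def __init__(self, coef_ext=2.0, coef_int=1.0):
        self.coef_ext, self.coef_int = coef_ext, coef_int
        self.history = []

    def combine(self, r_ext, r_int):
        self.history.append(r_int)
        std = np.std(self.history) + 1e-8  # running normalizer
        return self.coef_ext * r_ext + self.coef_int * r_int / std
\end{verbatim}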
We compare our model against plain PPO using purely extrinsic rewards. All experiments are run for 4 million steps with 32 parallel environments. We use the max episodic returns to measure the ability of exploration. The comparison results are shown in Figure~\ref{fig:extrinsic}. We find that our audio-driven exploration can significantly improve the policy learning of hard exploration games in Atari. For example, when combining the intrinsic reward, the agent can earn four times the rewards on the game Venture and almost 30 times the rewards on Private Eye. \input{hard_exploration.tex} \subsection{Understanding Exploration Behaviors in the 3D Physical World} In this section, we would like to see the agent's behaviors in a near photo-realistic 3D physical world. A curious rolling robot is required to use interactions to build causal models of the physical world. \input{physics} \noindent\textbf{Setup.} For this experiment, we take an 84$\times$84 image observation and a 50ms audio clip as input. We use a three-layer convolutional network to encode the image and extract sound textures from the audio clip. As in the previous experiments, we train the policy using the PPO algorithm. The action space consists of moving in eight directions and stopping. The action is repeated 4 times on each frame. We run all the experiments for 200K steps with 8 parallel environments. \noindent\textbf{Result Analysis.} To understand and quantify the agent's behaviors in this environment, we show the number of collision events and the intrinsic rewards in Figure~\ref{fig:physics}. We noticed two major issues with the previous vision-based curiosity models (\textit{e.g.} RND and ICM). First, the prediction errors in the latent feature space cannot accurately reflect the subtle object state changes in the photo-realistic 3D world when a physical event happens. Second, the intrinsic reward can diminish quickly during training, since the learned predictive model usually converges to a stable state representation of the environment. Instead, our auditory event prediction driven exploration leads agents to interact more with objects in the physical world, which is critical to learning the dynamics of the environments. \subsection{Ablated Study} In this section, we perform in-depth ablation studies to evaluate each component of our model using 3 Atari games with distinct sound effects: Amidar, Battle Zone, and Carnival. All experiments are run for 2 million steps with 8 parallel environments. \noindent\textbf{Predict auditory events or sound features?} One main contribution of our paper is to use auditory event prediction as an intrinsic reward. To further understand this module's ability, we conduct an ablated study by replacing it with a sound feature prediction module. In particular, we train a neural network that takes the embedding of the visual state and the action as input and predicts the sound textures. The comparison curves are plotted in Figure~\ref{fig:event}. We observe that the auditory event prediction module indeed earned more rewards. This result demonstrates the advantage of using auditory event classes over latent sound feature embeddings for RL explorations. We speculate that auditory events provide more structured knowledge of the world, thus leading to better policy learning. \noindent\textbf{Sound clustering or auditory event prediction?} We adopt a two-stage exploration strategy. A natural question is whether this is necessary. We show the curve of using sound clustering only as an intrinsic reward in Figure~\ref{fig:event}. We notice that the returned extrinsic reward is similar to sound feature prediction, but worse than auditory event prediction.
\noindent\textbf{Active exploration or random exploration?} We propose an online clustering-based intrinsic module for active audio data collection. To verify its efficacy, we replace this module with random exploration. For a fair comparison, we allow both models to use 10K interaction samples to define the event classes with the same K-means clustering algorithm. The comparison results are shown in Figure~\ref{fig:active}. We find that the proposed active exploration indeed achieves better results. We also compute the cluster distances of both models and find that the sound clusters discovered by active exploration are much more diverse, thus facilitating in-depth explorations by the agents. \input{ablated.tex} \section{Conclusions} In this work, we introduce an intrinsic reward function based on sound prediction for RL exploration. Our model employs the errors of auditory event prediction as exploration bonuses, which allows the RL agent to explore novel physical interactions with objects. We demonstrate our proposed methodology and compare it against a number of baselines on Atari games. Based on the experimental results above, we conclude that sound conveys rich information and is powerful for agents building a causal model of the physical world for efficient RL explorations. We hope our work can inspire more research on using multi-modality cues for planning and control. \section*{Broader Impact} Our work is on the basic science of multimodal learning and exploration in RL. Since auditory signals are prevalent in real-world scenarios, we believe that combining them with visual signals could help guide exploration in many robotic applications~\cite{Gandhi20,gan2019look,chen2019audio}. For example, the honk of a car may be a useful signal that a self-driving agent has entered an unexpected situation. This example also raises a potential negative use of the ideas in this paper: if a self-driving car explores by seeking out honks, it would likely put humans in danger. Future work should therefore consider how to combine the ideas in this paper with the notion of safe exploration. Studying the role of auditory and visual signals could also be especially relevant for sight- and hearing-impaired populations. For example, if we better understand the role of audition in exploration, perhaps we can develop applications that better serve deaf users, who lack this signal. A limitation of our work is that it only experiments on synthetic environments, which may not reflect realistic scenarios. For example, Atari games have sound effects that often correlate with game achievements, whereas the correlation between sound and reward in nature is likely much more complex. The findings in our paper can therefore be considered to be biased by the design of the synthetic environments. Future work will be necessary to validate our methods on real-world applications. \medskip \small \bibliographystyle{plain} \section{Result Analysis} \label{sec:result_analysis} In this section, we provide an in-depth understanding of the circumstances under which our algorithm works well. The sound effects in Atari games fall into three different categories: 1) event-driven sounds, which are emitted when agents achieve a specific condition (\textit{e.g.}, picking up a coin, the explosion of an aircraft, etc.); 2) action-driven sounds, which are emitted when agents perform a specific action (\textit{e.g.}, shooting, jumping, etc.); and 3) background noise/music.
According to the dominant sound effects in each game, we summarize the 20 Atari games in Table~\ref{table:games}. \begin{table}[h] \centering \caption{Category results of 20 Atari games according to the dominant type of sound effects. We label the games in which our method performs the best in \textbf{bold font}.} \vspace{7mm} \label{table:games} \begin{tabular}{c|c} \hline Dominant sound effects & Atari games \\ \hline Event-driven sounds & \begin{tabular}[c]{@{}c@{}}\textbf{Amidar}, \textbf{Carnival}, \textbf{NameThisGame}, \textbf{Frostbite}, \\ \textbf{FishingDerby}, \textbf{MsPacman} \end{tabular} \\ \hline Action-driven sounds & \begin{tabular}[c]{@{}c@{}}\textbf{AirRaid}, \textbf{Assault}, \textbf{Jamesbond}, \textbf{ChopperCommand}, \textbf{StarGunner}, \\ \textbf{Tutankham}, \textbf{WizardOfWor}, \textbf{Gopher}, DemonAttack \end{tabular} \\ \hline Background sounds & Asteroids, Freeway, TimePilot, BattleZone, CrazyClimber \\ \hline \end{tabular} \end{table} Based on the categories defined in Table~\ref{table:games} and the performance shown in Figure 3 in the main paper, we can draw three conclusions. First, both event-driven and action-driven sounds boost the performance of our algorithm. Since sounds are readily observable effects of actions and events, understanding these causal effects is essential for learning a better exploration policy. Second, our algorithm performs better on games dominated by event-driven sounds compared with those dominated by action-driven sounds. We believe that event-driven sounds contain higher-level information, such as the explosion of an aircraft or collecting coins, which can be more beneficial for the agent in understanding the physical world. Third, our algorithm falls short in comparison with baselines when the sound effects mainly consist of meaningless background noise or background music (\textit{i.e.} CrazyClimber, MsPacman, Gopher, Tutankham, WizardOfWor, and BattleZone). These sounds have little relevance to visual cues and cannot provide useful information or rewards to agents. \section{Training Details} \label{sec:training} Table~\ref{table:hyp} shows the hyper-parameters used in our algorithm. \begin{table}[h] \centering \caption{Hyper-parameters used in our algorithms.} \vspace{7mm} \label{table:hyp} \begin{tabular}{c|c} \toprule Hyperparameter & Value \\ \midrule Rollout length & 128 \\ Number of minibatches & 4 \\ Learning rate & 2.5e-4 \\ Clip parameter & 0.1 \\ Entropy coefficient & 0.01 \\ $\lambda$ & 0.95 \\ $\gamma$ & 0.99 \\ \bottomrule \end{tabular} \end{table} \clearpage \section{Ablated Study} \label{sec:ablated} In this section, we carry out ablated experiments to demonstrate that the gains of our method are caused by the audio-event prediction, rather than the mere use of multi-modality information. For the four baselines (\textit{i.e.}, RND, RFN, ICM, and DIS), instead of predicting audio events, we let them incorporate sound information by concatenating both visual and sound features to predict the image embedding at the next time step. As shown in Figure~\ref{fig:sup_ab}, our algorithm significantly outperforms the other baselines in five Atari games. This indicates that it is non-trivial to exploit sound information for RL, and our algorithm benefits from the carefully designed audio-event prediction as an intrinsic reward. \input{sup_ab}
\section{Introduction}\label{sec:Introduction} There is a curious uplink-downlink duality between the Gaussian multiple-access channel with a multiple-antenna receiver and the Gaussian broadcast channel with a multiple-antenna transmitter --- under the same total power constraint, the uplink and downlink achievable rate regions with linear processing, or respectively the uplink and downlink capacity regions with optimal nonlinear processing, are identical \cite{duality1}. While the traditional multiple-access and broadcast channel models are suited for an isolated wireless cellular system with a single base-station (BS), in this paper, we are motivated by a generalization of this model to cooperative cellular networks in which BSs cooperate over \emph{rate-limited} digital links to a central processor (CP) in communicating with the users. In this model, the BSs in effect act as remote radio-heads with finite-capacity fronthaul links and function as relays between the CP and the users. The aim of this paper is to establish an uplink-downlink duality between achievable rate regions of such a Gaussian multiple-access \emph{relay} channel and a Gaussian broadcast \emph{relay} channel. The centralized cooperative communication architecture, in which multiple relay-like BSs cooperatively serve the users under the coordination of a CP, is an appealing solution to the ever-increasing demand for mobile broadband in future wireless communication networks, because of its ability to mitigate intercell interference. Under the above architecture, the users and the relay-like BSs are connected by noisy wireless channels, while the relay-like BSs and the CP are connected by noiseless fronthaul links of finite rate limit, as shown in Fig.~\ref{fig1}. In the uplink, the users transmit their signals and the relay-like BSs forward their received signals to the CP for joint information decoding, while in the downlink, the CP jointly encodes the user messages and sends them to the relay-like BSs to transmit to the users. Because the CP can jointly decode and encode the user messages, this cooperative architecture can effectively utilize cross-cell links to enhance message transmission, instead of treating signals from neighboring cells as interference, thus enabling a significant improvement in the overall network throughput. In the literature, there are several terminologies to describe the above centralized cellular architecture, including coordinated multipoint (CoMP) \cite{Irmer11}, distributed antenna system (DAS) \cite{Kerpez96}, cloud radio access network (C-RAN) \cite{Simeone16}, cell-free massive multiple-input multiple-output (MIMO) \cite{Ngo17}, etc. All of these systems can be modeled by the multiple-access relay channel in the uplink and the broadcast relay channel in the downlink as discussed above, where the hop between the users and the relays is wireless, while the hop between the relays and the CP is digital. \begin{figure}[t] \centering \includegraphics[width=11cm]{CranModel.pdf} \caption{Cooperative cellular network in which the relay-like BSs are connected to the CP via rate-limited fronthaul links and the user messages are jointly encoded/decoded at the CP.} \label{fig1} \end{figure} When the capacities of all the fronthaul links are infinite, the above model reduces to the traditional multiple-access and broadcast channels. In this case, the relays can simply be treated as remote antennas of a single virtual BS over the entire network. 
From the existing literature, we know that there exists a duality relationship between the multiple-access channel and the broadcast channel \cite{duality6,duality7,duality8,duality11,duality9,duality3,duality1,duality4}, which states that given the same sum-power constraint, any rate-tuple achievable in the uplink is also achievable in the downlink, and vice versa. This duality holds under both linear and nonlinear receiving/precoding at the BS. We now ask the following question: If the capacity of the fronthaul links is finite, does a similar uplink-downlink duality relationship hold? The answer to this question depends on not only the joint processing scheme at the CP, but also the relaying strategies employed at the BSs. In practical implementations, a variety of ways of jointly optimizing the utilization of fronthaul and the wireless links have been proposed, e.g., for CoMP \cite{Marsch11,Yu13}, DAS \cite{Gerhard07}, C-RAN \cite{Zhou14,Yu16,Zhou16,Liu15,Liang15,Simeone13}, and cell-free massive MIMO \cite{Bashar18,Marzetta17a,Marzetta17b}. This paper provides an affirmative answer to the above question in the sense that if the compression-based strategies are utilized over the fronthaul links, then indeed the achievable rate regions of the Gaussian multiple-access relay channel and the Gaussian broadcast relay channel are identical under the same sum-power constraint and individual fronthaul capacity constraints. This duality relationship holds with either independent compression \cite{Zhou14,Liu15,Liang15,dai2014sparse} or Wyner-Ziv/multivariate compression \cite{Zhou14,Yu16,Zhou16,Simeone13} at the relays, and with either linear or nonlinear processing at the CP. \subsection{Prior Works on Uplink-Downlink Duality} \subsubsection{Linear Encoding and Linear Decoding} The uplink-downlink duality between the multiple-access channel and the broadcast channel is first established in the case of linear encoding/decoding while treating interference as noise. Assuming single-antenna users and a multiple-antenna BS, the main result is that any signal-to-interference-plus-noise ratio (SINR) tuple that is achievable in the multiple-access channel can also be achieved in the broadcast channel under the same sum-power constraint, and vice versa. This duality has been proved in \cite{duality6,duality7,duality8,duality11,duality3} as follows. First, it is shown that if the receive beamforming vectors in the multiple-access channel and the transmit beamforming vectors in the broadcast channel are identical, then the feasibility conditions to ensure that an SINR-tuple can be achieved for both the multiple-access and the broadcast channels via power control are the same. The key observation is that a feasible power control solution exists if and only if the spectral radius of a so-called interference matrix is less than one \cite{powercontrol}, while given the same receive/transmit beamforming vectors and SINR targets, the spectral radii of the interference matrices of the multiple-access and the broadcast channels are the same. Moreover, it is shown that given the same transmit/receive beamforming vectors, the minimum total transmit power to achieve a set of feasible SINR targets in the uplink is the same as that to achieve the same set of SINR targets in the downlink. As a result, the achievable SINR regions of the multiple-access and broadcast channels are identical under the same power constraint. 
An alternative approach to proving the above duality result for the case of linear encoding and linear decoding is based on the Lagrangian duality technique \cite{duality9}. Specifically, given the receive/transmit beamforming vectors, the power control problems of minimizing the total transmit power subject to the users' individual SINR constraints in the multiple-access channel and the broadcast channel are both convex. Moreover, if the receive beamforming vectors and users' SINR targets in the multiple-access channel are the same as the transmit beamforming vectors and users' SINR targets in the broadcast channel, then the Lagrangian dual of the uplink sum-power minimization problem can be shown to be equivalent to the downlink sum-power minimization problem, and vice versa. This shows that the achievable SINR regions of the multiple-access channel and the broadcast channel are identical under the same sum-power constraint. \subsubsection{Nonlinear Encoding and Nonlinear Decoding} The uplink-downlink duality between the multiple-access channel and the broadcast channel is also established in the case of nonlinear encoding/decoding. Assuming again the case of single-antenna users, the main result is that the sum capacity (and also the capacity region) of the multiple-access channel, which is achieved by successive interference cancellation \cite{Cover}, is the same as the sum capacity (or the capacity region) of the broadcast channel, which is achieved by dirty-paper coding \cite{DPC}. Similar to the linear encoding/decoding case \cite{duality6,duality7,duality8,duality11}, it is shown in \cite{duality3} that if the decoding order for successive interference cancellation is the reverse of the encoding order for dirty-paper coding, and the uplink receive beamforming vectors are the same as the downlink transmit beamforming vectors, then the feasibility conditions to ensure that an SINR-tuple can be achieved in both the multiple-access channel and the broadcast channel via power control are the same. Moreover, it is shown in \cite{duality3} that if each user achieves the same SINR in the multiple-access channel and the broadcast channel, then the total transmit power in the uplink and downlink is the same. As a result, under the same sum-power constraint, the capacity region of the multiple-access channel is the same as the capacity region of the broadcast channel achieved by dirty-paper coding. This duality result can be extended \cite{duality1} even to the case where the users are equipped with multiple antennas, using a clever choice of the transmit covariance matrix for the broadcast channel to achieve each achievable rate-tuple in the multiple-access channel and vice versa. For the sum-capacity problem, there is also an alternative approach of establishing uplink-downlink duality based on the Lagrangian duality of a minimax problem. Along this line, \cite{duality4} shows that the sum capacity of the broadcast channel can be characterized by a minimax optimization problem, where the maximization is over the transmit covariance matrix, while the minimization is over the receive covariance matrix. Since this minimax problem is a convex problem, it is equivalent to its dual problem, which is another minimax problem in the multiple-access channel. Moreover, the optimal value of this new minimax problem is shown to be exactly the sum capacity of the multiple-access channel. As a result, the sum capacities of the multiple-access channel and the broadcast channel are the same. 
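As a concrete illustration of the linear-processing duality reviewed above, the following sketch (an assumed random channel instance with matched-filter beamformers reused in both directions; all names and values are illustrative) checks that the SINR targets are feasible exactly when the spectral radius of the weighted interference matrix is below one, and that, when feasible, the minimum total transmit powers of the uplink and the downlink coincide even though the per-user powers differ.
\begin{verbatim}
# Numerical check of the classical uplink-downlink duality (assumed
# random channels; matched-filter beamformers reused in both links).
import numpy as np

rng = np.random.default_rng(1)
M, K, sigma2 = 4, 3, 1.0
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
W = H / np.linalg.norm(H, axis=0, keepdims=True)  # unit-norm beamformers
gamma = np.full(K, 1.5)                           # common SINR targets

G = np.abs(W.conj().T @ H) ** 2                   # G[k, j] = |w_k^H h_j|^2
D = np.diag(gamma / np.diag(G))
F = G - np.diag(np.diag(G))                       # cross-interference only

rho = np.max(np.abs(np.linalg.eigvals(D @ F)))    # feasibility test: rho < 1
print("spectral radius:", rho)

if rho < 1:
    # Minimum powers meeting the SINR targets with equality.
    p_ul = np.linalg.solve(np.eye(K) - D @ F, sigma2 * D @ np.ones(K))
    p_dl = np.linalg.solve(np.eye(K) - D @ F.T, sigma2 * D @ np.ones(K))
    # Per-user powers differ, but the totals agree.
    print("uplink / downlink sum power:", p_ul.sum(), p_dl.sum())
\end{verbatim}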
\subsubsection{Other Duality Results} Apart from the above results, the duality relationship is also established for the multiple-access channel and the broadcast channel under different power constraints and encoding/decoding strategies. For example, \cite{duality10} shows that the power minimization problem in the broadcast channel under the per-antenna power constraints is equivalent to the minimax problem in the multiple-access channel with an uncertain noise. Moreover, \cite{duality12} shows that the rate region of the broadcast channel achieved by dirty-paper coding and under multiple power constraints is the same as that of the multiple-access channel achieved by successive interference cancellation and under a weighted sum-power constraint. Further, \cite{duality14} shows that any sum rate achievable via integer-forcing in the MIMO multiple-access channel can be achieved via integer-forcing in the MIMO broadcast channel with the same sum-power and vice versa. In \cite{Tsung-Hui18}, the duality between the multiple-access channel and the broadcast channel is extended to the scenario with a full-duplex BS. Moreover, the duality relationship is also investigated for the multiple-access channel and the broadcast channel with amplify-and-forward relays. It is shown in \cite{duality16,duality13,duality15} that for both the two-hop and multihop relay scenarios, the user rate regions are the same in the uplink and downlink under the same sum-power constraint. Finally, duality is used in \cite{Yafeng13} to characterize the polynomial-time solvability of a power control problem in the multiple-input single-output (MISO) and single-input multiple-output (SIMO) interference channels. \subsection{Overview of Main Results} This paper establishes a duality relationship between the multiple-access relay channel and the broadcast relay channel when the compression-based relay strategies are used over the rate-limited fronthaul links between the CP and the relays. Both the users and the relays are assumed to be equipped with a single antenna. In the uplink, each relay compresses its received signals from the users, and sends the compressed signal to the CP via the fronthaul link. The CP first decompresses the signals from the relays, then jointly decodes the user messages based on the decompressed signals. In the downlink, the CP jointly encodes the user messages, compresses the transmit signals for the relays, and sends the compressed signals to the relays via the fronthaul links. Then, each relay decompresses its received signal and transmits it to the users. Compared to the classic uplink-downlink duality results in the literature, the novel contributions of our work are as follows. We show that under the same sum-power constraint and individual fronthaul capacity constraints, the achievable rate regions of the multiple-access channel and the broadcast channel are identical using compression-based relays, under the following four cases: \begin{itemize} \item[I:] In the uplink, the relays compress their received signals independently and the CP applies the linear decoding strategy. In the downlink, the CP applies the linear encoding strategy and compresses the signals for the relays independently. \item[II:] In the uplink, the relays compress their received signals independently and the CP applies the successive interference cancellation strategy. In the downlink, the CP applies the dirty-paper coding strategy and compresses the signals for the relays independently. 
\item[III:] In the uplink, the relays apply the Wyner-Ziv compression strategy to compress their received signals and the CP applies the linear decoding strategy. In the downlink, the CP applies the linear encoding strategy and the multivariate compression strategy to compress the signals for the relays. \item[IV:] In the uplink, the relays apply the Wyner-Ziv compression strategy to compress their received signals and the CP applies the successive interference cancellation strategy. In the downlink, the CP applies the dirty-paper coding strategy and the multivariate compression strategy to compress the signals for the relays. \end{itemize} Note that the conventional uplink-downlink duality relationship between the multiple-access channel and the broadcast channel \cite{duality6,duality7,duality8,duality11,duality9,duality3,duality1,duality4} is a special case of the duality relationship established in this work if we assume the fronthaul links all have infinite capacities. For Cases I and II with independent compression for the relays, we provide an alternative proof for the duality relationship as compared to our previous work \cite{Liang16}. Specifically, the duality relationship is validated based on the Lagrangian duality \cite{Boyd04}, which provides a unified approach for all the cases. In particular, we show that given the transmit beamforming vectors in the downlink, the sum-power minimization problem subject to the individual user rate constraints and individual fronthaul capacity constraints is a linear program (LP) with strong duality. Then, it is shown that given the same receive and transmit beamforming vectors (and the reversed decoding order and encoding order for Case II) and under the same individual user rate constraints as well as individual fronthaul capacity constraints, the uplink sum-power minimization problem is equivalent to the Lagrangian dual of the downlink sum-power minimization problem. This approach is similar to that used in \cite{duality9} to verify the duality of the conventional multiple-access channel and broadcast channel without relays. However, interesting new insights can be obtained when relays are deployed between the CP and the users. Specifically, the dual variables associated with the user rate constraints and the fronthaul capacity constraints in the downlink sum-power minimization problem play the role of uplink user transmit powers and uplink relay quantization noise levels, respectively, in the dual problem. For Cases III and IV, we establish a novel duality relationship between Wyner-Ziv compression and multivariate compression \cite{Gamal}. Intuitively, Wyner-Ziv compression over the noiseless fronthaul resembles successive interference cancellation in the noisy wireless channel in the sense that the decompressed signals can provide side information for decompressing the remaining signals. On the other hand, multivariate compression in the noiseless fronthaul resembles dirty-paper coding in the noisy wireless channel in the sense that it can control the interference caused by compression seen by the users. Despite the well-known duality between successive interference cancellation and dirty-paper coding, the relationship between these two compression strategies has not been established previously. 
We show in this work that if the decompression order in Wyner-Ziv compression is the reverse of the compression order in the multivariate compression, the uplink-downlink duality remains true between the multiple-access relay channel and the broadcast relay channel. To prove the above result, we use the Lagrangian duality approach similar to that taken in \cite{duality9,duality4}, rather than the approach of checking the feasibility conditions adopted in \cite{duality6,duality7,duality8,duality11,duality3}. This is because under the Wyner-Ziv and multivariate compression strategies, the fronthaul rates are complicated functions of the transmit powers and quantization noises. In this case, the fronthaul capacity constraints are no longer linear in the variables, and the feasibility condition proposed in \cite{powercontrol} for linear constraints does not work. Despite the complicated fronthaul rate expressions, we reveal that under the multivariate compression strategy in the broadcast relay channel, if the transmit beamforming vectors are fixed, the sum-power minimization problem subject to individual user rate constraints and individual fronthaul capacity constraints can be transformed into a convex optimization problem over the transmit powers as well as the compression noise covariance matrix. Then, we characterize the dual problem of this convex optimization. It turns out that if we interpret the dual variables associated with the user rate constraints as the uplink transmit powers and some diagonal elements of the dual matrices associated with the fronthaul capacity constraints as the uplink compression noise levels, then the Lagrangian dual of the broadcast relay channel problem is equivalent to the sum-power minimization problem in the multiple-access relay channel subject to individual user rate constraints as well as a single matrix inequality constraint. In contrast to Cases I and II with independent compression where the primal downlink problem is an LP and its dual problem directly has individual fronthaul capacity constraints, here the problem is a semidefinite program (SDP), and we need to take a further step to transform the single matrix inequality constraint into the individual fronthaul capacity constraints under the Wyner-Ziv compression strategy via a series of matrix operations. At the end, we show that at the optimal solution, all the user rate constraints and fronthaul capacity constraints in the dual problem are satisfied with equality, and moreover there exists a unique solution to this set of nonlinear equations. As a result, the dual of the broadcast relay channel problem is equivalent to the multiple-access relay channel problem. \subsection{Organization} The rest of this paper is organized as follows. Section \ref{sec:System Model} describes the system model and characterizes the achievable rate regions of the multiple-access relay channel and the broadcast relay channel under various encoding/decoding and compression/decompression strategies. In Section \ref{sec:Duality between Multiple-Access Channel and Broadcast Relay Channel}, the duality relationship between the multiple-access channel and the broadcast relay channel with compression-based relays is established. Sections \ref{sec 1} to \ref{sec 4} prove the duality for Cases I-IV, respectively, based on the Lagrangian duality theory. 
We summarize the main duality relations in Section \ref{sec:Summary} and provide an application of duality in optimizing the broadcast relay channel via its dual multiple-access relay channel in Section \ref{sec:Application}. We conclude this paper with Section \ref{sec:Conclusion}. {\it Notation}: Scalars are denoted by lower-case letters; vectors are denoted by bold-face lower-case letters; matrices are denoted by bold-face upper-case letters. We use $\mv{I}$ to denote an identity matrix with an appropriate dimension, and $\mv{0}_{x\times y}$ to denote an all-zero matrix of dimension $x\times y$. For a matrix $\mv{A}$, $\mv{A}^{(x,y)}$ denotes the entry in the $x$-th row and the $y$-th column of $\mv{A}$, and $\mv{A}^{(x_1:y_1,x_2:y_2)}$ denotes the submatrix of $\mv{A}$ formed by rows $x_1$ to $y_1$ and columns $x_2$ to $y_2$, i.e., \begin{align*} \mv{A}^{(x_1:y_1,x_2:y_2)}=\left[\begin{array}{ccc}\mv{A}^{(x_1,x_2)} & \cdots & \mv{A}^{(x_1,y_2)} \\ \vdots & \ddots & \vdots \\ \mv{A}^{(y_1,x_2)} & \cdots & \mv{A}^{(y_1,y_2)}\end{array}\right]. \end{align*}For a square full-rank matrix $\mv{S}$, $\mv{S}^{-1}$ denotes its inverse, and $\mv{S}\succeq \mv{0}$ or $\mv{S} \succ \mv{0}$ indicates that $\mv{S}$ is a positive semidefinite matrix or a positive definite matrix, respectively. For a matrix $\mv{M}$ of an arbitrary size, $\mv{M}^{H}$, $\mv{M}^{T}$, and $\mv{M}^\ast$ denote the conjugate transpose, transpose and conjugate of $\mv{M}$, respectively, and ${\rm rank}(\mv{M})$ denotes the rank of $\mv{M}$. We use ${\rm diag}(x_1,\ldots,x_K)$ to denote a diagonal matrix with the diagonal elements given by $x_1,\ldots,x_K$. For two real vectors $\mv{x}=[x_1,\ldots,x_N]^T$ and $\mv{y}=[y_1,\ldots,y_N]^T$, $\mv{x}\geq \mv{y}$ means that $x_n\geq y_n$, $\forall n=1,\ldots,N$. \section{System Model and Achievable Rate Regions}\label{sec:System Model} \begin{figure} \begin{center} \scalebox{0.6}{\includegraphics*{relay_channel.pdf}} \end{center} \caption{System model of the multiple-access relay channel and the broadcast relay channel.}\label{fig3} \end{figure} The Gaussian multiple-access relay channel and the Gaussian broadcast relay channel considered in this paper consist of one CP, $M$ single-antenna relays, denoted by the set $\mathcal{M}=\{1,\ldots,M\}$, and $K$ single-antenna users, denoted by the set $\mathcal{K}=\{1,\ldots,K\}$, as shown in Fig.~\ref{fig3}, where each relay $m\in \mathcal{M}$ is connected to the users over the wireless channels and to the CP via a noiseless digital fronthaul link of capacity $C_m$ bits per symbol. For the multiple-access relay channel, the overall channel from user $k\in \mathcal{K}$ to all the relays is denoted by \begin{align} \mv{h}_k=[h_{1,k},\ldots,h_{M,k}]^T, ~~~ \forall k\in \mathcal{K}, \end{align}where $h_{m,k}$ denotes the channel from user $k\in \mathcal{K}$ to relay $m\in \mathcal{M}$; in the dual broadcast relay channel, the overall channel from all the relays to user $k\in \mathcal{K}$ is the Hermitian transpose of the corresponding uplink channel, i.e., $\mv{h}_k^H$. Further, we assume a sum-power constraint $P$ for all the users in the multiple-access relay channel, and the same sum-power constraint $P$ for all the relays in the broadcast relay channel. In the following, we review the compression-based relaying strategies \cite{Simeone16, Yu_CRAN_book} for the multiple-access relay channel and the broadcast relay channel in detail.
These compression-based strategies are simplified versions of the more general relaying strategies for the multihop relay networks studied in \cite{Kim_NNC,Kim_DDF}. In particular, the compression-based strategies considered in this paper take the approach of separating the encoding/decoding of the relay codewords from the encoding/decoding of the user messages, in contrast to the joint encoding/decoding approach in \cite{Kim_NNC,Kim_DDF}. In several specific cases \cite{Yu16,Yu19}, these simplified strategies can be shown to already achieve the capacity regions of the specific Gaussian multiple-access and broadcast relay channels to within a constant gap. \subsection{Multiple-Access Channel with Compressing Relays}\label{sec:Multiple-Access Relay Channel} The Gaussian multiple-access relay channel model is as shown in Fig.~\ref{fig3}(a). The discrete-time baseband channel between the users and the relays can be modelled as \begin{align}\label{eqn:uplink signal} \hspace{-8pt} \left[\begin{array}{c} y_{1}^{\rm ul} \\ \vdots \\ y_M^{\rm ul} \end{array} \right]\hspace{-2pt} =\hspace{-2pt} \left[\begin{array}{ccc}h_{1,1} & \cdots & h_{1,K} \\ \vdots & \ddots & \vdots \\ h_{M,1} & \cdots & h_{M,K} \end{array}\right]\left[\begin{array}{c}x_1^{\rm ul} \\ \vdots \\ x_K^{\rm ul} \end{array}\right]\hspace{-2pt}+\hspace{-2pt}\left[\begin{array}{c} z_1^{\rm ul} \\ \vdots \\ z_M^{\rm ul} \end{array}\right],\end{align} where $x_k^{\rm ul}$ denotes the transmit signal of user $k$, $\forall k\in \mathcal{K}$, and $z_m^{\rm ul}\sim\mathcal{CN}(0,\sigma^2)$ denotes the additive white Gaussian noise (AWGN) at relay $m$, $\forall m\in \mathcal{M}$. Transmission and relaying strategies for the multiple-access relay channel have been studied extensively in the literature \cite{Sanderovich08, Yu13, Zhou14, Zhou16, Yu16}. In this paper, we assume the following strategy in which each user transmits using a Gaussian codebook, i.e., \begin{align} x_k^{\rm ul}=\sqrt{p_k^{\rm ul}}s_k^{\rm ul}, ~~~ \forall k\in \mathcal{K}, \end{align}where $s_k^{\rm ul}\sim \mathcal{CN}(0,1)$ denotes the message of user $k$, and $p_k^{\rm ul}$ denotes the transmit power of user $k$. As a result, the total transmit power of all the users is expressed as \begin{align}\label{eqn:uplink sum-power} P^{\rm ul}(\{p_k^{\rm ul}\})=\sum\limits_{k=1}^K\mathbb{E}[|x_k^{\rm ul}|^2]=\sum\limits_{k=1}^Kp_k^{\rm ul}. \end{align}After receiving the wireless signals from the users, we assume that relay $m$ compresses $y_m^{\rm ul}$ and sends the compressed signal to the CP, $\forall m$. We assume a Gaussian compression codebook and model the quantization noise introduced in the compression process as an independent Gaussian random variable, i.e., \begin{align}\label{eqn:uplink compression} \tilde{y}_m^{\rm ul}=y_m^{\rm ul}+e_m^{\rm ul}=\sum\limits_{k=1}^Kh_{m,k}x_k^{\rm ul}+z_m^{\rm ul}+e_m^{\rm ul}, ~~~ \forall m\in \mathcal{M}, \end{align}where $e_m^{\rm ul}\sim \mathcal{CN}(0,q_m^{\rm ul})$ denotes the compression noise at relay $m$, with $q_m^{\rm ul}\geq 0$ denoting its variance. While the Gaussian compression codebook is not necessarily optimal \cite{Sanderovich08}, it gives tractable achievable rate regions. Note that the $e_m^{{\rm ul}}$'s are independent of the $y_m^{\rm ul}$'s and are independent across $m$.
In other words, if we define $\mv{e}^{{\rm ul}}=[e_1^{{\rm ul}},\ldots,e_M^{{\rm ul}}]^T$, then it follows that \begin{align} \mathbb{E}\left[\mv{e}^{{\rm ul}}\left(\mv{e}^{{\rm ul}}\right)^H\right]={\rm diag}(q_1^{{\rm ul}},\ldots,q_M^{{\rm ul}})\succeq \mv{0}. \end{align} After receiving the compressed signals, the CP first decodes the compression codewords and then decodes each user's message based on the beamformed signals, i.e., \begin{align} \tilde{s}_k^{\rm ul}=\mv{w}_k^H\tilde{\mv{y}}^{\rm ul}, ~~~ \forall k\in \mathcal{K}, \end{align}where $\mv{w}_k=[w_{k,1},\ldots,w_{k,M}]^T$ with $\|\mv{w}_k\|^2=1$ denotes the receive beamforming vector for user $k$'s message, and $\tilde{\mv{y}}^{\rm ul}=[\tilde{y}_1^{\rm ul},\ldots,\tilde{y}_M^{\rm ul}]^T$ denotes the collective compressed signals from all the relays. \subsubsection{Compression Strategies} We consider two compression strategies at the relays in this work: the independent compression strategy and the Wyner-Ziv compression strategy. If independent compression is performed across the relays, the fronthaul rate for transmitting $\tilde{y}_m^{\rm ul}$ is expressed as \begin{align}\label{eqn:uplink fronthaul rate independent} C_m^{\rm ul,IN}(\{p_k^{\rm ul}\},q_m^{\rm ul})=I(y_m^{\rm ul};\tilde{y}_m^{\rm ul}) = \log_2\frac{\sum\limits_{k=1}^Kp_k^{\rm ul}|h_{m,k}|^2+q_m^{\rm ul}+\sigma^2}{q_m^{\rm ul}}, ~~~ \forall m\in \mathcal{M}. \end{align} Alternatively, the relays can also perform the Wyner-Ziv compression strategy in a successive fashion, accounting for the fact that the compressed signals are to be decoded jointly at the CP. Given a decompression order $\rho^{{\rm ul}}(1),\ldots,\rho^{{\rm ul}}(M)$ at the CP in which the signal from relay $\rho^{{\rm ul}}(1)\in\mathcal{M}$ is decompressed first, the signal from relay $\rho^{{\rm ul}}(2)\in \mathcal{M}$ is decompressed second (with the signal from relay $\rho^{{\rm ul}}(1)$ as side information), etc., the Wyner-Ziv compression rate of relay $\rho^{{\rm ul}}(m)$ can be expressed as \cite{Simeone16} \begin{align} & C_{\rho^{{\rm ul}}(m)}^{{\rm ul,WZ}}(\{p_k^{\rm ul}\},q_{\rho^{{\rm ul}}(1)}^{\rm ul},\ldots,q_{\rho^{{\rm ul}}(m)}^{\rm ul},\{\rho^{{\rm ul}}(m)\}) = I(\hat{y}_{\rho^{{\rm ul}}(m)}^{\rm ul};y_{\rho^{{\rm ul}}(m)}^{\rm ul}|\hat{y}_{\rho^{{\rm ul}}(1)}^{\rm ul},\ldots,\hat{y}_{\rho^{{\rm ul}}(m-1)}^{\rm ul}) \nonumber \\ &= \log_2\frac{|\mv{\Gamma}_{\{\rho^{{\rm ul}}(m)\}}^{(1:m,1:m)}|}{|\mv{\Gamma}_{\{\rho^{{\rm ul}}(m)\}}^{(1:m-1,1:m-1)}|q_{\rho^{{\rm ul}}(m)}^{\rm ul}} \\ &= \log_2 \frac{\mv{\Gamma}_{\{\rho^{{\rm ul}}(m)\}}^{(m,m)} - \mv{\Gamma}_{\{\rho^{{\rm ul}}(m)\}}^{(m,1:m-1)} (\mv{\Gamma}_{\{\rho^{{\rm ul}}(m)\}}^{(1:m-1,1:m-1)})^{-1} \mv{\Gamma}_{\{\rho^{{\rm ul}}(m)\}}^{(1:m-1,m)}}{q_{\rho^{{\rm ul}}(m)}^{\rm ul}}, ~~~ \forall m\in \mathcal{M}, \label{eqn:uplink fronthaul rate} \end{align}where \begin{align} & \mv{\Gamma}_{\{\rho^{{\rm ul}}(m)\}}=\mathbb{E}\left[\tilde{\mv{y}}_{\{\rho^{{\rm ul}}(m)\}}^{\rm ul}\left(\tilde{\mv{y}}_{\{\rho^{{\rm ul}}(m)\}}^{\rm ul}\right)^H\right]\nonumber \\ & ~~~~~~~~~ =\sum\limits_{k=1}^K p_k^{\rm ul}\bar{\mv{h}}_{k,\{\rho^{{\rm ul}}(m)\}}(\bar{\mv{h}}_{k,\{\rho^{{\rm ul}}(m)\}})^H+\sigma^2\mv{I}+{\rm diag}(q_{\rho^{{\rm ul}}(1)}^{{\rm ul}},\ldots,q_{\rho^{{\rm ul}}(M)}^{{\rm ul}}), \label{eqn:Gamma} \\ & \tilde{\mv{y}}_{\{\rho^{{\rm ul}}(m)\}}^{\rm ul}=[\tilde{y}_{\rho^{{\rm ul}}(1)}^{\rm ul},\ldots,\tilde{y}_{\rho^{{\rm ul}}(M)}^{\rm ul}]^T, \\ & \bar{\mv{h}}_{k,\{\rho^{{\rm ul}}(m)\}}=[h_{\rho^{{\rm ul}}(1),k},\ldots,h_{\rho^{{\rm ul}}(M),k}]^T, ~~~
\forall k\in \mathcal{K}. \end{align} In words, $\bar{\mv{h}}_{k,\{\rho^{{\rm ul}}(m)\}}$ denotes the collection of the channels from user $k$ to relays $\rho^{{\rm ul}}(1),\ldots,\rho^{{\rm ul}}(M)$, $\tilde{\mv{y}}_{\{\rho^{{\rm ul}}(m)\}}^{\rm ul}$ is the collection of the compressed received signals from relay $\rho^{{\rm ul}}(1)$ to relay $\rho^{{\rm ul}}(M)$, and $\mv{\Gamma}_{\{\rho^{{\rm ul}}(m)\}}$ is the covariance matrix of this re-ordered vector. \subsubsection{Decoding Strategies} We consider two decoding strategies at the CP: the linear decoding strategy by treating interference as noise and the nonlinear decoding strategy with successive interference cancellation. First, if the CP treats interference as noise, the achievable rate of user $k$ is expressed as \begin{align}\label{eqn:uplink rate} & R_k^{\rm ul,TIN}(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\})=I(s_k^{\rm ul};\tilde{s}_k^{\rm ul})\nonumber \\ & = \log_2\frac{\sum\limits_{i=1}^Kp_i^{\rm ul}|\mv{w}_k^H\mv{h}_i|^2+\sum\limits_{m=1}^Mq_m^{\rm ul}|w_{k,m}|^2+\sigma^2}{\sum\limits_{j\neq k}p_j^{\rm ul}|\mv{w}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^Mq_m^{\rm ul}|w_{k,m}|^2+\sigma^2}, ~~~ \forall k\in \mathcal{K}. \end{align} If the successive interference cancellation strategy is used, given a decoding order $\tau^{{\rm ul}}(1),\ldots,\tau^{{\rm ul}}(K)$ at the CP in which the message of user $\tau^{{\rm ul}}(1)$ is decoded first, the message of user $\tau^{{\rm ul}}(2)$ is decoded second, etc., the achievable rate of user $\tau^{{\rm ul}}(k)$ is expressed as \begin{align}\label{eqn:uplink rate successive interference cancellation} & R_{\tau^{{\rm ul}}(k)}^{\rm ul,SIC}(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\},\{\tau^{{\rm ul}}(k)\}) =I(s_{\tau^{{\rm ul}}(k)}^{\rm ul};\tilde{s}_{\tau^{{\rm ul}}(k)}^{\rm ul}|s_{\tau^{{\rm ul}}(1)}^{\rm ul},\ldots,s_{\tau^{{\rm ul}}(k-1)}^{\rm ul}) \nonumber \\ &= \log_2\frac{\sum\limits_{i\geq k}p_{\tau^{{\rm ul}}(i)}^{\rm ul}|\mv{w}_{\tau^{{\rm ul}}(k)}^H\mv{h}_{\tau^{{\rm ul}}(i)}|^2+\sum\limits_{m=1}^Mq_m^{\rm ul}|w_{\tau^{{\rm ul}}(k),m}|^2+\sigma^2}{\sum\limits_{j> k}p_{\tau^{{\rm ul}}(j)}^{\rm ul}|\mv{w}_{\tau^{{\rm ul}}(k)}^H\mv{h}_{\tau^{{\rm ul}}(j)}|^2+\sum\limits_{m=1}^Mq_m^{\rm ul}|w_{\tau^{{\rm ul}}(k),m}|^2+\sigma^2}, ~~~ \forall k\in \mathcal{K}. 
\end{align} \subsubsection{Achievable Rate Regions} Given the individual fronthaul capacity constraints $C_m$'s and sum-power constraint $P$, define \begin{align} &\mathcal{T}^{{\rm ul,IN}}(\{C_m\},P) \nonumber \\ &=\big\{(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\}): P^{\rm ul}(\{p_k^{\rm ul}\})\leq P, C_m^{\rm ul,IN}(\{p_k^{\rm ul}\},q_m^{\rm ul})\leq C_m, q_m^{{\rm ul}}\geq 0, \forall m\in \mathcal{M}, \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~~ p_k^{{\rm ul}}\geq 0, \|\mv{w}_k\|^2=1, \forall k\in \mathcal{K} \big\}, \\ &\mathcal{T}^{{\rm ul,WZ}}(\{C_m\},P,\{\rho^{{\rm ul}}(m)\}) \nonumber \\ &=\big\{(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\}): P^{\rm ul}(\{p_k^{\rm ul}\})\leq P, C_{\rho^{{\rm ul}}(m)}^{{\rm ul,WZ}}(\{p_k^{\rm ul}\},q_{\rho^{{\rm ul}}(1)}^{\rm ul},\ldots,q_{\rho^{{\rm ul}}(m)}^{\rm ul},\{\rho^{{\rm ul}}(m)\})\leq C_{\rho^{{\rm ul}}(m)}, \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~~ q_m^{{\rm ul}}\geq 0, \forall m\in \mathcal{M}, p_k^{{\rm ul}}\geq 0, \|\mv{w}_k\|^2=1, \forall k\in \mathcal{K} \big\}, \end{align} as the sets of feasible transmit powers, compression noise levels, and receive beamforming vectors for the cases of independent compression and Wyner-Ziv compression under the decompression order of $\rho^{{\rm ul}}(1),\ldots,\rho^{{\rm ul}}(M)$ at the CP, respectively. Then, for the considered multiple-access relay channel, the achievable rate regions for Case I: linear decoding at the CP and independent compression across the relays, Case II: successive interference cancellation at the CP and independent compression across the relays, Case III: linear decoding at the CP and Wyner-Ziv compression across the relays, and Case IV: successive interference cancellation at the CP and Wyner-Ziv compression across the relays, are respectively given by \begin{align} & \mathcal{R}_{{\rm I}}^{\rm ul}(\{C_m\},P) \triangleq \text{co } \bar{\mathcal{R}}_{{\rm I}}^{\rm ul}(\{C_m\},P), \label{eqn:uplink rate region 1} \\ & \mathcal{R}_{{\rm II}}^{\rm ul}(\{C_m\},P)\triangleq \text{co} \bigcup\limits_{\{\tau^{{\rm ul}}(k)\}}\bar{\mathcal{R}}_{{\rm II}}^{\rm ul}(\{C_m\},P,\{\tau^{{\rm ul}}(k)\}), \label{eqn:uplink rate region 2} \\ & \mathcal{R}_{{\rm III}}^{\rm ul}(\{C_m\},P)\triangleq \text{co} \bigcup\limits_{\{\rho^{{\rm ul}}(m)\}}\bar{\mathcal{R}}_{{\rm III}}^{\rm ul}(\{C_m\},P,\{\rho^{{\rm ul}}(m)\}), \label{eqn:uplink rate region} \\ & \mathcal{R}_{{\rm IV}}^{\rm ul}(\{C_m\},P)\triangleq \text{co} \bigcup\limits_{\left(\{\rho^{{\rm ul}}(m)\},\{\tau^{{\rm ul}}(k)\}\right)}\bar{\mathcal{R}}_{{\rm IV}}^{\rm ul}(\{C_m\},P,\{\rho^{{\rm ul}}(m)\},\{\tau^{{\rm ul}}(k)\}), \label{eqn:uplink rate region 4} \end{align} where ``co'' stands for the closure of the convex hull operation and in (\ref{eqn:uplink rate region 1}), (\ref{eqn:uplink rate region 2}), (\ref{eqn:uplink rate region}), and (\ref{eqn:uplink rate region 4}), \begin{align} & \bar{\mathcal{R}}_{{\rm I}}^{\rm ul}(\{C_m\},P) \nonumber \\ & \triangleq \bigcup\limits_{\left(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\}\right)\in \mathcal{T}^{\rm ul,IN}(\{C_m\},P)} \left\{(r_1^{\rm ul},\ldots,r_K^{\rm ul}): r_k^{\rm ul} \le R_k^{\rm ul,TIN}(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\}), \forall k\in \mathcal{K} \right\},\\ & \bar{\mathcal{R}}_{{\rm II}}^{\rm ul}(\{C_m\},P,\{\tau^{{\rm ul}}(k)\}) \nonumber \\ & \triangleq \bigcup\limits_{\left(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\}\right)\in \mathcal{T}^{{\rm ul,IN}}(\{C_m\},P)} \left\{(r_1^{\rm ul},\ldots,r_K^{\rm ul}): r_{\tau^{{\rm ul}}(k)}^{\rm ul} \le R_{\tau^{{\rm ul}}(k)}^{\rm
ul,SIC}(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\},\{\tau^{{\rm ul}}(k)\}), \forall k\in \mathcal{K} \right\}, \\ & \bar{\mathcal{R}}_{{\rm III}}^{\rm ul}(\{C_m\},P,\{\rho^{{\rm ul}}(m)\}) \nonumber \\ & \triangleq \bigcup\limits_{\left(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\}\right)\in \mathcal{T}^{{\rm ul,WZ}}(\{C_m\},P,\{\rho^{{\rm ul}}(m)\})} \left\{(r_1^{\rm ul},\ldots,r_K^{\rm ul}): r_k^{\rm ul} \le R_k^{\rm ul,TIN}(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\}),\forall k\in \mathcal{K} \right\}, \label{eqn:uplink rate region com} \\ & \bar{\mathcal{R}}_{{\rm IV}}^{\rm ul}(\{C_m\},P,\{\rho^{{\rm ul}}(m)\},\{\tau^{{\rm ul}}(k)\}) \nonumber \\ & \triangleq \bigcup\limits_{\left(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\}\right)\in \mathcal{T}^{{\rm ul,WZ}}(\{C_m\},P,\{\rho^{{\rm ul}}(m)\})} \bigg\{(r_1^{\rm ul},\ldots,r_K^{\rm ul}): \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ r_{\tau^{{\rm ul}}(k)}^{\rm ul} \le R_{\tau^{{\rm ul}}(k)}^{\rm ul,SIC}(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\},\{\tau^{{\rm ul}}(k)\}),\forall k\in \mathcal{K} \bigg\}, \end{align}denote the rate regions of Case I, and Case II given the decoding order $\tau^{{\rm ul}}(1),\ldots,\tau^{{\rm ul}}(K)$, and Case III given the decompression order $\rho^{{\rm ul}}(1),\ldots,\rho^{{\rm ul}}(M)$, and Case IV given the decoding order $\tau^{{\rm ul}}(1),\ldots,\tau^{{\rm ul}}(K)$ and the decompression order $\rho^{{\rm ul}}(1),\ldots,\rho^{{\rm ul}}(M)$, respectively. As a remark, because the achievable rates under the proposed transmission and relaying strategies are not necessarily concave functions of $C_m$ and $P$, there is the potential to further enlarge the above rate region by taking the convex hull over different $C_m$'s and $P$'s. For ease of presentation, the statements of the main results in this paper do not include this additional convex hull operation, but such an operation can be easily incorporated. \subsection{Broadcast Channel with Compressing Relays}\label{sec:Broadcast Relay Channel} The Gaussian broadcast relay channel model is as shown in Fig.~\ref{fig3}(b). The discrete-time baseband channel model between the relays and the users is the dual of the multiple-access relay channel, given by \begin{align}\label{eqn:downlink received signal} \left[\begin{array}{c} y_{1}^{\rm dl} \\ \vdots \\ y_K^{\rm dl} \end{array} \right]\hspace{-2pt} =\hspace{-2pt} \left[\begin{array}{ccc}h_{1,1}^H & \cdots & h_{M,1}^H \\ \vdots & \ddots & \vdots \\ h_{1,K}^H & \cdots & h_{M,K}^H \end{array}\right]\left[\begin{array}{c}x_1^{\rm dl} \\ \vdots \\ x_M^{\rm dl} \end{array}\right]\hspace{-2pt}+\hspace{-2pt}\left[\begin{array}{c} z_1^{\rm dl} \\ \vdots \\ z_K^{\rm dl} \end{array}\right], \end{align}where $x_m^{\rm dl}$ denotes the transmit signal of relay $m$, and $z_k^{\rm dl}\sim \mathcal{CN}(0,\sigma^2)$ denotes the AWGN at user $k$. Transmission and relaying strategies for the broadcast relay channel have also been studied extensively in the literature. For example, the CP can choose to partially share the messages of each user with multiple BSs in order to enable cooperation \cite{Pratik_hybrid}. This paper, however, focuses on a compression strategy in which the beamformed signals are precomputed at the CP, then compressed and forwarded to the relays \cite{Simeone13}, because of its potential to achieve the capacity region to within a constant gap \cite{Kim_DDF, Yu19}.
More specifically, we use a Gaussian codebook for each user and define the beamformed signal intended for user $k$ to be transmitted across all the relays as $\mv{v}_k\sqrt{p_k^{\rm dl}}s_k^{\rm dl}$, $\forall k$, where $s_k^{\rm dl}\sim \mathcal{CN}(0,1)$ denotes the message for user $k$, $p_k^{\rm dl}$ denotes the transmit power, and $\mv{v}_k=[v_{k,1},\ldots,v_{k,M}]^T$ with $\|\mv{v}_k\|^2=1$ denotes the transmit beamforming vector across the relays. The aggregate signal intended for all the relays is thus given by \begin{align} \tilde{\mv{x}}^{{\rm dl}}=[\tilde{x}_1^{{\rm dl}},\ldots,\tilde{x}_M^{{\rm dl}}]^T=\sum_{k=1}^K\mv{v}_k\sqrt{p_k^{\rm dl}}s_k^{\rm dl}, \end{align} which is compressed then sent to the relays via fronthaul links. Similar to (\ref{eqn:uplink compression}), the quantization noises are modelled as Gaussian random variables, i.e., \begin{align}\label{eqn:downlink compression} x_m^{\rm dl}=\tilde{x}_m^{\rm dl}+e_m^{\rm dl}, ~~~ \forall m\in \mathcal{M}, \end{align}where $e_m^{\rm dl}\sim \mathcal{CN}(0,q_m^{\rm dl})$ denotes the quantization noise at relay $m$, with $q_m^{\rm dl}$ denoting its variance. Putting all the quantization noises across all the relays together as $\mv{e}^{\rm dl}=[e_1^{\rm dl},\ldots,e_M^{\rm dl}]^T$, we have the quantization noise covariance matrix \begin{align}\label{eqn:downlink compression covariance 0} \mv{Q}=\mathbb{E}\left[\mv{e}^{\rm dl}\left(\mv{e}^{\rm dl}\right)^H\right]\succeq \mv{0}. \end{align} The compressed versions of the beamformed signals are transmitted across the relays. According to (\ref{eqn:downlink compression}), the transmit signal can be expressed as \begin{align}\label{eqn:downlink signal} \left[\begin{array}{c} x_1^{\rm dl} \\ \vdots \\ x_M^{\rm dl} \end{array}\right]=\left[\begin{array}{c} \sum_{k=1}^Kv_{k,1}\sqrt{p_k^{\rm dl}}s_k^{\rm dl} \\ \vdots \\ \sum_{k=1}^Kv_{k,M}\sqrt{p_k^{\rm dl}}s_k^{\rm dl} \end{array}\right]+\left[\begin{array}{c}e_1^{\rm dl}\\ \vdots \\ e_M^{\rm dl} \end{array}\right]. \end{align} Under the above model, the transmit power of all the relays is expressed as \begin{align}\label{eqn:downlink sum-power} \hspace{-5pt} P^{\rm dl}(\{p_k^{\rm dl}\},\mv{Q})=\sum\limits_{m=1}^M\mathbb{E}[|x_m^{\rm dl}|^2]=\sum\limits_{k=1}^Kp_k^{\rm dl}+{\rm tr}(\mv{Q}). \end{align} \subsubsection{Compression Strategies} We consider two compression strategies in our considered broadcast relay channel: the independent compression strategy and the multivariate compression strategy. If the compression is done independently for the signals across different relays, then the covariance matrix of the compression noise given in (\ref{eqn:downlink compression covariance 0}) reduces to a diagonal matrix, i.e., \begin{align}\label{eqn:downlink compression covariance 1} \mv{Q}= \mv{Q}_{{\rm diag}}\triangleq {\rm diag}([q_1^{{\rm dl}},\ldots,q_M^{{\rm dl}}]). \end{align}As a result, the fronthaul rate for transmitting $x_m^{\rm dl}$ is expressed as \begin{align}\label{eqn:downlink fronthaul rate 1} C_m^{\rm dl,IN}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}})= I(\tilde{x}_m^{\rm dl};x_m^{\rm dl}) = \log_2 \frac{\sum\limits_{k=1}^Kp_k^{\rm dl}|v_{k,m}|^2+\mv{Q}_{{\rm diag}}^{(m,m)} }{\mv{Q}_{{\rm diag}}^{(m,m)}}, ~~~ \forall m\in \mathcal{M}. \end{align} Alternatively, the CP can also use the multivariate compression strategy. 
Given a compression order at the CP $\rho^{{\rm dl}}(1),\ldots,\rho^{{\rm dl}}(M)$ in which the signal for relay $\rho^{{\rm dl}}(1)\in \mathcal{M}$ is compressed first, the signal for relay $\rho^{{\rm dl}}(2)$ is compressed second, etc., the compression rate for relay $\rho^{{\rm dl}}(m)$ can be expressed as \cite{Simeone13} \begin{align}\label{eqn:downlink fronthaul rate} & C_{\rho^{{\rm dl}}(m)}^{\rm dl,MV}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q},\{\rho^{{\rm dl}}(m)\}) = I(x_{\rho^{{\rm dl}}(m)}^{\rm dl};\hat{x}_{\rho^{{\rm dl}}(m)}^{\rm dl}|\hat{x}_{\rho^{{\rm dl}}(1)}^{\rm dl},\ldots,\hat{x}_{\rho^{{\rm dl}}(m-1)}^{\rm dl}) \nonumber \\ & = \log_2 \frac{\sum_{k=1}^{K} p_k^{\rm dl} \left|v_{k,\rho^{{\rm dl}}(m)}\right|^2 + \mv{Q}_{\{\rho^{{\rm dl}}(m)\}}^{(m,m)}}{{\mv{Q}_{\{\rho^{{\rm dl}}(m)\}}^{(m,m)}}-\mv{Q}_{\{\rho^{{\rm dl}}(m)\}}^{(m,m+1:M)}(\mv{Q}_{\{\rho^{{\rm dl}}(m)\}}^{(m+1:M,m+1:M)})^{-1}\mv{Q}_{\{\rho^{{\rm dl}}(m)\}}^{(m+1:M,m)}}, ~~~ \forall m \in \mathcal{M}, \end{align}where \begin{align}\label{eqn:downlink compression covariance} \mv{Q}_{\{\rho^{{\rm dl}}(m)\}}=\mathbb{E}\left[[e_{\rho^{{\rm dl}}(1)}^{{\rm dl}},\ldots,e_{\rho^{{\rm dl}}(M)}^{{\rm dl}}]^T[(e_{\rho^{{\rm dl}}(1)}^{{\rm dl}})^\ast,\ldots,(e_{\rho^{{\rm dl}}(M)}^{{\rm dl}})^\ast]\right]. \end{align} \subsubsection{Encoding Strategies} We consider two encoding strategies at the CP: the linear encoding strategy and the nonlinear encoding strategy via dirty-paper coding. If the CP employs linear encoding, the achievable rate of user $k$ can be expressed as \begin{align}\label{eqn:downlink rate 1} R_k^{\rm dl,LIN}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q})=I(s_k^{\rm dl};y_k^{\rm dl}) = \log_2\frac{\sum\limits_{i=1}^Kp_i^{\rm dl}|\mv{v}_i^H\mv{h}_k|^2+\mv{h}_k^H\mv{Q}\mv{h}_k+\sigma^2}{\sum\limits_{j\neq k}p_j^{\rm dl}|\mv{v}_j^H\mv{h}_k|^2+\mv{h}_k^H\mv{Q}\mv{h}_k+\sigma^2}, ~~~ \forall k\in \mathcal{K}. \end{align} If the dirty-paper coding strategy is used, given an encoding order $\tau^{{\rm dl}}(1),\ldots,\tau^{{\rm dl}}(K)$ at the CP in which the message of user $\tau^{{\rm dl}}(1)\in \mathcal{K}$ is encoded first, the message of user $\tau^{{\rm dl}}(2)\in \mathcal{K}$ is encoded second, etc., the achievable rate of user $\tau^{{\rm dl}}(k)$ is expressed as \begin{align}\label{eqn:downlink rate} & R_{\tau^{{\rm dl}}(k)}^{\rm dl,DPC}\left(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q},\{\tau^{{\rm dl}}(k)\}\right)=I(s_{\tau^{{\rm dl}}(k)}^{\rm dl};y_{\tau^{{\rm dl}}(k)}^{\rm dl}|s_{\tau^{{\rm dl}}(1)}^{\rm dl},\ldots,s_{\tau^{{\rm dl}}(k-1)}^{\rm dl}) \nonumber \\ & = \log_2\frac{\sum\limits_{i\leq k}p_{\tau^{{\rm dl}}(i)}^{\rm dl}|\mv{v}_{\tau^{{\rm dl}}(i)}^H\mv{h}_{\tau^{{\rm dl}}(k)}|^2+\mv{h}_{\tau^{{\rm dl}}(k)}^H\mv{Q}\mv{h}_{\tau^{{\rm dl}}(k)}+\sigma^2}{\sum\limits_{j< k}p_{\tau^{{\rm dl}}(j)}^{\rm dl}|\mv{v}_{\tau^{{\rm dl}}(j)}^H\mv{h}_{\tau^{{\rm dl}}(k)}|^2+\mv{h}_{\tau^{{\rm dl}}(k)}^H\mv{Q}\mv{h}_{\tau^{{\rm dl}}(k)}+\sigma^2}, ~~~ \forall k \in \mathcal{K}. \end{align} In (\ref{eqn:downlink rate 1}) and (\ref{eqn:downlink rate}), if we set $\mv{Q}=\mv{Q}_{{\rm diag}}$ as shown in (\ref{eqn:downlink compression covariance 1}), then $R_k^{\rm dl,LIN}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}})$'s and $R_{\tau^{{\rm dl}}(k)}^{\rm dl,DPC}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}},\{\tau^{{\rm dl}}(k)\})$'s denote the user rates achieved by the independent compression strategy.
If $\mv{Q}$ is a full matrix (i.e., non-diagonal), then $R_k^{\rm dl,LIN}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q})$'s and $R_{\tau^{{\rm dl}}(k)}^{\rm dl,DPC}\left(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q},\{\tau^{{\rm dl}}(k)\}\right)$'s denote the user rates achieved by the multivariate compression strategy. \subsubsection{Achievable Rate Regions} Given the individual fronthaul capacity constraints $\{C_m\}$ and sum-power constraint $P$, define \begin{align} & \mathcal{T}^{\rm dl,IN}(\{C_m\},P) \nonumber \\ &= \big\{(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}}): P^{\rm dl}(\{p_k^{\rm dl}\},\mv{Q}_{{\rm diag}})\leq P, C_m^{\rm dl,IN}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}}) \leq C_m, \forall m\in \mathcal{M}, \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~~~ \mv{Q}_{{\rm diag}}\succeq \mv{0} ~ {\rm is} ~ {\rm diagonal}, p_k^{{\rm dl}}\geq 0, \|\mv{v}_k\|^2=1,\forall k\in \mathcal{K} \big\}, \\ & \mathcal{T}^{\rm dl,MV}(\{C_m\},P,\{\rho^{{\rm dl}}(m)\}) \nonumber \\ &= \big\{(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}): P^{\rm dl}(\{p_k^{\rm dl}\},\mv{Q})\leq P, C_{\rho^{{\rm dl}}(m)}^{\rm dl,MV}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q},\{\rho^{{\rm dl}}(m)\}) \leq C_{\rho^{{\rm dl}}(m)}, \forall m\in \mathcal{M}, \nonumber \\ & ~~~~~~~~~~~~~~~~~~~~~ \mv{Q}\succeq \mv{0}, p_k^{{\rm dl}}\geq 0, \|\mv{v}_k\|^2=1,\forall k\in \mathcal{K} \big\} \end{align} as the sets of feasible transmit powers, beamforming vectors, and compression noise covariance matrices for the cases of independent compression and multivariate compression under the compression order of $\rho^{{\rm dl}}(1),\ldots,\rho^{{\rm dl}}(M)$, respectively. Then, in the broadcast relay channel, the achievable rate regions for Case I: linear encoding and independent compression at the CP, Case II: dirty-paper coding and independent compression at the CP, Case III: linear encoding and multivariate compression at the CP, and Case IV: dirty-paper coding and multivariate compression at the CP, are respectively given by \begin{align} & \mathcal{R}_{{\rm I}}^{\rm dl}(\{C_m\},P) \triangleq \text{co } \mathcal{\bar R}_{{\rm I}}^{\rm dl}(\{C_m\},P), \label{eqn:downlink rate region 1} \\ & \mathcal{R}_{{\rm II}}^{\rm dl}(\{C_m\},P)\triangleq \text{co} \bigcup\limits_{\{\tau^{{\rm dl}}(k)\}}\bar{\mathcal{R}}_{{\rm II}}^{\rm dl}(\{C_m\},P,\{\tau^{{\rm dl}}(k)\}), \label{eqn:downlink rate region 2} \\ & \mathcal{R}_{{\rm III}}^{\rm dl}(\{C_m\},P)\triangleq \text{co} \bigcup\limits_{\{\rho^{{\rm dl}}(m)\}}\bar{\mathcal{R}}_{{\rm III}}^{\rm dl}(\{C_m\},P,\{\rho^{{\rm dl}}(m)\}), \label{eqn:downlink rate region} \\ & \mathcal{R}_{{\rm IV}}^{\rm dl}(\{C_m\},P)\triangleq \text{co} \bigcup\limits_{\left(\{\rho^{{\rm dl}}(m)\},\{\tau^{{\rm dl}}(k)\}\right)}\bar{\mathcal{R}}_{{\rm IV}}^{\rm dl}(\{C_m\},P,\{\rho^{{\rm dl}}(m)\},\{\tau^{{\rm dl}}(k)\}), \label{eqn:downlink rate region 4} \end{align} where \begin{align} & \mathcal{\bar R}_{{\rm I}}^{\rm dl}(\{C_m\},P) \nonumber \\ & \triangleq \bigcup\limits_{\left(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}}\right)\in \mathcal{T}^{\rm dl,IN}(\{C_m\},P)} \left\{ (r_1^{\rm dl},\ldots,r_K^{\rm dl}): r_k^{\rm dl} \le R_k^{\rm dl,LIN}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}}), \forall k\in \mathcal{K} \right\}, \\ & \mathcal{\bar R}_{{\rm II}}^{\rm dl}(\{C_m\},P,\{\tau^{{\rm dl}}(k)\}) \nonumber \\ & \triangleq \bigcup\limits_{\left(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}}\right)\in \mathcal{T}^{\rm dl,IN}(\{C_m\},P)} \left\{ (r_1^{\rm dl},\ldots,r_K^{\rm dl}): r_{\tau^{{\rm dl}}(k)}^{\rm dl} \le R_{\tau^{{\rm dl}}(k)}^{\rm
dl,DPC}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}},\{\tau^{{\rm dl}}(k)\}), \forall k\in \mathcal{K} \right\}, \\ & \mathcal{\bar R}_{{\rm III}}^{\rm dl}(\{C_m\},P,\{\rho^{{\rm dl}}(m)\}) \nonumber \\ & \triangleq \bigcup\limits_{\left(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}\right)\in \mathcal{T}^{\rm dl,MV}(\{C_m\},P,\{\rho^{{\rm dl}}(m)\})} \left\{ (r_1^{\rm dl},\ldots,r_K^{\rm dl}): r_k^{\rm dl} \le R_k^{\rm dl,LIN}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}), \forall k\in \mathcal{K} \right\}, \label{eqn:downlink rate region com} \\ & \mathcal{\bar R}_{{\rm IV}}^{\rm dl}(\{C_m\},P,\{\rho^{{\rm dl}}(m)\},\{\tau^{{\rm dl}}(k)\}) \nonumber \\ & \triangleq \bigcup\limits_{\left(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}\right)\in \mathcal{T}^{\rm dl,MV}(\{C_m\},P,\{\rho^{{\rm dl}}(m)\})} \left\{ (r_1^{\rm dl},\ldots,r_K^{\rm dl}): r_{\tau^{{\rm dl}}(k)}^{\rm dl} \le R_{\tau^{{\rm dl}}(k)}^{\rm dl,DPC}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q},\{\tau^{{\rm dl}}(k)\}), \forall k\in \mathcal{K} \right\}, \end{align}are the rate regions of Case I, and Case II given the encoding order $\tau^{{\rm dl}}(1),\ldots,\tau^{{\rm dl}}(K)$, and Case III given the compression order $\rho^{{\rm dl}}(1),\ldots,\rho^{{\rm dl}}(M)$, and Case IV given the encoding order $\tau^{{\rm dl}}(1),\ldots,\tau^{{\rm dl}}(K)$ and the compression order $\rho^{{\rm dl}}(1),\ldots,\rho^{{\rm dl}}(M)$, respectively. As a remark, similar to the multiple-access relay channel case, an additional closure of convex hull operation can be applied over the $C_m$'s and $P$'s to potentially enlarge the achievable rate region. The statements of main results in this paper do not include this extra convex hull operation for simplicity, but it can be easily incorporated in all the statements of the theorems and the proofs. \section{Main Results}\label{sec:Duality between Multiple-Access Channel and Broadcast Relay Channel} The main results of this work are the following set of theorems showing the duality relationships between the achievable rate regions of the multiple-access relay channel and the broadcast relay channel under the same sum-power constraint and individual fronthaul constraints. \begin{theorem}\label{theorem3} Consider the multiple-access relay channel implementing independent compression across the relays as well as linear decoding at the CP and the broadcast relay channel implementing independent compression across the relays as well as linear encoding at the CP, where all the users and relays are equipped with a single antenna. Then, under the same sum-power constraint $P$ and individual fronthaul capacity constraints $C_m$'s, the achievable rate regions of the multiple-access relay channel defined in (\ref{eqn:uplink rate region 1}) and the broadcast relay channel defined in (\ref{eqn:downlink rate region 1}) are identical. In other words, \begin{align} \mathcal{R}_{{\rm I}}^{\rm ul}(\{C_m\},P)=\mathcal{R}_{{\rm I}}^{\rm dl}(\{C_m\},P). \end{align} \end{theorem} \begin{theorem}\label{theorem2} Consider the multiple-access relay channel implementing independent compression across the relays as well as successive interference cancellation at the CP and the broadcast relay channel implementing independent compression across the relays as well as dirty-paper coding at the CP, where all the users and relays are equipped with a single antenna. 
Then, under the same sum-power constraint $P$ and individual fronthaul capacity constraints $C_m$'s, the achievable rate regions of the multiple-access relay channel defined in (\ref{eqn:uplink rate region 2}) and the broadcast relay channel defined in (\ref{eqn:downlink rate region 2}) are identical. In other words, \begin{align} \mathcal{R}_{{\rm II}}^{\rm ul}(\{C_m\},P)=\mathcal{R}_{{\rm II}}^{\rm dl}(\{C_m\},P). \end{align} \end{theorem} \begin{theorem}\label{theorem1} Consider the multiple-access relay channel implementing Wyner-Ziv compression across the relays as well as linear decoding at the CP and the broadcast relay channel implementing multivariate compression across the relays as well as linear encoding at the CP, where all the users and relays are equipped with a single antenna. Then, under the same sum-power constraint $P$ and individual fronthaul capacity constraints $C_m$'s, the achievable rate regions of the multiple-access relay channel defined in (\ref{eqn:uplink rate region}) and the broadcast relay channel defined in (\ref{eqn:downlink rate region}) are identical. In other words, \begin{align} \mathcal{R}_{{\rm III}}^{\rm ul}(\{C_m\},P)=\mathcal{R}_{{\rm III}}^{\rm dl}(\{C_m\},P). \end{align} \end{theorem} \begin{theorem}\label{theorem4} Consider the multiple-access relay channel implementing Wyner-Ziv compression across the relays as well as successive interference cancellation at the CP and the broadcast relay channel implementing multivariate compression across the relays as well as dirty-paper coding at the CP, where all the users and relays are equipped with a single antenna. Then, under the same sum-power constraint $P$ and individual fronthaul capacity constraints $C_m$'s, the achievable rate regions of the multiple-access relay channel defined in (\ref{eqn:uplink rate region 4}) and the broadcast relay channel defined in (\ref{eqn:downlink rate region 4}) are identical. In other words, \begin{align} \mathcal{R}_{{\rm IV}}^{\rm ul}(\{C_m\},P)=\mathcal{R}_{{\rm IV}}^{\rm dl}(\{C_m\},P). \end{align} \end{theorem} As mentioned earlier, when the fronthaul capacity of each relay is infinite, i.e., $C_m\rightarrow \infty$, $\forall m$, the quantization noises can be set to zero. As a result, the $M$ relays and the CP are equivalent to a virtual BS with $M$ antennas. In this case, the multiple-access relay channel and the broadcast relay channel reduce to the usual multiple-access channel and broadcast channel, respectively; therefore, the classic uplink-downlink duality directly applies. Our main results, i.e., Theorems \ref{theorem3} to \ref{theorem4}, are generalizations of the classic uplink-downlink duality result to the case with non-zero quantization noises. We note that the exact capacity region characterizations of the multiple-access relay channel and the broadcast relay channel are both still open problems. The duality results above pertain to the specific compression-based relaying strategies. Although it is possible to outperform these strategies for specific channel instances, the compression strategies are important both in practical implementations \cite{Simeone16} and due to their ability to approximately achieve the theoretical capacity regions under specific conditions for both the uplink \cite{Yu16} and the downlink \cite{Yu19}, as mentioned earlier.
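Before turning to the proofs, the following sketch makes the quantities appearing in the theorems concrete. For an assumed random channel instance with matched beamformers and arbitrary illustrative powers and quantization noise levels, it evaluates the uplink rate and fronthaul expressions (\ref{eqn:uplink rate}), (\ref{eqn:uplink fronthaul rate independent}), and (\ref{eqn:uplink fronthaul rate}) (the last for the identity decompression order), together with the downlink counterparts (\ref{eqn:downlink rate 1}) and (\ref{eqn:downlink fronthaul rate 1}). All names and numbers are assumptions. Note that the duality theorems assert that the achievable rate regions coincide; for a common choice of powers and quantization noises, the individual rate values on the two sides generally differ.
\begin{verbatim}
# Illustrative evaluation of the uplink/downlink rate and fronthaul
# expressions (assumed random instance; not part of the duality proof).
import numpy as np

rng = np.random.default_rng(2)
M, K, sigma2 = 4, 3, 1.0
H = (rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))) / np.sqrt(2)
U = H / np.linalg.norm(H, axis=0, keepdims=True)  # shared unit-norm beamformers
p_ul = p_dl = np.full(K, 2.0)                     # illustrative powers
q_ul = q_dl = np.full(M, 0.5)                     # illustrative quantization noises
G = np.abs(U.conj().T @ H) ** 2                   # G[k, i] = |u_k^H h_i|^2

def ul_tin_rate(k):      # R_k^{ul,TIN}, cf. (eqn:uplink rate)
    noise = (q_ul * np.abs(U[:, k]) ** 2).sum() + sigma2
    total = p_ul @ G[k] + noise
    return np.log2(total / (total - p_ul[k] * G[k, k]))

def ul_in_fronthaul(m):  # C_m^{ul,IN}, cf. (eqn:uplink fronthaul rate independent)
    return np.log2(((p_ul * np.abs(H[m]) ** 2).sum() + q_ul[m] + sigma2) / q_ul[m])

# Wyner-Ziv fronthaul rate for the identity decompression order;
# Gamma[:m, :m] mirrors the 1-indexed submatrix Gamma^{(1:m,1:m)}.
Gamma = (H * p_ul) @ H.conj().T + sigma2 * np.eye(M) + np.diag(q_ul)
def ul_wz_fronthaul(m):  # m = 0, ..., M-1
    top = np.linalg.det(Gamma[:m + 1, :m + 1]).real
    bot = (np.linalg.det(Gamma[:m, :m]).real if m else 1.0) * q_ul[m]
    return np.log2(top / bot)

def dl_lin_rate(k):      # R_k^{dl,LIN} with diagonal Q, cf. (eqn:downlink rate 1)
    noise = (q_dl * np.abs(H[:, k]) ** 2).sum() + sigma2
    total = p_dl @ G[:, k] + noise
    return np.log2(total / (total - p_dl[k] * G[k, k]))

def dl_in_fronthaul(m):  # C_m^{dl,IN}, cf. (eqn:downlink fronthaul rate 1)
    return np.log2(((p_dl * np.abs(U[m]) ** 2).sum() + q_dl[m]) / q_dl[m])

print("UL rates:", [round(ul_tin_rate(k), 3) for k in range(K)])
print("DL rates:", [round(dl_lin_rate(k), 3) for k in range(K)])
print("UL IN/WZ fronthaul:", [round(ul_in_fronthaul(m), 3) for m in range(M)],
      [round(ul_wz_fronthaul(m), 3) for m in range(M)])
print("DL IN fronthaul:", [round(dl_in_fronthaul(m), 3) for m in range(M)])
\end{verbatim}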
\section{Proof of Theorem \ref{theorem3}}\label{sec 1} In this section, we prove the duality between the multiple-access relay channel with linear decoding at the CP as well as independent compression across the relays and the broadcast relay channel with linear encoding at the CP and independent compression across the relays. Suppose that the same beamforming vectors \begin{align}\label{eqn:beamforming vector} \mv{v}_k=\mv{w}_k=\bar{\mv{u}}_k=[\bar{u}_{k,1},\ldots,\bar{u}_{k,M}]^T, ~~~ \forall k\in \mathcal{K}, \end{align}with $\|\bar{\mv{u}}_k\|=1$, $\forall k\in \mathcal{K}$, are used in both the multiple-access relay channel and the broadcast relay channel. For simplicity, we assume that the beamforming vectors $\bar{\mv{u}}_k$'s satisfy the following condition: \begin{align}\label{eqn:assumption} \sum\limits_{k=1}^K|\bar{u}_{k,m}|^2>0, ~~~ \forall m\in \mathcal{M}. \end{align}Condition (\ref{eqn:assumption}) indicates that all the relays are used for communications so that the fronthaul rates are properly defined. If we have $\sum_{k=1}^K|\bar{u}_{k,m}|^2=0$ for some $m$, then we can simply define $\bar{\mathcal{M}}=\{m:\sum_{k=1}^K|\bar{u}_{k,m}|^2>0\}$ and $\bar{M}=|\bar{\mathcal{M}}|$. In this case, the considered system is equivalent to a system consisting only of the $\bar{M}$ relays in the set $\bar{\mathcal{M}}$, for which we have $\sum\limits_{k=1}^K|\bar{u}_{k,m}|^2>0$, $\forall m\in \bar{\mathcal{M}}$. As a result, condition (\ref{eqn:assumption}) does not affect the generality of our following results. Let $\{R_k \ge 0, k \in \mathcal{K} \}$ be a set of user target rates and $\{C_m\geq 0, m \in \mathcal{M} \}$ be a set of fronthaul rate requirements for the relays. For the multiple-access relay channel as described in Section \ref{sec:Multiple-Access Relay Channel}, we fix the receive beamforming vectors as in (\ref{eqn:beamforming vector}) and formulate the transmit power minimization problem subject to the individual rate constraints as well as the individual fronthaul capacity constraints as follows: \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm ul}\},\{q_m^{\rm ul}\}} & ~ P^{\rm ul}(\{p_k^{\rm ul}\}) \label{eqn:problem3 1} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ R_k^{\rm ul,TIN}(\{p_k^{\rm ul},\bar{\mv{u}}_k\},\{q_m^{\rm ul}\})\geq R_k, ~ \forall k\in \mathcal{K}, \label{eqn:uplink rate constraint} \\ \hspace{-8pt} & ~ C_m^{\rm ul,IN}(\{p_k^{\rm ul}\},q_m^{\rm ul})\leq C_m, ~ \forall m\in \mathcal{M}, \label{eqn:uplink fronthaul constraint3} \\ \hspace{-8pt} & ~ p_k^{\rm ul} \ge 0, ~ \forall k\in \mathcal{K}, \label{eqn:uplink nonnegative power} \\ \hspace{-8pt} & ~ q_m^{\rm ul} \ge 0, ~ \forall m\in \mathcal{M}.
\label{eqn:uplink positive} \end{align} Similarly, for the broadcast relay channel as described in Section \ref{sec:Broadcast Relay Channel}, we fix the transmit beamforming vectors as in (\ref{eqn:beamforming vector}) and formulate the transmit power minimization problem as \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm dl}\},\mv{Q}_{{\rm diag}}} & ~ P^{\rm dl}(\{p_k^{\rm dl}\},\mv{Q}_{{\rm diag}}) \label{eqn:problem3 2} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ R_k^{\rm dl,LIN}(\{p_k^{\rm dl},\bar{\mv{u}}_k\},\mv{Q}_{{\rm diag}})\geq R_k, ~ \forall k\in \mathcal{K}, \label{eqn:downlink rate constraint} \\ \hspace{-8pt} & ~ C_m^{\rm dl,IN}(\{p_k^{\rm dl},\bar{\mv{u}}_k\},\mv{Q}_{{\rm diag}})\leq C_m, ~ \forall m\in \mathcal{M}, \label{eqn:downlink fronthaul constraint3} \\ \hspace{-8pt} & ~ p_k^{\rm dl} \ge 0, ~ \forall k\in \mathcal{K}, \label{eqn:nonnegative power} \\ \hspace{-8pt} & ~ \mv{Q}_{{\rm diag}}^{(m,m)} \ge 0, ~ \forall m\in \mathcal{M}. \label{eqn:positive quantization noise3} \end{align} Our aim is to show that given the same fixed beamformers and under the same set of rate targets $\{R_k\}$, the optimization problems (\ref{eqn:problem3 1}) and (\ref{eqn:problem3 2}) are equivalent in the sense that either both are infeasible, or both are feasible and attain the same minimum value. This would imply that under the same fronthaul capacity constraints $\{C_m\}$ and total transmit power constraint $P$, fixing the same beamformers, any achievable rate-tuple $\{R_k\}$ in the multiple-access relay channel is also achievable in the broadcast relay channel, and vice versa. Then, by trying all beamforming vectors, this would imply that the achievable rate regions of the multiple-access relay channel under linear decoding as well as independent compression and the broadcast relay channel under linear encoding as well as independent compression are identical, i.e., $\mathcal{\bar R}_{{\rm I}}^{\rm ul}(\{C_m\},P)=\mathcal{\bar R}_{{\rm I}}^{\rm dl}(\{C_m\},P)$. Finally, by taking the convex hull, we get $\mathcal{R}_{{\rm I}}^{\rm ul}(\{C_m\},P)=\mathcal{R}_{{\rm I}}^{\rm dl}(\{C_m\},P)$. To show the equivalence of the optimization problems (\ref{eqn:problem3 1}) and (\ref{eqn:problem3 2}) for the fixed beamformers, we take a set of $\{R_k\}$ and $\{C_m\}$ such that both (\ref{eqn:problem3 1}) and (\ref{eqn:problem3 2}) are strictly feasible, and show that (\ref{eqn:problem3 1}) and (\ref{eqn:problem3 2}) can both be transformed into convex optimization problems. Further, we show that the two convex formulations are the Lagrangian duals of each other, which implies that they must have the same minimum sum power. Once this is proved, we can further infer that the feasible rate regions of the two problems are identical. This is because the feasible rate regions can be equivalently viewed as the sets of rate-tuples for which the minimum values of the optimization problems (\ref{eqn:problem3 1}) and (\ref{eqn:problem3 2}) are finite. As both (\ref{eqn:problem3 1}) and (\ref{eqn:problem3 2}) can be reformulated as convex problems, the minimum powers in the two problems are convex functions of $\{R_k\}$ and $\{C_m\}$ \cite{Boyd04}. This means that the minimum powers in the two problems are continuous functions of $\{R_k\}$.
Since the minimum powers of the two problems are the same whenever $\{R_k\}$ is strictly feasible for both problems, as $\{R_k\}$ approaches the feasibility boundary of one problem, the minimum powers for both problems must go to infinity at the same time, implying that the same $\{R_k\}$ must be approaching the feasibility boundary of the other problem as well; thus the two problems must have the same feasibility region. This same argument also applies to the proofs of Theorems \ref{theorem2} to \ref{theorem4}. Thus, in the rest of the proofs of all four theorems, we only show that given a set of user rates $\{R_k\}$ and fronthaul capacities $\{C_m\}$ that are strictly feasible in both the multiple-access relay channel and the broadcast relay channel under the fixed beamforming vectors (\ref{eqn:beamforming vector}), the minimum sum powers of the two problems are the same. We remark that our previous work \cite{Liang16} provides a different approach to validate the equivalence between (\ref{eqn:problem3 1}) and (\ref{eqn:problem3 2}) based on the classic power control technique. This paper uses the alternative approach of showing the equivalence between (\ref{eqn:problem3 1}) and (\ref{eqn:problem3 2}) based on a Lagrangian duality technique. This allows a unified approach for proving Theorems \ref{theorem3} to \ref{theorem4}. The proof involves the following two steps. \subsection{Convex Reformulation of Problem (\ref{eqn:problem3 2}) and Its Dual Problem}\label{sec:Convex Reformulation} First, we transform problem (\ref{eqn:problem3 2}) for the broadcast relay channel into the following convex optimization problem: \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm dl}\},\{q_m^{{\rm dl}}\}} & ~ \sum\limits_{k=1}^Kp_k^{\rm dl}\sigma^2+\sum\limits_{m=1}^Mq_m^{{\rm dl}}\sigma^2 \label{eqn:problem3 3} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ \frac{p_k^{{\rm dl}}|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}-\sum\limits_{j\neq k}p_j^{\rm dl}|\bar{\mv{u}}_j^H\mv{h}_k|^2-\sum\limits_{m=1}^Mq_m^{{\rm dl}}|h_{k,m}|^2-\sigma^2\geq 0, ~ \forall k\in \mathcal{K}, \label{eqn:downlink rate constraint3 1} \\ \hspace{-8pt} & ~ \sum\limits_{k=1}^Kp_k^{\rm dl}|{\bar{u}}_{k,m}|^2+q_m^{\rm dl}-2^{C_m}q_m^{\rm dl}\leq 0, ~ \forall m\in \mathcal{M}, \label{eqn:downlink fronthaul constraint3 2} \\ \hspace{-8pt} & ~ (\ref{eqn:nonnegative power}), ~ (\ref{eqn:positive quantization noise3}). \nonumber \end{align}Note that in the objective function, the sum power is multiplied by a constant $\sigma^2$ without loss of generality. In fact, problem (\ref{eqn:problem3 3}) is an LP, since the objective function and constraints are all linear in $p_k^{{\rm dl}}$'s and $q_m^{{\rm dl}}$'s. Take a set of strictly feasible $\{R_k\}$. Since problem (\ref{eqn:problem3 3}) is convex and strictly feasible, strong duality holds \cite{Boyd04}, i.e., problem (\ref{eqn:problem3 3}) is equivalent to its dual problem. In the following, we derive the dual problem of problem (\ref{eqn:problem3 3}).
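Before doing so, we remark that once the beamformers are fixed, both (\ref{eqn:problem3 3}) and its uplink counterpart are small LPs, so the duality asserted in Theorem \ref{theorem3} is easy to check numerically. The following Python sketch is an illustration only (it is not part of the proof, and it assumes the generic convex-optimization package CVXPY is available): it builds both LPs for a randomly drawn single-antenna instance with fixed unit-norm beamformers and compares their optimal values. The instance sizes, rate targets, and channel draws are arbitrary choices, and the uplink constraints are written in the explicit SINR/fronthaul form that emerges below as the dual problem (\ref{eqn:dual problem3 7}); for a strictly feasible instance, the two reported optimal values coincide.

\begin{verbatim}
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
K, M, sigma2 = 3, 4, 1.0              # illustrative sizes and noise power
R = np.array([0.4, 0.3, 0.5])         # rate targets R_k (bits)
C = np.full(M, 2.0)                   # fronthaul capacities C_m (bits)
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
U = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
U = U / np.linalg.norm(U, axis=0)     # fixed unit-norm beamformers u_k (columns)

G = np.abs(U.conj().T @ H) ** 2       # G[k, j] = |u_k^H h_j|^2
A = np.abs(H) ** 2                    # A[m, k] = |h_{k,m}|^2
B = np.abs(U) ** 2                    # B[m, k] = |u_{k,m}|^2

def min_sum_power(downlink):
    x = cp.Variable(K, nonneg=True)   # p_k^dl (downlink) or beta_k (uplink)
    y = cp.Variable(M, nonneg=True)   # q_m^dl (downlink) or lambda_m (uplink)
    cons = []
    for k in range(K):
        if downlink:                  # interference |u_j^H h_k|^2, noise |h_{k,m}|^2
            interf = sum(x[j] * G[j, k] for j in range(K) if j != k)
            quant = A[:, k] @ y
        else:                         # interference |u_k^H h_j|^2, noise |u_{k,m}|^2
            interf = sum(x[j] * G[k, j] for j in range(K) if j != k)
            quant = B[:, k] @ y
        cons.append(x[k] * G[k, k] / (2 ** R[k] - 1) >= interf + quant + sigma2)
    for m in range(M):
        if downlink:                  # downlink fronthaul: beamformed signal power
            cons.append(B[m, :] @ x + y[m] <= 2 ** C[m] * y[m])
        else:                         # uplink fronthaul includes the receiver noise
            cons.append(A[m, :] @ x + y[m] + sigma2 <= 2 ** C[m] * y[m])
    # downlink power counts users and relays; uplink power counts users only
    obj = sigma2 * (cp.sum(x) + cp.sum(y)) if downlink else sigma2 * cp.sum(x)
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()
    return prob.status, prob.value

print(min_sum_power(downlink=True))   # the two optimal values agree whenever
print(min_sum_power(downlink=False))  # the instance is strictly feasible
\end{verbatim}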
The Lagrangian of the problem (\ref{eqn:problem3 3}) is \begin{align} & L(\{p_k^{\rm dl},\beta_k\},\{q_m^{{\rm dl}},\lambda_m\})\nonumber \\ &= \sum\limits_{k=1}^Kp_k^{\rm dl}\sigma^2+\sum\limits_{m=1}^Mq_m^{{\rm dl}}\sigma^2-\sum\limits_{k=1}^K\beta_k\left(\frac{p_k^{{\rm dl}}|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}-\sum\limits_{j\neq k}p_j^{\rm dl}|\bar{\mv{u}}_j^H\mv{h}_k|^2-\sum\limits_{m=1}^Mq_m^{{\rm dl}}|h_{k,m}|^2-\sigma^2\right) \nonumber \\ & \quad +\sum\limits_{m=1}^M\lambda_m\left(\sum\limits_{k=1}^Kp_k^{\rm dl}|{\bar{u}}_{k,m}|^2+q_m^{\rm dl}-2^{C_m}q_m^{\rm dl}\right) \\ &= \sum\limits_{k=1}^K\beta_k\sigma^2+\sum\limits_{k=1}^Kp_k^{{\rm dl}}\left(\sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\lambda_m|\bar{u}_{k,m}|^2+\sigma^2-\frac{\beta_k|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}\right) \nonumber \\ &\quad +\sum\limits_{m=1}^Mq_m^{{\rm dl}}\left(\sum\limits_{k=1}^K\beta_k|h_{k,m}|^2+\lambda_m+\sigma^2-2^{C_m}\lambda_m\right), \label{eqn:Lagrangian} \end{align}where $\beta_k\geq 0$'s and $\lambda_m\geq 0$'s are the dual variables associated with constraints (\ref{eqn:downlink rate constraint3 1}) and (\ref{eqn:downlink fronthaul constraint3 2}) in problem (\ref{eqn:problem3 3}), respectively. The dual function is then defined as \begin{align}\label{eqn:dual function 1} g(\{\beta_k\},\{\lambda_m\})=\min\limits_{p_k^{{\rm dl}}\geq 0, \forall k\in \mathcal{K},q_m^{{\rm dl}}\geq 0, \forall m\in \mathcal{M}} ~ L(\{p_k^{\rm dl},\beta_k\},\{q_m^{{\rm dl}},\lambda_m\}) \end{align} Finally, the dual problem of problem (\ref{eqn:problem3 3}) is expressed as \begin{align} \hspace{-8pt} \mathop{\mathrm{maximize}}_{\{\beta_k\},\{\lambda_m\}} & ~ g(\{\beta_k\},\{\lambda_m\}) \label{eqn:dual problem3 3} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ \beta_k\geq 0, ~~~ \forall k\in \mathcal{K}, \label{eqn:broadcast positve beta}\\ & ~ \lambda_m\geq 0, ~~~ \forall m\in \mathcal{M}. \label{eqn:broadcast positive lambda} \end{align} Note that according to (\ref{eqn:dual function 1}), $g(\{\beta_k\},\{\lambda_m\})=\sum_{k=1}^K\beta_k\sigma^2$ if and only if \begin{align} & \sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\lambda_m|\bar{u}_{k,m}|^2+\sigma^2-\frac{\beta_k|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}\geq 0, ~ \forall k\in \mathcal{K}, \label{eqn:dual constraint 1} \\ & \sum\limits_{k=1}^K\beta_k|h_{k,m}|^2+\lambda_m+\sigma^2-2^{C_m}\lambda_m\geq 0, ~ \forall m\in \mathcal{M}. \label{eqn:dual constraint fronthaul} \end{align} Otherwise, $g(\{\beta_k\},\{\lambda_m\})=-\infty$. As a result, problem (\ref{eqn:dual problem3 3}) can be transformed into the following equivalent problem: \begin{align} \hspace{-8pt} \mathop{\mathrm{maximize}}_{\{\beta_k\},\{\lambda_m\}} & ~ \sum\limits_{k=1}^K\beta_k\sigma^2 \label{eqn:dual problem3 4} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ (\ref{eqn:broadcast positve beta}), ~ (\ref{eqn:broadcast positive lambda}), ~ (\ref{eqn:dual constraint 1}), ~ (\ref{eqn:dual constraint fronthaul}). \nonumber \end{align} This problem is now very similar to the multiple-access relay channel problem. The physical interpretation of the above dual problem is the following. We can view $\beta_k$ as the transmit power of user $k$, $\forall k\in \mathcal{K}$, and $\lambda_m$ as the quantization noise level of relay $m$ in the multiple-access relay channel, $\forall m\in \mathcal{M}$. 
Then, problem (\ref{eqn:dual problem3 4}) aims to maximize the sum power of all the users, while constraint (\ref{eqn:dual constraint 1}) requires that user $k$'s rate is no larger than $R_k$, $\forall k\in \mathcal{K}$, and constraint (\ref{eqn:dual constraint fronthaul}) requires that relay $m$'s fronthaul rate is no smaller than $C_m$, $\forall m\in \mathcal{M}$. In the following, we show that for the multiple-access relay channel, this power maximization problem (\ref{eqn:dual problem3 4}) is equivalent to the power minimization problem (\ref{eqn:problem3 1}). \subsection{Equivalence Between Power Maximization Problem (\ref{eqn:dual problem3 4}) and Power Minimization Problem (\ref{eqn:problem3 1})} To show the equivalence between problem (\ref{eqn:dual problem3 4}) and problem (\ref{eqn:problem3 1}), we first have the following proposition. \begin{proposition}\label{proposition broadcast 1} At the optimal solution to problem (\ref{eqn:dual problem3 4}), constraints (\ref{eqn:dual constraint 1}) and (\ref{eqn:dual constraint fronthaul}) should hold with equality, i.e., \begin{align} & \sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\lambda_m|\bar{u}_{k,m}|^2+\sigma^2-\frac{\beta_k|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}= 0, ~ \forall k\in \mathcal{K}, \label{eqn:dual constraint 11} \\ & \sum\limits_{k=1}^K\beta_k|h_{k,m}|^2+\lambda_m+\sigma^2-2^{C_m}\lambda_m= 0, ~ \forall m\in \mathcal{M}. \label{eqn:dual constraint fronthaul 1} \end{align} \end{proposition} \begin{IEEEproof} Please refer to Appendix \ref{appendix broadcast 1}. \end{IEEEproof} It can be shown that, in order to satisfy (\ref{eqn:broadcast positve beta}), (\ref{eqn:broadcast positive lambda}), (\ref{eqn:dual constraint 11}), and (\ref{eqn:dual constraint fronthaul 1}), we must have \begin{align} & \beta_k>0, ~~~ \forall k\in \mathcal{K}, \label{eqn:broadcast positive beta 11} \\ & \lambda_m>0, ~~~ \forall m\in \mathcal{M}. \label{eqn:broadcast positive lambda 11} \end{align}As a result, Proposition \ref{proposition broadcast 1} indicates that problem (\ref{eqn:dual problem3 4}) is equivalent to the following problem: \begin{align} \hspace{-8pt} \mathop{\mathrm{maximize}}_{\{\beta_k\},\{\lambda_m\}} & ~ \sum\limits_{k=1}^K\beta_k\sigma^2 \label{eqn:dual problem3 5} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ (\ref{eqn:dual constraint 11}), ~ (\ref{eqn:dual constraint fronthaul 1}), ~ (\ref{eqn:broadcast positive beta 11}), ~ (\ref{eqn:broadcast positive lambda 11}). \nonumber \end{align} \begin{proposition}\label{proposition broadcast 2} If there exists a set of solutions $\beta_k$'s and $\lambda_m$'s that satisfies (\ref{eqn:dual constraint 11}), (\ref{eqn:dual constraint fronthaul 1}), (\ref{eqn:broadcast positive beta 11}), and (\ref{eqn:broadcast positive lambda 11}) in problem (\ref{eqn:dual problem3 5}), then this solution is unique. \end{proposition} \begin{IEEEproof} Please refer to Appendix \ref{appendix broadcast 2}.
\end{IEEEproof} Since there is a unique solution that satisfies all the constraints in problem (\ref{eqn:dual problem3 5}), it follows that the maximization problem (\ref{eqn:dual problem3 4}) is equivalent to the following minimization problem: \begin{align} \hspace{-8pt} \mathop{\mathrm{minimize}}_{\{\beta_k\},\{\lambda_m\}} & ~ \sum\limits_{k=1}^K\beta_k\sigma^2 \label{eqn:dual problem3 6} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ (\ref{eqn:dual constraint 11}), ~ (\ref{eqn:dual constraint fronthaul 1}), ~ (\ref{eqn:broadcast positive beta 11}), ~ (\ref{eqn:broadcast positive lambda 11}). \nonumber \end{align} Finally, we relate problem (\ref{eqn:dual problem3 6}) to the power minimization problem (\ref{eqn:problem3 1}) in the multiple-access relay channel by the following proposition. \begin{proposition}\label{proposition broadcast 3} Problem (\ref{eqn:dual problem3 6}) is equivalent to \begin{align} \hspace{-8pt} \mathop{\mathrm{minimize}}_{\{\beta_k\},\{\lambda_m\}} & ~ \sum\limits_{k=1}^K\beta_k\sigma^2 \label{eqn:dual problem3 7} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ \sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\lambda_m|\bar{u}_{k,m}|^2+\sigma^2-\frac{\beta_k|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}\leq 0, ~ \forall k\in \mathcal{K}, \label{eqn:dual constraint 2} \\ & ~ \sum\limits_{k=1}^K\beta_k|h_{k,m}|^2+\lambda_m+\sigma^2-2^{C_m}\lambda_m\leq 0, ~ \forall m\in \mathcal{M}, \label{eqn:dual constraint fronthaul 2} \\ & ~ (\ref{eqn:broadcast positive beta 11}), ~ (\ref{eqn:broadcast positive lambda 11}). \nonumber \end{align} \end{proposition} \begin{IEEEproof} Similar to Proposition \ref{proposition broadcast 1}, it can be shown that with the optimal solution to problem (\ref{eqn:dual problem3 7}), constraints (\ref{eqn:dual constraint 2}) and (\ref{eqn:dual constraint fronthaul 2}) should hold with equality. As a result, problems (\ref{eqn:dual problem3 6}) and (\ref{eqn:dual problem3 7}) are equivalent to each other. \end{IEEEproof} The equivalence between the dual problem of problem (\ref{eqn:problem3 3}), i.e., problem (\ref{eqn:dual problem3 4}), and problem (\ref{eqn:dual problem3 7}) is therefore established. The key point here is that by viewing the dual variable $\beta_k$ as the transmit power of user $k$, $\forall k\in \mathcal{K}$, and the dual variable $\lambda_m$ as the quantization noise level of relay $m$, $\forall m \in \mathcal{M}$, in the multiple-access relay channel, problem (\ref{eqn:dual problem3 7}) is exactly the power minimization problem (\ref{eqn:problem3 1}). As a result, we have shown that problem (\ref{eqn:problem3 2}) for the broadcast relay channel is equivalent to problem (\ref{eqn:problem3 1}) for the multiple-access relay channel if the user rate targets $\{R_k\}$ and fronthaul rate constraints $\{C_m\}$ are strictly feasible in both the uplink and downlink. \section{Proof of Theorem \ref{theorem2}}\label{sec 2} In this section, we prove the duality between the multiple-access relay channel with successive interference cancellation at the CP as well as independent compression across the relays and the broadcast relay channel with dirty-paper coding at the CP as well as independent compression across the relays. Similar to Section \ref{sec 1}, we fix the same beamforming vectors $\bar{\mv{u}}_k$'s in the multiple-access relay channel and the broadcast relay channel as shown in (\ref{eqn:beamforming vector}), where $\bar{\mv{u}}_k$'s satisfy (\ref{eqn:assumption}).
Next, we assume that the successive interference cancellation order at the CP for the multiple-access relay channel and the dirty-paper encoding order at the CP for the broadcast relay channel are the reverse of each other, i.e., \begin{align}\label{eqn:encoding decoding order} \tau^{{\rm ul}}(k)=\tau^{{\rm dl}}(K+1-k)=\bar{\tau}(k), ~~~ \forall k\in \mathcal{K}. \end{align}For example, if the decoding order in the multiple-access relay channel is $1,\ldots,K$, then the encoding order in the broadcast relay channel is $K,\ldots,1$, i.e., $\bar{\tau}(k)=k$, $\forall k$. Let $\{R_k \ge 0, k \in \mathcal{K} \}$ and $\{C_m\geq 0, m \in \mathcal{M} \}$ be sets of strictly feasible user rate targets and fronthaul constraints for both the uplink and the downlink. Given the receive beamforming vectors (\ref{eqn:beamforming vector}) and decoding order (\ref{eqn:encoding decoding order}), for the multiple-access relay channel, the transmit power minimization problem subject to the individual rate constraints as well as the individual fronthaul capacity constraints is formulated as \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm ul}\},\{q_m^{\rm ul}\}} & ~ P^{\rm ul}(\{p_k^{\rm ul}\}) \label{eqn:problem4 1} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ R_{\bar{\tau}(k)}^{\rm ul,SIC}(\{p_k^{\rm ul},\bar{\mv{u}}_k\},\{q_m^{\rm ul}\},\{\bar{\tau}(k)\})\geq R_{\bar{\tau}(k)}, ~ \forall k \in \mathcal{K}, \label{eqn:uplink rate constraint4} \\ \hspace{-8pt} & ~ (\ref{eqn:uplink fronthaul constraint3}), ~ (\ref{eqn:uplink nonnegative power}), ~ (\ref{eqn:uplink positive}). \nonumber \end{align} Likewise, given the transmit beamforming vectors (\ref{eqn:beamforming vector}) and the reversed encoding order (\ref{eqn:encoding decoding order}), for the broadcast relay channel, the transmit power minimization problem is formulated as \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm dl}\},\mv{Q}_{{\rm diag}}} & ~ P^{\rm dl}(\{p_k^{\rm dl}\},\mv{Q}_{{\rm diag}}) \label{eqn:problem4 2} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ R_{\bar{\tau}(K+1-k)}^{\rm dl,DPC}(\{p_k^{\rm dl},\bar{\mv{u}}_k\},\mv{Q}_{{\rm diag}},\{\bar{\tau}(K+1-k)\})\geq R_{\bar{\tau}(K+1-k)}, ~ \forall k \in \mathcal{K}, \label{eqn:downlink rate constraint4} \\ \hspace{-8pt} & ~ (\ref{eqn:downlink fronthaul constraint3}), ~ (\ref{eqn:nonnegative power}), ~ (\ref{eqn:positive quantization noise3}). \nonumber \end{align} Similar to the equivalence between problem (\ref{eqn:problem3 1}) and problem (\ref{eqn:dual problem3 4}) in Case I, it can be shown that problem (\ref{eqn:problem4 1}) is equivalent to the following convex problem: \begin{align}\hspace{-8pt} \mathop{\mathrm{maximize}}_{\{p_k^{\rm ul}\},\{q_m^{{\rm ul}}\}} & ~ \sum\limits_{k=1}^Kp_k^{\rm ul}\sigma^2 \label{eqn:problem4 4} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ \sum\limits_{j> k}p_{\bar\tau(j)}^{{\rm ul}}|\bar{\mv{u}}_{\bar\tau(k)}^H\mv{h}_{\bar\tau(j)}|^2+\sum\limits_{m=1}^Mq_m^{{\rm ul}}|\bar{u}_{{\bar\tau(k)},m}|^2+\sigma^2\geq \frac{p_{\bar\tau(k)}^{{\rm ul}}|\bar{\mv{u}}_{\bar\tau(k)}^H\mv{h}_{\bar\tau(k)}|^2}{2^{R_{\bar\tau(k)}}-1}, ~ \forall k\in \mathcal{K}, \\ & \sum\limits_{k=1}^Kp_k^{{\rm ul}}|h_{k,m}|^2+q_m^{{\rm ul}}+\sigma^2\geq 2^{C_m}q_m^{{\rm ul}}, ~ \forall m\in \mathcal{M}, \\ & ~ (\ref{eqn:uplink nonnegative power}), ~ (\ref{eqn:uplink positive}).
\nonumber \end{align} Moreover, we can transform problem (\ref{eqn:problem4 2}) into the following convex problem: \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm dl}\},\{q_m^{{\rm dl}}\}} & ~ \sum\limits_{k=1}^Kp_k^{\rm dl}\sigma^2+\sum\limits_{m=1}^Mq_m^{{\rm dl}}\sigma^2 \label{eqn:problem4 3} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ \frac{p_{\bar{\tau}(k)}^{{\rm dl}}|\bar{\mv{u}}_{\bar{\tau}(k)}^H\mv{h}_{\bar{\tau}(k)}|^2}{2^{R_{\bar{\tau}(k)}}-1}\geq \sum\limits_{j\leq k}p_{\bar{\tau}(j)}^{\rm dl}|\bar{\mv{u}}_{\bar{\tau}(j)}^H\mv{h}_{\bar{\tau}(k)}|^2+\sum\limits_{m=1}^Mq_m^{{\rm dl}}|h_{\bar{\tau}(k),m}|^2+\sigma^2, ~ \forall k \in \mathcal{K}, \label{eqn:downlink rate constraint4 1} \\ \hspace{-8pt} & ~ (\ref{eqn:downlink fronthaul constraint3 2}), ~ (\ref{eqn:nonnegative power}), ~ (\ref{eqn:positive quantization noise3}). \nonumber \end{align} Similar to Case I, we can show that problem (\ref{eqn:problem4 4}) is the Lagrangian dual of problem (\ref{eqn:problem4 3}). Since the method adopted is almost exactly the same as that in Section \ref{sec 1}, we omit the details here. As a result, under the same fronthaul capacity constraints $\{C_m\}$ and total transmit power constraint $P$, any rate-tuple achievable in the multiple-access relay channel can be shown to be achievable also in the broadcast relay channel by setting the transmit beamforming vectors as the receive beamforming vectors in the multiple-access relay channel and the encoding order to be the reverse of the decoding order in the multiple-access relay channel, and vice versa. By trying all the feasible beamforming vectors, it follows that \begin{align} \bar{\mathcal{R}}_{{\rm II}}^{\rm ul}(\{C_m\},P,\{\bar{\tau}(k)\})=\mathcal{\bar R}_{{\rm II}}^{\rm dl}(\{C_m\},P,\{\bar{\tau}(K+1-k)\}). \end{align}Then, by trying all the encoding/decoding orders, we can show that the achievable rate regions of the multiple-access relay channel under successive interference cancellation as well as independent compression and the broadcast relay channel under dirty-paper coding as well as independent compression are identical, i.e., $\mathcal{R}_{{\rm II}}^{\rm ul}(\{C_m\},P)=\mathcal{R}_{{\rm II}}^{\rm dl}(\{C_m\},P)$. \section{Proof of Theorem \ref{theorem1}}\label{sec 3} In this section, we prove the duality between the multiple-access relay channel with linear decoding at the CP as well as Wyner-Ziv compression across the relays and the broadcast relay channel with linear encoding at the CP as well as multivariate compression across the relays. Similar to Sections \ref{sec 1} and \ref{sec 2}, we fix the same beamforming vectors $\bar{\mv{u}}_k$'s in the multiple-access relay channel and the broadcast relay channel as shown in (\ref{eqn:beamforming vector}), where $\bar{\mv{u}}_k$'s satisfy (\ref{eqn:assumption}). Next, for Wyner-Ziv compression across the relays in the multiple-access relay channel and multivariate compression across the relays in the broadcast relay channel, we assume that the decompression order is the reverse of the compression order, i.e., \begin{align}\label{eqn:compression decompression order} \rho^{{\rm ul}}(m)=\rho^{{\rm dl}}(M+1-m)=\bar{\rho}(m), ~~~ \forall m\in \mathcal{M}. \end{align}For example, if the decompression order in the multiple-access relay channel is $1,\ldots,M$, then the compression order in the broadcast relay channel is $M,\ldots,1$, i.e., $\bar{\rho}(m)=m$, $\forall m$.
Let $\{R_k \ge 0, k \in \mathcal{K} \}$ and $\{C_m\geq 0, m \in \mathcal{M} \}$ be sets of strictly feasible user rate targets and fronthaul constraints for both the uplink and the downlink. For the multiple-access relay channel, given the beamforming vectors (\ref{eqn:beamforming vector}) and the decompression order (\ref{eqn:compression decompression order}), the transmit power minimization problem subject to the individual rate constraints as well as the individual fronthaul capacity constraints is formulated as \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm ul}\},\{q_m^{\rm ul}\}} & ~ P^{\rm ul}(\{p_k^{\rm ul}\}) \label{eqn:problem 1} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ C_{\bar{\rho}(m)}^{{\rm ul,WZ}}(\{p_k^{\rm ul}\},q_{\rho^{{\rm ul}}(1)}^{\rm ul},\ldots,q_{\rho^{{\rm ul}}(m)}^{\rm ul},\{\bar{\rho}(m)\})\leq C_{\bar{\rho}(m)}, ~ \forall m \in \mathcal{M}, \label{eqn:uplink fronthaul constraint} \\ \hspace{-8pt} & ~ (\ref{eqn:uplink rate constraint}), ~ (\ref{eqn:uplink nonnegative power}), ~ (\ref{eqn:uplink positive}). \nonumber \end{align} Likewise, for the broadcast relay channel as described in Section \ref{sec:Broadcast Relay Channel}, given the same transmit beamforming vectors (\ref{eqn:beamforming vector}) and a reverse compression order (\ref{eqn:compression decompression order}), the transmit power minimization problem is formulated as \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm dl}\},\mv{Q}} & ~ P^{\rm dl}(\{p_k^{\rm dl}\},\mv{Q}) \label{eqn:problem 2} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ C_{\bar{\rho}(M+1-m)}^{\rm dl,MV}(\{p_k^{\rm dl},\bar{\mv{u}}_k\},\mv{Q},\{\bar{\rho}(M+1-m)\})\leq C_{\bar{\rho}(M+1-m)}, ~ \forall m \in \mathcal{M}, \label{eqn:downlink fronthaul constraint} \\ \hspace{-8pt} & ~ \mv{Q}\succeq \mv{0}, \label{eqn:positive} \\ \hspace{-8pt} & ~ (\ref{eqn:downlink rate constraint}), ~ (\ref{eqn:nonnegative power}). \nonumber \end{align} If we can show that problem (\ref{eqn:problem 1}) and problem (\ref{eqn:problem 2}) are equivalent, then it implies that under the same fronthaul capacity constraints $\{C_m\}$ and total transmit power constraint $P$, any achievable rate-tuple in the multiple-access relay channel must also be achievable in the broadcast relay channel by setting the transmit beamforming vectors as the receive beamforming vectors in the multiple-access relay channel and the compression order as the reverse of the decompression order in the multiple-access channel, and vice versa. In addition, by trying all the feasible beamforming vectors, it follows that \begin{align} \bar{\mathcal{R}}_{{\rm III}}^{\rm ul}(\{C_m\},P,\{\bar{\rho}(m)\})=\mathcal{\bar R}_{{\rm III}}^{\rm dl}(\{C_m\},P,\{\bar{\rho}(M+1-m)\}). \end{align} Finally, by trying all the compression/decompression orders, we can show that the achievable rate regions of the multiple-access relay channel under linear decoding as well as Wyner-Ziv compression and the broadcast relay channel under linear encoding as well as multivariate compression are identical, i.e., $\mathcal{R}_{{\rm III}}^{\rm ul}(\{C_m\},P)=\mathcal{R}_{{\rm III}}^{\rm dl}(\{C_m\},P)$. The key to proving Theorem \ref{theorem1} is therefore to show that the power minimization problems (\ref{eqn:problem 1}) and (\ref{eqn:problem 2}) are equivalent. 
However, different from problems (\ref{eqn:problem3 1}) and (\ref{eqn:problem3 2}) in Section \ref{sec 1} or problems (\ref{eqn:problem4 1}) and (\ref{eqn:problem4 2}) in Section \ref{sec 2}, problems (\ref{eqn:problem 1}) and (\ref{eqn:problem 2}) cannot be transformed into LPs due to the complicated expressions of the fronthaul rates given in (\ref{eqn:uplink fronthaul rate}) and (\ref{eqn:downlink fronthaul rate}). In the rest of this section, we validate the equivalence between problems (\ref{eqn:problem 1}) and (\ref{eqn:problem 2}) based on the Lagrangian duality of SDPs. For convenience, we only consider the case in which the decompression order in the multiple-access relay channel and the compression order in the broadcast relay channel are respectively set as \begin{align} & \rho^{{\rm ul}}(m)=m, ~~~ \forall m\in \mathcal{M}, \label{eqn:de order} \\ & \rho^{{\rm dl}}(m)=M+1-m, ~~~ \forall m\in \mathcal{M}.\label{eqn:com order} \end{align}For any other decompression order and the corresponding reversed compression order, the equivalence between problems (\ref{eqn:problem 1}) and (\ref{eqn:problem 2}) can be proved in a similar way. The proof involves the following three steps. \subsection{Convex Reformulation of Problem (\ref{eqn:problem 2}) and Its Dual Problem} First, we transform problem (\ref{eqn:problem 2}) into an equivalent convex problem. \begin{proposition}\label{proposition1} The power minimization problem (\ref{eqn:problem 2}) in the broadcast relay channel is equivalent to the following problem: \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm dl}\},\mv{Q}} & ~ \sum\limits_{k=1}^Kp_k^{\rm dl}\sigma^2+{\rm tr}(\mv{Q})\sigma^2 \label{eqn:problem 3} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ \frac{p_k^{\rm dl}|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}\geq \sum\limits_{j\neq k} p_j^{\rm dl}|\bar{\mv{u}}_j^H\mv{h}_k|^2+{\rm tr}(\mv{Q}\mv{h}_k\mv{h}_k^H)+\sigma^2, ~~~ \forall k\in \mathcal{K}, \label{eqn:downlink convex rate constraint} \\ & ~ 2^{C_m}\left[\begin{array}{cc}\mv{0}_{(m-1)\times (m-1)} & \mv{0}_{(m-1)\times (M-m+1)} \\ \mv{0}_{(M-m+1)\times (m-1)} & \mv{Q}^{(m:M,m:M)}\end{array}\right]-\mv{E}_m(\mv{Q}+\mv{\Psi})\mv{E}_m^H\succeq \mv{0}, ~~~ \forall m\in \mathcal{M}, \label{eqn:downlink convex fronthaul constraint} \\ & ~ (\ref{eqn:nonnegative power}), ~ (\ref{eqn:positive}), \nonumber \end{align}where $\mv{E}_m\in \mathbb{C}^{M\times M}$ denotes the matrix whose $m$-th diagonal element is $1$ and whose other elements are $0$, and $\mv{\Psi}={\rm diag}\left(\sum_{k=1}^Kp_k^{\rm dl}|\bar{u}_{k,1}|^2,\ldots,\sum_{k=1}^Kp_k^{\rm dl}|\bar{u}_{k,M}|^2\right)$. \end{proposition} \begin{IEEEproof} Please refer to Appendix \ref{appendix1}. \end{IEEEproof} It can be seen that problem (\ref{eqn:problem 3}) is an SDP and is thus convex. Further, by our choice of strictly feasible $\{R_k\}$, it satisfies Slater's condition. As a result, strong duality holds for problem (\ref{eqn:problem 3}). In other words, problem (\ref{eqn:problem 3}) is equivalent to its dual problem. In the following, we derive the dual problem of (\ref{eqn:problem 3}).
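Before doing so, the block structure of constraint (\ref{eqn:downlink convex fronthaul constraint}) may be easier to parse in computational form. The short Python/NumPy sketch below is purely illustrative (the helper name is ours, and $0$-based indices replace the paper's $1$-based index $m$): it evaluates the smallest eigenvalue of each LMI in (\ref{eqn:downlink convex fronthaul constraint}) for a candidate $(\{p_k^{\rm dl}\},\mv{Q})$, so that all-nonnegative outputs certify that the candidate satisfies the multivariate-compression fronthaul constraints.

\begin{verbatim}
import numpy as np

def fronthaul_lmi_margins(p_dl, Q, U, C):
    """Smallest eigenvalue of each fronthaul LMI for a candidate (p_dl, Q).

    p_dl: (K,) downlink powers; Q: (M, M) Hermitian PSD quantization
    covariance; U: (M, K) fixed beamformers (column k is u_k); C: (M,)
    fronthaul capacities in bits.  Index m below is 0-based.
    """
    M = Q.shape[0]
    # Psi = diag(sum_k p_k |u_{k,m}|^2): beamformed signal power per relay
    Psi = np.diag((np.abs(U) ** 2) @ p_dl)
    margins = np.empty(M)
    for m in range(M):
        lhs = np.zeros((M, M), dtype=complex)
        lhs[m:, m:] = (2.0 ** C[m]) * Q[m:, m:]  # 2^{C_m} blkdiag(0, Q^{(m:M,m:M)})
        E = np.zeros((M, M)); E[m, m] = 1.0      # E_m: 1 at the (m,m) entry
        lhs -= E @ (Q + Psi) @ E                 # subtract E_m (Q + Psi) E_m^H
        margins[m] = np.linalg.eigvalsh(lhs).min()
    return margins                               # all >= 0 iff feasible
\end{verbatim}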
\begin{proposition}\label{proposition2} The dual problem of problem (\ref{eqn:problem 3}) is \begin{align}\hspace{-8pt} \mathop{\mathrm{maximize}}_{\{\beta_k\},\{\mv{\Lambda}_m\}} & ~ \sum\limits_{k=1}^K\beta_k \sigma^2 \label{eqn:problem 4} \\ \mathrm {subject ~ to} & ~ \sigma^2+\sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\mv{\Lambda}_m^{(m,m)}|\bar{u}_{k,m}|^2-\frac{\beta_k|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}\geq 0, ~ \forall k\in \mathcal{K}, \label{eqn:dual uplink SINR constraint} \\ & ~ \sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H+\sum\limits_{m=1}^M\mv{E}_m^H\mv{\Lambda}_m\mv{E}_m \nonumber \\ & ~~~~~~~~ -\sum\limits_{m=1}^M2^{C_m}\left[\begin{array}{cc}\mv{0}_{(m-1)\times (m-1)} & \mv{0}_{(m-1)\times (M-m+1)} \\ \mv{0}_{(M-m+1)\times (m-1)} & \mv{\Lambda}_m^{(m:M,m:M)}\end{array}\right] \succeq \mv{0}, \label{eqn:dual uplink fronthaul constraint} \\ & ~ \beta_k\geq 0, ~ \forall k\in \mathcal{K}, \label{eqn:nonegative beta} \\ & ~ \mv{\Lambda}_m\succeq \mv{0}, ~ \forall m\in \mathcal{M}, \label{eqn:nonegative Phi} \end{align} where the $\beta_k$'s and the matrices $\mv{\Lambda}_m\in \mathbb{C}^{M\times M}$ are the dual variables associated with constraints (\ref{eqn:downlink convex rate constraint}) and (\ref{eqn:downlink convex fronthaul constraint}), respectively. \end{proposition} \begin{IEEEproof} Please refer to Appendix \ref{appendix2}. \end{IEEEproof} It can be observed that, similar to the uplink-downlink duality shown in Section \ref{sec 1}, the dual problem (\ref{eqn:problem 4}) of the power minimization problem (\ref{eqn:problem 3}) in the broadcast relay channel is closely related to the power minimization problem (\ref{eqn:problem 1}) of the multiple-access relay channel. Specifically, if we interpret the dual variables $\beta_k$'s as the uplink transmit powers $p_k^{{\rm ul}}$'s and the dual variables $\mv{\Lambda}_m^{(m,m)}$'s as the uplink quantization noise levels $q_m^{{\rm ul}}$'s, then the objective of problem (\ref{eqn:problem 4}) is to maximize the total transmit power, and constraint (\ref{eqn:dual uplink SINR constraint}) ensures that the rate of each user $k$ in the multiple-access relay channel is no larger than its rate requirement. The remaining challenge is to transform constraint (\ref{eqn:dual uplink fronthaul constraint}) in problem (\ref{eqn:problem 4}) into a set of $M$ fronthaul capacity constraints that are of the same form as constraints (\ref{eqn:uplink fronthaul constraint}) in problem (\ref{eqn:problem 1}). Note that in contrast to the case with independent compression shown in Section \ref{sec 1}, where constraint (\ref{eqn:dual constraint fronthaul}) of the dual problem (\ref{eqn:dual problem3 4}) in the broadcast relay channel is directly the reverse of constraint (\ref{eqn:uplink fronthaul constraint3}) of problem (\ref{eqn:problem3 1}) in the multiple-access relay channel, in the case with Wyner-Ziv compression and multivariate compression, the validation of the equivalence between problem (\ref{eqn:problem 1}) and problem (\ref{eqn:problem 2}) is considerably more complicated. \subsection{Equivalent Transformation of Constraint (\ref{eqn:dual uplink fronthaul constraint}) in the Dual Problem (\ref{eqn:problem 4})}\label{sec 31} In the following, we transform constraint (\ref{eqn:dual uplink fronthaul constraint}) in problem (\ref{eqn:problem 4}) into the form of constraint (\ref{eqn:uplink fronthaul constraint}) in problem (\ref{eqn:problem 1}).
First, we introduce some auxiliary variables to problem (\ref{eqn:problem 4}). \begin{proposition}\label{proposition3} Problem (\ref{eqn:problem 4}) is equivalent to the following problem: \begin{align}\hspace{-8pt} \mathop{\mathrm{maximize}}_{\{\beta_k\},\{\mv{\Lambda}_m,\mv{A}_m\}} & ~ \sum\limits_{k=1}^K\beta_k \sigma^2 \label{eqn:problem 5} \\ \mathrm {subject ~ to} \ \ & ~ \sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H+\sum\limits_{m=1}^M\mv{E}_m^H\mv{\Lambda}_m\mv{E}_m \succeq \mv{A}_1, \label{eqn:dual uplink fronthaul constraint 1} \\ & ~ \mv{A}_m=2^{C_m}\mv{\Lambda}_m^{(m:M,m:M)}+\left[\begin{array}{cc}0 & \mv{0}_{1\times (M-m)} \\ \mv{0}_{(M-m)\times 1} & \mv{A}_{m+1} \end{array}\right], ~ m=1,\ldots,M-1, \label{eqn:A1} \\ & ~ \mv{A}_M=2^{C_M}\mv{\Lambda}_M^{(M,M)}, \label{eqn:A2} \\ & ~ (\ref{eqn:dual uplink SINR constraint}), ~ (\ref{eqn:nonegative beta}), ~ (\ref{eqn:nonegative Phi}), \nonumber \end{align}where $\mv{A}_m\in \mathbb{C}^{(M-m+1)\times (M-m+1)}$, $m=1,\ldots,M$, are auxiliary variables. \end{proposition} \begin{IEEEproof} Please refer to Appendix \ref{appendix3}. \end{IEEEproof} Next, we show some key properties of the optimal solution to problem (\ref{eqn:problem 5}). \begin{proposition}\label{lemma1} The optimal solution to problem (\ref{eqn:problem 5}) must satisfy \begin{align} & \sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H+\sum\limits_{m=1}^M\mv{E}_m^H\mv{\Lambda}_m\mv{E}_m = \mv{A}_1, \label{eqn:optimal condition 4} \\ & \mv{A}_m\succ \mv{0}, ~~~ m=1,\ldots,M, \label{eqn:optimal condition 2} \\ & 2^{C_m}\mv{\Lambda}_m^{(m:M,m:M)}=\frac{\mv{A}_m^{(1:M-m+1,1)}\mv{A}_m^{(1,1:M-m+1)}}{\mv{A}_m^{(1,1)}}, ~~~ m=1,\ldots,M. \label{eqn:optimal condition} \end{align} \end{proposition} \begin{IEEEproof} Please refer to Appendix \ref{appendix4}. \end{IEEEproof} Proposition \ref{lemma1} indicates that the optimal $\mv{\Lambda}_m$'s to problem (\ref{eqn:problem 5}) are rank-one matrices. This property can be utilized to simplify problem (\ref{eqn:problem 5}) as follows. \begin{proposition}\label{proposition4} Define \begin{align}\label{eqn:Omega} \mv{\Omega}&=\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H+\sum\limits_{m=1}^M\mv{E}_m^H\mv{\Lambda}_m\mv{E}_m \nonumber \\ & =\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H+{\rm diag}(\mv{\Lambda}_1^{(1,1)},\ldots,\mv{\Lambda}_M^{(M,M)})\succ \mv{0}. \end{align}Then, problem (\ref{eqn:problem 5}) is equivalent to \begin{align}\hspace{-8pt} \mathop{\mathrm{maximize}}_{\{\beta_k\},\{\mv{\Lambda}_m^{(m,m)}\}} & ~ \sum\limits_{k=1}^K\beta_k \sigma^2 \label{eqn:problem 6} \\ \mathrm {subject ~ to} \ \ & ~ 2^{C_m}\mv{\Lambda}_m^{(m,m)}\leq \mv{\Omega}^{(m,m)}-\mv{\Omega}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\mv{\Omega}^{(1:m-1,m)}, ~ \forall m \in \mathcal{M}, \label{eqn:eqv dual uplink fronthaul constraint} \\ & ~ \mv{\Lambda}_m^{(m,m)} \ge 0, ~ \forall m\in \mathcal{M}, \label{eqn:positive Phi} \\ & ~ (\ref{eqn:dual uplink SINR constraint}), ~ (\ref{eqn:nonegative beta}). \nonumber \end{align} \end{proposition} \begin{IEEEproof} Please refer to Appendix \ref{appendix5}. \end{IEEEproof} Note that the difference between problem (\ref{eqn:problem 6}) and problem (\ref{eqn:problem 4}) is twofold. First, the optimization variables reduce from $\mv{\Lambda}_m$'s to their $m$-th diagonal elements, i.e., $\mv{\Lambda}_m^{(m,m)}$'s.
Second, the matrix-form constraint (\ref{eqn:dual uplink fronthaul constraint}) reduces to the $M$ scalar constraints given in (\ref{eqn:eqv dual uplink fronthaul constraint}). If we interpret the dual variables $\beta_k$'s as the transmit powers $p_k^{{\rm ul}}$'s and the dual variables $\mv{\Lambda}_m^{(m,m)}$'s as the quantization noise levels $q_m^{{\rm ul}}$'s in the multiple-access relay channel, then problem (\ref{eqn:problem 6}) is equivalent to the maximization of the total transmit power subject to the constraints that the rate of each user $k$ is no larger than $R_k$, $\forall k$, and the fronthaul rate of each relay $m$ is no smaller than $C_m$, $\forall m$. As a result, problem (\ref{eqn:problem 6}) is a reverse counterpart of the power minimization problem (\ref{eqn:problem 1}) in the multiple-access relay channel. In the following, we show that this problem is indeed equivalent to the power minimization problem (\ref{eqn:problem 1}). \subsection{Equivalence Between Power Maximization Problem (\ref{eqn:problem 6}) and Power Minimization Problem (\ref{eqn:problem 1})}\label{sec:step 3} First, we prove one important property of the optimal solution to problem (\ref{eqn:problem 6}). \begin{proposition}\label{proposition5} With the optimal solution to problem (\ref{eqn:problem 6}), constraints (\ref{eqn:dual uplink SINR constraint}) and (\ref{eqn:eqv dual uplink fronthaul constraint}) should hold with equality, i.e., \begin{align} & \sigma^2+\sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\mv{\Lambda}_m^{(m,m)}|\bar{u}_{k,m}|^2-\frac{\beta_k|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}= 0, ~ \forall k\in \mathcal{K}, \label{eqn:dual uplink SINR constraint 01} \\ & 2^{C_m}\mv{\Lambda}_m^{(m,m)}= \mv{\Omega}^{(m,m)}-\mv{\Omega}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\mv{\Omega}^{(1:m-1,m)}, ~ \forall m\in \mathcal{M}. \label{eqn:eqv dual uplink fronthaul constraint 01} \end{align} \end{proposition} \begin{IEEEproof} Please refer to Appendix \ref{appendix6}. \end{IEEEproof} According to Proposition \ref{proposition5}, problem (\ref{eqn:problem 6}) is equivalent to the following problem: \begin{align}\hspace{-8pt} \mathop{\mathrm{maximize}}_{\{\beta_k\},\{\mv{\Lambda}_m^{(m,m)}\}} & ~ \sum\limits_{k=1}^K\beta_k \sigma^2 \label{eqn:problem 7} \\ \mathrm {subject ~ to} \ \ & ~ (\ref{eqn:nonegative beta}), ~ (\ref{eqn:positive Phi}), ~ (\ref{eqn:dual uplink SINR constraint 01}), ~ (\ref{eqn:eqv dual uplink fronthaul constraint 01}). \nonumber \end{align} Next, we show one important property of the optimal solution to problem (\ref{eqn:problem 7}). \begin{proposition}\label{proposition6} If there exists a set of solutions $\{\beta_k, \mv{\Lambda}_m^{(m,m)}\}$ that satisfies the constraints (\ref{eqn:nonegative beta}), (\ref{eqn:positive Phi}), (\ref{eqn:dual uplink SINR constraint 01}), and (\ref{eqn:eqv dual uplink fronthaul constraint 01}) in problem (\ref{eqn:problem 7}), then this solution is unique. \end{proposition} \begin{IEEEproof} Please refer to Appendix \ref{appendix7}. \end{IEEEproof} Note that the above proposition is very similar to Proposition \ref{proposition broadcast 2} for the case with independent compression shown in Section \ref{sec 1}. However, the proof of Proposition \ref{proposition6} is much more complicated.
This is because for the case with independent compression, the equations in (\ref{eqn:dual constraint 11}) and (\ref{eqn:dual constraint fronthaul 1}) are linear in $\beta_k$'s and $\lambda_m$'s, while for the case with Wyner-Ziv compression and multivariate compression, the equations in (\ref{eqn:eqv dual uplink fronthaul constraint 01}) are not linear in $\beta_k$'s and $\mv{\Lambda}_m^{(m,m)}$'s. Since there is a unique solution that satisfies all the constraints in problem (\ref{eqn:problem 7}), it follows that the maximization problem (\ref{eqn:problem 7}) is equivalent to the following minimization problem: \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{\beta_k\},\{\mv{\Lambda}_m^{(m,m)}\}} & ~ \sum\limits_{k=1}^K\beta_k \sigma^2 \label{eqn:problem 9} \\ \mathrm {subject ~ to} \ \ & ~ (\ref{eqn:nonegative beta}), ~ (\ref{eqn:positive Phi}), ~ (\ref{eqn:dual uplink SINR constraint 01}), ~ (\ref{eqn:eqv dual uplink fronthaul constraint 01}). \nonumber \end{align} Finally, we relate problem (\ref{eqn:problem 9}) to the power minimization problem (\ref{eqn:problem 1}) in the multiple-access relay channel by the following proposition. \begin{proposition}\label{proposition7} Problem (\ref{eqn:problem 9}) is equivalent to \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{\beta_k\},\{\mv{\Lambda}_m^{(m,m)}\}} & ~ \sum\limits_{k=1}^K\beta_k \sigma^2 \label{eqn:problem 10} \\ \mathrm {subject ~ to} \ \ & ~ \sigma^2+\sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\mv{\Lambda}_m^{(m,m)}|\bar{u}_{k,m}|^2-\frac{\beta_k|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}\leq 0, ~ \forall k \in \mathcal{K}, \label{eqn:dual uplink SINR constraint 02} \\ & ~ 2^{C_m}\mv{\Lambda}_m^{(m,m)}\geq \mv{\Omega}^{(m,m)}-\mv{\Omega}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\mv{\Omega}^{(1:m-1,m)}, ~ \forall m \in \mathcal{M}, \label{eqn:eqv dual uplink fronthaul constraint 02} \\ & ~ (\ref{eqn:nonegative beta}), ~ (\ref{eqn:positive Phi}). \nonumber \end{align} \end{proposition} \begin{IEEEproof} Similar to Proposition \ref{proposition5}, it can be shown that with the optimal solution to problem (\ref{eqn:problem 10}), constraints shown in (\ref{eqn:dual uplink SINR constraint 02}) and (\ref{eqn:eqv dual uplink fronthaul constraint 02}) should hold with equality. As a result, problem (\ref{eqn:problem 9}) is equivalent to problem (\ref{eqn:problem 10}). Proposition \ref{proposition7} is thus proved. \end{IEEEproof} In problem (\ref{eqn:problem 10}), the dual variables $\beta_k$'s can be viewed as the transmit powers $p_k^{{\rm ul}}$'s, and the dual variables $\mv{\Lambda}_m^{(m,m)}$'s can be viewed as the quantization noise levels $q_m^{{\rm ul}}$'s in the multiple-access relay channel. Moreover, according to (\ref{eqn:Gamma}) and (\ref{eqn:Omega}), $\mv{\Omega}$ can be viewed as the covariance matrix of the received signal in the multiple-access relay channel, i.e., $\mv{\Gamma}$ as defined by (\ref{eqn:Gamma}). As a result, by combining Propositions \ref{proposition1} -- \ref{proposition7}, we can conclude that the power minimization problem (\ref{eqn:problem 2}) for the broadcast relay channel is equivalent to problem (\ref{eqn:problem 10}), and is thus equivalent to the power minimization problem (\ref{eqn:problem 1}) for the multiple-access relay channel. Theorem \ref{theorem1} is thus proved.
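The Schur-complement equalities in (\ref{eqn:eqv dual uplink fronthaul constraint 01}) are also straightforward to evaluate numerically. The following Python/NumPy sketch is our own illustration (the function name and $0$-based indexing are assumptions, not notation from the paper): given $\{\beta_k\}$ and the diagonal values $\{\mv{\Lambda}_m^{(m,m)}\}$, it forms $\mv{\Omega}$ as in (\ref{eqn:Omega}) and returns the fronthaul rates implied by holding (\ref{eqn:eqv dual uplink fronthaul constraint 01}) with equality under the decompression order (\ref{eqn:de order}), consistent with the interpretation of $\mv{\Omega}$ as the received-signal covariance $\mv{\Gamma}$.

\begin{verbatim}
import numpy as np

def implied_fronthaul_bits(beta, lam, H, sigma2):
    """Fronthaul rates C_m implied by the Schur-complement equalities.

    beta: (K,) uplink powers; lam: (M,) quantization noise levels, i.e.,
    the diagonal values Lambda_m^{(m,m)}; H: (M, K) with column k equal
    to h_k.  Returns C_m = log2(S_m / lam_m), where S_m is the Schur
    complement of Omega's leading principal block (0-based index m).
    """
    M = H.shape[0]
    Omega = sigma2 * np.eye(M) + (H * beta) @ H.conj().T + np.diag(lam)
    C = np.empty(M)
    for m in range(M):
        a = Omega[m, :m]                 # Omega^{(m,1:m-1)}
        s = Omega[m, m].real
        if m > 0:                        # subtract the Schur-complement term
            s -= (a @ np.linalg.solve(Omega[:m, :m], a.conj())).real
        C[m] = np.log2(s / lam[m])
    return C
\end{verbatim}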
To summarize the difference in the methodology for proving Theorem \ref{theorem3} and Theorem \ref{theorem1}, we note that although the proof of Theorem \ref{theorem1} for the case of Wyner-Ziv compression in the multiple-access relay channel and multivariate compression in the broadcast relay channel follows the same line of reasoning as the proof of Theorem \ref{theorem3} for the case with independent compression (i.e., we first use the Lagrangian duality method to find the dual problem of the power minimization problem in the broadcast relay channel, and then show that this dual problem is equivalent to the power minimization problem in the multiple-access relay channel), the validation of the equivalence between the dual problem in the broadcast relay channel and the power minimization problem in the multiple-access relay channel is much more involved for the case of Wyner-Ziv and multivariate compression. This is because (i) the downlink power minimization problem becomes an SDP, rather than an LP as in the case of independent compression, and the duality theory of SDPs is more complicated than that of LPs; (ii) the step in Section \ref{sec 31} is needed, since constraint (\ref{eqn:dual uplink fronthaul constraint}) in the dual problem (\ref{eqn:problem 4}) is not a direct reverse of constraint (\ref{eqn:uplink fronthaul constraint}) in problem (\ref{eqn:problem 1}); and (iii) Proposition \ref{proposition6} involves nonlinear equations, and it is much more difficult to show that its solution is unique. \section{Proof of Theorem \ref{theorem4}}\label{sec 4} In this section, we prove the duality between the multiple-access relay channel with successive interference cancellation at the CP as well as Wyner-Ziv compression across the relays and the broadcast relay channel with dirty-paper coding at the CP as well as multivariate compression across the relays. Similar to Sections \ref{sec 1}, \ref{sec 2}, and \ref{sec 3}, we fix the same beamforming vectors $\bar{\mv{u}}_k$'s in the multiple-access relay channel and the broadcast relay channel as shown in (\ref{eqn:beamforming vector}), where $\bar{\mv{u}}_k$'s satisfy (\ref{eqn:assumption}). Next, for successive interference cancellation at the CP in the multiple-access relay channel and dirty-paper coding at the CP in the broadcast relay channel, we assume that the decoding order is the reverse of the encoding order, i.e., (\ref{eqn:encoding decoding order}). For example, if the decoding order in the multiple-access relay channel is $1,\ldots,K$, then the encoding order in the broadcast relay channel is $K,\ldots,1$. Moreover, for Wyner-Ziv compression across the relays in the multiple-access relay channel and multivariate compression across the relays in the broadcast relay channel, we assume that the decompression order is the reverse of the compression order, i.e., (\ref{eqn:compression decompression order}). For example, if the decompression order in the multiple-access relay channel is $1,\ldots,M$, then the compression order in the broadcast relay channel is $M,\ldots,1$. Let $\{R_k>0, k \in \mathcal{K} \}$ and $\{C_m>0, m \in \mathcal{M} \}$ denote sets of strictly feasible user rate requirements and fronthaul rate requirements under the beamforming vectors (\ref{eqn:beamforming vector}), decoding order (\ref{eqn:encoding decoding order}), and decompression order (\ref{eqn:compression decompression order}) for both the uplink and the downlink.
For the multiple-access relay channel, the transmit power minimization problem subject to the individual rate constraints as well as the individual fronthaul capacity constraints is formulated as \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm ul}\},\{q_m^{\rm ul}\}} & ~ P^{\rm ul}(\{p_k^{\rm ul}\}) \label{eqn:problem5 1} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ (\ref{eqn:uplink nonnegative power}), ~ (\ref{eqn:uplink positive}), ~ (\ref{eqn:uplink rate constraint4}), ~ (\ref{eqn:uplink fronthaul constraint}). \nonumber \end{align} Likewise, given the transmit beamforming vectors (\ref{eqn:beamforming vector}), a reversed encoding order (\ref{eqn:encoding decoding order}), and a reversed compression order (\ref{eqn:compression decompression order}), for the broadcast relay channel, the transmit power minimization problem is formulated as \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm dl}\},\mv{Q}} & ~ P^{\rm dl}(\{p_k^{\rm dl}\},\mv{Q}) \label{eqn:problem5 2} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ (\ref{eqn:nonnegative power}), ~ (\ref{eqn:positive quantization noise3}), ~ (\ref{eqn:downlink rate constraint4}), ~ (\ref{eqn:downlink fronthaul constraint}). \nonumber \end{align} Similar to Section \ref{sec 3}, we can apply the Lagrangian duality method to show that problem (\ref{eqn:problem5 2}) is equivalent to problem (\ref{eqn:problem5 1}). Since the method adopted is almost the same as that in Section \ref{sec 3}, we omit the proof here. As a result, under the same fronthaul capacity constraints $C_m$'s and total transmit power constraint $P$, any achievable rate-tuple for the multiple-access relay channel is also achievable for the broadcast relay channel by setting the transmit beamforming vectors as the receive beamforming vectors in the multiple-access relay channel and by setting the encoding and compression orders to be the reverse of the decoding and decompression orders in the multiple-access relay channel, and vice versa. Hence, by trying all the feasible beamforming vectors, it follows that \begin{align} \bar{\mathcal{R}}_{{\rm IV}}^{\rm ul}(\{C_m\},P,\{\bar{\rho}(m)\},\{\bar{\tau}(k)\})=\mathcal{\bar R}_{{\rm IV}}^{\rm dl}(\{C_m\},P,\{\bar{\rho}(M+1-m)\},\{\bar{\tau}(K+1-k)\}). \end{align} Then, by trying all the encoding/decoding orders and compression/decompression orders, we can show that the achievable rate regions of the multiple-access relay channel under successive interference cancellation as well as Wyner-Ziv compression and the broadcast relay channel under dirty-paper coding as well as multivariate compression are identical, i.e., $\mathcal{R}_{{\rm IV}}^{\rm ul}(\{C_m\},P)=\mathcal{R}_{{\rm IV}}^{\rm dl}(\{C_m\},P)$. \section{Duality Relationships}\label{sec:Summary} The main result of this paper is that the Lagrangian duality technique can be used to establish the duality between the sum-power minimization problems for the multiple-access relay channel and the broadcast relay channel. The key observation is that under fixed beamformers, the sum-power minimization problems in both the uplink and the downlink can be transformed into convex optimization problems. In particular, with independent compression (i.e., Cases I and II), both the uplink and downlink problems are LPs, while with Wyner-Ziv or multivariate compression (i.e., Cases III and IV), both the uplink and downlink problems can be transformed into SDPs.
Note that for the uplink problem under Wyner-Ziv compression, the transformation into an SDP also involves turning the minimization into a maximization and reversing the direction of the inequalities in the rate and fronthaul constraints. We further show that the resulting convex problems in the uplink and the downlink after the transformations are the Lagrangian duals of each other for Cases I--IV. Moreover, the dual variables have interesting physical interpretations. Specifically, the Lagrangian dual variables corresponding to the downlink achievable rate constraints are the optimal uplink transmit powers; the dual variables corresponding to the downlink fronthaul rate constraints are the optimal uplink quantization noise levels. These interpretations are summarized in Table \ref{table:interpretation}. \begin{table} \begin{center} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \caption{Primal and dual relationships between the sum-power minimization problems for the multiple-access relay channel and the broadcast relay channel} \label{table:interpretation} \begin{tabular}{|c|l|l|} \hline \multirow{2}{*}{} & \multicolumn{1}{c|}{Broadcast relay channel} & \multicolumn{1}{c|}{Multiple-access relay channel} \\ \cline{2-3} & \multicolumn{2}{c|}{Fixing beamformers $\{\mathbf{w}_k = \mathbf{v}_k\}$} \\ \hline Case I & \tabincell{l}{Primal problem: Power minimization (\ref{eqn:problem3 2}) \\ $\bullet$ Rate constraint (\ref{eqn:downlink rate constraint}): Optimal dual variables $\{\beta_k\}$\\$\bullet$ Fronthaul constraint (\ref{eqn:downlink fronthaul constraint3}): Optimal dual variables $\{\lambda_m\}$} & \tabincell{l}{Dual problem: Power minimization (\ref{eqn:problem3 1}) \\ $\bullet$ Optimal transmit powers: $\{p_k^{{\rm ul}}=\beta_k\}$\\$\bullet$ Optimal quantization noises: $\{q_m^{{\rm ul}}=\lambda_m\}$ } \\ \hline Case II & \tabincell{l}{Primal problem: Power minimization (\ref{eqn:problem4 2}) \\ $\bullet$ Rate constraint (\ref{eqn:downlink rate constraint4}): Optimal dual variables $\{\beta_k\}$\\$\bullet$ Fronthaul constraint (\ref{eqn:downlink fronthaul constraint3}): Optimal dual variables $\{\lambda_m\}$} & \tabincell{l}{Dual problem: Power minimization (\ref{eqn:problem4 1}) \\ $\bullet$ Optimal transmit powers: $\{p_k^{{\rm ul}}=\beta_k\}$\\$\bullet$ Optimal quantization noises: $\{q_m^{{\rm ul}}=\lambda_m\}$ } \\ \hline Case III & \tabincell{l}{Primal problem: Power minimization (\ref{eqn:problem 2}) \\ $\bullet$ Rate constraint (\ref{eqn:downlink rate constraint}): Optimal dual variables $\{\beta_k\}$\\$\bullet$ Fronthaul constraint (\ref{eqn:downlink convex fronthaul constraint}): Optimal dual variables $\{\mv{\Lambda}_m\}$} & \tabincell{l}{Dual problem: Power minimization (\ref{eqn:problem 1}) \\ $\bullet$ Optimal transmit powers: $\{p_k^{{\rm ul}}=\beta_k\}$\\$\bullet$ Optimal quantization noises: $\{q_m^{{\rm ul}}=\mv{\Lambda}_m^{(m,m)}\}$ } \\ \hline Case IV & \tabincell{l}{Primal problem: Power minimization (\ref{eqn:problem5 2}) \\ $\bullet$ Rate constraint (\ref{eqn:downlink rate constraint4}): Optimal dual variables $\{\beta_k\}$\\$\bullet$ Fronthaul constraint (\ref{eqn:downlink convex fronthaul constraint}): Optimal dual variables $\{\mv{\Lambda}_m\}$} & \tabincell{l}{Dual problem: Power minimization (\ref{eqn:problem5 1}) \\ $\bullet$ Optimal transmit powers: $\{p_k^{{\rm ul}}=\beta_k\}$\\$\bullet$ Optimal quantization noises: $\{q_m^{{\rm ul}}=\mv{\Lambda}_m^{(m,m)}\}$ } \\ \hline \end{tabular} \end{center} \end{table} In the prior
literature, the traditional uplink-downlink duality relationship is established by showing that any achievable rate-tuple in the uplink is also achievable in the downlink, and vice versa. However, it is difficult to apply this approach to verify the duality results of this paper, at least for the case with the Wyner-Ziv and multivariate compression strategies. This is because the condition that ensures a feasible solution is easy to characterize only under linear constraints (i.e., with independent compression). But with Wyner-Ziv and multivariate compression, the fronthaul rates are nonlinear functions of the transmit powers and the quantization noises. For this reason, this paper takes the alternative approach of first fixing the beamforming vectors and then transforming the sum-power minimization problems subject to the individual rate and fronthaul constraints into suitable convex forms. As a result, we are able to take a unified approach to establish the uplink-downlink duality using Lagrangian duality theory. This approach in fact works for both independent compression and Wyner-Ziv/multivariate compression. \section{Application of Duality}\label{sec:Application} While the main technical proofs in this paper are for the case of fixed beamformers, the duality relationship also extends to the scenario where the beamformers need to be jointly optimized with the transmit powers and quantization noises. In this section, we consider algorithms for such joint optimization problems and show that the duality relationship gives an efficient way of solving the downlink joint optimization problem via its uplink counterpart. The joint optimization of the beamforming vectors, transmit powers, and quantization noises for the broadcast relay channel is more difficult than the corresponding optimization for the multiple-access relay channel. This is because in the multiple-access relay channel, the receive beamforming vectors $\{\mv{w}_k\}$ affect the user rates only, but do not affect the fronthaul rates; the optimal receive beamformers are simply the minimum mean-squared-error (MMSE) receivers. This is in contrast to the broadcast relay channel, where the transmit beamforming vectors $\{\mv{v}_k\}$ affect both the user rates and the fronthaul rates, which makes the optimization highly nontrivial. Furthermore, the optimization of the quantization noises is also conceptually easier in the multiple-access relay channel, since the quantization noise levels are scalars, while in the broadcast relay channel the optimization is over their covariance matrix. For these reasons, it is appealing to solve the broadcast relay channel problem via its dual multiple-access counterpart. In the following two subsections, we show how this can be done under independent compression and Wyner-Ziv/multivariate compression, respectively. The key observation is that in both cases, the sum-power minimization problem in the multiple-access relay channel can be solved globally via a fixed-point iteration method. \subsection{Independent Compression}\label{sec:Independent Compression} First, consider the sum-power minimization problems in the multiple-access and broadcast relay channels for Cases I and II with independent compression/decompression. For simplicity, we focus on Case I, i.e., linear encoding/decoding, and show how to solve the problem in the broadcast relay channel by solving the dual problem in the multiple-access relay channel.
A similar approach can be applied to Case II under nonlinear encoding/decoding. Specifically, in Case I, the sum-power minimization problems in the multiple-access relay channel and the broadcast relay channel are formulated as \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\}} & ~ P^{\rm ul}(\{p_k^{\rm ul}\}) \label{eqn:problem uplink case I} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ R_k^{\rm ul,TIN}(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\})\geq R_k, ~ \forall k\in \mathcal{K}, \label{eqn:Case I 1}\\ & ~ C_m^{\rm ul,IN}(\{p_k^{\rm ul}\},q_m^{\rm ul})\leq C_m, ~ \forall m\in \mathcal{M}, \label{eqn:uplink fronthaul constraint app} \\ \hspace{-8pt} & ~ (\ref{eqn:uplink nonnegative power}), ~ (\ref{eqn:uplink positive}), \nonumber \end{align} and \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}}} & ~ P^{\rm dl}(\{p_k^{\rm dl}\},\mv{Q}_{{\rm diag}}) \label{eqn:problem downlink case I} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ R_k^{\rm dl,LIN}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}})\geq R_k, ~ \forall k\in \mathcal{K}, \label{eqn:Case I 2} \\ \hspace{-8pt} & ~ C_m^{\rm dl,IN}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}_{{\rm diag}})\leq C_m, ~ \forall m\in \mathcal{M}, \\ \hspace{-8pt} & ~ (\ref{eqn:nonnegative power}), ~ (\ref{eqn:positive quantization noise3}). \nonumber \end{align}Note that, different from problems (\ref{eqn:problem3 1}) and (\ref{eqn:problem3 2}), here the beamforming vectors need to be jointly optimized with the transmit powers and quantization noises. In the following, we propose a fixed-point iteration method that can globally solve problem (\ref{eqn:problem uplink case I}) with low complexity. The main idea is to transform problem (\ref{eqn:problem uplink case I}), which jointly optimizes the transmit powers, receive beamforming vectors, and quantization noise levels, into a power control problem. Specifically, according to Proposition \ref{proposition broadcast 3}, at the optimal solution, constraint (\ref{eqn:uplink fronthaul constraint app}) in problem (\ref{eqn:problem uplink case I}) should be satisfied with equality. In this case, the fronthaul capacity constraint (\ref{eqn:uplink fronthaul constraint app}) gives the following relationship between the optimal quantization noise levels and the optimal transmit powers: \begin{align}\label{eqn:optimal quantization Case I} q_m^{{\rm ul}}(\mv{p}^{{\rm ul}})=\frac{\sum\limits_{k=1}^Kp_k^{{\rm ul}}|h_{m,k}|^2+\sigma^2}{2^{C_m}-1}, ~~~ \forall m\in \mathcal{M}, \end{align}where $\mv{p}^{{\rm ul}}=[p_1^{{\rm ul}},\ldots,p_K^{{\rm ul}}]^T$. Moreover, it is observed from problem (\ref{eqn:problem uplink case I}) that the receive beamforming vectors affect the user rates only, but do not affect the fronthaul rates. It is well known that the optimal receive beamforming vectors to maximize the user rates are the MMSE receivers, i.e., \begin{align}\label{eqn:MMSE} \mv{w}_k=\frac{\tilde{\mv{w}}_k}{\|\tilde{\mv{w}}_k\|}, ~~~ \forall k\in \mathcal{K}, \end{align}where \begin{align} \tilde{\mv{w}}_k=\left(\sum\limits_{j\neq k}p_j^{{\rm ul}} \mv{h}_j\mv{h}_j^H+{\rm diag}(q_1^{{\rm ul}}(\mv{p}^{{\rm ul}}),\ldots,q_M^{{\rm ul}}(\mv{p}^{{\rm ul}}))+\sigma^2\mv{I}\right)^{-1}\mv{h}_k, \end{align}with $q_m^{{\rm ul}}(\mv{p}^{{\rm ul}})$ as given in (\ref{eqn:optimal quantization Case I}).
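For concreteness, the closed-form mapping (\ref{eqn:optimal quantization Case I}) and the MMSE receivers (\ref{eqn:MMSE}) can be sketched in a few lines of NumPy. This is an illustrative sketch under our own conventions rather than a reference implementation: we assume the uplink channels are collected in an $M\times K$ complex matrix \texttt{H} whose $(m,k)$-th entry is $h_{m,k}$, so that $\mv{h}_k$ is the $k$-th column, and all function and variable names below are ours.
\begin{verbatim}
import numpy as np

def quant_noise_in(p, H, C, sigma2):
    # Fronthaul constraints met with equality under independent compression:
    # q_m = (sum_k p_k |h_{m,k}|^2 + sigma^2) / (2^{C_m} - 1).
    return (np.abs(H) ** 2 @ p + sigma2) / (2.0 ** np.asarray(C) - 1.0)

def mmse_receivers(p, H, q, sigma2):
    # Unit-norm MMSE receive beamformers w_k.
    M, K = H.shape
    W = np.zeros((M, K), dtype=complex)
    for k in range(K):
        # Interference-plus-noise covariance seen by user k's receiver.
        Sigma = (sigma2 * np.eye(M) + np.diag(q)).astype(complex)
        for j in range(K):
            if j != k:
                Sigma += p[j] * np.outer(H[:, j], H[:, j].conj())
        w = np.linalg.solve(Sigma, H[:, k])
        W[:, k] = w / np.linalg.norm(w)
    return W
\end{verbatim}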
By plugging the above MMSE beamformers into constraint (\ref{eqn:Case I 1}), problem (\ref{eqn:problem uplink case I}) is now equivalent to the following power control problem: \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\mv{p}^{{\rm ul}}} & ~ P^{\rm ul}(\{p_k^{\rm ul}\}) \label{eqn:problem uplink case I 1} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ p_k^{{\rm ul}}\geq I_k(\mv{p}^{{\rm ul}}), ~ \forall k\in \mathcal{K}, \label{eqn:Case I 11} \end{align} where \begin{align}\label{eqn:interference function case I} I_k(\mv{p}^{{\rm ul}})=\frac{2^{R_k}-1}{\mv{h}_k^H \left(\sum\limits_{j\neq k}p_j^{{\rm ul}} \mv{h}_j\mv{h}_j^H+{\rm diag}(q_1^{{\rm ul}}(\mv{p}^{{\rm ul}}),\ldots,q_M^{{\rm ul}}(\mv{p}^{{\rm ul}}))+\sigma^2\mv{I}\right)^{-1} \mv{h}_k}, ~~~ \forall k\in \mathcal{K}. \end{align} \begin{lemma}\label{lemmacaseI} Given $\mv{p}^{{\rm ul}}\geq \mv{0}$, the functions $I_k(\mv{p}^{{\rm ul}})$'s, $k=1,\ldots,K$, as defined by (\ref{eqn:interference function case I}) satisfy the following three properties: \begin{itemize} \item[1.] $I_k(\mv{p}^{{\rm ul}})>0$, $\forall k$; \item[2.] Given any $\alpha>1$, it follows that $I_k(\alpha \mv{p}^{{\rm ul}})< \alpha I_k(\mv{p}^{{\rm ul}})$, $\forall k$; \item[3.] If $\bar{\mv{p}}^{{\rm ul}}\geq \mv{p}^{{\rm ul}}$, then $I_k(\bar{\mv{p}}^{{\rm ul}})\geq I_k(\mv{p}^{{\rm ul}})$, $\forall k$. \end{itemize} \end{lemma} \begin{IEEEproof} Please refer to Appendix \ref{appendix8}. \end{IEEEproof} Lemma \ref{lemmacaseI} shows that $\mv{I}(\mv{p}^{{\rm ul}})=[I_1(\mv{p}^{{\rm ul}}),\ldots,I_K(\mv{p}^{{\rm ul}})]^T$ is a standard interference function \cite{Yates95}. According to \cite[Theorem 2]{Yates95}, as long as the original problem is feasible, given any initial power solution $\mv{p}^{{\rm ul},(0)}=[p_1^{{\rm ul},(0)},\ldots,p_K^{{\rm ul},(0)}]^T$ that satisfies $\mv{p}^{{\rm ul},(0)}\geq \mv{0}$, the following fixed-point iteration must converge to the globally optimal power control solution to problem (\ref{eqn:problem uplink case I 1}): \begin{align}\label{eqn:fixed-point case I} p_k^{{\rm ul},(t+1)}=I_k(\mv{p}^{{\rm ul},(t)}), ~~~ \forall k\in \mathcal{K}, \end{align}where $I_k(\mv{p}^{{\rm ul}})$'s are defined in (\ref{eqn:interference function case I}), and $\mv{p}^{{\rm ul},(t)}$ is the power vector obtained after the $t$-th iteration of the fixed-point update (\ref{eqn:fixed-point case I}). After the optimal power solution denoted by $\mv{p}^{{\rm ul},\ast}$ is obtained via the fixed-point iteration, the optimal quantization noise levels denoted by $\{q_m^{{\rm ul},\ast}\}$ can be obtained via (\ref{eqn:optimal quantization Case I}), and the optimal receive beamforming vectors denoted by $\{\mv{w}_k^\ast\}$ can be obtained via (\ref{eqn:MMSE}). Note that the above algorithm to solve problem (\ref{eqn:problem uplink case I}) is very simple, because all the updates have closed-form expressions. After problem (\ref{eqn:problem uplink case I}) is solved globally, we now show how to obtain the optimal solution to problem (\ref{eqn:problem downlink case I}) using the uplink-downlink duality. It is shown in Section \ref{sec 1} that given the same beamforming vectors, the sum-power minimization problems in the multiple-access relay channel and the broadcast relay channel are equivalent to each other. As a result, the optimal beamforming vectors for problem (\ref{eqn:problem uplink case I}), i.e., $\{\mv{w}_k^\ast\}$, are also the optimal beamforming vectors for problem (\ref{eqn:problem downlink case I}).
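The resulting algorithm for problem (\ref{eqn:problem uplink case I}) can be summarized by the following sketch of the fixed-point update (\ref{eqn:fixed-point case I}), reusing the helpers above. It assumes the problem instance is feasible (otherwise the iteration need not converge); the initialization $\mv{p}^{{\rm ul},(0)}=\mv{0}$, the iteration cap, and the stopping tolerance are our own choices.
\begin{verbatim}
def interference_in(p, H, C, R, sigma2):
    # Standard interference functions I_k(p) under independent compression.
    M, K = H.shape
    q = quant_noise_in(p, H, C, sigma2)
    I = np.zeros(K)
    for k in range(K):
        Sigma = (sigma2 * np.eye(M) + np.diag(q)).astype(complex)
        for j in range(K):
            if j != k:
                Sigma += p[j] * np.outer(H[:, j], H[:, j].conj())
        hk = H[:, k]
        I[k] = (2.0 ** R[k] - 1.0) / np.real(hk.conj() @ np.linalg.solve(Sigma, hk))
    return I

def fixed_point_in(H, C, R, sigma2, iters=500, tol=1e-10):
    # Yates fixed-point iteration p <- I(p); converges to the globally
    # optimal power control solution whenever the problem is feasible.
    p = np.zeros(H.shape[1])
    for _ in range(iters):
        p_new = interference_in(p, H, C, R, sigma2)
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p_new
\end{verbatim}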
After setting $\mv{v}_k^\ast=\mv{w}_k^\ast$, $\forall k$, the problem reduces to an optimization problem over the transmit powers and quantization noise levels. Since problem (\ref{eqn:problem downlink case I}) is convex in $\{p_k^{\rm dl}\}$ and $\mv{Q}_{{\rm diag}}$ for fixed beamforming vectors, as shown in Section \ref{sec 1}, it can be solved efficiently. We contrast the above approach with the direct optimization of the downlink. It can be shown that if the unit-power beamformers $\mv{v}_k$'s and transmit powers $p_k^{{\rm dl}}$'s are combined together as $\tilde{\mv{v}}_k=\sqrt{p_k^{{\rm dl}}}\mv{v}_k=[\tilde{v}_{k,1},\ldots,\tilde{v}_{k,M}]^T$, $\forall k$, then problem (\ref{eqn:problem downlink case I}) can be transformed into the following optimization problem: \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{\tilde{\mv{v}}_k\},\{q_m^{{\rm dl}}\}} & ~ \sum\limits_{k=1}^K\|\tilde{\mv{v}}_k\|^2\sigma^2+\sum\limits_{m=1}^Mq_m^{{\rm dl}}\sigma^2 \label{eqn:problem downlink case I 1} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ \frac{|\tilde{\mv{v}}_k^H\mv{h}_k|^2}{2^{R_k}-1}-\sum\limits_{j\neq k}|\tilde{\mv{v}}_j^H\mv{h}_k|^2-\sum\limits_{m=1}^Mq_m^{{\rm dl}}|h_{k,m}|^2-\sigma^2\geq 0, ~ \forall k\in \mathcal{K}, \label{eqn:downlink rate case I 1} \\ \hspace{-8pt} & ~ \sum\limits_{k=1}^K|\tilde{v}_{k,m}|^2+q_m^{\rm dl}-2^{C_m}q_m^{\rm dl}\leq 0, ~ \forall m\in \mathcal{M}, \label{eqn:downlink fronthaul case I 1} \\ \hspace{-8pt} & ~ (\ref{eqn:nonnegative power}), ~ (\ref{eqn:positive quantization noise3}). \nonumber \end{align} Note that (\ref{eqn:downlink rate case I 1}) can be transformed into a second-order cone constraint \cite{duality8}. As a result, problem (\ref{eqn:problem downlink case I 1}) for the broadcast relay channel is convex. However, the joint optimization of the beamforming vectors and quantization noise levels in problem (\ref{eqn:problem downlink case I 1}) is over a higher dimension, so it can potentially be more computationally complex than the duality-based approach, where the beamforming vectors are first obtained by solving problem (\ref{eqn:problem uplink case I}) using fixed-point iterations, and the powers and quantization noise levels are then optimized given the beamforming vectors. \subsection{Wyner-Ziv and Multivariate Compression}\label{sec:Wyner-Ziv and Multivariate Compression} The above approach can also be used for the scenarios with Wyner-Ziv and multivariate compression, i.e., Case III under linear encoding/decoding and Case IV under nonlinear encoding/decoding. For simplicity, in the following, we focus on Case III; a similar analysis applies to Case IV. Suppose that the decompression order for Wyner-Ziv compression and the compression order for multivariate compression are given by (\ref{eqn:compression decompression order}).
Then, the sum-power minimization problems under Case III in the multiple-access relay channel and the broadcast relay channel are respectively given by \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\}} & ~ P^{\rm ul}(\{p_k^{\rm ul}\}) \label{eqn:problem uplink Case III} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ R_k^{\rm ul,TIN}(\{p_k^{\rm ul},\mv{w}_k\},\{q_m^{\rm ul}\})\geq R_k, ~ \forall k\in \mathcal{K}, \label{eqn:uplink rate constraint app 2} \\ & ~ C_{\bar\rho(m)}^{{\rm ul,WZ}}(\{p_k^{\rm ul}\},q_{\bar\rho(1)}^{\rm ul},\ldots,q_{\bar\rho(m)}^{\rm ul},\{\bar{\rho}(m)\})\leq C_{\bar\rho(m)}, ~ \forall m \in \mathcal{M}, \label{eqn:uplink fronthaul constraint app 2} \\ & ~ (\ref{eqn:uplink nonnegative power}), ~ (\ref{eqn:uplink positive}), \nonumber \end{align} and \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}} & ~ P^{\rm dl}(\{p_k^{\rm dl}\},\mv{Q}) \label{eqn:problem downlink Case III} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ R_k^{\rm dl,LIN}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q}) \geq R_k, ~ \forall k\in \mathcal{K}, \label{eqn:Case I 2 app} \\ \hspace{-8pt} & ~ C_{\bar\rho(M+1-m)}^{\rm dl,MV}(\{p_k^{\rm dl},\mv{v}_k\},\mv{Q},\{\bar{\rho}(M+1-m)\})\leq C_{\bar\rho(M+1-m)}, ~ \forall m \in \mathcal{M}, \\ & ~ (\ref{eqn:positive}), ~ (\ref{eqn:nonnegative power}). \nonumber \end{align} Next, we show how to solve problem (\ref{eqn:problem downlink Case III}) via problem (\ref{eqn:problem uplink Case III}). Similar to problem (\ref{eqn:problem uplink case I}), in the following, we transform problem (\ref{eqn:problem uplink Case III}) into a power control problem, and then propose a fixed-point iteration method that solves this power control problem globally. First, according to Proposition \ref{proposition7}, at the optimal solution to problem (\ref{eqn:problem uplink Case III}), constraint (\ref{eqn:uplink fronthaul constraint app 2}) is satisfied with equality. In this case, the quantization noise level of relay $1$ can be expressed as a function of the transmit powers as follows: \begin{align}\label{eqn:optimal solution Case III} q_1^{{\rm ul}}(\mv{p}^{{\rm ul}})=\frac{\sigma^2+\sum\limits_{k=1}^Kp_k^{{\rm ul}}|h_{k,1}|^2}{2^{C_1}-1}. \end{align}Moreover, if $q_1^{{\rm ul}}(\mv{p}^{{\rm ul}}),\ldots,q_{m-1}^{{\rm ul}}(\mv{p}^{{\rm ul}})$ are already expressed as functions of $\mv{p}^{{\rm ul}}$, then $\mv{\Gamma}^{(1:m-1,1:m-1)}$, $\mv{\Gamma}^{(m,1:m-1)}$, and $\mv{\Gamma}^{(1:m-1,m)}$ can also be expressed as functions of $\mv{p}^{{\rm ul}}$ according to (\ref{eqn:Gamma}), given the specific decompression order (\ref{eqn:compression decompression order}). For convenience, we use $\mv{\Gamma}^{(1:m-1,1:m-1)}(\mv{p}^{{\rm ul}})$, $\mv{\Gamma}^{(m,1:m-1)}(\mv{p}^{{\rm ul}})$, and $\mv{\Gamma}^{(1:m-1,m)}(\mv{p}^{{\rm ul}})$ to indicate that they are functions of $\mv{p}^{{\rm ul}}$. In this case, $q_m^{{\rm ul}}(\mv{p}^{{\rm ul}})$ can be uniquely expressed as a function of $\mv{p}^{{\rm ul}}$: \begin{align}\label{eqn:optimal solution Case III 1} q_m^{{\rm ul}}(\mv{p}^{{\rm ul}})=\frac{\sigma^2+\sum\limits_{k=1}^Kp_k^{{\rm ul}}|h_{k,m}|^2-\mv{\Gamma}^{(m,1:m-1)}(\mv{p}^{{\rm ul}})(\mv{\Gamma}^{(1:m-1,1:m-1)}(\mv{p}^{{\rm ul}}))^{-1}\mv{\Gamma}^{(1:m-1,m)}(\mv{p}^{{\rm ul}})}{2^{C_m}-1}.
\end{align}In other words, given $\mv{p}^{{\rm ul}}$, we can first characterize $q_1^{{\rm ul}}(\mv{p}^{{\rm ul}})$ according to (\ref{eqn:optimal solution Case III}), then characterize $q_2^{{\rm ul}}(\mv{p}^{{\rm ul}})$ given $q_1^{{\rm ul}}(\mv{p}^{{\rm ul}})$ according to (\ref{eqn:optimal solution Case III 1}), then characterize $q_3^{{\rm ul}}(\mv{p}^{{\rm ul}})$ given $q_1^{{\rm ul}}(\mv{p}^{{\rm ul}})$ and $q_2^{{\rm ul}}(\mv{p}^{{\rm ul}})$ according to (\ref{eqn:optimal solution Case III 1}), and so on. Next, similar to problem (\ref{eqn:problem uplink case I}), given the transmit power solution $\mv{p}^{{\rm ul}}$ and the quantization noise levels $\{q_m^{{\rm ul}}(\mv{p}^{{\rm ul}})\}$ as in (\ref{eqn:optimal solution Case III}) and (\ref{eqn:optimal solution Case III 1}), the optimal MMSE beamforming vectors are given in (\ref{eqn:MMSE}). By plugging them into problem (\ref{eqn:problem uplink Case III}), we have the following power control problem: \begin{align}\hspace{-8pt} \mathop{\mathrm{minimize}}_{\mv{p}^{{\rm ul}}} & ~ P^{\rm ul}(\{p_k^{\rm ul}\}) \label{eqn:problem uplink case III 1} \\ \hspace{-8pt} \mathrm {subject ~ to} & ~ p_k^{{\rm ul}}\geq I_k(\mv{p}^{{\rm ul}}), ~ \forall k\in \mathcal{K}, \label{eqn:Case III 11} \end{align} where \begin{align}\label{eqn:interference function case III} I_k(\mv{p}^{{\rm ul}})=\frac{2^{R_k}-1}{\mv{h}_k^H \left(\sum\limits_{j\neq k}p_j^{{\rm ul}} \mv{h}_j\mv{h}_j^H+{\rm diag}(q_1^{{\rm ul}}(\mv{p}^{{\rm ul}}),\ldots,q_M^{{\rm ul}}(\mv{p}^{{\rm ul}}))+\sigma^2\mv{I}\right)^{-1} \mv{h}_k}, ~~~ \forall k\in \mathcal{K}, \end{align}with $\{q_m^{{\rm ul}}(\mv{p}^{{\rm ul}})\}$ given in (\ref{eqn:optimal solution Case III}) and (\ref{eqn:optimal solution Case III 1}). \begin{lemma}\label{lemmacaseIII} Given $\mv{p}^{{\rm ul}}\geq \mv{0}$, the functions $I_k(\mv{p}^{{\rm ul}})$'s, $k=1,\ldots,K$, defined by (\ref{eqn:interference function case III}) satisfy the following three properties: \begin{itemize} \item[1.] $I_k(\mv{p}^{{\rm ul}})>0$, $\forall k$; \item[2.] Given any $\alpha>1$, it follows that $I_k(\alpha \mv{p}^{{\rm ul}})< \alpha I_k(\mv{p}^{{\rm ul}})$, $\forall k$; \item[3.] If $\bar{\mv{p}}^{{\rm ul}}\geq \mv{p}^{{\rm ul}}$, then $I_k(\bar{\mv{p}}^{{\rm ul}})\geq I_k(\mv{p}^{{\rm ul}})$, $\forall k$. \end{itemize} \end{lemma} \begin{IEEEproof} Please refer to Appendix \ref{appendix9}. \end{IEEEproof} Similar to Lemma \ref{lemmacaseI}, Lemma \ref{lemmacaseIII} shows that $\mv{I}(\mv{p}^{{\rm ul}})=[I_1(\mv{p}^{{\rm ul}}),\ldots,I_K(\mv{p}^{{\rm ul}})]^T$ is a standard interference function \cite{Yates95}. According to \cite[Theorem 2]{Yates95}, as long as the original problem is feasible, given any initial power solution $\mv{p}^{{\rm ul},(0)}=[p_1^{{\rm ul},(0)},\ldots,p_K^{{\rm ul},(0)}]^T$ that satisfies $\mv{p}^{{\rm ul},(0)}\geq \mv{0}$, the following fixed-point iteration must converge to the globally optimal power control solution to problem (\ref{eqn:problem uplink case III 1}): \begin{align}\label{eqn:fixed-point case III} p_k^{{\rm ul},(t+1)}=I_k(\mv{p}^{{\rm ul},(t)}), ~~~ \forall k\in \mathcal{K}, \end{align}where $I_k(\mv{p}^{{\rm ul}})$'s are defined in (\ref{eqn:interference function case III}), and $\mv{p}^{{\rm ul},(t)}$ denotes the power solution obtained at the $t$-th iteration of the fixed-point update (\ref{eqn:fixed-point case III}). 
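Before describing the implementation in prose, we give a sketch of the sequential evaluation of $q_1^{{\rm ul}}(\mv{p}^{{\rm ul}}),\ldots,q_M^{{\rm ul}}(\mv{p}^{{\rm ul}})$ in (\ref{eqn:optimal solution Case III}) and (\ref{eqn:optimal solution Case III 1}), in the same NumPy conventions as the earlier sketches. One caveat: the code encodes our reading of (\ref{eqn:Gamma}), namely that $\mv{\Gamma}(\mv{p}^{{\rm ul}})$ is the covariance of the quantized received signals, $\sum_{k}p_k^{{\rm ul}}\mv{h}_k\mv{h}_k^H+\sigma^2\mv{I}+{\rm diag}(q_1^{{\rm ul}},\ldots,q_M^{{\rm ul}})$, with the relays indexed in the decompression order; since (\ref{eqn:Gamma}) is defined earlier in the paper, this should be treated as an assumption.
\begin{verbatim}
def quant_noise_wz(p, H, C, sigma2):
    # Sequential evaluation of q_1(p), ..., q_M(p) under Wyner-Ziv
    # compression, with relays already indexed in the decompression order.
    # Assumption: Gamma(p) = sum_k p_k h_k h_k^H + sigma^2 I + diag(q).
    M, K = H.shape
    A = sigma2 * np.eye(M) + (H * np.asarray(p)) @ H.conj().T  # covariance without q
    q = np.zeros(M)
    q[0] = A[0, 0].real / (2.0 ** C[0] - 1.0)
    for m in range(1, M):
        Gam = A[:m, :m] + np.diag(q[:m])    # Gamma^{(1:m-1,1:m-1)}(p)
        s = np.linalg.solve(Gam, A[:m, m])  # cross terms carry no q
        schur = A[m, m].real - np.real(A[m, :m] @ s)
        q[m] = schur / (2.0 ** C[m] - 1.0)
    return q
\end{verbatim}
The Case III fixed-point iteration (\ref{eqn:fixed-point case III}) is then obtained from \texttt{fixed\_point\_in} above by substituting \texttt{quant\_noise\_wz} for \texttt{quant\_noise\_in} inside the interference function.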
To implement the above fixed-point iteration, given $\mv{p}^{{\rm ul},(t)}$, we can first get $q_1^{{\rm ul}}(\mv{p}^{{\rm ul},(t)})$ according to (\ref{eqn:optimal solution Case III}), then $q_2^{{\rm ul}}(\mv{p}^{{\rm ul},(t)})$ given $q_1^{{\rm ul}}(\mv{p}^{{\rm ul},(t)})$ according to (\ref{eqn:optimal solution Case III 1}), and so on. Thus, given $\mv{p}^{{\rm ul},(t)}$, we first obtain $\{q_m^{{\rm ul}}(\mv{p}^{{\rm ul},(t)})\}$, and then $\{I_k(\mv{p}^{{\rm ul},(t)})\}$ in (\ref{eqn:fixed-point case III}) can be computed using (\ref{eqn:interference function case III}). After the optimal power $\mv{p}^{{\rm ul},\ast}$ is obtained by the fixed-point method (\ref{eqn:fixed-point case III}), the optimal quantization noise levels $\{q_m^{{\rm ul},\ast}\}$ can then be obtained via (\ref{eqn:optimal solution Case III}) and (\ref{eqn:optimal solution Case III 1}), and the optimal receive beamforming vectors $\{\mv{w}_k^\ast\}$ can be obtained via (\ref{eqn:MMSE}). Note that the above algorithm to solve problem (\ref{eqn:problem uplink Case III}) is simple, because all the variable updates have closed-form expressions. Finally, we solve problem (\ref{eqn:problem downlink Case III}) via duality. According to the duality result in Section \ref{sec 3}, we can set the transmit beamforming vectors in problem (\ref{eqn:problem downlink Case III}) as $\mv{v}_k^\ast=\mv{w}_k^\ast$, $\forall k\in \mathcal{K}$. Then, we can obtain the optimal transmit powers and quantization covariance matrix given this beamforming solution by solving the convex problem (\ref{eqn:problem 3}). Note that it is not yet known whether problem (\ref{eqn:problem downlink Case III}) has a convex reformulation. Nevertheless, the above algorithm gives a globally optimal solution for (\ref{eqn:problem downlink Case III}). As a final remark, it is worth emphasizing that a condition for the convergence of the fixed-point iteration algorithm is that the original problem is feasible to start with. However, determining whether a set of user target rates and fronthaul capacities is feasible is itself not always computationally easy, unless the problem can be reformulated as a convex optimization. \subsection{Numerical Example} \begin{figure}[t] \centering \includegraphics[width=11cm]{sum_power.pdf} \caption{Illustration of the numerical algorithm for using duality to solve the downlink sum-power minimization problem via the dual uplink problem.} \label{simulation} \end{figure} As a numerical example demonstrating the algorithm of using uplink-downlink duality to solve the broadcast relay channel problem, we consider a network with $M=3$ relays and $K=3$ users, where the wireless channels between these relays and users are generated based on the independent and identically distributed Rayleigh fading model with zero mean and unit variance, and the fronthaul capacities between all the relays and the CP are set to $3$ bps. Moreover, the noise powers at the relays in the uplink and at the users in the downlink are set to $\sigma^2=1$. The rate targets for all the users are assumed to be identical.
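A hedged sketch of this experiment under the Case I helpers above is given below (one channel realization; the SIC/DPC variants, the Wyner-Ziv/multivariate case, and any averaging over realizations are omitted; the seed and rate grid are our own choices):
\begin{verbatim}
rng = np.random.default_rng(0)
M, K, sigma2 = 3, 3, 1.0
C = np.full(M, 3.0)  # fronthaul capacities of the three relays
# i.i.d. Rayleigh fading: CN(0, 1) entries, relay m by user k.
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

for R_target in [0.5, 1.0, 1.5, 2.0]:
    R = np.full(K, R_target)  # identical rate targets
    p = fixed_point_in(H, C, R, sigma2)
    print(R_target, p.sum())  # minimum uplink sum power (TIN, independent compression)
\end{verbatim}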
Under the above setup, Fig.~\ref{simulation} shows the optimal values of the uplink (UL) problem (\ref{eqn:problem uplink case I}) and the downlink (DL) problem (\ref{eqn:problem downlink case I}) under independent compression (IN), and respectively the UL problem (\ref{eqn:problem uplink Case III}) and the DL problem (\ref{eqn:problem downlink Case III}) under Wyner-Ziv (WZ) and multivariate (MV) compression, as functions of the user rate target, for the case of treating interference as noise (TIN) in the uplink and linear encoding (LIN) in the downlink. In addition, the corresponding curves for the cases of successive interference cancellation (SIC) in the uplink and dirty-paper coding (DPC) in the downlink are also plotted. The numerical values are obtained under the optimal beamforming vectors computed based on the multiple-access relay channel; the same beamformers are also optimal for the broadcast relay channel. Under these fixed beamformers, the numerical results also show that for both the cases of independent compression and Wyner-Ziv/multivariate compression, at the same rate targets, the minimum sum power in the broadcast relay channel is the same as that in the multiple-access relay channel. This validates the results in Sections \ref{sec 1} to \ref{sec 4}. Further, it is observed that the minimum sum power under Wyner-Ziv and multivariate compression is smaller than that under independent compression, especially when the user rate target is high. This is because the Wyner-Ziv compression can utilize the fronthaul more efficiently by using the decompressed signals as side information, while the multivariate compression can reduce the interference caused by compression as seen by the users. Finally, successive decoding and dirty-paper coding provide significant benefits compared to treating interference as noise and linear precoding, respectively. \section{Concluding Remarks}\label{sec:Conclusion} This paper reveals an interesting duality relationship between the Gaussian multiple-access relay channel and the Gaussian broadcast relay channel. Specifically, we first show that if independent compression is applied in both the uplink and the downlink, then the achievable rate regions of the multiple-access relay channel and the broadcast relay channel are identical under the same total transmit power constraint and individual fronthaul capacity constraints. Furthermore, this duality continues to hold when the Wyner-Ziv compression strategy is applied in the multiple-access relay channel and the multivariate compression strategy is applied in the broadcast relay channel. This duality relationship has an intimate connection to the Lagrangian duality theory in convex optimization. Under fixed beamformers, the power minimization problems for the multiple-access and broadcast relay channels are the Lagrangian duals of each other. The optimal dual variables corresponding to the rate constraints in the broadcast relay channel problem are the optimal transmit powers in the dual multiple-access relay channel problem. The optimal dual variables corresponding to the fronthaul constraints in the broadcast relay channel problem are the optimal quantization noise levels in the dual multiple-access relay channel problem. Furthermore, this paper shows that when the transmit powers, receive beamforming vectors, and quantization noise levels are jointly optimized, the uplink sum-power minimization problem can be globally solved using a low-complexity fixed-point iteration method.
Thus, the duality also gives a computationally simple way of finding the optimal beamformers in the downlink problem via its dual uplink. We remark that the duality relationship established in this paper is specific to the compression-based relaying strategies. The compression strategies considered in this paper are special cases of the more general noisy network coding \cite{Kim_NNC} and distributed decode-forward \cite{Kim_DDF} schemes for general multihop relay networks. An \emph{operational} duality between noisy network coding and distributed decode-forward has already been observed in \cite{Kim_DDF}; thus, the main contribution of this paper is in establishing a \emph{computational} duality for the specific compression schemes in the setting of the specific two-hop relay network. Is there a \emph{computational} duality for the more general network? Is there a \emph{capacity region} duality for either the two-hop or more general relay networks? These remain interesting open questions. \newpage \begin{appendix} \subsection{Proof of Proposition \ref{proposition broadcast 1}}\label{appendix broadcast 1} First, suppose that at the optimal solution to problem (\ref{eqn:dual problem3 4}), which is denoted by $\beta_k^\ast$'s and $\lambda_m^\ast$'s, there exists a $\bar{k}$ such that \begin{align} \sum\limits_{j\neq \bar{k}}\beta_j^\ast|\bar{\mv{u}}_{\bar{k}}^H\mv{h}_j|^2+\sum\limits_{m=1}^M\lambda_m^\ast|\bar{u}_{\bar{k},m}|^2+\sigma^2-\frac{\beta_{\bar{k}}^\ast|\bar{\mv{u}}_{\bar{k}}^H\mv{h}_{\bar{k}}|^2}{2^{R_{\bar{k}}}-1}>0. \end{align}Then, consider another solution where $\lambda_m=\lambda_m^\ast$, $\forall m$, $\beta_k=\beta_k^\ast$, $\forall k\neq \bar{k}$, and \begin{align}\label{eqn:new transmit power} \beta_{\bar{k}}=\frac{\left(\sum\limits_{j\neq \bar{k}}\beta_j^\ast|\bar{\mv{u}}_{\bar{k}}^H\mv{h}_j|^2+\sum\limits_{m=1}^M\lambda_m^\ast|\bar{u}_{\bar{k},m}|^2+\sigma^2\right)(2^{R_{\bar{k}}}-1)}{|\bar{\mv{u}}_{\bar{k}}^H\mv{h}_{\bar{k}}|^2}>\beta_{\bar{k}}^\ast. \end{align} It can be shown that the new solution is a feasible solution to problem (\ref{eqn:dual problem3 4}). Moreover, due to (\ref{eqn:new transmit power}), at the new solution, the objective value of problem (\ref{eqn:dual problem3 4}) is increased. This contradicts the fact that the optimal solution to problem (\ref{eqn:dual problem3 4}) is $\{\beta_k^\ast, \lambda_m^\ast\}$. As a result, at the optimal solution to problem (\ref{eqn:dual problem3 4}), constraint (\ref{eqn:dual constraint 1}) should hold with equality. Next, suppose that at the optimal solution to problem (\ref{eqn:dual problem3 4}), which is denoted by $\beta_k^\ast$'s and $\lambda_m^\ast$'s, there exists an $\bar{m}$ such that \begin{align} \sum\limits_{k=1}^K\beta_k^\ast|h_{k,\bar{m}}|^2+\lambda_{\bar{m}}^\ast+\sigma^2-2^{C_{\bar{m}}}\lambda_{\bar{m}}^\ast> 0. \end{align}Then, consider another solution where $\beta_k=\beta_k^\ast$, $\forall k$, $\lambda_m=\lambda_m^\ast$, $\forall m\neq \bar{m}$, and \begin{align}\label{eqn:new quantization noise level} \lambda_{\bar{m}}=\frac{\sum\limits_{k=1}^K\beta_k^\ast|h_{k,\bar{m}}|^2+\sigma^2}{2^{C_{\bar{m}}}-1}> \lambda_{\bar{m}}^\ast. \end{align} It can be shown that the new solution is a feasible solution to problem (\ref{eqn:dual problem3 4}). Moreover, due to (\ref{eqn:new quantization noise level}), at the new solution, the objective value of problem (\ref{eqn:dual problem3 4}) is increased.
This contradicts the fact that the optimal solution to problem (\ref{eqn:dual problem3 4}) is $\{\beta_k^\ast,\lambda_m^\ast\}$. As a result, at the optimal solution to problem (\ref{eqn:dual problem3 4}), constraint (\ref{eqn:dual constraint fronthaul}) should hold with equality. Proposition \ref{proposition broadcast 1} is thus proved. \subsection{Proof of Proposition \ref{proposition broadcast 2}}\label{appendix broadcast 2} First, according to (\ref{eqn:dual constraint fronthaul 1}), it follows that \begin{align}\label{eqn:equal quantization noise level} \lambda_m=\frac{\sum\limits_{k=1}^K\beta_k|h_{k,m}|^2+\sigma^2}{2^{C_m}-1}, ~~~ \forall m\in \mathcal{M}. \end{align}By plugging (\ref{eqn:equal quantization noise level}) into (\ref{eqn:dual constraint 11}), it follows that \begin{align}\label{eqn:standard function 1} \beta_k=I_k(\mv{\beta}), ~~~ \forall k\in \mathcal{K}, \end{align}where $\mv{\beta}=[\beta_1,\ldots,\beta_K]^T$, and \begin{align}\label{eqn:standard function} I_k(\mv{\beta})=\frac{\left(\sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\frac{(\sum\limits_{k=1}^K\beta_k|h_{k,m}|^2+\sigma^2)|\bar{u}_{k,m}|^2}{2^{C_m}-1}+\sigma^2\right)(2^{R_k}-1)}{|\bar{\mv{u}}_k^H\mv{h}_k|^2}. \end{align}It can be shown that with $\beta_k>0$, $\forall k$, we have \begin{align} I_k(\mv{\beta})>0, ~~~ \forall k\in \mathcal{K}. \end{align}Next, given $\alpha>1$, it can be shown that \begin{align} I_k(\alpha\mv{\beta})&=\frac{\left(\sum\limits_{j\neq k}\alpha\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\frac{(\sum\limits_{k=1}^K\alpha\beta_k|h_{k,m}|^2+\sigma^2)|\bar{u}_{k,m}|^2}{2^{C_m}-1}+\sigma^2\right)(2^{R_k}-1)}{|\bar{\mv{u}}_k^H\mv{h}_k|^2}\\ & <\alpha \times \frac{\left(\sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\frac{(\sum\limits_{k=1}^K\beta_k|h_{k,m}|^2+\sigma^2)|\bar{u}_{k,m}|^2}{2^{C_m}-1}+\sigma^2\right)(2^{R_k}-1)}{|\bar{\mv{u}}_k^H\mv{h}_k|^2} \\ &=\alpha I_k(\mv{\beta}), ~~~ \forall k\in \mathcal{K}. \end{align}Lastly, given $\bar{\mv{\beta}}=[\bar{\beta}_1,\ldots,\bar{\beta}_K]^T\geq \mv{\beta}$, it follows that \begin{align} I_k(\bar{\mv{\beta}})&=\frac{\left(\sum\limits_{j\neq k}\bar{\beta}_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\frac{(\sum\limits_{k=1}^K\bar{\beta}_k|h_{k,m}|^2+\sigma^2)|\bar{u}_{k,m}|^2}{2^{C_m}-1}+\sigma^2\right)(2^{R_k}-1)}{|\bar{\mv{u}}_k^H\mv{h}_k|^2} \\ & \geq \frac{\left(\sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\frac{(\sum\limits_{k=1}^K\beta_k|h_{k,m}|^2+\sigma^2)|\bar{u}_{k,m}|^2}{2^{C_m}-1}+\sigma^2\right)(2^{R_k}-1)}{|\bar{\mv{u}}_k^H\mv{h}_k|^2}\\ & = I_k(\mv{\beta}), ~~~ \forall k\in \mathcal{K}. \end{align}As a result, the function $\mv{I}(\mv{\beta})=[I_1(\mv{\beta}),\ldots,I_K(\mv{\beta})]^T$ defined by (\ref{eqn:standard function}) is a standard interference function \cite{Yates95}. It then follows from \cite[Theorem 1]{Yates95} that there exists a unique solution $\mv{\beta}>\mv{0}$ to equation (\ref{eqn:standard function 1}). Moreover, with this solution of $\mv{\beta}$, there is a unique solution $\{\lambda_m>0\}$ to equation (\ref{eqn:equal quantization noise level}). As a result, if problem (\ref{eqn:dual problem3 5}) is feasible, there exists only one solution to the constraints (\ref{eqn:dual constraint 11}), (\ref{eqn:dual constraint fronthaul 1}), (\ref{eqn:broadcast positive beta 11}), and (\ref{eqn:broadcast positive lambda 11}) in problem (\ref{eqn:dual problem3 5}).
Proposition \ref{proposition broadcast 2} is thus proved. \subsection{Proof of Proposition \ref{proposition1}}\label{appendix1} First, it can be easily verified that the SINR constraints (\ref{eqn:downlink convex rate constraint}) are equivalent to the original SINR constraints (\ref{eqn:downlink rate constraint}). Next, we consider the fronthaul constraints (\ref{eqn:downlink fronthaul constraint}), which can be reformulated as \begin{align}\label{eqn:downlink new fronthaul constraint} 2^{C_m}\mv{Q}^{(m,m)}-\mv{Q}^{(m,m)}-\sum\limits_{k=1}^{K} p_k^{\rm dl} \left|\bar{u}_{k,m}\right|^2-2^{C_m}\mv{Q}^{(m,m+1:M)}(\mv{Q}^{(m+1:M,m+1:M)})^{-1}\mv{Q}^{(m+1:M,m)}\geq 0, ~ \forall m. \end{align} Given $m=1,\ldots,M$, define \begin{align} \mv{D}_m=\left[\begin{array}{cc} 2^{C_m}\mv{Q}^{(m,m)}-\mv{Q}^{(m,m)}-\sum\limits_{k=1}^{K} p_k^{\rm dl} \left|\bar{u}_{k,m}\right|^2 & 2^{C_m} \mv{Q}^{(m,m+1:M)} \\ 2^{C_m} \mv{Q}^{(m+1:M,m)} & 2^{C_m} \mv{Q}^{(m+1:M,m+1:M)} \end{array} \right]\in \mathbb{C}^{(M-m+1)\times (M-m+1)}. \end{align}Note that $\mv{Q}^{(m,m+1:M)}=(\mv{Q}^{(m+1:M,m)})^H$ and thus $\mv{D}_m$ is a Hermitian matrix, $m=1,\ldots,M$. Note that we must have $\mv{Q}\succ\mv{0}$ in problem (\ref{eqn:problem 2}), since otherwise the fronthaul rates of the relays become infinite according to (\ref{eqn:downlink fronthaul rate}) and (\ref{eqn:assumption}). According to \cite[Theorem 7.7.9]{Horn12}, given $\mv{Q}\succ\mv{0}$, (\ref{eqn:downlink new fronthaul constraint}) is equivalent to $\mv{D}_m\succeq \mv{0}$, $\forall m$. Moreover, since \begin{align} & 2^{C_m}\left[\begin{array}{cc}\mv{0}_{(m-1)\times (m-1)} & \mv{0}_{(m-1)\times (M-m+1)} \\ \mv{0}_{(M-m)\times (m-1)} & \mv{Q}^{(m:M,m:M)}\end{array}\right]-\mv{E}_m(\mv{Q}+\mv{\Psi})\mv{E}_m^H \nonumber \\ &=\left[\begin{array}{cc}\mv{0}_{(m-1)\times (m-1)} & \mv{0}_{(m-1)\times (M-m+1)} \\ \mv{0}_{(M-m+1)\times (m-1)} & \mv{D}_m \end{array} \right], ~~~ \forall m, \end{align}the fronthaul constraint (\ref{eqn:downlink new fronthaul constraint}) in the broadcast relay channel is equivalent to (\ref{eqn:downlink convex fronthaul constraint}) given (\ref{eqn:positive}). To summarize, the new SINR constraints (\ref{eqn:downlink convex rate constraint}) and fronthaul constraints (\ref{eqn:downlink convex fronthaul constraint}) are equivalent to the original constraints. Moreover, we multiply the objective function in problem (\ref{eqn:problem 2}) by a constant $\sigma^2$ for convenience. As a result, problem (\ref{eqn:problem 3}) and problem (\ref{eqn:problem 2}) have the same optimal solution. Proposition \ref{proposition1} is proved.
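The step from (\ref{eqn:downlink new fronthaul constraint}) to $\mv{D}_m\succeq \mv{0}$ rests on the Schur-complement characterization of positive semidefiniteness in \cite[Theorem 7.7.9]{Horn12}. The following self-contained NumPy snippet numerically verifies that characterization on a randomly generated Hermitian instance; it is purely illustrative, and the construction is our own.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n = 4
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = X @ X.conj().T + np.eye(n)  # random Hermitian positive definite matrix
a, b, B = Q[0, 0].real, Q[1:, 0], Q[1:, 1:]

for t in np.linspace(0.0, 2.0 * a, 9):
    # D(t) = [[a - t, b^H], [b, B]]; with B > 0, D(t) is PSD
    # iff the Schur complement a - t - b^H B^{-1} b is nonnegative.
    D = np.block([[np.array([[a - t]]), Q[0:1, 1:]], [Q[1:, 0:1], B]])
    psd = np.linalg.eigvalsh(D).min() >= -1e-9
    schur = a - t - np.real(b.conj() @ np.linalg.solve(B, b)) >= -1e-9
    assert psd == schur
\end{verbatim}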
\subsection{Proof of Proposition \ref{proposition2}}\label{appendix2} We can write down the Lagrangian of problem (\ref{eqn:problem 3}) as follows: \begin{align} & \mathcal{L}(\{p_k^{\rm dl}\},\mv{Q},\{\beta_k\},\{\mv{\Lambda}_m\}) \nonumber \\ &=\sum\limits_{k=1}^Kp_k^{\rm dl}\sigma^2+{\rm tr}(\mv{Q})\sigma^2 -\sum\limits_{k=1}^K\beta_k\left(\frac{p_k^{\rm dl}|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}-\sum\limits_{j\neq k} p_j^{\rm dl}|\bar{\mv{u}}_j^H\mv{h}_k|^2-{\rm tr}(\mv{Q}\mv{h}_k\mv{h}_k^H)-\sigma^2\right) \nonumber \\ & \quad -\sum\limits_{m=1}^M{\rm tr}\bigg(\mv{\Lambda}_m\bigg(2^{C_m}\left[\begin{array}{cc}\mv{0}_{(m-1)\times (m-1)} & \mv{0}_{(m-1)\times (M-m+1)} \\ \mv{0}_{(M-m+1)\times (m-1)} & \mv{Q}^{(m:M,m:M)}\end{array}\right] -\mv{E}_m(\mv{Q}+\mv{\Psi})\mv{E}_m^H\bigg)\bigg) \\ &= \sum\limits_{k=1}^Kp_k^{\rm dl}\left(\sigma^2+\sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\mv{\Lambda}_m^{(m,m)}|\bar{u}_{k,m}|^2-\frac{\beta_k|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1} \right) \nonumber \\ & \quad + {\rm tr}\bigg(\mv{Q}\bigg(\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H+\sum\limits_{m=1}^M\mv{E}_m^H\mv{\Lambda}_m\mv{E}_m \nonumber \\ & \quad -\sum\limits_{m=1}^M2^{C_m}\left[\begin{array}{cc}\mv{0}_{(m-1)\times (m-1)} & \mv{0}_{(m-1)\times (M-m+1)} \\ \mv{0}_{(M-m+1)\times (m-1)} & \mv{\Lambda}_m^{(m:M,m:M)}\end{array}\right] \bigg)\bigg) + \sum\limits_{k=1}^K\beta_k \sigma^2, \label{eqn:downlink Lagrange} \end{align}where $\beta_k\geq 0$, $k=1,\ldots,K$, and $\mv{\Lambda}_m\in \mathbb{C}^{M\times M}\succeq \mv{0}$, $m=1,\ldots,M$, are the dual variables associated with constraints (\ref{eqn:downlink convex rate constraint}) and (\ref{eqn:downlink convex fronthaul constraint}), respectively. The dual function of problem (\ref{eqn:problem 2}) is thus formulated as \begin{align}\label{eqn:dual function problem 2} g(\{\beta_k\},\{\mv{\Lambda}_m\})=\min\limits_{p_k^{\rm dl}\geq 0, \forall k\in \mathcal{K}, \mv{Q}\succeq \mv{0}} ~ \mathcal{L}(\{p_k^{\rm dl}\},\mv{Q},\{\beta_k\},\{\mv{\Lambda}_m\}). \end{align}Finally, the dual problem of problem (\ref{eqn:problem 3}) is \begin{align}\hspace{-8pt} \mathop{\mathrm{maximize}}_{\{\beta_k\},\{\mv{\Lambda}_m\}} & ~ g(\{\beta_k\},\{\mv{\Lambda}_m\}) \label{eqn:dual problem of problem 2} \\ \mathrm {subject ~ to} & ~ \beta_k\geq 0, ~ \forall k, \\ & ~ \mv{\Lambda}_m\succeq \mv{0}, ~ \forall m. \end{align}Note that according to (\ref{eqn:downlink Lagrange}), the dual function is $g(\{\beta_k\},\{\mv{\Lambda}_m\})=\sum\limits_{k=1}^K\beta_k \sigma^2$ if and only if \begin{align} & \sigma^2+\sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\mv{\Lambda}_m^{(m,m)}|\bar{u}_{k,m}|^2-\frac{\beta_k|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}\geq 0, ~ \forall k, \label{eqn:opt con 1}\\ & \sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H+\sum\limits_{m=1}^M\mv{E}_m^H\mv{\Lambda}_m\mv{E}_m-\sum\limits_{m=1}^M2^{C_m}\left[\begin{array}{cc}\mv{0}_{(m-1)\times (m-1)} & \mv{0}_{(m-1)\times (M-m+1)} \\ \mv{0}_{(M-m+1)\times (m-1)} & \mv{\Lambda}_m^{(m:M,m:M)}\end{array}\right] \succeq \mv{0}. \label{eqn:opt con 2} \end{align}Otherwise, the dual function is unbounded, i.e., $g(\{\beta_k\},\{\mv{\Lambda}_m\})=-\infty$. As a result, the optimal solution to problem (\ref{eqn:dual problem of problem 2}) must satisfy constraints (\ref{eqn:opt con 1}) and (\ref{eqn:opt con 2}). In this case, $g(\{\beta_k\},\{\mv{\Lambda}_m\})=\sum\limits_{k=1}^K\beta_k \sigma^2$.
This indicates that the dual problem of problem (\ref{eqn:problem 3}), i.e., problem (\ref{eqn:dual problem of problem 2}), is equivalent to problem (\ref{eqn:problem 4}). Proposition \ref{proposition2} is thus proved. \subsection{Proof of Proposition \ref{proposition3}}\label{appendix3} According to (\ref{eqn:A1}) and (\ref{eqn:A2}), we have \begin{align} \mv{A}_1=&2^{C_1}\mv{\Lambda}_1^{(1:M,1:M)}+\left[\begin{array}{cc}0 & \mv{0}_{1\times (M-1)} \\ \mv{0}_{(M-1)\times 1} & \mv{A}_2 \end{array}\right] \\ = & 2^{C_1}\mv{\Lambda}_1^{(1:M,1:M)}+2^{C_2}\left[\begin{array}{cc}0 & \mv{0}_{1\times (M-1)} \\ \mv{0}_{(M-1)\times 1} & \mv{\Lambda}_2^{(2:M,2:M)}\end{array}\right] + \left[\begin{array}{cc}\mv{0}_{2\times 2} & \mv{0}_{2\times (M-2)} \\ \mv{0}_{(M-2)\times 2} & \mv{A}_3 \end{array}\right] \\ \vdots & \nonumber \\ = & \sum\limits_{m=1}^M2^{C_m}\left[\begin{array}{cc}\mv{0}_{(m-1)\times (m-1)} & \mv{0}_{(m-1)\times (M-m+1)} \\ \mv{0}_{(M-m+1)\times (m-1)} & \mv{\Lambda}_m^{(m:M,m:M)}\end{array}\right]. \label{eqn:A3} \end{align}By combining (\ref{eqn:dual uplink fronthaul constraint 1}) and (\ref{eqn:A3}), we have (\ref{eqn:dual uplink fronthaul constraint}). As a result, problem (\ref{eqn:problem 4}) is equivalent to problem (\ref{eqn:problem 5}). Proposition \ref{proposition3} is thus proved. \subsection{Proof of Proposition \ref{lemma1}}\label{appendix4} We first show that at the optimal solution to problem (\ref{eqn:problem 5}), condition (\ref{eqn:optimal condition 4}) is true, i.e., \begin{align} \mv{B}=\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H+\sum\limits_{m=1}^M\mv{E}_m^H\mv{\Lambda}_m\mv{E}_m-\mv{A}_1=\mv{0}. \end{align} Suppose that at the optimal solution $\{\beta_k, \mv{A}_m, \mv{\Lambda}_m\}$ to problem (\ref{eqn:problem 5}), $\mv{B}\succeq \mv{0}$ but $\mv{B}\neq \mv{0}$. Then, let $\bar{m}$ denote the index of the first positive diagonal element of $\mv{B}$. Since $\mv{B}\succeq \mv{0}$, we have \begin{align} \mv{B}=\left[\begin{array}{cc} \mv{0}_{(\bar{m}-1)\times (\bar{m}-1)} & \mv{0}_{(\bar{m}-1)\times (M-\bar{m}+1)} \\ \mv{0}_{(M-\bar{m}+1)\times (\bar{m}-1)} & \mv{B}^{(\bar{m}:M,\bar{m}:M)} \end{array}\right]. \end{align} Consider another solution given by \begin{align} & \hat{\beta}_k=\beta_k, ~~~ \forall k, \label{eqn:opt1}\\ & \hat{\mv{\Lambda}}_m=\mv{\Lambda}_m, ~~~ \forall m\neq \bar{m}, \label{eqn:opt2}\\ & \hat{\mv{\Lambda}}_{\bar{m}}=\mv{\Lambda}_{\bar{m}}+\left[\begin{array}{cc} \mv{0}_{(\bar{m}-1)\times (\bar{m}-1)} & \mv{0}_{(\bar{m}-1)\times (M-\bar{m}+1)} \\ \mv{0}_{(M-\bar{m}+1)\times (\bar{m}-1)} & \mv{C}_{\bar{m}}\end{array}\right], \label{eqn:opt3}\\ & \hat{\mv{A}}_m=2^{C_m}\hat{\mv{\Lambda}}_m^{(m:M,m:M)}+\left[\begin{array}{cc}0 & \mv{0}_{1\times (M-m)} \\ \mv{0}_{(M-m)\times 1} & \hat{\mv{A}}_{m+1} \end{array}\right], ~~~ m=1,\ldots,M-1, \label{eqn:opt4}\\ & \hat{\mv{A}}_M=2^{C_M}\hat{\mv{\Lambda}}_M^{(M,M)}, \label{eqn:opt5} \end{align}where $\mv{C}_{\bar{m}}\in \mathbb{C}^{(M-\bar{m}+1)\times (M-\bar{m}+1)}$ in (\ref{eqn:opt3}) satisfies \begin{align}\label{eqn:lemma81} 2^{C_{\bar{m}}}\mv{C}_{\bar{m}}-\left[\begin{array}{cc} \mv{C}_{\bar{m}}^{(1,1)} & \mv{0}_{1\times (M-\bar{m})} \\ \mv{0}_{(M-\bar{m})\times 1} & \mv{0}_{(M-\bar{m})\times (M-\bar{m})}\end{array}\right]=\mv{B}^{(\bar{m}:M,\bar{m}:M)}. \end{align} Since $\mv{B}^{(\bar{m},\bar{m})}>0$, it follows from (\ref{eqn:lemma81}) that $\mv{C}_{\bar{m}}^{(1,1)}>0$.
Moreover, since $\mv{B}^{(\bar{m}:M,\bar{m}:M)} \succeq \mv{0}$ and $\mv{C}_{\bar{m}}^{(1,1)}>0$, according to (\ref{eqn:lemma81}) we have $\mv{C}_{\bar{m}}\succeq \mv{0}$. As a result, (\ref{eqn:opt3}) indicates that $\hat{\mv{\Lambda}}_{\bar{m}}\succeq \mv{0}$, i.e., constraint (\ref{eqn:nonegative Phi}) is satisfied. Further, it follows from (\ref{eqn:opt4}) and (\ref{eqn:opt5}) that \begin{align} \hat{\mv{A}}_1&=\mv{A}_1+2^{C_{\bar{m}}}\left[\begin{array}{cc} \mv{0}_{(\bar{m}-1)\times (\bar{m}-1)} & \mv{0}_{(\bar{m}-1)\times (M-\bar{m}+1)} \\ \mv{0}_{(M-\bar{m}+1)\times (\bar{m}-1)} & \mv{C}_{\bar{m}}\end{array}\right] \\ &=\mv{A}_1+\mv{B}+{\rm diag}([\mv{0}_{1\times (\bar{m}-1)},\mv{C}_{\bar{m}}^{(1,1)},\mv{0}_{1\times (M-\bar{m})}]) \\ &=\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H+\sum\limits_{m=1}^M\mv{E}_m^H\mv{\Lambda}_m\mv{E}_m+{\rm diag}([\mv{0}_{1\times (\bar{m}-1)},\mv{C}_{\bar{m}}^{(1,1)},\mv{0}_{1\times (M-\bar{m})}]) \\ &=\sigma^2\mv{I}+\sum\limits_{k=1}^K\hat{\beta}_k\mv{h}_k\mv{h}_k^H+\sum\limits_{m=1}^M\mv{E}_m^H\hat{\mv{\Lambda}}_m\mv{E}_m. \end{align} As a result, at the new solution, constraint (\ref{eqn:dual uplink fronthaul constraint 1}) is satisfied with equality. Lastly, since $\mv{C}_{\bar{m}}^{(1,1)}>0$, it follows from (\ref{eqn:opt2}) and (\ref{eqn:opt3}) that $\hat{\mv{\Lambda}}_m^{(m,m)}=\mv{\Lambda}_m^{(m,m)}$, $\forall m\neq \bar{m}$, and $\hat{\mv{\Lambda}}_{\bar{m}}^{(\bar{m},\bar{m})}>\mv{\Lambda}_{\bar{m}}^{(\bar{m},\bar{m})}$. Moreover, according to condition (\ref{eqn:assumption}), we have $\sum_{k=1}^K|\bar{u}_{k,\bar{m}}|^2>0$. As a result, at the new solution, constraint (\ref{eqn:dual uplink SINR constraint}) is satisfied with strict inequality for at least one $k$. To summarize, at the new solution, the same objective value of problem (\ref{eqn:problem 5}) is achieved due to (\ref{eqn:opt1}), and all the constraints are also satisfied. Moreover, since constraint (\ref{eqn:dual uplink SINR constraint}) is satisfied with strict inequality, we can further find a better solution of $\beta_k$'s such that the objective value of problem (\ref{eqn:problem 5}) is increased while constraint (\ref{eqn:dual uplink SINR constraint}) is satisfied with equality. This contradicts the assumption that the optimal solution to problem (\ref{eqn:problem 5}) is $\{\beta_k, \mv{A}_m, \mv{\Lambda}_m\}$. As a result, at the optimal solution to problem (\ref{eqn:problem 5}), constraint (\ref{eqn:dual uplink fronthaul constraint 1}) must hold with equality. Next, we show that at the optimal solution to problem (\ref{eqn:problem 5}), conditions (\ref{eqn:optimal condition 2}) and (\ref{eqn:optimal condition}) are true. First, according to (\ref{eqn:optimal condition 4}), at the optimal solution, we have $\mv{A}_1\succ \mv{0}$, and thus \begin{align}\label{eqn:lemma82} \mv{A}_1^{(1,1)}>0. \end{align}Then, we show that at the optimal solution, condition (\ref{eqn:optimal condition}) is true for $m=1$, i.e., \begin{align}\label{eqn:optimal condition 1} 2^{C_1}\mv{\Lambda}_1=\frac{\mv{A}_1^{(1:M,1)}\mv{A}_1^{(1,1:M)}}{\mv{A}_1^{(1,1)}}. \end{align} On the right-hand side (RHS) of (\ref{eqn:A1}), given each $m$, the first row and the first column of the overall matrix are contributed only by the first row and the first column of $\mv{\Lambda}_m^{(m:M,m:M)}$.
With $\mv{A}_1^{(1,1)}>0$, to make (\ref{eqn:A1}) hold, the optimal solution $2^{C_1}\mv{\Lambda}_1$ must be of the following form \begin{align}\label{eqn:A4} 2^{C_1}\mv{\Lambda}_1=\frac{\mv{A}_1^{(1:M,1)}\mv{A}_1^{(1,1:M)}}{\mv{A}_1^{(1,1)}}+\left[\begin{array}{cc}0 & \mv{0}_{1\times (M-1)} \\ \mv{0}_{(M-1)\times 1} & \mv{T}_1\end{array}\right], \end{align}where $\mv{T}_1$ satisfies \begin{align}\label{eqn:T} \left[\begin{array}{cc}0 & \mv{0}_{1\times (M-1)} \\ \mv{0}_{(M-1)\times 1} & \mv{T}_1+\mv{A}_{2}\end{array}\right]=\mv{A}_1-\frac{\mv{A}_1^{(1:M,1)}\mv{A}_1^{(1,1:M)}}{\mv{A}_1^{(1,1)}}. \end{align}It can be shown from (\ref{eqn:T}) that $\mv{T}_1+\mv{A}_{2}\succeq \mv{0}$. \begin{lemma}\label{lemma7} Suppose that $\mv{a}\in \mathbb{C}^{M\times 1}$ with $\mv{a}^{(1)}>0$ and $\mv{B}\in \mathbb{C}^{(M-1)\times (M-1)}$. Then, \begin{align}\label{eqn:rank1} \mv{a}\mv{a}^H+\left[\begin{array}{cc}0 & \mv{0}_{1\times (M-1)} \\ \mv{0}_{(M-1)\times 1} & \mv{B}\end{array}\right]\succeq \mv{0}, \end{align}implies $\mv{B}\succeq \mv{0}$. \end{lemma} \begin{IEEEproof} Suppose that $\mv{B}$ is not a positive semidefinite matrix. Then, there exists some $\mv{x}\in \mathbb{C}^{(M-1)\times 1}$ such that $\mv{x}^H\mv{B}\mv{x}<0$. Next, with $\mv{a}^{(1)}>0$, define $\mv{y}=[-\mv{x}^H\mv{a}^{(2:M)}/\mv{a}^{(1)},\mv{x}^H]^H\in \mathbb{C}^{M\times 1}$. It can then be shown that \begin{align} \mv{y}^H \left(\mv{a}\mv{a}^H+\left[\begin{array}{cc}0 & \mv{0}_{1\times (M-1)} \\ \mv{0}_{(M-1)\times 1} & \mv{B}\end{array}\right]\right)\mv{y}=\mv{x}^H\mv{B}\mv{x}<0. \end{align}As a result, as long as $\mv{B}$ is not a positive semidefinite matrix, (\ref{eqn:rank1}) does not hold. Lemma \ref{lemma7} is thus proved. \end{IEEEproof} According to Lemma \ref{lemma7} and (\ref{eqn:lemma82}), if $2^{C_1}\mv{\Lambda}_1$ defined in (\ref{eqn:A4}) is a positive semidefinite matrix, we must have \begin{align} \mv{T}_1\succeq \mv{0}. \end{align} Now, suppose that the optimal solution $\{\beta_k, \mv{\Lambda}_m, \mv{A}_m\}$ to problem (\ref{eqn:problem 5}) satisfies $\mv{T}_1\neq \mv{0}$. Define $\tilde{m}$ as the index of the first positive diagonal element in $\mv{T}_1$. Since $\mv{T}_1\succeq \mv{0}$, $\mv{T}_1$ must be of the form \begin{align} \mv{T}_1=\left[\begin{array}{cc}\mv{0}_{(\tilde{m}-1)\times (\tilde{m}-1)} & \mv{0}_{(\tilde{m}-1)\times (M-\tilde{m})} \\ \mv{0}_{(M-\tilde{m})\times (\tilde{m}-1)} & \mv{T}_1^{(\tilde{m}:M-1,\tilde{m}:M-1)}\end{array}\right], \end{align}with $\mv{T}_1^{(\tilde{m}:M-1,\tilde{m}:M-1)}\succeq \mv{0}$.
Then, consider a new solution in which $\tilde{\beta}_k=\beta_k$, $\forall k$, and $\tilde{\mv{\Lambda}}_m$'s are given as follows \begin{align} & \tilde{\mv{\Lambda}}_1=\mv{\Lambda}_1-\left[\begin{array}{cc}0 & \mv{0}_{1\times (M-1)} \\ \mv{0}_{(M-1)\times 1} & \mv{T}_1/2^{C_1}\end{array}\right]=\frac{\mv{A}_1^{(1:M,1)}\mv{A}_1^{(1,1:M)}}{2^{C_1}\mv{A}_1^{(1,1)}}\succeq \mv{0}, \label{eqn:new solution 1}\\ & \tilde{\mv{\Lambda}}_{1+\tilde{m}}=\mv{\Lambda}_{1+\tilde{m}}+\left[\begin{array}{cc}0 & \mv{0}_{1\times (M-1)} \\ \mv{0}_{(M-1)\times 1} & \mv{T}_1/2^{C_1}\end{array}\right] \nonumber \\ & ~~~~~~~~~ =\mv{\Lambda}_{1+\tilde{m}}+\left[\begin{array}{cc}\mv{0}_{\tilde{m}\times \tilde{m}} & \mv{0}_{\tilde{m}\times (M-\tilde{m})} \\ \mv{0}_{(M-\tilde{m})\times \tilde{m}} & \mv{T}_1^{(\tilde{m}:M-1,\tilde{m}:M-1)}/2^{C_1}\end{array}\right]\succeq \mv{0}, \label{eqn:new solution 2} \\ & \tilde{\mv{\Lambda}}_m=\mv{\Lambda}_m, ~ \forall m\neq 1 ~ {\rm and} ~ m\neq 1+\tilde{m}, \label{eqn:new solution 3} \end{align}and $\tilde{\mv{A}}_m$'s are given as follows \begin{align} \tilde{\mv{A}}_m=\left\{\begin{array}{ll}\mv{A}_m, & {\rm if} ~ m=1 ~ {\rm or} ~ 1+\tilde{m}<m\leq M, \\ \mv{A}_m+\left[\begin{array}{cc}\mv{0}_{(1+\tilde{m}-m)\times (1+\tilde{m}-m)} & \mv{0}_{(1+\tilde{m}-m)\times (M-\tilde{m})} \\ \mv{0}_{(M-\tilde{m})\times (1+\tilde{m}-m)} & \mv{T}_1^{(\tilde{m}:M-1,\tilde{m}:M-1)}/2^{C_1}\end{array}\right], & {\rm if} ~ 1<m<1+\tilde{m}, \\ \mv{A}_m+\mv{T}_1^{(\tilde{m}:M-1,\tilde{m}:M-1)}/2^{C_1}, & {\rm if} ~ m=1+\tilde{m}.\end{array} \right. \end{align} It can be shown that the above solution satisfies constraints (\ref{eqn:A1}), (\ref{eqn:A2}), (\ref{eqn:nonegative beta}), and (\ref{eqn:nonegative Phi}) in problem (\ref{eqn:problem 5}). Moreover, with this new solution, we have \begin{align} & \tilde{\mv{\Lambda}}_1^{(1,1)}=\mv{\Lambda}_1^{(1,1)}-0=\mv{\Lambda}_1^{(1,1)}, \\ & \tilde{\mv{\Lambda}}_{1+\tilde{m}}^{(1+\tilde{m},1+\tilde{m})}=\mv{\Lambda}_{1+\tilde{m}}^{(1+\tilde{m},1+\tilde{m})}+\mv{T}_1^{(\tilde{m},\tilde{m})}/2^{C_1},\\ & \tilde{\mv{\Lambda}}_m^{(m,m)}=\mv{\Lambda}_m^{(m,m)}, ~ \forall m\neq 1 ~ {\rm and} ~ m\neq 1+\tilde{m}. \end{align}Since $\mv{T}_1^{(\tilde{m},\tilde{m})}>0$, it follows that $\tilde{\mv{\Lambda}}_m^{(m,m)}=\mv{\Lambda}_m^{(m,m)}$ if $m\neq 1+\tilde{m}$, and $\tilde{\mv{\Lambda}}_m^{(m,m)} > \mv{\Lambda}_m^{(m,m)}$ if $m=1+\tilde{m}$. Moreover, according to (\ref{eqn:assumption}), we have $\sum_{k=1}^K|\bar{u}_{k,1+\tilde{m}}|^2>0$. As a result, at the new solution, constraint (\ref{eqn:dual uplink SINR constraint}) in problem (\ref{eqn:problem 5}) is satisfied with strict inequality for at least one $k$. Last, since \begin{align} \sum\limits_{m=1}^M\mv{E}_m^H\tilde{\mv{\Lambda}}_m\mv{E}_m \succeq \sum\limits_{m=1}^M\mv{E}_m^H\mv{\Lambda}_m\mv{E}_m, \end{align}constraint (\ref{eqn:dual uplink fronthaul constraint 1}) in problem (\ref{eqn:problem 5}) is satisfied. To summarize, with the new solution $\{\tilde{\beta}_k, \tilde{\mv{\Lambda}}_m, \tilde{\mv{A}}_m\}$, all the constraints in problem (\ref{eqn:problem 5}) are satisfied, while the optimal objective value is also achieved. Moreover, since constraint (\ref{eqn:dual uplink SINR constraint}) is satisfied with strict inequality for some $k$'s, we can further find a better solution of $\beta_k$'s that increases the objective value while satisfying constraint (\ref{eqn:dual uplink SINR constraint}) with equality.
This contradicts the assumption that the optimal solution to problem (\ref{eqn:problem 5}) is $\{\beta_k, \mv{\Lambda}_m, \mv{A}_m\}$. As a result, at the optimal solution to problem (\ref{eqn:problem 5}), condition (\ref{eqn:optimal condition 1}) must hold. Given (\ref{eqn:optimal condition 1}), it follows from (\ref{eqn:A1}) that \begin{align} \left[\begin{array}{cc}0 & \mv{0}_{1\times (M-1)} \\ \mv{0}_{(M-1)\times 1} & \mv{A}_2\end{array}\right]=\mv{A}_1-\frac{\mv{A}_1^{(1:M,1)}\mv{A}_1^{(1,1:M)}}{\mv{A}_1^{(1,1)}}. \end{align} Since ${\rm rank}(\mv{A}_1)=M$ and ${\rm rank}(\mv{A}_1^{(1:M,1)}\mv{A}_1^{(1,1:M)})=1$, we have ${\rm rank}(\mv{A}_2)=M-1$. As a result, we have $\mv{A}_2\succ \mv{0}$, and thus $\mv{A}_2^{(1,1)}>0$. In the same way as (\ref{eqn:optimal condition 1}) was proved, we can show that with $\mv{A}_2^{(1,1)}>0$, condition (\ref{eqn:optimal condition}) must hold for $m=2$. Then, in the same way as $\mv{A}_2\succ \mv{0}$ was proved, we can show that if condition (\ref{eqn:optimal condition}) holds for $m=2$, then $\mv{A}_3\succ \mv{0}$. Applying this argument repeatedly shows that at the optimal solution to problem (\ref{eqn:problem 5}), conditions (\ref{eqn:optimal condition 2}) and (\ref{eqn:optimal condition}) are true. Proposition \ref{lemma1} is thus proved. \subsection{Proof of Proposition \ref{proposition4}}\label{appendix5} According to (\ref{eqn:A1}), (\ref{eqn:A2}) and (\ref{eqn:optimal condition}), the optimal solution to problem (\ref{eqn:problem 5}) must satisfy \begin{align} \left[\begin{array}{cc}0 & \mv{0}_{1\times (M-m+1)} \\ \mv{0}_{(M-m+1)\times 1} & \mv{A}_{m} \end{array}\right] = & \mv{A}_{m-1}-2^{C_{m-1}}\mv{\Lambda}_{m-1}^{(m-1:M,m-1:M)} \\ = & \mv{A}_{m-1} - \frac{\mv{A}_{m-1}^{(1:M-m+2,1)}\mv{A}_{m-1}^{(1,1:M-m+2)}}{\mv{A}_{m-1}^{(1,1)}}, ~ m=2,\ldots,M. \label{eqn:optimal solution 1} \end{align}In other words, we have \begin{align}\label{eqn:optimal solution 2} \mv{A}_m=\mv{A}_{m-1}^{(2:M-m+2,2:M-m+2)} - \frac{\mv{A}_{m-1}^{(2:M-m+2,1)}\mv{A}_{m-1}^{(1,2:M-m+2)}}{\mv{A}_{m-1}^{(1,1)}}, ~ m=2,\ldots,M. \end{align}It then follows from (\ref{eqn:optimal solution 2}) that \begin{align}\label{eqn:optimal solution 3} \mv{A}_m^{(1,1)}=\mv{A}_{m-1}^{(2,2)}-\frac{\mv{A}_{m-1}^{(2,1)}\mv{A}_{m-1}^{(1,2)}}{\mv{A}_{m-1}^{(1,1)}}, ~ m=2,\ldots,M. \end{align} \begin{lemma}\label{lemma2} If (\ref{eqn:optimal solution 2}) holds and $\mv{A}_m \succ \mv{0}$, $\forall m$, then for any $m=2,\ldots,M$, we have \begin{align} & \mv{A}_n^{(m-n+1,m-n+1)}-\mv{A}_n^{(m-n+1,1:m-n)}(\mv{A}_n^{(1:m-n,1:m-n)})^{-1}\mv{A}_n^{(1:m-n,m-n+1)} \nonumber \\ &= \mv{A}_{n-1}^{(m-n+2,m-n+2)}-\mv{A}_{n-1}^{(m-n+2,1:m-n+1)}(\mv{A}_{n-1}^{(1:m-n+1,1:m-n+1)})^{-1}\mv{A}_{n-1}^{(1:m-n+1,m-n+2)}, ~ n=2,\ldots,m.
\label{eqn:optimal solution 4} \end{align} \end{lemma} \begin{IEEEproof} First, if (\ref{eqn:optimal solution 2}) holds, it then follows that \begin{align} & \mv{A}_n^{(m-n+1,m-n+1)}-\mv{A}_n^{(m-n+1,1:m-n)}(\mv{A}_n^{(1:m-n,1:m-n)})^{-1}\mv{A}_n^{(1:m-n,m-n+1)} \nonumber \\ & = \mv{A}_{n-1}^{(m-n+2,m-n+2)}-\frac{\mv{A}_{n-1}^{(m-n+2,1)}\mv{A}_{n-1}^{(1,m-n+2)}}{\mv{A}_{n-1}^{(1,1)}}-\left(\mv{A}_{n-1}^{(m-n+2,2:m-n+1)}-\frac{\mv{A}_{n-1}^{(m-n+2,1)}\mv{A}_{n-1}^{(1,2:m-n+1)}}{\mv{A}_{n-1}^{(1,1)}}\right) \nonumber \\ &\quad \times \mv{B}_{n-1,m}^{-1}\left(\mv{A}_{n-1}^{(2:m-n+1,m-n+2)}-\frac{\mv{A}_{n-1}^{(1,m-n+2)}\mv{A}_{n-1}^{(2:m-n+1,1)}}{\mv{A}_{n-1}^{(1,1)}}\right), ~ n=2,\ldots,m, \label{eqn:optimal solution 21} \end{align}where \begin{align}\label{eqn:optimal solution 22} \mv{B}_{n-1,m}=\mv{A}_{n-1}^{(2:m-n+1,2:m-n+1)}-\frac{\mv{A}_{n-1}^{(2:m-n+1,1)}\mv{A}_{n-1}^{(1,2:m-n+1)}}{\mv{A}_{n-1}^{(1,1)}}. \end{align} On the other hand, we have \begin{align} & \mv{A}_{n-1}^{(m-n+2,m-n+2)}-\mv{A}_{n-1}^{(m-n+2,1:m-n+1)}(\mv{A}_{n-1}^{(1:m-n+1,1:m-n+1)})^{-1}\mv{A}_{n-1}^{(1:m-n+1,m-n+2)} \nonumber \\ & = \mv{A}_{n-1}^{(m-n+2,m-n+2)}-[\mv{A}_{n-1}^{(m-n+2,1)},\mv{A}_{n-1}^{(m-n+2,2:m-n+1)}]\nonumber \\ & \quad \times \left(\begin{array}{cc}\mv{A}_{n-1}^{(1,1)} & \mv{A}_{n-1}^{(1,2:m-n+1)} \\ \mv{A}_{n-1}^{(2:m-n+1,1)} & \mv{A}_{n-1}^{(2:m-n+1,2:m-n+1)}\end{array}\right)^{-1} \left[\begin{array}{c} \mv{A}_{n-1}^{(1,m-n+2)} \\ \mv{A}_{n-1}^{(2:m-n+1,m-n+2)}\end{array}\right], ~ n=2,\ldots,m. \label{eqn:optimal solution 23} \end{align} \begin{lemma}\label{lemma6} \cite[Section 0.7.3]{Horn12} Consider an invertible matrix \begin{align} \mv{X}=\left[\begin{array}{cc}\mv{C} & \mv{D} \\ \mv{E} & \mv{F}\end{array}\right]. \end{align}Then we have \begin{align} \mv{X}^{-1}=& \left[\begin{array}{cc} \mv{C}^{-1}+\mv{C}^{-1}\mv{D}(\mv{F}-\mv{E}\mv{C}^{-1}\mv{D})^{-1}\mv{E}\mv{C}^{-1} & -\mv{C}^{-1}\mv{D}(\mv{F}-\mv{E}\mv{C}^{-1}\mv{D})^{-1} \\ -(\mv{F}-\mv{E}\mv{C}^{-1}\mv{D})^{-1} \mv{E}\mv{C}^{-1} & (\mv{F}-\mv{E}\mv{C}^{-1}\mv{D})^{-1} \end{array}\right]. 
\end{align} \end{lemma} According to Lemma \ref{lemma6}, it follows that \begin{align} & [\mv{A}_{n-1}^{(m-n+2,1)},\mv{A}_{n-1}^{(m-n+2,2:m-n+1)}]\left(\begin{array}{cc}\mv{A}_{n-1}^{(1,1)} & \mv{A}_{n-1}^{(1,2:m-n+1)} \\ \mv{A}_{n-1}^{(2:m-n+1,1)} & \mv{A}_{n-1}^{(2:m-n+1,2:m-n+1)}\end{array}\right)^{-1} \left[\begin{array}{c} \mv{A}_{n-1}^{(1,m-n+2)} \\ \mv{A}_{n-1}^{(2:m-n+1,m-n+2)}\end{array}\right] \nonumber \\ & = [\mv{A}_{n-1}^{(m-n+2,1)},\mv{A}_{n-1}^{(m-n+2,2:m-n+1)}] \nonumber \\ & \quad \times \left[\begin{array}{cc}\frac{1}{\mv{A}_{n-1}^{(1,1)}}-\frac{\mv{A}_{n-1}^{(1,2:m-n+1)}\mv{B}_{n-1,m}^{-1}\mv{A}_{n-1}^{(2:m-n+1,1)}}{(\mv{A}_{n-1}^{(1,1)})^2} & -\frac{\mv{A}_{n-1}^{(1,2:m-n+1)}\mv{B}_{n-1,m}^{-1}}{\mv{A}_{n-1}^{(1,1)}} \\ -\frac{\mv{B}_{n-1,m}^{-1}\mv{A}_{n-1}^{(2:m-n+1,1)}}{\mv{A}_{n-1}^{(1,1)}} & \mv{B}_{n-1,m}^{-1}\end{array}\right] \nonumber \\ & \quad \times \left[\begin{array}{c} \mv{A}_{n-1}^{(1,m-n+2)} \\ \mv{A}_{n-1}^{(2:m-n+1,m-n+2)}\end{array}\right] \\ & = \frac{\mv{A}_{n-1}^{(m-n+2,1)}\mv{A}_{n-1}^{(1,m-n+2)}}{\mv{A}_{n-1}^{(1,1)}}+\frac{\mv{A}_{n-1}^{(m-n+2,1)}\mv{A}_{n-1}^{(1,m-n+2)}\mv{A}_{n-1}^{(1,2:m-n+1)}\mv{B}_{n-1,m}^{-1}\mv{A}_{n-1}^{(2:m-n+1,1)}}{(\mv{A}_{n-1}^{(1,1)})^2} \nonumber \\ & \quad -\frac{\mv{A}_{n-1}^{(1,m-n+2)}\mv{A}_{n-1}^{(m-n+2,2:m-n+1)}\mv{B}_{n-1,m}^{-1}\mv{A}_{n-1}^{(2:m-n+1,1)}}{\mv{A}_{n-1}^{(1,1)}} \nonumber \\ & \quad -\frac{\mv{A}_{n-1}^{(m-n+2,1)}\mv{A}_{n-1}^{(1,2:m-n+1)}\mv{B}_{n-1,m}^{-1}\mv{A}_{n-1}^{(2:m-n+1,m-n+2)}}{\mv{A}_{n-1}^{(1,1)}}\nonumber \\ & \quad +\mv{A}_{n-1}^{(m-n+2,2:m-n+1)}\mv{B}_{n-1,m}^{-1}\mv{A}_{n-1}^{(2:m-n+1,m-n+2)}, ~ n=2,\ldots,m-1. \label{eqn:optimal solution 24} \end{align}By taking (\ref{eqn:optimal solution 24}) into (\ref{eqn:optimal solution 23}), it follows that \begin{align} & \mv{A}_{n-1}^{(m-n+2,m-n+2)}-\mv{A}_{n-1}^{(m-n+2,1:m-n+1)}(\mv{A}_{n-1}^{(1:m-n+1,1:m-n+1)})^{-1}\mv{A}_{n-1}^{(1:m-n+1,m-n+2)} \nonumber \\ &= \mv{A}_{n-1}^{(m-n+2,m-n+2)}-\frac{\mv{A}_{n-1}^{(m-n+2,1)}\mv{A}_{n-1}^{(1,m-n+2)}}{\mv{A}_{n-1}^{(1,1)}}-\left(\mv{A}_{n-1}^{(m-n+2,2:m-n+1)}-\frac{\mv{A}_{n-1}^{(m-n+2,1)}\mv{A}_{n-1}^{(1,2:m-n+1)}}{\mv{A}_{n-1}^{(1,1)}}\right) \nonumber \\ &\quad \times \mv{B}_{n-1,m}^{-1}\left(\mv{A}_{n-1}^{(2:m-n+1,m-n+2)}-\frac{\mv{A}_{n-1}^{(1,m-n+2)}\mv{A}_{n-1}^{(2:m-n+1,1)}}{\mv{A}_{n-1}^{(1,1)}}\right), ~ n=2,\ldots,m. \label{eqn:optimal solution 25} \end{align} According to (\ref{eqn:optimal solution 21}) and (\ref{eqn:optimal solution 25}), it is concluded that if (\ref{eqn:optimal solution 2}) is true, then (\ref{eqn:optimal solution 4}) holds. Lemma \ref{lemma2} is thus proved. \end{IEEEproof} By combining (\ref{eqn:optimal solution 3}), Proposition \ref{lemma1}, and Lemma \ref{lemma2}, the optimal solution to problem (\ref{eqn:problem 5}) satisfies \begin{align} \mv{A}_m^{(1,1)}=&\mv{A}_{m-1}^{(2,2)}-\frac{\mv{A}_{m-1}^{(2,1)}\mv{A}_{m-1}^{(1,2)}}{\mv{A}_{m-1}^{(1,1)}} \\ =& \mv{A}_{m-2}^{(3,3)}-\mv{A}_{m-2}^{(3,1:2)}(\mv{A}_{m-2}^{(1:2,1:2)})^{-1}\mv{A}_{m-2}^{(1:2,3)} \\ \vdots & \nonumber \\ =& \mv{A}_1^{(m,m)}-\mv{A}_1^{(m,1:m-1)}(\mv{A}_1^{(1:m-1,1:m-1)})^{-1}\mv{A}_1^{(1:m-1,m)}, ~ m=2,\ldots,M. \label{eqn:optimal solution 5} \end{align}Moreover, according to (\ref{eqn:optimal condition}), it follows that \begin{align}\label{eqn:optimal solution 6} 2^{C_m}\mv{\Lambda}_m^{(m,m)}=\mv{A}_m^{(1,1)}, ~ \forall m. 
\end{align}By combining (\ref{eqn:optimal solution 5}) and (\ref{eqn:optimal solution 6}), it can be concluded that the optimal solution to problem (\ref{eqn:problem 5}) must satisfy \begin{align}\label{eqn:optimal solution 7} 2^{C_m}\mv{\Lambda}_m^{(m,m)}=\left\{\begin{array}{ll}\mv{A}_1^{(1,1)} & {\rm if} ~ m=1, \\ \mv{A}_1^{(m,m)}-\mv{A}_1^{(m,1:m-1)}(\mv{A}_1^{(1:m-1,1:m-1)})^{-1}\mv{A}_1^{(1:m-1,m)}, & {\rm if} ~ m=2,\ldots,M.\end{array}\right. \end{align} To summarize, according to constraints (\ref{eqn:A1}) and (\ref{eqn:A2}) in problem (\ref{eqn:problem 5}), the optimal $\mv{\Lambda}_m^{(m,m)}$'s are just functions of $\mv{A}_1$, as shown in (\ref{eqn:optimal solution 7}). Moreover, constraints (\ref{eqn:dual uplink fronthaul constraint 1}) and (\ref{eqn:dual uplink SINR constraint}) depend only on the $\mv{\Lambda}_m^{(m,m)}$'s and are independent of the other elements of the $\mv{\Lambda}_m$'s. As a result, problem (\ref{eqn:problem 5}) is equivalent to the following problem \begin{align}\hspace{-8pt} \mathop{\mathrm{maximize}}_{\{\beta_k\},\{\mv{\Lambda}_m^{(m,m)}\},\mv{A}_1} & ~~~ \sum\limits_{k=1}^K\beta_k \sigma^2 \label{eqn:problem 8} \\ \mathrm {subject ~ to} \ \ \ & ~ \mv{A}_1\succ \mv{0}, \\ \ \ \ & ~ (\ref{eqn:dual uplink SINR constraint}), ~ (\ref{eqn:nonegative beta}), ~ (\ref{eqn:dual uplink fronthaul constraint 1}), ~ (\ref{eqn:positive Phi}), ~ (\ref{eqn:optimal solution 7}). \nonumber \end{align}As compared to problem (\ref{eqn:problem 5}), the optimization variables $\mv{A}_2,\ldots,\mv{A}_M$ disappear in problem (\ref{eqn:problem 8}). Moreover, constraints (\ref{eqn:A1}) and (\ref{eqn:A2}) reduce to constraint (\ref{eqn:optimal solution 7}). \begin{lemma}\label{lemma3} Consider $\mv{X}\in \mathbb{C}^{M\times M}$ and $\mv{Y}\in \mathbb{C}^{M\times M}$. If $\mv{X}\succ \mv{0}$, $\mv{Y}\succ \mv{0}$, and $\mv{X}\succeq \mv{Y}$, it then follows that \begin{equation} \mv{X}^{(1,1)}\geq \mv{Y}^{(1,1)}, \label{eqn:lemma 31} \end{equation} and \begin{align} & \mv{X}^{(m,m)}-\mv{X}^{(m,1:m-1)}(\mv{X}^{(1:m-1,1:m-1)})^{-1}\mv{X}^{(1:m-1,m)} \nonumber \\ & \quad \geq \mv{Y}^{(m,m)}-\mv{Y}^{(m,1:m-1)}(\mv{Y}^{(1:m-1,1:m-1)})^{-1}\mv{Y}^{(1:m-1,m)}, ~ m=2,\ldots,M. \label{eqn:optimal solution 8} \end{align} \end{lemma} \begin{IEEEproof} (\ref{eqn:lemma 31}) directly follows from $\mv{X}\succeq \mv{Y}$. In the following, we prove (\ref{eqn:optimal solution 8}). Since $\mv{X}\succ \mv{0}$, $\mv{Y}\succ \mv{0}$, and $\mv{X}\succeq \mv{Y}$, we have $\mv{X}^{(1:m,1:m)}\succ \mv{0}$, $\mv{Y}^{(1:m,1:m)}\succ \mv{0}$, and $\mv{X}^{(1:m,1:m)}\succeq \mv{Y}^{(1:m,1:m)}$, $\forall m$.
As a result, the inverses of the $\mv{X}^{(1:m-1,1:m-1)}$'s and $\mv{Y}^{(1:m-1,1:m-1)}$'s exist, $\forall m\geq 2$, and \begin{align} & \mv{X}^{(m,m)}-\mv{X}^{(m,1:m-1)}(\mv{X}^{(1:m-1,1:m-1)})^{-1}\mv{X}^{(1:m-1,m)}\nonumber \\ &= [-\mv{X}^{(m,1:m-1)}(\mv{X}^{(1:m-1,1:m-1)})^{-1}, 1]\mv{X}^{(1:m,1:m)}[-\mv{X}^{(m,1:m-1)}(\mv{X}^{(1:m-1,1:m-1)})^{-1}, 1]^H \\ & \geq [-\mv{X}^{(m,1:m-1)}(\mv{X}^{(1:m-1,1:m-1)})^{-1}, 1]\mv{Y}^{(1:m,1:m)}[-\mv{X}^{(m,1:m-1)}(\mv{X}^{(1:m-1,1:m-1)})^{-1}, 1]^H \\ &= \mv{Y}^{(m,m)}-\mv{Y}^{(m,1:m-1)}(\mv{Y}^{(1:m-1,1:m-1)})^{-1}\mv{Y}^{(1:m-1,m)}+ \nonumber \\ & \quad (-\mv{X}^{(m,1:m-1)}(\mv{X}^{(1:m-1,1:m-1)})^{-1}+\mv{Y}^{(m,1:m-1)}(\mv{Y}^{(1:m-1,1:m-1)})^{-1})\times \mv{Y}^{(1:m-1,1:m-1)}\times \nonumber \\ & \quad (-\mv{X}^{(m,1:m-1)}(\mv{X}^{(1:m-1,1:m-1)})^{-1}+\mv{Y}^{(m,1:m-1)}(\mv{Y}^{(1:m-1,1:m-1)})^{-1})^H \\ & \geq \mv{Y}^{(m,m)}-\mv{Y}^{(m,1:m-1)}(\mv{Y}^{(1:m-1,1:m-1)})^{-1}\mv{Y}^{(1:m-1,m)}, ~~~ m=2,\ldots,M. \end{align} Lemma \ref{lemma3} is thus proved. \end{IEEEproof} According to Lemma \ref{lemma3}, (\ref{eqn:dual uplink fronthaul constraint 1}) and (\ref{eqn:optimal solution 7}) indicate that \begin{align} & \mv{\Omega}^{(m,m)}-\mv{\Omega}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\mv{\Omega}^{(1:m-1,m)} \nonumber \\ & \quad \geq \mv{A}_1^{(m,m)}-\mv{A}_1^{(m,1:m-1)}(\mv{A}_1^{(1:m-1,1:m-1)})^{-1}\mv{A}_1^{(1:m-1,m)}=2^{C_m}\mv{\Lambda}_m^{(m,m)}, ~ m=1,\ldots,M. \label{eqn:optimal solution 9} \end{align}This corresponds to constraint (\ref{eqn:eqv dual uplink fronthaul constraint}). As a result, any feasible solution to problem (\ref{eqn:problem 8}) is also feasible for problem (\ref{eqn:problem 6}). In other words, the optimal value of problem (\ref{eqn:problem 6}) is no smaller than that of problem (\ref{eqn:problem 8}). Next, we show that the optimal value of problem (\ref{eqn:problem 6}) is no larger than that of problem (\ref{eqn:problem 8}). \begin{lemma}\label{lemma4} The optimal solution to problem (\ref{eqn:problem 6}) must satisfy \begin{align}\label{eqn:optimal solution 10} \mv{\Omega}^{(m,m)}-\mv{\Omega}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\mv{\Omega}^{(1:m-1,m)}=2^{C_m}\mv{\Lambda}_m^{(m,m)}, ~ m=1,\ldots,M. \end{align} \end{lemma} \begin{IEEEproof} According to (\ref{eqn:Omega}), we have \begin{align} \mv{\Omega}^{(m,m)}=[\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H]^{(m,m)}+\mv{\Lambda}_m^{(m,m)}, ~ \forall m. \end{align} Therefore, we can re-express constraint (\ref{eqn:eqv dual uplink fronthaul constraint}) in problem (\ref{eqn:problem 6}) as \begin{align} \left[\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H\right]^{(m,m)}-\mv{\Omega}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\mv{\Omega}^{(1:m-1,m)}\geq (2^{C_m}-1)\mv{\Lambda}_m^{(m,m)}, ~ \forall m. \end{align}Note that $[\sigma^2\mv{I}+\sum_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H]^{(m,m)}$ and $\mv{\Omega}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\mv{\Omega}^{(1:m-1,m)}$ are not functions of $\mv{\Lambda}_m^{(m,m)}$, $\forall m$. Suppose that for a given feasible solution to problem (\ref{eqn:problem 6}), denoted by $\beta_k$'s and $\mv{\Lambda}_m^{(m,m)}$'s, there exists an index $\bar{m}$, $1\leq \bar{m} \leq M$, such that \begin{align} \left[\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H\right]^{(\bar{m},\bar{m})}-\mv{\Omega}^{(\bar{m},1:\bar{m}-1)}(\mv{\Omega}^{(1:\bar{m}-1,1:\bar{m}-1)})^{-1}\mv{\Omega}^{(1:\bar{m}-1,\bar{m})}> (2^{C_{\bar{m}}}-1)\mv{\Lambda}_{\bar{m}}^{(\bar{m},\bar{m})}.
\end{align} Then, we consider another solution where $\tilde{\beta}_k=\beta_k$, $\forall k$, and $\tilde{\mv{\Lambda}}_m^{(m,m)}=\mv{\Lambda}_m^{(m,m)}$, $\forall m\neq \bar{m}$, while \begin{align} & \tilde{\mv{\Lambda}}_{\bar{m}}^{(\bar{m},\bar{m})}=\frac{\left[\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H\right]^{(\bar{m},\bar{m})}-\tilde{\mv{\Omega}}^{(\bar{m},1:\bar{m}-1)}(\tilde{\mv{\Omega}}^{(1:\bar{m}-1,1:\bar{m}-1)})^{-1}\tilde{\mv{\Omega}}^{(1:\bar{m}-1,\bar{m})}}{2^{C_{\bar{m}}}-1}, \\ & \tilde{\mv{\Omega}}=\sigma^2\mv{I}+\sum\limits_{k=1}^K\tilde{\beta}_k\mv{h}_k\mv{h}_k^H+{\rm diag}(\tilde{\mv{\Lambda}}_1^{(1,1)},\ldots,\tilde{\mv{\Lambda}}_M^{(M,M)}). \end{align}As a result, $\tilde{\mv{\Lambda}}_{\bar{m}}^{(\bar{m},\bar{m})}>\mv{\Lambda}_{\bar{m}}^{(\bar{m},\bar{m})}$. Consider the above new solution. First, it can be shown that \begin{align} \tilde{\mv{\Omega}}^{(m_1,m_2)}=\left\{\begin{array}{ll}\mv{\Omega}^{(m_1,m_2)}, & {\rm if} ~ (m_1,m_2)\neq (\bar{m},\bar{m}), \\ \mv{\Omega}^{(\bar{m},\bar{m})}-\mv{\Lambda}_{\bar{m}}^{(\bar{m},\bar{m})}+\tilde{\mv{\Lambda}}_{\bar{m}}^{(\bar{m},\bar{m})}>\mv{\Omega}^{(\bar{m},\bar{m})}, & {\rm otherwise}.\end{array}\right. \end{align} As a result, at the new solution, constraint (\ref{eqn:eqv dual uplink fronthaul constraint}) still holds $\forall m\leq \bar{m}$. For $m>\bar{m}$, we have \begin{align} 2^{C_m}\tilde{\mv{\Lambda}}_m^{(m,m)}&=2^{C_m}\mv{\Lambda}_m^{(m,m)} \\ & \leq \mv{\Omega}^{(m,m)}-\mv{\Omega}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\mv{\Omega}^{(1:m-1,m)} \\ & = \tilde{\mv{\Omega}}^{(m,m)}-\tilde{\mv{\Omega}}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\tilde{\mv{\Omega}}^{(1:m-1,m)} \\ &\leq \tilde{\mv{\Omega}}^{(m,m)}-\tilde{\mv{\Omega}}^{(m,1:m-1)}(\tilde{\mv{\Omega}}^{(1:m-1,1:m-1)})^{-1}\tilde{\mv{\Omega}}^{(1:m-1,m)}, \label{eqn:lemma41} \end{align}where (\ref{eqn:lemma41}) holds because $\tilde{\mv{\Omega}} \succeq \mv{\Omega}$. Next, consider constraint (\ref{eqn:dual uplink SINR constraint}) in problem (\ref{eqn:problem 6}). Since $\tilde{\mv{\Lambda}}_m^{(m,m)}=\mv{\Lambda}_m^{(m,m)}$, $\forall m\neq \bar{m}$, $\tilde{\mv{\Lambda}}_{\bar{m}}^{(\bar{m},\bar{m})}>\mv{\Lambda}_{\bar{m}}^{(\bar{m},\bar{m})}$, and $\sum_{k=1}^K|\bar{u}_{k,\bar{m}}|^2>0$ according to (\ref{eqn:assumption}), constraint (\ref{eqn:dual uplink SINR constraint}) in problem (\ref{eqn:problem 6}) is satisfied with strict inequality for at least one $k$. To summarize, the new solution is a feasible solution to problem (\ref{eqn:problem 6}), with constraint (\ref{eqn:dual uplink SINR constraint}) satisfied with strict inequality. Thus, given the $\tilde{\mv{\Lambda}}_m^{(m,m)}$'s, we can further find a better solution of the $\beta_k$'s such that the objective value of problem (\ref{eqn:problem 6}) is enhanced while constraint (\ref{eqn:dual uplink SINR constraint}) is satisfied with equality. This indicates that if (\ref{eqn:optimal solution 10}) does not hold for problem (\ref{eqn:problem 6}), we can always find a better solution. Lemma \ref{lemma4} is thus proved. \end{IEEEproof} Given any feasible solution $\{\beta_k, \mv{\Lambda}_m^{(m,m)}\}$ to problem (\ref{eqn:problem 6}) that satisfies the optimality condition (\ref{eqn:optimal solution 10}), we can construct a solution to problem (\ref{eqn:problem 8}) as follows. First, the $\beta_k$'s and $\mv{\Lambda}_m^{(m,m)}$'s are unchanged in problem (\ref{eqn:problem 8}). Second, $\mv{A}_1$ is set as \begin{align}\label{eqn:optimal solution 11} \mv{A}_1=\mv{\Omega}\succ \mv{0}.
\end{align}Since the $\beta_k$'s and $\mv{\Lambda}_m^{(m,m)}$'s are a feasible solution to problem (\ref{eqn:problem 6}), they must satisfy constraints (\ref{eqn:dual uplink SINR constraint}) and (\ref{eqn:nonegative beta}). Further, it can be observed from (\ref{eqn:Omega}) that $\mv{\Omega}\succ \mv{0}$. As a result, it follows that \begin{align} \mv{\Omega}^{(m,m)}-\mv{\Omega}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\mv{\Omega}^{(1:m-1,m)}>0, ~ \forall m. \end{align} According to (\ref{eqn:optimal solution 10}), we have $\mv{\Lambda}_m^{(m,m)}>0$, $\forall m$. In other words, constraint (\ref{eqn:positive Phi}) holds. Moreover, (\ref{eqn:optimal solution 11}) guarantees that constraint (\ref{eqn:dual uplink fronthaul constraint 1}) holds in problem (\ref{eqn:problem 8}). Finally, (\ref{eqn:optimal solution 9}) and (\ref{eqn:optimal solution 10}) guarantee that \begin{align} &\mv{\Omega}^{(m,m)}-\mv{\Omega}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\mv{\Omega}^{(1:m-1,m)}\nonumber \\ &= \mv{A}_1^{(m,m)}-\mv{A}_1^{(m,1:m-1)}(\mv{A}_1^{(1:m-1,1:m-1)})^{-1}\mv{A}_1^{(1:m-1,m)} \\ &= 2^{C_m}\mv{\Lambda}_m^{(m,m)}, ~ \forall m. \end{align}As a result, constraint (\ref{eqn:optimal solution 7}) also holds. To summarize, given any feasible solution to problem (\ref{eqn:problem 6}) that satisfies (\ref{eqn:optimal solution 10}), we can generate a feasible solution to problem (\ref{eqn:problem 8}). Moreover, according to Lemma \ref{lemma4}, the optimal solution to problem (\ref{eqn:problem 6}) must satisfy condition (\ref{eqn:optimal solution 10}). As a result, the optimal value of problem (\ref{eqn:problem 6}) is achievable for problem (\ref{eqn:problem 8}). In other words, the optimal value of problem (\ref{eqn:problem 6}) is no larger than that of problem (\ref{eqn:problem 8}). To summarize, we have shown that the optimal value of problem (\ref{eqn:problem 6}) is no smaller than that of problem (\ref{eqn:problem 8}), and at the same time, the optimal value of problem (\ref{eqn:problem 6}) is no larger than that of problem (\ref{eqn:problem 8}). This indicates that problem (\ref{eqn:problem 6}) is equivalent to problem (\ref{eqn:problem 8}), which is further equivalent to problem (\ref{eqn:problem 5}). This completes the proof of Proposition \ref{proposition4}. \subsection{Proof of Proposition \ref{proposition5}}\label{appendix6} According to Lemma \ref{lemma4} in Appendix \ref{appendix5}, with the optimal solution to problem (\ref{eqn:problem 6}), all the constraints shown in (\ref{eqn:eqv dual uplink fronthaul constraint}) should hold with equality. In the following, we show that with the optimal solution to problem (\ref{eqn:problem 6}), all the constraints shown in (\ref{eqn:dual uplink SINR constraint}) should hold with equality as well, i.e., \begin{align}\label{eqn:optimal solution 51} \sigma^2+\sum\limits_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum\limits_{m=1}^M\mv{\Lambda}_m^{(m,m)}|\bar{u}_{k,m}|^2-\frac{\beta_k|\bar{\mv{u}}_k^H\mv{h}_k|^2}{2^{R_k}-1}= 0, ~ \forall k. \end{align} Given any feasible solution to problem (\ref{eqn:problem 6}), suppose that there exists a $\bar{k}$ such that \begin{align} \sigma^2+\sum\limits_{j\neq \bar{k}}\beta_j|\bar{\mv{u}}_{\bar{k}}^H\mv{h}_j|^2+\sum\limits_{m=1}^M\mv{\Lambda}_m^{(m,m)}|\bar{u}_{\bar{k},m}|^2-\frac{\beta_{\bar{k}}|\bar{\mv{u}}_{\bar{k}}^H\mv{h}_{\bar{k}}|^2}{2^{R_{\bar{k}}}-1}> 0.
\end{align} Then, consider another solution of $\tilde{\beta}_{\bar{k}}$ such that \begin{align}\label{eqn:optimal solution 52} \sigma^2+\sum\limits_{j\neq \bar{k}}\beta_j|\bar{\mv{u}}_{\bar{k}}^H\mv{h}_j|^2+\sum\limits_{m=1}^M\mv{\Lambda}_m^{(m,m)}|\bar{u}_{\bar{k},m}|^2-\frac{\tilde{\beta}_{\bar{k}}|\bar{\mv{u}}_{\bar{k}}^H\mv{h}_{\bar{k}}|^2}{2^{R_{\bar{k}}}-1}= 0. \end{align} As a result, we have \begin{align} \tilde{\beta}_{\bar{k}}>\beta_{\bar{k}}. \end{align} First, it can be shown that with $\tilde{\beta}_{\bar{k}}$, constraint (\ref{eqn:dual uplink SINR constraint}) holds for all $k$. Moreover, according to (\ref{eqn:Omega}), it follows that \begin{align} \mv{\Omega}& =\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H+\sum\limits_{m=1}^M\mv{E}_m^H\mv{\Lambda}_m\mv{E}_m \nonumber \\ & \preceq \sigma^2\mv{I}+\sum\limits_{k\neq \bar{k}}\beta_k\mv{h}_k\mv{h}_k^H+\tilde{\beta}_{\bar{k}}\mv{h}_{\bar{k}}\mv{h}_{\bar{k}}^H+\sum\limits_{m=1}^M\mv{E}_m^H\mv{\Lambda}_m\mv{E}_m. \label{eqn:optimal solution 53} \end{align}According to Lemma \ref{lemma3}, it follows that the new solution also satisfies constraint (\ref{eqn:eqv dual uplink fronthaul constraint}). As a result, the new solution is also a feasible solution to problem (\ref{eqn:problem 6}). However, with the new solution, the objective value of problem (\ref{eqn:problem 6}) is increased. This indicates that with the optimal solution to problem (\ref{eqn:problem 6}), all the constraints shown in (\ref{eqn:dual uplink SINR constraint}) should hold with equality. Proposition \ref{proposition5} is thus proved. \subsection{Proof of Proposition \ref{proposition6}}\label{appendix7} Given any $\mv{\beta}=[\beta_1,\ldots,\beta_K]>\mv{0}$, let $\mv{\lambda}(\mv{\beta})=[\lambda_1(\mv{\beta}),\ldots,\lambda_M(\mv{\beta})]$ denote the solution of the $\mv{\Lambda}_m^{(m,m)}$'s (i.e., $\mv{\Lambda}_m^{(m,m)}=\lambda_m(\mv{\beta})$, $\forall m$) that satisfies constraint (\ref{eqn:eqv dual uplink fronthaul constraint 01}) in problem (\ref{eqn:problem 7}). The uniqueness of $\mv{\lambda}(\mv{\beta})$ given $\mv{\beta}> \mv{0}$ can be shown as follows. First, it follows from (\ref{eqn:eqv dual uplink fronthaul constraint 01}) that \begin{align}\label{eqn:optimal solution 61} \lambda_1(\mv{\beta})=\frac{\sigma^2+\sum\limits_{k=1}^K\beta_kh_{k,1}h_{k,1}^H}{2^{C_1}-1}. \end{align}Next, it can be shown that if $\lambda_1(\mv{\beta}),\ldots,\lambda_{m-1}(\mv{\beta})$ are given, then $\lambda_m(\mv{\beta})$ can be uniquely determined as \begin{align}\label{eqn:optimal solution 62} \lambda_m(\mv{\beta})=\frac{\sigma^2+\sum\limits_{k=1}^K\beta_kh_{k,m}h_{k,m}^H-\mv{\Omega}^{(m,1:m-1)}(\mv{\Omega}^{(1:m-1,1:m-1)})^{-1}\mv{\Omega}^{(1:m-1,m)}}{2^{C_m}-1}, \end{align}since $\mv{\Omega}^{(m,1:m-1)}$, $\mv{\Omega}^{(1:m-1,1:m-1)}$, and $\mv{\Omega}^{(1:m-1,m)}$ only depend on $\lambda_1(\mv{\beta}),\ldots,\lambda_{m-1}(\mv{\beta})$. As a result, given any $\mv{\beta}> \mv{0}$, there always exists a unique solution $\mv{\lambda}(\mv{\beta})$ such that constraint (\ref{eqn:eqv dual uplink fronthaul constraint 01}) is satisfied in problem (\ref{eqn:problem 7}). Moreover, the following lemma shows some important properties of $\mv{\lambda}(\mv{\beta})$. \begin{lemma}\label{lemma5} Given $\mv{\beta}\geq \mv{0}$, the function $\mv{\lambda}(\mv{\beta})$ defined by (\ref{eqn:optimal solution 61}) and (\ref{eqn:optimal solution 62}) has the following properties: \begin{itemize} \item[1.] $\mv{\lambda}(\mv{\beta})>\mv{0}$; \item[2.]
Given any $\alpha>1$, it follows that $\mv{\lambda}(\alpha \mv{\beta})< \alpha \mv{\lambda}(\mv{\beta})$; \item[3.] If $\bar{\mv{\beta}}\geq \mv{\beta}$, then $\mv{\lambda}(\bar{\mv{\beta}})\geq \mv{\lambda}(\mv{\beta})$. \end{itemize} \end{lemma} \begin{IEEEproof} First, since the $\lambda_m(\mv{\beta})$'s in (\ref{eqn:optimal solution 61}) and (\ref{eqn:optimal solution 62}) satisfy (\ref{eqn:eqv dual uplink fronthaul constraint 01}) and $\mv{\Omega}\succ \mv{0}$, it then follows that $\lambda_m(\mv{\beta})>0$, $\forall m$. Next, we show that given any $\alpha>1$, we have $\mv{\lambda}(\alpha\mv{\beta})<\alpha\mv{\lambda}(\mv{\beta})$. For convenience, we define \begin{align} & \hat{\mv{\Omega}}=\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k\mv{h}_k^H+{\rm diag}(\lambda_1(\mv{\beta}),\ldots,\lambda_M(\mv{\beta})), \label{eqn:Omega 1} \\ & \bar{\mv{\Omega}}=\sigma^2\mv{I}+\sum\limits_{k=1}^K\alpha\beta_k\mv{h}_k\mv{h}_k^H+{\rm diag}(\lambda_1(\alpha\mv{\beta}),\ldots,\lambda_M(\alpha\mv{\beta})), \label{eqn:Omega 2} \end{align}as the corresponding $\mv{\Omega}$'s shown in (\ref{eqn:Omega}) with $\mv{\beta}$ and the $\mv{\Lambda}_m^{(m,m)}$'s replaced by $\mv{\beta}$ and $\mv{\lambda}(\mv{\beta})$ as well as $\alpha\mv{\beta}$ and $\mv{\lambda}(\alpha\mv{\beta})$, respectively. It is observed from (\ref{eqn:Omega 1}) and (\ref{eqn:Omega 2}) that the non-diagonal elements of $\hat{\mv{\Omega}}$ and $\bar{\mv{\Omega}}$ satisfy \begin{align} \alpha\hat{\mv{\Omega}}^{(m_1,m_2)}=\bar{\mv{\Omega}}^{(m_1,m_2)}, ~~~ \forall m_1\neq m_2. \label{eqn:optimal solution 71} \end{align} As a result, we have \begin{align}\label{eqn:optimal solution 77} \bar{\mv{\Omega}}^{(m,1:m-1)}=\alpha\hat{\mv{\Omega}}^{(m,1:m-1)}, ~ \forall m. \end{align} In the following, we show by induction that $\alpha\lambda_m(\mv{\beta})>\lambda_m(\alpha\mv{\beta})$, $\forall m$. First, when $m=1$, it follows from (\ref{eqn:optimal solution 61}) that \begin{align} \alpha\lambda_1(\mv{\beta})=& \frac{\alpha\sigma^2+\alpha\sum\limits_{k=1}^K\beta_kh_{k,1}h_{k,1}^H}{2^{C_1}-1} \\ >&\frac{\sigma^2+\alpha\sum\limits_{k=1}^K\beta_kh_{k,1}h_{k,1}^H}{2^{C_1}-1} \\ =& \lambda_1(\alpha\mv{\beta}). \label{eqn:optimal solution 72} \end{align} Next, we show that given any $\bar{m}\geq 2$, if \begin{align}\label{eqn:optimal solution 75} \alpha\lambda_m(\mv{\beta}) > \lambda_m(\alpha \mv{\beta}), ~ \forall m=1,\ldots,\bar{m}-1, \end{align}then \begin{align}\label{eqn:optimal solution 78} \alpha\lambda_{\bar{m}}(\mv{\beta})>\lambda_{\bar{m}}(\alpha\mv{\beta}). \end{align} Given (\ref{eqn:optimal solution 75}), the diagonal elements of $\hat{\mv{\Omega}}$ and $\bar{\mv{\Omega}}$ defined in (\ref{eqn:Omega 1}) and (\ref{eqn:Omega 2}) satisfy \begin{align}\label{eqn:optimal solution 79} \bar{\mv{\Omega}}^{(m,m)}&=\sigma^2+\alpha\sum\limits_{k=1}^K\beta_kh_{k,m}h_{k,m}^H+\lambda_{m}(\alpha\mv{\beta}) \nonumber \\ &< \alpha\left(\sigma^2+\sum\limits_{k=1}^K\beta_kh_{k,m}h_{k,m}^H+\lambda_{m}(\mv{\beta})\right) \nonumber \\ &=\alpha\hat{\mv{\Omega}}^{(m,m)}, ~ \forall m=1,\ldots,\bar{m}-1. \end{align}Based on (\ref{eqn:optimal solution 77}) and (\ref{eqn:optimal solution 79}), it then follows that \begin{align}\label{eqn:optimal solution 710} \bar{\mv{\Omega}}^{(1:\bar{m}-1,1:\bar{m}-1)}\prec\alpha\hat{\mv{\Omega}}^{(1:\bar{m}-1,1:\bar{m}-1)}, \end{align}or \begin{align}\label{eqn:optimal solution 711} \left(\bar{\mv{\Omega}}^{(1:\bar{m}-1,1:\bar{m}-1)}\right)^{-1}\succ\left(\alpha\hat{\mv{\Omega}}^{(1:\bar{m}-1,1:\bar{m}-1)}\right)^{-1}.
\end{align} Last, it follows that \begin{align} \lambda_{\bar{m}}(\alpha\mv{\beta})&=\frac{\sigma^2+\alpha\sum\limits_{k=1}^K\beta_kh_{k,{\bar{m}}}h_{k,{\bar{m}}}^H-\bar{\mv{\Omega}}^{({\bar{m}},1:{\bar{m}}-1)}(\bar{\mv{\Omega}}^{(1:{\bar{m}}-1,1:{\bar{m}}-1)})^{-1}\bar{\mv{\Omega}}^{(1:{\bar{m}}-1,{\bar{m}})}}{2^{C_{\bar{m}}}-1}\label{eqn:optimal solution 712} \\ &=\frac{\sigma^2+\alpha\sum\limits_{k=1}^K\beta_kh_{k,{\bar{m}}}h_{k,{\bar{m}}}^H-\alpha^2\hat{\mv{\Omega}}^{({\bar{m}},1:{\bar{m}}-1)}(\bar{\mv{\Omega}}^{(1:{\bar{m}}-1,1:{\bar{m}}-1)})^{-1}\hat{\mv{\Omega}}^{(1:{\bar{m}}-1,{\bar{m}})}}{2^{C_{\bar{m}}}-1}\label{eqn:optimal solution 713} \\ &<\frac{\sigma^2+\alpha\sum\limits_{k=1}^K\beta_kh_{k,{\bar{m}}}h_{k,{\bar{m}}}^H-\alpha^2\hat{\mv{\Omega}}^{({\bar{m}},1:{\bar{m}}-1)}(\alpha\hat{\mv{\Omega}}^{(1:{\bar{m}}-1,1:{\bar{m}}-1)})^{-1}\hat{\mv{\Omega}}^{(1:{\bar{m}}-1,{\bar{m}})}}{2^{C_{\bar{m}}}-1}\label{eqn:optimal solution 714} \\ &<\frac{\alpha\left(\sigma^2+\sum\limits_{k=1}^K\beta_kh_{k,{\bar{m}}}h_{k,{\bar{m}}}^H-\hat{\mv{\Omega}}^{({\bar{m}},1:{\bar{m}}-1)}(\hat{\mv{\Omega}}^{(1:{\bar{m}}-1,1:{\bar{m}}-1)})^{-1}\hat{\mv{\Omega}}^{(1:{\bar{m}}-1,{\bar{m}})}\right)}{2^{C_{\bar{m}}}-1}\label{eqn:optimal solution 715} \\ & = \alpha\lambda_{\bar{m}}(\mv{\beta}), \label{eqn:optimal solution 716} \end{align}where (\ref{eqn:optimal solution 713}) is due to (\ref{eqn:optimal solution 77}) and (\ref{eqn:optimal solution 714}) is due to (\ref{eqn:optimal solution 711}). As a result, given any $\bar{m}\geq 2$, if (\ref{eqn:optimal solution 75}) is true, then (\ref{eqn:optimal solution 78}) is true. By combining the above with (\ref{eqn:optimal solution 72}), it follows that $\alpha\lambda_m(\mv{\beta})>\lambda_m(\alpha\mv{\beta})$, $\forall m$. Next, we show by induction that if $\bar{\mv{\beta}}=[\bar{\beta}_1,\ldots,\bar{\beta}_K]^T$ satisfies $\bar{\beta}_k\geq \beta_k$, $\forall k$, then $\mv{\lambda}(\bar{\mv{\beta}})\geq \mv{\lambda}(\mv{\beta})$. First, it can be shown from (\ref{eqn:optimal solution 61}) that if $\bar{\mv{\beta}}\geq \mv{\beta}$, then \begin{align} \lambda_1(\bar{\mv{\beta}})\geq \lambda_1(\mv{\beta}). \end{align}In the following, we prove that given any $\bar{m}\geq 2$, if $\lambda_m(\bar{\mv{\beta}})\geq \lambda_m(\mv{\beta})$, $\forall m\leq \bar{m}-1$, then $\lambda_{\bar{m}}(\bar{\mv{\beta}})\geq \lambda_{\bar{m}}(\mv{\beta})$. To prove this, given any $\bar{m}\geq 2$, define \begin{align} & \mv{X}_{\bar{m}}=\sigma^2\mv{I}+\sum\limits_{k=1}^K\beta_k\mv{h}_k^{(1:\bar{m})}[\mv{h}_k^{(1:\bar{m})}]^H+{\rm diag}([\lambda_1(\mv{\beta}),\ldots,\lambda_{\bar{m}-1}(\mv{\beta}),0])\in \mathbb{C}^{\bar{m}\times \bar{m}}, \\ & \bar{\mv{X}}_{\bar{m}}=\sigma^2\mv{I}+\sum\limits_{k=1}^K\bar{\beta}_k\mv{h}_k^{(1:\bar{m})}[\mv{h}_k^{(1:\bar{m})}]^H+{\rm diag}([\lambda_1(\bar{\mv{\beta}}),\ldots,\lambda_{\bar{m}-1}(\bar{\mv{\beta}}),0])\in \mathbb{C}^{\bar{m}\times \bar{m}}. \end{align}With $\bar{\mv{\beta}}\geq \mv{\beta}$ and $\lambda_m(\bar{\mv{\beta}})\geq \lambda_m(\mv{\beta})$, $\forall m\leq \bar{m}-1$, it follows that $\bar{\mv{X}}_{\bar{m}} \succeq \mv{X}_{\bar{m}}$.
Based on Lemma \ref{lemma3}, it can be shown from (\ref{eqn:optimal solution 62}) that \begin{align} \lambda_{\bar{m}}(\bar{\mv{\beta}})&=\frac{\bar{\mv{X}}_{\bar{m}}^{(\bar{m},\bar{m})}-\bar{\mv{X}}_{\bar{m}}^{(\bar{m},1:\bar{m}-1)}(\bar{\mv{X}}_{\bar{m}}^{(1:\bar{m}-1,1:\bar{m}-1)})^{-1}\bar{\mv{X}}_{\bar{m}}^{(1:\bar{m}-1,\bar{m})}}{2^{C_{\bar{m}}}-1} \\ & \geq \frac{\mv{X}_{\bar{m}}^{(\bar{m},\bar{m})}-\mv{X}_{\bar{m}}^{(\bar{m},1:\bar{m}-1)}(\mv{X}_{\bar{m}}^{(1:\bar{m}-1,1:\bar{m}-1)})^{-1}\mv{X}_{\bar{m}}^{(1:\bar{m}-1,\bar{m})}}{2^{C_{\bar{m}}}-1} \\ & = \lambda_{\bar{m}}(\mv{\beta}). \end{align} To summarize, we have shown that $\lambda_1(\bar{\mv{\beta}})\geq \lambda_1(\mv{\beta})$ and, given any $\bar{m}\geq 2$, $\lambda_{\bar{m}}(\bar{\mv{\beta}})\geq \lambda_{\bar{m}}(\mv{\beta})$ if $\lambda_m(\bar{\mv{\beta}})\geq \lambda_m(\mv{\beta})$, $\forall m\leq \bar{m}-1$. Therefore, by induction, it can be shown that $\mv{\lambda}(\bar{\mv{\beta}})\geq \mv{\lambda}(\mv{\beta})$ if $\bar{\mv{\beta}}\geq \mv{\beta}$. Lemma \ref{lemma5} is thus proved. \end{IEEEproof} Then, the remaining task is to show that there is a unique solution $\mv{\beta}>\mv{0}$ such that $\mv{\beta}$ and $\mv{\lambda}(\mv{\beta})$ satisfy constraint (\ref{eqn:dual uplink SINR constraint 01}) in problem (\ref{eqn:problem 7}). In the following, we prove this. Note that constraint (\ref{eqn:dual uplink SINR constraint 01}) can be re-expressed as \begin{align}\label{eqn:update} \mv{\beta}=\mv{I}(\mv{\beta}), \end{align}where $\mv{I}(\mv{\beta})=[I_1(\mv{\beta}),\ldots,I_K(\mv{\beta})]$ with \begin{align}\label{eqn:interference function} I_k(\mv{\beta})=\frac{(2^{R_k}-1)(\sigma^2+\sum_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum_{m=1}^M\lambda_m(\mv{\beta})|\bar{u}_{k,m}|^2)}{|\bar{\mv{u}}_k^H\mv{h}_k|^2}, ~ \forall k. \end{align} In the following, we show three important properties of the function $\mv{I}(\mv{\beta})$. \begin{lemma}\label{lemma11} Given $\mv{\beta}\geq \mv{0}$, the function $\mv{I}(\mv{\beta})$ defined by (\ref{eqn:interference function}) satisfies the following three properties: \begin{itemize} \item[1.] $\mv{I}(\mv{\beta})>\mv{0}$; \item[2.] Given any $\alpha>1$, it follows that $\mv{I}(\alpha \mv{\beta})< \alpha \mv{I}(\mv{\beta})$; \item[3.] If $\bar{\mv{\beta}}\geq \mv{\beta}$, then $\mv{I}(\bar{\mv{\beta}})\geq \mv{I}(\mv{\beta})$. \end{itemize} \end{lemma} \begin{IEEEproof} First, given any $\mv{\beta}\geq \mv{0}$, it can be shown from (\ref{eqn:interference function}) that \begin{align}\label{eqn:standard interference function 1} \mv{I}(\mv{\beta})>\mv{0}. \end{align} Next, according to Lemma \ref{lemma5}, given any $\alpha>1$, it can be shown that \begin{align} I_k(\alpha \mv{\beta})=& \frac{(2^{R_k}-1)(\sigma^2+\sum_{j\neq k}\alpha \beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum_{m=1}^M\lambda_m(\alpha\mv{\beta})|\bar{u}_{k,m}|^2)}{|\bar{\mv{u}}_k^H\mv{h}_k|^2} \\ < & \frac{\alpha(2^{R_k}-1)(\sigma^2+\sum_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum_{m=1}^M\lambda_m(\mv{\beta})|\bar{u}_{k,m}|^2)}{|\bar{\mv{u}}_k^H\mv{h}_k|^2} \label{eqn:lemma52} \\ =&\alpha I_k(\mv{\beta}), ~ \forall k, \label{eqn:standard interference function 3} \end{align}where (\ref{eqn:lemma52}) is due to Lemma \ref{lemma5} and $\sigma^2<\alpha\sigma^2$.
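As an aside, the properties established here are easy to observe numerically. The following minimal NumPy sketch (with randomly generated channel vectors $\mv{h}_k$, illustrative values of $C_m$, $R_k$, and $\sigma^2$, and fixed normalized receive vectors $\bar{\mv{u}}_k$ -- all hypothetical rather than taken from the system model) implements the recursion (\ref{eqn:optimal solution 61})\,--\,(\ref{eqn:optimal solution 62}) for $\mv{\lambda}(\mv{\beta})$ together with the fixed-point update (\ref{eqn:update}):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, K = 4, 3                            # numbers of relays/antennas and users (hypothetical)
sigma2 = 1.0                           # noise power (illustrative)
C = np.full(M, 2.0)                    # fronthaul capacities C_m (illustrative)
R = np.full(K, 1.0)                    # rate targets R_k (illustrative)
H = (rng.standard_normal((M, K)) + 1j*rng.standard_normal((M, K)))/np.sqrt(2)
U = H / np.linalg.norm(H, axis=0)      # fixed receive vectors u_k (hypothetical)

def lam(beta):
    # lambda_m(beta) via the recursion (61)-(62)
    lamb = np.zeros(M)
    Om = sigma2*np.eye(M) + (H*beta) @ H.conj().T   # sigma^2 I + sum_k beta_k h_k h_k^H
    for m in range(M):
        if m == 0:
            s = Om[0, 0].real
        else:
            A = Om[:m, :m] + np.diag(lamb[:m])      # leading block of Omega
            s = (Om[m, m] - Om[m, :m] @ np.linalg.solve(A, Om[:m, m])).real
        lamb[m] = s/(2.0**C[m] - 1.0)
    return lamb

def I_map(beta):
    # standard interference function I_k(beta)
    lamb, out = lam(beta), np.zeros(K)
    for k in range(K):
        uk = U[:, k]
        inter = sum(beta[j]*abs(uk.conj() @ H[:, j])**2 for j in range(K) if j != k)
        out[k] = (2.0**R[k] - 1.0)*(sigma2 + inter + lamb @ np.abs(uk)**2) \
                 / abs(uk.conj() @ H[:, k])**2
    return out

beta = np.ones(K)
for _ in range(500):                   # fixed-point iteration beta <- I(beta)
    beta = I_map(beta)
print(beta)
\end{verbatim}
If the rate targets are feasible, the iteration converges to the unique fixed point guaranteed by \cite[Theorem 1]{Yates95}; otherwise the iterates grow without bound.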
Last, if $\bar{\mv{\beta}}\geq \mv{\beta}$, it then follows that \begin{align} I_k(\bar{\mv{\beta}})=&\frac{(2^{R_k}-1)(\sigma^2+\sum_{j\neq k}\bar{\beta}_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum_{m=1}^M\lambda_m(\bar{\mv{\beta}})|\bar{u}_{k,m}|^2)}{|\bar{\mv{u}}_k^H\mv{h}_k|^2} \\ \geq & \frac{(2^{R_k}-1)(\sigma^2+\sum_{j\neq k}\beta_j|\bar{\mv{u}}_k^H\mv{h}_j|^2+\sum_{m=1}^M\lambda_m(\mv{\beta})|\bar{u}_{k,m}|^2)}{|\bar{\mv{u}}_k^H\mv{h}_k|^2} \label{eqn:lemma51} \\ = & I_k(\mv{\beta}), ~ \forall k, \label{eqn:standard interference function 2} \end{align}where (\ref{eqn:lemma51}) is due to Lemma \ref{lemma5}. Lemma \ref{lemma11} is thus proved. \end{IEEEproof} Lemma \ref{lemma11} shows that the function $\mv{I}(\mv{\beta})$ defined by (\ref{eqn:interference function}) is a standard interference function \cite{Yates95}. It then follows from \cite[Theorem 1]{Yates95} that there exists a unique solution $\mv{\beta}>\mv{0}$ to equation (\ref{eqn:update}). Note that we have shown above that, given $\mv{\beta}>\mv{0}$, there exists a unique solution $\mv{\lambda}(\mv{\beta})$ that satisfies constraint (\ref{eqn:eqv dual uplink fronthaul constraint 01}). Therefore, if problem (\ref{eqn:problem 7}) is feasible, it admits only one solution. Proposition \ref{proposition6} is thus proved. \subsection{Proof of Lemma \ref{lemmacaseI}}\label{appendix8} First, it can easily be shown that if $\mv{p}^{{\rm ul}}\geq \mv{0}$, then $I_k(\mv{p}^{{\rm ul}})>0$, $\forall k$. Next, given $\alpha>1$, it follows from (\ref{eqn:optimal quantization Case I}) that $q_m^{{\rm ul}}(\alpha \mv{p}^{{\rm ul}})<\alpha q_m^{{\rm ul}}(\mv{p}^{{\rm ul}})$, $\forall m$. As a result, we have \begin{multline} \sum\limits_{j\neq k}\alpha p_j^{{\rm ul}}\mv{h}_j\mv{h}_j^H+{\rm diag}(q_1^{{\rm ul}}(\alpha \mv{p}^{{\rm ul}}),\ldots,q_M^{{\rm ul}}(\alpha \mv{p}^{{\rm ul}}))+\sigma^2\mv{I} \\ \prec \alpha\left(\sum\limits_{j\neq k} p_j^{{\rm ul}}\mv{h}_j\mv{h}_j^H+{\rm diag}(q_1^{{\rm ul}}( \mv{p}^{{\rm ul}}),\ldots,q_M^{{\rm ul}}( \mv{p}^{{\rm ul}}))+\sigma^2\mv{I}\right), ~~~ \forall k \in \mathcal{K}. \end{multline} Based on (\ref{eqn:interference function case I}), it follows that $I_k(\alpha \mv{p}^{{\rm ul}})<\alpha I_k(\mv{p}^{{\rm ul}})$, $\forall k$, given $\alpha>1$. Last, if $\bar{\mv{p}}^{{\rm ul}}\geq \mv{p}^{{\rm ul}}$, then based on (\ref{eqn:optimal quantization Case I}), it follows that $q_m^{{\rm ul}}( \bar{\mv{p}}^{{\rm ul}})\geq q_m^{{\rm ul}}(\mv{p}^{{\rm ul}})$, $\forall m$. As a result, we have \begin{multline} \sum\limits_{j\neq k} \bar{p}_j^{{\rm ul}}\mv{h}_j\mv{h}_j^H+{\rm diag}(q_1^{{\rm ul}}(\bar{\mv{p}}^{{\rm ul}}),\ldots,q_M^{{\rm ul}}(\bar{\mv{p}}^{{\rm ul}}))+\sigma^2\mv{I} \\ \succeq \sum\limits_{j\neq k} p_j^{{\rm ul}}\mv{h}_j\mv{h}_j^H+{\rm diag}(q_1^{{\rm ul}}( \mv{p}^{{\rm ul}}),\ldots,q_M^{{\rm ul}}( \mv{p}^{{\rm ul}}))+\sigma^2\mv{I}, ~~~ \forall k \in \mathcal{K}. \end{multline} Based on (\ref{eqn:interference function case I}), it follows that $I_k(\bar{\mv{p}}^{{\rm ul}})\geq I_k(\mv{p}^{{\rm ul}})$, $\forall k$. Lemma \ref{lemmacaseI} is thus proved. \subsection{Proof of Lemma \ref{lemmacaseIII}}\label{appendix9} First, it can easily be shown that if $\mv{p}^{{\rm ul}}\geq \mv{0}$, then $I_k(\mv{p}^{{\rm ul}})>0$, $\forall k \in \mathcal{K}$.
Next, given $\alpha>1$, it follows from Lemma \ref{lemma5} that $\{q_m^{{\rm ul}}(\mv{p}^{{\rm ul}})\}$ defined in (\ref{eqn:optimal solution Case III}) and (\ref{eqn:optimal solution Case III 1}) satisfies $q_m^{{\rm ul}}(\alpha \mv{p}^{{\rm ul}})<\alpha q_m^{{\rm ul}}(\mv{p}^{{\rm ul}})$, $\forall m \in \mathcal{M}$. Then, similar to Appendix \ref{appendix8}, it can be shown that $\{I_k(\mv{p}^{{\rm ul}})\}$ defined in (\ref{eqn:interference function case III}) satisfies $I_k(\alpha \mv{p}^{{\rm ul}})<\alpha I_k(\mv{p}^{{\rm ul}})$, $\forall k \in \mathcal{K}$, given $\alpha>1$. Last, according to Lemma \ref{lemma5}, if $\bar{\mv{p}}^{{\rm ul}}\geq \mv{p}^{{\rm ul}}$, then $q_m^{{\rm ul}}(\bar{\mv{p}}^{{\rm ul}})\geq q_m^{{\rm ul}}(\mv{p}^{{\rm ul}})$, $\forall m \in \mathcal{M}$. Similar to Appendix \ref{appendix8}, it can be shown that $I_k(\bar{\mv{p}}^{{\rm ul}})\geq I_k(\mv{p}^{{\rm ul}})$, $\forall k \in \mathcal{K}$. Lemma \ref{lemmacaseIII} is thus proved. \end{appendix} \bibliographystyle{IEEEtran}
\section{Introduction} The flux of high-energy protons arriving at 1\,AU is associated with an energy release in a solar eruptive event and/or with the consecutive acceleration via a coronal mass ejection (CME). Solar Proton Events (SPE) or Ground Level Enhancements (GLE) have been observed directly for a long time, most probably since the events on 28 February and 7 March 1942, when they were identified by \citet{Forbush1946} and later named GLE 1 and 2, respectively. Reviews on solar proton events and on GLEs can be found, e.g., in papers by \citet{Shea1990} and \citet{Moraal2012}. The GLEs, which are also important for the radiation dose at airplane altitudes, are analyzed according to data of a neutron monitor (NM) network \citep[e.g.,][]{Mishev2014}. Radiation hazard alerts are also based on NM data if available in real time with high time resolution \citep[e.g.,][]{Souvatzoglou2014}. Altogether, during the systematic investigation of GLEs, 72 events were recorded (see, e.g., \citealp{Belov2010}; \citealp{Poluianov2017}; the GLE database at the University of Oulu). The real time database for high resolution neutron monitor measurements (NMDB) is accessible at \url{http://www.nmdb.eu} and described, e.g., by \citet{Mavromichalaki2011}. The low-energy threshold of particles detected by high-latitude neutron monitors is $\approx 450$\,MeV (this threshold is set by atmospheric absorption), but the effective energy exceeds 600\,--\,700\,MeV. The minimal detected energy for medium- and low-latitude NMs is even higher; it is determined by the geomagnetic cutoff rigidity. There is no doubt that GLEs are connected with powerful solar eruptive events, but it is still debated whether protons responsible for the beginning of GLEs and high-energy SPEs are accelerated directly during a flare energy release or later, when a shock wave propagates in the upper corona. Particle propagation in the interplanetary magnetic field (IMF) is a complex process controlled by a variety of factors. The angular separation of the site of observation (the Earth) and the source on the Sun affects this propagation \citep[e.g.,][]{Kallenrode1990,Tylka2006, Gopalswamy2013, Plotnikov2017}. In addition, the magnetospheric transmissivity has to be included to correctly interpret the ground-based measurements. SPE observations up to proton energies of $\approx 700$\,MeV onboard GOES satellites allowed us to compare the time profile of each GLE obtained from data of the NM network with the time profiles of high-energy proton fluxes observed in the outer magnetosphere, where shielding by the geomagnetic field is very slight. Here we present and discuss the recent GLE associated with a major eruptive event on 10 September 2017 that occurred in the active region NOAA\,12673 near the west solar limb (S05W88) with X8.2 importance (GOES-13). \section{Data} Let us consider the anisotropy at the beginning of the event. Usually, the anisotropy is best revealed by comparing the count rates of northern and southern near-polar NMs. The asymptotic directions of NMs at high latitudes (not truly polar stations) have a rather narrow cone of acceptance in longitude, and they collect cosmic ray (CR) charged particles from regions near the equator. The ring of such stations is used for space weather studies \citep[Spaceship Earth;][]{Bieber95}. For a rather long time period, there was a pair of NMs looking towards the north and towards the south, namely Thule and McMurdo (by the asymptotic directions indicated in Spaceship Earth).
The NM installed at McMurdo has recently been moved by 200\,km and is now operating as the Korean Jang Bogo NM. The comparison of the count rates of these polar NMs is presented in Figure~\ref{fig1}. This comparison did not reveal any north-south anisotropy. All high-latitude NMs situated at altitudes near the sea level have approximately the same dependence of the count rate on the energy of primary CRs. They should display almost the same increases caused by solar CRs if the effect is isotropic. There are many such NM stations -- they form an extended group of ground-based CR detectors. However, just 11 suitable stations have been found in NMDB so far. Their nominal vertical geomagnetic cut-off rigidities $R_{\rm c}$ are $< 1.4$\,GV and the standard atmospheric pressure is $>980$\,mbar. \begin{figure}[t] \includegraphics[width=\textwidth]{fig1.eps} \caption{The count rate variation of the northern (Thule) and southern (Jang Bogo) neutron monitors during the event on 10 September 2017. The variation is normalized to the average for one hour before the start of the GLE (14\,--\,15\,UT).} \label{fig1} \end{figure} We have averaged these ``single-type'' data from the various NMs (see Figure~\ref{fig2}). For the averaged variation, the statistical error decreases at least fourfold in comparison with the 1-minute data of a typical NM. The main feature of the individual data is their rather high statistical error, with values above 1\%. The increase is well pronounced; however, the details are not easy to interpret. It was assumed that the 10 September 2017 GLE was isotropic from its very beginning. However, we can see that this was not the case. It is probable that there was a breakthrough to the Earth of a very narrow stream of accelerated particles, which was observed by a single NM, Fort Smith (FSMT). Asymptotic cones of Thule, McMurdo (approximately the same for Jang Bogo) and of Fort Smith can be found in the paper by \citet{Kuwabara2006}. The FSMT NM, being one of the stations of Spaceship Earth, has a $<20^{\circ}$ extent of asymptotic longitudes and its asymptotic latitudes are close to the equator \citep[Figure~2 in][]{Kuwabara2006}. The fact that this NM is the only one among the high-latitude stations which shows a rather high increase indicates anisotropy of GLE 72 in the first phase of its detection by NMs. \begin{figure} \includegraphics[width=\textwidth,clip=]{fig2.eps} \caption{Averaged variations of the count rate of high-latitude neutron monitors on 10 September 2017 (two-minute averages, the smoothed line, Fort Smith is excluded) and the variation at the NM Fort Smith (points).} \label{fig2} \end{figure} The different courses of the two temporal profiles indicate the anisotropy of solar CRs. The anisotropy was sufficiently high within the first hour of the event, which is typical for GLEs. Figure~\ref{fig3} displays time profiles of the count rate observed by three middle-latitude NMs. The count rate profiles of Irkutsk (IRK3) and Lomnick\'{y} \v{S}t\'{\i}t (LMKS), with almost similar cut-off rigidities but situated at different longitudes (by $\approx 85^{\circ}$), indicate that the anisotropy in the initial stage of the GLE was clearly visible at higher energies as well. Although the middle-latitude NMs have a rather large extent of asymptotic longitudes, an estimate of the differences between the ranges of asymptotics of IRK3 and LMKS can be seen from Figure 3 (panels {\it a} and {\it d}) in \citet{Tezari2016}.
While for IRK3 the spread of asymptotic longitudes lies between $120^{\circ}$ and $\approx 260^{\circ}$, for LMKS it lies between $\approx 40^{\circ}$ and $180^{\circ}$. Thus, the different time profiles of the increases at the two mid-latitude NMs probably indicate GLE 72 anisotropy, which requires the inclusion of more NMs and a discussion of the pitch-angle distribution with respect to the IMF. Here we selected two mid-latitude NMs where some increase from GLE 72 was observed. A small variation of the count rate of the Almaty NM (AATB), where the cut-off rigidity is equal to 6.69\,GV, indicates that the maximum energy of accelerated protons most probably reached 6\,GeV. \begin{figure} \centerline{\includegraphics[width=\textwidth]{fig3.eps}} \caption{Variations of the count rate at selected middle-latitude stations.} \label{fig3} \end{figure} Figure~\ref{fig4} presents the temporal variation at the Fort Smith NM where, unlike at the other high-latitude NMs, the increase was observed already at about 16:05\,--\,16:08\,UT. We compared this variation with the flux of solar cosmic rays in the 510\,--\,700\,MeV energy range (data of the GOES-13 satellite). GOES data indicate an onset time between 16:05\,--\,16:10\,UT. Thus we can say with confidence that the first SPE particles arrived at 1\,AU within the 16:06\,--\,16:08\,UT interval. \begin{figure} \centerline{\includegraphics[width=\textwidth]{fig4.eps}} \caption{The count rate at NM Fort Smith (the black curve) and the flux of SPE protons with energies of 510\,--\,700\,MeV (data of GOES-13 -- the gray histogram).} \label{fig4} \end{figure} \section{Discussion and summary} The amplitude of this GLE, associated with the 10 September 2017 (X8.2) flare which occurred at the western limb of the Sun, was relatively low -- a slight increase of approximately 6\,--\,7\%. Figure~\ref{fig5} presents a scatter plot of the maximum GLE increases observed on the ground since 30 April 1976 (GLE27) versus the maximum flux of SPEs at energies $>100$\,MeV (P100) observed on satellites (IMP and GOES data). The regression curve can be described as \begin{equation}\label{eq1} \lg(GLE) = (-0.06\pm0.07) + (0.84\pm0.11)\lg(P100)\,. \end{equation} The event on 10 September 2017 is located within the scatter plot, although the relation of its GLE to the SPE lies at the lower envelope of all events. This event is $5-6\sigma$ below the average dependence indicated by Equation~(\ref{eq1}). However, it is not a unique case among GLEs. There is a group of GLEs with rather soft energy spectra. Spectra of this type have been observed in the 30 April 1976, 19 September 1977, 10 April 1981, 10 May 1981, 11 February 1992, 11 April 2001, and 17 January 2005 GLEs. \begin{figure} \centerline{\includegraphics[width=\textwidth]{fig5.eps}} \caption{Dependence of the maximum increase of GLE on the maximum flux of protons with energy $>100$\,MeV for various GLEs. The black point corresponds to the GLE on 10 September 2017. The point in the upper right corner corresponds to the GLE on 20 January 2005. The linear fit is described by Equation~(\ref{eq1}).} \label{fig5} \end{figure} The maximum of the event was observed within the 17:30\,--\,18:00\,UT interval. No north-south anisotropy was found in the event. However, at the beginning of the increase, a considerable longitudinal asymmetry was revealed, according to the difference between the temporal behavior of the NM Fort Smith variation and the averaged count rate variation of the other high-latitude NMs.
Given the different profiles of the NMs IRK3 and LMKS (East-West), the longitudinal asymmetry is most probably imprinted at higher rigidities too, at least at $>3.6$\,GV. It is supposed that GLE particles are accelerated at the front of the shock wave created in the solar corona during the propagation of a CME \citep[see, e.g.,][]{Ryan2000, Yashiro2004, Kumari2017}. Another hypothesis is that the first high-energy protons arriving at the Earth's orbit are accelerated during the time when the main amount of energy in the flare was released. It is assumed that particle acceleration and subsequent plasma heating are closely connected with the energy release resulting from magnetic reconnection \citep[see,][]{Fletcher2011, Zharkova2011}. Both the process of the proton acceleration caused by the most intensive reconnection and the process of the acceleration caused by the shock wave should last for a certain period of time. We do not prefer either of these models. Observations of the pion-decay emission during a solar eruptive event provide incontestable evidence of proton acceleration up to high energies ($>300$\,MeV) and of the subsequent interaction with the dense medium \citep{Ramaty1987, Vilmer2011}. When high-energy protons interact with matter, the pion-decay gamma-rays are emitted almost instantaneously. Fermi/LAT observed the onset of the high-energy emission ($E_{\gamma}>100$\,MeV) at $\approx$15:58\,UT (G.\,Share, private communication). This experimental fact means that the accelerated protons could not have escaped the Sun earlier than $\approx$15:50\,UT, once the $\approx 500$\,s photon travel time to 1\,AU is subtracted. On the other hand, given the observed time (16:06\,--\,16:08\,UT) of the appearance of 600\,MeV SPE particles at the Earth's orbit, we can estimate the latest moment of particle release from the Sun. Suppose that these protons with $v\approxeq 0.8c$ propagated along the shortest IMF line $L \approx 1.2$\,AU. The propagation time is about 750\,s, and consequently the protons arrived at the Earth 250\,--\,300\,s later than any neutral emission. In other words, these particles escaped the vicinity of the Sun no later than 15:53\,--\,15:55\,UT. It should be noted that in most events there are some uncertainties: a) in dating the onset of acceleration by the observed onset of the high-energy gamma-emission at $E_{\gamma}>100$\,MeV; b) in the exact knowledge of the energies (velocities) of the protons responsible for the onset of the increase. These uncertainties allow one to determine the release time interval of GLE particles into interplanetary space with an accuracy of several minutes. Let us consider the 20 January 2005 flare and the GLE associated with it, which had almost 100\% anisotropy and an amplitude of 6000\%. The first protons arrived at the Earth at 06:48:30\,UT\,$\pm$30\,s (above the $5\sigma$ level). We compare this time with the time of appearance of the pion-decay emission, measured by CORONAS/SONG during the impulsive phase of the 20 January 2005 flare \citep{Grechnev2008, Masson2009, Kurt2013}. Even if the energy of the particles exceeded 10\,GeV, and they arrived at the Earth along the shortest possible trajectory $L \approx 1.1$\,AU, they had to leave the Sun at $\approx$ 06:38\,--\,06:39\,UT. The beginning of the pion-decay emission was observed by CORONAS/SONG at 06:44:40\,UT, and a strong increase started from 06:45:30\,$\pm 5$\,s. Taking into account the photon propagation time, we obtain that the proton acceleration began within 06:36:40\,--\,06:37:40\,UT.
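These timing estimates are elementary to reproduce. The short Python sketch below (physical constants rounded; the IMF line lengths are the values assumed above) computes the relativistic proton speed from its kinetic energy and the proton--photon arrival-time difference for both events:
\begin{verbatim}
import math

AU = 1.496e11      # astronomical unit, m
c  = 2.998e8       # speed of light, m/s
mp = 938.272       # proton rest energy, MeV

def beta_p(E_kin_MeV):
    # relativistic speed (in units of c) from kinetic energy
    gamma = 1.0 + E_kin_MeV/mp
    return math.sqrt(1.0 - 1.0/gamma**2)

def proton_delay(E_kin_MeV, L_AU):
    # proton travel time along an IMF line of length L_AU minus
    # the photon travel time over 1 AU
    return L_AU*AU/(beta_p(E_kin_MeV)*c) - AU/c

# 10 September 2017: 600 MeV protons, L ~ 1.2 AU
print(beta_p(600.0), proton_delay(600.0, 1.2))     # ~0.79c, ~260 s
# 20 January 2005: ~10 GeV protons, L ~ 1.1 AU
print(beta_p(1.0e4), proton_delay(1.0e4, 1.1))     # ~0.996c, ~50 s
\end{verbatim}
For 600\,MeV protons it gives $v \approx 0.79c$ and a delay of $\approx 260$\,s, consistent with the 250\,--\,300\,s quoted above.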
Thus, even in this event, with fairly accurate measurements of the onset of sub-relativistic proton acceleration on the Sun and of the beginning of the GLE, it is possible to identify the time of particle release from the Sun with an accuracy of 3\,--\,4\,minutes. \acknowledgements The authors wish to acknowledge Dr.\,G.\,Share for the high energy gamma-ray data, the PIs of all neutron monitors (\url{http://www.nmdb.eu}) whose data are used in the paper, and the GOES data providers. KK wishes to acknowledge support of the grant agency APVV project APVV-15-0194 and VEGA 2/0155/18. This paper was supported by the project CRREAT (reg. CZ.02.1.01/0.0/0.0/15003/0000481) call number 02\,15\,003 of the Operational Programme Research, Development and Education. The authors are thankful to an anonymous reviewer and the CAOSP editor for their help in improving the paper.
\section{Introduction} Measurements of the photon momentum in a dispersive medium are of conceptual and practical importance. Such studies are currently used in quantum metrology, in particular, to determine the ratio $h/m$~\cite{Weiss1993,Battesti2004,Cog2005}, where $h$ and $m$ are Planck's constant and the atomic mass, respectively, as well as the fine-structure constant $\alpha$~\cite{Wicht2002,Gupta2002}. In a medium, however, the photon momentum experiences a re-normalization due to the index of refraction, so that the photon momentum $\hbar k_0$ [$\hbar = h/(2\pi)$ and $k_0$ is the vacuum photon wave vector] should be replaced by $n\hbar k_0$, where $n$ is the index of refraction~\cite{Minkovski1908,Haugan1982,Lowden2004,Bradshaw2010,Barnett2010}. Experimentally, this problem has been tackled by Campbell {\it et al.}~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005} with a two-pulse light grating (Ramsey) interferometer, using near-resonant laser light. The scheme of measurements in~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005} was as follows. An elongated BEC of rubidium atoms $^{87}\rm{Rb}$ in the $|5^2S_{1/2} F = 1; m_F = -1\rangle$ state, confined in a magnetic trap, was illuminated in the perpendicular direction with an optical standing wave produced by two identical counter-propagating laser beams of a duration $\delta t$ and of a carrier frequency $\omega_{0}$. As a result, two coherent atomic clouds, moving in opposite directions, were created. The polarization of the excitation pulses was optimised to suppress the super-radiant Rayleigh scattering in the direction of the BEC's elongation. As a result of the Bragg scattering on the optical grating, an atom in its ground state either acquires a mean recoil momentum $p$ approximately twice the laser photon momentum $\hbar k_{0} = \hbar\omega_{0}/c$ ($c$ is the speed of light in free space), or returns to the static cloud. For a given refractive index $n$ of the medium, $p = 2n\hbar k_{0}$. The kinetic energy gained by an atom is equal to $p^{2}/2m$. Therefore, its de Broglie wave frequency is determined by $\omega_{B} = p^{2}/(2m\hbar)$. After some time delay $\tau$, a second identical pulse was applied and a second pair of moving atomic clouds was created. The speed of the clouds is low, so that they do not shift appreciably with respect to each other within the time delay; this leads to their interference and, accordingly, to density oscillations of the clouds as a function of the delay time $\tau$. The latter, in turn, affects the density of the condensate itself, since the total number of atoms is approximately conserved. Measuring the density of the static cloud as a function of the delay time $\tau$ allows one to determine the phase shift $\omega_{B}\tau$ and thus the effective atom recoil momentum. This is an indirect but highly sophisticated method of measuring the atomic recoil momentum via the influence of the interference of the moving coherent clouds on the condensate itself. We present a simplified microscopic model of the experiment~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005} on measuring the photon recoil momentum in a Bose-Einstein condensate of a dilute gas, using a semiclassical theory of the superradiant light scattering (SLS) on a BEC.
Within the framework of our approach, we are able, first, to reproduce the essential features of the experiment~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005} and, additionally, to calculate the quantum-mechanical mean of the recoil momentum in the moving atomic clouds and its statistical distribution. We demonstrate that the value of the recoil momentum extracted from the interference data for the static cloud~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005} coincides to a good accuracy with the quantum-mechanical mean. This indicates that it is just the latter that is measured in the experiment. The SLS from a BEC has been observed for the first time in~\cite{bib:Inouye_Science_285:571_1999,bib:Schneble_Science_300:475_2003} and since then an intensive buildup of the theory of the effect has followed~\cite{bib:Moor_Meystre_PhysRevLett_83:5202_1999, bib:Mustecaplioglu_You_PhysRevA_62:063615_2000, bib:Piovella_Gatelli_Bonifacio_OptCommun_194:167_2001, bib:Trifonov_JETP_93:969_2001, bib:Zhang_Meystre_PhysRevLett_91:150407_2003, bib:Benedek_Benedict_JOptB_6:3_2004, bib:Avetisyan_Trifonov_LaserPhysLett_124_2004_2005_2007, bib:Robb_Piovella_Bonifacio_JOptB_7:93_2005, bib:Zobay_Nikolopoulos_PhysRevA_72:041604_2005, bib:Bar-Gill_Rowen_Davidson_PhysRevA_76:043603_2007, bib:Deng_Payne_Hagley_PhysRevLett_104_050402_2010, bib:Avetisyan_Trifonov_PhysRevA_88:025601_2013}, which provided substantial insight into and understanding of the light-BEC interaction process. More specifically, in Refs.~\cite{bib:Moor_Meystre_PhysRevLett_83:5202_1999,bib:Mustecaplioglu_You_PhysRevA_62:063615_2000,bib:Zhang_Meystre_PhysRevLett_91:150407_2003, bib:Robb_Piovella_Bonifacio_JOptB_7:93_2005}, the quantum-electrodynamic approach in the mean-field approximation has been used to describe the SLS from a BEC of a dilute cold gas. In papers~\cite{bib:Piovella_Gatelli_Bonifacio_OptCommun_194:167_2001, bib:Trifonov_JETP_93:969_2001,bib:Benedek_Benedict_JOptB_6:3_2004,bib:Zobay_Nikolopoulos_PhysRevA_72:041604_2005, bib:Bar-Gill_Rowen_Davidson_PhysRevA_76:043603_2007,bib:Avetisyan_Trifonov_PhysRevA_88:025601_2013}, the semiclassical approach to the light-matter interaction, naturally incorporating the propagation and nonlinear effects, has been applied to explain essential details of SLS, such as the spatial asymmetry between forward- and backward-moving atomic side modes observed in the strong-pulse regime of SLS~\cite{bib:Zobay_Nikolopoulos_PhysRevA_72:041604_2005}, the ultraslow group velocity of the backward-propagating superradiant field~\cite{bib:Deng_Payne_Hagley_PhysRevLett_104_050402_2010}, the crucial role of the multiple recoil processes for SLS on a BEC resulting in that the SLS dominates over the usual Rayleigh scattering~\cite{bib:Avetisyan_Trifonov_PhysRevA_88:025601_2013}, and many others. The paper is organised as follows. In the next section, we present the formalism based on the semiclassical theory of the Ramsey interference in a BEC, involving the coupled system of Maxwell-Schr\"odinger equations within the framework of the slowly-varying amplitude approximation. In Sec.~\ref{Simulations}, the results of simulations of the condensate density oscillations are presented. In Sec.~\ref{Recoil}, we calculate the quantum-mechanical mean value of the recoil momentum and energy of an atom in the moving clouds and compare these data with those obtained in the previous section. Section~\ref{Conclusion} concludes the paper.
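As an orientation for the scales involved, the measured quantities can be estimated in a few lines of Python (a minimal sketch using standard $^{87}$Rb constants; the $1\%$ index-of-refraction enhancement is purely illustrative): for $\lambda = 780$\,nm, the two-photon recoil $p = 2n\hbar k_{0}$ gives $\omega_{B}/2\pi \approx 15$\,kHz, i.e., a fringe period of about $66\,\mu$s, and $\omega_{B}$ scales as $n^{2}$.
\begin{verbatim}
import math

hbar = 1.054571817e-34      # J s
m    = 1.44316060e-25       # kg, mass of 87Rb
lam  = 780.241e-9           # m, Rb D2 wavelength
k0   = 2.0*math.pi/lam      # vacuum wave vector

for n in (1.00, 1.01):      # vacuum value vs. a 1% index enhancement
    p  = 2.0*n*hbar*k0                  # two-photon recoil momentum
    wB = p**2/(2.0*m*hbar)              # de Broglie frequency omega_B
    print(n, wB/(2.0*math.pi)/1e3, "kHz,", 2.0*math.pi/wB*1e6, "us period")
\end{verbatim}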
\section{Formalism} \label{Formalism} In line with the geometry of the experiment~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005}, we shall use a simplified one-dimensional model of the light scattering on a condensate subjected to illumination by two pulses, as described above, separated by a delay time $\tau$. This model captures the essential features of the problem. An atom will be considered as a two-level Bose-particle with the wave functions $\varphi_a$ and $\varphi_b$ and corresponding eigenenergies $E_{a}$ and $E_{b}$ for the ground and excited states, respectively. We also take into account the atomic translational motion and then seek the atom's wave function in the form \begin{widetext} \begin{equation} \label{eq:psi_xt} \Psi(x,t) = \sum_{j = 0,\pm 2, \ldots} \left[ \phi_{a,j} a_{j}(x,t)+\exp(-i\omega_{0}t) \phi_{b,j+1} b_{j+1}(x,t)\right]\, , \end{equation} where \begin{equation} \label{eq:psi_ab} \phi_{a,j} = \frac{1}{\sqrt{L}}\exp\left[ik_{0}jx\right]\varphi_{a}\, \quad \mathrm{and} \quad \phi_{b,j+1} = \frac{1}{\sqrt{L}}\exp\left[ik_{0}(j+1)x\right]\varphi_{b} \end{equation} \end{widetext} are the wave functions of an atom in the $|j\rangle$ and $|j+1\rangle$ discrete momentum states, respectively, and $L$ is the transversal size of the condensate. To approach the problem, we use the coupled system of the Maxwell-Schr\"{o}dinger (MS) equations and apply the slowly-varying amplitude approximation in time and space. The system of MS equations for the amplitudes (in dimensionless units, see below) for our model of the light-condensate interaction reads \begin{widetext} \begin{subequations} \begin{equation} \label{eq:ms_aj} \frac{\partial a_{j}(x,t)}{\partial t} + v_{j}\frac{\partial a_{j}(x,t)}{\partial x} = -i\omega_{j}a_{j}(x,t) + \bar{E}^{+}b_{j+1}(x,t) + \bar{E}^{-}b_{j-1}(x,t)\, , \end{equation} \begin{eqnarray} \label{eq:ms_bj} \frac{\partial b_{j+1}(x,t)}{\partial t} + v_{j+1}\frac{\partial b_{j+1}(x,t)}{\partial x} \nonumber \\ = i\left(\Delta -\omega_{j+1} + i \gamma/2\right)b_{j+1}(x,t) - E^{+}a_{j}(x,t) - E^{-}a_{j+2}(x,t)\, , \end{eqnarray} \begin{equation} \label{eq:E_plus} E^{+}(x,t) = E_{0}(t) + 2\int^{x}_{0}dx'\sum_{j = 0,\pm 2,\ldots}b_{j+1}(x',t)\bar{a}_{j}(x',t)\, , \end{equation} \begin{equation} \label{eq:E_minus} E^{-}(x,t) = E_{0}(t) + 2\int^{1}_{x}dx'\sum_{j = 0,\pm 2,\ldots}b_{j-1}(x',t)\bar{a}_{j}(x',t)\, , \end{equation} \end{subequations} \end{widetext} where $j = 0, \pm 2, \pm 4, \ldots$ We adopted in Eqs.~\eqref{eq:ms_aj} - \eqref{eq:E_minus} as units of length and time, respectively, the condensate transversal size $L$ and the superradiant time constant $\tau_{R} = \hbar/(\pi d^2 k_{0}N_{0}L)$~\cite{bib:Benedict_Super-radiance_1996}, where $d$ is the atom transition dipole moment and $N_{0}$ is the atom number density. The slowly varying field amplitudes of the forward (backward) $E^{+}$ ($E^{-}$) and incident $E_{0}$ fields are scaled by $i\hbar/(d\tau_{R})$ (overbars denote the complex conjugation). The quantities $\omega_{j} = \hbar j^{2}k^{2}_{0}\tau_{R}/(2m)$ and $v_{j} = j\hbar k_{0}\tau_{R}/mL$ are the dimensionless atom recoil frequency and velocity, respectively, where the index $j$ runs over $0,\pm 2, \pm 4, \ldots$ for the ground state, while over $\pm 1, \pm 3, \pm 5, \ldots$ for the excited state.
Furthermore, $\Delta = (\omega_{0}-\omega_{ba})\tau_{R}$ is the dimensionless detuning of the incident field frequency $\omega_{0}$ away from the atomic resonance at $\omega_{ba}$, and $\gamma = \Gamma\tau_{R}$, where $\Gamma$ is the spontaneous emission rate of the excited atomic state. The retardation is neglected in Eqs.~(\ref{eq:E_plus}) and~(\ref{eq:E_minus}) as the flight time of light through the system $L/c$ is much shorter than all other times in the problem. The only nonzero initial condition to the system of equations~\eqref{eq:ms_aj} - \eqref{eq:E_minus} is $a_{0}(x,t=0) = 1$. All other variables are equal to zero before the first excitation pulse arrives. A similar system of equations~(\ref{eq:ms_aj}) - (\ref{eq:E_minus}) has been used previously for the description of the super-radiant scattering on BECs of a dilute gas~\cite{bib:Avetisyan_Trifonov_PhysRevA_88:025601_2013}, only without the spatial derivatives of the wave function amplitudes $a_j(x,t)$ and $b_j(x,t)$. While those terms have no effect on the final results in the aforementioned studies, in our case they are of crucial importance for capturing some fine features of the two-pulse Ramsey interference (see the next section for details). Additionally, we do not use the approximation of adiabatic elimination of the excited atomic state, usually assumed when considering the light-condensate interaction. This allows us to consider an arbitrary value of the detuning $\Delta$, i.e. to scan across the exact resonance, $\Delta = 0$, which is important for our study. \section{Condensate density oscillations} \label{Simulations} In our modeling of the Ramsey interference, we used a set of system parameters close to those in the experiment~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005}: the BEC transversal size $L = 16\,\mu$m, the atom number density $N_{0} = 4.15\times 10^{13}$cm$^{-3}$, the radiation constant of the $|5^2P_{3/2} F = 1\rangle \rightarrow |5^2S_{1/2} F = 1\rangle$ transition (the wavelength $\lambda = 780$~nm) $\Gamma = 0.37 \cdot 10^8$ s$^{-1}$, and the corresponding transition dipole moment $d = 2.07\cdot 10^{-29}$~C\,m. For these parameters, the super-radiant constant is estimated to be $\tau_{R} \approx 1.75\times 10^{-9}$~s. Then, for the dimensionless quantities $v_j$, $\omega_j$ and $\gamma$ one gets: $v_{j} = 7.8\times 10^{-7}\,j$, $\omega_{j} = 5\times 10^{-5}\,j^{2}$, $\gamma = 5\times 10^{-2}$. The detuning $\Delta$ was varied within a range of $[-12,12]$. To excite the condensate, we used rectangular pulses of duration $\delta t = 5\,\mu$s ($\delta t/\tau_R = 3\times 10^{3}$). The delay time $\tau$ between the pulses was varied within a range of [5,150]~$\mu$s (in dimensionless units $\tau/\tau_R$, within $[3,90] \times 10^{3}$). The dimensionless amplitude of the incident pulse, $E_{0} = 6\times 10^{-3}$, was chosen so that after a $\delta t$-long excitation, the population of the static cloud decreased approximately to a level of 0.9. When solving the MS equations~\eqref{eq:ms_aj} - \eqref{eq:E_minus}, we took into account the generation of atomic clouds up to the 10th order. First of all, we are interested in the fraction of atoms in the static BEC cloud at the time point $t = \tau + \delta t$, when the second excitation pulse has passed. It is defined as \begin{equation} \begin{split} \label{eq:s0} S_{0}(t) = \int^{1}_{0}dx \left|a_{0}(x,t)\right|^{2} \, .
\end{split} \end{equation} We are also interested in the fraction of atoms in the moving atomic clouds in the ground state, $|\pm 2j\rangle$, $j = 1,2,3,\ldots$: \begin{equation} \begin{split} \label{eq:spm2j} S_{\pm 2j}(t) = \int^{1}_{0} dx \left|a_{\pm 2j}(x,t)\right|^{2} \, . \end{split} \end{equation} The results of simulations for the static $|0\rangle$ and moving $|\pm 2\rangle$ clouds of the BEC are shown in Fig.~\ref{fig:IntAtClouds}. As is seen from the figure, the fraction of atoms in all clouds, $S_0$ and $S_{\pm 2}$, as a function of the delay time $\tau$, reveals oscillations, as observed in the experiment~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005}. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.476\columnwidth]{fig1a.eps} \includegraphics[width=0.465\columnwidth]{fig1b.eps} \end{center} \caption{\label{fig:IntAtClouds} Left panel: interference fringes of $S_{0}(\tau + \delta t)$ and $S_{\pm 2}(\tau + \delta t)$ for the $|0\rangle$ - and $|\pm 2\rangle$ - clouds, respectively. The solid (dashed) curve was calculated for $\Delta = 0.5$ ($\Delta = -0.5$). Right panel: the same, only neglecting the first spatial derivatives of the wave function amplitudes $a_j(x,t)$ and $b_j(x,t)$ in Eqs.~(\ref{eq:ms_aj}) and~(\ref{eq:ms_bj}).} \end{figure} The authors of~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005} associated the density oscillations of the static condensate cloud mainly with those of the $|\pm 2\rangle$ clouds. This is confirmed by our calculations. Indeed, one observes strong correlations in the frequencies, phases and amplitudes of the $S_{\pm 2}$ and $S_{0}$ interference fringes for the same $\Delta$, which reflects the conservation of the total number of atoms in these states. We emphasize that the interference fringes are shifted with respect to each other for different signs of $\Delta$. Most importantly, however, their frequencies deviate from each other for opposite signs of the detuning $\Delta$, which agrees with the experiment~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005}. This shift disappears if one neglects in Eqs.~\eqref{eq:ms_aj} and~\eqref{eq:ms_bj} the terms proportional to the spatial derivatives of the wave function amplitudes $a_j(x,t)$ and $b_j(x,t)$ (see Fig.~\ref{fig:IntAtClouds}, right panel). This points to the relevance of those terms for the correct description of the Ramsey interference. As is seen from Fig.~\ref{fig:IntAtClouds}, the density oscillations do not show the decay that was found in Ref.~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005} and explained there by the decreasing overlap between the recoiling atoms and those at rest, caused by their relative motion after the shutoff of the magnetic trap. In our case, the system is confined in a finite interval $[0, L]$, i.e., there is no expansion of the clouds and, consequently, no decay of the interference signal. From the density oscillations obtained, one can extract the recoil frequency $\omega_{rec}$ for different values of the detuning $\Delta$ (a minimal numerical sketch of this extraction is given below). Analyzing the numerical data presented in the left plot of Fig.~\ref{fig:IntAtClouds}, we found that $\omega_{rec} \approx 1.06\, \omega_{2}$ for $\Delta = -0.5$, while $\omega _{rec} \approx 0.94 \, \omega_{2}$ for $\Delta = 0.5$, where $\omega_2 = 4\hbar k^{2}_{0}\tau_{R}/(2m)$ is the bare frequency of atoms in the $|\pm 2\rangle$ coherent clouds.
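The following minimal sketch (in Python with NumPy/SciPy; the arrays \texttt{tau} and \texttt{S0} stand for hypothetical sampled outputs of our solver, and the fitting model is an illustrative assumption, not part of the theory) shows how such a fringe frequency can be extracted by a least-squares fit seeded with the discrete Fourier spectrum.
\begin{verbatim}
# Minimal sketch: fringe-frequency extraction from simulated S0(tau).
import numpy as np
from scipy.optimize import curve_fit

def fringe(tau, A, B, omega, phi):
    # undamped sinusoidal model of the interference fringes
    return A + B * np.cos(omega * tau + phi)

def extract_frequency(tau, S0):
    S0c = S0 - S0.mean()
    # seed omega with the peak of the discrete Fourier spectrum
    f = np.fft.rfftfreq(tau.size, d=tau[1] - tau[0])
    omega0 = 2 * np.pi * f[1 + np.argmax(np.abs(np.fft.rfft(S0c))[1:])]
    p0 = [S0.mean(), np.sqrt(2) * S0c.std(), omega0, 0.0]
    popt, _ = curve_fit(fringe, tau, S0, p0=p0)
    return popt[2]  # fitted fringe frequency, an estimate of omega_rec
\end{verbatim}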
We point out again that the recoil frequency $\omega_{rec}$ differs for opposite signs of the detuning $\Delta$. In contrast, a similar analysis performed for the right plot of Fig.~\ref{fig:IntAtClouds} yields $\omega_{rec} = \omega_2$ independently of the sign of $\Delta$. \section{Momentum and frequency recoil} \label{Recoil} The density oscillations of the static cloud of the condensate provide a tool to determine the actual value of the atom recoil momentum/frequency in the moving clouds, as has been implemented in~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005} within the framework of the phenomenological picture. Our microscopic approach allows one to obtain more detailed information about the recoil momentum/frequency for the different atomic clouds: not only its mean value, but also the distribution function. The latter for the $j$-th atomic cloud is defined as \begin{equation} \begin{split} \label{eq:wj} w_{j}(k,t) = \frac{\left|f_{j}(k,t)\right|^{2}}{\int^{+\infty}_{-\infty} dk^{\prime}\left|f_{j}(k',t)\right|^{2}}\ , \end{split} \end{equation} where $f_{j}(k,t) = \int^{1}_{0}dx\exp\left(-ikx\right)a_{j}(x,t)$ is the Fourier transform of the amplitude $a_{j}(x,t)$. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.47\columnwidth]{fig2a.eps} \includegraphics[width=0.47\columnwidth]{fig2b.eps} \end{center} \caption{\label{fig:DistribFuncRecMoment} Distribution functions of the recoil momentum shift $\delta k_{\pm 2} = k_{\pm 2} - 2k_0$ for atoms in the $|-2\rangle$ and $|2\rangle$ clouds (dashed and solid curves, respectively) immediately after the first excitation pulse of duration $\delta t = 5~\mu$s, for two values of the detuning: $\Delta = -0.5$ (left panel) and $\Delta = 0.5$ (right panel).} \end{figure} Then for the mean recoil momentum of an atom in the $|j\rangle$ cloud one gets \begin{equation} \begin{split} \label{eq:deltak} k_{j} = \int^{+\infty}_{-\infty} dk \,k w_{j}(k,t)\, , \end{split} \end{equation} and for its variance: \begin{equation} \begin{split} \label{eq:Dj} D_{j} = \int^{+\infty}_{-\infty} dk\, w_{j}(k,t)(k - k_{j})^{2} \, . \end{split} \end{equation} Examples of the distribution functions for the recoil momentum shift $\delta k_{\pm 2} = k_{\pm 2} - 2k_0$ of atoms in the $|\pm 2\rangle$ clouds, obtained immediately after the action of the first pulse, are shown in Fig.~\ref{fig:DistribFuncRecMoment}. As is seen, the distributions for the $|2\rangle$ and $|-2\rangle$ clouds depend on the sign of $\Delta$ and interchange under $\Delta \to -\Delta$, which is consistent with the interference-fringe data. Thus, the Fourier transform of the signal after the action of the first pulse already contains the information that is deduced from the interference fringes only after the action of the second pulse. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.9\columnwidth]{fig3.eps} \end{center} \caption{\label{fig:Momentum_Shift} The mean recoil momentum shift $\delta k_{2} = k_{2} - 2k_0$ (solid curve) and its standard deviation $D_2^{1/2}$ (dashed curve), both in units of $k_{0}$, for an atom in the $|2\rangle$ cloud versus the detuning $\Delta$.} \end{figure} The results for the mean recoil momentum shift $\delta k_2 = k_2 - 2k_0$ and its standard deviation $D_2^{1/2}$ as a function of the detuning $\Delta$ for an atom in the $|2\rangle$ cloud are depicted in Fig.~\ref{fig:Momentum_Shift}. Note that $\delta k_2 = k_2 - 2k_0 = 2k_0(n - 1)$, and thus Fig.~\ref{fig:Momentum_Shift} represents in fact the detuning dependence of the refractive index $n$.
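For reference, the moments defined in Eqs.~(\ref{eq:wj})-(\ref{eq:Dj}) can be evaluated directly on the spatial grid of the solver. A minimal sketch (in Python with NumPy; \texttt{a\_j}, \texttt{x} and \texttt{k} are hypothetical arrays holding the computed amplitude, the spatial grid on $[0,1]$ and the momentum grid):
\begin{verbatim}
# Minimal sketch: distribution w_j(k), mean k_j and variance D_j.
import numpy as np

def momentum_moments(a_j, x, k):
    dx = x[1] - x[0]
    f = np.exp(-1j * np.outer(k, x)) @ a_j * dx   # f_j(k)
    w = np.abs(f) ** 2
    w /= np.trapz(w, k)                     # normalized w_j(k), Eq. (wj)
    k_mean = np.trapz(k * w, k)             # mean momentum, Eq. (deltak)
    D = np.trapz(w * (k - k_mean) ** 2, k)  # variance, Eq. (Dj)
    return w, k_mean, D
\end{verbatim}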
The $\Delta$-dependence of $\delta k_{-2} = k_{-2} - 2k_0$ for the $|-2\rangle$ cloud is mirror-symmetric with respect to that of $\delta k_2$. From Fig.~\ref{fig:Momentum_Shift}, one can see that changing the sign of the detuning $\Delta$ alters the sign of the mean recoil momentum shift $\delta k_{2}$. Because of that, the $\Delta$-dependence of $\delta k_{2}$ has a dispersive shape. We point out the relatively large standard deviation $D_2^{1/2}$ of $\delta k_{2}$. This is a result of the finite BEC size $L$, as well as of the spatial inhomogeneity of the atomic state amplitude, which represent the main sources of uncertainty of the recoil momentum. \begin{figure}[ht!] \begin{center} \includegraphics[width=0.5\columnwidth]{fig4a.eps} \includegraphics[width=0.485\columnwidth]{fig4b.eps} \end{center} \caption{\label{fig:Recoil frequency} The calculated detuning dependence of the recoil frequency $\omega_{rec}$ (left plot) and the measured one (right plot), taken from~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005}, Fig. 3. The dotted lines show the two-photon recoil frequency $4\omega_{rec} = 15\,068$ Hz. For further explanation, see text and Ref.~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005}, Fig. 3. } \end{figure} We recalculated the data presented in Fig.~\ref{fig:Momentum_Shift} into the recoil frequency $\omega_{rec}$, Fig.~\ref{fig:Recoil frequency} (left plot). For comparison, in Fig.~\ref{fig:Recoil frequency} (right plot) the experimental results of~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005} for $\omega_{rec}$ are shown. Contrasting these two plots, we see that the theoretical and experimental curves (thick dots with error bars) have in common the dispersive shape of the $\Delta$-dependence of the recoil frequency. However, the experimental curve has two features that distinguish it from the theoretical one. First, it is shifted up by approximately 900 kHz (dashed line), which is due to the so-called mean-field shift~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005}, and second, its lower part is displaced to the right by 157 MHz, because of the presence of the other allowed transition $|5^2S_{1/2} F = 1\rangle \rightarrow |5^2P_{3/2} F = 2\rangle $, contributing to the optical response~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005}. These two effects are not taken into account in our simplified theory. Most importantly, our approach recovers the dispersive shape of the detuning dependence of the recoil frequency $\omega_{rec}$. Now, let us compare the result for the quantum-mechanical mean recoil frequency shift $\delta\omega_2$ with the one obtained in the simulations of the Ramsey interference, $\delta\tilde{\omega}_2$. For that we use the relation $\delta\omega_{2}/\omega_{2} = 2\delta k_{2}/k_{2}$ between the frequency shift $\delta\omega_{2}$ and the momentum shift $\delta k_{2} = k_2 - 2k_0$. From the calculated $\delta k_2$ data we obtained $\delta\omega_{2}/\omega_{2}\approx 0.062$ for $\Delta = -0.5$ and $\delta\omega_{2}/\omega_{2}\approx -0.057$ for $\Delta = 0.5$. At the same time, from the interference fringes (Fig.~\ref{fig:IntAtClouds}, left panel), the corresponding values are found to be $\delta\tilde{\omega}_{2}/\omega_{2}\approx 0.059$ and $\delta\tilde{\omega}_{2}/\omega_{2}\approx -0.058$, respectively.
A comparison of these data shows that the value of the recoil frequency shift $\delta\omega_2$ extracted from the interference fringes coincides, to good accuracy, with the quantum-mechanical mean. This indicates that it is precisely the latter quantity that is measured in the experiment. \section{Conclusion and outlook} \label{Conclusion} We have presented a microscopic theory reproducing the essential features of the experimental results on measuring the photon recoil momentum in a BEC of a dilute gas by means of the two-pulse Ramsey interference~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005}. For this purpose, we have used the coupled Maxwell-Schr\"odinger equations within the framework of the slowly-varying envelope approximation. We have found that for an adequate description of the experiment~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005}, it is of principal importance to take into account corrections to the bare recoil energy of an atom arising from the inhomogeneity of the atomic clouds (the spatial derivatives of the atom wave function amplitudes). Neglecting them results in a theory in which the photon recoil momentum in the medium coincides with its vacuum value. The microscopic approach has allowed us to directly calculate the quantum-mechanical mean value of the recoil momentum of an atom and its statistical distribution in the moving atomic clouds. We have found that the recoil momentum extracted from the interference fringes of the static BEC cloud, as has been done in the experiment~\cite{bib:Campbell_Leanhardt_etal_PheRevLett_94:170403_2005}, represents just the quantum-mechanical mean value. We have considered a Bose-Einstein condensate of an ideal atomic gas. A question that remains to be answered is to what extent the interaction between atoms (within the microscopic picture) affects the Ramsey interference. Additionally, we have used in our analysis the slowly varying amplitude approximation in space for the fields. Keeping the second spatial derivative in the Maxwell equations will allow one to correctly take into account the reflection of the laser beams from the condensate, as well as the reflection of the fields inside the condensate from its boundaries. The effect of beam diffraction on the Ramsey interference is also a question to be answered. These issues are the subject of a forthcoming paper. \acknowledgments The authors would like to thank M. G. Benedict, A. K. Belyaev and V. V. Tuchin for discussions and I. V. Ryzhov for technical assistance. E. D. T. acknowledges support from the Russian Foundation for Basic Research (grant~15-02-08369-A). Yu. A. A. thanks the Russian Scientific Foundation (grant 16-19-10455) for support of the part of this work devoted to developing advanced algorithms for the analysis of the primary photon single-scattering event.
\section{INTRODUCTION} \label{sec:intro} \textbf{Li}ght \textbf{D}etection \textbf{A}nd \textbf{R}anging (LiDAR) (also known as laser-based radar) systems are an emerging sensing capability with applications to many real-world interactive systems. From 3D mapping for augmented reality, urban planning, agriculture, autonomous navigation and autonomous cars to high resolution maps for research in fields from geology to archeology \cite{nex2014uav}, accurate computerized depth maps of our surroundings are becoming a sought-after commodity. \\ To fix the range to an object, LiDAR systems can use time-of-flight information. Current market sensors providing fast responses to optical signals are large compared to the CCD elements used in digital cameras, making arrays of such sensors large and expensive. As a result, the usual approach uses a single detector to scan the surroundings, giving one distance measurement for each angular coordinate.\\ This approach (single-pixel scanning) results in acquisition times that are at least linear in the number of data points acquired. While this appears to be a reasonable scaling, maintaining spatial resolution at greater distances requires increasing the angular resolution by a fixed ratio, leading to acquisition times that increase polynomially in target distance and image resolution. Since applications measure increased resolution by the number of horizontal or vertical pixels measured, the total number of data points grows as the square of the linear resolution.\\ Safety and energy efficiency are two further considerations that limit single-pixel scanning LiDAR. To overcome measurement noise, the output power of the laser must be increased. Passerby safety is often improved by using less harmful wavelengths, which could require increased energy output to overcome lower sensor efficiency at the chosen wavelength. Eventually safety wins out at any wavelength, and the maximum safe output intensity limits the effective measurement range of the LiDAR. \\ We address these issues by increasing the sensitivity in two realms, using two emerging technologies: a photon number resolving detector increases our signal sensitivity without increasing output intensity, and compressed sensing allows the capture of more information with each measurement. \\ Use of a photon number resolving detector (details in section \ref{PCD}) allows the reduction of optical illumination power by several orders of magnitude. To distinguish a signal from background light we need only several photons to return from the target. A narrow wavelength filter sufficiently reduces the ambient signal to a point where it is possible to operate in broad daylight. Additionally, high sensitivity allows the capture of targets with a wide dynamic range of intensities by leveraging the temporal separation of signals returning from different distances. \\ Compressed sensing (explained in section \ref{CS}) is a signal-reconstruction technique based on the observation that many signals contain much less information than would be suggested by the Nyquist limit. This is especially evident for naturally occurring pictures, which can easily be compressed by as much as $ 95\% $ with minimal loss of accuracy; represented in the correct bases, natural images are very sparse (a minimal numerical illustration is given below). We could therefore expect to recover all the information an image carries with a number of measurements closer to the information content of the image, rather than the Nyquist limit.
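To illustrate this compressibility claim, the following minimal sketch (in Python with NumPy/SciPy; the input \texttt{img} is any grayscale image supplied by the reader, and the $5\%$ retention level is an assumption chosen for illustration) keeps only the largest few percent of the image's discrete-cosine-transform coefficients and reports the resulting reconstruction error.
\begin{verbatim}
# Minimal sketch: sparsity of a natural image in a frequency basis.
import numpy as np
from scipy.fft import dctn, idctn

def compress(img, keep=0.05):
    c = dctn(img, norm='ortho')            # frequency representation
    cutoff = np.quantile(np.abs(c), 1.0 - keep)
    rec = idctn(np.where(np.abs(c) >= cutoff, c, 0.0), norm='ortho')
    err = np.linalg.norm(rec - img) / np.linalg.norm(img)
    return rec, err   # err is typically small for natural images
\end{verbatim}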
\\ Compressed sensing uses carefully chosen measurement bases to sample the signal (in this case, image) as a whole, and can then reconstruct the signal using non-linear optimization to recover the image with $ 100\% $ fidelity from a number of measurements that is logarithmic in the signal frequency. For images, the Nyquist limit is proportional to the number of pixels, so that a mega-pixel image can be reconstructed from as few as 6000 compressed sensing measurements. \section{PHOTON NUMBER RESOLVING DETECTORS AND SYSTEM OVERVIEW}\label{PCD} Photon number resolving detectors (PNRDs) are single-photon detectors capable of distinguishing the number of photons arriving at the detector in a short time period \cite{dolgoshein2006status}. As a PNRD we use a silicon photo-multiplier (SiPM), which is an avalanche photo-diode (APD) array with a common charge collector operating in Geiger mode. \subsection{Silicon photomultiplier} The detector is an array of APD elements, each able to detect a single photon and output a constant electric charge in a short time. All the elements are connected in parallel to the same output, such that the total current from all elements corresponds to the number of photons detected by the array. \\ APDs can be used in linear mode or Geiger mode. In linear mode, the diode is placed under reverse bias below the breakdown voltage. Photons reaching the diode release electron-hole pairs that are accelerated by the electric field; in the process, they can hit other electrons and release more pairs. In linear mode the energy obtained from the electric current is below that required to create another pair, and the reaction decays with time. In Geiger mode, the diode is placed under reverse bias voltage beyond the breakdown voltage. A photon arriving at the diode sets off a chain reaction of released electrons and holes, creating an avalanche and a larger current, increasing the detector's gain. \\ \subsection{System overview} The presented LiDAR setup is a two channel system - one for illumination and one for sensing. The illumination source is a Standa micro-chip Nd:YAG passively mode-locked laser, frequency doubled to 532~nm wavelength, with 0.5~ns pulses at a 1~kHz repetition rate. A pulse length of 0.7~ns gives a theoretical lower bound on the depth resolution of the LiDAR of about 10~cm. The beam is passed through a holographic diffuser to create a square illumination in the far field, with 30 milliradian angular spread. An optional polarizer (not shown) allows further beam attenuation if necessary. The sensing channel uses a telescope to image the returned light on a Texas Instruments DLP4500 \cite{instruments_2017} digital micro-mirror device (DMD), with a resolution of 1152 by 912 pixels. Lower resolution images can be captured by binning mirror elements into larger effective pixels. The DMD is programmed to show binary masks as the chosen basis for compressed sensing (see section \ref{CS}). The light hitting the DMD is directed either to the PNRD through a beam homogenizer, or to a beam-dump coated with highly absorbent material (see Figs. \ref{fig_sys_ovr}, \ref{fig_picture}). \\ \begin{figure} [ht] \begin{center} \subfloat[]{\includegraphics[height=5cm]{systemSketch_cap.png} \label{fig_sys_ovr}} \hfil \subfloat[]{\includegraphics[height=7cm]{systempicture.jpeg} \label{fig_picture}} \end{center} \caption{(\textbf{a}) Illustration of the LiDAR system. (\textbf{b}) Picture of the LiDAR system.
\\(A) Laser (B) Holographic diffuser (C) Object (D) DMD (E) Beam-dump (F) Detector} \end{figure} The DMD allows us to choose non-trivial masks and thereby use compressed sensing. The homogenizer maps every pixel of the DMD to the entire area of the detector, limiting the finite-area effects that cause non-linearity at higher intensities \cite{dolgoshein2006status} and allowing a wider dynamic range. The beam-dump absorbs all the light returned from the target that is not part of the current measurement. This light would otherwise become particularly disruptive noise: because it is correlated with the signal we are trying to measure, it cannot be distinguished from the signal and so cannot be removed with standard techniques. This setup is inspired by the single photon LiDAR described in Ref.~\citenum{howland2014compressive}. \subsection{Data acquisition} The signal from the detector is recorded on a fast computer-controlled oscilloscope for analysis. The effective range of the LiDAR depends on the illumination power, target albedo and detector sensitivity, and this range in turn dictates the appropriate trace length we need to record after each laser pulse. The distance to the target is calculated by relating the return trip time to the speed of light, so the depth resolution depends on the temporal resolution of the oscilloscope. An example trace is presented in Fig. \ref{dTrace}. Once the data is recorded and the targets in it identified, it is split into depth frames. Each frame is taken to be a 2D image, and compressed sensing (outlined in section \ref{CS}) is used to reconstruct the respective image from the measured signals. \section{Compressed Sensing}\label{CS} Compressed Sensing is a method of reconstructing signals from fewer measurements than the Nyquist limit for the highest frequency in the signal. For monochromatic images, this means reconstructing the image from fewer measurements than the number of pixels in the image. It is based on two assumptions: that the image is sparse in some basis (a compressed information representation), and that we can measure in a basis incoherent with that representation \cite{candes2007sparsity}. \\ Sparsity gives rise to the following argument: the Shannon entropy of an image with $ n $ pixels but only $ k \ll n $ non-zero elements is \cite{howland2014compressive} \begin{equation}\label{entropy} n\left [-\frac{k}{n} \log{\frac{k}{n}} -\left(1-\frac{k}{n}\right) \log\left(1-\frac{k}{n}\right)\right ] \approx k \log \frac{n}{k} \end{equation} We use this as a lower bound for the number of measurements required to reconstruct a signal. We will empirically show that we can get very close to this bound. \\ \subsubsection{Incoherence} \cite{candes2008introduction,howland2014compressive} Two bases are mutually incoherent if the dot product of any vector from one with any vector from the other is not too large: If $\{\phi_i\},\{\psi_j\} $ are bases of dimension $n$, the mutual incoherence is: \begin{equation}\label{mutual_incoherence} \mu({\phi_i},{\psi_j}) = \sqrt{n} \max_{i,j} \|\langle\phi_i, \psi_j \rangle\| \end{equation} This can be intuitively understood as follows: measuring in a coherent basis, we only get information if we measure a quantity present in our signal. In an incoherent basis, any measurement gives us information about the entire signal. \\ \subsubsection{Sparsity} Natural images tend to be sparse in frequency bases such as Fourier or wavelet.
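To make the roles of sparsity and incoherence concrete, the following minimal sketch (in Python with NumPy; the dimensions, the sparsity level and the plain iterative soft-thresholding solver are illustrative assumptions, standing in for the TV-regularized NESTA solver described below) recovers a synthetic sparse signal from random binary masks of the kind displayed on the DMD.
\begin{verbatim}
# Minimal sketch: sparse recovery from random binary masks (ISTA).
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 1024, 20, 250                  # pixels, nonzeros, masks
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
Phi = rng.integers(0, 2, size=(m, n)).astype(float)  # binary masks
Phi = (Phi - 0.5) / np.sqrt(m)           # center and scale the rows
y = Phi @ x                              # noiseless measurements

L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz const. of gradient
lam, xr = 1e-4, np.zeros(n)
for _ in range(3000):                    # iterative soft-thresholding
    g = xr - (Phi.T @ (Phi @ xr - y)) / L
    xr = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
# xr now approximates x up to the small threshold bias
\end{verbatim}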
Random measurement bases will be $ C \cdot \sqrt{2 \log n} $ incoherent with any basis with high probability for some $ C \approx 1 $, and are therefore suitable for compressed sensing. \subsection{Reconstruction theory} Let $ x $ be some signal, and $\{\phi_i\} $ be some basis in which $ x $ is $ k $-sparse. Let $ \{y_k\} $ be a set of $ m $ measurements of $ x $ in some basis $ \bm{\psi} $, and let \begin{equation}\label{min_measurements} m>\mu(\bm{\phi,\psi})^2 \cdot k\cdot \log\frac{n}{\delta} \end{equation} Then with probability greater than $ 1-\delta $: \begin{equation}\label{reconstruction} x=\arg\min_{x' \in \mathbb{R}^n} \| x' \|_1 \quad \text{s.t.} \quad y_k=\langle\psi_k,x'\rangle \end{equation} This implies that if we take $ m \ge k \log^2 n $ measurements of $ x $ in some random basis, we can expect to reconstruct $ x $ with high probability\cite{candes2008introduction}. \\ \subsection{Reconstruction practice} Practically, $ m \approx k \log n$ measurements are usually sufficient to reconstruct the signal exactly, and signals can be approximately reconstructed from as few as $ m=O(k) $ measurements. \\ The measurements used in this work are random elements of the Dragon Wavelet group (Fig. \ref{dragonLet}), described in Ref.~\citenum{feldman2016power}, which can be computed efficiently in $ O(n \log n) $ time and in-place for optimal memory use. The Dragon wavelets resemble fractal noise patterns, and so are highly incoherent with natural images as well as other signals we would be likely to measure. \\ There are many algorithms available for compressed sensing reconstruction, from standard off-the-shelf algorithms (such as linear programming) implemented in every scientific computation package to optimized special-purpose methods for sparse signal reconstruction. We used the MATLAB implementation of Nesterov's algorithm (NESTA) \cite{nesterov1983method} with total variation (TV) minimization instead of $ L_1 $ sparsity, primarily for the increased robustness to noise presented by this approach \footnote{A more rigorous analysis is forthcoming}. \\ When reconstructing a 3D image, we treat each depth layer as a 2D image and then stack the layers, which allows a much simplified temporal acquisition scheme. The linear temporal scan also allows a simpler computational pipeline for reconstruction than a fully 3D compressed sensing approach would require, but does not allow the exploitation of compressed sensing for the depth dimension. Figure \ref{dTrace} presents a trace for a single mask recorded by the system, showing the number of photons received as a function of the distance. Each trace is correlated with the detector's response curve, and the time of arrival and photon count are calculated. This is one of the traces used to create Fig. \ref{CSChimny}. \begin{figure}[ht] \begin{center} \subfloat[]{\includegraphics[height=5cm]{dragon_wavelet.jpeg} \label{dragonLet}} \hfil \subfloat[]{\includegraphics[height=5cm]{scopeTrace.eps} \label{dTrace}} \end{center} \caption{(\textbf{a}) Example of a dragon wavelet. (\textbf{b}) Raw data from PNR detector.} \end{figure} \section{Results} We aimed the system at a variety of targets available in our laboratory's surroundings, including chimneys, signs and environmental sculptures. The amount of signal required is determined by background light: there can be anywhere from 2 photons on a bright night to 30 solar photons by day per nanosecond in the 532$ \pm $1~nm wavelength window of the filter attached to the detector.
Thus, we need to detect at least 900 photons per measurement in order to reach shot noise levels comparable to the background radiation. The current limit on the system's resolution is $64\times64$ pixels and stems from the accuracy of the data acquisition and recording equipment compatible with our system. Several software and hardware solutions are available to increase the resolution to the full capacity of the DMD, but have not yet been fully integrated. \subsection{Close targets} We started with a nearby target in the 50--60 meter range. Figures \ref{chimFront}, \ref{chimFrontCS} show an optical camera image of several chimney pipes found across the street from our lab, and a top view of the same targets (Fig. \ref{chimTop}). The 3D depth information created by the system is superimposed on the images. Figure \ref{chimFront} is a raster scan, obtained by turning on one pixel at a time in order, with the output intensity set so that approximately 50 photons are returned from the target per pixel above the background radiation. Each pixel is measured 10 times, giving around 500 photons per pixel. This is sufficient to distinguish a target from empty space but does not reveal any more detailed information. \\ For compressed sensing, the information carried by each mask is in the mask's intensity compared to the other masks. The variation in intensity between masks is on the order of the square root of the signal frequency \cite{howland2014compressive}, which for square images is proportional to the number of pixels on each side. Thus we set the output intensity so that approximately 2,500 photons are returned per mask, with each mask covering half the image's pixels. Repeating each measurement 10 times yields 25,000 photons per mask, which gives sufficient measurement accuracy for image reconstruction. Compressed sensing allows us to vary the number of masks used to reconstruct the image. Figure \ref{CSChimny} shows a compressed sensing scan of the same image reconstructed with different numbers of masks, from 512 masks (Fig. \ref{chim_512m}) down to just 64 masks (Fig. \ref{chim_64m}). Using 512 masks, the detector recorded 3,125 photons for each of the 4096 pixels, giving a theoretical SNR of approximately 3. This is far from ideal conditions, and causes incomplete and noisy reconstruction. On the other hand, the larger collection area of each compressed sensing mask allows us to image depth planes where the return signal is below one photon per pixel. \\ \begin{figure}[ht] \begin{center} \subfloat[]{\includegraphics[height=5cm]{chimnyPic_overlay.png} \label{chimFront}} \hfil \subfloat[]{\includegraphics[width=6cm]{chim_scale.eps}} \hfil \subfloat[]{\includegraphics[height=5cm]{chimnyPic_CS_overlay.png} \label{chimFrontCS}} \hfil \subfloat[]{\includegraphics[height=5cm]{chim_w_gis.png} \label{chimTop}} \end{center} \caption{(\textbf{a}) Picture taken through LiDAR sight with raster scan overlay. (\textbf{b}) Depth scale for LiDAR images. (\textbf{c}) Picture taken through LiDAR sight with compressed sensing overlay. (\textbf{d}) Overhead image with overlay.
The red line indicates the direction of view.} \end{figure} \begin{figure}[ht] \begin{center} \subfloat[]{\includegraphics[height=3.7cm]{Gold64CS2_512m.eps} \label{chim_512m}} \subfloat[]{\includegraphics[height=3.7cm]{Gold64CS2_256m.eps} \label{chim_256m}} \subfloat[]{\includegraphics[height=3.7cm]{Gold64CS2_64m.eps} \label{chim_64m}} \subfloat[]{\includegraphics[height=3.7cm]{chim_scale_ud.eps}} \end{center} \caption{Comparison of reconstruction by mask number. (\textbf{a}) Reconstruction using 512 masks, 12.5\% of the number of pixels. (\textbf{b}) Reconstruction using 256 masks (6.25\%). (\textbf{c}) Reconstruction using 64 masks (1.56\%). } \label{CSChimny} \end{figure} \subsection{Distant targets} Next we aimed our LiDAR at a more ambitious target: an environmental sculpture 380 meters away. At this distance we detect less than one photon per pixel returning from the target, even at low resolution. Under these conditions a raster scan is impossible without increasing the intensity beyond eye-safe levels for visible wavelengths. In this situation our expanded beam gives a significant advantage: we can illuminate the target with much higher intensity, while leaving the energy density incident on any passerby below the strictest safety threshold. Since compressed sensing collects light from half the view field for every mask, we receive approximately 50 photons per measurement, and measuring each mask 100 times gives sufficiently accurate measurements to reconstruct an image. Figure \ref{stonesFront} shows the sculpture with an overlay of the reconstructed image and depth scale for the overlay. Direct measurement of the area captured in the image and target distance corroborates the angle of view illuminated by the diffused beam. Figure \ref{stonesTop} shows an overhead view of the sculpture, with the depth map overlay and the depth scale. \begin{figure}[ht] \begin{center} \subfloat[]{\includegraphics[height=6cm]{stonesFront.jpeg} \label{stonesFront}} \hfil \subfloat[]{\includegraphics[height=6cm]{stones64_top.jpeg} \label{stonesTop}} \hfil \subfloat[]{\includegraphics[height=6cm]{stones_scale.eps}} \end{center} \caption{(\textbf{a}) Overhead image with overlay; the red line indicates the direction of view. (\textbf{b}) Picture taken through LiDAR sight with overlay. (\textbf{c}) Distance scale for overlays. } \end{figure} \section{Conclusion} We have demonstrated a LiDAR system without scanning optics that utilizes a compressed sensing scheme, a DMD component and a photon number resolving detector. This system acquires a 3D scene with a minimal number of returning signal photons by exploiting the extreme optical linear sensitivity of the SiPM detector and the efficiency of compressed sensing approaches. Two natural scenes have been acquired and compared to standard images, and in one case also to a raster scan. The strength of the presented approach is demonstrated by the exponential improvement in the number of required masks for reconstruction, and by the extremely weak detected returning signal. Further improvement is expected when better electronics and software are implemented. \subsection{Acknowledgments} The authors would like to thank Yair Weiss for many fruitful discussions and ideas.
\section{Introduction} Weak gravitational lensing by the large-scale structure of the Universe has now become a major tool of cosmology~\cite{revuesWL}, used to study questions ranging from the distribution of dark matter to tests of general relativity~\cite{testGR}. The standard lore~\cite{refbooks,stebbins} states that, in a homogeneous and isotropic spacetime, weak lensing effects induce a shear field which, to leading order, only contains $E$-modes, so that the measured level of $B$-modes is used as an important sanity check at the end of the data processing chain. The $B$-mode contribution to the observed shear can be related to intrinsic alignments~\cite{spinG}, Born correction and lens-lens coupling~\cite{lenslenssimu,cooray}, and gravitational lensing due to the redshift clustering of source galaxies~\cite{clustering}. From an observational point of view, the separation of $E$- and $B$-modes requires in principle a measurement of the shear correlation at zero separation~\cite{EBmix,EBseparation}, which can be brought down to percent-level accuracy, e.g. with CFHTLenS data~\cite{obs1}. This paper emphasizes that any deviation from local spatial isotropy, as assumed in the standard cosmological framework in which the background spacetime is described by a Friedmann-Lema\^{\i}tre (FL) universe, induces $B$-modes in the shear field. More importantly, and contrary to the above-mentioned effects, these $B$-modes arise on {\it all} cosmological scales. Therefore, any bound on their level can be used as a constraint on spatial isotropy. This is an important signature which, in principle, can be exploited in order to disentangle this geometrical origin of $B$-modes from other non-cosmological effects~\cite{next}. Since it is important for future surveys to predict the level at which these cosmological effects produce $B$-modes, we introduce in this work a new multipolar hierarchy for the weak-lensing shear, convergence and twist that does not assume a specific background geometry. This approach will allow us to pinpoint the origin of the $B$-modes and, in a future work, to assess the magnitude of the currently observed level of $B$-modes. This work is organized as follows: we start in \S~\ref{GeodesicBundle} by reviewing the basic formalism of weak gravitational lensing, which will also help us to set up the basic notations and conventions. In \S\ref{shear-convergence} we derive the evolution equations for the irreducible components of the Jacobi map, which are then used to derive the main multipole expansion hierarchy in \S~\ref{multipole}. We then show how the standard FL results are recovered (\S~\ref{FL}) and discuss the particular case of a Bianchi $I$ (B$I$) universe (\S~\ref{BI}). Finally, we present our conclusion in \S~\ref{conclusion}.\\ Throughout this paper we work with units in which $c=\hbar=1$. Spacetime indices are represented by Greek letters. Upper case Latin indices such as $\{I,J,K,\dots\}$ vary from 1 to 3 and represent spatial coordinates. Furthermore, components of vectors on a spatial triad (a set of three orthogonal spatial vectors which are normalized to unity) are denoted with lower case Latin indices $\{i,j,k,\dots\}$, whereas the screen projected (two-dimensional) components are represented by indices $\{a,b,c,\dots\}$ which vary from 1 to 2.
\section{Multipolar hierarchy for weak-lensing} \label{GeodesicBundle} \subsection{Description of the geodesic bundle}\label{sub2A} A crucial quantity for weak lensing is the electromagnetic wave-vector, $k_\mu=\partial_\mu w$, where $w$ is the phase of the wave. In the eikonal approximation, $k^\mu$ is a null vector ($k^\mu k_\mu=0$) satisfying a geodesic equation ($k^\nu\nabla_\nu k^\mu=0$). Moreover, if we assume that $\nabla_\mu\nabla_\nu w=\nabla_\nu\nabla_\mu w$ for any scalar function $w$ (torsion-free hypothesis), it follows that its integral curves $x^\mu(v)$ defined by $k^\mu(v) \equiv \mathrm{d} x^\mu/\mathrm{d} v$, where $v$ is the affine parameter along a given geodesic, are irrotational ($\nabla_{[\mu}k_{\nu]}=0$). Next, we consider a family of null (light-like) geodesics collectively characterized by $x^\mu(v,s)$, where $s$ labels each member of the family. We adopt the convention according to which $v=0$ at the observer and increases toward the source. There is a wave-vector for each geodesic, that is $k^\mu(v,s) \equiv \partial x^\mu /\partial v$, and the separation between the geodesics is encoded by the vector $\eta^\mu\equiv \partial x^\mu/\partial s$ connecting two neighboring geodesics (see Fig.~(\ref{f0})). Hence, we first derive the dynamics for a reference geodesic, and then the dynamics for the deviation vector. \begin{figure}[h] \includegraphics[width=0.9\columnwidth]{geodesic_deviation.pdf} \caption{Representation of two null geodesics of the light bundle. $\eta^a$ is the projection of $\eta^\mu$ in the plane spanned by the basis $\{e_a\}$. The dotted curve represents the worldline of the observer comoving with $u^\mu$. The geodesic bundle is thin so that its transverse dimension has not been depicted and it converges at the observer.} \label{f0} \end{figure} We suppose that the light-rays converge to a fundamental observer comoving with the four-velocity $u^\mu$ of matter, which is normalized such that $u^\mu u_\mu=-1$. This observer measures a redshift $z$ given by \begin{equation}\label{def_z} 1+z(v)\equiv \frac{(k_\mu u^\mu)_v}{(k_\mu u^\mu)_0} \end{equation} so that the energy of the incoming photon is \begin{equation} U=U_0(1+z)\,,\qquad U_0=(k^\mu u_\mu)_0\,. \end{equation} In this work we adopt the perspective of a photon going to the past, which means that in a local Lorentz frame, where $u_\mu=(-1,0,0,0)$, we have $k^0=\mathrm{d} t/\mathrm{d} v=-U$. Incidentally, this suggests the introduction of a ``reduced wave-vector'' through \begin{equation} \label{khat} \hat{k}^\mu = \frac{\mathrm{d} x^\mu}{\mathrm{d}\hat{v}}\equiv U^{-1} k^\mu \end{equation} in order to simplify our expressions \footnote{Since $\mathrm{d}\hat{v}=-\mathrm{d} t$, the new parameter $\hat{v}$ is simply the negative of the proper time $t$, reflecting our choice of perspective in which the observer sheds light on the source.} With each position $x^\mu$ along a given geodesic we can associate a direction vector $\gr{n}$, with components $n^\mu$, defined from the reduced wave-vector through \begin{equation} \hat{k}^\mu = -u^\mu + n^\mu\,, \end{equation} with \begin{equation} u^\mu n_\mu=0\,,\quad n_\mu n^\mu =1\,. \end{equation} At the observer, $\gr{n}^o\equiv\gr{n}(v=0)$ is the spacelike vector pointing along the line of sight\footnote{Note that our definition of $n^\mu$ differs by a minus sign from that of Ref.~\cite{clarkson}}.
However, since we now have $$ \frac{\mathrm{d} \hat v}{\mathrm{d} v}=U, $$ it follows that we can either choose $(\gr{n}^o,v)$ or $(\gr{n}^o,\hat v)$ as independent sets of variables to parameterize the geodesic; these correspond to two different slicings of the past lightcone. As we shall see below, the use of $\hat v$ simplifies the derivation of the multipolar expansion for the weak-lensing observables. At a given point of the geodesic, it is necessary to add two vectors to ${\bm u}$ and ${\bm n}$ in order to obtain a complete basis of the tangent space. We choose these two vectors $\gr{n}_a$, with $a=\{1,2\}$, to be orthonormalized and orthogonal to ${\bm u}$ and ${\bm n}$, that is, they are defined by \begin{equation} \label{basis2d} n_{a}^\mu n_{b\mu}=\delta_{ab}\,,\quad n_{a}^\mu u_{\mu}=n_{a}^\mu n_{\mu}=0\,,\quad(a=1,2)\,. \end{equation} Since $\gr{n}$ and $\gr{n}_a$ comprise a three-dimensional orthonormal basis, we can simplify the notation by defining $\gr{n}_3\equiv \gr{n}$ so that we can collectively write $\gr{n}_i\equiv \lbrace n^\mu_i\rbrace_{i=1\ldots3}$. Note that at the observer we can again define $\gr{n}_i^o\equiv \gr{n}_i(\hat v=0)$ with a remaining rotation freedom around $\gr{n}^o$ for the choice of $\gr{n}^o_a$. We now introduce the screen projector tensor \begin{equation} S_{\mu\nu}\equiv g_{\mu\nu}+u_\mu u_\nu -n_\mu n_\nu, \end{equation} which projects any tensor onto the two-dimensional surface orthogonal to the line of sight. Thanks to the orthogonality relations (\ref{basis2d}), the basis can be parallel transported along the null geodesic as~\cite{Lewis2006} \begin{equation}\label{e:prop_n} S_{\mu \sigma} k^\nu \nabla_\nu n_a^\sigma=0. \end{equation} At $\hat v=0$, each $\gr{n}^o$ of the geodesic bundle can be associated with a spherical basis, and this can be used to fix the rotational freedom. Indeed, for each $\gr{n}^o$ there will be a unique choice of $\gr{n}^o_1(\gr{n}^o)$ and $\gr{n}^o_2(\gr{n}^o)$ if we set $\{\gr{e}^o_r,\gr{e}^o_\theta,\gr{e}^o_\varphi\}=\{\gr{n}^o,\gr{n}_1^o,\gr{n}_2^o\}$. The integration of Eq.~(\ref{e:prop_n}) then allows one to define this basis at each point on the past lightcone, i.e. to determine $\gr{n}_i(\gr{n}^o,\hat v)$, or, equivalently, $\{\gr{e}_r,\gr{e}_\theta,\gr{e}_\varphi\}(\gr{n}^o,\hat v)$ everywhere. This prescription emphasizes the importance of introducing a reference triad as a way of identifying these projection effects; see Fig.~\ref{f1}. At this point it is convenient to introduce the helicity basis defined as \begin{equation} \label{e:helicity} \gr{e}_\pm =\gr{n}_\pm\equiv \frac{1}{\sqrt{2}}\left(\gr{e}_\theta \mp \mathrm{i} \gr{e}_\varphi\right)=\frac{1}{\sqrt{2}}\left(\gr{n}_1 \mp \mathrm{i} \gr{n}_2\right). \end{equation} Their components in the $\gr{n}_a$ basis read simply \begin{equation} n_\pm^a=\gr{n}_\pm.\gr{n}_a=\frac{1}{\sqrt{2}}(\delta_1^a \mp \mathrm{i} \delta_2^a) \end{equation} and are, by construction, constant. We now note that any event on the lightcone is uniquely specified by $(\gr{n}^o,\hat v)$, i.e. it is of the form $x^\mu(\gr{n}^o,\hat v)$. This means that any local quantity $X(x^\mu)$ evaluated on the lightcone can be seen as a function $X(\gr{n}^o,\hat v)$. The redshift defined in Eq.~(\ref{def_z}) is also a function of $(\gr{n}^o,\hat v)$, and $U$ propagates as (see e.g.
Ref.~\cite{clarkson}) \begin{equation} \label{eqHparallel} \frac{\mathrm{d}\ln U}{\mathrm{d}\hat v} = H_\parallel(\gr{n}^o,\hat v) \end{equation} where the parallel Hubble expansion rate along the line of sight is defined by \begin{eqnarray} H_\parallel(\gr{n}^o,\hat v)&\equiv&\hat k^\mu \hat k^\nu\nabla_\mu u_\nu\,. \end{eqnarray} Using the standard $1+3$ decomposition of $\nabla_\mu u_\nu$, it takes the general form \begin{eqnarray}\label{e.Hperp} H_\parallel(\gr{n}^o,\hat v) &=& \frac13 \Theta +\hat\sigma _{\mu\nu}n^\mu n^\nu+ A_\mu n^\mu\,, \end{eqnarray} where $\Theta$, $\hat\sigma_{\mu\nu}$ and $A^\mu$ are the expansion, shear and acceleration of the flow $u^\mu$. All these quantities are evaluated on $[x^\mu(\gr{n}^o,\hat v)]$ and are thus functions of $\gr{n}(\gr{n}^o,\hat v)$ on the past lightcone. \begin{figure*}[!htb] \begin{center} \includegraphics[width=2.0\columnwidth]{lensfig.pdf} \caption{Any position on the past lightcone can be considered as $x^\mu(\gr{n}^o,\hat v)$. While quantities such as ${\cal E}_{\mu\nu}$ are local, quantities such as ${\cal W}_{ab}$ depend on the local basis at $x^\mu(\gr{n}^o,\hat v)$ via the projection on $n_{\langle a}^\mu n_{b\rangle}^\nu$; see Eq.~(\ref{proj_W}). Observational quantities are however defined in terms of $\gr{n}^o$, so that one needs to relate the basis $\lbrace\gr{e}_r,\gr{e}_\theta,\gr{e}_\varphi\rbrace$ at $(\gr{n}^o,\hat v)$ and at $\hat v=0$. The relation $\gr{n}(\gr{n}^o,\hat v)$ induces ``projection effects'' and a non-local relation between quantities like ${\cal E}_{\ell m}$ and ${\cal W}_{ab}$. Once a background spacetime is chosen, its symmetries simplify the comparison. For instance, a B$I$ spacetime provides a natural triad of Killing vectors associated with its principal axes. One can use this ``global reference'' to relate the local $S^2$ at $x^\mu(\gr{n}^o,\hat v)$ to the observer's $S^2$ by comparing them in the reference $S^2$.} \label{f1} \end{center} \end{figure*} \subsection{Shear, twist and convergence propagation}\label{shear-convergence} The purpose of this section is to derive an equation governing the shear, twist and convergence of a light-ray bundle without specifying the spacetime structure. The evolution of the deviation vector $\eta^\mu$ is given by the geodesic {\it deviation} equation \begin{equation} \frac{\mathrm{d}^2\eta^\mu}{\mathrm{d} v^2} = {R^\mu}_{\nu\alpha\beta} k^\nu k^\alpha\eta^\beta\,, \end{equation} where ${R^\mu}_{\nu\alpha\beta}$ is the Riemann tensor. This equation can be rewritten in terms of its components on the screen basis $\lbrace \gr{n}_a\rbrace$ as~\cite{refbooks} \begin{equation}\label{e.gde} \frac{\mathrm{d}^2\eta_a}{\mathrm{d} v^2} = {\cal R}_{ab}\eta^b\,, \end{equation} where \begin{equation} {\cal R}_{ab}\equiv{R}_{\mu\nu\alpha\beta}k^\nu k^\alpha n_a^\mu n_b^\beta \end{equation} is the screen projected Riemann tensor. The linearity of Eq.~(\ref{e.gde}) implies that $$ \eta^a(v) = {\cal D}^a_b(v) \left(\frac{{\mathrm{d}\eta^b}}{{\mathrm{d} v}}\right)_{v=0}\,, $$ where the Jacobi map ${\cal D}_{ab}$ satisfies the Sachs equation~\cite{sachs,refbooks} \begin{equation}\label{gde2} \frac{\mathrm{d}^2}{\mathrm{d} v^2}{\cal D}^a_b={\cal R}^a_c{\cal D}^c_b\,, \end{equation} subject to the initial conditions \begin{equation} \label{initialconditions} {\cal D}^a_b(0) =0\,,\quad \frac{\mathrm{d}{\cal D}^a_b}{\mathrm{d} v}(0)=\delta^a_b\,. \end{equation} In order to proceed, we need to decompose both ${\cal D}_{ab}$ and ${\cal R}_{ab}$ in their irreducible pieces.
We start by decomposing the screen-projected Riemann tensor into a trace and a traceless part as \begin{equation} {\cal R}_{ab}= U^2\left({\cal R}I_{ab} + {\cal W}_{ab}\right) \end{equation} where ${\cal R}$ and ${\cal W}_{ab}$ are related to the Ricci ($R_{\mu\nu}$) and Weyl ($C_{\mu\rho\sigma\nu}$) tensors through: \begin{equation} {\cal R}\equiv - \frac12 R_{\mu\nu}\hat k^\mu \hat k^\nu\,,\quad {\cal W}_{ab}\equiv C_{\mu\rho\sigma\nu}\hat k^\rho \hat k^\sigma n_a^\mu n_b^\nu\,, \end{equation} and where \begin{equation} I_{ab}\equiv S_{\mu\nu}n^\mu_a n^\nu_b \end{equation} is the identity matrix of the screen space. Note again that ${\cal W}_{ab}$, as well as ${\cal R}$ and ${\cal R}_{ab}$, are evaluated on the central geodesic and thus ${\cal W}_{ab}\left[x^\mu(\gr{n}^o,\hat v)\right]={\cal W}_{ab}(\gr{n}^o,\hat v)$. In terms of the electric and magnetic parts of the Weyl tensor, given respectively by \cite{Ellis:1998ct} \begin{equation} {\cal E}_{\mu\nu}\equiv C_{\mu\rho\nu\sigma} u^\rho u^\sigma\,,\quad {\cal B}_{\mu\nu}\equiv \frac{1}{2}\varepsilon_{\mu \alpha \beta \sigma}u^\sigma C_{\nu\rho}^{\phantom{\mu}\phantom{\rho}\alpha \beta} u^\rho\,, \end{equation} the projected tensor ${\cal W}_{ab}$ becomes \begin{equation}\label{proj_W} {\cal W}_{ab}(\gr{n}^o,\hat v)= -2 n_{\langle a}^\mu n_{b\rangle}^\nu \left[{\cal E}_{\mu\nu} + {\cal B}_\mu^{\phantom{\nu}\sigma} \epsilon_{\sigma \nu}(\gr{n}) \right]_{\tiny\left\vert \begin{array}{l}x^\mu(\gr{n}^o,\hat v)\\ \gr{n}(\gr{n}^o,\hat v)\end{array}\right.} \end{equation} In the expression above, $\langle\rangle$ stands for the traceless part with respect to $I^{ab}$. $\epsilon_{\mu\nu}(\gr{n})$ is the antisymmetric tensor in the projected space and is defined as \begin{equation} \epsilon_{\mu\nu}(\gr{n}) \equiv u^\beta \varepsilon_{\beta \mu\nu \alpha } n^\alpha\,. \end{equation} Now, ${\cal W}_{ab}$ being a spin-$2$ field, it can be decomposed in the helicity basis~(\ref{e:helicity}) as \begin{equation} \label{e:Rpm} {\cal W}_{ab}(\gr{n}^o,\hat v) \equiv - 2 \sum_{ \lambda=\pm}{\cal W}^\lambda (\gr{n}^o,\hat v) {n}^{\lambda}_a {n}^{\lambda}_b. \end{equation} This decomposition emphasizes once more that the two components ${\cal W}^\lambda$ are functions of $(\gr{n}^o,\hat v)$ alone, because they are evaluated on the lightcone. Recall that the $n_a^\lambda$ are constant so that we can use either $\gr{n}$ or $\gr{n}^o$ in Eq.~(\ref{e:Rpm}). We now decompose the Jacobi map in terms of a convergence $\kappa$, a twist $V$ and a traceless shear $\gamma_{ab}$ as \begin{equation} {\cal D}_{ab} \equiv \kappa I_{ab} + V \epsilon_{ab} + \gamma_{ab}\,, \end{equation} where \[ \epsilon_{ab}= 2\mathrm{i} n^-_{[a}n^+_{b]}\,. \] All these quantities are defined on our past lightcone so that we can also think of them as functions of $(\gr{n}^o,\hat v)$. The shear, being also a spin-2 field, is naturally decomposed similarly as \begin{equation} \label{e:gammapm} {\gamma}_{ab}(\gr{n}^o,\hat v) \equiv \sum_{\lambda=\pm}{\gamma}^\lambda (\gr{n}^o,\hat v) {n}^{\lambda}_a {n}^{\lambda}_b\,.
\end{equation} Finally, by inserting the decompositions~(\ref{e:Rpm}-\ref{e:gammapm}) in the Sachs equation~(\ref{gde2}) we find the desired equation of evolution \begin{eqnarray}\label{e.evo1} \left(\!\frac{\mathrm{d}^2}{\mathrm{d} \hat v^2}+H_\parallel\frac{\mathrm{d}}{\mathrm{d}\hat v} - {\cal R}\!\right) \left(\!\!\begin{array}{c}\kappa\\ \mathrm{i} V\\ \gamma^\pm\end{array}\!\!\right)= -2 \left(\!\!\begin{array}{c} {\cal W}^{(-}\gamma^{+)} \\ {\cal W}^{[-}\gamma^{+]}\\ {\cal W}^\pm(\kappa\pm\mathrm{i} V) \end{array}\!\!\right). \end{eqnarray} Note that, in practice, the integration of this system requires the evaluation of the past lightcone structure in order to determine $\gr{n}_i(\gr{n}^o,\hat v)$ and then $H_\parallel(\gr{n}^o,\hat v)$, ${\cal R}(\gr{n}^o,\hat v)$ and ${\cal W}^\pm(\gr{n}^o,\hat v)$. \subsection{Multipole expansion} \label{multipole} Equation (\ref{e.evo1}) is composed of scalars ($\kappa$, $V$, ${\cal R}$ and $H_\parallel$) and spin-$2$ fields ($\gamma^\pm$ and ${\cal W}^\pm$) defined on the sphere. The former can be naturally decomposed in a basis of spherical harmonics as \begin{eqnarray} \kappa(\gr{n}^o,\hat{v})&=&\sum_{\ell,m}\kappa_{\ell m}(\hat{v})Y_{\ell m}(\gr{n}^o)\\ V(\gr{n}^o,\hat{v})&=&\sum_{\ell,m}V_{\ell m}(\hat{v})Y_{\ell m}(\gr{n}^o)\\ {\cal R}(\gr{n}^o,\hat{v})&=&\sum_{\ell,m}{\cal R}_{\ell m}(\hat{v})Y_{\ell m}(\gr{n}^o)\\ H_\parallel(\gr{n}^o,\hat{v})&=&\sum_{\ell,m}h_{\ell m}(\hat{v})Y_{\ell m}(\gr{n}^o) \end{eqnarray} The latter, being spin-$2$ fields on the sphere, can be expanded on a basis of spin-weighted spherical harmonics~\cite{Goldberg1967} as \begin{eqnarray}\label{EandBfromX} {\cal W}^\pm (\gr{n}^o,\hat v) &=& \sum_{\ell,m} \left[{\cal E}_{\ell m}(\hat v)\pm \mathrm{i} {\cal B}_{\ell m}(\hat v) \right]Y_{\ell m}^{\pm 2}(\gr{n}^o)\,,\\ {\gamma}^\pm (\gr{n}^o,\hat v) &=& \sum_{\ell,m} \left[{E}_{\ell m}(\hat v)\pm \mathrm{i} {B}_{\ell m}(\hat v) \right]Y_{\ell m}^{\pm 2}(\gr{n}^o). \end{eqnarray} Note that $E$-modes are those having parity $(-1)^\ell$ while $B$-modes have parity $(-1)^{\ell+1}$~\cite{Pontzen2007}. It is important to keep in mind that we are adopting an observer-based point of view so that all quantities are expressed in terms of $(\gr{n}^o,\hat v)$. In general, $\gr{n}(\gr{n}^o,\hat v)\not=\gr{n}^o$, with the obvious exception of e.g. FL spacetimes or for an observer at the center of symmetry of a Lema\^{\i}tre-Tolman spacetime. Part of the difficulty is thus contained in the determination of these coefficients, which include projection effects from the geodesic structure. When inserting these decompositions in Eq.~(\ref{e.evo1}), products of spherical harmonics will appear on the r.h.s. They can be simplified using standard relations between spin-weighted spherical harmonics (see Appendix~\ref{AppA}). 
It follows that, in terms of multipoles, the equations of evolution for the convergence, twist and shear take the following general form \begin{widetext} {\small \begin{eqnarray} \frac{\mathrm{d}^2 E_{\ell m} }{\mathrm{d} \hat v^2} &=& \,{}^{2}C^{m m_1 m_2}_{\ell \ell_1 \ell_2} \left[\left({\cal R}_{\ell_1 m_1}-h_{\ell_1 m_1}\frac{\mathrm{d}}{\mathrm{d}\hat v}\right) \left( \delta_L^{+}E_{\ell_2 m_2} +\mathrm{i}\delta_L^{-}B_{\ell_2 m_2} \right) -2\kappa_{\ell_1 m_1}\left(\delta_L^{+}{\cal E}_{\ell_2m_2}+\mathrm{i}\delta_L^{-}{\cal B}_{\ell_2m_2}\right)\right.\nonumber\\ &&\left.+2V_{\ell_1m_1}\left(- \mathrm{i} \delta_L^{-}{\cal E}_{\ell_2m_2}+\delta_L^{+}{\cal B}_{\ell_2m_2}\right) \right]\label{mastermultipoles1}\\ \frac{\mathrm{d}^2 B_{\ell m} }{\mathrm{d} \hat v^2} &=& \,{}^{2}C^{m m_1 m_2}_{\ell \ell_1 \ell_2} \left[\left({\cal R}_{\ell_1 m_1}-h_{\ell_1 m_1}\frac{\mathrm{d}}{\mathrm{d}\hat v}\right)\left( \delta_L^{+}B_{\ell_2 m_2} -\mathrm{i}\delta_L^{-}E_{\ell_2 m_2} \right)-2{\kappa}_{\ell_1 m_1} \left( \delta_L^{+}{\cal B}_{\ell_2 m_2} -\mathrm{i}\delta_L^{-} {\cal E}_{\ell_2 m_2} \right) \right.\nonumber\\ &&\left.-2V_{\ell_1 m_1} \left( \delta_L^{-}\mathrm{i} {\cal B}_{\ell_2 m_2} +\delta_L^{+} {\cal E}_{\ell_2 m_2} \right) \right]\label{mastermultipoles2}\\ \frac{\mathrm{d}^2 \kappa_{\ell m} }{\mathrm{d} \hat v^2} &=& \left\{\,{}^{0}C^{m m_1 m_2}_{\ell \ell_1 \ell_2} \left( {\cal R}_{\ell_1 m_1} \kappa_{\ell_2 m_2} - h_{\ell_1 m_1} \frac{\mathrm{d} \kappa_{\ell_2 m_2}}{\mathrm{d}\hat v} \right)\right.\nonumber\\ && \left.{-2(-1)^{m_1} {}^{2}C^{-m_2 -m m_1}_{\ell_2\;\ell\;\ell_1}}\left[ \delta_L^{+}({E}_{\ell_1 m_1}{\cal E}_{\ell_2 m_2}+{B}_{\ell_1 m_1}{\cal B}_{\ell_2 m_2}) +\mathrm{i}\delta_L^{-} ({B}_{\ell_1 m_1}{\cal E}_{\ell_2 m_2}-{E}_{\ell_1 m_1}{\cal B}_{\ell_2 m_2})\right]\right\}\label{mastermultipoles3} \end{eqnarray} \begin{eqnarray} \frac{\mathrm{d}^2 V_{\ell m} }{\mathrm{d} \hat v^2} &=& \left\{\,{}^{0}C^{m m_1 m_2}_{\ell \ell_1 \ell_2} \left( {\cal R}_{\ell_1 m_1} V_{\ell_2 m_2} - h_{\ell_1 m_1} \frac{\mathrm{d} V_{\ell_2 m_2}}{\mathrm{d}\hat v} \right)\right.\nonumber\\ && \left.+2(-1)^{m_1} {}^{2}C^{-m_2 -m m_1}_{\ell_2\;\ell\;\ell_1}\left[\delta_L^{-}\mathrm{i}({E}_{\ell_1 m_1}{\cal E}_{\ell_2 m_2}+{B}_{\ell_1 m_1}{\cal B}_{\ell_2 m_2}) -\delta_L^{+} ({B}_{\ell_1 m_1}{\cal E}_{\ell_2 m_2}-{E}_{\ell_1 m_1}{\cal B}_{\ell_2 m_2})\right]\right\}\label{mastermultipoles4} \end{eqnarray}} \end{widetext} where \begin{equation} \delta_L^{\pm}\equiv [1\pm(-1)^L]/2\,,\quad L=\ell+\ell_1+\ell_2 \end{equation} and an implied sum over $\ell_1$, $\ell_2$, $m_1$, and $m_2$ is understood. This multipolar hierarchy for weak lensing, which does not rely on a particular background spacetime, nor on any perturbative expansion, has never been derived before and sets the basis for general studies of the constraints on anisotropy and inhomogeneity from the weak-lensing $B$-modes. As soon as the spacetime has a non-vanishing Weyl tensor, $E$- and $B$-modes are generated due to the coupling of the Weyl tensor to the convergence and twist. It shares some similarities with the Boltzmann hierarchy for the cosmic microwave background (see e.g. Refs.~\cite{Pontzen2007,Hu2000}) but one needs to keep in mind that ${\cal R}_{\ell m}$, $h_{\ell m}$, ${\cal E}_{\ell m}$, ${\cal B}_{\ell m}$ are non-local quantities since they have to be evaluated on the geodesic.
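We also note that Eq.~(\ref{e.evo1}) itself can be integrated directly along a single line of sight once $H_\parallel(\gr{n}^o,\hat v)$, ${\cal R}(\gr{n}^o,\hat v)$ and ${\cal W}^\pm(\gr{n}^o,\hat v)$ have been evaluated on the geodesic. A minimal numerical sketch (in Python with NumPy; the functions \texttt{Hpar}, \texttt{Rcal} and \texttt{Wpm} are user-supplied inputs encoding precisely the model-dependent lightcone structure discussed above, and the unit normalization of the initial derivative absorbs the constant $U_0$):
\begin{verbatim}
# Minimal sketch: RK4 integration of Eq. (e.evo1) for one line of sight.
import numpy as np

def rhs(v, y, Hpar, Rcal, Wpm):
    # y = (kappa, iV, gamma+, gamma-, and their first derivatives)
    kap, iV, gp, gm, dkap, diV, dgp, dgm = y
    Wp, Wm = Wpm(v)
    lin = lambda X, dX: Rcal(v) * X - Hpar(v) * dX
    return np.array([dkap, diV, dgp, dgm,
        lin(kap, dkap) - (Wm * gp + Wp * gm),   # symmetrized source
        lin(iV,  diV)  - (Wm * gp - Wp * gm),   # antisymmetrized source
        lin(gp,  dgp)  - 2 * Wp * (kap + iV),
        lin(gm,  dgm)  - 2 * Wm * (kap - iV)], dtype=complex)

def integrate(v_max, nsteps, Hpar, Rcal, Wpm):
    # initial conditions (initialconditions): D(0) = 0, D'(0) = Id
    y = np.array([0, 0, 0, 0, 1, 0, 0, 0], dtype=complex)
    h, v = v_max / nsteps, 0.0
    for _ in range(nsteps):                 # classical RK4 step
        k1 = rhs(v, y, Hpar, Rcal, Wpm)
        k2 = rhs(v + h/2, y + h*k1/2, Hpar, Rcal, Wpm)
        k3 = rhs(v + h/2, y + h*k2/2, Hpar, Rcal, Wpm)
        k4 = rhs(v + h, y + h*k3, Hpar, Rcal, Wpm)
        y, v = y + h*(k1 + 2*k2 + 2*k3 + k4)/6, v + h
    return y   # kappa, iV, gamma+/- and their derivatives at v_max
\end{verbatim}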
\section{Applications to spatially homogeneous universes} \subsection{Standard FL case}\label{FL} In order to illustrate the formalism we consider the standard case of a (flat) FL spacetime with linear perturbations. At the background level, the metric of the FL spacetime takes the simple form \begin{equation} \mathrm{d} s^2 =-\mathrm{d} t^2 + a^2(t)\delta_{IJ}\mathrm{d} x^I \mathrm{d} x^J \end{equation} This spacetime enjoys 3 translational Killing vectors $\lbrace\gr{e}_I\rbrace_{I\in\lbrace x,y,z\rbrace}\equiv \lbrace\frac{\partial}{\partial x^I}\rbrace_{I\in\lbrace x,y,z\rbrace}$ which define {\em everywhere} a natural Cartesian basis. By normalizing these vectors, we can then define a triad of vectors $\gr{e}_i$ whose components are $e_i^{\,I}=\delta_i^I/a$ (and their associated 1-forms $\gr{e}^i$ whose components are $e^i_{\,I}=\delta^i_{I} a$), that is a set of three orthonormal space-like vectors (and forms) that can be used as a global Euclidean basis. The set of vectors $\gr{n}_i^o$, which was a priori only defined at the observer's position, can then be defined everywhere by imposing that their components $n_i^{o\,j}\equiv \gr{n}_i^o.\gr{e}^j$ in this reference basis remain the same everywhere. This enables one to compare ${\bm n}_i (\gr{n}^o,\hat v)$ to ${\bm n}^o_i$ even though these sets of vectors are defined first at two different points of spacetime, as illustrated in the right part of Fig.~\ref{f1}. At the background level, the Weyl tensor vanishes (i.e. ${\cal E}_{\mu\nu}=0$ and ${\cal B}_{\mu\nu}=0$ are at least of order 1 in perturbations) and the Ricci scalar, ${\cal R}^{(0)}$, depends only on time. For this spacetime $\gr{n}(\gr{n}^o,\hat v)=\gr{n}^o$ for all $\hat{v}$, so that the only nonzero multipolar coefficient $h^{(0)}_{\ell m}$ is the monopole \begin{equation} h^{(0)}_{00}=H\equiv \frac{\dot a}{a}\, \end{equation} where the dot refers to a derivative with respect to $t$. From the expression above and the fact that $\mathrm{d}\hat{v}=-\mathrm{d} t$, we find from Eq.~(\ref{eqHparallel}) that $U\propto a^{-1}$. The well-known result $1+z=a_0/a$ then follows from Eq.~(\ref{def_z}). Moreover, since ${\cal E}^{(0)}_{\ell m}={\cal B}^{(0)}_{\ell m}=0$, it follows from Eqs. (\ref{mastermultipoles3}) and (\ref{mastermultipoles4}) that $\kappa^{(0)}_{00}$ and $V^{(0)}_{00}$ satisfy the same second-order homogeneous equation of the form \begin{equation} \frac{\mathrm{d}^2 X^{(0)}_{00} }{\mathrm{d} \hat v^2} = {\cal R}_{00} X^{(0)}_{00} - H \frac{\mathrm{d} X^{(0)}_{00}}{\mathrm{d}\hat v}\,. \end{equation} where $X^{(0)}_{00}$ stands for either $\kappa^{(0)}_{00}$ or $V^{(0)}_{00}$. The initial conditions~(\ref{initialconditions}) then lead to a homogeneous $\kappa^{(0)}_{00}$, given by the usual angular distance, and a vanishing twist, so that \begin{equation} \kappa^{(0)}_{00}=D_A\,, \qquad V^{(0)}_{00}=0\,. \end{equation} Then, one concludes that \begin{equation} E_{\ell m}^{(0)}=B_{\ell m}^{(0)}=0. \end{equation}\\ At first order in the perturbations, the perturbed metric with only scalar perturbations reads in the Newton gauge \begin{equation} \mathrm{d} s^2 =-(1+2 \Phi)\mathrm{d} t^2 + a^2(t)(1-2\Psi)\delta_{IJ}\mathrm{d} x^I \mathrm{d} x^J\,, \end{equation} where $\Phi$ and $\Psi$ are the two Bardeen potentials. At first order, the projected tensor ${\cal R}_{ab}$ is of the form \cite{Lewis2006} \begin{equation} {\cal R}^{(1)}_{ab}=-D_{a}D_b(\Phi+\Psi) \end{equation} where $D_a$ is the covariant derivative on the 2-sphere. It follows that \begin{equation} {\cal B}^{(1)}_{ab}=0.
\end{equation} In the Born approximation (i.e. $\gr{n}(\gr{n}^o,\hat v)=\gr{n}^o$), only $h^{(0)}_{00}\not=0$, so that the r.h.s. of Eqs.~(\ref{mastermultipoles1}-\ref{mastermultipoles2}) involves only $\,{}^{2}C^{m 0 m_2}_{\ell 0 \ell_2}$. Thus $\ell_2=\ell$ and $L$ is even (i.e. $\delta_L^-=0$). As a conclusion, in Eq.~(\ref{mastermultipoles1}) for the propagation of the $E$-modes, the only remaining term on the r.h.s. is \begin{equation} \left ({\cal R}^{(0)}_{00}-h^{(0)}_{00}\frac{\mathrm{d}}{\mathrm{d}\hat v}\right)E_{\ell m}^{(1)}-2 \kappa^{(0)}_{00}{\cal E}_{\ell m}^{(1)} \end{equation} while in Eq.~(\ref{mastermultipoles2}) for the propagation of the $B$-modes it is \begin{equation} \left({\cal R}^{(0)}_{00}-h^{(0)}_{00}\frac{\mathrm{d}}{\mathrm{d}\hat v}\right)B_{\ell m}^{(1)}. \end{equation} So we see that only $E$-modes are sourced, while $B$-modes would need to be initially non-zero to be non-vanishing today, \begin{equation} E_{\ell m}^{(1)}\not=0\,, \qquad B_{\ell m}^{(1)}=0\,. \end{equation} Indeed, first-order vector and tensor modes would generate $B$-modes since then ${\cal B}^{(1)}_{ab}\not=0$. Equation~(\ref{mastermultipoles3}) for the convergence has r.h.s. \begin{equation} \left({\cal R}^{(0)}_{00}-h^{(0)}_{00}\frac{\mathrm{d}}{\mathrm{d}\hat v}\right)\kappa_{\ell m}^{(1)}+{\cal R}_{\ell m}^{(1)}\kappa^{(0)}_{00}, \end{equation} as usual,\footnote{The ``standard'' convergence and shear are $-\kappa^{(1)}/\kappa^{(0)}$ and $\gamma_{ab}^{(1)}/\kappa^{(0)}$ in our notations.} since the other terms in Eq.~(\ref{mastermultipoles3}) are at least of second order in the perturbations. For the twist, the argument is similar but ${\cal R}_{\ell m}^{(1)}V^{(0)}_{00}=0$, and the initial conditions~(\ref{initialconditions}) imply $V^{(1)}_{\ell m}=0$. So, in conclusion, \begin{equation} \kappa_{\ell m}^{(1)}\not=0\,, \qquad V_{\ell m}^{(1)}=0\,. \end{equation} At higher order, ${\cal E}_{ab}$ and ${\cal B}_{ab}$ are non-vanishing (note that one cannot simply drop ${\cal B}_{\ell m}$ in the hierarchy, even for pure scalar modes), which leads to $B$-modes as well as twist. Moreover, projection effects and couplings induced by $h_{\ell m}$ need to be included; see Ref.~\cite{ordre2} for the case of second-order perturbations. The absence of $B$-modes at first order in perturbations is due to the fact that \begin{enumerate} \item ${\cal B}^{(1)}_{\ell m}=0$ for scalar modes, \item at this order we can work in the Born approximation. \end{enumerate} This latter point is extremely important since otherwise, even if ${\cal B}_{\mu\nu}=0$, the dependence $\gr{n}(\gr{n}^o,\hat v)$ would generate a non-vanishing ${\cal B}_{ab}$~\cite{cooray}. Indeed, in Eqs.~(\ref{mastermultipoles1}-\ref{mastermultipoles3}) part of the difficulty lies in the determination of the coefficients ${\cal R}_{\ell m}$, $h_{\ell m}$, ${\cal E}_{\ell m}$ and ${\cal B}_{\ell m}$, which depend on the whole geodesic structure, as we shall now illustrate. \subsection{Example of a Bianchi $I$ spacetime} \label{BI} We now consider the case of a spatially homogeneous but anisotropic universe described by a Bianchi $I$ spacetime, for which the metric takes the form \begin{equation} \mathrm{d} s^2 =-\mathrm{d} t^2 + a^2(t)\gamma_{IJ}(t)\mathrm{d} x^I \mathrm{d} x^J \end{equation} where the coordinates have been chosen so as to diagonalize $\gamma_{IJ}(t)$.
This solution is spatially homogeneous and the spatial shear \begin{equation} \sigma_{IJ}\equiv \frac12\frac{\mathrm{d}{\gamma}_{IJ}}{\mathrm{d} t} \end{equation} characterizes the spatial anisotropy; here $a^2(t)\gamma_{IJ}$ is the spatial metric, $a(t)$ is the volume-averaged scale factor and $\Theta=3H\equiv 3\dot a /a$ (see Refs.~\cite{PPU} for notations and properties). It follows that the kinematical quantities entering $H_\parallel$ in Eq.~(\ref{e.Hperp}) are \begin{equation} \Theta = 3H\,, \qquad \sigma _{IJ}\not=0\,,\qquad A_\mu=0\,. \end{equation} Similarly to the FL case, this spacetime enjoys 3 Killing vectors $\lbrace\gr{e}_I\rbrace_{I\in\lbrace x,y,z\rbrace}\equiv \lbrace\frac{\partial}{\partial x^I}\rbrace_{I\in\lbrace x,y,z\rbrace} $ which define {\em everywhere} a natural Cartesian basis. Normalizing these vectors, we can then also define a triad of vectors $\gr{e}_i$ that can be used as a global Euclidean basis. As in the FL case, the set of vectors $\gr{n}_i^o$ can then be defined everywhere by imposing that their components in this reference basis $\gr{e}_i$ remain the same, in order to allow the comparison of ${\bm n}_i (\gr{n}^o,\hat v)$ to ${\bm n}^o_i$. However, contrary to the FL case, one has to consider ({\em i}) the non-vanishing background electric Weyl tensor, \begin{equation} {\cal E}^{(0)}_{IJ} = H \sigma_{IJ} + \frac{1}{3} \sigma^2 \gamma_{IJ} -\sigma_{IK}\sigma^K_J \end{equation} while the magnetic part is identically null, \begin{equation} {\cal B}_{IJ}^{(0)}=0\,, \end{equation} and ({\em ii}) the fact that at the background level $\gr{n}_i\not=\gr{n}^o_i$ (except in the particular case of geodesics along one of the three proper axes), which induces projection effects so that $h_{\ell m}^{(0)}\not=0$. The triad $\gr{n}_i(\gr{n}^o,\hat v)$ is related to the reference triad $\gr{n}_i^o$ by a rotation defined by three Euler angles as \begin{eqnarray}\label{Euler1} \gr{n}_i(\gr{n}^o,\hat v)&=&R_i^{\,\,j}(\alpha,\beta,\gamma)\gr{n}^o_j\nonumber\,,\\ &=&J_{\gr{n}_3^o}(\gamma)_i^{\,\,j} J_{\gr{n}^o_2}(\beta)_j^{\,\,k} J_{\gr{n}_3^o}(\alpha)_k^{\,\,l} \,\gr{n}_l^o \end{eqnarray} where the Euler angles are also functions of $(\gr{n}^o,\hat v)$. The determination of $\alpha$, $\beta$ and $\gamma$ requires the integration of the geodesic equation in the Bianchi spacetime. Then, for a typical tensor ${T}_{\mu\nu}$ at an event $x^\mu$, its projection orthogonally to $\gr{n}$, i.e. its components ${T}^\pm[x^\mu,\gr{n}_i]\equiv {T}^\pm\left[x^\mu(\gr{n}^o,\hat v)\right]$ in the helicity basis $\gr{n}_\pm$, can be related to its projection at the same event $x^\mu$, but orthogonally to $\gr{n}^o$, with components ${T}_o^\pm[x^\mu,\gr{n}^o_i]$ in the helicity basis $\gr{n}^o_\pm$. For a spin-$s$ tensor, this transformation reads in general (see details in Appendix~\ref{AppB}) \begin{equation}\label{Transformationgen} {T}^\pm[x^\mu,\gr{n}_i] = \exp(\pm \mathrm{i} s \phi) \exp(\beta^a D_a) {T}^\pm_o(x^\mu,\gr{n}^o_i), \end{equation} with \[ \phi\equiv\alpha+\gamma\quad\mbox{and}\quad\bm{\beta}\equiv\beta[\gr{n}_1^o\cos\gamma+\gr{n}_2^o\sin\gamma]\,. \] For a homogeneous spacetime, the dependence of ${T}^\pm_o$ on $x^\mu$ reduces to a time dependence. Eq.~(\ref{Transformationgen}) evaluated for a rank-$2$ tensor (that is, $s=2$) is needed to account for the projection effects in the definition of ${\cal W}^\pm[x^\mu(\gr{n}^o,\hat v)]$.
Similarly, Eq.~(\ref{Transformationgen}) in the case $s=0$ (that is, for a scalar field) is needed for the projection effects of ${H}_\parallel[x^\mu(\gr{n}^o,\hat v)]$ and ${\cal R}[x^\mu(\gr{n}^o,\hat v)]$. The Weyl tensor having only a non-vanishing electric part (with only non-vanishing components ${\cal E}_{xx}$, ${\cal E}_{yy}$ and ${\cal E}_{zz}$ in the natural Cartesian basis), one has \begin{equation}\label{EBdebase} {\cal W}^{\pm}_o(\eta,\gr{n}_i^o) = \sum_{m=0,\pm2} {\cal E}^o_{2 m}[\eta(\hat{v})]Y_{2\, m}^{\pm 2}(\gr{n}_i^{o}) \end{equation} with \begin{eqnarray} {\cal E}^o_{2\,0}&=&\sqrt{\frac{2 \pi}{15}}\left(2{\cal E}_{zz}-{\cal E}_{xx}-{\cal E}_{yy}\right)\\ {\cal E}^o_{2\,\pm2}&=&\sqrt{\frac{\pi}{5}}({\cal E}_{xx} - {\cal E}_{yy})\,. \end{eqnarray} The projection of the electric Weyl tensor has a directional dependence for $\ell=2$ and $m=0,\pm2$. However, the directional dependence of $\phi$ and $\beta^a$ in Eq.~(\ref{Transformationgen}), i.e. the projection effects, sources and mixes $E$- and $B$-modes at higher $\ell$, as for the $E/B$-mode mixing of CMB polarization~\cite{Challinor2005}. This projection effect also induces non-vanishing ${\cal R}_{\ell m}$ terms even if the background Ricci is homogeneous.\\ To go further and understand how this mixing of $E$- and $B$-modes arises, let us assume that $\hat\sigma^2/\Theta^2$ is small, so that we can work at first order in this parameter (we can think of B$I$ as a homogeneous perturbation of FL). Then, the geodesic equation and the parallel transport of ${\bm n}_a$ [Eq.~(\ref{e:prop_n})] lead to \begin{equation} \frac{\mathrm{d}}{\mathrm{d} \hat v} n^i = S^{ik}\sigma_{kj}n^j\,,\qquad S_{ij}\frac{\mathrm{d}}{\mathrm{d} \hat v} n_a^j=0 \end{equation} and thus at lowest order one easily obtains that $\phi\simeq 0$ and $\beta^a(\gr{n}^o,\hat{v}) \simeq \int_0^{\hat{v}} D_a \sigma(\gr{n}^o,\hat{v}') \mathrm{d} \hat{v}'$. Here $\sigma(\gr{n},\hat{v}) \equiv \sigma_{IJ}(\hat{v}) n^I n^J/2$ can be thought of as a lensing potential, and Eq.~(\ref{Transformationgen}) for $\bm{\mathcal{W}}$ gives \begin{equation}\label{lensBianchi} {\cal W}^\pm[x^\mu,\gr{n}_i]\simeq [1+\beta^a D_a] {\cal W}^\pm_o(\eta,\gr{n}_i^o), \end{equation} similar to the form taken by the linearized lensing of light polarization in FL~\cite{Hu2000,Challinor2002}. $\sigma(\gr{n}^o,\hat{v})$ obviously contains only $\ell=2$ multipoles. Because of the derivative coupling, using \cite{Hu2000} \begin{equation}\label{LensingYlms} D_a Y_{\ell_1 m_1} D^a Y_{\ell_2 m_2}^{\pm s}=\sum_{\ell m} L_{\ell \ell_1 \ell_2}\,{}^{\pm s}C^{m m_1 m_2}_{\ell \ell_1 \ell_2}\, Y_{\ell m}^{\pm s} \end{equation} where \begin{equation} L_{\ell \ell_1 \ell_2}\equiv\frac{1}{2}\left[\ell_1(\ell_1+1)+\ell_2(\ell_2+1)-\ell(\ell+1)\right] \end{equation} and further defining $$ {}^{\pm s}I^{m m_1 m_2}_{\ell \ell_1 \ell_2}\equiv L_{\ell \ell_1 \ell_2}\,{}^{\pm s}C^{m m_1 m_2}_{\ell \ell_1 \ell_2}, $$ one can convince oneself that, at the background level, terms such as \begin{equation} {\cal E}^{(0)}_{\ell m} \simeq {\cal E}^{o}_{\ell m}+ \,{}^{2}I^{m m_1 m_2}_{\ell \ell_1 \ell_2} \left(\int_0^{\hat{v}} {\sigma}_{\ell_1 m_1} \mathrm{d} \hat{v}'\right)\delta_L^{+}{\cal E}^o_{\ell_2 m_2} \end{equation} and \begin{equation} {\cal B}^{(0)}_{\ell m} \simeq -\mathrm{i} \,{}^{2}I^{m m_1 m_2}_{\ell \ell_1 \ell_2} \left(\int_0^{\hat{v}} {\sigma}_{\ell_1 m_1} \mathrm{d} \hat{v}' \right)\delta_L^{-}{\cal E}^o_{\ell_2 m_2} \end{equation} are expected when extracting the $E$- and $B$-modes out of Eq.~(\ref{lensBianchi}).
Thus a multipolar $\ell = 4$ $B$-mode will appear. One needs, however, to rely on the full transformation~(\ref{Transformationgen}), through which $E$- and $B$-modes are generated for larger $\ell$'s as well. Similar sources arise from ${\cal R}$ and $H_\parallel$, for which projection effects will generate non-vanishing ${\cal R}_{\ell m}^{(0)}$ and $h_{\ell m}^{(0)}$. A full analysis, including perturbations and magnitude estimations, will be presented in Ref.~\cite{next}. Our argument sketches the expected effects that arise from the higher multipoles induced by the background Weyl tensor and from the fact that $\gr{n}\not=\gr{n}^o$, an effect that cannot be neglected, even in the Born approximation, for anisotropic spaces. Besides, in B$I$ spacetimes, the amplitude of vectors and tensors is of the order of the shear times the amplitude of the scalars, which provides another source of $B$-modes. \section{Conclusion} \label{conclusion} We have provided a new multipolar hierarchy for weak lensing. Our formalism, which is fully covariant, relies neither on perturbation theory nor on the choice of a background spacetime. It allows us to relate the properties of the shear to symmetry properties of the background spacetime and to discuss the generation of $B$-modes. We have argued that a violation of local isotropy is expected to leave a $B$-mode signature on all scales. This result is important for future surveys, such as the Euclid mission~\cite{euclid} (early results on the $B$-modes have already been obtained from CFHTLS~\cite{cfhtls} and DLS~\cite{lsstpaper}, and we can forecast that Euclid will typically decrease the error bars on the $B$-modes by a factor of order 10 to 40 on scales ranging up to 40 degrees, that is, in the linear regime where astrophysical sources of $B$-modes are expected to be negligible), and it may allow us to set new constraints on the deviation from spatial isotropy on cosmological scales. The quantitative computation of the level of $B$-modes expected on large scales, where the gravitational dynamics can be considered linear, for a Bianchi universe is currently being investigated \cite{next} and requires a detailed study of cosmological perturbation theory beyond the analysis of a scalar field \cite{PPU}. \acknowledgements We thank Yannick Mellier and Francis Bernardeau for their comments and insights and Anthony Tyson for bringing the reference \cite{lsstpaper} to our attention. TSP thanks the {\it Institut d'Astrophysique de Paris} for the support and hospitality during the early stages of this work.
\section{Introduction} \subsection{Definition of the groups} \label{section intro} We denote by $\Sigma_{g,1}$ the orientable surface of genus $g$ with one boundary component, and by $\Gamma_{g,1}=\pi_0(\operatorname{Diff}_{\partial}(\Sigma_{g,1}))$ its \textit{mapping class group}, defined to be the group of isotopy classes of diffeomorphisms of $\Sigma_{g,1}$ fixing pointwise a neighbourhood of its boundary. We will define the spin mapping class groups using the approach of \cite{harerspin}, which is based on the notion of quadratic refinements. Given an integer-valued skew-symmetric bilinear form $(M,\lambda)$ on a finitely generated free $\mathbb{Z}$-module, a \textit{quadratic refinement} is a function $q: M \rightarrow \mathbb{Z}/2$ such that $q(x+y) \equiv q(x)+q(y)+ \lambda(x,y) \pmod 2$ for all $x,y \in M$. There are $2^{\operatorname{rk}(M)}$ quadratic refinements since a quadratic refinement is uniquely determined by its values on a basis of $M$, and any set of values is possible. The set of quadratic refinements of $(H_1(\Sigma_{g,1};\mathbb{Z}),\cdot)$ has a right $\Gamma_{g,1}$-action by precomposition. By \cite[Corollary 2]{Johnsonspin} this action has precisely two orbits for $g \geq 1$, distinguished by the \textit{Arf invariant}, which is a $\mathbb{Z}/2$-valued function on the set of quadratic refinements, see Definition \ref{defn arf}. For $\epsilon \in \{0,1\}$ we will write $\Gamma_{g,1}^{1/2}[\epsilon]:= \operatorname{Stab}_{\Gamma_{g,1}}(q)$, where $q$ is a choice of quadratic refinement of Arf invariant $\epsilon$, and call this the \textit{spin mapping class group in genus $g$ and Arf invariant $\epsilon$}. This notation is inspired by the one in \cite{harerspin}, where these groups are defined in a similar fashion. Similarly, for $g \geq 1$ the group $Sp_{2g}(\mathbb{Z})$ acts on the set of quadratic refinements of the standard symplectic form $(\mathbb{Z}^{2g},\Omega_g)$ with precisely two orbits, distinguished by the Arf invariant (see Section \ref{section symplectic} for details). Thus, for $\epsilon \in \{0,1\}$ we can define the \textit{quadratic symplectic group in genus $g$ and Arf invariant $\epsilon$} to be $Sp_{2g}^{\epsilon}(\mathbb{Z}):= \operatorname{Stab}_{Sp_{2g}(\mathbb{Z})}(q)$ for a fixed quadratic refinement $q$ of Arf invariant $\epsilon$. \subsection{Statement of results} \label{section results} Before stating the results let us recall what \textit{stabilization maps} mean in this context. We begin by fixing quadratic refinements $q_0,q_1$ of $(H_1(\Sigma_{1,1};\mathbb{Z}),\cdot) \cong (\mathbb{Z}^2,\Omega_1)$ of Arf invariants $0,1$ respectively. Then, given any quadratic refinement $q$ of $(H_1(\Sigma_{g-1,1};\mathbb{Z}),\cdot) \cong (\mathbb{Z}^{2(g-1)},\Omega_{g-1})$ we get a quadratic refinement $q \oplus q_{\epsilon}$ of $(H_1(\Sigma_{g,1};\mathbb{Z}),\cdot) \cong (\mathbb{Z}^{2g},\Omega_{g})$. Moreover, the Arf invariant is additive, see Definition \ref{defn arf}, so $\operatorname{Arf}(q\oplus q_{\epsilon})=\operatorname{Arf}(q)+\epsilon$. Thus, using the inclusions $\Gamma_{g-1,1} \subset \Gamma_{g,1}$ and $Sp_{2(g-1)}(\mathbb{Z}) \subset Sp_{2g}(\mathbb{Z})$ we get \textit{stabilization maps} $$\Gamma_{g-1,1}^{1/2}[\delta-\epsilon] \rightarrow \Gamma_{g,1}^{1/2}[\delta]$$ and $$Sp_{2(g-1)}^{\delta-\epsilon}(\mathbb{Z}) \rightarrow Sp_{2g}^{\delta}(\mathbb{Z}).$$ The goal of this paper is to study homological stability with respect to these stabilization maps.
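To make the two orbits concrete, the following short Python sketch enumerates quadratic refinements numerically. It is purely illustrative: we record a refinement by its values on a symplectic basis $a_1,b_1,\dots,a_g,b_g$ and use the standard formula $\operatorname{Arf}(q)=\sum_i q(a_i)q(b_i) \bmod 2$ as a stand-in for Definition \ref{defn arf}.
\begin{verbatim}
# Sketch: quadratic refinements of (Z^{2g}, Omega_g), recorded by their
# values in {0,1} on a symplectic basis a_1, b_1, ..., a_g, b_g. We take
# Arf(q) = sum_i q(a_i) q(b_i) mod 2 (the standard formula, assumed here).
from itertools import product

def arf(q):
    g = len(q) // 2
    return sum(q[2 * i] * q[2 * i + 1] for i in range(g)) % 2

g = 2
refinements = list(product((0, 1), repeat=2 * g))
assert len(refinements) == 2 ** (2 * g)   # 2^{rk(M)} refinements
print([sum(1 for q in refinements if arf(q) == e) for e in (0, 1)])
# -> [10, 6]: for g = 2 the two Arf level sets have sizes
#    2^{g-1}(2^g + 1) and 2^{g-1}(2^g - 1).

# Additivity of the Arf invariant under direct sum, as used above:
q1, q2 = (1, 1), (0, 1, 1, 0)
assert arf(q1 + q2) == (arf(q1) + arf(q2)) % 2   # tuples concatenate
\end{verbatim}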
Before moving to the results, observe that the additivity of the Arf invariant under direct sum of quadratic refinements also allows us to define products $$\Gamma_{g,1}^{1/2}[\epsilon] \times \Gamma_{g',1}^{1/2}[\epsilon'] \rightarrow \Gamma_{g+g',1}^{1/2}[\epsilon+\epsilon']$$ and $$Sp_{2g}^{\epsilon}(\mathbb{Z}) \times Sp_{2g'}^{\epsilon'}(\mathbb{Z})\rightarrow Sp_{2(g+g')}^{\epsilon+\epsilon'}(\mathbb{Z})$$ which contain the stabilization maps as particular cases. It is known since \cite[Theorem 3.1]{harerspin} that spin mapping class groups satisfy homological stability in the range $d \lesssim g/4$, and their stable homology can be understood by \cite[Section 1]{stablespin}. Thus, improvements in the stability range are important as they lead to new homology computations. In this direction, the previously known bounds can be found in \cite[Theorem 2.14]{rspin}, where a range of the form $d \lesssim 2g/5$ was shown. The main result of this paper improves the known stability range. \begin{thma} \label{theorem A} Consider the stabilization map $$H_d(\Gamma_{g-1,1}^{1/2}[\delta-\epsilon];\mathds{k}) \rightarrow H_d(\Gamma_{g,1}^{1/2}[\delta];\mathds{k}),$$ then: \begin{enumerate}[(i)] \item If $\mathds{k}=\mathbb{Z}$ it is surjective for $2d \leq g-2$ and an isomorphism for $2d \leq g-4$. \item If $\mathds{k}=\mathbb{Z}[\frac{1}{2}]$ it is surjective for $3d \leq 2g-3$ and an isomorphism for $3d \leq 2g-6$. \item There is a homology class $\theta \in H_2(\Gamma_{4,1}^{1/2}[0];\operatorname{\mathbb{F}_2})$ such that $$\theta \cdot-: H_{d-2}(\Gamma_{g-4,1}^{1/2}[\delta],\Gamma_{g-5,1}^{1/2}[\delta-\epsilon];\operatorname{\mathbb{F}_2}) \rightarrow H_d(\Gamma_{g,1}^{1/2}[\delta],\Gamma_{g-1,1}^{1/2}[\delta-\epsilon];\operatorname{\mathbb{F}_2})$$ is surjective for $3d \leq 2g-3$ and an isomorphism for $3d \leq 2g-6$. \end{enumerate} \end{thma} The result with $\mathbb{Z}[1/2]$-coefficients is optimal (up to possibly a better constant term) by Lemma \ref{lem optimallity}, and in particular the ``slope $2/3$'' cannot be improved. Part (iii) of the theorem is an example of \textit{secondary homological stability}, which means that it gives a range in which the defects of homological stability are themselves stable. By Corollary \ref{cor 2 torsion}, a consequence is that the $\operatorname{\mathbb{F}_2}$-homology also has slope $2/3$ stability if and only if $\theta^2$ destabilizes by $\sigma_{\epsilon}$; otherwise the slope $1/2$ of part (i) would be optimal integrally. We do not know which of these two options holds. Finally, let us remark that the class $\theta$ is not uniquely defined, see Remarks \ref{rem indeterminacy} and \ref{rem theta well-defined}, but the indeterminacy is small and the statement above holds for any such choice of $\theta$. The second main result will be about homological stability of quadratic symplectic groups. \begin{thmb} \label{theorem B} Consider the stabilization map $$H_d(Sp_{2(g-1)}^{\delta-\epsilon}(\mathbb{Z});\mathds{k}) \rightarrow H_d(Sp_{2g}^{\delta}(\mathbb{Z});\mathds{k}),$$ then: \begin{enumerate}[(i)] \item If $\mathds{k}=\mathbb{Z}$ it is surjective for $2d \leq g-2$ and an isomorphism for $2d \leq g-3$. \item If $\mathds{k}=\mathbb{Z}[\frac{1}{2}]$ it is surjective for $3d \leq 2g-3$ and an isomorphism for $3d \leq 2g-6$.
\item There is a homology class $\theta \in H_2(Sp_8^0(\mathbb{Z});\operatorname{\mathbb{F}_2})$ such that $$\theta \cdot-: H_{d-2}(Sp_{2(g-4)}^{\delta}(\mathbb{Z}),Sp_{2(g-5)}^{\delta-\epsilon}(\mathbb{Z});\operatorname{\mathbb{F}_2}) \rightarrow H_d(Sp_{2g}^{\delta}(\mathbb{Z}),Sp_{2(g-1)}^{\delta-\epsilon}(\mathbb{Z});\operatorname{\mathbb{F}_2})$$ is surjective for $3d \leq 2g-3$ and an isomorphism for $3d \leq 2g-6$. \end{enumerate} \end{thmb} The groups $Sp_{2g}^{0}(\mathbb{Z})$ have appeared in the literature under the name of \textit{theta subgroups of the symplectic groups}, and are sometimes denoted by $Sp_{2g}^{q}(\mathbb{Z})$. These groups are of importance in number theory, see \cite{presentationsymplectic} for example, and in the study of manifolds, as in \cite[Section 4]{framings}. The groups $Sp_{2g}^{1}(\mathbb{Z})$ are less common but have appeared recently in the study of manifolds in \cite[Section 4]{framings}, where they are denoted by $Sp_{2g}^{a}(\mathbb{Z})$. Some results were previously known about homological stability of quadratic symplectic groups. In particular, \cite[Theorem 5.2]{Nina} already gave a stability result of the form $d \lesssim g/2$, using different techniques. However, the improvement to $d \lesssim 2g/3$ in part (ii) of the above theorem is new. As before, part (iii) is a secondary stability result which implies that either the $\operatorname{\mathbb{F}_2}$-homology also has slope $2/3$ stability (if $\theta^2$ destabilizes) or the optimal slope of the $\operatorname{\mathbb{F}_2}$-homology is $1/2$ (otherwise). The class $\theta$ is again not well-defined, but the indeterminacy is understood by Remarks \ref{rem indeterminacy} and \ref{rem theta well-defined}. We will prove Theorems \hyperref[theorem A]{A} and \hyperref[theorem B]{B} using the technique of \textit{cellular $E_k$-algebras} developed in \cite{Ek}, and in particular we follow some ideas of \cite{E2}, where this approach is applied to homological stability of mapping class groups of surfaces. The basic idea is to define $E_2$-algebra structures on both $\bigsqcup_{g,\epsilon} B \Gamma_{g,1}^{1/2}[\epsilon]$ and $\bigsqcup_{g,\epsilon} B Sp_{2g}^{\epsilon}(\mathbb{Z})$ which are ``induced by the products'' $$\Gamma_{g,1}^{1/2}[\epsilon] \times \Gamma_{g',1}^{1/2}[\epsilon'] \rightarrow \Gamma_{g+g',1}^{1/2}[\epsilon+\epsilon']$$ and $$Sp_{2g}^{\epsilon}(\mathbb{Z}) \times Sp_{2g'}^{\epsilon'}(\mathbb{Z})\rightarrow Sp_{2(g+g')}^{\epsilon+\epsilon'}(\mathbb{Z})$$ respectively. This structure contains the stabilization maps but also captures more information, which will be used to prove the stability ranges and to properly define the class $\theta$ and the secondary stabilization. \subsection{Overview of cellular $E_2$-algebras} \label{section E2 algebras overview} The purpose of this section is to explain the methods from \cite{Ek} used in this paper: we aim for an informal discussion and refer to \cite{Ek} for details. In the $E_2$-algebras part of the paper we will work in the category $\mathsf{sMod}_{\mathds{k}}^{\mathsf{G}}$ of $\mathsf{G}$-graded simplicial $\mathds{k}$-modules, for $\mathds{k}$ a commutative ring and $\mathsf{G}$ a discrete monoid. Formally, $\mathsf{sMod}_{\mathds{k}}^{\mathsf{G}}$ denotes the category of functors from $\mathsf{G}$, viewed as a category with objects the elements of $\mathsf{G}$ and only identity morphisms, to $\mathsf{sMod}_{\mathds{k}}$.
This means that each object $M$ consists of a simplicial $\mathds{k}$-module $M_{\bullet}(x)$ for each $x \in \mathsf{G}$. The tensor product $\otimes$ in this category is given by Day convolution, i.e. $$ (M \otimes N)_p(x)= \bigoplus_{y+z=x}{M_p(y) \otimes_{\mathds{k}} N_p(z)}$$ where $+$ denotes the monoidal structure of $\mathsf{G}$. In a similar way one can define the category of $\mathsf{G}$-graded spaces, denoted by $\mathsf{Top}^{\mathsf{G}}$, and endow it with a monoidal structure by Day convolution using the cartesian product of spaces. The \textit{little 2-cubes} operad in $\mathsf{Top}$ has $n$-ary operations given by rectilinear embeddings $I^2 \times \{1,\cdots,n\} \hookrightarrow I^2$ such that the interiors of the images of the $2$-cubes are disjoint. (The space of 0-ary operations is empty.) We define the little $2$-cubes operad in $\mathsf{sMod}_{\mathds{k}}$ by applying the symmetric monoidal functor $(-)_{\mathds{k}}: \mathsf{Top} \rightarrow \mathsf{sMod}_{\mathds{k}}$ given by the free $\mathds{k}$-module on the singular simplicial set of a space. Moreover, $(-)_{\mathds{k}}$ can be promoted to a functor $(-)_{\mathds{k}}: \mathsf{Top}^{\mathsf{G}} \rightarrow \mathsf{sMod}_{\mathds{k}}^{\mathsf{G}}$ between the graded categories, and we define the little $2$-cubes operad in these by concentrating it in grading $0$, where $0 \in \mathsf{G}$ denotes the identity of the monoid. We shall denote this operad by $\mathcal{C}_2$ in all the categories $\mathsf{Top}, \mathsf{Top}^{\mathsf{G}}, \mathsf{sMod}_{\mathds{k}}^{\mathsf{G}}$ which we use, and define an $E_2$-\textit{algebra} to mean an algebra over this operad. The $E_2$-\textit{indecomposables} of an $E_2$-algebra $\textbf{R}$ in $\mathsf{sMod}_{\mathds{k}}^{\mathsf{G}}$ are defined by the exact sequence of graded simplicial $\mathds{k}$-modules $$ \bigoplus_{n \geq 2}{\mathcal{C}_2(n) \otimes \textbf{R}^{\otimes n}} \rightarrow \textbf{R} \rightarrow Q^{E_2}(\textbf{R}) \rightarrow 0.$$ The functor $\textbf{R} \mapsto Q^{E_2}(\textbf{R})$ is not homotopy-invariant but has a derived functor $Q_{\mathbb{L}}^{E_2}(-)$ which is. See \cite[Section 13]{Ek} for details and for how to define it in more general categories such as $E_2$-algebras in $\mathsf{Top}$ or $\mathsf{Top}^{\mathsf{G}}$. The $E_2$-\textit{homology} groups of $\textbf{R}$ are defined to be $$H_{x,d}^{E_2}(\textbf{R}):=H_d(Q_{\mathbb{L}}^{E_2}(\textbf{R})(x))$$ for $x \in \mathsf{G}$ and $d \in \mathbb{N}$. One formal property of the derived indecomposables, see \cite[Lemma 18.2]{Ek}, is that they commute with $(-)_{\mathds{k}}$, so for $\textbf{R} \in \operatorname{Alg}_{E_2}(\mathsf{Top}^{\mathsf{G}})$ its $E_2$-homology with $\mathds{k}$ coefficients is the same as the $E_2$-homology of $\textbf{R}_{\mathds{k}}$. Thus, in order to study homological stability of $\textbf{R}$ with different coefficients we can work with the $E_2$-algebras $\textbf{R}_{\mathds{k}}$ instead, which enjoy better properties: they are cofibrant, and the category of graded simplicial $\mathds{k}$-modules offers some technical advantages, as explained in \cite[Section 11]{Ek}. At the same time, we can do computations in $\mathsf{Top}$ of the homology or $E_2$-homology of $\textbf{R}$ and then transfer them to $\mathsf{sMod}_{\mathds{k}}$. In \cite[Section 6]{Ek} the notion of a \textit{CW $E_2$-algebra} is defined, built in terms of free $E_2$-algebras by iteratively attaching cells in the category of $E_2$-algebras in order of dimension.
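Before spelling out cell attachments, let us record for concreteness how the Day convolution above interacts with the grading. The following Python sketch is illustrative only: it tracks nothing but the dimensions of the graded pieces in a fixed simplicial degree, for the simplest grading monoid $\mathsf{G}=(\mathbb{N},+)$.
\begin{verbatim}
# Sketch: Day convolution on graded dimensions. An object of sMod_k^G is
# recorded here only through dim M_p(x) for one fixed simplicial degree p;
# the tensor product convolves the gradings:
#     dim (M tensor N)_p(x) = sum over y+z=x of dim M_p(y) * dim N_p(z).
from collections import defaultdict

def day_convolution(dim_M, dim_N):
    out = defaultdict(int)
    for y, dim_y in dim_M.items():
        for z, dim_z in dim_N.items():
            out[y + z] += dim_y * dim_z
    return dict(out)

M = {1: 1, 2: 3}   # e.g. one generator in rank 1 and three in rank 2
N = {1: 2}
print(day_convolution(M, N))   # {2: 2, 3: 6}
\end{verbatim}
The same bookkeeping applies verbatim to the monoid $\mathsf{H}$ used later, whose elements add coordinatewise.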
Let $\Delta^{x,d} \in \mathsf{sSet}^{\mathsf{G}}$ be the standard $d$-simplex placed in grading $x$ and let $\partial \Delta^{x,d} \in \mathsf{sSet}^{\mathsf{G}}$ be its boundary. By applying the free $\mathds{k}$-module functor we get objects $\Delta_{\mathds{k}}^{x,d}, \partial \Delta_{\mathds{k}}^{x,d} \in \mathsf{sMod}_{\mathds{k}}^{\mathsf{G}}$. We then define the graded spheres in $\mathsf{sMod}_{\mathds{k}}^{\mathsf{G}}$ via $S_{\mathds{k}}^{x,d}:=\Delta_{\mathds{k}}^{x,d}/ \partial \Delta_{\mathds{k}}^{x,d}$, where the quotient denotes the cofibre of the inclusion of the boundary into the disc. In $\mathsf{sMod}_{\mathds{k}}^{\mathsf{G}}$, the data for a cell attachment to an $E_2$-algebra $\textbf{R}$ is an \textit{attaching map} $e: \partial \Delta_{\mathds{k}}^{x,d} \rightarrow \textbf{R}$, which is the same as a map $\partial \Delta^d_{\mathds{k}} \rightarrow \textbf{R}(x)$ of simplicial $\mathds{k}$-modules. To attach the cell we form the pushout in $\operatorname{Alg}_{E_2}(\mathsf{sMod}_{\mathds{k}}^{\mathsf{G}})$ \centerline{\xymatrix{ \mathbf{E_2}(\partial \Delta_{\mathds{k}}^{x,d}) \ar[r] \ar[d] & \textbf{R} \ar[d] \\ \mathbf{E_2}(\Delta_{\mathds{k}}^{x,d}) \ar[r] & \textbf{R} \cup_{e}^{E_2} D_{\mathds{k}}^{x,d}. }} A weak equivalence $\textbf{C} \xrightarrow{\sim} \textbf{R}$ from a CW $E_2$-algebra is called a \textit{CW-approximation} to $\textbf{R}$, and a key result, \cite[Theorem 11.21]{Ek}, is that if $\textbf{R}(0) \simeq 0$ then $\textbf{R}$ admits a CW-approximation. Moreover, whenever $\mathds{k}$ is a field, we can construct a CW-approximation in which the number of $(x,d)$-cells needed is precisely the dimension of $H_{x,d}^{E_2}(\textbf{R})$. By ``giving the $d$-cells filtration $d$'', see \cite[Section 11]{Ek} for a more precise discussion of what this means, one gets a skeletal filtration of this $E_2$-algebra and a spectral sequence computing the homology of $\textbf{R}$. In order to discuss homological stability of $E_2$-algebras we will need some preparation. For the rest of this section let $\textbf{R} \in \operatorname{Alg}_{E_2}(\mathsf{sMod}_{\mathds{k}}^{\mathsf{G}})$, where $\mathsf{G}$ is equipped with a monoidal functor $\operatorname{rk}: \mathsf{G} \rightarrow \mathbb{N}$; and suppose we are given a homology class $\sigma \in H_{x,0}(\textbf{R})$ with $\operatorname{rk}(x)=1$. By definition $\sigma$ is a homotopy class of maps $\sigma: S_{\mathds{k}}^{x,0} \rightarrow \mathbf{R}$. Following \cite[Section 12.2]{Ek}, there is a strictly associative algebra $\mathbf{\overline{R}}$ which is equivalent to the unitalization $\mathbf{R^+}:=\mathds{1} \oplus \textbf{R}$, where $\mathds{1}$ is the monoidal unit in simplicial modules. Then, $\sigma$ gives a map $\sigma \cdot-: S_{\mathds{k}}^{x,0} \otimes \mathbf{\overline{R}} \rightarrow \mathbf{\overline{R}}$ by using the associative product of $\mathbf{\overline{R}}$. We then define $\mathbf{\overline{R}}/\sigma$ to be the cofibre of this map. Observe that a priori $\sigma \cdot -$ is not a (left) $\mathbf{\overline{R}}$-module map, so the cofibre $\mathbf{\overline{R}}/\sigma$ is not an $\mathbf{\overline{R}}$-module.
However, by the ``adapters construction'' in \cite[Section 12.2]{Ek} and its applications in \cite[Section 12.2.3]{Ek}, there is a way of defining a cofibration sequence $S_{\mathds{k}}^{x,0} \otimes \mathbf{\overline{R}} \xrightarrow{\sigma \cdot - } \mathbf{\overline{R}} \rightarrow \mathbf{\overline{R}}/\sigma$ in the category of left $\mathbf{\overline{R}}$-modules in such a way that forgetting the $\mathbf{\overline{R}}$-module structure recovers the previous construction; we will make use of this fact in some of the proofs of Section \ref{section Ek}. By construction $\sigma \cdot -$ induces maps $\textbf{R}(y) \rightarrow \textbf{R}(x+y)$ between the different graded components of $\textbf{R}$, and the homology of the object $\mathbf{\overline{R}}/\sigma$ captures the relative homology of these. Thus, homological stability results for $\textbf{R}$ using $\sigma$ to stabilize can be reformulated as vanishing ranges for $H_{x,d}(\mathbf{\overline{R}}/\sigma)$; the advantage of doing so is that using filtrations for CW-approximations of $\textbf{R}$ one also gets filtrations of $\mathbf{\overline{R}}/\sigma$ and hence spectral sequences capable of detecting vanishing ranges. The secondary stability result can be written in terms of $E_2$-algebras in a similar way: this time we will have a class $\sigma$ as above and another class $\theta$, and we will prove a vanishing in the homology of the iterated cofibre construction $\mathbf{R}/(\sigma,\theta):=(\mathbf{R}/\sigma)/\theta$, in the sense of \cite[Section 12.2.3]{Ek}. \subsection{Organization of the paper} In Section \ref{section Ek} we will state generic stability results for $E_2$-algebras, inspired by \cite[Theorem 18.1]{Ek}, which will be shown in Section \ref{section proof} and then used to prove Theorems \hyperref[theorem A]{A} and \hyperref[theorem B]{B}. In Section \ref{section 4} we will define the notion of ``quadratic data'' and explain how it produces a so-called ``quadratic $E_2$-algebra''. This construction generalizes the way that spin mapping class groups and quadratic symplectic groups are defined from the mapping class groups and symplectic groups respectively. Finally, Theorem \ref{theorem splitting complexes} and Corollary \ref{cor std connectivity} will give ways to access information about the $E_2$-cells of the quadratic $E_2$-algebra from knowledge about the underlying non-quadratic algebra. Section \ref{section symplectic} is devoted to the proof of Theorem \hyperref[theorem B]{B}, which is an application of the results of the previous two Sections. In the proof of the last two parts of the theorem we will also need some information about the first homology groups of quadratic symplectic groups, which can be found in the \hyperref[appendix]{Appendix}. Section \ref{section mcg} contains the proof of Theorem \hyperref[theorem A]{A}, which follows similar steps to the previous Section. Finally, the \hyperref[appendix]{Appendix} contains detailed computations of the first homology groups of spin mapping class groups and quadratic symplectic groups. Let us remark that a full description of all first homology groups and stabilization maps is included for completeness, even if not everything there is used to prove Theorems \hyperref[theorem A]{A} and \hyperref[theorem B]{B}. The main idea behind the computations is to start with known presentations of the mapping class groups and symplectic groups and then to find presentations for the (finite index) spin mapping class groups and quadratic symplectic groups using GAP.
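To illustrate the last step of this strategy, the following Python sketch computes the first homology of a finitely presented group as the cokernel of its relation matrix via the Smith normal form. It is a toy stand-in for the actual GAP computations, and the presentation used in the example is made up for illustration.
\begin{verbatim}
# Sketch: H_1 of a finitely presented group is its abelianization, i.e.
# the cokernel of the integer matrix whose rows are the exponent sums of
# the relators in the generators. Toy stand-in for the GAP computations.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

def first_homology(relation_matrix):
    # The diagonal entries d_i of the Smith normal form give
    # H_1 = direct sum of Z/d_i, with d_i = 0 contributing a Z summand.
    return smith_normal_form(Matrix(relation_matrix), domain=ZZ)

# Made-up example: < x, y | x^2 y^{-2}, [x, y] > has relation matrix
# [[2, -2], [0, 0]], so H_1 = Z/2 + Z.
print(first_homology([[2, -2], [0, 0]]))   # Matrix([[2, 0], [0, 0]])
\end{verbatim}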
\section*{Acknowledgements.} I am supported by an EPSRC PhD Studentship, grant no. 2261123, and by O. Randal-Williams' Philip Leverhulme Prize from the Leverhulme Trust. I would like to give special thanks to my PhD supervisor Oscar Randal-Williams for all his advice and all the helpful discussions and corrections. \section{Generic homological stability results} \label{section Ek} In this section we will state three homological stability results for $E_2$-algebras, Theorems \ref{theorem stab 1}, \ref{theorem stab 2} and \ref{thm stab 3}, which will later be applied to quadratic symplectic groups and spin mapping class groups, see Sections \ref{section symplectic} and \ref{section mcg}. The first two of these are inspired by the generic homological stability theorem \cite[Theorem 18.1]{Ek}, in the sense that they input a vanishing line on the $E_2$-homology of an $E_2$-algebra along with some information about the homology in small bidegrees, and they output homological stability results for the algebra. The third one is a secondary stability result, and it is inspired by \cite[Lemma 5.6, Theorem 5.12]{E2}. In addition, we have Corollary \ref{cor 2 torsion}, which says that $E_2$-algebras satisfying the assumptions of Theorem \ref{thm stab 3} have homological stability of slope either $1/2$ or $2/3$ depending on the value of a certain homology class. However, the precise statement of this Corollary is deferred to the next Section, until we have properly defined what the homology class is. Before stating the results let us define the grading category that will be relevant: let $\mathsf{H}$ be the discrete monoid $\{0\} \cup (\mathbb{N}_{>0} \times \mathbb{Z}/2)$, where the monoidal structure $+$ is given by addition in both coordinates. We denote by $\operatorname{rk}: \mathsf{H} \rightarrow \mathbb{N}$ the monoidal functor given by projection to the first coordinate. Also, let us recall that on $\operatorname{Alg}_{E_2}(\mathsf{sMod}_{\mathds{k}}^{\mathbb{N}})$ there is a homology operation $Q_{\mathds{k}}^1(-): H_{*,0}(-) \rightarrow H_{2*,1}(-)$ defined in \cite[Page 199]{Ek}. This operation satisfies $-2 Q_{\mathds{k}}^1(-)=[-,-]$, where $[-,-]$ is the Browder bracket. By using the canonical rank functor $\operatorname{rk}: \mathsf{H} \rightarrow \mathbb{N}$ we can view any $\mathsf{H}$-graded $E_2$-algebra as $\mathbb{N}$-graded and hence make sense of this operation too. \begin{theorem} \label{theorem stab 1} Let $\mathds{k}$ be a commutative ring and let $\mathbf{X} \in \operatorname{Alg}_{E_2}(\operatorname{\mathsf{sMod}}_{\mathds{k}}^{\mathsf{H}})$ be such that $H_{0,0}(\mathbf{X})=0$, $H_{x,d}^{E_2}(\mathbf{X})=0$ for $d<\operatorname{rk}(x)-1$, and $H_{*,0}(\mathbf{\overline{X}})=\frac{\mathds{k}[\sigma_0,\sigma_1]}{(\sigma_1^2-\sigma_0^2)}$ as a ring, for some classes $\sigma_{\epsilon} \in H_{(1,\epsilon),0}(\mathbf{X})$. Then, for any $\epsilon \in \{0,1\}$ and any $x \in \mathsf{H}$ we have $H_{x,d}(\mathbf{\overline{X}}/\sigma_{\epsilon})=0$ for $2d \leq \operatorname{rk}(x)-2$. \end{theorem} \begin{theorem} \label{theorem stab 2} Let $\mathds{k}$ be a commutative $\mathbb{Z}[1/2]$-algebra, let $\textbf{X} \in \operatorname{Alg}_{E_2}(\mathsf{sMod}_{\mathds{k}}^{\mathsf{H}})$ be such that $H_{0,0}(\mathbf{X})=0$, $H_{x,d}^{E_2}(\textbf{X})=0$ for $d<\operatorname{rk}(x)-1$, and $H_{*,0}(\mathbf{\overline{X}})=\frac{\mathds{k}[\sigma_0,\sigma_1]}{(\sigma_1^2-\sigma_0^2)}$ as a ring, for some classes $\sigma_{\epsilon} \in H_{(1,\epsilon),0}(\textbf{X})$.
Suppose in addition that for some $\epsilon \in \{0,1\}$ we have: \begin{enumerate}[(i)] \item $\sigma_{\epsilon} \cdot - : H_{(1,1-\epsilon),1}(\textbf{X}) \rightarrow H_{(2,1),1}(\textbf{X})$ is surjective. \item $\operatorname{coker}(\sigma_{\epsilon} \cdot-: H_{(1,\epsilon),1}(\textbf{X}) \rightarrow H_{(2,0),1}(\textbf{X}))$ is generated by $Q_{\mathds{k}}^1(\sigma_0)$ as a $\mathbb{Z}$-module. \item $\sigma_{1-\epsilon} \cdot Q_{\mathds{k}}^1(\sigma_0) \in H_{(3,1-\epsilon),1}(\textbf{X})$ lies in the image of $\sigma_{\epsilon}^2 \cdot -:H_{(1,1-\epsilon),1}(\textbf{X}) \rightarrow H_{(3,1-\epsilon),1}(\textbf{X})$. \end{enumerate} Then $H_{x,d}(\mathbf{\overline{X}}/\sigma_{\epsilon})=0$ for $3d \leq 2 \operatorname{rk}(x)-3$. \end{theorem} \begin{theorem} \label{thm stab 3} Let $\textbf{X} \in \operatorname{Alg}_{E_2}(\mathsf{sMod}_{\operatorname{\mathbb{F}_2}}^{\mathsf{H}})$ be such that $H_{0,0}(\mathbf{X})=0$, $H_{x,d}^{E_2}(\textbf{X})=0$ for $d<\operatorname{rk}(x)-1$, and $H_{*,0}(\mathbf{\overline{X}})=\frac{\operatorname{\mathbb{F}_2}[\sigma_0,\sigma_1]}{(\sigma_1^2-\sigma_0^2)}$ as a ring, for some classes $\sigma_{\epsilon} \in H_{(1,\epsilon),0}(\textbf{X})$. Suppose in addition that for some $\epsilon \in \{0,1\}$ we have: \begin{enumerate}[(i)] \item $\sigma_{\epsilon} \cdot - : H_{(1,1-\epsilon),1}(\textbf{X}) \rightarrow H_{(2,1),1}(\textbf{X})$ is surjective. \item $\operatorname{coker}(\sigma_{\epsilon} \cdot-: H_{(1,\epsilon),1}(\textbf{X}) \rightarrow H_{(2,0),1}(\textbf{X}))$ is generated by $Q^1(\sigma_0)$. \item $\sigma_{1-\epsilon} \cdot Q^1(\sigma_0) \in H_{(3,1-\epsilon),1}(\textbf{X})$ lies in the image of $\sigma_{\epsilon}^2 \cdot -:H_{(1,1-\epsilon),1}(\textbf{X}) \rightarrow H_{(3,1-\epsilon),1}(\textbf{X})$. \item $\sigma_0 \cdot Q^1(\sigma_0) \in H_{(3,0),1}(\textbf{X})$ lies in the image of $\sigma_{\epsilon}^2 \cdot -:H_{(1,0),1}(\textbf{X}) \rightarrow H_{(3,0),1}(\textbf{X})$. \end{enumerate} Then there is a class $\theta \in H_{(4,0),2}(\mathbf{X})$ such that $H_{x,d}(\mathbf{\overline{X}}/(\sigma_{\epsilon},\theta))=0$ for $3d \leq 2 \operatorname{rk}(x)-3$. \end{theorem} \section{Proving Theorems \ref{theorem stab 1}, \ref{theorem stab 2} and \ref{thm stab 3}} \label{section proof} We will need some preparation. For $\mathds{k}$ a commutative ring we define $$\mathbf{A_{\kk}}:= \mathbf{E_2}(S_{\mathds{k}}^{(1,0),0} \sigma_0 \oplus S_{\mathds{k}}^{(1,1),0} \sigma_1) \cup_{\sigma_1^2-\sigma_0^2}^{E_2}{D_{\mathds{k}}^{(2,0),1} \rho} \in \operatorname{Alg}_{E_2}(\operatorname{\mathsf{sMod}}_{\mathds{k}}^{\mathsf{H}}).$$ \begin{proposition} \label{prop Ak} The $E_2$-algebra $\mathbf{A_{\kk}}$ satisfies the assumptions of Theorem \ref{theorem stab 1}, i.e. $H_{0,0}(\mathbf{A_{\kk}})=0$, $H_{x,d}^{E_2}(\mathbf{A_{\kk}})=0$ for $d<\operatorname{rk}(x)-1$ and $H_{*,0}(\mathbf{\overline{A_{\mathds{k}}}})=\mathds{k}[\sigma_0,\sigma_1]/(\sigma_1^2-\sigma_0^2)$ as a ring. \end{proposition} \begin{proof} Since $\mathbf{A_{\kk}}$ is built using cells, $Q_{\mathbb{L}}^{E_2}(\mathbf{A_{\kk}})=S_{\mathds{k}}^{(1,0),0} \sigma_0 \oplus S_{\mathds{k}}^{(1,1),0} \sigma_1 \oplus S_{\mathds{k}}^{(2,0),1} \rho$, see \cite[Sections 6.1.3 and 8.2.1]{Ek} for details. Thus $H_{x,d}^{E_2}(\mathbf{A_{\kk}})=0$ for $d<\operatorname{rk}(x)-1$.
For the homology computations it suffices to consider the case $\mathds{k}=\mathbb{Z}$ by the following argument: let us write $- \otimes \mathds{k}: \operatorname{\mathsf{sMod}}_{\mathbb{Z}} \rightarrow \operatorname{\mathsf{sMod}}_{\mathds{k}}$ for the base-change functor and for the corresponding functor between $\mathsf{H}$-graded categories. Base-change is symmetric monoidal, preserves colimits and satisfies $S_{\mathbb{Z}}^{x,d} \otimes \mathds{k} = S_{\mathds{k}}^{x,d}$, and hence we recognize $\mathbf{A_{\kk}}= \mathbf{A_{\mathbb{Z}}} \otimes \mathds{k}$. Thus, the universal coefficients theorem in homological degree $0$ gives that $H_{x,0}(\mathbf{A_{\mathbb{Z}}}) \otimes \mathds{k} \xrightarrow{\cong} H_{x,0}(\mathbf{A_{\kk}})$, implying the claimed reduction. To simplify notation we will omit the subscript $\mathbb{Z}$ for the rest of this proof, since we will only treat the case $\mathds{k}=\mathbb{Z}$. Consider the cell-attachment filtration $\mathbf{fA} \in \operatorname{Alg}_{E_2}((\operatorname{\mathsf{sMod}}_{\mathbb{Z}}^{\mathsf{H}})^{\mathbb{Z}_{\leq}})$, which by \cite[Section 6.2.1]{Ek} is given by $$\mathbf{fA}= \mathbf{E_2}(S^{(1,0),0,0} \sigma_0 \oplus S^{(1,1),0,0} \sigma_1) \cup_{\sigma_1^2-\sigma_0^2}^{E_2}{D^{(2,0),1,1} \rho},$$ where the last grading represents the filtration stage. By \cite[Corollary 10.17]{Ek} there is a spectral sequence $$E^1_{x,p,q}=H_{x,p+q,p}(\overline{\mathbf{E_2}(S^{(1,0),0,0} \sigma_0 \oplus S^{(1,1),0,0} \sigma_1 \oplus S^{(2,0),1,1} \rho)}) \Rightarrow H_{x,p+q}(\mathbf{\overline{A}}).$$ The first page of this spectral sequence can be accessed by \cite[Theorems 16.4, 16.7]{Ek} and the description of the homology operation $Q_{\mathbb{Z}}^1(-)$ in \cite[Page 199]{Ek}. In homological degrees $p+q \leq 1$ the full answer is given by \begin{enumerate}[$\bullet$] \item $E^1_{x,0,p}$ vanishes for $p \neq 0$, and $\bigoplus_{x \in \mathsf{H}}{E^1_{x,0,0}}$ is the free $\mathbb{Z}$-module on the set of generators $\{\sigma_0^a \cdot \sigma_1^b: \; a+b \geq 0\}$, where $\sigma_0^0=\sigma_1^0=1$. \item The only elements in homological degree $p+q=1$ are stabilizations by powers of $\sigma_0$ and $\sigma_1$ of one of the classes $\rho$, $Q_{\mathbb{Z}}^1(\sigma_0)$, $Q_{\mathbb{Z}}^1(\sigma_1)$, or $[\sigma_0,\sigma_1]$. Thus they have filtration $p \leq 1$. \end{enumerate} By \cite[Section 16.6]{Ek} the spectral sequence is multiplicative and its differential satisfies $d^1(\sigma_0)=0$, $d^1(\sigma_1)=0$ and $d^1(\rho)=\sigma_1^2-\sigma_0^2$. Thus, $\bigoplus_{x \in \mathsf{H}} E^2_{x,0,0}$ is given as a ring by $\mathbb{Z}[\sigma_0,\sigma_1]/(\sigma_1^2-\sigma_0^2)$. Hence to finish the proof it suffices to show that $E^2_{x,0,0}=E^{\infty}_{x,0,0}$ for any $x \in \mathsf{H}$. This holds because $d^r$ decreases filtration by $r$ and homological degree by $1$, so $d^r$ vanishes on all the elements of homological degree $1$ for $r \geq 2$.
Thus there is an $E_2$-algebra map $\mathbf{E_2}(S_{\mathds{k}}^{(1,0),0} \sigma_0 \oplus S_{\mathds{k}}^{(1,1),0} \sigma_1) \rightarrow \mathbf{X}$ sending $\sigma_0,\sigma_1$ to the corresponding homology classes of $\mathbf{X}$. By assumption $\sigma_1^2-\sigma_0^2=0 \in H_{(2,0),0}(\mathbf{X})$, so picking a nullhomotopy gives an extension to an $E_2$-algebra map $c: \mathbf{A_{\kk}} \rightarrow \mathbf{X}$. Now we claim that for the map $c$ we have $H_{x,d}^{E_2}(\mathbf{X},\mathbf{A_{\kk}})=0$ for $d<\operatorname{rk}(x)/2$. Indeed, by assumption $H_{x,d}^{E_2}(\mathbf{X})=0$ for $d<\operatorname{rk}(x)-1$ and by Proposition \ref{prop Ak} $H_{x,d}^{E_2}(\mathbf{A_{\kk}})=0$ for $d<\operatorname{rk}(x)-1$ too. Thus, it suffices to show the claim for $(\operatorname{rk}(x)=1,d=0)$. Both $\mathbf{X}$ and $\mathbf{A_{\kk}}$ are reduced, i.e. $H_{0,0}(-)$ vanishes on both of them, and hence by \cite[Corollary 11.12]{Ek} it suffices to show that $H_{x,0}(\mathbf{X},\mathbf{A_{\kk}})=0$ for $\operatorname{rk}(x)=1$, which holds by our assumption about the 0th homology of $\mathbf{X}$ and Proposition \ref{prop Ak}. Now let us suppose that the theorem holds for $\mathbf{X}=\mathbf{A_{\kk}}$. By \cite[Corollary 15.10]{Ek} with $\rho(x)= \operatorname{rk}(x)/2$, $\mu(x)=(\operatorname{rk}(x)-1)/2$ and $\mathbf{M}=\mathbf{\overline{A}_{\mathds{k}}}/\sigma_{\epsilon}$ we find that $H_{x,d}(B(\mathbf{\overline{X}}, \mathbf{\overline{A_{\mathds{k}}}},\mathbf{M}))=0$ for $d< \mu(x)$. But, by \cite[Section 12.2.4]{Ek}, $B(\mathbf{\overline{X}}, \mathbf{\overline{A_{\mathds{k}}}},\mathbf{M}) \simeq \mathbf{\overline{X}}/\sigma_{\epsilon}$, giving the required reduction. Note that we use the ``adapters construction'' of \cite[Section 12.2]{Ek} to view $\mathbf{M}$ as a left $\mathbf{\overline{A_{\mathds{k}}}}$-module. We will also use this construction in the rest of this proof without explicit mention. \textbf{Step 2.} Now we will further reduce to the case $\mathbf{A}_{\operatorname{\mathbb{F}_{\ell}}}$, for $\ell$ a prime number or $0$, where $\mathbb{F}_0:=\mathbb{Q}$. Recall from the proof of Proposition \ref{prop Ak} that $\mathbf{A_{\kk}}= \mathbf{A_{\mathbb{Z}}} \otimes \mathds{k}$. Since base-change preserves colimits, the cofibration sequence $S_{\mathds{k}}^{(1,\epsilon),0} \otimes \mathbf{\overline{A_{\mathds{k}}}} \xrightarrow{\sigma_{\epsilon} \cdot - } \mathbf{\overline{A_{\mathds{k}}}}\rightarrow \mathbf{\overline{A_{\mathds{k}}}}/\sigma_{\epsilon}$ shows that $\mathbf{\overline{A_{\mathds{k}}}}/\sigma_{\epsilon} \cong \mathbf{\overline{A_{\mathbb{Z}}}}/\sigma_{\epsilon} \otimes \mathds{k}$. Thus, by the universal coefficients theorem it suffices to prove the case $\mathds{k}=\mathbb{Z}$. We claim that the homology groups of $\mathbf{\overline{A_{\mathbb{Z}}}}$ are finitely generated. Indeed, by \cite[Theorem 16.4]{Ek} each entry on the first page of the spectral sequence of the proof of Proposition \ref{prop Ak} is finitely generated. Observe that this is a quite general fact that holds for any cellular $E_2$-algebra with finitely many cells, by considering the analogous cell attachment filtration. We will use this again in the proof of Theorem \ref{theorem stab 2}. Hence, by the homology long exact sequence of $\sigma_{\epsilon} \cdot -$, the homology groups of $\mathbf{\overline{A_{\mathbb{Z}}}}/\sigma_{\epsilon}$ are also finitely generated.
Thus, it suffices to check the cases $\mathds{k}=\operatorname{\mathbb{F}_{\ell}}$ for $\ell$ a prime number or $0$, by another application of the universal coefficients theorem and using the finite generation of the homology groups. \textbf{Step 3.} We will prove the theorem for a fixed $\ell$, $\mathds{k}=\operatorname{\mathbb{F}_{\ell}}$ and $\mathbf{X}= \mathbf{A_{\kk}}$. To simplify notation we will not write the subscripts $\operatorname{\mathbb{F}_{\ell}}$ for the rest of this proof. Consider the cell attachment filtration $\mathbf{fA} \in \operatorname{Alg}_{E_2}((\operatorname{\mathsf{sMod}}_{\operatorname{\mathbb{F}_{\ell}}}^{\mathsf{H}})^{\mathbb{Z}_{\leq}})$ as in the proof of Proposition \ref{prop Ak}. The filtration $0$ part is given by $\mathbf{\overline{fA}}(0)=\overline{\mathbf{E_2}(S^{(1,0),0} \sigma_0 \oplus S^{(1,1),0} \sigma_1)}$ so we can lift the maps $\sigma_{\epsilon}$ to filtered maps $\sigma_{\epsilon}: S^{(1,\epsilon),0,0} \rightarrow \mathbf{\overline{fA}}$. Thus, using adapters we can form the left $\mathbf{\overline{fA}}$-module $\mathbf{\overline{fA}}/\sigma_{\epsilon}$ filtering $\mathbf{\overline{A}}/\sigma_{\epsilon}$. Since $\operatorname{gr}(-)$ commutes with pushouts and with $\overline{(-)}$ by \cite[Lemma 12.7]{Ek}, we get two spectral sequences \begin{enumerate}[(i)] \item $\Scale[0.9]{F^1_{x,p,q}=H_{x,p+q,p}(\overline{\mathbf{E_2}(S^{(1,0),0,0} \sigma_0 \oplus S^{(1,1),0,0} \sigma_1 \oplus S^{(2,0),1,1} \rho)}) \Rightarrow H_{x,p+q}(\mathbf{\overline{A}})}$ \item $\Scale[0.9]{E^1_{x,p,q}=H_{x,p+q,p}(\overline{\mathbf{E_2}(S^{(1,0),0,0} \sigma_0 \oplus S^{(1,1),0,0} \sigma_1 \oplus S^{(2,0),1,1} \rho)}/\sigma_{\epsilon}) \Rightarrow H_{x,p+q}(\mathbf{\overline{A}}/\sigma_{\epsilon})}.$ \end{enumerate} In order to prove the theorem it suffices to show the following claim \begin{claim} $E^2_{x,p,q}=0$ for $p+q<(\operatorname{rk}(x)-1)/2$. \end{claim} We will need some preparation. As in the proof of Proposition \ref{prop Ak}, the first spectral sequence is multiplicative and its differential satisfies $d^1(\sigma_0)=0$, $d^1(\sigma_1)=0$ and $d^1(\rho)=\sigma_1^2-\sigma_0^2$. Moreover, by \cite[Theorem 16.4, Section 16.2]{Ek}, its first page is given by $\Lambda(L)$ where $\Lambda(-)$ denotes the free graded-commutative algebra, and $L$ is the $\operatorname{\mathbb{F}_{\ell}}$-vector space with basis $Q_{\ell}^I(y)$ such that $y$ is a basic Lie word in $\{\sigma_0, \sigma_1, \rho\}$ and $I$ is admissible, in the sense of \cite[Section 16.2]{Ek}. The second spectral sequence is a module over the first one, and we can identify $E^1=F^1/(\sigma_{\epsilon})$ because $\sigma_{\epsilon} \cdot -$ is injective in $F^1$, by the above description of $F^1$, and its image is the ideal $(\sigma_{\epsilon})$. Therefore $E^1= \Lambda(L/\operatorname{\mathbb{F}_{\ell}}\{\sigma_{\epsilon}\})$ and hence the $d^1$ differential in $F^1$ completely determines the $d^1$ differential in $E^1$, making it into a CDGA. \begin{proof}[Proof of Claim.] $E^2$ is given by the homology of the CDGA $(E^1,d^1)$, and to prove the result we will introduce a ``computational filtration'' in this CDGA that has the virtue of filtering away most of the $d^1$ differential. 
We let $\mathcal{F}^{\bullet} E^1$ be the filtration in which $\sigma_{1-\epsilon}$ and $\rho$ are given filtration $0$, the remaining elements of a homogeneous basis of $L/\operatorname{\mathbb{F}_{\ell}}\{\sigma_{\epsilon}\}$ extending these are given filtration equal to their homological degree, and then we extend the filtration to $\Lambda(L/\operatorname{\mathbb{F}_{\ell}}\{\sigma_{\epsilon}\})$ multiplicatively. Since $d^1$ preserves this filtration we get a spectral sequence converging to $E^2$ whose first page is the homology of the associated graded $\operatorname{gr}(\mathcal{F}^{\bullet} E^1)$. Thus, it suffices to show that $H_*(\operatorname{gr}(\mathcal{F}^{\bullet} E^1))$ already has the required vanishing line. Let us denote by $D$ the corresponding differential on this computational spectral sequence. Since $d^1$ lowers homological degree by $1$ we can decompose $(\operatorname{gr}(\mathcal{F}^{\bullet} E^1),D)$ as a tensor product $$(\Lambda(\operatorname{\mathbb{F}_{\ell}}\{\sigma_{1-\epsilon},\rho\}),D) \otimes_{\operatorname{\mathbb{F}_{\ell}}} (\Lambda(L/\operatorname{\mathbb{F}_{\ell}}\{\sigma_0,\sigma_1,\rho\}),0),$$ where $D$ satisfies $D(\sigma_{1-\epsilon})=0$ and $D(\rho)=(-1)^{\epsilon} \sigma_{1-\epsilon}^2$. By the Künneth theorem the homology of this tensor product is $\operatorname{\mathbb{F}_{\ell}}\{1,\sigma_{1-\epsilon}\} \otimes_{\operatorname{\mathbb{F}_{\ell}}} \Lambda(L/\operatorname{\mathbb{F}_{\ell}}\{\sigma_0,\sigma_1,\rho\})$ when $\ell \neq 2$, because graded-commutativity forces $\rho^2=0$; and $\mathbb{F}_2\{1,\sigma_{1-\epsilon}\} \otimes_{\mathbb{F}_2} \mathbb{F}_2[\rho^2] \otimes_{\mathbb{F}_2} \Lambda_{\mathbb{F}_2}(L/\mathbb{F}_2\{\sigma_0,\sigma_1,\rho\})$ if $\ell=2$. By the \textit{slope} of an element we shall mean the ratio between its homological degree and the rank of its $\mathsf{H}$-valued grading. Since $\rho^2$ has slope $1/2$ and $\sigma_{1-\epsilon}$ has homological degree $0$ and rank $1$, in order to prove the required vanishing line it suffices to show that all the elements in $L/\operatorname{\mathbb{F}_{\ell}}\{\sigma_0,\sigma_1,\rho\}$ have slope $\geq 1/2$. Since the slope of $Q_{\ell}^I(y)$ is always larger than or equal to that of $y$, and the slope of the Browder bracket of two elements is always greater than the minimum of their slopes, the only elements in $L$ that have slope less than $1/2$ are those in the span of $\sigma_0, \sigma_1$, giving the result. \end{proof} \end{proof} \subsection{Proof of Theorem \ref{theorem stab 2}} \begin{proof} The idea of the proof is identical to the previous one, so we will not spell out all the details, but we will focus instead on the extra complications that arise in the computations, especially in Step 3. \textbf{Step 1.} We will construct a certain cellular $E_2$-algebra $\mathbf{S_{\mathds{k}}}$ and show that it suffices to prove that $H_{x,d}(\mathbf{\overline{S_{\mathds{k}}}}/\sigma_{\epsilon})=0$ for $3d \leq 2 \operatorname{rk}(x)-3$. The assumptions of the statement imply that $[\sigma_0,\sigma_1]= \sigma_{\epsilon} \cdot y$ for some $y \in H_{(1,1-\epsilon),1}(\mathbf{X})$, that $Q_{\mathds{k}}^1(\sigma_1)= \sigma_{\epsilon} \cdot x + t Q_{\mathds{k}}^1(\sigma_0)$ for some $x \in H_{(1,\epsilon),1}(\mathbf{X})$ and some $t \in \mathbb{Z}$, and that $\sigma_{1-\epsilon} \cdot Q_{\mathds{k}}^1(\sigma_0)= \sigma_{\epsilon}^2 \cdot z \in H_{(3,1-\epsilon),1}(\mathbf{X})$ for some $z \in H_{(1,1-\epsilon),1}(\mathbf{X})$.
Let \begin{equation*} \begin{aligned} \mathbf{S_{\mathds{k}}}:=\mathbf{E_2}(S_{\mathds{k}}^{(1,0),0} \sigma_0 \oplus S_{\mathds{k}}^{(1,1),0} \sigma_1 \oplus S_{\mathds{k}}^{(1,\epsilon),1} x \oplus S_{\mathds{k}}^{(1,1-\epsilon),1} y \oplus S_{\mathds{k}}^{(1,1-\epsilon),1}z) \\\cup_{\sigma_1^2-\sigma_0^2}^{E_2}{D_{\mathds{k}}^{(2,0),1} \rho} \cup_{Q_{\mathds{k}}^1(\sigma_1)-\sigma_{\epsilon} \cdot x-t Q_{\mathds{k}}^1(\sigma_0)}^{E_2} {D_{\mathds{k}}^{(2,0),2} X} \cup_{[\sigma_0,\sigma_1]-\sigma_{\epsilon} \cdot y}^{E_2}{D_{\mathds{k}}^{(2,1),2} Y} \\ \cup_{\sigma_{1-\epsilon} \cdot Q_{\mathds{k}}^1(\sigma_0)- \sigma_{\epsilon}^2 \cdot z}^{E_2}{D_{\mathds{k}}^{(3,1-\epsilon),2} Z} \in \operatorname{Alg}_{E_2}(\operatorname{\mathsf{sMod}}_{\mathds{k}}^{\mathsf{H}}). \end{aligned} \end{equation*} By proceeding as in Step 1 of the proof of Theorem \ref{theorem stab 1} there is an $E_2$-algebra map $f: \mathbf{S_{\mathds{k}}} \rightarrow \mathbf{X}$ sending each of $\sigma_0,\sigma_1, x,y,z$ to the corresponding homology classes in $\mathbf{X}$ with the same name. \begin{claim} $H_{x,d}^{E_2}(\mathbf{X},\mathbf{S_{\mathds{k}}})=0$ for $d<2\operatorname{rk}(x)/3$. \end{claim} Assuming the claim, we can apply \cite[Corollary 15.10]{Ek} with $\rho(x)= 2\operatorname{rk}(x)/3$, $\mu(x)=(2\operatorname{rk}(x)-2)/3$ and $\mathbf{M}=\mathbf{\overline{S_{\mathds{k}}}}/\sigma_{\epsilon}$ to obtain the required reduction. Thus, to finish this step we just need to show the claim. \begin{proof}[Proof of Claim.] Proceeding as in the proof of Proposition \ref{prop Ak} one can compute $Q_{\mathbb{L}}^{E_2}(\mathbf{S_{\mathds{k}}})$ and check that $H_{x,d}^{E_2}(\mathbf{S_{\mathds{k}}})=0$ for $d<\operatorname{rk}(x)-1$. Since $\mathbf{X}$ has the same vanishing line on its $E_2$-homology it suffices to check that $H_{x,d}^{E_2}(\mathbf{X},\mathbf{S_{\mathds{k}}})=0$ for $(\operatorname{rk}(x)=1, d=0)$ and $(\operatorname{rk}(x)=2, d=1)$. For $(\operatorname{rk}(x)=1,d=0)$ we use \cite[Corollary 11.12]{Ek} to reduce it to showing that $H_{x,0}(\mathbf{X},\mathbf{S_{\mathds{k}}})=0$, as in Step 1 in the proof of Theorem \ref{theorem stab 1}. This holds because the 0th homology of $\mathbf{X}$ in rank $1$ is generated by $\sigma_0,\sigma_1$, which factor through $f$ by construction. For $(\operatorname{rk}(x)=2,d=1)$ the argument will be more elaborate but will use the same ideas. Pick sets of $\mathds{k}$-module generators $\{u_a\}_{a \in A}$ for $H_{0,1}(\mathbf{X})$ and $\{v_b\}_{b \in B}$ for $H_{(1,0),1}(\mathbf{X}) \oplus H_{(1,1),1}(\mathbf{X})$, where each $v_b$ has $\mathsf{H}$-grading $(1,\epsilon_b)$ for some $\epsilon_b \in \{0,1\}$. Consider $\mathbf{\Tilde{S}_{\mathds{k}}}:= \mathbf{S_{\mathds{k}}} \vee^{E_2} \mathbf{E_2}(\bigoplus_{a \in A} S_{\mathds{k}}^{0,1} u_a \oplus \bigoplus_{b \in B} S_{\mathds{k}}^{(1,\epsilon_b),1} v_b)$.
The map $f: \mathbf{S_{\mathds{k}}} \rightarrow \mathbf{X}$ factors through the canonical map $\mathbf{S_{\mathds{k}}} \rightarrow \mathbf{\Tilde{S}_{\mathds{k}}}$ in the obvious way, so we get a long exact sequence in $E_2$-homology for the triple $\mathbf{S_{\mathds{k}}} \rightarrow \mathbf{\Tilde{S}_{\mathds{k}}} \rightarrow \mathbf{X}$: $$ \cdots \rightarrow H_{x,1}^{E_2}(\mathbf{\Tilde{S}_{\mathds{k}}},\mathbf{S_{\mathds{k}}}) \rightarrow H_{x,1}^{E_2}(\mathbf{X},\mathbf{S_{\mathds{k}}}) \rightarrow H_{x,1}^{E_2}(\mathbf{X},\mathbf{\Tilde{S}_{\mathds{k}}}) \rightarrow \cdots.$$ The first term vanishes by direct computation of $Q_{\mathbb{L}}^{E_2}(\mathbf{\Tilde{S}_{\mathds{k}}})$ because $\operatorname{rk}(x)=2$, so it suffices to show that the third term vanishes too. We will use \cite[Corollary 11.12]{Ek} to reduce it to showing that $H_{x',d'}(\mathbf{X},\mathbf{\Tilde{S}_{\mathds{k}}})=0$ for $d' \leq 1$ and $\operatorname{rk}(x') \leq 2$. For a given $x' \in \mathsf{H}$ with $\operatorname{rk}(x') \leq 2$ we have an exact sequence \[\Scale[0.86]{\cdots \rightarrow H_{x',1}(\mathbf{\Tilde{S}_{\mathds{k}}}) \rightarrow H_{x',1}(\mathbf{X}) \rightarrow H_{x',1}(\mathbf{X},\mathbf{\Tilde{S}_{\mathds{k}}}) \rightarrow H_{x',0}(\mathbf{\Tilde{S}_{\mathds{k}}}) \rightarrow H_{x',0}(\mathbf{X}) \rightarrow H_{x',0}(\mathbf{X},\mathbf{\Tilde{S}_{\mathds{k}}}) \rightarrow 0},\] so it suffices to show that $H_{x',1}(\mathbf{\Tilde{S}_{\mathds{k}}}) \rightarrow H_{x',1}(\mathbf{X})$ is surjective and that $H_{x',0}(\mathbf{\Tilde{S}_{\mathds{k}}}) \rightarrow H_{x',0}(\mathbf{X})$ is an isomorphism. The isomorphism part in degree $0$ holds because $H_{*,0}(\mathbf{\Tilde{S}_{\mathds{k}}})=\mathds{k}[\sigma_0,\sigma_1]/(\sigma_1^2-\sigma_0^2)$ as a ring: the proof is analogous to the computation of the 0th homology of $\mathbf{A_{\kk}}$ in the proof of Proposition \ref{prop Ak} because the extra cells that we have in $\mathbf{\Tilde{S}_{\mathds{k}}}$ are either in degree $\geq 2$ or in degree $1$ but attached trivially, so they have no effect on the homological degree $0$ part of the spectral sequence. The surjectivity in degree $1$ holds by construction if $\operatorname{rk}(x') \leq 1$. When $\operatorname{rk}(x')=2$ it holds by assumptions (i) and (ii) in the statement of the theorem plus the surjectivity in ranks $\leq 1$. \end{proof} \textbf{Step 2.} Now we will further reduce to the case $\mathbf{S}_{\operatorname{\mathbb{F}_{\ell}}}$ for $\ell$ an odd prime or $0$. By proceeding as in Proposition \ref{prop Ak} we find that $\mathbf{S_{\mathds{k}}}= \mathbf{S}_{\mathbb{Z}[1/2]} \otimes_{\mathbb{Z}[1/2]} \mathds{k}$, so it suffices to consider the case $\mathds{k}=\mathbb{Z}[1/2]$ by the universal coefficients theorem. By reasoning as in Step 2 in the proof of Theorem \ref{theorem stab 1} we get that the homology groups of $\mathbf{S}_{\mathbb{Z}[1/2]}/ \sigma_{\epsilon}$ are finitely generated ${\mathbb{Z}[1/2]}$-modules because $\mathbf{S}_{\mathbb{Z}[1/2]}$ only has finitely many $E_2$-cells. Thus, another application of the universal coefficients theorem allows us to reduce to the case $\mathds{k}=\operatorname{\mathbb{F}_{\ell}}$ with $\ell$ either an odd prime or $0$. \textbf{Step 3.} Since we are working with $\operatorname{\mathbb{F}_{\ell}}$-coefficients for a fixed $\ell$ we will drop the $\ell$ and $\operatorname{\mathbb{F}_{\ell}}$ subscripts from now on.
Let us begin by considering the cellular attachment filtration of $\mathbf{S}$, see \cite[Section 6.2.1]{Ek} for details, where the last grading denotes the filtration. \begin{equation*} \begin{aligned} \mathbf{fS}:=\mathbf{E_2}(S^{(1,0),0,0} \sigma_0 \oplus S^{(1,1),0,0} \sigma_1 \oplus S^{(1,\epsilon),1,0} x \oplus S^{(1,1-\epsilon),1,0} y \oplus S^{(1,1-\epsilon),1,0} z) \\ \cup_{\sigma_1^2-\sigma_0^2}^{E_2}{D^{(2,0),1,1} \rho} \cup_{Q^1(\sigma_1)-\sigma_{\epsilon} \cdot x-t Q^1(\sigma_0)}^{E_2} {D^{(2,0),2,1} X} \cup_{[\sigma_0,\sigma_1]-\sigma_{\epsilon} \cdot y}^{E_2}{D^{(2,1),2,1} Y} \\ \cup_{\sigma_{1-\epsilon} \cdot Q^1(\sigma_0)- \sigma_{\epsilon}^2 \cdot z}^{E_2}{D^{(3,1-\epsilon),2,1} Z} \in \operatorname{Alg}_{E_2}((\operatorname{\mathsf{sMod}}_{\operatorname{\mathbb{F}_{\ell}}}^{\mathsf{H}})^{\mathbb{Z}_{\leq}}) \end{aligned} \end{equation*} This gives two spectral sequences as in Step 3 of the proof of Theorem \ref{theorem stab 1}: \begin{enumerate}[(i)] \item $F^1_{x,p,q}=H_{x,p+q,p}(\overline{\operatorname{gr}(\mathbf{fS})}) \Rightarrow H_{x,p+q}(\mathbf{\overline{S}})$ \item $E^1_{x,p,q}=H_{x,p+q,p}(\overline{\operatorname{gr}(\mathbf{fS})}/\sigma_{\epsilon}) \Rightarrow H_{x,p+q}(\mathbf{\overline{S}}/\sigma_{\epsilon}).$ \end{enumerate} The first spectral sequence is multiplicative, its first page is $\Lambda(L)$ where $L$ is the $\operatorname{\mathbb{F}_{\ell}}$-vector space with basis $Q^I(u)$ such that $u$ is a basic Lie word in $\{\sigma_0,\sigma_1,x,y,z,\rho,X,Y,Z\}$ and $I$ is admissible; and its $d^1$-differential satisfies $d^1(\sigma_0)=0$, $d^1(\sigma_1)=0$, $d^1(x)=0$, $d^1(y)=0$, $d^1(z)=0$, $d^1(\rho)=\sigma_1^2-\sigma_0^2$, $d^1(X)=Q^1(\sigma_1)-\sigma_{\epsilon} \cdot x-t Q^1(\sigma_0)$, $d^1(Y)=[\sigma_0,\sigma_1]-\sigma_{\epsilon} \cdot y$ and $d^1(Z)=\sigma_{1-\epsilon} \cdot Q^1(\sigma_0)-\sigma_{\epsilon}^2 \cdot z$. The second spectral sequence has the structure of a module over the first one, and its first page is $E^1= \Lambda(L/\operatorname{\mathbb{F}_{\ell}}\{\sigma_{\epsilon}\})$, so $(E^1,d^1)$ has the structure of a CDGA. Thus, in order to finish the proof it suffices to show that $E^2_{x,p,q}=0$ for $p+q<(2 \operatorname{rk}(x)-2)/3$. We will show the required vanishing line on $E^2$ by introducing a filtration on the CDGA $(E^1,d^1)$, similar to the one in Step 3 of the proof of Theorem \ref{theorem stab 1}. We let $\mathcal{F}^{\bullet} E^1$ be the filtration in which $\sigma_{1-\epsilon}$, $x$, $y$, $z$, $\rho$, $Q^1(\sigma_0)$, $Q^1(\sigma_1)$, $[\sigma_0,\sigma_1]$, $X$, $Y$, $Z$ are given filtration $0$, the remaining elements of a homogeneous basis of $L/\operatorname{\mathbb{F}_{\ell}}\{\sigma_{\epsilon}\}$ extending these are given filtration equal to their homological degree, and we extend the filtration to $\Lambda(L/\operatorname{\mathbb{F}_{\ell}}\{\sigma_{\epsilon}\})$ multiplicatively. This gives a spectral sequence converging to $E^2$ whose first page is the homology of the associated graded of the filtration $\mathcal{F}^{\bullet} E^1$. We will show the vanishing line on the first page of this spectral sequence. Applying \cite[Theorems 16.7 and 16.8]{Ek} gives that $d^1([\sigma_0,\sigma_1])=0$, $d^1(Q^1(\sigma_0))=0$ and $d^1(Q^1(\sigma_1))=0$.
This allows us to split the associated graded as a tensor product \begin{equation*} \begin{aligned} (\operatorname{gr}(\mathcal{F}^{\bullet} E^1),D)= (\Lambda(\operatorname{\mathbb{F}_{\ell}}\{\sigma_{1-\epsilon},\rho,Q^1(\sigma_0),Z,Q^1(\sigma_1),X\}),D) \otimes_{\operatorname{\mathbb{F}_{\ell}}} \\ (\Lambda(\operatorname{\mathbb{F}_{\ell}}\{[\sigma_0,\sigma_1],Y\}),D) \otimes_{\operatorname{\mathbb{F}_{\ell}}} (\Lambda(\operatorname{\mathbb{F}_{\ell}}\{x,y,z\}),0) \otimes_{\operatorname{\mathbb{F}_{\ell}}} \\ (\Lambda(L)/\operatorname{\mathbb{F}_{\ell}}\{\sigma_0,\sigma_1,x,y,z,\rho,Q^1(\sigma_0),Q^1(\sigma_1),[\sigma_0,\sigma_1],X,Y,Z\},0) \end{aligned} \end{equation*} where $D$ is the induced differential and satisfies $D(\sigma_{1-\epsilon})=0$, $D(\rho)=(-1)^{\epsilon} \sigma_{1-\epsilon}^2$, $D(Q^1(\sigma_0))=0$, $D(Z)=\sigma_{1-\epsilon} \cdot Q^1(\sigma_0)$, $D(Q^1(\sigma_1))=0$, $D(X)=Q^1(\sigma_1)-t Q^1(\sigma_0)$, $D([\sigma_0,\sigma_1])=0$, $D(Y)=[\sigma_0,\sigma_1]$. By the Künneth theorem it suffices to compute the homology of each of the factors separately. By direct computation we see that \begin{enumerate}[$\bullet$] \item Elements in $\Lambda(L)/\operatorname{\mathbb{F}_{\ell}}\{\sigma_0,\sigma_1,x,y,z,\rho,Q^1(\sigma_0),Q^1(\sigma_1),[\sigma_0,\sigma_1],X,Y,Z\}$ have slope $\geq 2/3$. \item Elements in $\Lambda(\operatorname{\mathbb{F}_{\ell}}\{x,y,z\})$ have slope $\geq 1$. \item The homology of the factor $(\Lambda(\operatorname{\mathbb{F}_{\ell}}\{[\sigma_0,\sigma_1],Y\}),D) $ is the polynomial ring $\operatorname{\mathbb{F}_{\ell}}[Y^{\ell}]$, so all its elements have slope $\geq 1$. \end{enumerate} Thus, it suffices to check that $H_*(\Lambda(\operatorname{\mathbb{F}_{\ell}}\{\sigma_{1-\epsilon},\rho,Q^1(\sigma_0),Z,Q^1(\sigma_1),X\}),D)$ vanishes for $3 d<2 \operatorname{rk}-2$, where $d$ denotes the homological degree. The remainder of the proof is devoted to studying this CDGA. We will separate this as an extra step because it requires some additional filtrations and work. \textbf{Step 4.} We first claim that it suffices to consider $t=0$: $\sigma_{1-\epsilon}$, $\rho$, $Q^1(\sigma_0)$, $Q^1(\sigma_1)$, $X$, $Z$ are now just the generators of a certain CDGA. Since both $Q^1(\sigma_0)$ and $Q^1(\sigma_1)$ lie in $\ker(D)$ and have the same homological degree and rank, the change of variables $Q^1(\sigma_1) \mapsto Q^1(\sigma_1)-t Q^1(\sigma_0)$ reparameterises $t \mapsto 0$. Secondly, once we are in the case $t=0$, we can further split the CDGA as a tensor product $$(\Lambda(\operatorname{\mathbb{F}_{\ell}}\{\sigma_{1-\epsilon},\rho,Q^1(\sigma_0),Z\}),D) \otimes_{\operatorname{\mathbb{F}_{\ell}}} (\Lambda(\operatorname{\mathbb{F}_{\ell}}\{X,Q^1(\sigma_1)\}),D)$$ and a direct computation shows that the homology of the second factor is the polynomial ring $\operatorname{\mathbb{F}_{\ell}}[X^{\ell}]$, so all its elements have slope $\geq 1$. Thus, it suffices to prove that $H_*(\Lambda(\operatorname{\mathbb{F}_{\ell}}\{\sigma_{1-\epsilon},\rho,Q^1(\sigma_0),Z\}),D)$ vanishes for $3 d<2 \operatorname{rk}-2$. For this, we will introduce an additional filtration by giving $Q^1(\sigma_0)$ filtration $0$, $\sigma_{1-\epsilon}$ filtration $1$ and $\rho, Z$ filtration $2$, and then extending the filtration multiplicatively to the whole CDGA.
The differential $D$ preserves this filtration: indeed, $D(\rho)=(-1)^{\epsilon}\sigma_{1-\epsilon}^2$ has filtration $2$, equal to that of $\rho$, while $D(Z)=\sigma_{1-\epsilon} \cdot Q^1(\sigma_0)$ has filtration $1<2$, so $Z$ becomes a cycle in the associated graded. The associated graded therefore splits as a tensor product $$(\Lambda(\sigma_{1-\epsilon},\rho),D(\sigma_{1-\epsilon})=0, D(\rho)=(-1)^{\epsilon} \sigma_{1-\epsilon}^2) \otimes_{\operatorname{\mathbb{F}_{\ell}}} (\Lambda(Q^1(\sigma_0),Z),0)$$ so, using that $\ell \neq 2$ to compute the homology of the first factor, we get a multiplicative spectral sequence of the form $$\mathcal{E}^1= \operatorname{\mathbb{F}_{\ell}}[\sigma_{1-\epsilon}]/(\sigma_{1-\epsilon}^2) \otimes_{\operatorname{\mathbb{F}_{\ell}}} \Lambda(Q^1(\sigma_0),Z) \Rightarrow H_{*}(\Lambda(\operatorname{\mathbb{F}_{\ell}}\{\sigma_{1-\epsilon},\rho,Q^1(\sigma_0),Z\}),D)$$ whose first differential satisfies $D^1(Z)=\sigma_{1-\epsilon} \cdot Q^1(\sigma_0)$, $D^1(\sigma_{1-\epsilon})=0$ and $D^1(Q^1(\sigma_0))=0$. To finish the proof we will establish the required vanishing range on $\mathcal{E}^2$. To do so, we write $\mathcal{E}^1=\operatorname{\mathbb{F}_{\ell}}\{1,\sigma_{1-\epsilon},Q^1(\sigma_0),\sigma_{1-\epsilon} \cdot Q^1(\sigma_0)\} \otimes \operatorname{\mathbb{F}_{\ell}}[Z]$ as a $\operatorname{\mathbb{F}_{\ell}}$-vector space, and then compute $\ker(D^1),\operatorname{im}(D^1)$ explicitly as $\operatorname{\mathbb{F}_{\ell}}$-vector spaces: $$\ker(D^1)=(\sigma_{1-\epsilon})+(Q^1(\sigma_0))+\operatorname{\mathbb{F}_{\ell}}[Z^{\ell}]$$ and $$\operatorname{im}(D^1)=\sigma_{1-\epsilon} \cdot Q^1(\sigma_0) \cdot \operatorname{\mathbb{F}_{\ell}}\{Z^i: \ell \nmid i+1\}.$$ Thus, we get that $\mathcal{E}^2=\ker(D^1)/\operatorname{im}(D^1)$ is, as a $\operatorname{\mathbb{F}_{\ell}}$-vector space, given by $$\mathcal{E}^2=\operatorname{\mathbb{F}_{\ell}}[Z^{\ell}]+\sigma_{1-\epsilon} \cdot \operatorname{\mathbb{F}_{\ell}}[Z]+ Q^1(\sigma_0) \cdot \operatorname{\mathbb{F}_{\ell}}[Z]+ \sigma_{1-\epsilon} \cdot Q^1(\sigma_0) \cdot \operatorname{\mathbb{F}_{\ell}}\{Z^i: \ell \mid i+1\}.$$ Using the bidegrees of the generators we find that the first summand vanishes for $d<\operatorname{rk}$, the second vanishes for $d<\operatorname{rk}-1$, and the third one for $d-1<\operatorname{rk}-2$; so in particular they all vanish for $3d<2 \operatorname{rk}-2$. Finally, elements of the fourth summand lie in bidegrees of the form $(3+2(\ell N-1),1+2(\ell N-1))$ for some $N \geq 1$, so that summand also vanishes if $3d <2 \operatorname{rk}-2$, as required. (This fourth summand is just $0$ when $\ell=0$.) \end{proof} \subsection{Construction of the class $\theta$} In this Section we will explain how the class $\theta \in H_{(4,0),2}(\mathbf{X})$ of Theorem \ref{thm stab 3} is defined. The first step will be to define $\theta \in H_{(4,0),2}(\operatorname{\mathbf{A_{\FF}}})$. Since we will only work with $\operatorname{\mathbb{F}_2}$-coefficients for now, we will drop all the $\operatorname{\mathbb{F}_2}$-indices. Consider the spectral sequence (i) of the proof of Theorem \ref{theorem stab 1}: $$F^1_{x,p,q}=H_{x,p+q,p}(\overline{\mathbf{E_2}(S^{(1,0),0,0} \sigma_0 \oplus S^{(1,1),0,0} \sigma_1 \oplus S^{(2,0),1,1} \rho)}) \Rightarrow H_{x,p+q}(\mathbf{\overline{A}}).$$ As we said, this is a multiplicative spectral sequence whose first page is given by $\operatorname{\mathbb{F}_2}[L]$, where $L$ is the $\operatorname{\mathbb{F}_2}$-vector space with basis $Q^I(y)$ such that $y$ is a basic Lie word in $\{\sigma_0,\sigma_1,\rho\}$ and $I$ is admissible. (Note that this time we get a free commutative algebra instead of a graded-commutative one, as we work with $\operatorname{\mathbb{F}_2}$-coefficients.)
Thus we have $F^1_{(4,0),2,0}=\operatorname{\mathbb{F}_2}\{\rho^2\}$. \begin{claim} $\rho^2$ survives to $F^{\infty}$. \end{claim} \begin{proof} Since $F^1_{(4,0),2+r,1-r}=0$ for $r \geq 1$, $\rho^2$ cannot be a boundary of any $d^r$-differential. Moreover, $d^r: F^r_{(4,0),2,0} \rightarrow F^r_{(4,0),2-r,r-1}$ vanishes for $r>2$ since $\mathbf{fA}$ vanishes in negative filtration. Thus, it suffices to show that both $d^1(\rho^2)$ and $d^2(\rho^2)$ vanish. By the Leibniz rule we have $d^1(\rho^2)=0$, so we only need to show that $d^2(\rho^2)=0$. Since $\rho^2=Q^1(\rho)$ and $d^1(\rho)=\sigma_1^2-\sigma_0^2$, \cite[Theorem 16.8 (i)]{Ek} gives that $d^2(\rho^2)$ is represented by $Q^1(\sigma_1^2-\sigma_0^2)$. (As a technical note let us mention that the result we just quoted is stated for $E_{\infty}$-algebras, but the same result holds for $E_2$-algebras as explained in \cite[Page 184]{Ek}.) Finally, $Q^1(\sigma_1^2-\sigma_0^2)$ vanishes by the properties of $Q^1$ shown in \cite[Section 16.2.2]{Ek}. \end{proof} \begin{definition}\label{def theta} The class $\theta \in H_{(4,0),2}(\operatorname{\mathbf{A_{\FF}}})$ is defined to be any lift of the class $[\rho^2] \in F^{\infty}_{(4,0),2,0}$. Given $\mathbf{X}$ satisfying the assumptions of Theorem \ref{thm stab 3}, we define $\theta \in H_{(4,0),2}(\mathbf{X})$ as follows: we pick an $E_2$-map $\mathbf{A} \xrightarrow{c} \mathbf{X}$ as in Step 1 in the proof of Theorem \ref{theorem stab 1}, and set $\theta:= c_*(\theta) \in H_{(4,0),2}(\mathbf{X})$. \end{definition} \begin{rem} \label{rem indeterminacy} There is no unique choice of the class $\theta$; however, the statement of Theorem \ref{thm stab 3} will be true for \textit{any choice} of class $\theta$ with the property of Definition \ref{def theta}. In fact, $\theta$ is well-defined up to adding any linear combination of $Q^1(\sigma_0)^2$, $Q^1(\sigma_0) \cdot Q^1(\sigma_1)$ and $Q^1(\sigma_1)^2$, or multiples of $\sigma_0^2=\sigma_1^2$, to it. (This fact will not be needed in the rest of the paper, but we include an explanation below.) \end{rem} In order to justify the above remark one can use the above spectral sequence and check that \begin{enumerate} \item $F^1_{(4,0),0,2}$ is generated by $Q^1(\sigma_0)^2$, $Q^1(\sigma_0) \cdot Q^1(\sigma_1)$ and $Q^1(\sigma_1)^2$, and all these terms are permanent cycles and not boundaries. \item $d^1: F^1_{(4,0),1,1} \rightarrow F^1_{(4,0),0,1}$ is injective, and hence $F^2_{(4,0),1,1}=0$. \end{enumerate} Thus, $\theta \in H_{(4,0),2}(\operatorname{\mathbf{A_{\FF}}})$ is well-defined up to a linear combination of $Q^1(\sigma_0)^2$, $Q^1(\sigma_0) \cdot Q^1(\sigma_1)$ and $Q^1(\sigma_1)^2$. The definition of the map $c$ is not unique, as we need to choose a nullhomotopy of $\sigma_1^2-\sigma_0^2$ in $\mathbf{X}$, and the set of such choices is a $H_{(2,0),1}(\mathbf{X})$-torsor. In particular, by assumptions (i) and (ii) about $\mathbf{X}$, any new choice of $\rho$ differs by a class in $\operatorname{im}(\sigma_{\epsilon} \cdot -)$ or by a multiple of $Q^1(\sigma_0)$, giving the result. \subsection{The proof of Theorem \ref{thm stab 3}} \label{section proof of thm stab 3} Before proving the theorem let us briefly recall the construction of $\mathbf{X}/(\sigma_{\epsilon},\theta)$. We start by viewing $\theta$ as a homotopy class of maps $S^{(4,0),2} \rightarrow \mathbf{X}$.
Then, using the adapters construction (see \cite[Section 12.3]{Ek}) we get an $\mathbf{\overline{X}}$-module map $S^{(4,0),2} \otimes \mathbf{\overline{X}}/\sigma_{\epsilon} \xrightarrow{\theta \cdot-} \mathbf{\overline{X}}/\sigma_{\epsilon}$ and we define $\mathbf{\overline{X}}/(\sigma_{\epsilon},\theta)$ to be its cofibre (in the category of $\mathbf{\overline{X}}$-modules). \begin{proof} The proof will be very similar to that of Theorem \ref{theorem stab 2}, so we will focus on the parts that are different and skip details. \textbf{Step 1.} We will construct a certain cellular $E_2$-algebra $\mathbf{S}$ and show that it suffices to prove that $H_{x,d}(\mathbf{\overline{S}}/(\sigma_{\epsilon},\theta))=0$ for $3d \leq 2 \operatorname{rk}(x)-3$. The assumptions of the statement imply that $[\sigma_0,\sigma_1]= \sigma_{\epsilon} \cdot y$ for some $y \in H_{(1,1-\epsilon),1}(\mathbf{X})$, that $Q^1(\sigma_1)= \sigma_{\epsilon} \cdot x + t Q^1(\sigma_0)$ for some $x \in H_{(1,\epsilon),1}(\mathbf{X})$ and some $t \in \operatorname{\mathbb{F}_2}$, and that $\sigma_{1-\epsilon} \cdot Q^1(\sigma_0)= \sigma_{\epsilon}^2 \cdot z \in H_{(3,1-\epsilon),1}(\mathbf{X})$ for some $z \in H_{(1,1-\epsilon),1}(\mathbf{X})$. Moreover, we claim that there is $u \in H_{(4,0),3}(\mathbf{X})$ such that $Q^1(\sigma_0)^3=\sigma_{\epsilon}^2 \cdot u$. Indeed, condition (iv) says that $\sigma_0 \cdot Q^1(\sigma_0)= \sigma_{\epsilon}^2 \cdot \tau$ for some $\tau \in H_{(1,0),1}(\mathbf{X})$, and then we can apply $Q^2(-)$ to both sides and use the formulae in \cite[Section 16.2.2]{Ek} to find $Q^1(\sigma_0)^3+ \sigma_0^2 \cdot Q^2(Q^1(\sigma_0))+\sigma_0[\sigma_0,Q^1(\sigma_0)] Q^1(\sigma_0)= \sigma_{\epsilon}^2 \cdot [\sigma_{\epsilon},\sigma_{\epsilon}] \cdot Q^1(\tau)+\sigma_{\epsilon}^4 \cdot Q^2(\tau)+\sigma_{\epsilon}^2\cdot [\sigma_{\epsilon}^2,\tau] \cdot \tau$, hence the result as $\sigma_{\epsilon}^2=\sigma_0^2$ and as $[\sigma_0,Q^1(\sigma_0)]=[\sigma_0,[\sigma_0,\sigma_0]]=0$ (by \cite[Section 16.2.2]{Ek} again). Let \begin{equation*} \begin{aligned} \mathbf{S}:=\mathbf{E_2}(S^{(1,0),0} \sigma_0 \oplus S^{(1,1),0} \sigma_1 \oplus S^{(1,\epsilon),1} x \oplus S^{(1,1-\epsilon),1} y \oplus S^{(1,1-\epsilon),1} z \oplus S^{(4,0),3}u) \\\cup_{\sigma_1^2-\sigma_0^2}^{E_2}{D^{(2,0),1} \rho} \cup_{Q^1(\sigma_1)-\sigma_{\epsilon} \cdot x-t Q^1(\sigma_0)}^{E_2} {D^{(2,0),2} X} \cup_{[\sigma_0,\sigma_1]-\sigma_{\epsilon} \cdot y}^{E_2}{D^{(2,1),2} Y} \\ \cup_{\sigma_{1-\epsilon} \cdot Q^1(\sigma_0)- \sigma_{\epsilon}^2 \cdot z}^{E_2}{D^{(3,1-\epsilon),2} Z} \cup_{Q^1(\sigma_0)^3-\sigma_{\epsilon}^2 \cdot u}^{E_2}{D^{(6,0),4} U} \in \operatorname{Alg}_{E_2}(\operatorname{\mathsf{sMod}}_{\operatorname{\mathbb{F}_2}}^{\mathsf{H}}) \end{aligned} \end{equation*} By proceeding as in Step 1 of the proof of Theorem \ref{theorem stab 1}, there is an $E_2$-algebra map $f: \mathbf{S} \rightarrow \mathbf{X}$ sending each of $\sigma_0,\sigma_1,x,y,z,u$ to the corresponding homology classes in $\mathbf{X}$ with the same name. Moreover, we can assume that $f$ extends any given map $\mathbf{A} \rightarrow \mathbf{X}$ and hence that it sends $\theta \mapsto \theta$. \begin{claim} $H_{x,d}^{E_2}(\mathbf{X},\mathbf{S})=0$ for $d<2/3 \operatorname{rk}(x)$. \end{claim} The proof is identical to the corresponding claim in Step 1 in the proof of Theorem \ref{theorem stab 2}. The only difference now is that $\mathbf{S}$ has a cell $U$ below the ``critical line'' $d=\operatorname{rk}-1$.
However, it causes no trouble since it has bidegree $(\operatorname{rk}=6,d=4)$, so it lies in the region $3d \geq 2 \operatorname{rk}$. Assuming the claim, we can apply \cite[Corollary 15.10]{Ek} with $\rho(x)= 2\operatorname{rk}(x)/3$, $\mu(x)=(2\operatorname{rk}(x)-2)/3$ and $\mathbf{M}=\mathbf{\overline{S}}/(\sigma_{\epsilon},\theta)$ to obtain the required reduction. \textbf{Step 2.} We proceed as in Step 3 in the proof of Theorem \ref{theorem stab 2} to get a cell attachment filtration $\mathbf{fS} \in \operatorname{Alg}_{E_2}((\operatorname{\mathsf{sMod}}_{\operatorname{\mathbb{F}_2}}^{\mathsf{H}})^{\mathbb{Z}_{\leq}})$. The key now is to observe that $\theta \in H_{(4,0),2}(\mathbf{S})$ lifts to a filtered map $\theta: S^{(4,0),2,2} \rightarrow \mathbf{fS}$ which maps to $\rho^2 \in H_{*,*,*}(\operatorname{gr}(\mathbf{fS}))$. Indeed, $\theta \in H_{(4,0),2}(\mathbf{A})=H_{(4,0),2}(\operatorname{colim}(\mathbf{fA}))=\operatorname{colim}_f(H_{(4,0),2,f}(\mathbf{fA}))$, so it can be represented by a class $\theta \in H_{(4,0),2,f}(\mathbf{fA})$ for some $f$ large. In fact, $f=2$ is the smallest possible such value since the obstruction to lifting the class $\theta \in H_{(4,0),2,f}(\mathbf{fA})$ to a class in $ H_{(4,0),2,f-1}(\mathbf{fA})$ is precisely the image of $\theta$ in $H_{(4,0),2,f}(\operatorname{gr}(\mathbf{fA}))$, which is $\rho^2$ by definition, giving the result. Finally observe that there is a canonical map of filtered $E_2$-algebras $\mathbf{fA} \rightarrow \mathbf{fS}$. Thus, we get spectral sequences \begin{enumerate}[(i)] \item $F^1_{x,p,q}=H_{x,p+q,p}(\overline{\operatorname{gr}(\mathbf{fS})}) \Rightarrow H_{x,p+q}(\mathbf{\overline{S}})$ \item $E^1_{x,p,q}=H_{x,p+q,p}(\overline{\operatorname{gr}(\mathbf{fS})}/(\sigma_{\epsilon},\theta)) \Rightarrow H_{x,p+q}(\mathbf{\overline{S}}/(\sigma_{\epsilon},\theta)).$ \end{enumerate} The first spectral sequence is multiplicative, its first page is $\operatorname{\mathbb{F}_2}[L]$ where $L$ is the $\operatorname{\mathbb{F}_2}$-vector space with basis $Q^I(\alpha)$ such that $\alpha$ is a basic Lie word in $\{\sigma_0,\sigma_1,x,y,z,u,\rho,X,Y,Z,U\}$ and $I$ is admissible; and its $d^1$-differential satisfies $d^1(\sigma_0)=0$, $d^1(\sigma_1)=0$, $d^1(x)=0$, $d^1(y)=0$, $d^1(z)=0$, $d^1(u)=0$, $d^1(\rho)=\sigma_1^2-\sigma_0^2$, $d^1(X)=Q^1(\sigma_1)-\sigma_{\epsilon} \cdot x-t Q^1(\sigma_0)$, $d^1(Y)=[\sigma_0,\sigma_1]-\sigma_{\epsilon} \cdot y$, $d^1(Z)=\sigma_{1-\epsilon} \cdot Q^1(\sigma_0)-\sigma_{\epsilon}^2 \cdot z$ and $d^1(U)=Q^1(\sigma_0)^3-\sigma_{\epsilon}^2 \cdot u$. The second spectral sequence has the structure of a module over the first one, and its first page is $E^1= \operatorname{\mathbb{F}_2}[L/\operatorname{\mathbb{F}_2}\{\sigma_{\epsilon}\}]/(\rho^2)$ because $\theta$ maps to $\rho^2$ in the homology of the associated graded, so $(E^1,d^1)$ has the structure of a CDGA. Thus, in order to finish the proof it suffices to show that $E^2_{x,p,q}=0$ for $p+q<(2 \operatorname{rk}(x)-2)/3$. \textbf{Step 3.} Now we will introduce additional filtrations to simplify the CDGA until we get the required result.
The first filtration is similar to the one of Step 3 in the proof of Theorem \ref{theorem stab 2}: we give $\sigma_{1-\epsilon}$, $x$, $y$, $z$, $u$, $\rho$, $Q^1(\sigma_0)$, $Q^1(\sigma_1)$, $[\sigma_0,\sigma_1]$, $X$, $Y$, $Z$, $U$ filtration $0$, we give the remaining elements of a homogeneous basis of $L/\operatorname{\mathbb{F}_2}\{\sigma_{\epsilon}\}$ extending these filtration equal to their homological degree, and we extend the filtration to $\operatorname{\mathbb{F}_2}[L/\operatorname{\mathbb{F}_2}\{\sigma_{\epsilon}\}]/(\rho^2)$ multiplicatively (which we can do as $\rho$ has filtration $0$). This allows us to split the associated graded as a tensor product, and all the factors are concentrated in the region $3d \geq 2 \operatorname{rk}$ except possibly the one given by $$(\operatorname{\mathbb{F}_2}[\sigma_{1-\epsilon},\rho,Q^1(\sigma_0),Q^1(\sigma_1),X,Z,U]/(\rho^2),D)$$ where the non-zero part of $D$ is characterized by $D(X)=Q^1(\sigma_1)-tQ^1(\sigma_0)$, $D(Z)=\sigma_{1-\epsilon} \cdot Q^1(\sigma_0)$ and $D(U)=Q^1(\sigma_0)^3$. Then, we can proceed as in Step 4 in the proof of Theorem \ref{theorem stab 2} to reduce to the case $t=0$ and hence split the CDGA further to simplify it to $$(\operatorname{\mathbb{F}_2}[\sigma_{1-\epsilon},\rho,Q^1(\sigma_0),Z,U]/(\rho^2),D).$$ Next we introduce a new filtration by giving $\sigma_{1-\epsilon}, \rho, Q^1(\sigma_0)$ filtration $0$, and $Z,U$ filtration $1$, and then extending multiplicatively. The associated graded of this splits as a tensor product $$(\operatorname{\mathbb{F}_2}[\sigma_{1-\epsilon},\rho]/(\rho^2), D(\rho)=\sigma_{1-\epsilon}^2) \otimes_{\operatorname{\mathbb{F}_2}} (\operatorname{\mathbb{F}_2}[Q^1(\sigma_0),Z,U],0)$$ and the homology of the first factor is precisely $\operatorname{\mathbb{F}_2}[\sigma_{1-\epsilon}]/(\sigma_{1-\epsilon}^2)$, yielding a spectral sequence of the form \[\Scale[0.9]{\mathcal{E}^1= \operatorname{\mathbb{F}_2}[\sigma_{1-\epsilon}]/(\sigma_{1-\epsilon}^2) \otimes_{\operatorname{\mathbb{F}_2}} \operatorname{\mathbb{F}_2}[Q^1(\sigma_0),Z,U] \Rightarrow H_*(\operatorname{\mathbb{F}_2}[\sigma_{1-\epsilon},\rho,Q^1(\sigma_0),Z,U]/(\rho^2),D)}\] whose first differential $D^1$ satisfies $D^1(Z)= \sigma_{1-\epsilon} \cdot Q^1(\sigma_0)$ and $D^1(U)=Q^1(\sigma_0)^3$. We will establish the required vanishing line on $\mathcal{E}^2$ of this spectral sequence. For that we will introduce yet another filtration by letting $\sigma_{1-\epsilon}, Q^1(\sigma_0), U$ have filtration $0$ and $Z$ have filtration $1$. The associated graded is given by $$(\operatorname{\mathbb{F}_2}[\sigma_{1-\epsilon},Z]/(\sigma_{1-\epsilon}^2),0) \otimes_{\operatorname{\mathbb{F}_2}} (\operatorname{\mathbb{F}_2}[Q^1(\sigma_0),U],\delta(U)=Q^1(\sigma_0)^3)$$ where $\delta$ is the new differential. Thus, its homology is given by $$\operatorname{\mathbb{F}_2}[\sigma_{1-\epsilon}, Q^1(\sigma_0), Z,U^2]/(\sigma_{1-\epsilon}^2,Q^1(\sigma_0)^3)$$ and the $\delta^1$-differential satisfies $\delta^1(Z)= \sigma_{1-\epsilon} \cdot Q^1(\sigma_0)$. Since $U$ itself has slope $2/3$, in order to prove the required vanishing line we can just focus on the remaining part $$(\operatorname{\mathbb{F}_2}[\sigma_{1-\epsilon}, Q^1(\sigma_0), Z]/(\sigma_{1-\epsilon}^2,Q^1(\sigma_0)^3),\delta^1(Z)=\sigma_{1-\epsilon} \cdot Q^1(\sigma_0)).$$ For that we explicitly compute $\ker(\delta^1)$ and $\operatorname{im}(\delta^1)$ as $\operatorname{\mathbb{F}_2}$-vector spaces (similarly to the last CDGA of the proof of Theorem \ref{theorem stab 2}).
$$\ker(\delta^1)= (\sigma_{1-\epsilon})+(Q^1(\sigma_0)^2)+\operatorname{\mathbb{F}_2}\{1,Q^1(\sigma_0)\} \cdot \operatorname{\mathbb{F}_2}[Z^2]$$ and $$\operatorname{im}(\delta^1)=\operatorname{\mathbb{F}_2}\{\sigma_{1-\epsilon} \cdot Q^1(\sigma_0), \sigma_{1-\epsilon} \cdot Q^1(\sigma_0)^2\} \cdot \operatorname{\mathbb{F}_2}[Z^2].$$ Thus we get \begin{equation*} \begin{aligned} \ker(\delta^1)/\operatorname{im}(\delta^1)= \sigma_{1-\epsilon} \cdot \operatorname{\mathbb{F}_2}[Z]+ \sigma_{1-\epsilon} \cdot Q^1(\sigma_0) \cdot \operatorname{\mathbb{F}_2}\{Z^i: 2 \nmid i\}+ \\ \sigma_{1-\epsilon} \cdot Q^1(\sigma_0)^2 \cdot \operatorname{\mathbb{F}_2}\{Z^i: 2 \nmid i\}+\operatorname{\mathbb{F}_2}\{1,Q^1(\sigma_0)\} \cdot \operatorname{\mathbb{F}_2}[Z^2]+Q^1(\sigma_0)^2 \cdot \operatorname{\mathbb{F}_2}[Z]. \end{aligned} \end{equation*} Using the bidegrees of the generators it is immediate that all but the third term vanish for $3d < 2 \operatorname{rk}-2$. Elements of the third term have bidegree $(1,0)+(4,2)+(2i,2i)$ for some $i \geq 1$ odd, and $3(2+2i) \geq 2(5+2i)-2$ for $i \geq 1$, hence the result. \end{proof} Finally we will finish the Section by giving the Corollary of Theorem \ref{thm stab 3} which is used in Section \ref{section intro}. \begin{corollary} \label{cor 2 torsion} Let $\mathbf{X}$ be as in Theorem \ref{thm stab 3}. Then \begin{enumerate}[(i)] \item If $\theta^2 \in H_{(8,0),4}(\mathbf{X})$ does not destabilise by $\sigma_{\epsilon}$ then $H_{(4k,0),2k}(\mathbf{\overline{X}}/\sigma_{\epsilon}) \neq 0$ for all $k \geq 1$, and in particular the optimal slope for the stability is $1/2$. \item If $\theta^2 \in H_{(8,0),4}(\mathbf{X})$ destabilises by $\sigma_{\epsilon}$ then $H_{x,d}(\mathbf{\overline{X}}/\sigma_{\epsilon})=0$ for $3d \leq 2\operatorname{rk}(x)-5$, so $\mathbf{X}$ satisfies homological stability of slope at least $2/3$ with respect to $\sigma_{\epsilon}$. \item If $\theta \in H_{(4,0),2}(\mathbf{X})$ destabilises by $\sigma_{\epsilon}$ then we can improve the previous stability bound to $H_{x,d}(\mathbf{\overline{X}}/\sigma_{\epsilon})=0$ for $3d \leq 2\operatorname{rk}(x)-3$. \end{enumerate} \end{corollary} \begin{proof} By definition (using the adapters construction as explained in Section \ref{section proof of thm stab 3}) there is a cofibration of left $\mathbf{\overline{X}}$-modules $$S^{(4,0),2} \otimes \mathbf{\overline{X}}/\sigma_{\epsilon} \xrightarrow{\theta \cdot -} \mathbf{\overline{X}}/\sigma_{\epsilon} \rightarrow \mathbf{\overline{X}}/(\sigma_{\epsilon},\theta),$$ and hence a corresponding long exact sequence in homology groups, which implies that $\theta \cdot -: H_{x-(4,0),d-2}(\mathbf{\overline{X}}/\sigma_{\epsilon}) \rightarrow H_{x,d}(\mathbf{\overline{X}}/\sigma_{\epsilon})$ is surjective for $3d \leq 2 \operatorname{rk}(x)-3$ and an isomorphism for $3d \leq 2 \operatorname{rk}(x)-6$. Similarly, the cofibration of left $\mathbf{\overline{X}}$-modules $S^{(1,\epsilon),0} \otimes \mathbf{\overline{X}} \xrightarrow{\sigma_{\epsilon} \cdot -} \mathbf{\overline{X}}\rightarrow \mathbf{\overline{X}}/\sigma_{\epsilon}$ gives another long exact sequence in homology groups. \textbf{Proof of (i).} If $\theta^2$ does not destabilise by $\sigma_{\epsilon}$ then the second long exact sequence gives $\theta^2 \neq 0 \in H_{(8,0),4}(\mathbf{\overline{X}}/\sigma_{\epsilon})$.
But, by the computation at the beginning of this proof, $\theta \cdot -: H_{(4(k-1),0),2(k-1)}(\mathbf{\overline{X}}/\sigma_{\epsilon}) \rightarrow H_{(4k,0),2k}(\mathbf{\overline{X}}/\sigma_{\epsilon})$ is an isomorphism for $k \geq 3$, so $\theta^k \neq 0 \in H_{(4k,0),2k}(\mathbf{\overline{X}}/\sigma_{\epsilon})$ for all $k \geq 2$; since also $\theta \neq 0$ (as $\theta^2 \neq 0$), the result follows. \textbf{Proof of (ii).} Since $\theta^2$ destabilises by $\sigma_{\epsilon}$, the second long exact sequence gives $\theta^2= 0 \in H_{(8,0),4}(\mathbf{\overline{X}}/\sigma_{\epsilon})$. Thus the map $\theta^2 \cdot -: H_{x-(8,0),d-4}(\mathbf{\overline{X}}/\sigma_{\epsilon}) \rightarrow H_{x,d}(\mathbf{\overline{X}}/\sigma_{\epsilon})$ is zero by construction. However, by the computation at the beginning of this proof, $H_{x-(8,0),d-4}(\mathbf{\overline{X}}/\sigma_{\epsilon}) \xrightarrow{\theta^2 \cdot -} H_{x,d}(\mathbf{\overline{X}}/\sigma_{\epsilon})$ is surjective provided $3d \leq 2 \operatorname{rk}(x)-3$ and $3(d-2) \leq 2(\operatorname{rk}(x)-4)-3$, i.e. provided $3d \leq 2 \operatorname{rk}(x)-5$. The result then follows. \textbf{Proof of (iii).} Analogous to the previous part but using $\theta \cdot -$ instead of $\theta^2 \cdot -$. \end{proof} \begin{rem} \label{rem theta well-defined} By Remark \ref{rem indeterminacy} we know that $\theta$ itself is not well-defined. However, the map $\theta \cdot -: H_{x-(4,0),d-2}(\mathbf{\overline{X}}/\sigma_{\epsilon}) \rightarrow H_{x,d}(\mathbf{\overline{X}}/\sigma_{\epsilon})$ is well-defined up to adding $Q^1(\sigma_0)^2 \cdot -$, and the map $\theta^2 \cdot -$ is well-defined. This is shown by using Remark \ref{rem indeterminacy} and the assumptions on $\mathbf{X}$ about the classes $Q^1(\sigma_0)$, $Q^1(\sigma_1)$, plus the fact that $Q^1(\sigma_0)^3$ destabilises, as explained in Step 1 of the proof of Theorem \ref{thm stab 3}. \end{rem} \section{$E_2$-algebras from quadratic data} \label{section 4} There is a general framework of how to get an $E_2$-algebra from a braided monoidal groupoid; see \cite[Section 17.1]{Ek}. In this section we will consider braided monoidal groupoids with the extra data of a strong braided monoidal functor to $\mathsf{Set}$, and we will observe that the ``Grothendieck construction'' yields another braided monoidal groupoid, called the \textit{associated quadratic groupoid}, and hence another $E_2$-algebra. This construction generalizes the way quadratic symplectic groups are constructed from symplectic groups and the way that spin mapping class groups are related to mapping class groups, if we let the extra data be the set of quadratic refinements (hence the use of the term ``quadratic''). We will also study the relationship between the $E_2$-algebra of the original groupoid and the one of the associated quadratic groupoid; in particular Theorem \ref{theorem splitting complexes} and Corollary \ref{cor std connectivity} allow us to get some vanishing lines in the $E_2$-homology of the associated quadratic $E_2$-algebra from knowledge of the original groupoid. \subsection{Definition and construction of the $E_2$-algebras} \label{section quadratic data} Let us start by introducing some notation based on that of \cite[Section 17]{Ek}. All the categories in the rest of this section are discrete. A \textit{braided monoidal groupoid} is a triple $(\mathsf{G},\oplus,\mathds{1})$, where $\mathsf{G}$ is a groupoid, $\oplus$ is a braided monoidal structure on $\mathsf{G}$ and $\mathds{1}$ is the monoidal unit.
For an object $x \in \mathsf{G}$ we write $\mathsf{G}_x:=\mathsf{G}(x,x)=\operatorname{Aut}_{\mathsf{G}}(x)$. We can view any monoid as an example of a monoidal groupoid where the only morphisms are the identity; for example $\mathbb{N}$ is naturally a symmetric monoidal groupoid, so in particular braided. \begin{definition} \label{defn quadratic data} A quadratic data consists of a triple $(\mathsf{G},\operatorname{rk},Q)$ where \begin{enumerate}[(i)] \item $\mathsf{G}=(\mathsf{G},\oplus,\mathds{1})$ is a braided monoidal groupoid such that $\mathsf{G}_{\mathds{1}}$ is trivial and for any objects $x,y \in \mathsf{G}$ the map $- \oplus -: \mathsf{G}_x \times \mathsf{G}_y \rightarrow \mathsf{G}_{x \oplus y}$ is injective. \item $\operatorname{rk}: \mathsf{G} \rightarrow \mathbb{N}$ is a braided monoidal functor such that $\operatorname{rk}^{-1}(0)$ consists precisely of those objects isomorphic to $\mathds{1}$. \item $Q: \mathsf{G}^{\mathsf{op}} \rightarrow \mathsf{Set}$ is a strong braided monoidal functor. \end{enumerate} \end{definition} Parts (i) and (ii) are precisely the assumptions needed to apply all the constructions of \cite[Section 17]{Ek}, and part (iii) is the extra ``quadratic'' data. One should think of $Q(x)$ as the set of ``quadratic refinements'' of the object $x$; and strong monoidality implies in particular that $Q(\mathds{1})$ is a one element set. \begin{definition} Given a quadratic data $(\mathsf{G},\operatorname{rk},Q)$, its associated quadratic groupoid is the braided monoidal groupoid $(\mathsf{G^q},\oplus^{\mathsf{q}},\mathds{1}^{\mathsf{q}})$ given by the Grothendieck construction $\mathsf{G} \wreath Q$, i.e. \begin{enumerate}[(i)] \item The set of objects of $\mathsf{G^q}$ is $\bigsqcup_{x \in \mathsf{G}}{Q(x)}$. \item The sets of morphisms are given as follows: for $q \in Q(x)$ and $q' \in Q(x')$, $\mathsf{G^q}(q,q')=\{\phi \in \mathsf{G}(x,x'): Q(\phi)(q')=q\}$. \item The braided monoidal structure $\oplus^{\mathsf{q}}$ is induced by the strong braided monoidality of $Q$, and the monoidal unit $\mathds{1}^{\mathsf{q}}$ is given by the only element in $Q(\mathds{1})$. \end{enumerate} \end{definition} Let us denote by $\operatorname{rk}^{\mathsf{q}}: \mathsf{G^q} \rightarrow \mathbb{N}$ the braided monoidal functor given by $q \in Q(x) \mapsto \operatorname{rk}(x)$. By construction the group $\mathsf{G^q}_{\mathds{1}^{\mathsf{q}}}$ is trivial and for any objects $q,q' \in \mathsf{G^q}$ the map $- \oplus^{\mathsf{q}} -: \mathsf{G^q}_q \times \mathsf{G^q}_{q'} \rightarrow \mathsf{G^q}_{q \oplus^{\mathsf{q}} q'}$ is injective. Also, $(\operatorname{rk}^{\mathsf{q}})^{-1}(0)$ consists precisely of those objects isomorphic to $\mathds{1}^{\mathsf{q}}$. Thus, $(\mathsf{G^q},\oplus^{\mathsf{q}},\mathds{1}^{\mathsf{q}},\operatorname{rk}^{\mathsf{q}})$ satisfies all the assumptions of \cite[Section 17]{Ek}, so by \cite[Section 17.1]{Ek} there is $\mathbf{R^q} \in \operatorname{Alg}_{E_2}(\operatorname{\mathsf{sSet}}^{\mathbb{N}})$ such that $$\mathbf{R^q}(n) \simeq \left\{ \begin{array}{lcc} \emptyset & if & n=0 \\ \underset{[q] \in \pi_0(\mathsf{G^q}): \; \operatorname{rk}^{\mathsf{q}}(q)=n}{\bigsqcup}{B \mathsf{G^q}_q} & if & n>0. \end{array} \right.$$ We shall call $\mathbf{R^q}$ the \textit{quadratic $E_2$-algebra} associated to a quadratic data. 
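For instance (a degenerate example, included only to illustrate the definitions): taking $Q$ to be the constant functor with value a one-element set is a valid choice of quadratic data for any $(\mathsf{G},\operatorname{rk})$ satisfying (i) and (ii); in that case $\mathsf{G^q} \cong \mathsf{G}$, and the quadratic $E_2$-algebra agrees with the $E_2$-algebra associated to $(\mathsf{G},\operatorname{rk})$ itself, which we recall below.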
Alternatively, in the explicit construction of $\mathbf{R^q}$ in \cite[Section 17.1]{Ek} we can perform the Kan extension along the projection $\mathsf{G^q} \rightarrow \pi_0(\mathsf{G^q})$ instead of along $\mathsf{G^q} \xrightarrow{\operatorname{rk}^{\mathsf{q}}} \mathbb{N}$, and hence we can view $\mathbf{R^q} \in \operatorname{Alg}_{E_2}(\operatorname{\mathsf{sSet}}^{\pi_0(\mathsf{G^q})})$ such that $$\mathbf{R^q}([q]) \simeq \left\{ \begin{array}{lcc} \emptyset & if \; q \cong \mathds{1}^{\mathsf{q}} \\ B \mathsf{G^q}_q & otherwise. \end{array} \right.$$ We will not distinguish between these two, as sometimes it will be more convenient to think of $\mathbf{R^q}$ as being $\mathbb{N}$-graded and at other times as $\pi_0(\mathsf{G^q})$-graded. \begin{rem} \label{remark path components} When we view $\mathbf{R^q}$ as $\pi_0(\mathsf{G^q})$-graded we have that $\mathbf{R^q}([q])$ is path-connected for any $[q] \neq 0 \in \pi_0(\mathsf{G^q})$. Thus, the strictly associative algebra $\overline{\mathbf{R^q}}$ satisfies that $\pi_0(\overline{\mathbf{R^q}}) \cong \pi_0(\mathsf{G^q})$ as monoids, where the monoid structure on the left-hand-side is induced by the product. In particular, the ring $H_{*,0}(\overline{\mathbf{R^q}})$ is determined by the monoidal structure of $\pi_0(\mathsf{G^q})$. \end{rem} Similarly, we can apply the construction of \cite[Section 17.1]{Ek} to $(\mathsf{G},\oplus,\mathds{1},\operatorname{rk})$ to get $\mathbf{R} \in \operatorname{Alg}_{E_2}(\operatorname{\mathsf{sSet}}^{\mathbb{N}})$ such that $$\mathbf{R}(n) \simeq \left\{ \begin{array}{lcc} \emptyset & if & n=0 \\ \underset{[x] \in \pi_0(\mathsf{G}): \; \operatorname{rk}(x)=n}{\bigsqcup}{B \mathsf{G}_x} & if & n>0. \end{array} \right.$$ We will refer to $\mathbf{R}$ as the \textit{non-quadratic $E_2$-algebra}. The obvious braided monoidal functor $\mathsf{G^q} \rightarrow \mathsf{G}$ then induces an $E_2$-algebra map $\mathbf{R^q} \rightarrow \mathbf{R}$. \subsection{$E_1$-splitting complexes of quadratic groupoids} Recall \cite[Definition 17.9]{Ek} that given a monoidal groupoid $\mathsf{G}$ with a rank functor $\operatorname{rk}: \mathsf{G} \rightarrow \mathbb{N}$ satisfying properties (i) and (ii) of Definition \ref{defn quadratic data} and an element $x \in \mathsf{G}$, the \textit{$E_1$-splitting complex} $S^{E_1,\mathsf{G}}_{\bullet}(x)$ is the semisimplicial set with $p$-simplices given by $$S^{E_1,\mathsf{G}}_p(x):= \underset{(x_0,\cdots,x_{p+1}) \in \mathsf{G}_{\operatorname{rk}>0}^{p+2}}{\operatorname{colim}}{\mathsf{G}(x_0 \oplus \cdots \oplus x_{p+1},x)}$$ and face maps given by the monoidal structure. (Where $\mathsf{G}_{\operatorname{rk}>0}$ denotes the full subgroupoid of $\mathsf{G}$ on those objects $x$ with $\operatorname{rk}(x)>0$, i.e. on those objects not isomorphic to $\mathds{1}$.) The main result of this section is the following theorem, which allows us to understand the splitting complexes of quadratic groupoids. \begin{theorem} \label{theorem splitting complexes} Let $(\mathsf{G},\operatorname{rk},Q)$ be a quadratic data. Then for any $q \in Q(x)$ there is an isomorphism of semisimplicial sets $S_{\bullet}^{E_1,\mathsf{G^q}}(q) \cong S_{\bullet}^{E_1,\mathsf{G}}(x)$.
\end{theorem} \begin{proof} By definition $$S_{p}^{E_1,\mathsf{G}}(x)= \underset{(x_0,\cdots,x_{p+1}) \in \mathsf{G}_{\operatorname{rk}>0}^{p+2}}{\operatorname{colim}}{\mathsf{G}(x_0 \oplus \cdots \oplus x_{p+1},x)}$$ and $$S_p^{E_1,\mathsf{G^q}}(q)= \underset{(q_0,\cdots,q_{p+1}) \in \mathsf{G^q}_{\operatorname{rk}^{\mathsf{q}}>0}^{p+2}}{\operatorname{colim}}{\mathsf{G^q}(q_0 \oplus^{\mathsf{q}} \cdots \oplus^{\mathsf{q}} q_{p+1},q)}$$ The inclusions $\mathsf{G^q}(q_0 \oplus^{\mathsf{q}} \cdots \oplus^{\mathsf{q}} q_{p+1},q) \subset \mathsf{G}(x_0 \oplus \cdots \oplus x_{p+1},x)$, for each $(q_0,\cdots,q_{p+1}) \in \mathsf{G^q}_{\operatorname{rk}^{\mathsf{q}}>0}^{p+2}$ with $q_i \in Q(x_i)$ and $q \in Q(x)$, assemble into canonical maps \[\Scale[0.75]{S_p^{E_1,\mathsf{G^q}}(q)=\underset{(q_0,\cdots,q_{p+1}) \in \mathsf{G^q}_{\operatorname{rk}^{\mathsf{q}}>0}^{p+2}}{\operatorname{colim}}{\mathsf{G^q}(q_0 \oplus^{\mathsf{q}} \cdots \oplus^{\mathsf{q}} q_{p+1},q)} \rightarrow \underset{(x_0,\cdots,x_{p+1}) \in \mathsf{G}_{\operatorname{rk}>0}^{p+2}}{\operatorname{colim}}{\mathsf{G}(x_0 \oplus \cdots \oplus x_{p+1},x)}=S_p^{E_1,\mathsf{G}}(x)}\] which are compatible with the face maps of both semisimplicial sets because the natural functor $\mathsf{G^q} \rightarrow \mathsf{G}$ is monoidal. Thus, it suffices to show that $S_{p}^{E_1,\mathsf{G^q}}(q) \rightarrow S_{p}^{E_1,\mathsf{G}}(x)$ is a bijection of sets for all $p$. Surjectivity: any element on the right hand side is represented by some $\phi \in \mathsf{G}(x_0 \oplus \cdots \oplus x_{p+1},x)$, which is an isomorphism since $\mathsf{G}$ is a groupoid. Since $Q$ is strong monoidal, $Q(\phi): Q(x) \xrightarrow{\cong} Q(x_0) \times \cdots \times Q(x_{p+1})$ is an isomorphism. Let $q_i:= \operatorname{proj}_i ( Q(\phi)(q)) \in Q(x_i)$, then $\phi \in \mathsf{G^q}(q_0 \oplus^{\mathsf{q}} \cdots \oplus^{\mathsf{q}} q_{p+1},q)$ defines an element on the left hand side mapping to the required element. Injectivity: suppose that two elements on the left hand side have the same image on the right hand side. Represent them by $\phi^i \in \mathsf{G^q}(q_0^i \oplus^{\mathsf{q}} \cdots \oplus^{\mathsf{q}} q_{p+1}^i,q)$, where $Q(\phi^i)(q)=q_0^i \oplus^{\mathsf{q}} \cdots \oplus^{\mathsf{q}} q_{p+1}^i$ for $i \in \{1,2\}$. Since $\phi^1$ and $\phi^2$ agree on the colimit of the right hand side, there is an element $\phi \in \mathsf{G}(x_0 \oplus \cdots \oplus x_{p+1},x)$ and morphisms $(\psi_0^i,\cdots ,\psi_{p+1}^i): (x_0^i,\cdots ,x_{p+1}^i) \rightarrow (x_0, \cdots ,x_{p+1})$ in $\mathsf{G}_{\operatorname{rk}>0}^{p+2}$ such that $\phi^i \circ (\psi_0^i, \cdots,\psi_{p+1}^i)^{-1}= \phi$ for $i \in \{1,2\}$. Let ${q'}_a^i:= Q({\psi_a^i}^{-1})(q_a^i) \in Q(x_a)$; we claim that ${q'}_a^1={q'}_a^2$ for all $a$: $Q(\psi_0^i \oplus \cdots \oplus \psi_{p+1}^i) ({q'}_0^i \oplus^{\mathsf{q}} \cdots \oplus^{\mathsf{q}} {q'}_{p+1}^i)= (q_0^i \oplus^{\mathsf{q}} \cdots \oplus^{\mathsf{q}} q_{p+1}^i)=Q(\phi^i)(q)$ and hence ${q'}_0^i \oplus^{\mathsf{q}} \cdots \oplus^{\mathsf{q}} {q'}_{p+1}^i=Q(\phi^i\circ (\psi_0^i \oplus \cdots \oplus \psi_{p+1}^i)^{-1})(q)=Q(\phi)(q)$. Since $Q(\phi)(q)$ is independent of $i \in \{1,2\}$, the claim follows by the strong monoidality of $Q$ since ${q'}_a^1, {q'}_a^2 \in Q(x_a)$ for all $a$.
Now let $q_a:={q'}_a^1={q'}_a^2$, then by definition $Q(\psi_0^i \oplus \cdots \oplus \psi_{p+1}^i) ({q}_0 \oplus^{\mathsf{q}} \cdots \oplus^{\mathsf{q}} {q}_{p+1})= (q_0^i \oplus^{\mathsf{q}} \cdots \oplus^{\mathsf{q}} q_{p+1}^i)$ and hence $(\psi_0^i, \cdots,\psi_{p+1}^i) \in \mathsf{G^q}_{\operatorname{rk}^{\mathsf{q}}>0}^{p+2}$. Since $\phi^i \circ (\psi_0^i, \cdots,\psi_{p+1}^i)^{-1}= \phi$ for $i \in \{1,2\}$ by construction, $\phi^1$ and $\phi^2$ agree on the left hand side colimit, as required. \end{proof} Recall from \cite[Definition 17.6, Lemma 17.10]{Ek}: we say that $(\mathsf{G},\oplus,\mathds{1},\operatorname{rk})$ \textit{satisfies the standard connectivity estimate} if for any $x \in \mathsf{G}$, the reduced homology of $S^{E_1,\mathsf{G}}(x):=||S_{\bullet}^{E_1,\mathsf{G}}(x)||$ is concentrated in degree $\operatorname{rk}(x)-2$. As explained in \cite[Page 188]{Ek} the standard connectivity estimate implies that $H_{n,d}^{E_1}(\mathbf{R})=0$ for $d<n-1$, where $\mathbf{R}$ is the $E_2$-algebra defined in Section \ref{section quadratic data}. The following corollary says that the standard connectivity estimate on the underlying braided groupoid of a quadratic data also gives a vanishing line on the $E_2$-homology of the corresponding quadratic $E_2$-algebra. \begin{corollary} \label{cor std connectivity} If $(\mathsf{G},\operatorname{rk},Q)$ is a quadratic data such that $(\mathsf{G},\operatorname{rk})$ satisfies the standard connectivity estimate then $H_{x,d}^{E_2}(\mathbf{R^q})=0$ for $d<\operatorname{rk}(x)-1$. \end{corollary} \begin{proof} By Theorem \ref{theorem splitting complexes} and the standard connectivity estimate, the reduced homology of $S^{E_1,\mathsf{G^q}}(q)$ is concentrated in degree $\operatorname{rk}^{\mathsf{q}}(q)-2$ for any $q \in \mathsf{G^q}$. Thus, by \cite[Page 188]{Ek} we have $H_{x,d}^{E_1}(\mathbf{R^q})=0$ for $d<\operatorname{rk}(x)-1$. Finally, the ``transferring vanishing lines up'' theorem, \cite[Theorem 14.4]{Ek}, implies the result. \end{proof} \section{Quadratic symplectic groups} \label{section symplectic} \subsection{Construction of the $E_2$-algebra} For a given $g \geq 0$ we let the \textit{standard symplectic form} on $\mathbb{Z}^{2g}$ be the matrix $\Omega_g$ given by the block diagonal sum of $g$ copies of $\begin{psmallmatrix} 0 & 1 \\ -1 & 0 \end{psmallmatrix}.$ The \textit{genus $g$ symplectic group} is defined by $Sp_{2g}(\mathbb{Z}):= \operatorname{Aut}(\mathbb{Z}^{2g},\Omega_g)$. Let $(\mathsf{Sp},\oplus,0)$ be the symmetric monoidal groupoid with objects the non-negative integers, morphisms $\mathsf{Sp}(g,h)=\left\{ \begin{array}{lcc} Sp_{2g}(\mathbb{Z}) & if \; g=h \\ \emptyset & otherwise, \end{array} \right.$ where the (strict) monoidal structure $\oplus$ is given by addition on objects and block diagonal sum on morphisms, $0$ is the (strict) monoidal unit and the braiding $\beta_{g,h}: g \oplus h \xrightarrow{\cong} h \oplus g$ is given by the matrix $\begin{psmallmatrix} 0 & I_{2h} \\ I_{2g} & 0 \end{psmallmatrix}$, which satisfies $\beta_{g,h} \beta_{h,g}=\operatorname{id}_{g+h}$.
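As a quick sanity check of these formulae (not needed for the arguments; we use GAP, in the style of the sessions in the Appendix), one can verify for $g=h=1$ that the braiding matrix preserves the standard symplectic form and squares to the identity:
\begin{verbatim}
gap> Omega2 := [ [0,1,0,0], [-1,0,0,0], [0,0,0,1], [0,0,-1,0] ];;
gap> # the braiding beta_{1,1}, i.e. the block matrix [[0,I_2],[I_2,0]]
gap> beta := [ [0,0,1,0], [0,0,0,1], [1,0,0,0], [0,1,0,0] ];;
gap> TransposedMat(beta) * Omega2 * beta = Omega2;
true
gap> beta * beta = IdentityMat(4);
true
\end{verbatim}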
We let $\operatorname{rk}: \mathsf{Sp} \rightarrow \mathbb{N}$ be the symmetric monoidal functor given by identity on objects and let $Q: \mathsf{Sp^{op}} \rightarrow \mathsf{Set}$ be the functor given as follows \begin{enumerate}[(i)] \item On objects, $Q(g):=\{q: \mathbb{Z}^{2g} \rightarrow \mathbb{Z}/2: \; q(x+y)\equiv q(x)+ q(y)+ x \cdot y (\mod 2) \}$, where $\cdot$ represents the skew-symmetric product induced by the standard symplectic form. \item On morphisms, for $\phi \in Sp_{2g}(\mathbb{Z})$ and $q \in Q(g)$ we let $Q(\phi)(q)= q \circ \phi$. \end{enumerate} In other words, $Q(g)$ is the set of quadratic refinements on $(\mathbb{Z}^{2g},\Omega_g)$, as defined in Section \ref{section intro}. Strong symmetric monoidality of $Q$ follows from the fact that a quadratic refinement $q \in Q(g)$ is the same data as a function of sets from a basis of $\mathbb{Z}^{2g}$ to $\mathbb{Z}/2$. Thus, $(\mathsf{Sp},\operatorname{rk},Q)$ is a quadratic data in the sense of Definition \ref{defn quadratic data}. By Section \ref{section quadratic data} we get an associated quadratic groupoid $\mathsf{Sp^q}$ and a quadratic $E_2$-algebra $\mathbf{R^{\mathsf{q}}}$, which in this case is actually $E_{\infty}$ because the groupoid is symmetric and not just braided; however, this will not make a difference for the purposes of this paper. The next goal is to describe $\pi_0(\mathsf{Sp^q})$, which by Remark \ref{remark path components} gives a computation of $H_{*,0}(\overline{\mathbf{R^q}})$. In order to do so, we need to introduce the so-called \textit{Arf invariant}. \begin{definition} \label{defn arf} Given a quadratic refinement $q \in Q(g)$ of the standard symplectic form on $\mathbb{Z}^{2g}$, we define the Arf invariant of $q$ via $\operatorname{Arf}(q):=\sum_{i=1}^{g}{q(e_i) q(f_i)} \in \mathbb{Z}/2$, where $(e_1,f_1,\cdots,e_g,f_g)$ is the standard (ordered) basis of $\mathbb{Z}^{2g}$. \end{definition} The key property of this invariant is that for $q,q' \in Q(g)$ we have $\operatorname{Arf}(q)=\operatorname{Arf}(q')$ if and only if there exists $\phi \in Sp_{2g}(\mathbb{Z})$ such that $q'=Q(\phi)(q)$. Moreover, for $g \geq 1$ it is clear that $\operatorname{Arf}: Q(g) \rightarrow \mathbb{Z}/2$ is surjective. Before stating the next result recall the monoid $\mathsf{H}:= \{0\} \cup \mathbb{N}_{>0} \times \mathbb{Z}/2$, where the monoidal structure $+$ is given by addition in both coordinates, considered at the beginning of Section \ref{section Ek}. \begin{lemma} \label{lem arf inv} Taking rank and Arf invariant gives an isomorphism of monoids $(\operatorname{rk},\operatorname{Arf}): \pi_0(\mathsf{Sp^q}) \xrightarrow{\simeq} \mathsf{H}$. \end{lemma} \begin{proof} The map $(\operatorname{rk},\operatorname{Arf}): \pi_0(\mathsf{Sp^q}) \rightarrow \mathsf{H}$ is clearly surjective; it is injective and well-defined by the above discussion of the Arf invariant; and it is monoidal because $\operatorname{rk}$ is monoidal and $\operatorname{Arf}$ is also monoidal by its explicit formula. \end{proof} Under this identification we have that $\mathbf{R^q}(g,\epsilon) \simeq B Sp_{2g}^{\epsilon}(\mathbb{Z})$ is the classifying space of a quadratic symplectic group in the sense of Section \ref{section results}. Thus, by Section \ref{section E2 algebras overview}, Theorem \hyperref[theorem B]{B} is equivalent to a vanishing line for $H_{*,*}(\mathbf{\overline{\mathbf{R^q}}}/\sigma_{\epsilon};\mathds{k})$.
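To illustrate Definition \ref{defn arf} and the surjectivity of $\operatorname{Arf}$, the following GAP session (ours, in the style of the Appendix; the function name \texttt{ArfCount} is an ad hoc choice) enumerates the $2^{2g}$ quadratic refinements through their values on the standard basis and counts them by Arf invariant:
\begin{verbatim}
gap> ArfCount := function(g)
>      local qs, arf;
>      # a refinement is determined by its values on (e_1,f_1,...,e_g,f_g)
>      qs := Cartesian(ListWithIdenticalEntries(2*g, [0,1]));
>      arf := q -> Sum([1..g], i -> q[2*i-1]*q[2*i]) mod 2;
>      return [ Number(qs, q -> arf(q) = 0), Number(qs, q -> arf(q) = 1) ];
>    end;;
gap> ArfCount(1); ArfCount(2); ArfCount(3);
[ 3, 1 ]
[ 10, 6 ]
[ 36, 28 ]
\end{verbatim}
The counts are $2^{g-1}(2^g+1)$ refinements of Arf invariant $0$ and $2^{g-1}(2^g-1)$ of Arf invariant $1$, matching the two $\pi_0$-components in each rank $g \geq 1$ given by Lemma \ref{lem arf inv}.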
\subsection{Proof of Theorem \hyperref[theorem B]{B}} The only additional ingredient that we need to prove Theorem \hyperref[theorem B]{B} is to understand the $E_1$-splitting complex of $(\mathsf{Sp},\operatorname{rk})$. \begin{proposition} \label{prop std connect} $(\mathsf{Sp},\operatorname{rk})$ satisfies the standard connectivity estimate, i.e. for $g \in \mathbb{N}$ the reduced homology of $S^{E_1,\mathsf{Sp}}(g)$ is concentrated in degree $g-2$. \end{proposition} \begin{proof} Let $P(g)$ be the poset whose elements are submodules $0 \subsetneq M \subsetneq \mathbb{Z}^{2g}$ such that $(M,\Omega_g|_{M})$ is isomorphic to the standard symplectic form $(\mathbb{Z}^{2h},\Omega_h)$ for some $0<h<g$, ordered by inclusion. Let $P_{\bullet}(g)$ be the nerve of the poset, viewed as a semisimplicial set whose $p$-simplices are strict chains $M_0 \subsetneq M_1 \subsetneq \cdots \subsetneq M_p$ in $P(g)$, and whose face maps are given by forgetting elements in the chain. The first step of the proof compares the poset $P(g)$ with the $E_1$-splitting complex. \begin{claim} There is an isomorphism of semisimplicial sets $S_{\bullet}^{E_1,\mathsf{Sp}}(g) \rightarrow P_{\bullet}(g)$. \end{claim} \begin{proof} By \cite[Remark 17.11]{Ek} we have the following more concrete description of $S_{\bullet}^{E_1,\mathsf{Sp}}(g)$: $$S_{p}^{E_1,\mathsf{Sp}}(g)= \bigsqcup_{(g_0,\cdots,g_{p+1}): \; g_i>0, \; \sum_{i} g_i= g}{\frac{Sp_{2g}(\mathbb{Z})}{Sp_{2g_0}(\mathbb{Z}) \times Sp_{2g_1}(\mathbb{Z}) \times \cdots \times Sp_{2g_{p+1}}(\mathbb{Z})}}$$ with the obvious face maps. For each $0<n<g$ we let $M_n:=\mathbb{Z}^{2n} \oplus 0 \subset \mathbb{Z}^{2g}$, so that we have a chain $M_1< \cdots < M_{g-1}$ in $P(g)$. For each tuple $(g_0,\cdots,g_{p+1})$ with $g_i>0$ and $\sum_{i}{g_i}=g$ we have a $p$-simplex $\sigma_{g_0,\cdots,g_{p+1}}:= M_{g_0}< M_{g_0+g_1} < \cdots < M_{g_0+\cdots+g_p} \in P_p(g)$. The group $Sp_{2g}(\mathbb{Z})$ acts simplicially on $P_{\bullet}(g)$, and under this action the stabilizer of $\sigma_{g_0,\cdots,g_{p+1}}$ is precisely $Sp_{2g_0}(\mathbb{Z}) \times Sp_{2g_1}(\mathbb{Z}) \times \cdots \times Sp_{2g_{p+1}}(\mathbb{Z}) \subset Sp_{2g}(\mathbb{Z})$. Thus, we indeed get a levelwise injective map of semisimplicial sets $S_{\bullet}^{E_1,\mathsf{Sp}}(g) \rightarrow P_{\bullet}(g)$. Levelwise surjectivity follows from the fact that for a given $M \in P(g)$, any isomorphism $(M,\Omega_g|_M) \xrightarrow{\cong} (\mathbb{Z}^{2h},\Omega_h)$, where $2h=\operatorname{rk}(M)$, can be extended to an automorphism of $(\mathbb{Z}^{2g},\Omega_g)$. This is a consequence of the classification of non-degenerate skew-symmetric forms over finitely generated free $\mathbb{Z}$-modules. \end{proof} Let us denote $L:= (\mathbb{Z}^{2g},\Omega_g)$. The poset $P(g)$ is then the same as $\mathcal{U}(L)_{0<-<L}$ in the sense of \cite[Section 1]{spherical}. By \cite[Theorem 1.1]{spherical} the poset $\mathcal{U}(L)$ is Cohen-Macaulay of dimension $g$, and in particular the poset $\mathcal{U}(L)_{0<-<L}$ is $(g-3)$-connected and $(g-2)$-dimensional, giving the result. \end{proof} \begin{proof}[Proof of Theorem \hyperref[theorem B]{B}] \textbf{Part (i).} By Lemma \ref{lem arf inv} and Remark \ref{remark path components} we have $\mathbf{R^q} \in \operatorname{Alg}_{E_2}(\operatorname{\mathsf{sSet}}^{\mathsf{H}})$ such that $\mathbf{R^q}(x)$ is path-connected for each $x \in \mathsf{H}\setminus \{0\}$ and $\mathbf{R^q}(0)=\emptyset$.
Thus, $H_{0,0}(\mathbf{R^q})=0$ and $H_{*,0}(\mathbf{\overline{\mathbf{R^q}}})=\mathbb{Z}[\sigma_0,\sigma_1]/(\sigma_1^2-\sigma_0^2)$ as a ring, where $\sigma_{\epsilon}$ is represented by a point in $\mathbf{R^q}((1,\epsilon))$. By Proposition \ref{prop std connect}, $(\mathsf{Sp},\operatorname{rk})$ satisfies the standard connectivity estimate, and thus by Corollary \ref{cor std connectivity} we get that $H_{x,d}^{E_2}(\mathbf{R^q})=0$ for $d<\operatorname{rk}(x)-1$. If we now consider $\mathbf{X}:= \mathbf{\mathbf{R^q}_{\mathbb{Z}}} \in \operatorname{Alg}_{E_2}(\operatorname{\mathsf{sMod}}_{\mathbb{Z}}^{\mathsf{H}})$ then it satisfies the assumptions of Theorem \ref{theorem stab 1} by \cite[Lemma 18.2]{Ek} and the properties of $(-)_{\mathbb{Z}}$ explained in Section \ref{section E2 algebras overview}. Thus the claimed homological stability for $\mathbf{R^q}$ follows. \textbf{Part (ii).} This time let $\mathbf{X}:= \mathbf{\mathbf{R^q}_{\mathbb{Z}[1/2]}} \in \operatorname{Alg}_{E_2}(\operatorname{\mathsf{sMod}}_{\mathbb{Z}[1/2]}^{\mathsf{H}})$. As before, this algebra satisfies the unnumbered assumptions of Theorem \ref{theorem stab 2}. We will check that it also verifies assumptions (i), (ii) and (iii), and then the required stability will follow from Theorem \ref{theorem stab 2}. By the universal coefficients theorem, to check them it suffices to prove that $H_{x,1}(\mathbf{R^q})$ is $2$-torsion for $\operatorname{rk}(x) \in \{2,3\}$, which follows from Theorems \ref{thm: 6.7} and \ref{thm: 6.8} in the Appendix. \textbf{Secondary stability.} Let $\mathbf{X}:= \mathbf{\mathbf{R^q}_{\operatorname{\mathbb{F}_2}}} \in \operatorname{Alg}_{E_2}(\operatorname{\mathsf{sMod}}_{\operatorname{\mathbb{F}_2}}^{\mathsf{H}})$. Then Theorem \ref{thm stab 3} applies by Theorems \ref{thm: 6.7} and \ref{thm: 6.8} in the Appendix and the universal coefficients theorem. The result then follows by the long exact sequence of the cofibration $S^{(4,0),2} \otimes \mathbf{\overline{X}}/\sigma_{\epsilon} \rightarrow \mathbf{\overline{X}}/\sigma_{\epsilon} \rightarrow \mathbf{\overline{X}}/(\sigma_{\epsilon},\theta).$ \end{proof} \section{Spin mapping class groups} \label{section mcg} Consider the braided monoidal groupoid $(\mathsf{MCG},\oplus,0)$ defined in \cite[Section 4]{E2} whose objects are the non-negative integers and whose morphisms are given by $$\mathsf{MCG}(g,h)=\left\{ \begin{array}{lcc} \Gamma_{g,1} &if \; g=h \\ \emptyset & otherwise. \end{array} \right.$$ The monoidal structure on $\mathsf{MCG}$ is given by addition on objects and by ``gluing diffeomorphisms'' on morphisms, using the decomposition of $\Sigma_{g+h,1}$ as a boundary connected sum $\Sigma_{g,1} \natural \Sigma_{h,1}$. The braiding is induced by the half right-handed Dehn twist along the boundary. Let $\operatorname{rk}: \mathsf{MCG} \rightarrow \mathbb{N}$ be the braided monoidal functor given by identity on objects. Let $Q: \mathsf{MCG^{op}} \rightarrow \mathsf{Set}$ be the functor given as follows \begin{enumerate}[(i)] \item On objects, $$Q(g)=\{q:H_1(\Sigma_{g,1};\mathbb{Z}) \rightarrow \mathbb{Z}/2: \; q(x+y) \equiv q(x)+q(y)+x \cdot y (\mod 2)\},$$ where $\cdot$ is the homology intersection pairing. \item On morphisms, for $\phi \in \Gamma_{g,1}$ and $q \in Q(g)$ we let $Q(\phi)(q)=q \circ \phi_*$. \end{enumerate} In other words, $Q(g)$ is the set of quadratic refinements of the intersection product in $H_1(\Sigma_{g,1};\mathbb{Z})$, which is isomorphic to the standard hyperbolic form of genus $g$.
By the argument of Section \ref{section symplectic}, $Q$ is strong braided monoidal, so $(\mathsf{MCG},\operatorname{rk},Q)$ is a quadratic data. Moreover, by mimicking the proof of Lemma \ref{lem arf inv} we get that $(\operatorname{rk},\operatorname{Arf}): \pi_0(\mathsf{MCG^q}) \xrightarrow{\simeq} \mathsf{H}$ is a monoidal isomorphism. (This uses the surjectivity of the map $\Gamma_{g,1} \rightarrow Sp_{2g}(\mathbb{Z})$.) \begin{rem} Using \cite[Section 2]{rspin} one can check that $\mathbf{R^q}$ agrees with the ``moduli space of spin surfaces with one boundary component'', defined in more geometric terms using tangential structures. \end{rem} Since the $E_2$-algebra $\mathbf{R^q}$ satisfies that $\mathbf{R^q}(g,\epsilon) \simeq B \Gamma_{g,1}^{1/2}[\epsilon]$, Theorem \hyperref[theorem A]{A} is equivalent to certain vanishing lines in the homology of $\mathbf{R^q}/\sigma_{\epsilon}$ and $\mathbf{R^q}/(\sigma_{\epsilon},\theta)$. \subsection{Proof of Theorem \hyperref[theorem A]{A}} \begin{proof} In this case the standard connectivity estimate for $(\mathsf{MCG},\operatorname{rk})$ is proven in \cite[Theorem 3.4]{E2}. Thus, proceeding as in the proof of Theorem \hyperref[theorem B]{B} we can apply Theorem \ref{theorem stab 1} to $\mathbf{\mathbf{R^q}_{\mathbb{Z}}}$ to get part (i) of the Theorem. To prove part (ii) we consider $\mathbf{X}:=\mathbf{\mathbf{R^q}_{\mathbb{Z}[1/2]}}$ and apply Theorem \ref{theorem stab 2}. To verify assumptions (i), (ii) and (iii) we use the universal coefficients theorem and Theorems \ref{thm: 6.1}, \ref{thm: 6.2}, \ref{thm: 6.3} and \ref{thm: 6.4} in the Appendix. The secondary stability part follows by considering $\mathbf{X}:=\mathbf{\mathbf{R^q}_{\operatorname{\mathbb{F}_2}}}$ and applying Theorem \ref{thm stab 3}, where all the assumptions needed hold by Theorems \ref{thm: 6.1}, \ref{thm: 6.2}, \ref{thm: 6.3} and \ref{thm: 6.4} in the Appendix. \end{proof} As we said in Section \ref{section intro}, we can also prove that the bound of Theorem \hyperref[theorem A]{A} is (almost) optimal. \begin{lemma}\label{lem optimallity} For all $k \geq 1$ and for all $\epsilon, \delta \in \{0,1\}$ the map $$\sigma_{\epsilon} \cdot -: H_{2k}(B\Gamma_{3k-1,1}^{1/2}[\delta-\epsilon];\mathbb{Q}) \rightarrow H_{2k}(B\Gamma_{3k,1}^{1/2}[\delta];\mathbb{Q})$$ is not surjective. \end{lemma} \begin{proof} Suppose for a contradiction that it were surjective for some $k \geq 1$, $\epsilon, \delta \in \{0,1\}$. By the transfer, $H_{2k}(B\Gamma_{3k,1}^{1/2}[\delta];\mathbb{Q}) \rightarrow H_{2k}(B\Gamma_{3k,1};\mathbb{Q})$ is also surjective since the spin mapping class groups are finite-index subgroups of the mapping class groups. Thus, the stabilisation map $\sigma \cdot -:H_{2k}(B\Gamma_{3k-1,1};\mathbb{Q}) \rightarrow H_{2k}(B\Gamma_{3k,1};\mathbb{Q})$ (where we are using the notation of \cite{E2}) must be surjective. By the universal coefficients theorem, $H^{2k}(B\Gamma_{3k,1};\mathbb{Q}) \rightarrow H^{2k}(B\Gamma_{3k-1,1};\mathbb{Q})$ is then injective, which is false by the computations in \cite[Proof of Corollary 5.8]{E2}. \end{proof} Thus, the stability bound obtained in Theorem \hyperref[theorem A]{A} is optimal up to a constant of at most one. \section{Appendix} \label{appendix} \subsection{Spin mapping class groups} \label{appendix mcg} In this section we will explain the homology computations of spin mapping class groups and quadratic symplectic groups. These computations are done using GAP, and we have included the code that we used.
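All the sessions below follow the same pattern: present the group in question, record the permutation action of its generators on the set of quadratic refinements of the relevant Arf invariant, take the preimage of a point stabilizer, and abelianise. A compact wrapper for this pattern (our own shorthand, not part of the original sessions; the name \texttt{StabAbelianisation} is ad hoc) would be:
\begin{verbatim}
gap> StabAbelianisation := function(G, perms, pt)
>      local P, hom, S;
>      P := Group(perms);
>      hom := GroupHomomorphismByImages(G, P,
>                 GeneratorsOfGroup(G), perms);
>      S := PreImage(hom, Stabilizer(P, pt));  # the quadratic subgroup
>      return AbelianInvariants(S);  # 0 stands for a free Z-summand
>    end;;
gap> F := FreeGroup("a","b");;
gap> AssignGeneratorVariables(F);
gap> StabAbelianisation(F/[a*b*a*b^-1*a^-1*b^-1], [ (1,2), (1,3) ], 1);
[ 0, 0 ]
\end{verbatim}
Applied to the presentation of $\Gamma_{1,1}$ and the permutation action used in Section \ref{genus 1} below, it recovers $H_1(\Gamma_{1,1}^{1/2}[0];\mathbb{Z}) \cong \mathbb{Z}^2$; the sessions below spell out the same steps explicitly so as to keep track of the generators.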
\subsubsection{$g=1$} \label{genus 1} Consider the simple closed curves $\alpha, \beta$ in $\Sigma_{1,1}$ shown below, with orientations chosen so that $\alpha \cdot \beta = +1$. Let $a, b \in \Gamma_{1,1}$ be the isotopy classes represented by the right-handed Dehn twists along the curves $\alpha$ and $\beta$ respectively. \begin{figure}[H] \begin{center} \begin{tikzpicture}[scale=1.2, decoration={ markings, mark=at position 0.5 with {\arrow{<}}} ] \draw (0,0) -- (1,1); \draw (1,1) -- (3,1) -- (2,0) -- (0,0); \churro[scale=1, x=1, y=0.45] \draw[red,dashed] (1.5,1.85) node[above] {$\alpha$} to[in=120,out=-120] (1.5,1.6); \draw[red,postaction={decorate}] (1.5,1.85) to[in=-30,out=30] (1.5,1.6); \draw[blue,->] (1.5,1.25) node[below] {$\beta$} to[in=-45,out=0] (1.75,1.6) to[in=0,out=90+45] (1.5,1.7) to[in=45,out=180] (1.25,1.6) to[in=180,out=-90-45] (1.5,1.25); \end{tikzpicture} \end{center} \end{figure} The set $Q(1)$ of quadratic refinements of $H_1(\Sigma_{1,1};\mathbb{Z})$ is $\{q_{0,0}, q_{1,0}, q_{0,1}, q_{1,1}\}$, where $q_{i,j}$ satisfies $q_{i,j}(\alpha)=i$ and $q_{i,j}(\beta)=j$. The first three of them have Arf invariant $0$ and the fourth one has Arf invariant $1$. Thus, we can get explicit models of $\Gamma_{1,1}^{1/2}[\epsilon]$ via $\Gamma_{1,1}^{1/2}[0]:=\operatorname{Stab}_{\Gamma_{1,1}}(q_{0,0})$ and $\Gamma_{1,1}^{1/2}[1]:=\operatorname{Stab}_{\Gamma_{1,1}}(q_{1,1})$. \begin{theorem}\label{thm: 6.1} \begin{enumerate}[(i)] \item $H_1(\Gamma_{1,1};\mathbb{Z})=\mathbb{Z}\{\tau\}$, where $\tau$ is represented by both $a$ and $b$. \item $H_1(\Gamma_{1,1}^{1/2}[0];\mathbb{Z})=\mathbb{Z}\{x\} \oplus \mathbb{Z}\{y\}$, where $x$ is represented by $a^{-2}$ and $y$ is represented by $a b a^{-1}$. Moreover, $b^{-2}$ also represents the class $x$. \item $H_1(\Gamma_{1,1}^{1/2}[1];\mathbb{Z})=\mathbb{Z}\{z\}$, where $z$ is represented by $a$. \item Under the inclusion $\Gamma_{1,1}^{1/2}[0] \subset \Gamma_{1,1}$ we have $x \mapsto -2 \tau$ and $y \mapsto \tau$. \item Under the inclusion $\Gamma_{1,1}^{1/2}[1] \subset \Gamma_{1,1}$ we have $z \mapsto \tau$. \end{enumerate} \end{theorem} \begin{proof} Parts (i), (ii) and (iii) immediately imply parts (iv) and (v). Moreover, parts (i) and (iii) are equivalent since there is a unique quadratic refinement of Arf invariant $1$, so $\Gamma_{1,1}^{1/2}[1]=\Gamma_{1,1}$. By \cite[Page 8]{korkmaz} we have the presentation $\Gamma_{1,1}=\langle a, b | \; a b a = b a b \rangle$. Abelianizing this presentation gives part (i). We will prove (ii) by finding a presentation for $\Gamma_{1,1}^{1/2}[0]$ and then abelianizing it using GAP. The right action of $a, b$ on the set of quadratic refinements of Arf invariant $0$ is given by: $a^*(q_{0,0})= q_{0,1}$, $a^*(q_{0,1})=q_{0,0}$, $a^*(q_{1,0})=q_{1,0}$, $b^*(q_{0,0})= q_{1,0}$, $b^*(q_{0,1})=q_{0,1}$, $b^*(q_{1,0})=q_{0,0}$. (This is shown by working out the effect on homology of the corresponding right-handed Dehn twists.) We will denote $q_{0,0}:=1$, $q_{0,1}:=2$ and $q_{1,0}:=3$, so that $a$ acts on $\{1,2,3\}$ by the permutation $(1 2)$ and $b$ acts on $\{1,2,3\}$ by the permutation $(1 3)$.
\begin{verbatim}
gap> F:=FreeGroup("a","b");
gap> AssignGeneratorVariables(F);
gap> rel:=[a*b*a*b^-1*a^-1*b^-1];
gap> G:=F/rel;
gap> Q:=Group((1,2),(1,3));
gap> hom:=GroupHomomorphismByImages
(G,Q,GeneratorsOfGroup(G),GeneratorsOfGroup(Q));
[ a, b ] -> [ (1,2), (1,3) ]
Here Q encodes the action on the quadratic refinements
of Arf invariant 0.
gap> S:=PreImage(hom,Stabilizer(Q,1));
gap> genS:=GeneratorsOfGroup(S);
[ a^-2, b^-2, a*b*a^-1 ]
gap> iso:= IsomorphismFpGroupByGenerators(S,genS);
[ a^-2, b^-2, a*b*a^-1 ] -> [ F1, F2, F3 ]
gap> s:=ImagesSource(iso);
We let s be the image of iso, so that the generators
correspond via iso.
gap> RelatorsOfFpGroup(s);
[ F3*F1*F3^-1*F2^-1, F3*F2*F3^-1*F2*F1^-1*F2^-1 ]
Thus we have a presentation of S.
gap> AbelianInvariants(s);
[ 0, 0 ]
gap> q:=MaximalAbelianQuotient(s);
gap> AbS:=ImagesSource(q);
gap> GeneratorsOfGroup(AbS);
[ f1, f2, f3 ]
gap> RelatorsOfFpGroup(AbS);
[ f1^-1*f2^-1*f1*f2, f1^-1*f3^-1*f1*f3, f2^-1*f3^-1*f2*f3, f1 ]
Thus, f2 and f3 are the free generators of AbS.
gap> Image(q,s.1);
f2
gap> Image(q,s.3);
f3
gap> Image(q,s.2)=Image(q,s.1);
true
\end{verbatim} \end{proof} \subsubsection{$g=2$} Consider the simple closed curves $\alpha_1,\beta_1,\alpha_2,\beta_2,\varepsilon$ as drawn below, and the corresponding right-handed Dehn twists along them, denoted by $a_1, b_1, a_2, b_2, e \in \Gamma_{2,1}$ respectively. \begin{figure}[H] \begin{center} \begin{tikzpicture}[scale=1, decoration={ markings, mark=at position 0.5 with {\arrow{<}}} ] \draw (0,0) -- (1,1); \draw (1,1) -- (5,1) -- (4,0) -- (0,0); \churro[x=1, y=0.45] \draw[red,dashed] (1.5,1.85) node[above] {$\alpha_1$} to[in=120,out=-120] (1.5,1.6); \draw[red,postaction={decorate}] (1.5,1.85) to[in=-30,out=30] (1.5,1.6); \draw[blue,->] (1.5,1.25) node[below] {$\beta_1$} to[in=-45,out=0] (1.7,1.6) to[in=0,out=90+45] (1.5,1.7) to[in=45,out=180] (1.3,1.6) to[in=180,out=-90-45] (1.5,1.25); \churro[x=3, y=0.45] \draw[red,dashed] (3.5,1.85) node[above] {$\alpha_2$} to[in=120,out=-120] (3.5,1.6); \begin{scope}[red,decoration={ markings, mark=at position 0.4 with {\arrow{<}}}] \draw[postaction={decorate}] (3.5,1.85) to[in=-30,out=30] (3.5,1.6); \end{scope} \draw[blue,->>] (3.5,1.25) node[below] {$\beta_2$} to[in=-45,out=0] (3.7,1.6) to[in=0,out=90+45] (3.5,1.7) to[in=45,out=180] (3.3,1.6) to[in=180,out=-90-45] (3.5,1.25); \draw[blue!40!red] (2.5,0.25) node[below] {$\varepsilon$} to[out=0,in=-90] (3.35,0.75) to[out=90,in=-70] (3.2,1.3) to[out=110,in=180] (3.5,1.75) to[out=0,in=70] (3.8,1.3) to[out=-110,in=90] (3.65,0.75) to[out=-90,in=180] (4,0.35) to[out=0,in=10] (3.7,0.75) ; \draw[blue!40!red,dashed] (3.7,0.75) to[out=180+10,in=-10] (3.3,0.75) ; \draw[blue!40!red] (3.3,0.75) to[out=180-10,in=10] (2.25,0.4) to[out=180+10,in=0] (2,0.35) ; \draw[blue!40!red] (1.35,0.75) to[out=90,in=-70] (1.2,1.3) to[out=110,in=180] (1.5,1.75) to[out=0,in=70] (1.8,1.3) to[out=-110,in=90] (1.65,0.75) to[out=-90,in=180] (2,0.35); \draw[blue!40!red,->] (1.35,0.75) to[out=-90,in=180] (2.5,0.25) ; \end{tikzpicture} \end{center} \end{figure} The set of quadratic refinements $Q(2)$ is $\{q_{i_1,j_1,i_2,j_2}: \; i_1,j_1,i_2,j_2 \in \{0,1\}\}$, where $q=q_{i_1,j_1,i_2,j_2}$ satisfies $q(\alpha_1)=i_1$, $q(\alpha_2)=i_2$, $q(\beta_1)=j_1$ and $q(\beta_2)=j_2$. Now we fix models of $\Gamma_{2,1}^{1/2}[\epsilon]$ via $\Gamma_{2,1}^{1/2}[0]:=\operatorname{Stab}_{\Gamma_{2,1}}(q_{0,0,0,0})$ and $\Gamma_{2,1}^{1/2}[1]:=\operatorname{Stab}_{\Gamma_{2,1}}(q_{1,0,1,1})$. \begin{theorem}\label{thm: 6.2} \begin{enumerate}[(i)] \item $H_1(\Gamma_{2,1};\mathbb{Z})=\mathbb{Z}/10\{\sigma \cdot\tau\}$, and $\sigma \cdot \tau$ is represented by $a_1$. \item $H_1(\Gamma_{2,1}^{1/2}[0];\mathbb{Z})=\mathbb{Z}\{A\} \oplus \mathbb{Z}/2\{B\}$, where $A$ is represented by $a_1 b_1 a_1^{-1} b_1 b_2 e^{-1}$ and $B$ is represented by $(a_1 b_1 a_1)^2 e b_2^{-1} b_1^{-1}$.
\item $H_1(\Gamma_{2,1}^{1/2}[1];\mathbb{Z})=\mathbb{Z}/80\{C\}$, where $C$ is represented by $a_1$. \item Under the inclusion $\Gamma_{2,1}^{1/2}[0] \subset \Gamma_{2,1}$ we have $A \mapsto 2 \sigma \cdot\tau$ and $B \mapsto 5 \sigma \cdot\tau$. \item Under the inclusion $\Gamma_{2,1}^{1/2}[1] \subset \Gamma_{2,1}$ we have $C \mapsto \sigma \cdot\tau$. \end{enumerate} \end{theorem} \begin{proof} We say that the pair $(u,v)$ satisfies the \textit{braid relation} if $u v u= v u v$. By \cite[Theorem 2]{wajnryb} there is a presentation $$\Gamma_{2,1}=\langle a_1,a_2,b_1,b_2,e | R_1 \sqcup R_2 \sqcup \{(b_1 a_1 e a_2)^5=b_2 a_2 e a_1 b_1^2 a_1 e a_2 b_2\} \rangle,$$ where $R_1$ says that $(a_1,b_1)$,$(a_2,b_2)$,$(a_1,e)$,$(a_2,e)$ satisfy the braid relation, and $R_2$ says that each of $(a_1,a_2)$,$(b_1, b_2)$,$(a_1,b_2)$,$(a_2,b_1)$,$(b_1,e)$,$(b_2,e)$ commutes. Part (i) follows from abelianizing the above presentation of $\Gamma_{2,1}$, and it is compatible with \cite[Lemma 3.6]{E2}. For part (ii) we will describe the (right) action of $a_1,b_1,a_2,b_2,e$ on the set of $10$ quadratic refinements of Arf invariant $0$, which we will label as $q_{0,0,0,0}:=1$, $q_{0,0,0,1}:=2$, $q_{1,0,0,0}:=3$, $q_{0,0,1,0}:=4$, $q_{1,0,0,1}:=5$, $q_{1,0,1,0}:=6$, $q_{0,1,0,0}:=7$, $q_{0,1,1,0}:=8$, $q_{0,1,0,1}:=9$, $q_{1,1,1,1}:=10$. The explicit action of each generator as a permutation in $S_{10}$ can be found in the GAP computation below. We use GAP to find a presentation of $\Gamma_{2,1}^{1/2}[0]$ and its first homology group as follows.
\begin{verbatim}
gap> F:=FreeGroup("a","x","b","y","c");
gap> AssignGeneratorVariables(F);
gap> rel:= [ a*b*a*b^-1*a^-1*b^-1, x*y*x*y^-1*x^-1*y^-1,
a*c*a*c^-1*a^-1*c^-1, x*c*x*c^-1*x^-1*c^-1, a*x*a^-1*x^-1,
a*y*a^-1*y^-1, b*x*b^-1*x^-1, b*y*b^-1*y^-1, b*c*b^-1*c^-1,
y*c*y^-1*c^-1, (y*x*c*a*b^2*a*c*x*y)*(b*a*c*x)^-5];
gap> G:=F/rel;
gap> Q:=Group((1,7)(2,9)(4,8),(1,2)(3,5)(7,9),(1,3)(2,5)(4,6),
(1,4)(3,6)(7,8),(1,6)(3,4)(9,10));
gap> hom:=GroupHomomorphismByImages
(G,Q,GeneratorsOfGroup(G),GeneratorsOfGroup(Q));
[ a, x, b, y, c ] -> [ (1,7)(2,9)(4,8), (1,2)(3,5)(7,9),
(1,3)(2,5)(4,6), (1,4)(3,6)(7,8), (1,6)(3,4)(9,10) ]
Here Q encodes the action on the ten quadratic refinements
of Arf invariant 0.
gap> S:=PreImage(hom,Stabilizer(Q,1));
gap> genS:=GeneratorsOfGroup(S);
[ a^-2, x^-2, b^-2, y^-2, c^-2, a*b*a^-1, a*c*a^-1,
x*y*x^-1, x*c*x^-1, b*y*c^-1 ]
gap> iso:= IsomorphismFpGroupByGenerators(S,genS);
gap> s:=ImagesSource(iso);
<fp group on the generators
[ F1, F2, F3, F4, F5, F6, F7, F8, F9, F10 ]>
gap> AbelianInvariants(s);
[ 0, 2 ]
gap> q:=MaximalAbelianQuotient(s);
gap> AbS:=ImagesSource(q);
gap> GeneratorsOfGroup(AbS);
[ f1, f2, f3, f4, f5, f6, f7, f8, f9, f10 ]
gap> RelatorsOfFpGroup(AbS);
[ f1^-1*f2^-1*f1*f2, f1^-1*f3^-1*f1*f3, f1^-1*f4^-1*f1*f4,
f1^-1*f5^-1*f1*f5, f1^-1*f6^-1*f1*f6, f1^-1*f7^-1*f1*f7,
f1^-1*f8^-1*f1*f8, f1^-1*f9^-1*f1*f9, f1^-1*f10^-1*f1*f10,
f2^-1*f3^-1*f2*f3, f2^-1*f4^-1*f2*f4, f2^-1*f5^-1*f2*f5,
f2^-1*f6^-1*f2*f6, f2^-1*f7^-1*f2*f7, f2^-1*f8^-1*f2*f8,
f2^-1*f9^-1*f2*f9, f2^-1*f10^-1*f2*f10, f3^-1*f4^-1*f3*f4,
f3^-1*f5^-1*f3*f5, f3^-1*f6^-1*f3*f6, f3^-1*f7^-1*f3*f7,
f3^-1*f8^-1*f3*f8, f3^-1*f9^-1*f3*f9, f3^-1*f10^-1*f3*f10,
f4^-1*f5^-1*f4*f5, f4^-1*f6^-1*f4*f6, f4^-1*f7^-1*f4*f7,
f4^-1*f8^-1*f4*f8, f4^-1*f9^-1*f4*f9, f4^-1*f10^-1*f4*f10,
f5^-1*f6^-1*f5*f6, f5^-1*f7^-1*f5*f7, f5^-1*f8^-1*f5*f8,
f5^-1*f9^-1*f5*f9, f5^-1*f10^-1*f5*f10, f6^-1*f7^-1*f6*f7,
f6^-1*f8^-1*f6*f8, f6^-1*f9^-1*f6*f9, f6^-1*f10^-1*f6*f10,
f7^-1*f8^-1*f7*f8, f7^-1*f9^-1*f7*f9, f7^-1*f10^-1*f7*f10,
f8^-1*f9^-1*f8*f9, f8^-1*f10^-1*f8*f10, f9^-1*f10^-1*f9*f10,
f1, f2, f3, f4, f5, f6, f7, f8, f9^2 ]
gap> PreImagesRepresentative(q,AbS.9);
(F9*F5^-1)^2*F10^-1
gap> Image(q,(s.9*s.5^-1)^2*s.10^-1)=AbS.9;
true
Here s.1, ..., s.10 denote the ten generators of S in order.
F9 and F6 agree on abelianization, and so do F5 and F1.
Thus, we can replace this generator by (s.6*s.1^-1)^2*s.10^-1,
which gives B by substituting what s.1, s.6 and s.10 are.
gap> Image(q,s.5^-1*s.10^-1*s.9)=AbS.10;
true
\end{verbatim}
Part (iii) is done similarly to part (ii): there are 6 quadratic refinements of Arf invariant 1, which we index as: $q_{1,0,1,1}:=1$, $q_{0,0,1,1}:=2$, $q_{0,1,1,1}:=3$, $q_{1,1,0,1}:=4$, $q_{1,1,0,0}:=5$, $q_{1,1,1,0}:=6$. The explicit action of each generator as a permutation in $S_6$ can be found in the GAP computation below. The group $G$ in the computation represents $\Gamma_{2,1}$ and it is input in the same way as above.
\begin{verbatim}
gap> Q:=Group((2,3),(4,5),(1,2),(5,6),(3,4));
gap> hom:=GroupHomomorphismByImages
(G,Q,GeneratorsOfGroup(G),GeneratorsOfGroup(Q));
[ a, x, b, y, c ] -> [ (2,3), (4,5), (1,2), (5,6), (3,4) ]
Here Q encodes the action on the six elements of Q(2)
of Arf invariant 1.
gap> S:=PreImage(hom,Stabilizer(Q,1));
gap> genS:=GeneratorsOfGroup(S);
[ a, x, b^-2, y, c ]
gap> iso:= IsomorphismFpGroupByGenerators(S,genS);
gap> s:=ImagesSource(iso);
gap> AbelianInvariants(s);
[ 5, 16 ]
So the abelianization is isomorphic to
Z/5 \oplus Z/16 \cong Z/80.
gap> q:=MaximalAbelianQuotient(s);
gap> AbS:=ImagesSource(q);
gap> Image(q,s.1);
f1
gap> Image(q,s.1)=AbS.5;
true
gap> Order(AbS.5);
80
\end{verbatim}
Finally, parts (iv) and (v) follow from the explicit description of $A,B,C$ plus using the relations in the abelianization of $\Gamma_{2,1}$. \end{proof} \subsubsection{Stabilizations, Browder brackets and the $Q_{\mathbb{Z}}^1(-)$-operation} \begin{theorem}\label{thm: 6.3} \begin{enumerate}[(i)] \item $[\sigma,\sigma]=4 \sigma \cdot \tau$. \item $Q_{\mathbb{Z}}^1(\sigma)=3 \sigma \cdot \tau$. \item $x \cdot \sigma_0 = 4 A$ and $y \cdot \sigma_0= 3 A + B$. \item $x \cdot \sigma_1 = 28 C$ and $y \cdot \sigma_1 = C$. \item $z \cdot \sigma_0= C$. \item $z \cdot \sigma_1= 3 A+ B$. \item $[\sigma_0,\sigma_0]=-8 A$, $[\sigma_1,\sigma_1]=72 A$ and $[\sigma_0,\sigma_1]= 24 C$.
\item $Q_{\mathbb{Z}}^1(\sigma_0)=4A+B$ and $Q_{\mathbb{Z}}^1(\sigma_1)=-36A+B$. \end{enumerate} \end{theorem} \begin{proof} Parts (i) and (ii) appear in \cite[Lemma 3.6]{E2}. For part (iii) we use the same GAP computation as in Theorem \ref{thm: 6.2}(ii). Right stabilization by $\sigma_0$ sends $q_{0,0} \mapsto q_{0,0,0,0}$ and $a \mapsto a_1$, $b \mapsto b_1$. Therefore, $x \cdot \sigma_0$ is represented by $a_1^{-2}$ and $y \cdot \sigma_0$ is represented by $a_1 b_1 a_1^{-1}$.
\begin{verbatim}
gap> Image(q,s.1);
f1*f9^-2*f10^4
gap> Image(q,s.6);
f6*f9^-3*f10^3
\end{verbatim}
This says that under abelianization $a_1^{-2}$ (which is $s.1$) is mapped to $f1*f9^{-2}*f10^4$, which is $4A$ by the GAP computation in the proof of Theorem \ref{thm: 6.2}(ii). Also, since $s.6$ means $a_1 b_1 a_1^{-1}$, we get $y \cdot \sigma_0= 3A+B$. Proof of (iv): Observe that when we stabilize by $- \cdot \sigma_1$ we send $q_{0,0} \mapsto q_{0,0,1,1}$, whereas our choice of quadratic refinement of Arf invariant $1$ is $q_{1,0,1,1}$. To fix this we will use conjugation: $b_1 \in \Gamma_{2,1}$ satisfies $b_1^*(q_{1,0,1,1})=q_{0,0,1,1}$ and so $$\operatorname{Stab}_{\Gamma_{2,1}}(q_{0,0,1,1}) \xrightarrow{b_1 \cdot - \cdot b_1^{-1}} \operatorname{Stab}_{\Gamma_{2,1}}(q_{1,0,1,1})$$ is an isomorphism. This is non-canonical, but its action in group homology is canonical: If $u \in \Gamma_{2,1}$ satisfies $u^*(q_{1,0,1,1})=q_{0,0,1,1}$ then the maps $u \cdot - \cdot u^{-1}$ and $b_1 \cdot - \cdot b_1^{-1}$ differ by conjugation by $b_1 u^{-1} \in \operatorname{Stab}_{\Gamma_{2,1}}(q_{1,0,1,1})$, which acts trivially in group homology. Thus, for homology computations, $x \cdot \sigma_1$ is represented by $b_1 a_1^{-2} b_1^{-1} \in \Gamma_{2,1}^{1/2}[1]$ and $y \cdot \sigma_1$ is represented by $b_1 a_1 b_1 a_1^{-1} b_1^{-1}$. Now we use the GAP computation in the proof of Theorem \ref{thm: 6.2}(iii) to see where these elements map:
\begin{verbatim}
gap> Image(iso,G.3*G.1^-2*G.3^-1);
F1^-1*F3*F1
gap> Image(q,Image(iso,G.3*G.1^-2*G.3^-1))=Image(q,s.1)^28;
true
So x.sigma_1 equals s.1^28 = 28C in the abelianization.
gap> Image(iso,G.3*G.1*G.3*G.1^-1*G.3^-1);
F1
gap> Image(q, Image(iso,G.3*G.1*G.3*G.1^-1*G.3^-1))=Image(q,s.1);
true
\end{verbatim}
Part (v) is similar to the previous part: right stabilization by $\sigma_0$ sends $q_{1,1} \mapsto q_{1,1,0,0}$, and so we need to conjugate by $b_1 a_1 e a_2 \cdot - \cdot (b_1 a_1 e a_2)^{-1}$ to go to $\Gamma_{2,1}^{1/2}[1]$. Also, $z$ is represented by $a$, so $z \cdot \sigma_0$ is represented by $b_1 a_1 e a_2 a_1 (b_1 a_1 e a_2)^{-1}$. Using GAP:
\begin{verbatim}
gap> Image(iso,(G.3*G.1*G.5*G.2)*G.1*(G.3*G.1*G.5*G.2)^-1);
F3^-1*F5*F3
gap> Image(q, Image(iso,(G.3*G.1*G.5*G.2)*G.1*(G.3*G.1*G.5*G.2)^-1))
=Image(q,s.1);
true
\end{verbatim}
Part (vi) follows from the following GAP computation:
\begin{verbatim}
gap> Image(iso,(G.1*G.2*G.5)*G.1*(G.1*G.2*G.5)^-1);
F9
gap> Image(q,Image(iso,(G.1*G.2*G.5)*G.1*(G.1*G.2*G.5)^-1))
=Image(q,s.9);
true
\end{verbatim}
To prove part (vii) we will need the following claim. \begin{claim} The class $-[\sigma,\sigma]$ is represented by $(b_2 a_2 e a_1 b_1)^6 (a_1 b_1)^6 (a_2 b_2)^{-6} \in \Gamma_{2,1}$. \end{claim} \begin{proof} By \cite[Lemma 3.6, Figure 4]{E2} we can write $-[\sigma,\sigma]$ as $t_w t_u^{-1} t_v^{-1}$, where $u,v,w$ are the curves called ``a'', ``b'' and ``c'' respectively in \cite{E2}, and $t_w, t_u, t_v$ are the corresponding right-handed Dehn twists along them.
Now we use \cite[Lemma 21 (iii)]{wajnryb} to write each of $t_u,t_v,t_w$ in terms of the generators, yielding the following: $t_u=(a_1 b_1)^6$, $t_v=(a_2 b_2)^6$ and $t_w=(b_2 a_2 e a_1 b_1)^6$. \end{proof} The above element lies in $\operatorname{Stab}_{\Gamma_{2,1}}(q)$ for any quadratic refinement $q$ because each of the curves $u,v,w$ is disjoint from the curves $\alpha_1,\beta_1,\alpha_2,\beta_2$, and hence $t_u,t_v,t_w$ do not affect the value of $q$ along the standard generators of $H_1(\Sigma_{2,1};\mathbb{Z})$. Thus, $[\sigma_i,\sigma_j] \in H_1(B\Gamma_{2,1}^{1/2}[i+j \bmod 2];\mathbb{Z})$ is represented by $$(b_2 a_2 e a_1 b_1)^6 (a_1 b_1)^6 (a_2 b_2)^{-6} \in \operatorname{Stab}_{\Gamma_{2,1}}(q_{i,i,j,j}),$$ and then we conjugate this element so that it lies in our fixed choices of stabilizers. For $[\sigma_0,\sigma_0]$ we don't need to conjugate, so we find
\begin{verbatim}
gap> Image(q, Image(iso,(G.4*G.2*G.5*G.1*G.3)^6*(G.1*G.3)^-6
*(G.2*G.4)^-6))=AbS.10^8;
true
\end{verbatim}
For $[\sigma_1,\sigma_1]$ we need to conjugate by $a_1 a_2 e$, and we find
\begin{verbatim}
gap> Image(q,Image(iso,G.1*G.2*G.5*(G.4*G.2*G.5*G.1*G.3)^6*
(G.1*G.3)^-6*(G.2*G.4)^-6*(G.1*G.2*G.5)^-1))=AbS.10^-72;
true
\end{verbatim}
For $[\sigma_0,\sigma_1]$ we need to conjugate by $b_1$, and we get
\begin{verbatim}
gap> Image(q,Image(iso,G.3*(G.4*G.2*G.5*G.1*G.3)^6*
(G.1*G.3)^-6*(G.2*G.4)^-6*G.3^-1))=AbS.5^56;
true
\end{verbatim}
Finally, to prove (viii) we use that $2 Q_{\mathbb{Z}}^1(\sigma_{\epsilon}) = -[\sigma_{\epsilon},\sigma_{\epsilon}]$, which follows from the discussion in \cite[Page 9]{E2}. For $\epsilon=0$ Theorem \ref{thm: 6.2} together with part (vii) of this Theorem say that $Q_{\mathbb{Z}}^1(\sigma_0)$ is either $4A$ or $4A+B$. But, by part (ii) of this theorem we know that it must map to $3 \sigma \cdot \tau \in H_1(\Gamma_{2,1};\mathbb{Z})$. Using Theorem \ref{thm: 6.2}(iv) we get that the answer must be $4A+B$. The computation of the case $\epsilon=1$ is similar. \end{proof} \subsubsection{$g=3$} \begin{theorem}\label{thm: 6.4} \begin{enumerate}[(i)] \item $H_1(\Gamma_{3,1}^{1/2}[0];\mathbb{Z}) \cong \mathbb{Z}/4$, where $A \cdot \sigma_0 = y \cdot \sigma_0^2$ is a generator and $B \cdot \sigma_0= 2 A \cdot \sigma_0$. \item $Q_{\mathbb{Z}}^1(\sigma_0) \cdot \sigma_0= 2 y \cdot \sigma_0^2$. \item $H_1(\Gamma_{3,1}^{1/2}[1];\mathbb{Z}) \cong \mathbb{Z}/4$, where $ y \cdot \sigma_0 \cdot \sigma_1$ is a generator. \item $Q_{\mathbb{Z}}^1(\sigma_0) \cdot \sigma_1= 2 y \cdot \sigma_0 \cdot \sigma_1=Q_{\mathbb{Z}}^1(\sigma_1) \cdot \sigma_0$ and $Q_{\mathbb{Z}}^1(\sigma_0) \cdot \sigma_0=Q_{\mathbb{Z}}^1(\sigma_1) \cdot \sigma_1$. \end{enumerate} \end{theorem} \begin{proof} By \cite[Theorem 1]{wajnryb} we get a presentation of $\Gamma_{3,1}$ with generators $a_1,a_2,a_3,b_1,b_2,e_1,e_2$, where the $a_i,b_i$ are defined as in the cases $g=1,2$, $e_1$ is what was called $e$ in the $g=2$ case using the first two handles, and $e_2$ is defined analogously, but using the second and third handles instead. To prove (i) and (ii) we fix our quadratic refinement of Arf invariant $0$ to be the one evaluating to $0$ on all the $\alpha_i$'s and $\beta_i$'s.
In this case the strategy to find a presentation for $\Gamma_{3,1}^{1/2}[0]$ is different: instead of computing the action on quadratic refinements we will write down elements of $\Gamma_{3,1}^{1/2}[0]$ (inspired by expressions from previous computations) and check that the subgroup they generate has index $36$ inside $\Gamma_{3,1}$, and hence that it must agree with $\Gamma_{3,1}^{1/2}[0]$.
\begin{verbatim}
gap> F:=FreeGroup("a1","a2","a3","b1","b2","e1","e2");
gap> AssignGeneratorVariables(F);
gap> rel:=[a1*b1*a1*b1^-1*a1^-1*b1^-1, a2*b2*a2*b2^-1*a2^-1*b2^-1,
a1*e1*a1*e1^-1*a1^-1*e1^-1, a2*e1*a2*e1^-1*a2^-1*e1^-1,
a2*e2*a2*e2^-1*a2^-1*e2^-1, a3*e2*a3*e2^-1*a3^-1*e2^-1,
a1*a2*a1^-1*a2^-1, a1*a3*a1^-1*a3^-1, a3*a2*a3^-1*a2^-1,
b1*b2*b1^-1*b2^-1, a1*b2*a1^-1*b2^-1, a2*b1*a2^-1*b1^-1,
a3*b2*a3^-1*b2^-1, a3*b1*a3^-1*b1^-1, b1*e1*b1^-1*e1^-1,
b2*e1*b2^-1*e1^-1, b1*e2*b1^-1*e2^-1, b2*e2*b2^-1*e2^-1,
a1*e2*a1^-1*e2^-1, a3*e1*a3^-1*e1^-1, e1*e2*e1^-1*e2^-1,
(b1*a1*e1*a2)^-5*b2*a2*e1*a1*b1^2*a1*e1*a2*b2,
((b2*a2*e1*b1^-1)* (e2*a2*a3*e2)* (a2*e1*a1*b1)^-1*b2*(a2*e1*a1*b1)
*(e2*a2*a3*e2)^-1* (b1*e1^-1*a2^-1*b2^-1)*a1*a2*a3)^-1*(a2*e1*a1*b1)^-1*b2*
(a2*e1*a1*b1)* (e2*a2*a3*e2)* (a2*e1*a1*b1)^-1*b2*(a2*e1*a1*b1)
*(e2*a2*a3*e2)^-1*(e1*a1*a2*e1)*(e2*a2*a3*e2)*(a2*e1*a1*b1)^-1*
b2*(a2*e1*a1*b1) *(e2*a2*a3*e2)^-1*(e1*a1*a2*e1)^-1];
gap> G:=F/rel;
gap> H:=Subgroup(G,[G.1^-2,G.2^-2,G.3^-2,G.4^-2,G.5^-2,G.6^-2,G.7^-2,
G.4*G.5*G.6^-1,G.5*G.2*G.5^-1,G.4*G.1*G.4^-1,G.2*G.7*G.2^-1,
G.7*G.3*G.7^-1]);
One checks that each generator of H fixes q_{000000}.
gap> Index(G,H);
36
This is the number of quadratic refinements
in Q(3) of Arf invariant 0.
gap> AbelianInvariants(H);
[ 4 ]
gap> genH:=GeneratorsOfGroup(H);
gap> iso:=IsomorphismFpGroupByGenerators(H,genH);
gap> S:=ImagesSource(iso);
gap> q:=MaximalAbelianQuotient(S);
gap> AbS:=ImagesSource(q);
gap> Order(Image(q,Image(iso,G.1*G.4*G.1^-1)));
4
gap> Order( Image(q,Image(iso,(G.1*G.4*G.1)^2*G.6*G.5^-1*G.4^-1)));
2
\end{verbatim}
Now, to finish we need to check two things. The first one is that $A \cdot \sigma_0= y \cdot \sigma_0^2$ is a generator: by Theorem \ref{thm: 6.3}(iii) we have $y \cdot \sigma_0= 3A+B$ so using the above GAP computations we find $A \cdot \sigma_0= y \cdot \sigma_0^2$. By Theorem \ref{thm: 6.3}(viii) we have $Q_{\mathbb{Z}}^1(\sigma_0) \cdot \sigma_0= 4 A \cdot \sigma_0+ B \cdot \sigma_0= B \cdot \sigma_0$, as required. To prove parts (iii) and (iv) we fix our quadratic refinement of Arf invariant $1$ to be the one with value $1$ on all the $\alpha_i$'s and $\beta_i$'s. Now we use GAP (F,G are as before, so we will not copy that part again) and a similar idea as above to get the result.
\begin{verbatim}
gap> H:=Subgroup(G,[G.1,G.2,G.3,G.4,G.5,G.6^-2,G.7^-2,G.6*G.4*G.6^-1,
G.6*G.5*G.6^-1,G.7*G.5*G.7^-1,(G.6*G.2*G.1)*G.3*(G.6*G.2*G.1)^-1,
(G.6*G.2*G.1)*G.7*(G.6*G.2*G.1)^-1, (G.6*G.2*G.1)*(G.6*G.5*G.4)*
(G.6*G.2*G.1)^-1,(G.7*G.3*G.2)*G.1*( G.7*G.3*G.2)^-1,
( G.7*G.3*G.2)*G.6*( G.7*G.3*G.2)^-1]);
One checks that each generator of H fixes q_{111111}.
gap> Index(G,H);
28
This is the number of quadratic refinements
in Q(3) of Arf invariant 1.
gap> AbelianInvariants(H);
[ 4 ]
\end{verbatim}
This shows that $H_1(\Gamma_{3,1}^{1/2}[1];\mathbb{Z}) \cong \mathbb{Z}/4$. By Theorem \hyperref[theorem A]{A} (i), the map $\sigma_{\epsilon} \cdot - : H_1(\Gamma_{g-1,1}^{1/2}[\delta-\epsilon];\mathbb{Z}) \rightarrow H_1(\Gamma_{g,1}^{1/2}[\delta];\mathbb{Z})$ is surjective for $g \geq 4$ and any $\epsilon, \delta$. (The proof of Theorem \hyperref[theorem A]{A} uses the results of the Appendix, but only for part (ii); part (i) is shown independently of these first homology computations.)
Moreover, the stable values $H_1(\Gamma_{\infty,1}^{1/2}[\delta];\mathbb{Z})$ are both isomorphic to $\mathbb{Z}/4$ by \cite[Theorem 1.4]{Picardspin} plus \cite[Theorem 2.14]{rspin}. Thus the groups $H_1(\Gamma_{g,1}^{1/2}[\delta];\mathbb{Z})$ are stable for any $g \geq 3$ and any $\delta \in \{0,1\}$. Since $\sigma_0^2=\sigma_1^2$, we have $(Q_{\mathbb{Z}}^1(\sigma_0) \cdot \sigma_1) \cdot \sigma_1 = Q_{\mathbb{Z}}^1(\sigma_0) \cdot \sigma_0^2= 2 y \cdot \sigma_0^3 = (2 y \cdot \sigma_0 \cdot \sigma_1) \cdot \sigma_1$, and by the above stability result $Q_{\mathbb{Z}}^1(\sigma_0) \cdot \sigma_1= 2 y \cdot \sigma_0 \cdot \sigma_1$. Also, $ y \cdot \sigma_0^3 = (y \cdot \sigma_0 \cdot \sigma_1) \cdot \sigma_1$ is a generator of $H_1(\Gamma_{4,1}^{1/2}[0];\mathbb{Z})$ by the stability plus part (i) of this theorem. Thus, $y \cdot \sigma_0 \cdot \sigma_1$ is a generator of $H_1(\Gamma_{3,1}^{1/2}[1];\mathbb{Z})$ by applying stability. Finally, by Theorem \ref{thm: 6.3}, $Q_{\mathbb{Z}}^1(\sigma_1)-Q_{\mathbb{Z}}^1(\sigma_0)=-40 A$, so any stabilization of this vanishes because it lives in a $4$-torsion group. \end{proof}

\subsection{Quadratic symplectic groups over $\mathbb{Z}$} \label{appendix symplectic} The proofs of this section will be very similar to the ones of Section \ref{appendix mcg}, but using the explicit presentations of $Sp_{2g}(\mathbb{Z})$ given in \cite{presentationsymplectic}. The computations in Theorems \ref{thm: 6.6}, \ref{thm: 6.7} and \ref{thm: 6.8} about the first homology of the quadratic symplectic groups of Arf invariant $1$ are used in \cite[Section 4.1]{framings}. \begin{rem} In \cite{presentationsymplectic} they write matrices using a different basis. We will change the matrices given in \cite{presentationsymplectic} to our choice of basis of Section \ref{section symplectic} without further notice in all the following computations. \end{rem} \subsubsection{$g=1$} \label{11.2.1} \begin{theorem}\label{thm: 6.6} \begin{enumerate}[(i)] \item $H_1(Sp_2(\mathbb{Z});\mathbb{Z})=\mathbb{Z}/12\{t\}$, where $t$ is represented by $\begin{psmallmatrix} 1 & 1 \\ 0 & 1 \end{psmallmatrix} \in Sp_2(\mathbb{Z})$. \item $H_1(Sp_2^0(\mathbb{Z});\mathbb{Z})=\mathbb{Z}\{\mu\} \oplus \mathbb{Z}/4\{\lambda\}$, where $\mu$ is represented by $\begin{psmallmatrix} 1 & 2 \\ 0 & 1 \end{psmallmatrix} \in Sp_2^0(\mathbb{Z})$ and $\lambda$ is represented by $\begin{psmallmatrix} 0 & -1 \\ 1 & 0 \end{psmallmatrix} \in Sp_2^0(\mathbb{Z})$. \item $H_1(Sp_2^1(\mathbb{Z});\mathbb{Z})=\mathbb{Z}/12\{t'\}$, where $t'$ is represented by $\begin{psmallmatrix} 1 & 1 \\ 0 & 1 \end{psmallmatrix} \in Sp_2^1(\mathbb{Z})$. \end{enumerate} \end{theorem} \begin{proof} By \cite[Theorem 1]{presentationsymplectic} we have $$Sp_2(\mathbb{Z})= \langle L,N | (LN)^2=N^3, N^6=1 \rangle$$ where $L= \begin{psmallmatrix} 1 & 1 \\ 0 & 1 \end{psmallmatrix}$ and $N= \begin{psmallmatrix} 0 & 1 \\ -1 & 1 \end{psmallmatrix}$. We will use the same notation as in Section \ref{appendix mcg} for the quadratic refinements, where now $\alpha,\beta$ are the standard hyperbolic basis of $(\mathbb{Z}^2,\Omega_1)$. We let $Sp_2^0(\mathbb{Z}):= \operatorname{Stab}_{Sp_2(\mathbb{Z})}(q_{0,0})$ and $Sp_2^1(\mathbb{Z}):= \operatorname{Stab}_{Sp_2(\mathbb{Z})}(q_{1,1})$. We then compute the action of $L, N$ on the set of quadratic refinements of each Arf invariant (see the GAP formulae below). Since there is a unique quadratic refinement of Arf invariant $1$, we have $Sp_2^1(\mathbb{Z})=Sp_2(\mathbb{Z})$, so parts (i) and (iii) are equivalent.
Thus, it suffices to show parts (i) and (ii). To prove (i) we abelianize the presentation of $Sp_2(\mathbb{Z})$ to get $\mathbb{Z}/12\{L\}$. To prove (ii) we use GAP:
\begin{verbatim}
gap> F:=FreeGroup("L","N");
gap> AssignGeneratorVariables(F);
gap> rel:=[(L*N)^2*N^-3, N^6];
gap> G:=F/rel;
gap> Q:=Group((1,2),(1,2,3));
gap> hom:=GroupHomomorphismByImages
(G,Q,GeneratorsOfGroup(G),GeneratorsOfGroup(Q));
[ L, N ] -> [ (1,2), (1,2,3) ]
gap> S:=PreImage(hom,Stabilizer(Q,1));
gap> AbelianInvariants(S);
[ 0, 4 ]
gap> genS:=GeneratorsOfGroup(S);
[ L^-2, N*L^-1 ]
gap> iso:=IsomorphismFpGroupByGenerators(S,genS);
gap> s:=ImagesSource(iso);
gap> q:=MaximalAbelianQuotient(s);
[ F1, F2 ] -> [ f2, f1^-1*f2 ]
gap> AbS:=ImagesSource(q);
gap> GeneratorsOfGroup(AbS);
[ f1, f2 ]
gap> RelatorsOfFpGroup(AbS);
[ f1^-1*f2^-1*f1*f2, f1^4 ]
\end{verbatim}
From these computations we get that $L^2$ is a generator of the $\mathbb{Z}$ summand. Moreover, $N L^{-1} L^2$ maps to a generator of the $\mathbb{Z}/4$ summand, and this matrix is precisely the conjugation by $\Omega_1$ of our choice of matrix for $\lambda$. \end{proof} \subsubsection{$g=2$} \begin{theorem}\label{thm: 6.7} \begin{enumerate}[(i)] \item $H_1(Sp_4(\mathbb{Z});\mathbb{Z})=\mathbb{Z}/2\{t \cdot \sigma\}$, where $\sigma, t$ are as in Theorem \ref{thm: 6.6}. \item $H_1(Sp_4^0(\mathbb{Z});\mathbb{Z})=\mathbb{Z}/2\{Q_{\mathbb{Z}}^1(\sigma_0)\} \oplus \mathbb{Z}/4\{\lambda \cdot \sigma_0\}$, and $Q_{\mathbb{Z}}^1(\sigma_0)$ is represented by $\begin{psmallmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{psmallmatrix} \in Sp_4^0(\mathbb{Z})$. Moreover, $\mu \cdot \sigma_0=0$. \item $H_1(Sp_4^1(\mathbb{Z});\mathbb{Z})=\mathbb{Z}/4\{t' \cdot \sigma_0\}$, where $t'$ is as in Theorem \ref{thm: 6.6}. \item $t' \cdot \sigma_1= \lambda \cdot \sigma_0$, $\mu \cdot \sigma_1= 0$, $\lambda \cdot \sigma_1= t' \cdot \sigma_0$, $Q_{\mathbb{Z}}^1(\sigma_0)=Q_{\mathbb{Z}}^1(\sigma_1)$ and $[\sigma_0,\sigma_1]=0$. \end{enumerate} \end{theorem} \begin{proof} By \cite[Theorem 2]{presentationsymplectic} $Sp_4(\mathbb{Z})$ has a presentation with two generators $L, N$ (see the GAP computations below for the relations), where $L$ is given by the stabilization of the matrix called $L$ in Section \ref{11.2.1}. To prove (i) we compute
\begin{verbatim}
gap> F:=FreeGroup("L","N");
gap> AssignGeneratorVariables(F);
gap> rel:=[N^6, (L * N)^5, (L *N^-1)^10, (L* N^-1* L * N)^6,
L *(N^2*L*N^4)* L^-1 * (N^2*L*N^4)^-1,
L *(N^3*L*N^3)* L^-1 * (N^3*L*N^3)^-1,
L *(L*N^-1)^5* L^-1 * (L*N^-1)^-5];
gap> G:=F/rel;
gap> p:=MaximalAbelianQuotient(G);
[ L, N ] -> [ f1, f1 ]
gap> AbG:=ImagesSource(p);
<pc group of size 2 with 2 generators>
gap> Order(Image(p,G.1));
2
\end{verbatim}
To prove part (ii) we add more GAP computations to the above, using a permutation representation of how $L,N$ act on the 10 quadratic refinements of Arf invariant 0 (we use the same indexing as in the proof of Theorem \ref{thm: 6.2}, and the action is computed similarly).
\begin{verbatim}
gap> Q:=Group((1,2)(4,6)(5,8),(2,3,4,5,6,7)(8,9,10));
gap> hom:=GroupHomomorphismByImages
(G,Q,GeneratorsOfGroup(G),GeneratorsOfGroup(Q));
Here Q encodes the action on the ten quadratic refinements
of Arf invariant 0.
gap> S:=PreImage(hom,Stabilizer(Q,1));
gap> genS:=GeneratorsOfGroup(S);
[ L^-2, N, L*N*L*N^-1*L^-1, L*N^-1*L*N*L^-1 ]
gap> iso:= IsomorphismFpGroupByGenerators(S,genS);
gap> s:=ImagesSource(iso);
gap> q:=MaximalAbelianQuotient(s);
gap> AbS:=ImagesSource(q);
gap> AbelianInvariants(S);
[ 2, 4 ]
gap> Order(Image(q,s.1));
1
The image of s.1 (= L^-2) is trivial,
so the stabilization mu.sigma_0 vanishes.
gap> Order(Image(q,s.2));
2
gap> Order(Image(q,s.4));
4
gap> Image(q,s.2)=Image(q,s.4)^2;
false
This shows that s.4 generates the Z/4 summand,
and that N generates the Z/2 summand.
\end{verbatim}
By \cite[Theorem 2]{presentationsymplectic}, the matrix $N$ is given by $N=\begin{psmallmatrix} 0 & 1 & -1 & 0 \\ -1 & 0 & 0 & 0 \\ -1 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{psmallmatrix}$. Thus, $N^3= \begin{psmallmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{psmallmatrix} \in Sp_4^0(\mathbb{Z})$ represents $Q_{\mathbb{Z}}^1(\sigma_0)$ because it represents $Q_{\mathbb{Z}}^1(\sigma)$ and it stabilizes the quadratic refinement $q_{0,0,0,0}$, so this generates the $\mathbb{Z}/2$ summand. Also $L N^{-1} L N L= \begin{psmallmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{psmallmatrix}$, which by the last paragraph of the proof of Theorem \ref{thm: 6.6} is the stabilization of the matrix $\lambda$ conjugated by $\Omega_1$. Since $\Omega_1 \in Sp_{4}^0(\mathbb{Z})$, the element $L N^{-1} L N L$ represents the homology class $\lambda \cdot \sigma_0$. By the GAP computations $L N^{-1} L N L= L N^{-1} L N L^{-1} L^2$ is a generator of the $\mathbb{Z}/4$ summand, as required. To prove part (iii) we also use the same GAP program but this time we compute the permutation representation on the quadratic refinements of Arf invariant 1. We will pick our quadratic refinement of Arf invariant 1 to be $q_{1,1,0,0}$.
\begin{verbatim}
gap> T:=Group((3,4),(1,2,3,4,5,6));
gap> homtwo:=GroupHomomorphismByImages
(G,T,GeneratorsOfGroup(G),GeneratorsOfGroup(T));
Here T encodes the action on the quadratic refinements
of Arf invariant 1, indexed so that q_{1,1,0,0}=1.
gap> SS:=PreImage(homtwo,Stabilizer(T,1));
gap> genSS:=GeneratorsOfGroup(SS);
[ L, N*L*N^-1, N^-1*L*N, N^2*L^-2*N^-2, N^3*L^-1*N^-2 ]
gap> isotwo:=IsomorphismFpGroupByGenerators(SS,genSS);
gap> ss:=ImagesSource(isotwo);
gap> qq:=MaximalAbelianQuotient(ss);
gap> AbSS:=ImagesSource(qq);
<pc group of size 4 with 5 generators>
gap> Order(Image(qq,ss.1));
4
\end{verbatim}
To prove (iv) we use the $E_2$-algebra map from the $E_2$-algebra of spin mapping class groups to the one of quadratic symplectic groups, which is induced by the obvious functor $\mathsf{MCG} \rightarrow \mathsf{Sp}$ and the fact that the quadratic refinements functor $Q$ is essentially the same in both cases. In more concrete terms, the functor just sends the spin mapping class groups to their actions on first homology, which are quadratic symplectic groups. The Dehn twist $a \in \Gamma_{1,1}$ maps to the matrix $\begin{psmallmatrix} 1 & 1 \\ 0 & 1 \end{psmallmatrix} \in Sp_2(\mathbb{Z})$, and the Dehn twist $b \in \Gamma_{1,1}$ maps to $\begin{psmallmatrix} 1 & 0 \\ -1 & 1 \end{psmallmatrix} \in Sp_2(\mathbb{Z})$. Thus, $a^{-2} \mapsto \begin{psmallmatrix} 1 & -2 \\ 0 & 1 \end{psmallmatrix}= \begin{psmallmatrix} 1 & 2 \\ 0 & 1 \end{psmallmatrix}^{-1}$ and $a b a^{-1} \mapsto \begin{psmallmatrix} 0 & 1 \\ -1 & 2 \end{psmallmatrix}= \begin{psmallmatrix} 0 & 1 \\ -1 & 0 \end{psmallmatrix} \cdot \begin{psmallmatrix} 1 & 2 \\ 0 & 1 \end{psmallmatrix}^{-1}$. By Theorems \ref{thm: 6.1} and \ref{thm: 6.6} we get $x \mapsto -\mu$, $y \mapsto \lambda - \mu$ and $z \mapsto t'$. Also by definition $\sigma_{\epsilon} \mapsto \sigma_{\epsilon}$ for $\epsilon \in \{0,1\}$. Thus, by Theorem \ref{thm: 6.3} we get: $x \cdot \sigma_1= 28 z \cdot \sigma_0$ and so $- \mu \cdot \sigma_1= 28 t' \cdot \sigma_0 = 0$. Also, $y \cdot \sigma_1= z \cdot \sigma_0$ so $(\lambda-\mu) \cdot \sigma_1= t' \cdot \sigma_0$, giving the result.
Furthermore, $z \cdot \sigma_1= y \cdot \sigma_0$ so $t' \cdot \sigma_1= (\lambda- \mu) \cdot \sigma_0$, giving the result. Finally, $Q_{\mathbb{Z}}^1(\sigma_1)=Q_{\mathbb{Z}}^1(\sigma_0)-10 x \cdot \sigma_0$, so $Q_{\mathbb{Z}}^1(\sigma_1)=Q_{\mathbb{Z}}^1(\sigma_0)+ 10 \mu \cdot \sigma_0= Q_{\mathbb{Z}}^1(\sigma_0)$, and $[\sigma_0,\sigma_1]=24 z \cdot \sigma_0 \mapsto 0$. \end{proof} \subsubsection{$g=3$} \begin{theorem}\label{thm: 6.8} \begin{enumerate}[(i)] \item $H_1(Sp_{6}^0(\mathbb{Z});\mathbb{Z}) = \mathbb{Z}/4\{\lambda \cdot \sigma_0^2\}$. \item $Q_{\mathbb{Z}}^1(\sigma_0) \cdot \sigma_0 = 2 \lambda \cdot \sigma_0^2$. \item $Q_{\mathbb{Z}}^1(\sigma_0) \cdot \sigma_0= Q_{\mathbb{Z}}^1(\sigma_1) \cdot \sigma_1$ and $Q_{\mathbb{Z}}^1(\sigma_0) \cdot \sigma_1= Q_{\mathbb{Z}}^1(\sigma_1) \cdot \sigma_0 = 2 \lambda \cdot \sigma_0 \cdot \sigma_1$. \item $H_1(Sp_{6}^1(\mathbb{Z});\mathbb{Z}) = \mathbb{Z}/4\{\lambda \cdot \sigma_0 \cdot \sigma_1\}$ and $\lambda \cdot \sigma_0 \cdot \sigma_1= t' \cdot \sigma_0^2$. \end{enumerate} \end{theorem} \begin{proof} By Theorem \ref{thm: 6.4}(i), $H_1(\Gamma_{3,1}^{1/2}[0];\mathbb{Z})=\mathbb{Z}/4\{y \cdot \sigma_0^2\}$. The homomorphism $\Gamma_{3,1}^{1/2}[0] \rightarrow Sp_6^0(\mathbb{Z})$ is surjective because $\Gamma_{3,1} \rightarrow Sp_6(\mathbb{Z})$ is, and hence $\mathbb{Z}/4\{y \cdot \sigma_0^2\}$ surjects onto $H_1(Sp_6^0(\mathbb{Z});\mathbb{Z})$. Under the $E_2$-algebra map of the previous section, $y \cdot \sigma_0^2 \mapsto \lambda \cdot \sigma_0^2$. This gives part (ii) by Theorem \ref{thm: 6.4}(ii). The rest of part (i) follows from \cite[Theorem 1.1]{JohnsonMillson}, which says that $H_1(Sp_6^0(\mathbb{Z});\mathbb{Z}) \cong \mathbb{Z}/4$. Part (iii) follows by using the $E_2$-algebra map again and Theorem \ref{thm: 6.4}. For part (iv) we use Theorem \hyperref[theorem B]{B}, part (i), to get that all the stabilization maps $\sigma_{\epsilon} \cdot - : H_1(Sp_{2(g-1)}^{\delta-\epsilon}(\mathbb{Z});\mathbb{Z}) \rightarrow H_1(Sp_{2g}^{\delta}(\mathbb{Z});\mathbb{Z})$ are surjective for $g \geq 4$. (The proof of part (i) of Theorem \hyperref[theorem B]{B} is independent of the first homology computations.) By \cite[Theorem 1.1]{JohnsonMillson} the stable first homology group of the quadratic symplectic groups of Arf invariant 0 is $\mathbb{Z}/4$. The stable first homology group of the quadratic symplectic groups of Arf invariant 1 must be the same by homological stability using Theorem \ref{theorem stab 1}. Thus, $H_1(Sp_{6}^{1}(\mathbb{Z});\mathbb{Z})$ surjects onto $\mathbb{Z}/4$. Finally, by a similar reasoning to the one at the beginning of this proof we get that $H_1(\Gamma_{3,1}^{1/2}[1];\mathbb{Z}) \cong \mathbb{Z}/4$ surjects onto $H_1(Sp_{6}^{1}(\mathbb{Z});\mathbb{Z})$. The expression for the generator follows from Theorem \ref{thm: 6.4} and the $E_2$-algebra map. \end{proof} \begin{rem} All the computations of Section \ref{appendix symplectic} are consistent with the ones of \cite[Appendix A]{krannichmcg}. \end{rem} \bibliographystyle{amsalpha}
\section{Introduction} \subsection{Maximum cut}\label{sec:maximumcut} Given an undirected (edge-)weighted graph $G = (V,E,\omega)$, a cut $V_{-1}|V_1$ is a partition of the node set $V$ into two disjoint subsets $V_{-1}$ and $V_1$. The size of a cut $C=V_{-1}|V_1$, denoted by $s(C)$, is the sum of all the weights corresponding to edges that have one end vertex in $V_{-1}$ and one in $V_1$. The maximum cut (Max-Cut) problem is the problem of finding a cut $C^*$ such that for all cuts $C$, $s(C) \leq s(C^*)$. We call such a $C^*$ a maximum cut and say $\mc{G}:=s(C^*)$ is the maximum cut value of the graph $G$. The Max-Cut problem for an unweighted graph is a special case of the Max-Cut problem on a weighted graph which we obtain by assuming all edge weights are $1$. Finding an unweighted graph's Max-Cut is equivalent to finding a bipartite subgraph with the largest number of edges possible. In fact, for an unweighted bipartite graph $\mc{G}=|E|$. The Max-Cut problem is NP-hard; assuming P $\neq$ NP, no algorithm can compute an exact solution in polynomial time for all instances. There are a variety of polynomial time approximation algorithms for this problem \cite{goemans1995,bylka1999,trevisan2012}. Some Max-Cut approximation algorithms have a proven lower bound on their accuracy, which asserts the existence of a $\beta\in [0,1]$ such that, for all output cuts $C$ obtained by the algorithm, $s(C) \geq \beta \mc{G}$. We call such a $\beta$ a performance guarantee. For algorithms that incorporate stochastic steps, such a lower bound typically takes the form $E[s(C)] \geq \beta \mc{G}$ instead, where $E[s(C)]$ denotes the expected value of the size of the output cut. In recent years a new type of approach to approximating such graph problems has gained traction. Models from the world of partial differential equations and variational methods that exhibit behaviour of the kind that could be helpful in solving the graph problem are transcribed from their usual continuum formulation to a graph based model. The resulting discrete model can then be solved using techniques from numerical analysis and scientific computing. Examples of problems that have successfully been tackled in this manner include data classification \cite{bertozzi2012}, image segmentation \cite{calatroni2016}, and community detection \cite{hu2013}. In this paper we use a variation on the graph Ginzburg-Landau functional, which was introduced in \cite{bertozzi2012}, to construct an algorithm which approximately solves the Max-Cut problem on simple undirected weighted graphs. We compare our method with the Goemans-Williamson (GW) algorithm \cite{goemans1995}, which is the current state-of-the-art method for approximately solving the Max-Cut problem. In \cite{goemans1995} the authors solve a relaxed Max-Cut objective function and intersect the solution with a random hyperplane in an $n$-dimensional sphere. It is proven that if $\mathrm{gw}(C)$ is the size of the cut produced by the Goemans-Williamson algorithm, then its expected value $E[\mathrm{gw}(C)]$ satisfies the inequality $E[\mathrm{gw}(C)] \geq \beta \mc{G}$ where $\beta = 0.878$ (rounded down). If the Unique Games Conjecture \cite{khot2002} is true, the GW algorithm has the best performance guarantee that is possible for a polynomial time approximation algorithm \cite{khot2007}. It has been proven that approximately solving the Max-Cut problem with a performance guarantee of $\frac{16}{17}\approx 0.941$ or better is NP-hard \cite{trevisan2000}.
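To make these definitions concrete, the following short Python sketch (ours, for illustration only; the function names and the example graph are not from the literature) computes cut sizes directly from the definition and finds $\mc{G}$ for a small graph by exhaustive search, which is only feasible for very small $n$ precisely because of the NP-hardness just discussed.
\begin{verbatim}
import itertools
import numpy as np

def cut_size(W, labels):
    # labels[i] is +1 or -1; the cut size is the total weight of
    # edges whose end points receive different labels.
    n = W.shape[0]
    return sum(W[i, j] for i in range(n) for j in range(i + 1, n)
               if labels[i] != labels[j])

def max_cut_bruteforce(W):
    # Exhaustive search over all 2^(n-1) essentially distinct
    # partitions; only feasible for very small n.
    n = W.shape[0]
    best_val, best_labels = -np.inf, None
    for assignment in itertools.product([-1, 1], repeat=n - 1):
        labels = (1,) + assignment  # fix node 0 in V_1
        val = cut_size(W, labels)
        if val > best_val:
            best_val, best_labels = val, labels
    return best_val, best_labels

# An unweighted 5-cycle: odd cycles are not bipartite, so the
# maximum cut must miss at least one edge.
W = np.zeros((5, 5))
for i in range(5):
    W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1.0
print(max_cut_bruteforce(W))  # maximum cut value 4
\end{verbatim}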
Finding $\mc{G}$ is equivalent to finding the ground state of the Ising Hamiltonian in Ising spin models \cite{haribara2016, barahona1988}, and 0/1 linear programming problems can be restated as Max-Cut problems \cite{lasserre2016}. \subsection{Signless Ginzburg-Landau functional}\label{sec:GL} Spectral graph theory \cite{chung1997} explores the relationships between the spectra of graph operators, such as graph Laplacians (see Section~\ref{sec:setup}), and properties of graphs. For example, the multiplicity of the zero eigenvalue of the (unnormalised, random walk, or symmetrically normalised) graph Laplacian is equal to the number of connected components of the graph. Such properties lie at the basis of the successful usage of the graph Laplacian in graph clustering, such as in spectral clustering \cite{von2007} and in clustering and classification methods that use the graph Ginzburg-Landau functional \cite{bertozzi2012} \begin{equation}\label{eq:graphGL} f_\e(u) := \frac12 \sum_{i,j\in V} \omega_{ij} (u_i-u_j)^2 + \frac1\e \sum_{i\in V} W(u_i). \end{equation} Here $u: V\to {\vz R}$ is a real-valued function defined on the node set $V$, with value $u_i$ on node $i$, $\omega_{ij}$ is a positive weight associated with the edge between nodes $i$ and $j$ (and $\omega_{ij}=0$ if such an edge is absent), and $W(x):= (x^2-1)^2$ is a double-well potential with minima at $x=\pm 1$. In Section~\ref{sec:setup} we will introduce our setting and notation more precisely. The method we use in this paper is based on a variation of $f_\e$, which we call the \textit{signless Ginzburg-Landau functional}: \begin{equation}\label{eq:signlessgraphGL} f_\e^+(u) := \frac12 \sum_{i,j\in V} \omega_{ij} (u_i+u_j)^2 + \frac1\e \sum_{i\in V} W(u_i). \end{equation} This nomenclature is suggested by the fact that the signless graph Laplacians are related to $f_\e^+$ in a similar way as the graph Laplacians are related to $f_\e$, as we will see in Section~\ref{sec:setup}. Signless graph Laplacians have been studied because of the connections between their spectra and bipartite subgraphs \cite{desai1994}. In \cite{hein2007, van2012} the authors derive a graph difference operator and a graph divergence operator to form a graph Laplacian operator. In this paper we mimic this framework by deriving a signless difference operator and a signless divergence operator to form a signless Laplacian operator. Whereas the graph Laplacian operator is a discretization of the continuum Laplacian operator, the continuum analogue of the signless Laplacian is an averaging operator, which is the subject of current and future research. The functional $f_\e$ is useful in clustering and classification problems, because minimizers of $f_\e$ (in the presence of some constraint or additional term, to prevent trivial minimizers) will be approximately binary (with values close to $\pm 1$), because of the double-well potential term, and will have similar values on nodes that are connected by highly weighted edges, because of the first term in $f_\e$. This intuition can be formalised using the language of $\Gamma$-convergence \cite{dalmaso1993}.
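As a quick computational illustration of the difference between \eqref{eq:graphGL} and \eqref{eq:signlessgraphGL}, the following sketch (ours; it assumes a weighted adjacency matrix in the same style as the previous snippet) evaluates both functionals on a $\pm 1$-valued labelling of the complete bipartite graph $K_{2,2}$: the signless coupling term vanishes exactly when the labelling alternates across the maximum cut.
\begin{verbatim}
import numpy as np

def W_potential(x):
    # double-well potential with minima at x = +1 and x = -1
    return (x ** 2 - 1) ** 2

def f_eps(W, u, eps):
    # (1/2) sum_ij w_ij (u_i - u_j)^2 + (1/eps) sum_i W(u_i)
    return 0.5 * np.sum(W * np.subtract.outer(u, u) ** 2) \
        + np.sum(W_potential(u)) / eps

def f_eps_plus(W, u, eps):
    # signless variant: (u_i - u_j)^2 becomes (u_i + u_j)^2, which
    # vanishes on edges whose end points have opposite sign
    return 0.5 * np.sum(W * np.add.outer(u, u) ** 2) \
        + np.sum(W_potential(u)) / eps

# complete bipartite graph K_{2,2}; the alternating labelling
# realises the maximum cut
W = np.array([[0, 0, 1, 1],
              [0, 0, 1, 1],
              [1, 1, 0, 0],
              [1, 1, 0, 0]], dtype=float)
u = np.array([1.0, 1.0, -1.0, -1.0])
print(f_eps(W, u, eps=1.0))       # 16.0: all four edges are cut
print(f_eps_plus(W, u, eps=1.0))  # 0.0: u minimizes f_eps^+
\end{verbatim}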
In analogy with the continuum case in \cite{modica1977,modica1987}, it was proven in \cite{van2012} that if $\e\downarrow 0$, then $f_\e$ $\Gamma$-converges to \[ f_0(u) := \begin{cases} 2\mathrm{TV}(u), &\text{if } u \text{ only takes the values } \pm1, \\ \infty, &\text{otherwise}, \end{cases} \] where TV$(u):= \frac12 \sum_{i,j\in V} \omega_{ij} |u_i-u_j|$ is the graph total variation\footnote{The multiplicative factor $2$ in $f_0$ above differs from that in \cite{van2012} because in the current paper we choose different locations for the wells of $W$.}. Together with an equi-coercivity property, which we will return to in more detail in Section~\ref{sec:fe+}, this $\Gamma$-convergence result guarantees that minimizers of $f_\e$ converge to minimizers of $f_0$ as $\e \downarrow 0$. If $u$ only takes the values $\pm 1$, we note that $\mathrm{TV}(u) = 2s(C)$, where $C=V_{-1}|V_1$ is the cut given by $V_{\pm 1}:= \{i\in V: u_i = \pm 1\}$. Hence minimizers $u^\e$ of $f_\e$ are expected to approximately solve the {\it minimal} cut problem, if we let $V_{\pm 1} := \{i\in V: u_i^\e \approx \pm 1\}$. In Section~\ref{sec:fe+} we prove that $f_\e^+$ $\Gamma$-converges to a limit functional whose minimizers solve the Max-Cut problem. Hence, we expect minimizers $u^\e$ of $f_\e^+$ to approximately solve the Max-Cut problem, if we consider the cut $C=V_{-1}|V_1$, with $V_{\pm 1} = \{i\in V: u_i^\e \approx \pm 1\}$. \subsection{Graph MBO scheme} There are various ways in which the minimization of $f_\e^+$ can be attempted. One such way, which can be explored in a future publication, is to use a gradient flow method. In the case of $f_\e$ the gradient flow is given by an Allen-Cahn type equation on graphs \cite{bertozzi2012,van2014}, \begin{equation}\label{eq:graphAC} \frac{du_i}{dt} = -(\Delta u)_i - \frac1\e d_i^{-r}W'(u_i), \end{equation} where $\Delta u$ is a graph Laplacian of $u$, $d_i$ the degree of node $i$, and $r\in [0,1]$ a parameter (see Section~\ref{sec:setup} for further details). This can be solved using a combination of convex splitting and spectral truncation. In the case of $f_\e^+$ such an approach would lead to a similar equation and scheme, with the main difference being the use of a signless graph Laplacian instead of a graph Laplacian. In this paper, however, we have opted for an alternative approach, which is also inspired by similar approaches which have been developed for the $f_\e$ case. The continuum Merriman-Bence-Osher (MBO) scheme \cite{MBO1992,MBO1993} involves iteratively solving the diffusion equation over a small time step $\tau$ and thresholding the solution to an indicator function. For a short diffusion time $\tau$ this scheme approximates motion by mean curvature \cite{barles1995}. This scheme has been adapted to a graph setting \cite{merkurjev2013,van2014}. Heuristically it is expected that the outcome of the graph MBO scheme closely approximates minimizers of $f_\e$, as the diffusion step involves solving $\frac{du_i}{dt} = -(\Delta u)_i $ and the thresholding step has a similar effect to the nonlinearity $-\frac1\e W'(u_i)$ in \eqref{eq:graphAC}. Experimental results strengthen this expectation; however, rigorous confirmation is still lacking. In order to approximately minimize $f_\e^+$, and consequently approximately solve the Max-Cut problem, we use an MBO type scheme in which we replace the graph Laplacian in the diffusion step by a signless graph Laplacian.
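The precise algorithm is specified in Section~\ref{sec:signlessMBO}; the following sketch (our own minimal illustration, not the implementation used in this paper, and with the unnormalised signless Laplacian, the dense matrix exponential and all parameter values as ad hoc choices) only shows the structure of such an iteration: diffuse with the signless Laplacian over time $\tau$, then threshold back to $\pm 1$ values.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def signless_mbo(W, tau=0.5, n_iter=30, seed=0):
    # unnormalised signless Laplacian L+ = D + A (the r = 0 case)
    d = W.sum(axis=1)
    L_plus = np.diag(d) + W
    prop = expm(-tau * L_plus)  # 'diffusion' operator exp(-tau L+)
    rng = np.random.default_rng(seed)
    u = rng.choice([-1.0, 1.0], size=W.shape[0])  # random start
    for _ in range(n_iter):
        v = prop @ u                          # signless diffusion
        u_new = np.where(v >= 0, 1.0, -1.0)   # threshold step
        if np.array_equal(u_new, u):          # stationary: stop
            break
        u = u_new
    return u

# on the 5-cycle from before, the sets {i : u_i = 1} and
# {i : u_i = -1} give an approximate maximum cut
W = np.zeros((5, 5))
for i in range(5):
    W[i, (i + 1) % 5] = W[(i + 1) % 5, i] = 1.0
print(signless_mbo(W))
\end{verbatim}
Forming the full matrix exponential as above is only feasible for small graphs; the two methods mentioned next avoid this.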
We use two methods to compute this step: (1) a spectral method, adapted from the one in \cite{bertozzi2012}, which allows us to use a small subset of the eigenfunctions, which correspond to the smallest eigenvalues of the signless graph Laplacian, and (2) an Euler method. The usefulness of (normalised) signless graph Laplacians when attempting to find maximum cuts can be intuitively understood from the fact that their spectra are in a sense (which is made precise in Proposition~\ref{prop:eigenpairs}) the reverse of the spectra of the corresponding (normalised) graph Laplacians. Hence, where a standard graph Laplacian driven diffusion leads to clustering patterns according to the eigenfunctions corresponding to its smallest eigenvalues, `diffusion' driven by a signless graph Laplacian leads to patterns resembling the eigenfunctions corresponding to the smallest eigenvalues of that signless graph Laplacian and thus the largest eigenvalues of the corresponding standard graph Laplacian. \subsection{Structure of the paper} In Section~\ref{sec:Graphs} we explain the notation we use in this paper and give some preliminary results. Section~\ref{sec:MCGW} gives a precise formulation of the Max-Cut problem and discusses the Goemans-Williamson algorithm in more detail. In Section~\ref{sec:fe+} we introduce the signless graph Ginzburg-Landau functional $f_{\varepsilon}^+$ and use $\Gamma$-convergence techniques to prove that minimizers of $f_\e^+$ can be used to find approximate maximum cuts. We describe the signless MBO algorithm we use to find approximate minimizers of $f_\e^+$ in Section~\ref{sec:signlessMBO} and discuss the results we get in Section~\ref{sec:results}. We analyse the influence of our parameter choices in Section~\ref{sec:parameters} and conclude the paper in Section~\ref{sec:conclusions}. \section{Setup and notation}\label{sec:Graphs} \subsection{Graph based operators and functionals}\label{sec:setup} In this paper we will consider non-empty finite, simple\footnote{By `simple' we mean ``without self-loops and without multiple edges between the same pair of vertices''. Note that removing self-loops from a graph does not change its maximum cut.}, undirected graphs $G = (V,E,\omega)$ without isolated nodes, with vertex set (or node set) $V$, edge set $E \subset V^2$ and non-negative edge weights $\omega$. We denote the set of all such graphs by $\mathcal{G}$. By assumption $V$ has finite cardinality, which we denote by $n:=|V| \in \mathbb{N}$\footnote{For definiteness we use the convention $0\not\in{\vz N}$.}. We assume a node labelling such that $V=\{1, \ldots, n\}$. When $i,j\in V$ are nodes, the undirected edge between $i$ and $j$, if present, is denoted by $(i,j)$. The edge weight corresponding to this edge is $\omega_{ij}>0$. Since $G$ is undirected, we identify $(i,j)$ with $(j,i)$ in $E$. Within this framework we can also consider unweighted graphs, which correspond to the cases in which, for all $(i,j)\in E$, $\omega_{ij}=1$. We define $\mathcal{V}$ to be the set consisting of all node functions $u: V \to {\vz R}$ and $\mathcal{E}$ to be the set of edge functions $\varphi: E \rightarrow {\vz R}$. We will use the notation $u_i:=u(i)$ and $\varphi_{ij} := \varphi(i,j)$ for functions $u\in \mathcal{V}$ and $\varphi\in \mathcal{E}$, respectively. For notational convenience, we will typically associate $\varphi\in \mathcal{E}$ with its extension to $V^2$ obtained by setting $\varphi_{ij}=0$ if $(i,j)\not\in E$.
We also extend $\omega$ to $V^2$ in this way: if $(i,j)\not\in E$, then $\omega_{ij}=0$. Because $G\in \mathcal{G}$ is undirected, we have for all $(i,j)\in E$, $\omega_{ij}=\omega_{ji}$. Because $G\in \mathcal{G}$ is simple, for all $i\in V$, $(i,i) \notin E$. The degree of a node $i$ is $d_i := \sum_{j \in V} \omega_{ij}$. Because $G \in \mathcal{G}$ does not contain isolated nodes, we have for all $i \in V, d_i > 0$. As shown in \cite{hein2007}, it is possible for $\mathcal{V}$ and $\mathcal{E}$ to be defined for directed graphs, but we will not pursue these ideas here. To introduce the graph Laplacians and signless graph Laplacians we use and extend the structure that was used in \cite{hein2007,van2012,van2014}. We define the inner products on $\mathcal{V}$ and $\mathcal{E}$ as \[ \langle u,v \rangle_{\mathcal{V}} := \displaystyle\sum_{i \in V} u_iv_id_i^r, \qquad \langle \varphi,\phi \rangle_{\mathcal{E}} := \frac{1}{2}\displaystyle\sum_{i,j \in V} \varphi_{ij}\phi_{ij}\omega_{ij}^{2q-1}, \] where $r \in [0,1]$ and $q \in [\frac{1}{2},1]$. If $r=0$ and $d_i=0$, we interpret $d_i^r$ as $0$. Similarly for $\omega_{ij}^{2q-1}$ and other such expressions below. We define the graph gradient operator $(\nabla:\mathcal{V} \to \mathcal{E})$ by, for all $(i,j) \in E$, \[ (\nabla u)_{ij} := \omega_{ij}^{1-q}(u_j - u_i). \] We define the graph divergence operator $(\text{div}: \mathcal{E} \to \mathcal{V})$ as the adjoint of the gradient, and a graph Laplacian operator $(\Delta_r: \mathcal{V} \to \mathcal{V})$ as the graph divergence of the graph gradient: for all $i\in V$, \begin{equation}\label{eq:graphLaplacian} {(\text{div}\varphi)}_i := \frac{1}{2} d_i^{-r} \sum_{j \in V} \omega_{ij}^q(\varphi_{ji} - \varphi_{ij}), \qquad {(\Delta_r u)}_i: = (\text{div}(\nabla u))_i = d_i^{-r}\sum_{j \in V} \omega_{ij}(u_i - u_j). \end{equation} We note that the choices $r=0$ and $r=1$ lead to $\Delta_r$ being the unnormalised graph Laplacian and random walk graph Laplacian, respectively \cite{mohar1991,von2007}. Hence it is useful for us to explicitly incorporate $r$ in the notation $\Delta_r$ for the graph Laplacian. In analogy with the graph gradient, divergence, and Laplacian, we now define their `signless' counterparts. We define the signless gradient operator $(\nabla^+:\mathcal{V} \to \mathcal{E})$ by, for all $(i,j) \in E$, \[ (\nabla^+ u)_{ij} := \omega_{ij}^{1-q} (u_j + u_i). \] Then we define the signless divergence operator $(\text{div}^+: \mathcal{E} \to \mathcal{V})$ to be the adjoint of the signless gradient, and the signless Laplacian operator $(\Delta_r^+: \mathcal{V} \to \mathcal{V})$ as the signless divergence of the signless gradient\footnote{In some papers the space $\mathcal{E}$ is defined as the space of all {\it skew-symmetric} edge functions. We do not require the skew-symmetry condition here, hence $\nabla^+u \in \mathcal{E}$, having $\text{div}^+$ act on $\nabla^+ u$ is consistent with our definitions, and $\text{div}^+ \varphi$ is not identically equal to $0$ for all $\varphi\in \mathcal{E}$.}: for all $i\in V$, \[ {(\text{div}^+\varphi)}_i := \frac{1}{2} d_i^{-r} \sum_{j \in V} \omega_{ij}^q(\varphi_{ji} + \varphi_{ij}), \qquad (\Delta_r^+ u)_i: = (\text{div}^+(\nabla^+ u))_i = d_i^{-r}\sum_{j \in V} \omega_{ij}(u_i + u_j). \] By definition we have \[ \langle \nabla u,\phi \rangle_{\mathcal{E}} = \langle u,\textnormal{div} \: \phi \rangle_{\mathcal{V}}, \qquad \langle \nabla^+ u,\phi \rangle_{\mathcal{E}} = \langle u,\textnormal{div}^+\phi \rangle_{\mathcal{V}}. 
\] \begin{proposition}\label{prop:selfadjoint} The operators $\Delta_r: \mathcal{V}\to \mathcal{V}$ and $\Delta_r^+: \mathcal{V}\to \mathcal{V}$ are self-adjoint and positive-semidefinite. \end{proposition} \begin{proof} Let $u,v\in \mathcal{V}$. Since $\langle u,\Delta_r v \rangle_{\mathcal{V}} = \langle \nabla u,\nabla v \rangle_{\mathcal{E}} = \langle \Delta_r u,v \rangle_{\mathcal{V}}$ and $\langle u,\Delta_r^+ v \rangle_{\mathcal{V}} = \langle \nabla^+ u,\nabla^+ v \rangle_{\mathcal{E}} = \langle \Delta_r^+ u,v \rangle_{\mathcal{V}}$, the operators are self-adjoint. Positive-semidefiniteness follows from $\langle u,\Delta_r u \rangle_{\mathcal{V}} = \langle \nabla u, \nabla u \rangle_{\mathcal{E}} \geq 0$ and $\langle u,\Delta_r^+ u \rangle_{\mathcal{V}} = \langle \nabla^+ u, \nabla^+ u \rangle_{\mathcal{E}} \geq 0$. \end{proof} In the literature a third graph Laplacian is often used, besides the unnormalised and random walk graph Laplacians. This symmetrically normalised graph Laplacian \cite{chung1997} is defined by, for all $i\in V$, \[ (\Delta_{s}u)_i := \frac{1}{\sqrt{d_i}}\sum_{j \in V}\omega_{ij}\left(\frac{u_i}{\sqrt{d_i}} - \frac{u_j}{\sqrt{d_j}}\right). \] This Laplacian cannot be obtained by choosing a suitable $r$ in the framework we introduced above, but will be useful to consider in practical applications. Analogously, we define the signless symmetrically normalised graph Laplacian by, for all $i\in V$, \[ (\Delta_{s}^+u)_i := \frac{1}{\sqrt{d_i}}\sum_{j \in V}\omega_{ij}\left(\frac{u_i}{\sqrt{d_i}} + \frac{u_j}{\sqrt{d_j}}\right). \] There is a canonical way to represent a function $u\in \mathcal{V}$ by a vector in ${\vz R}^n$ with components $u_i$. The operators $\Delta_r$ and $\Delta_r^+$ can then be represented by the $n\times n$ matrices $L_r := D^{1-r} - D^{-r}A$ and $L^+_r := D^{1-r} + D^{-r}A$, respectively. Here $D$ is the degree matrix, i.e. the diagonal matrix with diagonal entries $D_{ii}:=d_i$, and $A$ is the weighted adjacency matrix with entries $A_{ij} := \omega_{ij}$. Similarly the operators $\Delta_s$ and $\Delta_s^+$ are then represented by $L_s := I - D^{-1/2}AD^{-1/2}$ and $L_s^+ := I + D^{-1/2}AD^{-1/2}$, respectively, where $I$ denotes the $n\times n$ identity matrix. Any eigenvalue-eigenvector pair $(\lambda,v)$ of $L_r$, $L_r^+$, $L_s$, $L_s^+$ corresponds via the canonical representation to an eigenvalue-eigenfunction pair $(\lambda, \phi)$ of $\Delta_r$, $\Delta_r^+$, $\Delta_s$, $\Delta_s^+$, respectively. We refer to the eigenvalue-eigenvector pair $(\lambda,v)$ as an eigenpair. For a vertex set $S\subset V$, we define the indicator function (or characteristic function) \[ (\chi_S)_i:= \begin{cases} 1, & \text{if} \quad i \in S,\\ 0, & \text{if} \quad i \notin S.\\ \end{cases}\] We define the inner product norms $\|u\|_{\mathcal{V}} := \sqrt{{\langle u,u \rangle}_{\mathcal{V}}}, \: \|\phi\|_{\mathcal{E}} := \sqrt{{\langle \phi,\phi \rangle}_{\mathcal{E}}}$ which we use to define the Dirichlet energy and signless Dirichlet energy, \[ \frac12 \|\nabla u\|_{\mathcal{E}}^2 = \frac{1}{4}\displaystyle\sum_{i,j \in V} \omega_{ij}(u_i - u_j)^2 \quad \text{and} \quad \frac12 \|\nabla^+ u\|_{\mathcal{E}}^2 = \frac{1}{4}\displaystyle\sum_{i,j \in V} \omega_{ij}(u_i + u_j)^2.
\] In particular we recognise that the graph Ginzburg-Landau functional $f_\e:\mathcal{V}\to {\vz R}$ from \eqref{eq:graphGL} and the signless graph Ginzburg-Landau functional $f_\e^+:\mathcal{V}\to {\vz R}$ from \eqref{eq:signlessgraphGL} can be written as \[ f_\e(u) = \|\nabla u\|_{\mathcal{E}}^2 + \frac1\e \sum_{i\in V} W(u_i) \quad \text{and} \quad f_\e^+(u) = \|\nabla^+ u\|_{\mathcal{E}}^2 + \frac1\e \sum_{i\in V} W(u_i). \] It is interesting to note here an important difference between the functionals $f_{\e}$ and $f_{\e}^+$. Most of the results that are derived in the literature for $f_\e$ (such as the $\Gamma$-convergence results in \cite{van2012}) do not crucially depend on the specific locations of the wells of $W$. For example, in $f_\e$ the wells are often chosen to be at $0$ and $1$, instead of at $-1$ and $1$. However, for $f_\e^+$ we have less freedom to choose the wells without drastically altering the properties of the functional. The wells have to be placed symmetrically with respect to $0$, because we want $(u_i+u_j)^2$ to be zero when $u_i$ and $u_j$ are located in different wells. In particular, we see that placing a well at $0$ would have the undesired consequence of introducing the trivial minimizer $u=0$. This points to a second, related, difference. Whereas minimization of $f_\e$ in the absence of any further constraints or additional terms in the functional leads to trivial minimizers of the form $u=c \chi_V$, where $c\in {\vz R}$ is one of the values of the wells of $W$ (so $c\in \{-1,1\}$ for our choice of $W$), minimizers of $f_\e^+$ are not constant if the graph has more than one vertex. The following lemma gives the details. \begin{lemma} Let $G\in\mathcal{G}$ with $n\geq 2$, let $\e>0$, and let $u$ be a minimizer of $f_\e^+: \mathcal{V}\to{\vz R}$ as in \eqref{eq:signlessgraphGL}. Then $u$ is not a constant function. \end{lemma} \begin{proof} Let $c\in{\vz R}$ and $i^*\in V$. Define the functions $u, \bar u\in \mathcal{V}$ by $u:=c\chi_V$ and \[ \bar u_i :=\begin{cases} c, & \text{if } i\neq i^*,\\ -c, & \text{if } i=i^*.\end{cases} \] Since $W$ is an even function, we have $\sum_{i\in V} W(\bar u_i) = \sum_{i\in V} W(u_i)$. Moreover, since for all $j\in V$, $\omega_{i^*j}=0$ or $\bar u_j=-\bar u_{i^*}$, the terms involving $i^*$ vanish and we have \[ \|\nabla^+ \bar u\|_{\mathcal{E}}^2 = \frac12 \sum_{\substack{i\in V \\ i\neq i^*}} \sum_{\substack{j\in V \\ j\neq i^*}} \omega_{ij} (2c)^2 < \frac12 \sum_{i,j\in V} \omega_{ij} (2c)^2 = \|\nabla^+ u\|_{\mathcal{E}}^2. \] The inequality is strict, because by assumption $G$ has no isolated nodes and thus there is a $j\in V$ such that $\omega_{i^*j}>0$. We conclude that $f_\e^+(\bar u) < f_\e^+(u)$, which proves that $u$ is not a minimizer of $f_\e^+$. \end{proof} We define the graph total variation $\textnormal{TV}: \mathcal{V}\to {\vz R}$ as \begin{equation}\label{eq:graphTV} \textnormal{TV}(u):= \max\{\langle u, \textnormal{div}\ \varphi\rangle_{\mathcal{V}}: \varphi\in \mathcal{E}, \forall i,j\in V\ |\varphi_{ij}|\leq 1\} = \frac12 \sum_{i,j\in V} \omega_{ij}^q |u_i-u_j|. \end{equation} The second expression follows since the maximum in the definition is achieved by $\varphi = \textnormal{sgn}(\nabla u)$ \cite{van2014}. We can define an analogous (signless total variation) functional $\textnormal{TV}^+:\mathcal{V}\to {\vz R}$, using the signless divergence: \[ \textnormal{TV}^+(u):= \max\{\langle u, \textnormal{div}^+\ \varphi\rangle_{\mathcal{V}}: \varphi\in \mathcal{E}, \forall i,j\in V\ |\varphi_{ij}|\leq 1\}. 
\] \begin{lemma} Let $u\in \mathcal{V}$. Then $\textnormal{TV}^+(u) = \frac12 \sum_{i,j\in V} \omega_{ij}^q |u_i+u_j|$. \end{lemma} \begin{proof} Let $\varphi\in \mathcal{E}$ such that, for all $i,j\in V$, $|\varphi_{ij}|\leq 1$. We compute \begin{align*} \langle u, \textnormal{div}^+\ \varphi\rangle_{\mathcal{V}} &= \frac12 \sum_{i,j\in V} \omega_{ij}^q u_i (\varphi_{ji}+\varphi_{ij}) = \frac12 \sum_{i,j\in V} \omega_{ij}^q \varphi_{ij} (u_i+u_j)\\ &\leq \frac12 \sum_{i,j\in V} \omega_{ij}^q |\varphi_{ij}| |u_i+u_j| \leq \frac12 \sum_{i,j\in V} \omega_{ij}^q |u_i+u_j|. \end{align*} Moreover, since $\varphi = \textnormal{sgn}(\nabla^+u)$ is an admissible choice for $\varphi$ and \[ \langle u, \textnormal{div}^+\,\textnormal{sgn}\left(\nabla^+u\right)\rangle_{\mathcal{V}} = \frac12 \sum_{i,j\in V} \omega_{ij}^q |u_i+u_j|, \] the result follows. \end{proof} Note that the total variation functional that was mentioned in Section~\ref{sec:GL} corresponds to the choice $q=1$ in \eqref{eq:graphTV}. This is the relevant choice for this paper and hence from now on we will assume that $q=1$. Note that the choice of $q$ does not have any influence on the form of the graph (signless) Laplacians. One consequence of the choice $q=1$ is that $\mathrm{TV}$ and $\mathrm{TV}^+$ are now closely connected to cut sizes: If $S\subset V$ and $C=S|S^c$ is the cut induced by $S$, then \begin{equation}\label{eq:TVandcut} \textnormal{TV}\left(\chi_S-\chi_{S^c}\right) = 2 \textnormal{TV}\left(\chi_S\right) = 2 s(C) \quad \text{and} \quad \textnormal{TV}^+\left(\chi_S-\chi_{S^c}\right) = \sum_{i,j\in V} \omega_{ij} - 2 s(C). \end{equation} We will give a precise definition of $s(C)$ in Definition~\ref{def:sizeofcut} below. \begin{definition} Let $G \in \mathcal{G}$. Then $G$ is bipartite if and only if there exist $A \subset V$, $B \subset V$, such that all the conditions below are satisfied: \begin{itemize} \item $A \cup B= V$, \item $A \cap B= \emptyset$, and \item for all $(i,j)\in E$, $i\in A$ and $j\in B$, or $i\in B$ and $j\in A$. \end{itemize} In that case we say that $G$ has a bipartition $(A,B)$. \end{definition} \begin{definition} An Erd\H{o}s-R\'enyi graph $G(n,p)$ is a realization of a random graph generated by the Erd\H{o}s-R\'enyi model, i.e. it is an unweighted, undirected, simple graph with $n$ nodes, in which, for all unordered pairs $\{i,j\}$ of distinct $i,j \in V$, an edge $(i,j) \in E$ has been generated with probability $p\in [0,1]$. \end{definition} \subsection{Spectral properties of the (signless) graph Laplacians} We consider the Rayleigh quotients for $\Delta_r$ and $\Delta_r^+$ defined, for $u\in \mathcal{V}\setminus\{0\}$, as \begin{align*} R(u) &:= \frac{{\langle u,\Delta_r u\rangle}_{\mathcal{V}}}{{\|u\|}_{\mathcal{V}}^2} = \frac{\|\nabla u\|_{\mathcal{E}}^2}{{\|u\|}_{\mathcal{V}}^2} = \frac{\frac{1}{2}\sum_{i,j \in V} \omega_{ij} (u_i - u_j)^2}{\sum_{i \in V} d_i^r u_i^2},\\ R^+(u) &:= \frac{{\langle u,\Delta_r^+ u\rangle}_{\mathcal{V}}}{{\|u\|}_{\mathcal{V}}^2} = \frac{\|\nabla^+ u\|_{\mathcal{E}}^2}{{\|u\|}_{\mathcal{V}}^2} = \frac{\frac{1}{2}\sum_{i,j \in V} \omega_{ij} (u_i + u_j)^2}{\sum_{i \in V} d_i^r u_i^2}, \end{align*} respectively. By Proposition~\ref{prop:selfadjoint}, $\Delta_r$ and $\Delta_r^+$ are self-adjoint and positive-semidefinite operators on $\mathcal{V}$, so their eigenvalues will be real and non-negative. The eigenvalues of $\Delta_r$ and $\Delta_r^+$ are linked to the extremal values of their Rayleigh quotients by the min-max theorem \cite{courant1965,golub2012}. 
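The matrix representations introduced above are straightforward to assemble. The following minimal sketch (in Python with NumPy; an illustration only, not the MATLAB implementation used for the experiments in Section~\ref{sec:results}) builds $L_r$, $L_r^+$, $L_s$ and $L_s^+$ from a weighted adjacency matrix, assuming $A$ is symmetric and non-negative with zero diagonal and no isolated nodes.
\begin{verbatim}
import numpy as np

def laplacians(A, r=0.0):
    """Matrix representations L_r, L_r^+, L_s, L_s^+ built from a
    weighted adjacency matrix A (symmetric, non-negative, zero
    diagonal, no isolated nodes, so all degrees are positive)."""
    d = A.sum(axis=1)                                    # degrees d_i
    n = A.shape[0]
    L_r = np.diag(d ** (1 - r)) - np.diag(d ** -r) @ A   # D^{1-r} - D^{-r} A
    L_r_plus = np.diag(d ** (1 - r)) + np.diag(d ** -r) @ A
    D_half = np.diag(d ** -0.5)
    L_s = np.eye(n) - D_half @ A @ D_half                # I - D^{-1/2} A D^{-1/2}
    L_s_plus = np.eye(n) + D_half @ A @ D_half
    return L_r, L_r_plus, L_s, L_s_plus

# Example: the path graph on three nodes.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
L0, L0p, Ls, Lsp = laplacians(A, r=0.0)
\end{verbatim}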
In particular, if we denote by $0\leq \lambda_1 \leq \ldots \leq \lambda_n$ the (possibly repeated) eigenvalues of $\Delta_r^+$, then the min-max theorem gives $\lambda_1=\underset{u \in \mathcal{V} \backslash \{0\}}{\mathrm{min}} \: R^+(u)$ and $\lambda_n=\underset{u \in \mathcal{V} \backslash \{0\}}{\mathrm{max}} \: R^+(u)$. In the following proposition we extend a well-known result for the graph Laplacians \cite{von2007} to include signless graph Laplacians. \begin{proposition}\label{prop:eigenpairs} Let $r\in [0,1]$. The following statements are equivalent: \begin{enumerate} \item $\lambda$ is an eigenvalue of $L_1$ with corresponding eigenvector $v$; \item $\lambda$ is an eigenvalue of $L_s$ with corresponding eigenvector $D^{1/2}v$; \item $2-\lambda$ is an eigenvalue of $L_1^+$ with corresponding eigenvector $v$; \item $2-\lambda$ is an eigenvalue of $L_s^+$ with corresponding eigenvector $D^{1/2} v$; \item $\lambda$ and $v$ are solutions of the generalized eigenvalue problem $L_r v = \lambda D^{1-r} v$. \end{enumerate} \end{proposition} \begin{proof} For $r=1$ the matrix representations of the graph Laplacian and signless graph Laplacian satisfy $L^+_1 = I + D^{-1}A = 2I - (I - D^{-1}A) = 2I - L_1$. Hence $\lambda$ is an eigenvalue of $L_1$ with corresponding eigenvector $v$ if and only if $2 - \lambda$ is an eigenvalue of $L^+_1$ with the same eigenvector. Because $L_s = D^{1/2} L_1 D^{-1/2}$, $\lambda$ is an eigenvalue of $L_1$ with eigenvector $v$ if and only if $\lambda$ is an eigenvalue of $L_s$ with eigenvector $D^{1/2}v$. Moreover, since $L_s^+ = 2I - L_s$, we have that $2-\lambda$ is an eigenvalue of $L_s^+$ with eigenvector $D^{1/2}v$ if and only if $\lambda$ is an eigenvalue of $L_s$ with eigenvector $D^{1/2}v$. Finally, for $r\in [0,1]$, we have $L_r = D^{1-r}L_1$, hence $\lambda$ is an eigenvalue of $L_1$ with corresponding eigenvector $v$ if and only if $L_r v = \lambda D^{1-r} v$. \end{proof} Inspired by Proposition~\ref{prop:eigenpairs}, we define, for a given graph $G\in \mathcal{G}$ and node subset $S\subset V$, the rescaled indicator function $\tilde \chi_S \in \mathcal{V}$, by, for all $j\in V$, \begin{equation}\label{eq:rescaledindicator} \left(\tilde \chi_S\right)_j:= d^{\frac12}_j \left(\chi_S\right)_j. \end{equation} \begin{proposition}\label{prop:connected} The graph $G=(V,E,\omega)\in \mathcal{G}$ has $k$ connected components if and only if $\Delta\in \{\Delta_r, \Delta_s\}$ ($r\in [0,1]$) has eigenvalue $0$ with algebraic and geometric multiplicity equal to $k$. In that case, the eigenspace corresponding to the $0$ eigenvalue is spanned by \begin{itemize} \item the indicator functions $\chi_{S_i}$, if $\Delta=\Delta_r$, or \item the rescaled indicator functions $\tilde \chi_{S_i}$ (as in \eqref{eq:rescaledindicator}), if $\Delta=\Delta_s$. \end{itemize} Here the node subsets $S_i \subset V$, $i\in \{1, \ldots, k\}$, are such that each connected component of $G$ is the subgraph induced by an $S_i$. \end{proposition} \begin{proof} We follow the proof in \cite{von2007}. First we consider the case where $\Delta=\Delta_r$, $r\in [0,1]$. We note that $\Delta_r$ is diagonalizable in the $\mathcal{V}$ inner product and thus the algebraic multiplicity of any of its eigenvalues is equal to its geometric multiplicity. In this proof we will thus refer to both simply as `multiplicity'. For any function $u \in \mathcal{V}$ we have that $\displaystyle \langle u, \Delta_r u \rangle_{\mathcal{V}} = \frac{1}{2}\sum_{i,j \in V}\omega_{ij}(u_i - u_j)^2.$ We have that $0$ is an eigenvalue if and only if there exists a $u\in \mathcal{V}\setminus\{0\}$ such that \begin{equation}\label{eq:uDeltauzero} \langle u, \Delta_r u \rangle_{\mathcal{V}} = 0. \end{equation} This condition is satisfied if and only if, for all $i,j\in V$ for which $\omega_{ij}>0$, $u_i=u_j$. Now assume that $G$ is connected (hence $G$ has $k=1$ connected component), then \eqref{eq:uDeltauzero} is satisfied if and only if, for all $i,j\in V$, $u_i = u_j$. Therefore any eigenfunction corresponding to the eigenvalue $\lambda_1 = 0$ has to be constant, e.g. $u = \chi_V$. In particular, the multiplicity of $\lambda_1$ is 1. Now assume that $G$ has $k \geq 2$ connected components, and let $S_i$, $i\in \{1, \ldots, k\}$ be the node sets corresponding to the connected components of the graph. Via a suitable reordering of nodes $G$ will have a graph Laplacian matrix of the form \[ L_r = \begin{pmatrix} L_r^{(1)} & 0 & \cdots & 0 \\ 0 & L_r^{(2)} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & L_r^{(k)} \end{pmatrix}, \] where each matrix $L_r^{(i)}$, $i \in \{1,\dots,k\}$ corresponds to $\Delta_r$ restricted to the connected component induced by $S_i$. This restriction is itself a graph Laplacian for that component. Because each $L_r^{(i)}$ has eigenvalue zero with multiplicity $1$, $L_r$ (and thus $\Delta_r$) has eigenvalue 0 with multiplicity $k$. We can choose the eigenvectors equal to $\chi_{S_i}$ for $i \in \{1,\dots,k\}$ by a similar argument as in the $k=1$ case. Conversely, if $\Delta_r$ has eigenvalue $0$ with multiplicity $k$, then $G$ has $k$ connected components, because if $G$ has $l\neq k$ connected components, then by the proof above the eigenvalue $0$ has multiplicity $l\neq k$. For $\Delta_s$ we use Proposition~\ref{prop:eigenpairs} to find that the eigenvalues are the same as those of $\Delta_r$, with the corresponding eigenfunctions rescaled as stated in the result. \end{proof} \begin{proposition}\label{prop:signlessspectrum} Let $G=(V,E,\omega)\in \mathcal{G}$ have $k$ connected components and let the node subsets $S_i\subset V$, $i\in \{1, \ldots, k\}$ be such that each connected component is the subgraph induced by one of the $S_i$. We denote these subgraphs by $G_i$. Let $\Delta^+ \in \{\Delta_r^+, \Delta_s^+\}$ ($r\in [0,1]$) and let $0\leq k'\leq k$. Then $\Delta^+$ has an eigenvalue equal to 0 with algebraic and geometric multiplicity $k'$ if and only if $k'$ of the subgraphs $G_i$ are bipartite. In that case, assume the labelling is such that $G_i$, $i\in \{1, \ldots, k'\}$ are bipartite with bipartition $(T_i, S_i\setminus T_i)$, where $T_i\subset S_i$. Then the eigenspace corresponding to the 0 eigenvalue is spanned by \begin{itemize} \item the indicator functions $\chi_{T_i} - \chi_{S_i\setminus T_i}$, if $\Delta^+ = \Delta_r^+$, or \item the rescaled indicator functions $\tilde \chi_{T_i}-\tilde \chi_{S_i\setminus T_i}$ (as in \eqref{eq:rescaledindicator}), if $\Delta^+=\Delta_s^+$. \end{itemize} \end{proposition} \begin{proof} First we consider the case where $\Delta^+ = \Delta_r^+, r \in [0,1]$. For any function $u \in \mathcal{V}$ we have that \[ \langle u, \Delta_r^+ u \rangle_{\mathcal{V}} = \frac{1}{2}\sum_{i,j \in V}\omega_{ij}(u_i + u_j)^2. \] Let $k=1$. Then $0$ is an eigenvalue if and only if there exists $u\in \mathcal{V}\setminus\{0\}$ such that $\langle u, \Delta_r^+ u \rangle_{\mathcal{V}} = 0$. 
This condition is satisfied if and only if, for all $i,j\in V$ for which $\omega_{ij}>0$, we have \begin{equation}\label{eq:ui=-uj} u_i=-u_j. \end{equation} We claim that this condition in turn is satisfied if and only if $G$ is bipartite. To prove the `if' part of that claim, assume $G$ is bipartite with bipartition $(A,A^c)$ for some $A\subset V$, and define $u\in \mathcal{V}$ such that $u|_A=-1$ and $u|_{A^c}=1$; then $u\neq 0$ and $u$ satisfies \eqref{eq:ui=-uj}. To prove the `only if' statement, assume $G$ is not bipartite, then there exists an odd cycle in $G$ \cite[Theorem 1.4]{bollobas2013}. Let $i\in V$ be a vertex on this cycle, then by applying condition \eqref{eq:ui=-uj} to all the vertices of the cycle, we find $u_i=-u_i$, hence $u_i=0$. Since $G$ is connected, it now follows, by applying condition \eqref{eq:ui=-uj} to all vertices in $V$, that $u=0$, which is a contradiction. The argument above also shows that, if $G$ is bipartite with bipartition $(A,A^c)$, then any eigenfunction corresponding to $\lambda_1=0$ is proportional to $u = \chi_A - \chi_{A^c}$. Therefore the eigenvalue 0 has geometric multiplicity 1. Since $\Delta_r^+$ is diagonalizable in the $\mathcal{V}$ inner product, the algebraic multiplicity of $\lambda_1$ is equal to its geometric multiplicity. Now let $k \ge 2$ and let $S_i$, $i\in \{1, \ldots, k\}$ be the node sets corresponding to the connected components of the graph. Via a suitable reordering of nodes the graph $G$ will have a signless Laplacian matrix of the form \[ L_r^+ = \begin{pmatrix} L^{(1)+} & 0 & \cdots & 0 \\ 0 & L^{(2)+} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & L^{(k)+} \end{pmatrix}, \] where each matrix $L^{(i)+}$, $i \in \{1,\dots,k\}$ corresponds to $\Delta_r^+$ restricted to the connected component induced by $S_i$. This restriction is itself a signless Laplacian for that connected component. Hence, we can apply the case $k=1$ to each component separately to find that the (algebraic and geometric) multiplicity of the eigenvalue 0 of $\Delta_r^+$ is equal to the number of connected components which are also bipartite. If $G$ has $k'\leq k$ connected components which are also bipartite, then, without loss of generality, assume that these components correspond to $S_i$, $i\in \{1, \ldots, k'\}$. Then the corresponding eigenspace is spanned by functions $u^{(i)} = \chi_{T_i} - \chi_{S_i\setminus T_i} \in \mathcal{V}$, $i\in \{1, \ldots, k'\}$, where $T_i\subset S_i$ and $(T_i, S_i\setminus T_i)$ is the bipartition of the bipartite component induced by $S_i$. For $\Delta_s^+$ we use Proposition~\ref{prop:eigenpairs} to find the appropriately rescaled eigenfunctions as given in the result. \end{proof} \begin{corollary} The eigenvalues of $\Delta_1$, $\Delta^+_1$, $\Delta_s$, and $\Delta_s^+$ are in $[0,2]$. \end{corollary} \begin{proof} For $\Delta_1$ the proof can be found in \cite[Lemma 2.5]{van2014}. For completeness we reproduce it here. By Proposition~\ref{prop:selfadjoint} we know that $\Delta_1$ has non-negative eigenvalues. The upper bound is obtained by maximizing the Rayleigh quotient $R(u)$ over all nonzero $u\in \mathcal{V}$. 
Since $(u_i - u_j)^2 \leq 2(u_i^2 + u_j^2)$ we have that \begin{align*} \underset{u \in \mathcal{V}\setminus\{0\}}{\mathrm{max}}\ R(u) &= \underset{u \in \mathcal{V}\setminus\{0\}}{\mathrm{max}} \frac{\frac{1}{2}\sum_{i,j \in V} \omega_{ij} (u_i - u_j)^2}{\sum_{i \in V} d_i u_i^2} \leq \underset{u \in \mathcal{V}\setminus\{0\}}{\mathrm{max}} \frac{2 \sum_{i \in V}d_i u_i^2}{\sum_{i \in V}d_i u_i^2} = 2.\\ \end{align*} From Proposition~\ref{prop:eigenpairs} it then follows that the eigenvalues of the other operators are in $[0,2]$ as well. \end{proof} \section{The Max-Cut problem and Goemans-Williamson algorithm}\label{sec:MCGW} \subsection{Maximum cuts} In order to identify candidate solutions to the Max-Cut problem with node functions in $\mathcal{V}$, we define the subset of binary $\{-1,1\}$-valued node functions, \[ \mathcal{V}^b := \{u \in \mathcal{V}: \forall \: i \in V, u_i \in \{-1,1\}\}. \] For a given function $u \in \mathcal{V}^b$ we define the sets $V_k := \{i \in V: u_i = k\}$ for $k \in \{-1,1\}$. We say that the partition $C = V_{-1}|V_1$ is the cut induced by $u$. We define the set of all possible cuts, $\mathcal{C} := \{C: \text{there exists a } u\in \mathcal{V}^b \text{ such that } u \text{ induces the cut } C\}$. \begin{definition}\label{def:sizeofcut} Let $G = (V,E,\omega)\in \mathcal{G}$ and let $V_1$ and $V_{-1}$ be two disjoint subsets of $V$. The size of the cut $C = V_{-1}|V_1$ is \[ s(C) := \sum_{\substack{i \in V_{-1}\\ j \in V_1}}\omega_{ij}. \] A maximum cut of $G$ is a cut $C^*\in \mathcal{C}$ such that, for all cuts $C\in\mathcal{C}$, $s(C) \leq s(C^*)$. The size of the maximum cut is \[ \mc{G} := \underset{C\in \mathcal{C}}\max\ s(C) . \] \end{definition} Note that if the cut $C$ in Definition~\ref{def:sizeofcut} is induced by $u \in \mathcal{V}^b$, then \begin{equation}\label{eq:cutLaplacian} s(C) = \frac{1}{4}\langle u,\Delta_r u \rangle_{\mathcal{V}}. \end{equation} Moreover, if $C=\emptyset|V_1$ or $C=V_{-1}|\emptyset$, then $s(C)=0$. \begin{definition}[Max-Cut problem] Given a simple, undirected graph $G=(V,E,\omega)\in \mathcal{G}$, find a maximum cut for $G$. \end{definition} For a given $G\in \mathcal{G}$ we have $|E|<\infty$, hence a maximum cut for $G$ exists, but note that this maximum cut need not be unique. The cardinality of the set $\mathcal{V}^b$ is equal to the total number of ways a set of $n$ elements can be partitioned into two disjoint subsets, i.e. $|\mathcal{V}^b| = 2^n$. This highlights the difficulty of finding $\mc{G}$ as $n$ increases. It has been proven that the Max-Cut problem is NP-hard \cite{garey1979}. Obtaining a performance guarantee of $\frac{16}{17}$ or better is also NP-hard \cite{trevisan2000}. The problem of determining if a cut of a given size exists on a graph is NP-complete \cite{karp}. \subsection{The Goemans-Williamson algorithm} The leading algorithm for polynomial time Max-Cut approximation is the Goemans-Williamson (GW) algorithm \cite{goemans1995}, which we present in detail below in Algorithm~\ref{alg:GW}. A problem equivalent to the Max-Cut problem is to find a maximizer which achieves \[ \underset{u}{\mathrm{max}} \: \frac{1}{2} \sum_{i,j\in V}\omega_{ij}(1 - u_iu_j) \quad \text{subject to } \forall i \in V,\ u_i \in \{-1,1\}. \] The GW algorithm solves a relaxed version of this integer quadratic program, in which each $u_i$ is allowed to be an $n$-dimensional vector with unit Euclidean norm and the product $u_iu_j$ is replaced by the Euclidean inner product $u_i\cdot u_j$. 
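To make the objects above concrete, the sketch below (Python with NumPy; an illustration only) scores a candidate $u \in \mathcal{V}^b$ via \eqref{eq:cutLaplacian} and carries out the exhaustive search over all $2^n$ sign patterns, which the preceding discussion shows to be intractable for all but very small $n$; the relaxation discussed next avoids this cost.
\begin{verbatim}
import numpy as np
from itertools import product

def cut_size(A, u):
    """Size s(C) of the cut induced by u in {-1,1}^n, computed as
    (1/4) u^T (D - A) u; this equals (1/4) <u, Delta_r u>_V for
    every r, since the d_i^r weights cancel."""
    L0 = np.diag(A.sum(axis=1)) - A
    return 0.25 * u @ (L0 @ u)

def max_cut_brute_force(A):
    """Exhaustive search over all 2^n candidate cuts; only feasible
    for very small n."""
    n = A.shape[0]
    return max(cut_size(A, np.array(signs))
               for signs in product([-1.0, 1.0], repeat=n))
\end{verbatim}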
In \cite{goemans1995} it is proved that the optimal value of this vector relaxation is an upper bound on that of the original integer quadratic program. This relaxed problem is equivalent to finding a maximizer which achieves \begin{equation}\label{eq:ZP} Z^*_P := \underset{Y}{\mathrm{max}} \: \frac{1}{2}\sum_{i,j\in V, i < j}\omega_{ij}(1 - y_{ij}), \end{equation} where the maximization is over all $n$ by $n$ real positive-semidefinite matrices $Y=(y_{ij})$ with ones on the diagonal. This semidefinite program has an associated dual problem of finding a minimizer which achieves \begin{equation}\label{eq:ZD} Z^*_D := \frac{1}{2}\sum_{i,j \in V}\omega_{ij} + \frac{1}{4}\underset{\gamma\in {\vz R}^n}{\mathrm{min}}\sum_{i \in V} \gamma_i, \end{equation} subject to $A + \mathrm{diag}(\gamma)$ being positive-semidefinite, where $A$ is the adjacency matrix of $G$ and $\mathrm{diag}(\gamma)$ is the diagonal matrix with diagonal entries $\gamma_i$. As mentioned in Section~\ref{sec:maximumcut}, Algorithm~\ref{alg:GW} is proven to have a performance guarantee of $0.878$. In Algorithm~\ref{alg:GW} below, we use the unit sphere $S_n := \{x\in {\vz R}^n: \|x\| = 1\}$, where $\|\cdot\|$ denotes the Euclidean norm on ${\vz R}^n$. For vectors $w, \tilde w \in {\vz R}^n$, $w\cdot \tilde w$ denotes the Euclidean inner product. \begin{asm}{(GW)} \KwData{The weighted adjacency matrix $A$ of a graph $G\in \mathcal{G}$, and a tolerance $\nu$.} \textbf{ Relaxation step:} Use semidefinite programming to find approximate solutions $\tilde Z^*_P$ and $\tilde Z^*_D$ to \eqref{eq:ZP} and \eqref{eq:ZD}, respectively, which satisfy $|\tilde Z^*_P - \tilde Z^*_D|< \nu$. Use an incomplete Cholesky decomposition on the matrix $Y$ that achieves $\tilde Z^*_P$ in \eqref{eq:ZP} to find an approximate solution to \[ w^* \in \underset{w \in (S_n)^n}\mathrm{argmax}\ \frac{1}{2} \sum_{\substack{1\leq i,j\leq n\\ i<j}} \omega_{ij}(1 - w_i\cdot w_j). \] \medskip \textbf{ Hyperplane step:} Let $r\in S_n$ be a random vector drawn from the uniform distribution on $S_n$. Define the cut $C:=V_{-1}|V_1$, where \[ V_1 := \{i \in V | w_i \cdot r \geq 0\} \quad \text{ and } \quad V_{-1}: = V \setminus V_1. \] \caption{\label{alg:GW} The Goemans-Williamson algorithm} \end{asm} Other polynomial time Max-Cut approximation algorithms can be found in \cite{bylka1999,trevisan2012}. Because of the high proven performance guarantee of \ref{alg:GW}, we focus on comparing our algorithm against it. In \cite{trevisan2012} the author uses the eigenvector corresponding to the smallest eigenvalue of $\Delta_0^+$, showing that thresholding this eigenvector in a particular way achieves a Max-Cut performance guarantee of $\beta = 0.531$, which with further analysis was improved to $\beta = 0.614$ \cite{soto2015}. Algorithms which provide a solution in polynomial time exist if the graph is planar \cite{hadlock1975}, if the graph is a line graph \cite{guruswami1999}, or if the graph is weakly bipartite \cite{grotschel1981}. Comparing against \cite{trevisan2012,hadlock1975,guruswami1999,grotschel1981} is a topic of future research. \section{$\Gamma$-convergence of $f_{\varepsilon}^+$}\label{sec:fe+} In \eqref{eq:signlessgraphGL} we introduced the signless Ginzburg-Landau functional $f_\e^+: \mathcal{V}\to {\vz R}$. In this section we prove that minimizers of $f_\e^+$ converge to solutions of the Max-Cut problem, using the tools of $\Gamma$-convergence \cite{braides2002}. We need a concept of convergence in $\mathcal{V}$. 
Since we can identify $\mathcal{V}$ with ${\vz R}^n$ and all norms on $\mathbb{R}^n$ are topologically equivalent, the choice of a particular norm is not of great importance. For definiteness, however, we say that a sequence $\{u_k\}_{k\in {\vz N}} \subset \mathcal{V}$ converges to a $u_{\infty} \in \mathcal{V}$ in $\mathcal{V}$ if and only if $\|\hat{u}_k - \hat{u}_{\infty}\|_{\mathcal{V}} \to 0$ as $ k \to \infty$, where $\hat{u}_k,\hat{u}_{\infty} \in \mathbb{R}^n$ are the canonical vector representations of $u_k$, $u_\infty$, respectively. We will prove that $f_\e^+$ $\Gamma$-converges to the functional $f_0^+: \mathcal{V} \to {\vz R}\cup\{+\infty\}$, which is defined as \begin{equation}\label{eq:f0+} f_0^+ (u) := \begin{cases} \sum_{i,j \in V} \omega_{ij} |u_i + u_j|, & \text{if } u \in \mathcal{V}^b, \\ +\infty, & \text{if } u \in \mathcal{V} \setminus \mathcal{V}^b. \end{cases} \end{equation} \begin{lemma}\label{lem:MaxCut} Let $G\in \mathcal{G}$. For every $u\in \mathcal{V}^b$, let $C_u\in \mathcal{C}$ be the cut induced by $u$, then for all $u\in \mathcal{V}$, \[ f_0^+(u) = \begin{cases} 2\sum_{i,j \in V} \omega_{ij} - 4s(C_u), &\text{if } u \in \mathcal{V}^b,\\ +\infty, &\text{if } u \in \mathcal{V} \setminus \mathcal{V}^b. \end{cases} \] In particular, if $u^*\in \underset{u \in \mathcal{V}}\mathrm{argmin}\, f_0^+(u)$, then $u^*\in \mathcal{V}^b$ and $C_{u^*}$ is a maximum cut of $G$. \end{lemma} \begin{proof} Because, for $u\in \mathcal{V}^b$, $f_0^+(u)=2\textnormal{TV}^+(u)$ (with $q=1$), the result follows by \eqref{eq:TVandcut}. \end{proof} \begin{lemma}\label{lem:minfunc} Let $G\in\mathcal{G}$ and $\e>0$. There exist minimizers for the functionals $f_\e^+: \mathcal{V}\to{\vz R}$ and $f_0^+: \mathcal{V}\to {\vz R}\cup\{+\infty\}$ from \eqref{eq:signlessgraphGL} and \eqref{eq:f0+}, respectively. Moreover, if $u\in \mathcal{V}$ is a minimizer of $f_0^+$, then $u\in \mathcal{V}^b$. \end{lemma} \begin{proof} The potential $W$ satisfies a coercivity condition in the following sense. There exist $C_1>0$ and $C_2>0$ such that, for all $x\in{\vz R}$, \begin{equation}\label{eq:Wcoerc} |x|\geq C_1 \Rightarrow C_2(x^2-1)\leq W(x). \end{equation} Combined with the fact that $\|\nabla^+u\|_{\mathcal{E}} \geq 0$, this shows that $f_\e^+$ is coercive. Since $f_\e^+$ is a (multivariate) polynomial, it is continuous. Thus, by the direct method in the calculus of variations \cite[Theorem 1.15]{dalmaso1993} $f_\e^+$ has a minimizer in $\mathcal{V}$. Since $n\geq 1$, $\mathcal{V}^b\neq \emptyset$ and thus $\underset{u\in\mathcal{V}}\inf\, f_0^+(u)<+\infty$. In particular, any minimizer of $f_0^+$ has to be in $\mathcal{V}^b$. Since $|\mathcal{V}^b|<\infty$ the minimum is achieved. \end{proof} \begin{lemma}\label{lem:Gammaconvergence} Let $G\in \mathcal{G}$ and let $f_\e^+$ and $f_0^+$ be as in \eqref{eq:signlessgraphGL} and \eqref{eq:f0+}, respectively. 
Then $f_\e^+$ $\Gamma$-converges to $f_0^+$ as $\e \downarrow 0$ in the following sense: If $\{\e_k\}_{k\in{\vz N}}$ is a sequence of positive real numbers such that $\e_k\downarrow 0$ as $k\to \infty$ and $u_0\in \mathcal{V}$, then the following lower bound and upper bound conditions are satisfied: \begin{itemize} \item[(LB)] for every sequence $\{u_k\}_{k=1}^{\infty}\subset \mathcal{V}$ such that $u_k \rightarrow u_0$ as $k\to \infty$, it holds that $f_0^+(u_0) \leq \underset{k\to\infty}\liminf\, f_{\e_k}^+(u_k)$; \item[(UB)] there exists a sequence $\{u_k\}_{k=1}^{\infty}\subset \mathcal{V}$ such that $u_k\to u_0$ as $k\to\infty$ and $f_0^+(u_0)\geq \underset{k\to\infty}\limsup\, f_{\e_k}^+(u_k)$. \end{itemize} \end{lemma} \begin{proof} This proof is an adaptation of the proofs in \cite[Section 3.1]{van2012}. Note that \[ f_\e^+(u) = \frac12 \sum_{i,j\in V} \omega_{ij} (u_i+u_j)^2 + w_\e(u), \] where we define $w_\e: \mathcal{V} \to {\vz R}$ by \[ w_{\varepsilon} (u) := \frac{1}{\varepsilon} \sum_{i \in V} W(u_i). \] First we prove that $w_\e$ $\Gamma$-converges to $w_0$ as $\e\downarrow 0$, where \[ w_0(u) := \begin{cases} 0, & \text{if } u \in \mathcal{V}^b,\\ +\infty, &\text{if } u\in \mathcal{V}\setminus\mathcal{V}^b. \end{cases} \] Let $\{\e_k\}_{k\in{\vz N}}$ be a sequence of positive real numbers such that $\e_k\downarrow 0$ as $k\to \infty$, and let $u_0\in \mathcal{V}$. (LB) Note that, for all $u \in \mathcal{V}$, we have $w_{\e}(u) \geq 0$. Let $\{u_k\}_{k=1}^{\infty}$ be a sequence such that $u_k \to u_0$ as $k \to \infty$. First we assume that $u_0 \in \mathcal{V}^b$, then \[ w_0(u_0) = 0 \leq \underset{k\to\infty}\liminf\, w_{\e_k}(u_k). \] Next suppose that $u_0 \in \mathcal{V}\setminus \mathcal{V}^b$, then there is an $i\in V$ such that $(u_0)_i\not\in\{-1,1\}$. Since $u_k\to u_0$ as $k\to\infty$, for every $\eta>0$ there is an $N(\eta)\in {\vz N}$ such that for all $k\geq N(\eta)$ we have that $d_i^r |(u_0)_i - (u_k)_i| < \eta$. Define \[ \bar\eta := \frac12 d_i^r \min\left\{|1-(u_0)_i|, |-1-(u_0)_i|\right\} > 0, \] then, for all $k\geq N(\bar\eta)$, \[ |1-(u_k)_i| \geq \big| |1-(u_0)_i| - |(u_0)_i-(u_k)_i| \big| \geq \frac12 |1-(u_0)_i| >0. \] Similarly, for all $k\geq N(\bar\eta)$, $|-1-(u_k)_i| \geq \frac12 |-1-(u_0)_i| >0$. Hence, for all $k\geq N(\bar\eta)$, the values $(u_k)_i$ remain a fixed positive distance away from both $1$ and $-1$; since the convergent sequence $\{(u_k)_i\}_{k\in{\vz N}}$ is also bounded and $W$ is continuous and strictly positive away from its wells at $\pm 1$, there is a $C>0$ such that, for all $k\geq N(\bar\eta)$, $W((u_k)_i) \geq C$. It follows that \[ \underset{k\to\infty}\liminf\, w_{\e_k}(u_k) \geq \underset{k\to\infty}\liminf\, \frac{1}{\e_k} W((u_k)_i) = \infty = w_0(u_0). \] (UB) If $u_0\in \mathcal{V}\setminus\mathcal{V}^b$, then $w_0(u_0)=+\infty$ and the upper bound condition is trivially satisfied. Now assume $u_0\in \mathcal{V}^b$. Define the sequence $\{u_k\}_{k=1}^{\infty}$ by, for all $k \in \mathbb{N}$, $u_k := u_0$. Then, for all $k\in {\vz N}$, $w_{\e_k}(u_k) = 0$ and thus $\displaystyle \underset{k\to\infty}\limsup\, w_{\e_k}(u_k) = 0 = w_0(u_0). $ This concludes the proof that $w_\e$ $\Gamma$-converges to $w_0$ as $\e\downarrow 0$. It is known that $\Gamma$-convergence is stable under continuous perturbations \cite[Proposition 6.21]{dalmaso1993}, \cite[Remark 1.7]{braides2002}; thus $w_\e + p$ $\Gamma$-converges to $w_0+p$ for any continuous $p:\mathcal{V} \to {\vz R}$. 
Since $u\mapsto \frac12 \sum_{i,j\in V} \omega_{ij} (u_i+u_j)^2$ is a polynomial and hence a continuous function on $\mathcal{V}$, we find that, as $\e\downarrow 0$, $f_\e^+$ $\Gamma$-converges to $g: \mathcal{V}\to {\vz R}\cup\{+\infty\}$, where \[ g(u) := \frac12 \sum_{i,j\in V} \omega_{ij} (u_i+u_j)^2 + w_0(u). \] If $u\in \mathcal{V}\setminus\mathcal{V}^b$, then $g(u)=+\infty$. If $u\in \mathcal{V}^b$, then, for all $i,j\in V$, $(u_i+u_j)^2 = 2|u_i+u_j|$, hence \[ \frac12 \sum_{i,j\in V} \omega_{ij} (u_i+u_j)^2 = \sum_{i,j \in V} \omega_{ij} |u_i+u_j|. \] Thus $g=f_0^+$ and the theorem is proven. \end{proof} \begin{lemma}\label{lem:equicoercivity} Let $G\in \mathcal{G}$ and let $f_\e^+$ be as in \eqref{eq:signlessgraphGL}. Let $\{\varepsilon_k\}_{k\in{\vz N}} \subset (0,\infty)$ be a sequence such that $\varepsilon_k \downarrow 0$ as $k \to \infty$, then the sequence $\{f_{\e_k}^+\}_{k\in{\vz N}}$ satisfies the following equi-coerciveness property: If $\{u_k\}_{k\in{\vz N}} \subset \mathcal{V}$ is a sequence such that there exists $C > 0$ such that, for all $k \in \mathbb{N}$, $f_{\varepsilon_k}^+ (u_k) < C$, then there exists a subsequence $\{u_{k'}\}_{k'\in{\vz N}} \subset \{u_k\}_{k\in{\vz N}}$ and a $u_0 \in \mathcal{V}^b$ such that $u_{k'} \to u_0$ as $k' \to \infty$. \end{lemma} \begin{proof} This proof closely follows \cite[Section 3.1]{van2012}. From the uniform bound $f_{\e_k}^+(u_k) < C$, we have that, for all $k\in{\vz N}$ and all $i\in V$, $0\leq W((u_k)_i) \leq C\e_k$. Because of the coercivity property \eqref{eq:Wcoerc} of $W$, this uniform bound on $W((u_k)_i)$ gives, for all $i\in V$, boundedness of $\{d_i^r(u_k)_i^2\}_{k\in{\vz N}}$ and thus $\{u_k\}_{k\in{\vz N}}$ is bounded in the $\mathcal{V}$-norm. By the Bolzano-Weierstrass theorem there exist a subsequence $\{u_{k'}\}_{k'\in{\vz N}}$ and a $u_0\in\mathcal{V}$ such that $u_{k'}\to u_0$ as $k'\to\infty$. Finally, since $W((u_{k'})_i) \leq C\e_{k'} \to 0$ as $k'\to\infty$ and $W$ is continuous and vanishes only at its wells $\pm 1$, we have, for all $i\in V$, $W((u_0)_i)=0$ and hence $u_0\in \mathcal{V}^b$. \end{proof} With the $\Gamma$-convergence and equi-coercivity results from Lemmas~\ref{lem:Gammaconvergence} and~\ref{lem:equicoercivity}, respectively, in place, we now prove that minimizers of $f_\e^+$ converge to solutions of the Max-Cut problem. \begin{theorem} Let $G\in \mathcal{G}$. Let $\{\e_k\}_{k\in{\vz N}} \subset (0,\infty)$ be a sequence such that $\e_k\downarrow 0$ as $k\to\infty$ and, for each $k\in {\vz N}$, let $f_{\e_k}^+$ be as in \eqref{eq:signlessgraphGL} and let $u_{\e_k}$ be a minimizer of $f_{\e_k}^+$. Then there exists $u_0\in \mathcal{V}^b$ and a subsequence $\{u_{\e_{k'}}\}_{k'\in{\vz N}} \subset \{u_{\e_k}\}_{k\in{\vz N}}$, such that $\|u_{\e_{k'}}-u_0\|_{\mathcal{V}} \to 0$ as $k'\to \infty$. Furthermore, $u_0 \in \underset{u\in \mathcal{V}}\mathrm{argmin}\ f_0^+(u)$, where $f_0^+$ is as in \eqref{eq:f0+}. In particular, if $C_{u_0} \in \mathcal{C}$ is the cut induced by $u_0$, then $C_{u_0}$ is a maximum cut of $G$. \end{theorem} \begin{proof} It is a well-known result from $\Gamma$-convergence theory \cite[Corollary 7.20]{dalmaso1993}, \cite[Theorem 1.21]{braides2002} that the equi-coercivity property of $\{f_{\e_k}^+\}_{k\in{\vz N}}$ from Lemma~\ref{lem:equicoercivity} combined with the $\Gamma$-convergence property of Lemma~\ref{lem:Gammaconvergence} implies that the minima $\underset{u\in\mathcal{V}}\min\, f_{\e_k}^+(u)$ converge to $\underset{u\in\mathcal{V}}\min\, f_0^+(u)$ and, up to taking a subsequence, minimizers of $f_{\e_k}^+$ converge to a minimizer of $f_0^+$. By Lemma~\ref{lem:minfunc}, if $u_0 \in \underset{u \in \mathcal{V}}\mathrm{argmin}\, f_0^+(u)$, then $u_0 \in \mathcal{V}^b$. By Lemma~\ref{lem:MaxCut}, the cut $C_{u_0}$ induced by $u_0$ is a maximum cut of $G$. 
\end{proof} \section{The signless MBO algorithm}\label{sec:signlessMBO} \subsection{Algorithm}\label{sec:algorithm} One way of attempting to find minimizers of $f_\e^+$ is via its gradient flow \cite{ambrosio2008}. This is, for example, the method employed in \cite{bertozzi2012} to find approximate minimizers of $f_\e$. In that case the gradient flow is given by a graph-based analogue of the Allen-Cahn equation \cite{allen1979}. To find the $\mathcal{V}$-gradient flow of $f_\e^+$ we compute the first variation of the functional $f_{\e}^+$: for $t\in {\vz R}$, $u,v\in \mathcal{V}$, we have \[ \frac{d}{dt}f_{\e}^+(u+tv) \Big|_{t=0} = 2\langle \Delta_r^+u,v \rangle_{\mathcal{V}} + \frac{1}{\e}\langle D^{-r}W' \circ u, v \rangle_{\mathcal{V}}, \] where we used the notation $(D^{-r}W' \circ u)_i = d_i^{-r}W'(u_i)$. This leads to the following $\mathcal{V}$-gradient flow: for all $i\in V$, \begin{equation}\label{eq:gradflow} \begin{cases} \frac{du_i}{dt} = -2(\Delta_r^+ u)_i - \frac{1}{\e}d_i^{-r}W'(u_i),& \text{for } t>0,\\ u_i = (u_0)_i,& \text{for } t=0. \end{cases} \end{equation} Since $f_\e^+$ is not convex, as $t \to \infty$ the solution of the $\mathcal{V}$-gradient flow is not guaranteed to converge to a global minimum, and can get stuck in local minimizers. In this paper we will not attempt to directly solve the gradient flow equation. That could be the topic of future research. Instead we will use a graph MBO type scheme, which we call the signless MBO algorithm. It is given in \ref{alg:signlessMBO}. Despite there currently not being any rigorous results on the matter, the outcome of this scheme is believed to approximate minimizers of $f_\e^+$. The original MBO scheme (or threshold dynamics scheme) in the continuum was introduced to approximate motion by mean curvature flow \cite{MBO1992,MBO1993}. It consists of iteratively applying ($N$ times) two steps: diffusing a binary initial condition for a time $\tau$ and then thresholding the result back to a binary function. In the (suitably scaled) limit $\tau\downarrow 0$, $N\to\infty$, solutions of this process converge to solutions of motion by mean curvature \cite{barles1995}. It is also known that solutions of the continuum Allen-Cahn equation (in the limit $\e\downarrow 0$) converge to solutions of motion by mean curvature \cite{bronsard1991}. Whether something similar is true for the graph MBO scheme or graph Allen-Cahn equation \cite{van2014} or something analogous is true for the signless graph MBO scheme are as yet open questions, but it does suggest that solutions of the MBO scheme (signless MBO scheme) could be closely connected to minimizers of $f_\e$ ($f_\e^+$). In practice, the graph MBO scheme has proven to be a fast and accurate method for tackling approximate minimization problems of this kind \cite{merkurjev2013, bertozzi2012}. We see in \ref{alg:signlessMBO} that the equation solved in the signless diffusion step is, up to a rescaling of time by a factor $2$, the gradient flow equation from \eqref{eq:gradflow} without the double well potential term. Since we expect the double well potential term in \eqref{eq:gradflow} to force the solution to take values close to $\pm 1$, the signless diffusion step in \ref{alg:signlessMBO} is followed by a thresholding step. Note that, despite our choice of nomenclature, the signless graph `diffusion' dynamics is expected to be significantly different from standard graph diffusion. 
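Ahead of the formal statement in \ref{alg:signlessMBO} below, the following minimal sketch (Python with SciPy; an illustration only, using a dense matrix exponential and therefore practical only for small graphs, and not the implementation used for our experiments) shows how one signless diffusion step followed by a threshold step can be realised.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def signless_mbo_step(L_plus, mu, tau):
    """One iteration: signless diffusion u(tau) = exp(-tau L^+) mu,
    followed by thresholding back to {-1,1}."""
    u_tau = expm(-tau * L_plus) @ mu
    return np.where(u_tau > 0.0, 1.0, -1.0)   # u > 0 -> 1, u <= 0 -> -1

def signless_mbo(L_plus, mu0, tau, max_iter=100):
    """Iterate until the binary state stops changing."""
    mu = mu0.copy()
    for _ in range(max_iter):
        mu_new = signless_mbo_step(L_plus, mu, tau)
        if np.array_equal(mu_new, mu):
            break
        mu = mu_new
    return mu
\end{verbatim}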
\begin{asm}{(MBO+)} \KwData{A signless graph Laplacian $\Delta^+ \in \{\Delta_0^+, \Delta_1^+, \Delta_s^+\}$ corresponding to a graph $G\in \mathcal{G}$, a signless diffusion time $\tau>0$, an initial condition $\mu^0:=\chi_{S_0} - \chi_{S_0^c}$ corresponding to a node subset $S_0 \subset V$, a time step $dt$, and a stopping criterion tolerance $\eta$.} \KwOut{A sequence of functions $\{\mu^j\}_{j=0}^{N} \subset \mathcal{V}^b$ giving the signless MBO evolution of $\mu^0$, a sequence of corresponding cuts $\{C^j\}_{j=0}^N \subset \mathcal{C}$ and their sizes $\{s(C^j)\}_{j=0}^N \subset [0,\infty)$, with largest value $s^*$.} \For{$ j = 1 \ \KwTo \ $ \textnormal{stopping criterion is satisfied,}}{ \textbf{ Signless diffusion step:} Compute $u^*(\tau)$, where $u^*\in \mathcal{V}$ is the solution of the initial value problem \begin{equation}\label{eq:signlessdiffusion} \begin{cases} \frac{du(t)}{dt} = -\Delta^+ u(t),& \text{for } t>0,\\ u(0) = \mu^{j-1}.& \end{cases} \end{equation} \medskip \textbf{ Threshold step:} Define $\mu^j \in \mathcal{V}^b$ by, for $i\in V$, \begin{equation}\label{eq:threshold} \mu^j_i := T(u^*_i(\tau)) := \begin{cases} 1, &\text{if } u^*_i(\tau) > 0,\\ -1, &\text{if } u^*_i(\tau) \leq 0. \end{cases} \end{equation} Define the cut $C^j:=V_{-1}^j|V_1^j$, where $V_{\pm 1}^j := \{i\in V: \mu^{j}_i = \pm 1\}$ and compute $s(C^j)$. Set $N=j$. \If { $\frac{\|\mu^j - \mu^{j-1}\|_2^2}{\|\mu^j\|_2^2} < \eta$}{ Stop} } \textbf{ Find the largest cut size: } Set $s^* := \max_{1\leq j \leq N} s(C^j)$. \caption{\label{alg:signlessMBO} The signless graph MBO algorithm} \end{asm} In Figures~\ref{fig:EnergyAS8} and \ref{fig:EnergyGNutella} we show the minimization of $f_{\e}^+$ using \ref{alg:signlessMBO} with the spectral method (which is explained in Section~\ref{sec:spectral}) on the AS8 graph and the GNutella09 graph (see Section~\ref{sec:scale}). The \ref{alg:signlessMBO} iteration numbers $j$ are indicated along the $x$-axis. The $y$-axis shows the value of $f_{\e}^+(\mu^j)$. What we see in both figures is that the overall tendency is for the \ref{alg:signlessMBO} algorithm to decrease the value of $f_{\e}^+(\mu^j)$; however, in some iterations the value increases. This is why \ref{alg:signlessMBO} outputs the largest cut size $s^*$ found over all computed iterations, rather than only the size of the cut $C^N$ from the final iteration. Alternatively, in order to save on computing memory, one could also keep track of the largest cut size found so far in each iteration and discard the other cut sizes, or accept the final cut size $s(C^N)$ as an approximation to $s^*$. The results we report in this paper are all based on the output $s^*$. In our experiments we choose the stopping criterion tolerance $\eta = 10^{-8}$. \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7.5cm]{Energy1.jpg} \label{fig:Energy1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7.5cm]{Energy3.jpg} \label{fig:Energy3} \end{subfigure} \caption{The value $f_{\e}^+(\mu^j)$ as a function of the iteration number $j$ in the \ref{alg:signlessMBO} scheme on AS8Graph, using the spectral method and $\Delta_1^+$, with $K=100$, and $\tau = 20$. 
The left hand plot shows the initial condition and all iterations of the \ref{alg:signlessMBO} scheme on AS8Graph, whereas the right hand plot displays the 3rd to the final iterations of the \ref{alg:signlessMBO} scheme on AS8Graph.} \label{fig:EnergyAS8} \end{figure} \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7.5cm]{Energy2.jpg} \label{fig:Energy2} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7.5cm]{Energy4.jpg} \label{fig:Energy4} \end{subfigure} \caption{The value $f_{\e}^+(\mu^j)$ as a function of the iteration number $j$ in the \ref{alg:signlessMBO} scheme on the GNutella09 graph, using the spectral method and $\Delta_1^+$, with $K=100$, and $\tau = 20$. The left hand plot shows the initial condition and all iterations of the \ref{alg:signlessMBO} scheme on GNutella09, whereas the right hand plot displays all iterations of the \ref{alg:signlessMBO} scheme on GNutella09, without the initial condition.}\label{fig:EnergyGNutella} \end{figure} \subsection{Spectral decomposition method}\label{sec:spectral} In this paper we will compare two implementations of the \ref{alg:signlessMBO} algorithm, which differ in the way they solve \eqref{eq:signlessdiffusion} for $t\in [0,\tau]$. In the next section we consider an explicit Euler method, but first we discuss a spectral decomposition method. In order to solve \eqref{eq:signlessdiffusion} we use spectral decomposition of the signless graph Laplacian $\Delta^+ \in \{\Delta^+_0, \Delta^+_1, \Delta^+_s\}$. Let $\lambda_k\geq 0$, $k\in \{1, \ldots, n\}$ be the eigenvalues of $\Delta^+$. We assume $\lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_n$ and list eigenvalues multiple times according to their multiplicity. Let $\phi^k\in \mathcal{V}$ be an eigenfunction corresponding to $\lambda_k$, chosen such that $\{\phi^k\}_{k=1}^{n}$ is a set of orthonormal functions in $\mathcal{V}$. We then use the decomposition \begin{equation}\label{eq:decomposition} u^*(\tau) = \sum_{k=1}^n e^{-\lambda_k \tau}\langle \phi^k, u(0) \rangle_{\mathcal{V}} \: \phi^k \end{equation} to solve \eqref{eq:signlessdiffusion}. For $\Delta_s^+$ we use the Euclidean inner product instead of the $\mathcal{V}$ inner product in \eqref{eq:decomposition}, because the Laplacian $\Delta_s^+$ is not of the form given in \eqref{eq:graphLaplacian}. The optimal choice for $\tau$ with respect to the cut size obtained by \ref{alg:signlessMBO} is a topic for future research. Based on trial and error, we decided to use $\tau=20$ in the results we present in Section~\ref{sec:results} when using $\Delta_1^+$ or $\Delta_s^+$ as our operator. We use $\tau = \frac{40}{\lambda_n}$ when using $\Delta_0^+$ as our operator, where $\lambda_n$ is the largest eigenvalue of $\Delta_0^+$. This choice, which divides $\tau=20$ by half of the largest eigenvalue of $\Delta_0^+$, is justified in Section~\ref{sec:pinning condition}. In Section~\ref{sec:parameters} we investigate how cut sizes change with varying $\tau$. A computational advantage of the spectral decomposition method is that we do not necessarily need to use all of the eigenvalues and eigenfunctions of the signless Laplacian. We can use only the $K$ eigenfunctions corresponding to the smallest eigenvalues in our decomposition \eqref{eq:decomposition}. To be explicit, doing this replaces $n$ in \eqref{eq:decomposition} by $K$. 
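As an illustration of the truncated decomposition, the sketch below (Python with SciPy; an illustration only, not our MATLAB implementation) computes $u^*(\tau)$ for the symmetric operator $\Delta_s^+$, for which the eigenvectors can be taken Euclidean-orthonormal; for brevity it requests the $K$ smallest eigenpairs of $L_s^+$ directly, whereas in practice one would exploit Proposition~\ref{prop:eigenpairs}, as discussed below.
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import eigsh

def diffuse_spectral(L_s_plus, u0, tau, K):
    """Approximate u(tau) = exp(-tau L_s^+) u0 by the truncated
    expansion over the K eigenvectors of the symmetric matrix
    L_s^+ belonging to its smallest eigenvalues."""
    lam, V = eigsh(L_s_plus, k=K, which='SA')  # 'SA': smallest algebraic
    return V @ (np.exp(-tau * lam) * (V.T @ u0))
\end{verbatim}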
In Section~\ref{sec:parameters} we show how increasing $K$ beyond a certain point has little effect on the size of the cut obtained by \ref{alg:signlessMBO} for three examples. We refer to using the $K$ eigenfunctions corresponding to the smallest eigenvalues in the decomposition as spectral truncation. By Proposition~\ref{prop:eigenpairs}, we can compute the $K$ smallest eigenvalues $\lambda_k$ ($k\in \{1, \ldots, K\}$) of $\Delta_1^+$ and $\Delta_s^+$ by first computing the $K$ largest eigenvalues $\hat \lambda_l$ ($l\in \{n-K+1, \ldots, n\}$) of $\Delta_1$ and $\Delta_s$ respectively instead and then setting $\lambda_k = 2 - \hat \lambda_{n-k+1}$. There is not a similar property for $\Delta_0^+$ however. Proving upper bounds on the largest eigenvalues of $\Delta_0$ and $\Delta_0^+$ is an active area of research \cite{guo2005, shu2002, zhang2009signless}. We use the MATLAB \texttt{eigs} function to calculate the $K$ eigenpairs of the signless Laplacian. This function \cite{lehoucq1996} uses the Implicitly Restarted Arnoldi Method (IRAM) \cite{sorensen1997}, which can efficiently compute the largest eigenvalues and corresponding eigenvectors of sparse matrices. The function \texttt{eigs} first computes the orthogonal projection of the matrix onto a $K$-dimensional Krylov subspace generated from the matrix and a random starting vector. This projection is represented by a smaller $K \times K$ matrix, whose eigenvalues are called Ritz eigenvalues. The Ritz eigenvalues are computed efficiently using a QR method \cite{francis1961}, and they typically approximate the largest eigenvalues of the original matrix. The time complexity of IRAM is currently unknown, but in practice it produces approximate eigenpairs efficiently. If the matrix of which the eigenvalues are to be computed is symmetric, the MATLAB \texttt{eigs} function simplifies to the Implicitly Restarted Lanczos Method (IRLM) \cite{calvetti1994}, so in practice \texttt{eigs} will usually compute the eigenvalues and eigenfunctions of $\Delta_s^+$ faster than those of $\Delta_1^+$. Using the IRLM for computing the eigenpairs of $\Delta_0^+$ corresponding to its smallest eigenvalues is inefficient. In our experiments using the MATLAB \texttt{eig} function to calculate all eigenpairs of $\Delta_0^+$ and choosing the $K$ eigenpairs corresponding to the smallest eigenvalues for the decomposition \eqref{eq:decomposition} was faster than using the IRLM to calculate the $K$ eigenpairs of $\Delta_0^+$. Hence, the results discussed in this paper are obtained with \texttt{eig} when using $\Delta_0^+$ and \texttt{eigs} when using $\Delta_1^+$ or $\Delta_s^+$. If we use the MATLAB \texttt{eigs} function in our spectral decomposition method we cannot a priori determine the time complexity of \ref{alg:signlessMBO}, because practical experiments have shown the complexity of the IRAM and IRLM methods is heavily dependent on the matrix to which they are applied \cite{radke1996}. If we choose to use the MATLAB \texttt{eig} function then the time complexity of \ref{alg:signlessMBO} is $\mathcal{O}(n^3)$, which is the time complexity of computing all eigenpairs of an $n \times n$ matrix. All other remaining steps of \ref{alg:signlessMBO} require fewer operations to compute. 
\subsection{Explicit Euler method} We also compute the solution of \eqref{eq:signlessdiffusion} for $t \in [0,\tau]$ using an explicit finite difference scheme, \begin{equation}\label{eq:euler} \begin{cases} u^{m+1} = u^{m} - \Delta^+u^{m}dt, &\text{ for } m\in \{0, 1, \ldots, M-1\},\\ u^0 = u(0) \end{cases} \end{equation} for the same choice of $\tau$ as in \eqref{eq:decomposition}. Here $M \in \mathbb{N}$, $dt = \frac{\tau}{M}$, and we set $u^*(\tau)= u^M$. If $G \in \mathcal{G}$ then \ref{alg:signlessMBO} using the Euler method will have a time complexity of $\mathcal{O}(|E|)$, because of the sparsity of the signless Laplacian matrix. When zero entries are ignored, the multiplication of the vector $u^m$ by $\Delta^+$ takes $4|E| + 2n$ operations to compute. Since $G\in \mathcal{G}$ has no isolated nodes, $|E| \geq \frac{n}{2}$, hence $4|E| \geq 2n$ and the time complexity of the multiplication is $\mathcal{O}(|E|)$. All other remaining steps in \ref{alg:signlessMBO} using the Euler method require fewer operations to compute. In Section~\ref{sec:implicit} we show some results for \ref{alg:signlessMBO} when solving \eqref{eq:signlessdiffusion} using an implicit finite difference scheme, comparing against the results of \ref{alg:signlessMBO} obtained using \eqref{eq:euler} to solve \eqref{eq:signlessdiffusion}. \subsection{\ref{alg:signlessMBO} pinning condition}\label{sec:pinning condition} For \ref{alg:signlessMBO} we have that choosing $\tau$ too small causes trivial dynamics in the sense that, for all $j$, $\mu^j=\mu^0$. In this section we prove a result which shows that the threshold below which this pinning occurs is inversely proportional to the largest eigenvalue of the signless Laplacian chosen for \ref{alg:signlessMBO}. We define $d_- := \underset{i \in V}{\mathrm{min}} \: d_i$, and $d_+ := \underset{i \in V}{\mathrm{max}} \: d_i$. Let $\Delta^+ \in \{\Delta_0^+,\Delta_1^+,\Delta_s^+\}$, then the operator norm $\|\Delta^+\|_{\mathcal{V}}$ is defined by \[ \|\Delta^+\|_{\mathcal{V}} := \underset{u \in \mathcal{V} \setminus \{0\}}{\mathrm{sup}}\frac{\|\Delta^+u\|_{\mathcal{V}}}{\|u\|_{\mathcal{V}}}. \] We define the maximum norm on $\mathcal{V}$ by $\|u\|_{\mathcal{V},\infty} := \mathrm{max}\{|u_i|: i \in V\}$. \begin{lemma} Let $\Delta^+ \in \{\Delta_0^+,\Delta_1^+,\Delta_s^+\}$. The operator norm $\|\Delta^+\|_{\mathcal{V}}$ and the largest eigenvalue $\lambda_n$ of $\Delta^+$ are equal. This implies that, for all $u \in \mathcal{V}$, \[ \|\Delta^+u\|_{\mathcal{V}} \leq \lambda_n\|u\|_{\mathcal{V}}. \] \end{lemma} \begin{proof} See \cite[Lemma 2.5]{van2014}. \end{proof} \begin{lemma}\label{lem:normequiv} The norms $\|\cdot\|_{\mathcal{V}}$ and $\|\cdot\|_{\mathcal{V},\infty}$ are equivalent, with optimal constants given by \[ d_{-}^{\frac{r}{2}}\|u\|_{\mathcal{V},\infty} \leq \|u\|_{\mathcal{V}} \leq \|\chi_V\|_{\mathcal{V}} \|u\|_{\mathcal{V},\infty}. \] \end{lemma} \begin{proof} See \cite[Lemma 2.2]{van2014}. \end{proof} \begin{theorem}\label{thm:pin} Let $G \in \mathcal{G}$, and let $\lambda_n$ be the largest eigenvalue of the signless Laplacian $\Delta^+ \in \{\Delta_0^+,\Delta_1^+,\Delta_s^+\}$. Let $S_0\subset V$, $\mu^0:= \chi_{S_0}-\chi_{S_0^c}$, and let $\mu^1\in \mathcal{V}^b$ be the result of applying one \ref{alg:signlessMBO} iteration to $\mu^0$. If \begin{equation}\label{eq:pin} \tau < \lambda_n^{-1}\mathrm{log}(1 + d_{-}^{\frac{r}{2}}\|\chi_V\|_{\mathcal{V}}^{-1}), \end{equation} then $\mu^1=\mu^0$. 
\end{theorem} \begin{proof} This proof closely follows the proof of a similar result in \cite[Section 4.2]{van2014}. If $\|e^{-\tau\Delta^+}\mu^0 - \mu^0\|_{\mathcal{V},\infty} < 1$, then the threshold step leaves the sign of every component unchanged, so $\mu^1=\mu^0$. Using Lemma~\ref{lem:normequiv}, we compute \[ \|e^{-\tau\Delta^+}\mu^0 - \mu^0\|_{\mathcal{V},\infty} \leq d_{-}^{-\frac{r}{2}} \|e^{-\tau\Delta^+}\mu^0 - \mu^0\|_{\mathcal{V}} \leq d_{-}^{-\frac{r}{2}} \|e^{-\tau\Delta^+} - \mathrm{Id} \|_{\mathcal{V}} \, \|\mu^0\|_{\mathcal{V}} . \] Moreover, since $\langle \chi_{S_0}, \chi_{S_0^c}\rangle_\mathcal{V} = 0$, we have $ \|\mu^0\|_{\mathcal{V}}^2 = \|\chi_{S_0}\|_{\mathcal{V}}^2 + \|\chi_{S_0^c}\|_{\mathcal{V}}^2 = \|\chi_{S_0}+\chi_{S_0^c}\|_{\mathcal{V}}^2 = \|\chi_V\|_{\mathcal{V}}^2. $ Using the triangle inequality and the submultiplicative property (see \cite{rudin1964} for example) of the operator norm $\|\cdot\|_{\mathcal{V}}$, we compute $ \|e^{-\tau \Delta^+} - \mathrm{Id}\|_{\mathcal{V}} \leq \sum_{k=1}^{\infty}\frac{1}{k!}(\tau\|\Delta^+\|_{\mathcal{V}})^k = e^{\lambda_n\tau} - 1. $ Therefore, if $\tau < \lambda_n^{-1}\mathrm{log}(1 + d_{-}^{\frac{r}{2}}\|\chi_V\|_{\mathcal{V}}^{-1})$, then $\mu^1=\mu^0$. \end{proof} As stated in Section~\ref{sec:spectral}, we choose $\tau = 20$ as diffusion time for \ref{alg:signlessMBO} using $\Delta_1^+$ or $\Delta_s^+$, and $\tau = \frac{40}{\lambda_n}$ when using \ref{alg:signlessMBO} with $\Delta_0^+$ as the choice of operator. This is due to $\tau = 20$ often being too large when using \ref{alg:signlessMBO} with $\Delta_0^+$. Choosing $\tau = 20$ for \ref{alg:signlessMBO} using $\Delta_0^+$ causes the solution to converge to $u(\tau) = 0$ to machine precision. We therefore choose $\tau = \frac{40}{\lambda_n}$ for $\Delta_0^+$, since Theorem~\ref{thm:pin} implies that a suitable choice of $\tau$ for \ref{alg:signlessMBO}, with respect to obtaining non-trivial output cuts, is inversely proportional to the largest eigenvalue of the chosen operator $\Delta^+$. Since $\lambda_n = 2$ for $\Delta_1^+$ and $\Delta_s^+$ we choose to divide $\tau$ by $\frac{\lambda_n}{2}$ for $\Delta_0^+$. \section{Results}\label{sec:results} \subsection{Method}\label{sec:method} \begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7cm]{Fig2.jpg} \caption{Web graph, maximum cut approximation}\label{fig:webgraph} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7cm]{Fig4.jpg} \caption{Square-triangle mesh maximum cut approximation}\label{fig:squaretriangle} \end{subfigure} \caption{Visualisation of maximum cut approximations (best viewed in colour)}\label{fig:visualisation} \end{figure} In Section~\ref{sec:results} we compare the results of our new algorithm \ref{alg:signlessMBO} with the results obtained by \ref{alg:GW}. In Sections~\ref{sec:Rand}--\ref{sec:large} we display the results of \ref{alg:signlessMBO} using both the spectral decomposition method and the explicit Euler method, fixing the variable $\tau$ for both methods. We run all our tests on a Windows 7 PC with 16GB RAM and an Intel(R) Core(TM) i5-4590 CPU with clock speed 3.30GHz. For both \ref{alg:signlessMBO} and \ref{alg:GW} we use MATLAB, which is convenient to use when dealing with large sparse matrices. For all of our tests using the spectral decomposition method we choose $K = \floor{\frac{n}{100}}$. In practice this choice reduces the computation time without sacrificing much accuracy in the cut approximations. We further analyse this choice in Section~\ref{sec:parameters}. 
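The Euler variant referred to in the next paragraph solves \eqref{eq:signlessdiffusion} by the scheme \eqref{eq:euler}; a minimal sketch (Python with NumPy; an illustration only, not our MATLAB implementation) is as follows.
\begin{verbatim}
import numpy as np

def diffuse_euler(L_plus, u0, tau, M=100):
    """Explicit Euler time stepping for du/dt = -L^+ u on [0, tau],
    as in (eq:euler): M steps of size dt = tau / M."""
    dt = tau / M
    u = u0.copy()
    for _ in range(M):
        u = u - dt * (L_plus @ u)
    return u
\end{verbatim}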
For all of our tests using the Euler method we set $M=100$, in order to keep $dt$ small so as to ensure stability of our explicit scheme. We compute the \ref{alg:signlessMBO} evolutions for 50 initial conditions chosen at random from $\mathcal{V}^b$. In the tables which we refer to in this section, we state the greatest (Best), average (Avg), and smallest (Least) sizes of cuts obtained by these 50 runs of \ref{alg:signlessMBO}. We run \ref{alg:signlessMBO} using $\Delta_0^+$, $\Delta_1^+$ and $\Delta_s^+$, fixing the initial conditions for each operator, using both the spectral method and the Euler method for each operator, and compare the results. We compare the results of \ref{alg:signlessMBO} with those of \ref{alg:GW}. To compute the relaxation step of \ref{alg:GW} we use SDPT3 MATLAB software \cite{tutuncu2003} as it exploits the sparse structure of the matrices we work on. According to \cite{mittelmann2012} it is best suited for both smaller problems and for larger problems with sparse matrices. The stopping tolerance is set as $|\tilde Z_P^* - \tilde Z_D^*| < 10^{-6}$, i.e. $\nu = 10^{-6}$. The recommended tolerance for the SDPT3 software is $10^{-8}$; however, in our experiments increasing this tolerance to $10^{-6}$ reduced the computation time of \ref{alg:GW} without any change in output cut sizes. After the relaxation step, we perform the hyperplane step 50 times, randomly choosing a vector $r$ each time. Each choice of $r$ leads to a resulting cut; in the tables referred to in this section, we list the highest (Best), average (Avg), and lowest (Least) sizes of these cuts. In each of these categories in our tables we highlight the method that obtained the best result: \ref{alg:signlessMBO} using $\Delta_0^+$, \ref{alg:signlessMBO} using $\Delta_1^+$, \ref{alg:signlessMBO} using $\Delta_s^+$, or \ref{alg:GW}. We do the same for the run times (Time) of each method. For both \ref{alg:signlessMBO} and \ref{alg:GW} only the adjacency matrix and the parameter choice $\eta$ are initially provided, therefore the reported run times cover all calculations from that starting point. For each graph we remove the isolated nodes by removing all rows and columns of the graph's adjacency matrix which have all zero entries. (This does not affect the size of any cut of the graph.) For the spectral decomposition variant of \ref{alg:signlessMBO} using $\Delta_1^+$ and $\Delta_s^+$ this includes removing all isolated nodes, computing the matrices $L_1$ and $L_s$, finding their $K$ eigenpairs corresponding to the leading eigenvalues in order to use Proposition~\ref{prop:eigenpairs} to compute the eigenpairs corresponding to the trailing eigenvalues of $L_1^+$ and $L_s^+$ respectively, generating initial conditions, running the signless diffusion and thresholding steps, and computing the size of the cut from each MBO iteration. For $\Delta_s^+$ the computation time includes calculating $L_1$ in order to compute the size of the output cuts using \eqref{eq:cutLaplacian}. The computation time for \ref{alg:signlessMBO} using $\Delta_0^+$ includes removing all isolated nodes, computing the matrix $L_0^+$, finding all its eigenpairs, using the largest eigenvalue to set the signless diffusion time $\tau$, and using the $K$ eigenpairs corresponding to the smallest eigenvalues for the remaining steps. 
For the explicit Euler method variant of \ref{alg:signlessMBO} the computation time includes removing all isolated nodes, computing $L^+ \in \{L_0^+,L_1^+,L_s^+\}$, generating initial conditions, running the signless diffusion and thresholding steps, and computing the size of the cut induced by each MBO iteration. For $L = L_s$ we also compute $L_1$ to obtain the size of the output cut using \eqref{eq:cutLaplacian}. For every graph there exists a $\tau_{max}$ such that for all $\tau \geq \tau_{max}$ the solution to \eqref{eq:signlessdiffusion} computed using \ref{alg:signlessMBO} converges to $u(\tau) = 0$ to machine precision. In practice $\tau_{max}$ is dependent on the operator $\Delta^+$. In our experiments we see that choosing a $\tau$ which lies between the pinning bound of Theorem~\ref{thm:pin} and $\tau_{max}$ is difficult when $\Delta_0^+$ is our operator for \ref{alg:signlessMBO}, since the difference between them is small. In Section~\ref{sec:scale} and Section~\ref{sec:weighted} we run our experiments on graphs with a scale-free structure (see Section~\ref{sec:scale}). When running \ref{alg:signlessMBO} using the explicit Euler method and $\Delta_0^+$ we encounter problems in choosing suitable $\tau$ and $dt$ for such graphs. This is due to the inflexibility of choosing $\tau$ such that it is less than $\tau_{max}$ and also greater than the bound in Theorem~\ref{thm:pin}. Since the Euler method approximates the spectral method, it inherits the same problem. If \ref{alg:signlessMBO} returns a cut which has pinned due to Theorem~\ref{thm:pin}, or is zero due to the solution of \eqref{eq:signlessdiffusion} converging to zero to machine precision, then we refer to the cut as a trivial cut. In Section~\ref{sec:implicit} we show that it is possible to obtain non-trivial cut sizes using \ref{alg:signlessMBO} with $\Delta_0^+$ by solving \eqref{eq:signlessdiffusion} using an implicit Euler scheme. Figure~\ref{fig:visualisation} shows two examples of approximate maximum cuts obtained with the \ref{alg:signlessMBO} algorithm. The black nodes are in $V_1$ and the white nodes are in $V_{-1}$. An edge is coloured red if it connects two nodes of different colours, i.e. if it contributes to the size of the cut. If it does not, it is black. Figure~\ref{fig:webgraph} shows an unweighted web graph which has 201 nodes and 400 edges. We set $\tau = 20$ in \ref{alg:signlessMBO} using $\Delta_1^+$ and the Euler method to solve \eqref{eq:signlessdiffusion}. The resulting approximation of the maximum cut value is 350. The run time is 0.09 seconds. Figure~\ref{fig:squaretriangle} shows an unweighted square-triangle graph which has 162 nodes and 355 edges. We set $\tau = 20$ and $K=20$ in \ref{alg:signlessMBO} using $\Delta_1^+$ and the spectral method to solve \eqref{eq:signlessdiffusion}. The approximation of the maximum cut value is 295 and the run time is 0.14 seconds. \subsection{Random graphs}\label{sec:Rand} In Figures~\ref{fig:G1000}, \ref{fig:G2500}, and~\ref{fig:G5000} we list results obtained for Erd\"os-R\'enyi graphs. For each of $G(1000,0.01)$ (Figure~\ref{fig:G1000}), $G(2500,0.4)$ (Figure~\ref{fig:G2500}), and $G(5000,0.001)$ (Figure~\ref{fig:G5000}) we create 100 realisations. We then run \ref{alg:signlessMBO} with both the spectral method and the Euler method, and we run \ref{alg:GW}. For both of the \ref{alg:signlessMBO} methods we choose either $\Delta_0^+$, $\Delta_1^+$, or $\Delta_s^+$, setting $\tau = 20$ for all tests.
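Schematically, one such experiment on a single realisation could be scripted as follows; this is a simplified sketch (it samples its own initial conditions and skips isolated-node removal), reusing the hypothetical \texttt{signless\_mbo\_spectral} function sketched above.
\begin{verbatim}
% Sketch of one experiment: sample G(n,p), run the (hypothetical)
% signless_mbo_spectral 50 times, record Best/Avg/Least cut sizes.
n = 1000; p = 0.01;
A = sprand(n, n, p) > 0;              % Bernoulli(p) entries, sparse
A = triu(A, 1); A = double(A + A');   % symmetric 0/1 adjacency matrix
cuts = zeros(50, 1);
for run = 1:50
    [~, cuts(run)] = signless_mbo_spectral(A, floor(n/100), 20, 100);
end
fprintf('Best %g, Avg %g, Least %g\n', max(cuts), mean(cuts), min(cuts));
\end{verbatim}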
The bar chart represents the mean of the best, average, and least cuts over all 100 realisations of the chosen random graph. The error bars are the corrected sample standard deviation\footnote{The corrected sample standard deviation is computed using MATLAB's \texttt{std} function in all experiments in this paper.} of the results obtained over all 100 realisations. Figure~\ref{fig:G1000} shows that \ref{alg:signlessMBO} using either the spectral method or the Euler method with $\Delta_1^+$ or $\Delta_s^+$ produces better mean best, mean average, and mean least cuts than \ref{alg:GW} on this set of graphs. Figure~\ref{fig:G2500} shows that \ref{alg:signlessMBO} using the spectral method and either $\Delta_1^+$ or $\Delta_s^+$ produces better mean cut approximations than \ref{alg:GW} on this set of graphs. Figure~\ref{fig:G5000} shows the same conclusions as Figure~\ref{fig:G1000} for this set of graphs. Table~\ref{tab:ERTime} shows that \ref{alg:signlessMBO} using the spectral method produces the fastest run times on all three types of Erd\"os-R\'enyi graphs that we test on. We note that \ref{alg:GW} has a superior run time over \ref{alg:signlessMBO} using the Euler method on the realisations of $G(2500,0.4)$. \begin{figure} \centering \includegraphics[width=12cm]{1000Results.jpg} \caption{Bar chart of Max-Cut approximations on 100 realisations of $G(1000,0.01)$.}\label{fig:G1000} \end{figure} \begin{figure} \centering \includegraphics[width=12cm]{2500Results.jpg} \caption{Bar chart of Max-Cut approximations on 100 realisations of $G(2500,0.4)$.}\label{fig:G2500} \end{figure} \begin{figure} \centering \includegraphics[width=12cm]{SparseResults.jpg} \caption{Bar chart of Max-Cut approximations on 100 realisations of $G(5000,0.001)$.}\label{fig:G5000} \end{figure} \begin{table} \begin{tabular}{ |l|l|l|l|l|l|l|l|} \hline Graph & $\Delta_1^+$ (S) & $\Delta_1^+$ (E) & $\Delta_s^+$ (S) & $\Delta_s^+$ (E) & $\Delta_0^+$ (S) & $\Delta_0^+$ (E) & GW \\ \hline $G(1000,0.01)$ & \textbf{0.20} & 1.58 & 0.34 & 1.52 & 0.56 & 1.06 & 5.25 \\ \hline $G(2500,0.4)$ & 8.04 & 172.91 & 13.33 & 181.40 & \textbf{6.40} & 172.73 & 55.36 \\ \hline $G(5000,0.001)$ & \textbf{4.38} & 16.96 & 6.37 & 14.95 & 24.99 & 6.97 & 257.09\\ \hline \end{tabular} \caption{Average \ref{alg:signlessMBO} and \ref{alg:GW} run-times for each realisation of $G(n,p)$, time in seconds.}\label{tab:ERTime} \end{table} \subsection{Scale-free graphs}\label{sec:scale} The degree distribution $P: \mathbb{N} \rightarrow \mathbb{R}$ of an unweighted graph $G$ is given by $P(j) := \frac{|\{i \in V: \: d_i = j\}|}{n}$. Random graphs such as the ones discussed in Section~\ref{sec:Rand} have a degree distribution which resembles a normal distribution. The graph $G \in \mathcal{G}$ is a scale-free graph if its degree distribution roughly follows a power law, i.e.\ $P(j) \approx j^{-\gamma}$, where often in practice, $\gamma \in (2,3)$ \cite{barabasi2009}. Scale-free graphs have become of interest as graphs such as internet networks, collaboration networks, and social networks are conjectured to more closely resemble scale-free graphs than random graphs \cite{barabasi2003}.
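In code, the empirical degree distribution is straightforward to compute from the adjacency matrix; the following small MATLAB fragment (ours, for illustration) produces the kind of log-log plot shown in Figure~\ref{fig:DEG} below, on which an approximate power law appears as a straight line.
\begin{verbatim}
% Empirical degree distribution P(j) = |{i : d_i = j}| / n of an
% unweighted graph with adjacency matrix A (isolated nodes removed,
% so all degrees are positive integers).
d = full(sum(A, 2));
P = accumarray(d, 1) / numel(d);
j = find(P > 0);
loglog(j, P(j), '.');
xlabel('degree j'); ylabel('P(j)');
\end{verbatim}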
\begin{figure} \centering \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7cm]{g25004Degree.jpg} \caption{Degree distribution of a realisation of $G(2500,0.4)$.}\label{fig:g2500deg} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=7cm]{AS1Degree.jpg} \caption{Degree distribution of the AS1 Graph.}\label{fig:AS1deg} \end{subfigure} \caption{Average degree distribution of 100 realisations of a random graph and the degree distribution of a scale-free graph.}\label{fig:DEG} \end{figure} In Table~\ref{table:large} we list results for some scale-free graphs. We test the algorithms on 8 autonomous systems internet graphs, $\mathrm{AS}i$, $i\in \{1, \ldots, 8\}$. These graphs, which were acquired from the website \cite{ASI}, represent smaller imitations of an internet network. We also test on the graph GNutella09, which is a model of a peer-to-peer file-sharing network, and the graph WikiVote, which is a network representing a Wikipedia administrator election, both obtained from \cite{snapnets}. All of the scale-free graphs in this section are unweighted and undirected graphs. Table~\ref{table:ASProp} displays some properties of the random graphs in Section~\ref{sec:Rand} and the scale-free graphs we test on. Figure~\ref{fig:DEG} displays the average degree distribution of 100 realisations of $G(2500,0.4)$, in Figure~\ref{fig:g2500deg}, and the degree distribution of the AS1 graph, in Figure~\ref{fig:AS1deg}. In Figure~\ref{fig:g2500deg} the yellow points indicate the degree distribution, and the orange lines indicate the corrected sample standard deviation of the average degree distribution. In Figure~\ref{fig:AS1deg} the blue dots indicate the degree distribution. As we see, the average degree distribution of the realisations of $G(2500,0.4)$ is similar to a normal distribution, and the degree distribution of the AS1 graph resembles a power law, as expected. \begin{table} \centering \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $|V|$ & $|E|$ & $d_{-}$ & $d_{+}$ \\ \hline $G(1000,0.01)$(1) & 1000 & 4919 & 1 & 21 \\ \hline $G(1000,0.01)$(2) & 1000 & 4939 & 2 & 21 \\ \hline $G(2500,0.4)$(1) & 2500 & 1248937 & 910 & 1079 \\ \hline $G(2500,0.4)$(2) & 2500 & 1251182 & 904 & 1081\\ \hline $G(5000,0.001)$(1) & 4962 & 12646 & 1 & 16 \\ \hline $G(5000,0.001)$(2) & 4969 & 12642 & 1 & 16 \\ \hline \end{tabular} \ \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $|V|$ & $|E|$ & $d_{-}$ & $d_{+}$ \\ \hline AS1 & 12694 & 26559 & 1 & 2566 \\ \hline AS2 & 7690 & 15413 & 1 & 1713 \\ \hline AS3 & 8689 & 17709 & 1 & 1911 \\ \hline AS4 & 8904 & 17653 & 1 & 1921 \\ \hline GNutella09 & 8114 & 26013 & 1 & 102 \\ \hline Wiki-Vote & 7115 & 100762 & 1 & 1065 \\ \hline \end{tabular} \caption{Properties of $G(n,p)$ graph realisations vs scale-free graphs.}\label{table:ASProp} \end{table} For all graphs listed in Table~\ref{table:large}, \ref{alg:signlessMBO} with either $\Delta_1^+$ or $\Delta_s^+$, using either the Euler method or the spectral method, outperforms \ref{alg:GW} with respect to the average and least obtained cut sizes and the run time, but \ref{alg:GW} obtains the best results when considering the greatest obtained cuts. For either choice of $\Delta_1^+$ or $\Delta_s^+$ and either choice of signless diffusion solver, the greatest cuts obtained by \ref{alg:signlessMBO} are all at least 98.1\% of the greatest cut size obtained by \ref{alg:GW}. The difference in run times is notable though.
The time taken by \ref{alg:signlessMBO} using $\Delta_1^+$ or $\Delta_s^+$ stays below 30 seconds for all graphs in Table~\ref{table:large}, irrespective of the choice of signless diffusion solver; with $\Delta_0^+$ the spectral method is slower, but still much faster than \ref{alg:GW}. However, the \ref{alg:GW} algorithm's run times range between 9 and 44 minutes. These results suggest that \ref{alg:signlessMBO} using $\Delta_1^+$ or $\Delta_s^+$, with either signless diffusion solver, offers a significant decrease in run time at the cost of about 1-2\% accuracy in the resulting cut size, in comparison with \ref{alg:GW}, when applied to the graphs in Table~\ref{table:large}. \begin{table} \centering \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $\Delta_1^+$ (S) Best & $\Delta_1^+$ (S) Avg & $\Delta_1^+$ (S) Least & $\Delta_1^+$ (S) Time \\ \hline AS1 & 22744 & 22542.20 & 22183 & \textbf{15.85} \\ \hline AS2 & 13249 & 13153.72 & 13054 & \textbf{3.55} \\ \hline AS3 & 15118 & 15027.22 & 14907 & 4.73 \\ \hline AS4 & 15194 & 15143.44 & 15042 & \textbf{5.67} \\ \hline AS5 & 14080 & 13988.90 & 13928 & \textbf{4.82} \\ \hline AS6 & 18053 & 17964.74 & 17876 & 10.06 \\ \hline AS7 & 22741 & 22535.00 & 22150 & 17.82 \\ \hline AS8 & 22990 & 22720.36 & 22334 & 17.22 \\ \hline GNutella09 & 20280 & 20143.74 & 19983 & 8.16 \\ \hline WikiVote & 72981 & 72856.40 & 72744 & 2.46 \\ \hline \end{tabular} \vspace{0.5cm} \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $\Delta_1^+$ (E) Best & $\Delta_1^+$ (E) Avg & $\Delta_1^+$ (E) Least & $\Delta_1^+$ (E) Time \\ \hline AS1 & 22798 & \textbf{22670.76} & 22268 & 23.62 \\ \hline AS2 & 13281 & \textbf{13199.72} & \textbf{13120} & 8.76 \\ \hline AS3 & 15175 & \textbf{15095.46} & \textbf{15007} & 9.95 \\ \hline AS4 & 15270 & \textbf{15202.70} & \textbf{15117} & 10.88 \\ \hline AS5 & 14120 & \textbf{14020.62} & \textbf{13944} & 9.50 \\ \hline AS6 & 18134 & \textbf{18034.10} & \textbf{17933} & 16.50 \\ \hline AS7 & 22826 & \textbf{22696.42} & \textbf{22525} & 25.78 \\ \hline AS8 & 23070 & \textbf{22951.54} & \textbf{22550} & 25.38 \\ \hline GNutella09 & 20437 & \textbf{20361.92} & \textbf{20295} & 17.14 \\ \hline WikiVote & 73159 & \textbf{73126.34} & \textbf{73086} & 9.06 \\ \hline \end{tabular} \caption{\ref{alg:signlessMBO} cut approximations using $\Delta_1^+$ on graphs with a scale-free structure, time in seconds.} \end{table} \vspace{0.5cm} \begin{table} \centering \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $\Delta_s^+$ (S) Best & $\Delta_s^+$ (S) Avg & $\Delta_s^+$ (S) Least & $\Delta_s^+$ (S) Time \\ \hline AS1 & 22809 & 22620.8 & \textbf{22325} & 17.83 \\ \hline AS2 & 13271 & 13178.86 & 13103 & 4.12 \\ \hline AS3 & 15166 & 15082.1 & 14992 & \textbf{4.66} \\ \hline AS4 & 15237 & 15166.24 & 15077 & 5.78 \\ \hline AS5 & 14075 & 14011.96 & 13911 & 5.47 \\ \hline AS6 & 18088 & 17968.04 & 17859 & \textbf{9.14} \\ \hline AS7 & 22822 & 22629.66 & 22218 & \textbf{15.73} \\ \hline AS8 & 23061 & 22884.8 & 22547 & \textbf{15.46} \\ \hline GNutella09 & 20282 & 20186.32 & 20101 & \textbf{6.82} \\ \hline WikiVote & 73169 & 73003.44 & 72917 & \textbf{2.25} \\ \hline \end{tabular} \vspace{0.5cm} \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $\Delta_s^+$ (E) Best & $\Delta_s^+$ (E) Avg & $\Delta_s^+$ (E) Least & $\Delta_s^+$ (E) Time \\ \hline AS1 & 22789 & 22629.62 & 22261 & 27.63 \\ \hline AS2 & 13256 & 13176.64 & 13094 & 9.09 \\ \hline AS3 & 15139 & 15059.54 & 14967 & 10.24 \\ \hline AS4 & 15234 & 15159.76 & 15079 & 11.57 \\ \hline AS5 & 14096 & 14011.9 & 13930 & 10.47 \\ \hline AS6 & 18088 & 17994.66 & 17876 & 16.12 \\ \hline AS7 & 22823 & 22639.58 & 22237 & 24.5 \\ \hline AS8 & 23036 &
22865 & 22440 & 25.08 \\ \hline GNutella09 & 20397 & 20332.28 & 20170 & 18.75 \\ \hline WikiVote & 72993 & 72772.26 & 72549 & 9.00 \\ \hline \end{tabular} \caption{\ref{alg:signlessMBO} cut approximations using $\Delta_s^+$ on graphs with a scale-free structure, time in seconds.} \end{table} \vspace{0.5cm} \begin{table} \centering \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $\Delta_0^+$ (S) Best & $\Delta_0^+$ (S) Avg & $\Delta_0^+$ (S) Least & $\Delta_0^+$ (S) Time \\ \hline AS1 & 22578 & 22303.10 & 21844 & 297.79 \\ \hline AS2 & 13081 & 12935.80 & 12763 & 62.41 \\ \hline AS3 & 14995 & 14869.52 & 14702 & 90.32 \\ \hline AS4 & 15097 & 14994.92 & 14885 & 88.53 \\ \hline AS5 & 13952 & 13795.24 & 13561 & 70.81 \\ \hline AS6 & 17836 & 17672.50 & 17527 & 149.60 \\ \hline AS7 & 22571 & 22328.18 & 21932 & 294.26 \\ \hline AS8 & 22824 & 22585.88 & 22075 & 287.79 \\ \hline GNutella09 & 19079 & 18419.36 & 17951 & 72.03 \\ \hline WikiVote & 65504 & 60599.74 & 56917 & 46.11 \\ \hline \end{tabular} \caption{\ref{alg:signlessMBO} cut approximations using $\Delta_0^+$ on graphs with a scale-free structure, time in seconds.} \end{table} \vspace{0.5cm} \begin{table} \centering \begin{tabular}{ |l|l|l|l|l|} \hline Graph & GW Best & GW Avg & GW Least & GW Time\\ \hline AS1 & \textbf{22864} & 22346.26 & 20546 & 2324.98\\ \hline AS2 & \textbf{13328} & 13039.10 & 12048 & 594.29\\ \hline AS3 & \textbf{15240} & 14961.56 & 14050 & 826.65\\ \hline AS4 & \textbf{15328} & 15015.34 & 14072 & 832.28\\ \hline AS5 & \textbf{14190} & 13810.82 & 12922 & 721.51 \\ \hline AS6 & \textbf{18191} & 17851.24 & 16483 & 1368.35 \\ \hline AS7 & \textbf{22901} & 22421.80 & 21244 & 2321.34 \\ \hline AS8 & \textbf{23170} & 22593.10 & 21110 & 2613.62 \\ \hline GNutella09 & \textbf{20658} & 20242.02 & 18815 & 1095.04\\ \hline Wiki-Vote & \textbf{73363} & 71510 & 62886 & 1074.98\\ \hline \end{tabular} \caption{\ref{alg:GW} cut approximations on graphs with a scale-free structure, time in seconds.}\label{table:large} \end{table} \subsection{Random modular graphs}\label{sec:mod} Modular graphs have a community structure. Nodes in a community have many connections with other members of the same community and noticeably fewer connections with members of other communities. In Figure~\ref{fig:4mod} we show what our Max-Cut approximation looks like on a random modular graph. We generate realisations of random unweighted modular graphs $R(n,c,p,r)$ using the code provided at \cite{Mod}. The variables for the graph are the number of nodes $n$, the number $c \in \mathbb{N}$ of communities that the graph contains, a probability $p$ such that the graph will have an expected number of $\frac{n^2p}{2}$ edges, and a ratio $r \in [0,1]$, with $r|E|$ being the expected number of edges connecting nodes in the same community and $(1-r)|E|$ being the expected number of edges connecting nodes in different communities. \begin{figure} \centering \includegraphics[width=10cm]{4mod.jpg} \caption{A Max-Cut approximation on a random 4-modular graph (best viewed in colour).}\label{fig:4mod} \end{figure} In Figures~\ref{fig:mod2500}, \ref{fig:mod4000}, and~\ref{fig:mod10000} we display results obtained for random modular graphs. For each of $R(2500,2,0.009,0.8)$ (Figure~\ref{fig:mod2500}), $R(4000,20,0.01,0.7)$ (Figure~\ref{fig:mod4000}), and $R(10000,10,0.01,0.8)$ (Figure~\ref{fig:mod10000}) we create 100 realisations. We then run \ref{alg:signlessMBO} with both the spectral method and the Euler method, and we run \ref{alg:GW}.
For both of the \ref{alg:signlessMBO} methods we choose either $\Delta_0^+$, $\Delta_1^+$, or $\Delta_s^+$, setting $\tau = 20$ for all tests. The bar chart represents the mean of the best, average, and least cuts over all 100 realisations of the chosen random modular graph. The error bars are the corrected sample standard deviation of the results obtained over all 100 realisations. In Figures~\ref{fig:mod2500}, \ref{fig:mod4000}, and \ref{fig:mod10000} we see that \ref{alg:signlessMBO} with either $\Delta_1^+$ or $\Delta_s^+$, using both the spectral method and the Euler method, outperforms \ref{alg:GW} with respect to the best, average, and least cuts. In Table~\ref{table:ModTime} we see that for any choice of operator and method, \ref{alg:signlessMBO} is faster on average than \ref{alg:GW} for our choices of random modular graphs. We note in particular that for our realisations of $R(10000,10,0.01,0.8)$ the average \ref{alg:GW} test took just below 65 minutes, whereas the average \ref{alg:signlessMBO} test using the spectral method and either $\Delta_1^+$ or $\Delta_s^+$ took under a minute, obtaining on average better outcomes. \begin{figure} \centering \includegraphics[width=12cm]{mod2500.jpg} \caption{Bar chart of Max-Cut approximations on 100 realisations of $R(2500,2,0.009,0.8)$.}\label{fig:mod2500} \end{figure} \begin{figure} \centering \includegraphics[width=12cm]{mod4000.jpg} \caption{Bar chart of Max-Cut approximations on 100 realisations of $R(4000,20,0.01,0.7)$.}\label{fig:mod4000} \end{figure} \begin{figure} \centering \includegraphics[width=12cm]{mod10000.jpg} \caption{Bar chart of Max-Cut approximations on 100 realisations of $R(10000,10,0.01,0.8)$.}\label{fig:mod10000} \end{figure} \begin{table} \begin{tabular}{ |l|l|l|l|l|l|l|l|} \hline Graph & $\Delta_1^+$ (S) & $\Delta_1^+$ (E) & $\Delta_s^+$ (S) & $\Delta_s^+$ (E) & $\Delta_0^+$ (S) & $\Delta_0^+$ (E) & GW \\ \hline $R(2500,2,0.009,0.8)$ & 0.80 & 10.43 & \textbf{0.79} & 10.26 & 4.36 & 6.13 & 56.30 \\ \hline $R(4000,20,0.01,0.7)$ & \textbf{4.05} & 30.46 & 4.49 & 29.52 & 16.26 & 18.19 & 248.25 \\ \hline $R(10000,10,0.01,0.8)$ & \textbf{49.98} & 266.10 & 52.85 & 266.40 & 210.94 & 194.52 & 3893.87 \\ \hline \end{tabular} \caption{Average \ref{alg:signlessMBO} and \ref{alg:GW} run-times for each realisation of $R(n,c,p,r)$, time in seconds.}\label{table:ModTime} \end{table} \subsection{Weighted graphs}\label{sec:weighted} In this subsection we assign random weights to the edges of selected graphs from Section~\ref{sec:Rand} and Section~\ref{sec:scale}. To create the graphs W1 and W2 we use two of the realisations of $G(1000,0.01)$, and multiply their edges by random real numbers drawn uniformly from the ranges $[0,2]$ and $[0,20]$ respectively. W3 and W4 were created by using two of the realisations of $G(2500,0.4)$ in Section~\ref{sec:Rand}, and multiplying their edges by random real numbers drawn uniformly from the ranges $[0,5]$ and $[0,1]$ respectively. W5, W6, W7 were created by using three of the realisations of $G(5000,0.001)$ in Section~\ref{sec:Rand}, and multiplying their edges by random real numbers drawn uniformly from the ranges $[0,1],[0,15],$ and $[0,50]$ respectively.
W8 is the AS1 graph, whose edges are multiplied by random real numbers drawn uniformly from the range $[0,12]$, W9 is the AS5 graph, whose edges are multiplied by random real numbers drawn uniformly from the range $[0,4]$, and W10 is the AS8 graph, whose edges are multiplied by random real numbers drawn uniformly from the range $[0,8]$. We run \ref{alg:signlessMBO} for all three choices of $\Delta^+$, on all of these graphs, and compare against \ref{alg:GW} in Table~\ref{table:weighted}. We set $\tau = 20$ for both the spectral decomposition method and the Euler method. We saw that \ref{alg:signlessMBO} using the spectral method produced larger cuts than \ref{alg:GW} on the random graphs considered in Section~\ref{sec:Rand}; when assigning random weights to the edges of these random graphs the same conclusion holds. We see in Table~\ref{table:weighted} that for this collection of random graphs \ref{alg:signlessMBO} using the spectral method (with either $\Delta_1^+$ or $\Delta_s^+$ used) outperforms \ref{alg:GW} with respect to the best, average, and smallest obtained cut sizes, and the run time. In Section~\ref{sec:scale} we saw that \ref{alg:signlessMBO} using both the spectral method and the Euler method produced better average and smallest cuts than \ref{alg:GW} on the scale-free graphs considered in that section, but the best cut sizes were produced more often by \ref{alg:GW}. These weighted examples support the same conclusions. The blank results in Table~\ref{table:weighted} for the Euler method using $\Delta_0^+$ are due to \ref{alg:signlessMBO} producing trivial results for these choices, as stated in Section~\ref{sec:method}. \begin{table} \centering \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $\Delta_1^+$ (S) Best & $\Delta_1^+$ (S) Avg & $\Delta_1^+$ (S) Least & $\Delta_1^+$ (S) Time \\ \hline W1 & 3612.00 & 3569.08 & 3537.10 & 0.47 \\ \hline W2 & 36487.51 & 36082.58 & 35687.87 & \textbf{0.30} \\ \hline W3 & 1622125.53 & \textbf{1620885.77} & \textbf{1619371.25} & 8.09\\ \hline W4 & 323926.34 & 323639.05 & \textbf{323321.92} & 8.59 \\ \hline W5 & 5054.26 & 5033.54 & 5010.38 & \textbf{4.00}\\ \hline W6 & 74560.24 & 74218.26 & 73776.17 & \textbf{3.90} \\ \hline W7 & 252448.52 & 251045.03 & 249459.89 & 4.18 \\ \hline W8 & 137202.14 & 135952.94 & 133480.08 & 16.17 \\ \hline W9 & 28351.01 & 28194.96 & 28009.15 & \textbf{3.99} \\ \hline W10 & 92376.49 & 91570.35 & 90172.90 & 17.02 \\ \hline \end{tabular} \vspace{0.5cm} \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $\Delta_1^+$ (E) Best & $\Delta_1^+$ (E) Avg & $\Delta_1^+$ (E) Least & $\Delta_1^+$ (E) Time \\ \hline W1 & \textbf{3622.58} & \textbf{3580.53} & 3548.82 & 1.41 \\ \hline W2 & \textbf{36530.25} & \textbf{36191.16} & \textbf{35928.56} & 1.67 \\ \hline W3 & 1603390.76 & 1600505.43 & 1596558.94 & 185.03\\ \hline W4 & 320347.01 & 319612.93 & 318849.26 & 195.66\\ \hline W5 & \textbf{5104.45} & \textbf{5081.95} & \textbf{5063.64} & 15.31\\ \hline W6 & \textbf{75499.50} & \textbf{75175.73} & \textbf{74833.80} & 15.70\\ \hline W7 & \textbf{255793.23} & \textbf{254569.97} & \textbf{253091.91} & 15.71\\ \hline W8 & 137569.32 & \textbf{136896.1} & \textbf{136094.60} & 23.83\\ \hline W9 & 28545.45 & \textbf{28369.43} & \textbf{28141.76} & 9.24\\ \hline W10 & 93021.06 & \textbf{92489.04} & \textbf{91626.99} & 25.37\\ \hline \end{tabular} \caption{\ref{alg:signlessMBO} cut approximations using $\Delta_1^+$ on randomly weighted graphs, time in seconds.} \end{table} \vspace{0.5cm} \begin{table} \centering \begin{tabular}{ |l|l|l|l|l|} \hline
Graph & $\Delta_s^+$ (S) Best & $\Delta_s^+$ (S) Avg & $\Delta_s^+$ (S) Least & $\Delta_s^+$ (S) Time \\ \hline W1 & 3601.29 & 3569.23 & 3545.85 & \textbf{0.33} \\ \hline W2 & 36192.09 & 36059.80 & 35867.83 & 0.49 \\ \hline W3 & \textbf{1622372.91} & 1620484 & 1618809.76 & 8.40 \\ \hline W4 & \textbf{323933.40} & \textbf{323642.4} & 323114.45 & 7.65 \\ \hline W5 & 5068.19 & 5041.94 & 5015.16 & 4.50 \\ \hline W6 & 74844.37 & 74505.45 & 73963.79 & 4.67 \\ \hline W7 & 253043.96 & 251668.30 & 250600.35 & \textbf{4.12} \\ \hline W8 & 137195.52 & 136360.17 & 134856.06 & \textbf{15.38} \\ \hline W9 & 28389.38 & 28227.09 & 28067.66 & 4.12 \\ \hline W10 & 92439.42 & 91952.98 & 90488.33 & \textbf{15.33} \\ \hline \end{tabular} \vspace{0.5cm} \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $\Delta_s^+$ (E) Best & $\Delta_s^+$ (E) Avg & $\Delta_s^+$ (E) Least & $\Delta_s^+$ (E) Time \\ \hline W1 & 3614.37 & 3577.56 & 3542.19 & 1.40 \\ \hline W2 & 36321.80 & 36150.05 & 35910.90 & 1.53 \\ \hline W3 & 1604257.12 & 1600145.68 & 1597577.4 & 187.88 \\ \hline W4 & 320691.88 & 319596.27 & 318900.13 & 199.01 \\ \hline W5 & 5096.55 & 5072.36 & 5041.89 & 15.9 \\ \hline W6 & 75456.87 & 75089.73 & 74745.17 & 18.09 \\ \hline W7 & 255316.85 & 253821.64 & 252527.13 & 15.48 \\ \hline W8 & 137282.02 & 136475.24 & 134333.1 & 24.51 \\ \hline W9 & 28445.94 & 28258.64 & 28101.22 & 9.18 \\ \hline W10 & 92731.62 & 92093.05 & 90448.61 & 24.36 \\ \hline \end{tabular} \caption{\ref{alg:signlessMBO} cut approximations using $\Delta_s^+$ on randomly weighted graphs, time in seconds.} \end{table} \vspace{0.5cm} \begin{table} \centering \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $\Delta_0^+$ (S) Best & $\Delta_0^+$ (S) Avg & $\Delta_0^+$ (S) Least & $\Delta_0^+$ (S) Time \\ \hline W1 & 3413.96 & 3345.32 & 3276.63 & 0.61 \\ \hline W2 & 34784.30 & 34304.33 & 33627.16 & 0.51 \\ \hline W3 & 1602346.52 & 1600022.33 & 1595791.12 & \textbf{6.97} \\ \hline W4 & 320251.52 & 319940.38 & 319663.40 & \textbf{6.25} \\ \hline W5 & 4793.44 & 4761.72 & 4715.51 & 18.66 \\ \hline W6 & 71219.49 & 70427.83 & 69643.31 & 18.93 \\ \hline W7 & 239991.72 & 237647.45 & 235617.15 & 19.17 \\ \hline W8 & 134097.55 & 131088.97 & 126123.70 & 272.56 \\ \hline W9 & 27528.99 & 26554.77 & 25501.34 & 69.63 \\ \hline W10 & 90271.70 & 88031.84 & 83130.60 & 264.89 \\ \hline \end{tabular} \vspace{0.5cm} \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $\Delta_0^+$ (E) Best & $\Delta_0^+$ (E) Avg & $\Delta_0^+$ (E) Least & $\Delta_0^+$ (E) Time \\ \hline W1 & 3524.24 & 3456.55 & 3406.93 & 1.03 \\ \hline W2 & 35664.18 & 35040.71 & 34383.57 & 1.03 \\ \hline W3 & 1605419.97 & 1602251.82 & 1597064.59 & 203.27 \\ \hline W4 & 320321.63 & 319809.73 & 319237.08 & 192.51 \\ \hline W5 & 5017.66 & 4983.90 & 4954.63 & 7.76 \\ \hline W6 & 74195.87 & 73688.97 & 73231.67 & 7.33 \\ \hline W7 & 251330.73 & 249754.88 & 248091.06 & 7.51 \\ \hline W8 & - & - & - & - \\ \hline W9 & - & - & - & - \\ \hline W10 & - & - & - & - \\ \hline \end{tabular} \caption{\ref{alg:signlessMBO} cut approximations using $\Delta_0^+$ on randomly weighted graphs, time in seconds.} \end{table} \vspace{0.5cm} \begin{table} \centering \begin{tabular}{ |l|l|l|l|l|} \hline Graph & GW Best & GW Avg & GW Least & GW Time\\ \hline W1 & 3585.17 & 3535.63 & 3494.26 & 5.74 \\ \hline W2 & 36101.30 & 35698.47 & 35151.60 & 6.07 \\ \hline W3 & 1620705.80 & 1618813.52 & 1616502.33 & 43.58 \\ \hline W4 & 323573.40 & 323275.84 & 322795.83 & 44.09 \\ \hline W5 & 5038.00 & 5000.74 & 4953.71 & 265.27 \\ \hline W6 & 74372.75 & 73852.36 & 
73293.27 & 241.33 \\ \hline W7 & 251802.56 & 250316.08 & 248098.85 & 263.44 \\ \hline W8 & \textbf{138159.14} & 135899.20 & 129576.95 & 2629.60 \\ \hline W9 & \textbf{28705.35} & 28169.25 & 26422.54 & 689.16 \\ \hline W10 & \textbf{93547.26} & 91571.68 & 87487.99 & 2646.94 \\ \hline \end{tabular} \caption{\ref{alg:GW} cut approximations on randomly weighted graphs, time in seconds.}\label{table:weighted} \end{table} \subsection{Large graphs}\label{sec:large} Since each time step of the Euler method has a time complexity of $\mathcal{O}(|E|)$, in this section we show that \ref{alg:signlessMBO} using the Euler method can provide Max-Cut approximations in a respectable time on large sparse datasets. The graphs Amazon0302 and Amazon0601 are networks in which the nodes represent products and an edge exists between two nodes if the corresponding products are frequently co-purchased; both of these networks were constructed in 2003. GNutella31 depicts a peer-to-peer file-sharing network in 2002. PA RoadNet is a road network of Pennsylvania with intersections and endpoints acting as nodes and roads connecting them acting as edges. Email-Enron is a network where each edge represents an email being sent between two people. BerkStan-Web is a network of inter-domain and intra-domain hyperlinks between pages on the domains berkeley.edu and stanford.edu in 2002. Stanford is a network of hyperlinks between pages on the domain stanford.edu in 2002. All of these datasets were obtained from the website \cite{snapnets}. The graph WWW1999 is a model of the Internet in 1999 with edges depicting hyperlinks between websites, obtained from \cite{albert1999}. Table~\ref{table:TabLarge} displays the properties of these graphs. Table~\ref{table:EulerLarge} displays the results we obtained on these graphs choosing $\Delta_1^+$ as our operator, the Euler method as our signless diffusion solver, and $\tau = 10$. For these large graphs we are unable to obtain results for comparison using \ref{alg:GW}, because \ref{alg:GW} requires too much memory to run on the same computer setup.
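To make the per-step cost concrete, each explicit Euler step is a single sparse matrix-vector product, i.e. $\mathcal{O}(|E|)$ arithmetic operations; an illustrative solver (ours, with a hypothetical interface) is:
\begin{verbatim}
% Explicit Euler for the signless diffusion du/dt = -Q u, dt = tau/M.
% Each step is one sparse matrix-vector product, i.e. O(|E|) work,
% so a full diffusion solve costs O(M |E|) operations.
function u = signless_euler(Q, u0, tau, M)
dt = tau / M;
u = u0;
for m = 1:M
    u = u - dt * (Q * u);
end
end
\end{verbatim}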
\begin{table} \centering \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $|V|$ & $|E|$ & $d_{-}$ & $d_{+}$ \\ \hline Amazon0302 & 262111 & 899792 & 1 & 420 \\ \hline Amazon0601 & 403394 & 2443408 & 1 & 2752 \\ \hline GNutella31 & 62586 & 147892 & 1 & 95 \\ \hline PA RoadNet & 1088092 & 1541898 & 1 & 9 \\ \hline Email-Enron & 36692 & 183831 & 1 & 1383 \\ \hline BerkStan-Web & 685230 & 6649470 & 1 & 84290 \\ \hline Stanford & 281904 & 1992636 & 1 & 38625 \\ \hline WWW1999 & 325729 & 1090108 & 1 & 10721 \\ \hline \end{tabular} \caption{Properties of the large datasets we test on.}\label{table:TabLarge} \end{table} \begin{table} \centering \begin{tabular}{ |l|l|l|l|l|} \hline Graph & $\Delta_1^+$ (E) Best & $\Delta_1^+$ (E) Avg & $\Delta_1^+$ (E) Least & $\Delta_1^+$ (E) Time\\ \hline Amazon0302 & 618942 & 618512.18 & 618030 & 0.49 \\ \hline Amazon0601 & 1580070 & 1576960.80 & 1571089 & 1.90 \\ \hline GNutella31 & 116552 & 116213.74 & 115916 & 0.06 \\ \hline PA RoadNet & 1380131 & 1379797.90 & 1379416 & 0.64 \\ \hline Email-Enron & 112665 & 111680.24 & 110279 & 0.02 \\ \hline BerkStan-Web & 5335813 & 5319662.06 & 5281630 & 0.83 \\ \hline Stanford & 1585802 & 1580445.14 & 1570469 & 0.47 \\ \hline WWW1999 & 813000 & 809329.52 & 806130 & 0.21 \\ \hline \end{tabular} \caption{Results of \ref{alg:signlessMBO} using $\Delta_1^+$ and the Euler method on large datasets, time in hours.}\label{table:EulerLarge} \end{table} \section{Parameter choices}\label{sec:parameters} \subsection{Variable $K$} As stated in Section \ref{sec:spectral}, the computational advantage of \ref{alg:signlessMBO} using the spectral method is that not all the eigenpairs of $\Delta^+$ need to be used. In practice, if $K$ is large enough, the cut sizes obtained by \ref{alg:signlessMBO} using the spectral method do not improve significantly when $K$ is increased further. The plots in Figure~\ref{fig:KComp} highlight this. For these three tests we fixed the initial conditions, the choice of operator $\Delta_1^+$, and $\tau = 20$ for each respective graph. For Figure~\ref{fig:K1} we plot the best, average, and least cuts for each choice of $K$. For Figure~\ref{fig:K2} and Figure~\ref{fig:K3} we plot the mean of the best, average, and least cuts over all 100 graphs for each choice of $K$. The error bars indicate the corrected sample standard deviation of the best, average, and least cuts. We ran \ref{alg:signlessMBO} using the spectral method on the AS4 graph, increasing the value of $K$ in increments of 5 from 5 to 100. The plot in Figure~\ref{fig:K1} shows that beyond $K = 40$ the best, average, and least cut sizes change very little as $K$ increases. For Figure~\ref{fig:K2} we ran \ref{alg:signlessMBO} on the 100 realisations of $G(5000,0.001)$ from Section~\ref{sec:Rand}, increasing $K$ in increments of 10 from 10 to 200. For Figure~\ref{fig:K3} we ran \ref{alg:signlessMBO} on the 100 realisations of $R(2500,2,0.009,0.8)$, increasing $K$ in increments of 5 from 5 to 100. The plots in Figure~\ref{fig:K2} and Figure~\ref{fig:K3} show that for our choices of Erd\"os-R\'enyi and random modular graphs increasing $K$ increases the cut sizes. We also note that the best, average, and least cut sizes eventually plateau.
\begin{figure} \centering \begin{subfigure}{.75\textwidth} \centering \includegraphics[width=9cm]{KAS4.jpg} \caption{AS4: Cut size as function of $K$, $\tau = 20$.}\label{fig:K1} \end{subfigure}% \begin{subfigure}{.75\textwidth} \centering \includegraphics[width=9cm]{K5000Plot.jpg} \caption{100 realisations of $G(5000,0.001)$: Cut size as function of $K$, $\tau = 20$.}\label{fig:K2} \end{subfigure}% \begin{subfigure}{.75\textwidth} \centering \includegraphics[width=9cm]{mod2500vsk.jpg} \caption{100 realisations of $R(2500,2,0.009,0.8)$: Cut size as function of $K$, $\tau = 20$.}\label{fig:K3} \end{subfigure} \caption{Cut size as function of $K$ for three graphs (best viewed in colour).}\label{fig:KComp} \end{figure} For large graphs, however, finding the value of $K$ beyond which the produced cut sizes plateau is problematic. We ran \ref{alg:signlessMBO} using the spectral method with $\Delta_1^+$ on the Amazon0302 graph, increasing $K$ in increments of 100 from 100 to 2600. As shown in Figure~\ref{fig:KAmazon}, the best, average, and least outcomes of \ref{alg:signlessMBO} are still increasing at the end of the range of $K$ values we plotted. For $K = 200$ and $K = 2600$ the run time of \ref{alg:signlessMBO} was 12 minutes and 26 hours, respectively; this increase in computation time resulted in a 3\% increase in cut values. Comparing the cut size obtained for $K = 2600$ with the cut sizes obtained on Amazon0302 in Table~\ref{table:EulerLarge}, we see that using the Euler method as the signless diffusion solver is more accurate and significantly faster. \begin{figure} \centering \includegraphics[width=10cm]{Amazon0302K.jpg} \caption{Comparison of cut size approximation vs $K$ on Amazon0302 graph.}\label{fig:KAmazon} \end{figure} \subsection{Variable $\tau$} Other than the pinning condition stated in Section~\ref{sec:pinning condition}, currently we have very little information on which to base our choice of $\tau$. In this section we examine how the cut sizes obtained by \ref{alg:signlessMBO} depend on the variable $\tau$. We choose $\Delta_1^+$ as the signless Laplacian operator and the spectral method as the signless diffusion solver. Figure~\ref{fig:taucomp} displays the cut sizes obtained from \ref{alg:signlessMBO} on three (sets of) graphs as functions of $\tau$. For Figure~\ref{fig:tau1} we plot the best, average, and least cuts for each choice of $\tau$. For Figure~\ref{fig:tau2} and Figure~\ref{fig:tau3} we plot the mean of the best, average, and least cuts over all 100 graphs for each choice of $\tau$. The error bars indicate the corrected sample standard deviation of the best, average, and least cuts. We ran \ref{alg:signlessMBO} using the spectral method on the AS4 graph, increasing the value of $\tau$ in increments of 5 from 5 to 500. In Figure~\ref{fig:tau1} we see that in this experiment $5 \leq \tau \leq 40$ produces the best results with respect to cut size. We also see that for $330 \leq \tau \leq 480$ the best, average, and least cuts are almost identical. For Figure~\ref{fig:tau2} we ran \ref{alg:signlessMBO} on the 100 realisations of $G(5000,0.001)$ from Section~\ref{sec:Rand}, increasing $\tau$ in increments of 5 from 5 to 125. For Figure~\ref{fig:tau3} we ran \ref{alg:signlessMBO} on the 100 realisations of $R(4000,20,0.01,0.7)$, increasing $\tau$ in increments of 5 from 5 to 100.
In Figure~\ref{fig:tau2} and Figure~\ref{fig:tau3} we see the general trend that increasing $\tau$ beyond 20 decreases the mean over the best, average, and least cuts over all 100 realisations of $G(5000,0.001)$ and of $R(4000,20,0.01,0.7)$ respectively. \begin{figure} \centering \begin{subfigure}{.75\textwidth} \centering \includegraphics[width=9cm]{as4vstau.jpg} \caption{AS4: Cut size as function of $\tau$, $K = 89$.}\label{fig:tau1} \end{subfigure}% \begin{subfigure}{.75\textwidth} \centering \includegraphics[width=9cm]{erdos5000vstau.jpg} \caption{100 realisations of $G(5000,0.001)$: Cut size as function of $\tau$, $K = 49$.}\label{fig:tau2} \end{subfigure}% \begin{subfigure}{.75\textwidth} \centering \includegraphics[width=9cm]{mod4000tau.jpg} \caption{100 realisations of $R(4000,20,0.01,0.7)$: Cut size as function of $\tau$, $K = 40$.}\label{fig:tau3} \end{subfigure} \caption{Cut size as function of $\tau$ for three graphs (best viewed in colour).}\label{fig:taucomp} \end{figure} \subsection{Implicit Euler scheme}\label{sec:implicit} On the random graphs we tested on in Section~\ref{sec:Rand} and Section~\ref{sec:mod} our explicit Euler scheme using $\Delta_0^+$ produced non-trivial cut sizes. However, for the scale-free graphs in Section~\ref{sec:scale} and Section~\ref{sec:weighted} we did not find a value of $\tau$ or $dt$ such that the cuts induced from \ref{alg:signlessMBO} were non-trivial. In this subsection we show that we can solve the Euler equation implicitly in order to obtain non-trivial cut sizes with the operator $\Delta_0^+$, subject to suitable choices of $dt$ and $\tau$. However, the results are significantly inferior to those obtained with the operators $\Delta_1^+$ and $\Delta_s^+$ under the implicit Euler scheme. We also compare the \ref{alg:signlessMBO} results obtained using the implicit scheme to the results obtained using the explicit scheme for a set of random graphs. We run \ref{alg:signlessMBO} using the implicit Euler scheme on the AS4 and AS8 graphs from Section~\ref{sec:scale} and the W9 graph from Section~\ref{sec:weighted}. We choose $dt = 0.2$ and $\tau = 20$ when $\Delta_1^+$ or $\Delta_s^+$ is the operator. For $\Delta_0^+$ we set $dt = 0.0005$ and $\tau = 0.05$ for the AS4 graph, and for the AS8 graph and the W9 graph we set $dt = 0.0001$ and $\tau = 0.01$. Table~\ref{tab:impScale} shows that \ref{alg:signlessMBO} using the implicit Euler scheme with $\Delta_0^+$ and our choice of parameters produces non-trivial cut sizes; however, they are significantly smaller than those obtained using $\Delta_1^+$ or $\Delta_s^+$.
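A minimal sketch of the implicit scheme (ours, with a hypothetical interface) is given below; each step solves a sparse linear system, which is what removes the stability restriction on $dt$ that hampers the explicit scheme when $\lambda_n$ is large.
\begin{verbatim}
% Implicit Euler for du/dt = -Q u: each step solves
% (I + dt*Q) u_{m+1} = u_m. Unconditionally stable, at the price of a
% sparse solve per step; factorising I + dt*Q once would amortise this.
function u = signless_implicit_euler(Q, u0, tau, dt)
n = size(Q, 1);
A = speye(n) + dt * Q;
u = u0;
for m = 1:round(tau / dt)
    u = A \ u;
end
end
\end{verbatim}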
\begin{table} \centering \begin{tabular}{|l|l|l|l|} \hline Graph & $\Delta_1^+$ Best & $\Delta_s^+$ Best & $\Delta_0^+$ Best \\ \hline AS4 & 15276 & \textbf{15279} & 9259 \\ \hline AS8 & \textbf{23083} & 23033 & 13725 \\ \hline W9 & \textbf{28553.66} & 28485.28 & 17146.69 \\ \hline \end{tabular} \quad \begin{tabular}{ |l|l|l|l|} \hline Graph & $\Delta_1^+$ Avg & $\Delta_s^+$ Avg & $\Delta_0^+$ Avg \\ \hline AS4 & \textbf{15196.52} & 15175.52 & 9124.68 \\ \hline AS8 & \textbf{22934.30} & 22844.16 & 13585.56 \\ \hline W9 & \textbf{28360.46} & 28294.40 & 16847.92 \\ \hline \end{tabular} \vspace{0.5cm} \begin{tabular}{ |l|l|l|l|} \hline Graph & $\Delta_1^+$ Least & $\Delta_s^+$ Least & $\Delta_0^+$ Least \\ \hline AS4 & \textbf{15124} & 15056 & 8964 \\ \hline AS8 & \textbf{22521} & 22454 & 13477 \\ \hline W9 & \textbf{28103.28} & 28075.62 & 16521.43 \\ \hline \end{tabular} \quad \begin{tabular}{ |l|l|l|l|} \hline Graph & $\Delta_1^+$ Time & $\Delta_s^+$ Time & $\Delta_0^+$ Time \\ \hline AS4 & 47.83 & 50.94 & \textbf{7.47} \\ \hline AS8 & 105.22 & 114.74 & \textbf{11.48} \\ \hline W9 & 38.61 & 42.57 & \textbf{5.26} \\ \hline \end{tabular} \caption{Cut sizes obtained by \ref{alg:signlessMBO} using the implicit Euler scheme on scale-free graphs, time in seconds.}\label{tab:impScale} \end{table} We run \ref{alg:signlessMBO} on the 100 realisations of $G(1000,0.01)$ and $R(4000,20,0.01,0.7)$ in Section~\ref{sec:Rand} and Section~\ref{sec:mod} respectively, using the implicit and explicit Euler method for each operator $\Delta^+ \in \{\Delta_0^+,\Delta_1^+,\Delta_s^+\}$. We choose the same values of $\tau$ and $dt$ as chosen in Section~\ref{sec:Rand} and Section~\ref{sec:mod}, fixing the initial conditions for both methods. Figure~\ref{fig:G1000imp} and Figure~\ref{fig:mod4000imp} show that the average cut sizes obtained using the implicit Euler method are slightly better than those obtained using the explicit method. However, Table~\ref{tab:impexpTime} shows that \ref{alg:signlessMBO} using the explicit Euler method produces cut sizes in less time than using the implicit Euler method on these sets of random graphs. This is why we choose the explicit scheme as the Euler method in Section~\ref{sec:results}. \begin{figure} \centering \includegraphics[width=12cm]{impexperdos.jpg} \caption{Bar chart of Max-Cut approximations on 100 realisations of $G(1000,0.01)$ using the implicit Euler method and the explicit Euler method.}\label{fig:G1000imp} \end{figure} \begin{figure} \centering \includegraphics[width=12cm]{impexpmod4000.jpg} \caption{Bar chart of Max-Cut approximations on 100 realisations of $R(4000,20,0.01,0.7)$ using the implicit Euler method and the explicit Euler method.}\label{fig:mod4000imp} \end{figure} \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline Graph & $\Delta_1^+$ (I) & $\Delta_1^+$ (E) & $\Delta_s^+$ (I) & $\Delta_s^+$ (E) & $\Delta_0^+$ (I) & $\Delta_0^+$ (E) \\ \hline $G(1000,0.01)$ & 3.36 & 1.82 & 3.28 & 1.79 & 2.20 & \textbf{1.19} \\ \hline $R(4000,20,0.01,0.7)$ & 62.97 & 44.23 & 62.16 & 44.18 & 41.53 & \textbf{24.01} \\ \hline \end{tabular} \caption{Average \ref{alg:signlessMBO} run-times for each realisation of $G(1000,0.01)$ and $R(4000,20,0.01,0.7)$, time in seconds.
(I) indicates the implicit Euler method and (E) indicates the explicit Euler method.}\label{tab:impexpTime} \end{table} \section{Conclusions}\label{sec:conclusions} We have proven that the signless graph Ginzburg-Landau functional $f_{\varepsilon}^+$ $\Gamma$-converges to a Max-Cut objective functional as $\varepsilon\downarrow 0$ and thus minimizers of $f_{\varepsilon}^+$ can be used to approximate maximal cuts of a graph. We use an adaptation of the graph MBO scheme involving signless graph Laplacians to approximately minimize $f_{\varepsilon}^+$. We solve the signless diffusion step of our graph MBO scheme using a spectral truncation method and an Euler method. We tested the resulting \ref{alg:signlessMBO} algorithm on various graphs using both these signless diffusion solvers, and compared the results and run times with those obtained using the \ref{alg:GW} algorithm. In our tests on realisations of random Erd\"os-R\'enyi graphs and on realisations of random modular graphs our \ref{alg:signlessMBO} algorithm using the spectral method outperforms \ref{alg:GW} with reduced run times. On our examples of scale-free graphs \ref{alg:GW} usually gives the best maximum cut approximations, but requires run times that are two orders of magnitude longer than those of \ref{alg:signlessMBO}, which obtains cut sizes within about 2\% of those obtained by \ref{alg:GW}. Similar conclusions follow from our tests on weighted graphs, which used realisations of random Erd\"os-R\'enyi graphs and some scale-free graphs, all with random edge weights. We have also shown that our algorithm using the Euler method can be used on large sparse datasets, with reasonable computation times. In our tests (and for our parameter choices) we see that \ref{alg:signlessMBO} using both $\Delta_1^+$ and $\Delta_s^+$ produces larger Max-Cut approximations than $\Delta_0^+$ for all of the graphs that we tested on. There are still many open questions related to the \ref{alg:signlessMBO} algorithm, for example questions related to a priori parameter choices (such as $\tau$ and $K$), and performance guarantees. These can be the subject of future research. \section*{Acknowledgements} We would like to thank the EPSRC for supporting this work through the DTP grant EP/M50810X/1. We would also like to thank Matthias Kurzke and Braxton Osting for helpful discussions. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska--Curie grant agreement No 777826. \bibliographystyle{ieeetr}
\section{Introduction} Let $G$ be a semisimple Lie group and $\mathcal{X}$ the associated symmetric space of dimension $n$. Let $M$ be a connected, orientable, aspherical, tame manifold of the same dimension as $\mathcal{X}$. First assume that $M$ is compact. To each representation $\rho :\pi_1(M) \rightarrow G$, one can associate a volume of $\rho$ in the following way. First, associate a flat bundle $E_\rho$ over $M$ with fiber $\mathcal{X}$ to $\rho$. Since $\mathcal{X}$ is contractible, there always exists a section $s : M \rightarrow E_\rho$. Let $\omega_{\mathcal{X}}$ be the Riemannian volume form on $\mathcal{X}$. One may think of $\omega_{\mathcal{X}}$ as a closed differential form on $E_\rho$ by spreading $\omega_{\mathcal{X}}$ over the fibers of $E_\rho$. Then the volume of $\rho$ is defined by $$\mathrm{Vol}(\rho)=\int_M s^*\omega_{\mathcal{X}}.$$ Since any two sections are homotopic to each other, the volume $\mathrm{Vol}(\rho)$ does not depend on the choice of section. The volume of representations has been used to characterize discrete faithful representations. Let $\Gamma$ be a uniform lattice in $G$. Then the volume of representations satisfies a Milnor-Wood type inequality. More precisely, it holds that for any representation $\rho :\Gamma\rightarrow G$, \begin{eqnarray}\label{MWinequality} |\mathrm{Vol}(\rho)| \leq \mathrm{Vol}(\Gamma\backslash \mathcal{X}).\end{eqnarray} Furthermore, equality holds in (\ref{MWinequality}) if and only if $\rho$ is discrete and faithful. This is the so-called \emph{volume rigidity theorem}. Goldman \cite{Go92} proved the volume rigidity theorem in the higher rank case, and Besson, Courtois and Gallot \cite{BCG07} proved the theorem in the rank $1$ case. Now assume that $M$ is noncompact. Then the definition of the volume of representations given above is no longer valid since problems of integrability arise. So far, three definitions of volume of representations have been given under some conditions on $M$. Let us first fix the following notation throughout the paper. \smallskip \noindent {\bf Setup.} Let $M$ be a noncompact, connected, orientable, aspherical, tame manifold. Denote by $\overline M$ the compact manifold with boundary whose interior is homeomorphic to $M$. Assume that each connected component of $\partial \overline M$ has amenable fundamental group. Let $G$ be a rank $1$ semisimple Lie group with trivial center and no compact factors. Let $\mathcal{X}$ be the associated symmetric space of dimension $n$. Assume that $M$ has the same dimension as $\mathcal{X}$. \smallskip First of all, Dunfield \cite{Du99} introduced the notion of pseudo-developing map to define the volume of representations of a nonuniform lattice $\Gamma$ in $\mathrm{SO}(3,1)$. This successfully produced an invariant associated with a representation $\rho :\Gamma \rightarrow \mathrm{SO}(3,1)$, but he did not prove that the volume of representations is independent of the chosen pseudo-developing map. After that, Francaviglia \cite{Fr04} proved the well-definedness of the volume of representations. Then Francaviglia and Klaff \cite{FK06} extended the definition of volume of representations and the volume rigidity theorem to general nonuniform hyperbolic lattices. We call the definition of volume of representations via pseudo-developing map {\bf D1}. For more detail about {\bf D1}, see \cite{FK06} or Section \ref{sec:pseudo}.
The second definition {\bf D2} of volume of representations was given by Bucher, Burger and Iozzi \cite{BBI}, which generalizes the one introduced in \cite{BIW10} for noncompact surfaces. They used the bounded cohomology theory to make an invariant associated with a representation. Given a representation $\rho : \pi_1(M) \rightarrow G$, one cannot get any information from the pull-back map in degree $n$ in continuous cohomology, $\rho^*_c : H^n_c(G,\mathbb R) \rightarrow H^n(\pi_1(M),\mathbb R)$, since $H^n(\pi_1(M),\mathbb R) \cong H^n(M,\mathbb R)$ is trivial. However the situation is different in continuous bounded cohomology. Not only may the pull-back map $\rho^*_b : H^n_{c,b}(G,\mathbb R) \rightarrow H^n_b(\pi_1(M),\mathbb R)$ be nontrivial, but it also encodes subtle algebraic and topological properties of a representation, such as injectivity and discreteness. Bucher, Burger and Iozzi \cite{BBI} gave a proof of the volume rigidity theorem for representations of hyperbolic lattices from the point of view of bounded cohomology. We refer the reader to \cite{BBI} or Section \ref{sec:bounded} for further discussion about {\bf D2}. Recently, S. Kim and I. Kim \cite{KK14} gave a new definition, called {\bf D3}, of volume of representations in the case that $M$ is a complete Riemannian manifold with finite Lipschitz simplicial volume. See \cite{KK14} or Section \ref{sec:lipschitz} for the exact definition of {\bf D3}. In {\bf D3}, it is not necessary that each connected component of $\partial \overline M$ has amenable fundamental group, while the amenable condition on $\partial \overline M$ is necessary in {\bf D2}. They only use the bounded cohomology and $\ell^1$-homology of $M$. This makes {\bf D3} quite useful for defining the volume of representations in the case that the amenable condition on $\partial \overline M$ does not hold. They gave a proof of the volume rigidity theorem for representations of lattices in an arbitrary semisimple Lie group in their setting. In this note, we will give another definition of volume of representations, called {\bf D4}. In {\bf D4}, $\rho$-equivariant maps are involved as in {\bf D1}, and the bounded cohomology of $M$ is involved as in {\bf D2} and {\bf D3}. In fact, {\bf D4} may be seen as a definition connecting the other definitions {\bf D1}, {\bf D2} and {\bf D3}. Eventually we show that all the definitions are equivalent. \begin{theorem}\label{thm:main} Let $G$ be a rank $1$ simple Lie group with trivial center and no compact factors. Let $M$ be a noncompact, connected, orientable, aspherical, tame manifold. Suppose that each end of $M$ has amenable fundamental group. Then all definitions {\bf D1}, {\bf D2} and {\bf D3} of volume of representations of $\pi_1(M)$ into $G$ are equivalent. Furthermore, if $M$ admits a complete Riemannian metric with finite Lipschitz simplicial volume, all definitions {\bf D1}, {\bf D2}, {\bf D3} and {\bf D4} are equivalent. \end{theorem} The paper is organized as follows: For our proof, we recall the definitions of volume of representations in the order {\bf D2, D4, D1, D3}. In Section \ref{sec:bounded}, we first recall the definition {\bf D2}. In Section \ref{sec:cone}, we give the definition {\bf D4} and then prove that {\bf D2} and {\bf D4} are equivalent. In Section \ref{sec:pseudo}, after recalling the definition {\bf D1}, we show the equivalence of {\bf D1} and {\bf D4}. Finally, in Section \ref{sec:lipschitz}, we complete the proof of Theorem \ref{thm:main} by proving that {\bf D3} and {\bf D4} are equivalent.
\section{Bounded cohomology and Definition {\bf D2}}\label{sec:bounded} We choose complexes for the continuous cohomology and continuous bounded cohomology of $G$ appropriate for our purpose. Consider the complex $C^*_c(\mathcal X,\mathbb R)_\mathrm{alt}$ with the homogeneous coboundary operator, where $$C^k_c(\mathcal X,\mathbb R)_\mathrm{alt} =\{ f : \mathcal X^{k+1} \rightarrow \mathbb R \ | \ f \text{ is continuous and alternating} \}.$$ The action of $G$ on $C^k_c(\mathcal X,\mathbb R)_\mathrm{alt}$ is given by $$g\cdot f (x_0,\ldots,x_k)=f(g^{-1}x_0,\ldots,g^{-1}x_k).$$ Then the continuous cohomology $H^*_c(G,\mathbb R)$ can be isomorphically computed by the cohomology of the $G$-invariant complex $C^*_c(\mathcal X,\mathbb R)_\mathrm{alt}^G$ (see \cite[Chapitre III]{Gu80}). According to the Van Est isomorphism \cite[Proposition IX.5.5]{BW}, the continuous cohomology $H^*_c(G,\mathbb R)$ is isomorphic to the space of $G$-invariant differential forms on $\mathcal{X}$. Hence in degree $n$, $H^n_c(G,\mathbb R)$ is generated by the Riemannian volume form $\omega_{\mathcal X}$ on $\mathcal{X}$. Let $C^k_{c,b}(\mathcal{X},\mathbb R)_\mathrm{alt}$ be the subcomplex of continuous, alternating, bounded real valued functions on $\mathcal{X}^{k+1}$. The continuous bounded cohomology $H^*_{c,b}(G,\mathbb R)$ is obtained by the cohomology of the $G$-invariant complex $C^*_{c,b}(\mathcal{X},\mathbb R)_\mathrm{alt}^G$ (see \cite[Corollary 7.4.10]{Mo01}). The inclusion of complexes $C^*_{c,b}(\mathcal{X},\mathbb R)^G_\mathrm{alt} \subset C^*_{c}(\mathcal{X},\mathbb R)^G_\mathrm{alt}$ induces a comparison map $H^*_{c,b}(G,\mathbb R) \rightarrow H^*_{c}(G,\mathbb R)$. Let $Y$ be a countable CW-complex. Denote by $C^k_b(Y,\mathbb R)$ the complex of bounded real valued $k$-cochains on $Y$. For a subspace $B \subset Y$, let $C^k_b(Y,B,\mathbb R)$ be the subcomplex of those bounded $k$-cochains on $Y$ that vanish on simplices with image contained in $B$. The complexes $C^*_b(Y,\mathbb R)$ and $C^*_b(Y,B,\mathbb R)$ define the bounded cohomologies $H^*_b(Y,\mathbb R)$ and $H^*_b(Y,B,\mathbb R)$ respectively. For our convenience, we give another complex which computes the bounded cohomology $H^*_b(Y,\mathbb R)$ of $Y$. Let $C^k_b(\widetilde Y,\mathbb R)_\mathrm{alt}$ denote the complex of bounded, alternating real valued Borel functions on $(\widetilde Y)^{k+1}$. The $\pi_1(Y)$-action on $C^*_b(\widetilde Y,\mathbb R)_\mathrm{alt}$ is defined in the same way as the $G$-action on $C^*_c(\mathcal X,\mathbb R)$. Then Ivanov \cite{Iva85} proved that the $\pi_1(Y)$-invariant complex $C^*_b(\widetilde Y,\mathbb R)_\mathrm{alt}^{\pi_1(Y)}$ defines the bounded cohomology of $Y$. Bucher, Burger and Iozzi \cite{BBI} used bounded cohomology to define the volume of representations. Let $\overline M$ be a connected, orientable, compact manifold with boundary. Suppose that each component of $\partial \overline M$ has amenable fundamental group. In that case, it is proved in \cite{BBIPP,KK} that the natural inclusion $i:(\overline M,\emptyset) \rightarrow (\overline M,\partial \overline M)$ induces an isometric isomorphism in bounded cohomology, $$i_b^* : H^*_b(\overline M, \partial \overline M,\mathbb R) \rightarrow H^*_b(\overline M,\mathbb R),$$ in degrees $* \geq 2$.
Noting the remarkable result of Gromov \cite[Section 3.1]{Gro82} that the natural map $H^n_b(\pi_1(\overline M),\mathbb R)\rightarrow H^n_b(\overline M,\mathbb R)$ is an isometric isomorphism in bounded cohomology, for a given representation $\rho : \pi_1(M) \rightarrow G$ we have a map $$\rho^*_b : H^n_{c,b}(G,\mathbb R) \rightarrow H^n_b(\pi_1(\overline M),\mathbb R) \cong H^n_b(\overline M,\mathbb R) \cong H^n_b(\overline M,\partial \overline M,\mathbb R).$$ The $G$-invariant Riemannian volume form $\omega_\mathcal{X}$ on $\mathcal{X}$ gives rise to a continuous bounded cocycle $\Theta :\mathcal{X}^{n+1} \rightarrow \mathbb R$ defined by $$\Theta(x_0,\ldots,x_n)=\int_{[x_0,\ldots,x_n]}\omega_\mathcal{X},$$ where $[x_0,\ldots,x_n]$ is the geodesic simplex with ordered vertices $x_0,\ldots,x_n$ in $\mathcal{X}$. The boundedness of $\Theta$ is due to the fact that the volume of geodesic simplices in $\mathcal{X}$ is uniformly bounded from above \cite{IY82}. Hence the cocycle $\Theta$ induces a continuous cohomology class $[\Theta]_c \in H^n_c(G,\mathbb R)$ and moreover, a continuous bounded cohomology class $[\Theta]_{c,b} \in H^n_{c,b}(G,\mathbb R)$. The image of $((i^*_b)^{-1} \circ \rho^*_b)[\Theta]_{c,b}$ under the comparison map $c : H^n_b(\overline M,\partial \overline M,\mathbb R) \rightarrow H^n(\overline M,\partial \overline M,\mathbb R)$ is an ordinary relative cohomology class. Its evaluation on the relative fundamental class $[\overline M,\partial \overline M]$ gives an invariant associated with $\rho$. \begin{definition}[{\bf D2}] For a representation $\rho : \pi_1(M) \rightarrow G$, define an invariant $\mathrm{Vol}_2(\rho)$ by $$\mathrm{Vol}_2(\rho)= \left\langle (c\circ (i^*_b)^{-1} \circ \rho^*_b) [\Theta]_{c,b}, [\overline M, \partial \overline M] \right\rangle.$$ \end{definition} In the definition {\bf D2}, a specific continuous bounded volume class $[\Theta]_{c,b}$ in $H^n_{c,b}(G,\mathbb R)$ is involved. It is natural to ask whether the value of the volume of representations changes if another continuous bounded volume class is used in {\bf D2} instead of $[\Theta]_{c,b}$. One could expect that the definition {\bf D2} does not depend on the choice of continuous bounded volume class, but it seems not easy to get an answer directly. It turns out that {\bf D2} is indeed independent of the choice of continuous bounded volume class. For a proof, see Section \ref{sec:lipschitz}. \begin{proposition}\label{prop:indepwb} The definition {\bf D2} does not depend on the choice of continuous bounded volume class, that is, for any two continuous bounded volume classes $\omega_b$, $\omega_b' \in H^n_{c,b}(G,\mathbb R)$, $$\left\langle (c\circ (i^*_b)^{-1} \circ \rho^*_b) (\omega_b), [\overline M, \partial \overline M] \right\rangle=\left\langle (c\circ (i^*_b)^{-1} \circ \rho^*_b) (\omega_b'), [\overline M, \partial \overline M] \right\rangle.$$ \end{proposition} Bucher, Burger and Iozzi proved the volume rigidity theorem for hyperbolic lattices as follows. \begin{theorem}[Bucher, Burger and Iozzi, \cite{BBI}] Let $n\geq 3$. Let $i :\Gamma \hookrightarrow \mathrm{Isom}^+(\mathbb H^n)$ be a lattice embedding and let $\rho:\Gamma \rightarrow \mathrm{Isom}^+(\mathbb H^n)$ be any representation. Then $$| \mathrm{Vol}_2(\rho)| \leq |\mathrm{Vol}_2(i)|=\mathrm{Vol}(\Gamma \backslash \mathbb H^n),$$ with equality if and only if $\rho$ is conjugate to $i$ by an isometry.
\end{theorem} \section{New definition {\bf D4}}\label{sec:cone} In this section we give a new definition of the volume of representations. It will turn out that the new definition is useful in proving that all definitions of the volume of representations are equivalent. \subsection{End compactification} Let $\widehat{M}$ be the end compactification of $M$ obtained by adding one point for each end of $M$. Let $\widetilde M$ denote the universal cover of $M$. Let $\widehat{\widetilde{M}}$ denote the space obtained by adding to $\widetilde{M}$ one point for each lift of each end of $M$. The points added to $M$ (resp.\ $\widetilde M$) are called \emph{ideal points} of $M$ (resp.\ $\widetilde M$). Denote by $\partial \widehat M$ (resp.\ $\partial \widehat{\widetilde M}$) the set of ideal points of $M$ (resp.\ $\widetilde M$). Let $p : \widetilde{M} \rightarrow M$ be the universal covering map. The covering map $p : \widetilde{M} \rightarrow M$ extends to a map $\widehat{p}: \widehat{\widetilde{M}} \rightarrow \widehat{M}$ and, moreover, the action of $\pi_1(M)$ on $\widetilde{M}$ by covering transformations induces an action on $\widehat{\widetilde{M}}$. The action on $\widehat{\widetilde{M}}$ is not free because each point of $\partial \widehat{\widetilde{M}}$ is stabilized by some peripheral subgroup of $\pi_1(M)$. Note that $\widehat M$ can be obtained by collapsing each connected component of $\partial \overline M$ to a point. Similarly, $\widehat{\widetilde M}$ can be obtained by collapsing each connected component of $\bar p^{-1}(\partial \overline M)$ to a point, where $\bar p : \widetilde{\overline M} \rightarrow \overline M$ is the universal covering map. We denote the collapsing map by $\pi : \widetilde {\overline M} \rightarrow \widehat{\widetilde M}$. One advantage of $\widehat M$ is the existence of a fundamental class in singular homology. While the top dimensional singular homology of $M$ vanishes, the top dimensional singular homology of $\widehat M$ with coefficients in $\mathbb Z$ is isomorphic to $\mathbb Z$. Moreover, it can be easily seen that $H_*(\widehat M,\mathbb R)$ is isomorphic to $H_*(\overline M,\partial \overline M,\mathbb R)$ in degrees $* \geq 2$. Hence the fundamental class of $\widehat M$ is well-defined; denote it by $[\widehat M]$. \subsection{The cohomology groups} Let $Y$ be a topological space and suppose that a group $L$ acts continuously on $Y$. Then the cohomology group $H^*(Y;L,\mathbb R)$ associated with $Y$ and $L$ is defined in the following way. Our main reference for this cohomology is \cite{Du}. For $k>0$, define $$F^k_\mathrm{alt}(Y,\mathbb R)=\{ f : Y^{k+1} \rightarrow \mathbb R \ | \ f \text{ is alternating} \}.$$ Let $F^k_\mathrm{alt}(Y,\mathbb R)^L$ denote the subspace of $L$-invariant functions, where the action of $L$ on $F^k_\mathrm{alt}(Y,\mathbb R)$ is given by $$(g \cdot f)(y_0,\ldots,y_k)=f(g^{-1}y_0,\ldots, g^{-1}y_k),$$ for $f \in F^k_\mathrm{alt}(Y,\mathbb R)$ and $g \in L$. Define a coboundary operator $\delta_k : F^k_\mathrm{alt}(Y,\mathbb R) \rightarrow F^{k+1}_\mathrm{alt}(Y,\mathbb R)$ by the usual formula $$(\delta_k f)(y_0,\ldots,y_{k+1})=\sum_{i=0}^{k+1} (-1)^i f(y_0,\ldots, \hat y_i,\ldots, y_{k+1}).$$ The coboundary operator restricts to the complex $F^*_\mathrm{alt}(Y,\mathbb R)^L$. The cohomology $H^*(Y;L,\mathbb R)$ is defined as the cohomology of this complex. Define $F^*_{\mathrm{alt},b}(Y,\mathbb R)$ as the subspace of $F^*_\mathrm{alt}(Y,\mathbb R)$ consisting of bounded alternating functions.
Clearly the coboundary operator restricts to the complex $F^*_{\mathrm{alt},b}(Y,\mathbb R)^L$ and so it defines a cohomology, denoted by $H^*_b(Y;L,\mathbb R)$. In particular, for a manifold $M$, the cohomology $H^*(\widetilde M;\pi_1(M),\mathbb R)$ is actually isomorphic to the group cohomology $H^*(\pi_1(M),\mathbb R)$ and $H^*_b(\widetilde M;\pi_1(M), \mathbb R)$ is isomorphic to the bounded cohomology $H^*_b(\pi_1(M),\mathbb R)$. \begin{remark}\label{remark1} Let $L$ and $L'$ be groups acting continuously on topological spaces $Y$ and $Y'$, respectively. Given a homomorphism $\rho :L \rightarrow L'$, any $\rho$-equivariant continuous map $P:Y \rightarrow Y'$ defines a chain map $$P^* :F^*_\mathrm{alt}(Y',\mathbb R)^{L'}\rightarrow F^*_\mathrm{alt}(Y,\mathbb R)^L.$$ Thus it gives a morphism in cohomology. Let $Q:Y \rightarrow Y'$ be another $\rho$-equivariant map. For each $k>0$, one may define $$H_k (y_0,\ldots,y_k)=\sum_{i=0}^{k} (-1)^i (P(y_0),\ldots,P(y_i),Q(y_i),\ldots, Q(y_k)).$$ Then by a straightforward computation, $$(\partial_{k+1}H_k + H_{k-1}\partial_k)(y_0,\ldots,y_k)=(Q(y_0),\ldots,Q(y_k))-(P(y_0),\ldots, P(y_k)).$$ It follows from the above identity that for any cocycle $f \in F^k_\mathrm{alt}(Y',\mathbb R)^{L'}$, $$ (Q^*f - P^*f)(y_0,\ldots,y_k)=\delta_{k-1} (f \circ H_{k-1})(y_0,\ldots,y_k).$$ By this standard argument in cohomology theory, one could expect that $P$ and $Q$ induce the same morphism in cohomology. However, since $f \circ H_{k-1}$ may not be alternating, $P$ and $Q$ may not induce the same morphism in cohomology. \end{remark} Recall that $\Theta :\mathcal{X}^{n+1} \rightarrow \mathbb R$ is a $G$-invariant continuous bounded alternating cocycle; hence it yields a bounded cohomology class $[\Theta]_b \in H^n_b(\mathcal{X};G,\mathbb R).$ Let $\overline \mathcal{X}$ be the compactification of $\mathcal{X}$ obtained by adding the ideal boundary $\partial \mathcal{X}$. Extending the $G$-action on $\mathcal{X}$ to $\overline{\mathcal{X}}$, we can define a cohomology $H^*(\overline{\mathcal{X}};G,\mathbb R)$ and a bounded cohomology $H^*_b(\overline{\mathcal{X}};G,\mathbb R)$. In the rank $1$ case, since the geodesic simplex is well-defined for any $(n+1)$-tuple of points of $\overline{\mathcal X}$, the cocycle $\Theta$ can be extended to a $G$-invariant alternating bounded cocycle $\overline \Theta :\overline{\mathcal X}^{n+1} \rightarrow \mathbb R$. Hence $\overline \Theta$ determines a cohomology class $[\overline \Theta] \in H^n(\overline \mathcal{X};G,\mathbb R)$ and a bounded cohomology class $[\overline \Theta]_b \in H^n_b(\overline \mathcal{X};G,\mathbb R)$. Let $\widehat D:\widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$ be a $\rho$-equivariant continuous map whose restriction to $\widetilde{M}$ is a $\rho$-equivariant continuous map from $\widetilde M$ to $\mathcal{X}$. Throughout the paper, we consider only equivariant maps of this kind. Denote by $D : \widetilde M \rightarrow \mathcal X$ the restriction of $\widehat D$ to $\widetilde M$. Then $\widehat D$ induces a homomorphism in cohomology, $$\widehat D^* : H^n(\overline \mathcal{X};G,\mathbb R) \rightarrow H^n(\widehat{\widetilde{M}};\pi_1(M),\mathbb R).$$ Note that the action of $\pi_1(M)$ on $\widehat{\widetilde M}$ is not free and hence $H^*(\widehat{\widetilde{M}};\pi_1(M),\mathbb R)$ may not be isomorphic to $H^*(\widehat M,\mathbb R)$. Let $H^*_{simp}(\widehat M,\mathbb R)$ be the simplicial cohomology induced from a simplicial structure on $\widehat M$.
Then there is a natural restriction map $H^*(\widehat{\widetilde M};\pi_1(M),\mathbb R) \rightarrow H^*_{simp}(\widehat M,\mathbb R) \cong H^*(\widehat M,\mathbb R)$. Thus we regard the cohomology class $\widehat D^*[\overline \Theta]$ as a cohomology class in $H^n(\widehat M,\mathbb R)$. Let $[\widehat M]$ be the fundamental class in $H_n(\widehat M,\mathbb R)\cong \mathbb R$. \begin{definition}[{\bf D4}] Let $D:\widetilde M \rightarrow \mathcal{X}$ be a $\rho$-equivariant continuous map which extends to a $\rho$-equivariant map $\widehat D : \widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$. Then we define an invariant $\mathrm{Vol}_4(\rho,D)$ by $$\mathrm{Vol}_4(\rho,D)=\langle \widehat D^*[\overline{\Theta}], [\widehat M] \rangle.$$ \end{definition} As observed before, $\widehat D^*[\overline{\Theta}]$ may depend on the choice of $\rho$-equivariant map. However, it turns out that the value $\mathrm{Vol}_4(\rho,D)$ is independent of the choice of $\rho$-equivariant continuous map, as follows. \begin{proposition}\label{prop:1} Let $\rho :\pi_1(M) \rightarrow G$ be a representation. Then $$\mathrm{Vol}_2(\rho)=\mathrm{Vol}_4(\rho,D).$$ \end{proposition} \begin{proof} Recalling that the continuous bounded cohomology $H^*_{c,b}(G,\mathbb R)$ can be computed isomorphically from the complex $C^*_{c,b}(\mathcal X,\mathbb R)_\mathrm{alt}$, there is a natural inclusion $C^*_{c,b}(\mathcal X,\mathbb R)_\mathrm{alt} \subset F^*_{\mathrm{alt},b}(\mathcal X,\mathbb R)$. Denote the homomorphism in cohomology induced from the inclusion by $i_G : H^k_{c,b}(G,\mathbb R)\rightarrow H^k_b(\mathcal{X};G,\mathbb R)$. Clearly, $i_G([\Theta]_{c,b})=[\Theta]_b$. The bounded cohomology $H^*_b(\pi_1(M),\mathbb R)$ is obtained as the cohomology of the complex $C^*_b(\widetilde M,\mathbb R)_\mathrm{alt}^{\pi_1(M)}$. Since $C^*_b(\widetilde M,\mathbb R)_\mathrm{alt}= F^*_{\mathrm{alt},b}(\widetilde M,\mathbb R)$, the induced map $i_M : H^k_b(\pi_1(M),\mathbb R) \rightarrow H^k_b(\widetilde M;\pi_1(M),\mathbb R)$ is the identity map. Let $\widehat D : \widehat{\widetilde M}\rightarrow \overline{\mathcal X}$ be a $\rho$-equivariant map which maps $\widetilde M$ to $\mathcal X$. Then consider the following commutative diagram: $$ \xymatrixcolsep{4pc}\xymatrix{ H^n(\overline \mathcal{X};G,\mathbb R) \ar[r]^-{\widehat D^*} & H^n(\widehat{\widetilde{M}};\pi_1(M),\mathbb R) \ar[rd]^-{\pi^*} \\ H^n_b(\overline{\mathcal{X}};G,\mathbb R) \ar[r]^-{\widehat D^*_b} \ar[d]^-{res_\mathcal{X}} \ar[u]_-{\bar c} & H^n_b(\widehat{\widetilde{M}};\pi_1(M),\mathbb R) \ar[d]^-{res_M} \ar[rd]^-{\pi^*_b} \ar[u]_-{\hat c} & H^n(\overline M,\partial \overline M,\mathbb R)\\ H^n_b(\mathcal{X};G,\mathbb R) \ar[r]^-{D_b^*} & H^n_b(\widetilde M;\pi_1(M),\mathbb R) & H^n_b(\overline M,\partial \overline M,\mathbb R) \ar[l]_-{i^*_b} \ar[u]_-{c} \\ H^n_{c,b}(G,\mathbb R) \ar[u]_-{i_G} \ar[r]^-{\rho^*_b} & H^n_b(\pi_1(M),\mathbb R) \ar[u]_-{i_M} }$$ where $\pi : \widetilde{\overline M} \rightarrow \widehat{\widetilde M}$ is the collapsing map. Note that the map $\rho^*_b$ at the bottom of the diagram is actually induced from the restriction map $D: \widetilde M \rightarrow \mathcal{X}$. However, it does not depend on the choice of equivariant map but only on the homomorphism $\rho$. In other words, any continuous equivariant map from $\widetilde M$ to $\mathcal{X}$ gives rise to the same map $\rho^*_b: H^*_{c,b}(G,\mathbb R) \rightarrow H^*_b(\pi_1(M),\mathbb R)$. For this reason, we denote it by $\rho^*_b$ instead of $D^*_{c,b}$.
Note that $\pi$ induces a map $\pi^* : F^*_\mathrm{alt}(\widehat{\widetilde M},\mathbb R) \rightarrow F^*_\mathrm{alt}(\widetilde{\overline M},\mathbb R)$. It follows from the alternating property that the image of $\pi^*$ is contained in $C^*(\overline M,\partial \overline M,\mathbb R)$. Hence the map $\pi^* : H^n(\widehat{\widetilde M};\pi_1(M),\mathbb R) \rightarrow H^n(\overline M,\partial \overline M,\mathbb R)$ makes sense. One can understand $\pi^*_b : H^n_b(\widehat{\widetilde M};\pi_1(M),\mathbb R) \rightarrow H^n_b(\overline M,\partial \overline M,\mathbb R)$ in a similar way. Noting that $\bar c([\overline \Theta]_b)=[\overline \Theta]$ and $res_\mathcal{X}([\overline \Theta]_b)=[\Theta]_b$, it follows from the above commutative diagram that {\setlength\arraycolsep{2pt} \begin{eqnarray*} ((i^*_b)^{-1}\circ i_M \circ \rho_b^*)[\Theta]_{c,b} &=& ((i^*_b)^{-1}\circ D^*_b \circ i_G) [\Theta]_{c,b} \\ &=& ((i^*_b)^{-1}\circ D^*_b \circ res_\mathcal{X}) [\overline \Theta]_b\\ &=& ((i^*_b)^{-1}\circ res_M \circ \widehat D^*_b) [\overline \Theta]_b \\ &=& (\pi_b^* \circ \widehat D^*_b)[\overline \Theta]_b. \end{eqnarray*}} Hence {\setlength\arraycolsep{2pt} \begin{eqnarray*} \mathrm{Vol}_2(\rho)&=& \langle (c \circ (i^*_b)^{-1}\circ i_M \circ \rho_b^*)[\Theta]_{c,b}, [\overline M, \partial \overline M] \rangle \\ &=& \langle (c \circ \pi_b^* \circ \widehat D^*_b)[\overline \Theta]_b, [\overline M, \partial \overline M] \rangle \\ &=& \langle (\pi^* \circ \widehat D^* \circ \bar c)[\overline \Theta]_b, [\overline M, \partial \overline M] \rangle \\ &=& \langle (\pi^* \circ \widehat D^*)[\overline \Theta], [\overline M, \partial \overline M] \rangle \\ &=& \langle \widehat D^*[\overline \Theta], \pi_* [\overline M, \partial \overline M] \rangle \\ &=& \langle \widehat D^*[\overline \Theta], [\widehat M] \rangle \\ &=& \mathrm{Vol}_4(\rho,D). \end{eqnarray*}} This completes the proof. \end{proof} Proposition \ref{prop:1} implies that the value $\mathrm{Vol}_4(\rho,D)$ does not depend on the choice of continuous equivariant map. Hence from now on we use the notation $\mathrm{Vol}_4(\rho):=\mathrm{Vol}_4(\rho,D)$. Furthermore, Proposition \ref{prop:1} allows us to interpret the invariant $\mathrm{Vol}_2(\rho)$ in terms of a pseudo-developing map via $\mathrm{Vol}_4(\rho)$ in the next section. Note that a pseudo-developing map for $\rho$ is a specific kind of $\rho$-equivariant continuous map $\widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$. \section{Pseudo-developing map and Definition {\bf D1}}\label{sec:pseudo} Dunfield \cite{Du99} introduced the notion of a pseudo-developing map in order to define the volume of representations $\rho : \pi_1(M)\rightarrow \mathrm{SO}(3,1)$ for a noncompact complete hyperbolic $3$-manifold $M$ of finite volume. We start by recalling the definition of a pseudo-developing map. \begin{definition}[Cone map] Let $\mathcal A$ be a set, $t_0 \in \mathbb R$, and let $Cone(\mathcal A)$ be the cone obtained from $\mathcal A\times [t_0,\infty]$ by collapsing $\mathcal A \times \{\infty\}$ to a point, called $\infty$.
A map $\widehat D:Cone(\mathcal A) \rightarrow \overline \mathcal{X}$ is a \emph{cone map} if $\widehat D (Cone(\mathcal A))\cap \partial \mathcal{X} =\{\widehat D(\infty)\}$ and, for all $a \in \mathcal A$, the map $\widehat D|_{a\times [t_0,\infty]}$ is either constant equal to $\widehat D(\infty)$ or the geodesic ray from $\widehat D(a,t_0)$ to $\widehat D(\infty)$, parametrized in such a way that the parameter $(t-t_0)$, $t\in [t_0,\infty]$, is the arc length. \end{definition} For each ideal point $v$ of $M$, fix a product structure $T_v \times [0,\infty)$ on the end relative to $v$. The fixed product structure induces a cone structure on a neighborhood of $v$ in $\widehat M$, which is obtained from $T_v \times [0,\infty]$ by collapsing $T_v \times \{\infty\}$ to the point $v$. We lift such structures to the universal cover. Let $\tilde v$ be an ideal point of $\widetilde M$ that projects to the ideal point $v$. Denote by $E_{\tilde v}$ the cone at $\tilde v$ that is homeomorphic to $P_{\tilde v} \times [0,\infty]$, where $P_{\tilde v}$ covers $T_v$ and $P_{\tilde v} \times \{\infty\}$ is collapsed to $\tilde v$. \begin{definition}[Pseudo-developing map]\label{def:3.2} Let $\rho : \pi_1(M) \rightarrow G$ be a representation. A \emph{pseudo-developing map} for $\rho$ is a piecewise smooth $\rho$-equivariant map $D : \widetilde M \rightarrow \mathcal{X}$. Moreover, $D$ is required to extend to a continuous map $\widehat D: \widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$ with the property that there exists $t \in \mathbb R^+$ such that for each end $E_{\tilde v}=P_{\tilde v} \times [0,\infty]$ of $\widehat{\widetilde{M}}$, the restriction of $\widehat D$ to $P_{\tilde v} \times [t,\infty]$ is a cone map. \end{definition} \begin{definition} A \emph{triangulation} of $\widehat M$ is an identification of $\widehat M$ with a complex obtained by gluing together simplices with simplicial attaching maps. The complex is not required to be simplicial, but it is required that open simplices embed. \end{definition} Note that a triangulation of $\widehat M$ always exists and it lifts uniquely to a triangulation of $\widehat{\widetilde M}$. Given a triangulation of $\widehat M$, one can define the straightening of pseudo-developing maps as follows. \begin{definition}[Straightening map] Let $\widehat M$ be triangulated. Let $\rho:\pi_1(M)\rightarrow G$ be a representation and $D :\widetilde{M} \rightarrow \mathcal{X}$ a pseudo-developing map for $\rho$. A straightening of $D$ is a continuous piecewise smooth $\rho$-equivariant map $Str( D):\widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$ such that \begin{itemize} \item for each simplex $\sigma$ of the triangulation, the restriction of $Str( D)$ to $\widetilde \sigma$ is $Str( D \circ \widetilde \sigma)$, \item for each end $E_{\tilde v}=P_{\tilde v}\times [0,\infty]$ there exists $t\in \mathbb R$ such that $Str( D)$ restricted to $P_{\tilde v} \times [t,\infty]$ is a cone map, \end{itemize} where $\widetilde \sigma$ is a lift of $\sigma$ to $\widehat{\widetilde M}$ and $Str(D \circ \widetilde \sigma)$ is the geodesic straightening of $ D\circ \widetilde \sigma : \Delta^n \rightarrow \overline \mathcal{X}$. \end{definition} Note that any straightening of a pseudo-developing map is also a pseudo-developing map. \begin{lemma} Let $\widehat M$ be triangulated. Let $\rho :\pi_1(M) \rightarrow G$ be a representation and $ D:\widetilde{M} \rightarrow \mathcal{X}$ a pseudo-developing map for $\rho$.
Then a straightening $Str(D)$ of $D$ exists and, furthermore, $Str(D) : \widehat{\widetilde{M}} \rightarrow \overline \mathcal{X}$ is always equivariantly homotopic to $\widehat D$ via a homotopy that fixes the vertices of the triangulation. \end{lemma} \begin{proof} First, set $Str(D)(V)=\widehat D(V)$ for every vertex $V$ of the triangulation. Then extend $Str(D)$ to a map which is piecewise straight with respect to the triangulation. This is always possible because $\mathcal{X}$ is contractible. Note that $\widehat D$ and $Str(D)$ agree on the ideal vertices of $\widehat{\widetilde{M}}$ and are equivariantly homotopic via the straight line homotopy between them. Hence it can be easily seen that the extension is a straightening of $D$. \end{proof} For any pseudo-developing map $D:{\widetilde{M}} \rightarrow \mathcal{X}$ for $\rho$, $$\int_M D^*\omega_\mathcal{X}$$ is always finite. This can be seen as follows. We stick to the notation used in Definition \ref{def:3.2}. We may assume that the restriction of $\widehat D$ to each $E_{\tilde v}=P_{\tilde v} \times [0,\infty]$ is a cone map. Choose a fundamental domain $F_0$ of $T_{v}$ in $P_{\tilde v}$. Then, there exists $t\in \mathbb R^+$ such that $$ \left|\int_{T_v \times [t,\infty)} D^*\omega_\mathcal{X} \right| =\mathrm{Vol}_n (\mathrm{Cone}(D( F_0 \times \{t\}))) \leq \frac{1}{n-1}\mathrm{Vol}_{n-1}(D(F_0\times \{t \})),$$ where $\mathrm{Vol}_{n-1}$ denotes the $(n-1)$-dimensional volume. The last inequality holds for any Hadamard manifold with sectional curvature at most $-1$; see \cite[Section 1.2]{Gro82}. Hence the integral of $D^*\omega_\mathcal{X}$ over $M$ is finite. \begin{definition}[{\bf D1}] Let $D:{\widetilde{M}} \rightarrow \mathcal{X}$ be a pseudo-developing map for a representation $\rho : \pi_1(M) \rightarrow G$. Define an invariant $\mathrm{Vol}_1(\rho,D)$ by $$\mathrm{Vol}_1(\rho,D)=\int_M D^*\omega_\mathcal{X}. $$ \end{definition} In the case that $G=\mathrm{SO}(n,1)$, Francaviglia \cite{Fr04} showed that the definition {\bf D1} does not depend on the choice of pseudo-developing map. We give a self-contained proof of this in the rank $1$ case. \begin{proposition}\label{prop:14equi} Let $\rho : \pi_1(M) \rightarrow G$ be a representation. Then for any pseudo-developing map $D :\widetilde M \rightarrow \mathcal{X}$, $$\mathrm{Vol}_1(\rho,D)=\mathrm{Vol}_4(\rho).$$ Thus, $\mathrm{Vol}_1(\rho,D)$ does not depend on the choice of pseudo-developing map. \end{proposition} \begin{proof} Let $\mathcal T$ be a triangulation of $\widehat M$ with simplices $\sigma_1, \ldots, \sigma_N$. Then the triangulation gives rise to a fundamental cycle $\sum_{i=1}^N \sigma_i$ of $\widehat M$. Let $Str(D)$ be a straightening of $D$ with respect to the triangulation $\mathcal T$. Since $Str(D)$ is a $\rho$-equivariant continuous map, we have \begin{eqnarray*} \mathrm{Vol}_4(\rho):=\mathrm{Vol}_4(\rho,D)&=&\langle Str(D)^*[\overline \Theta], [\widehat M] \rangle = \langle \overline \Theta, \sum_{i=1}^N Str(\widehat D(\sigma_i))\rangle \\ &=& \sum_{i=1}^N \int_{Str(\widehat D(\sigma_i))} \omega_\mathcal{X} = \int_M Str(D)^*\omega_\mathcal{X}. \end{eqnarray*} Since both $Str(D)$ and $\widehat D$ are pseudo-developing maps for $\rho$ that agree on the ideal points of $\widehat{\widetilde{M}}$, it can be proved, using the same arguments as in the proof of \cite[Lemma 2.5.1]{Du99}, that $$\int_M Str(D)^*\omega_\mathcal{X} =\int_M D^*\omega_\mathcal{X}=\mathrm{Vol}_1(\rho,D).$$ Finally, we obtain the desired equality.
\end{proof} \begin{remark} While {\bf D1} is defined only for pseudo-developing maps, the definition {\bf D4} works with any equivariant continuous map. This is one advantage of the definition {\bf D4}. By Proposition \ref{prop:14equi}, the notation $\mathrm{Vol}_1(\rho):=\mathrm{Vol}_1(\rho,D)$ makes sense. \end{remark} \section{Lipschitz simplicial volume and Definition {\bf D3}}\label{sec:lipschitz} In this section, $M$ is assumed to be a Riemannian manifold with finite Lipschitz simplicial volume. Gromov \cite[Section 4.4]{Gro82} introduced the Lipschitz simplicial volume of Riemannian manifolds. One can define the Lipschitz constant of each singular simplex in $M$ by endowing the standard simplices with the Euclidean metric. Then the Lipschitz constant of a locally finite chain $c$ of $M$ is defined as the supremum of the Lipschitz constants of all singular simplices occurring in $c$. The Lipschitz simplicial volume of $M$ is defined as the infimum of the $\ell^1$-norms of all locally finite fundamental cycles with finite Lipschitz constant. Let $[M]_\mathrm{Lip}^{\ell^1}$ be the set of all locally finite fundamental cycles of $M$ with finite $\ell^1$-seminorm and finite Lipschitz constant. If $[M]_\mathrm{Lip}^{\ell^1}=\emptyset$, the Lipschitz simplicial volume of $M$ is infinite. In the case that $[M]_\mathrm{Lip}^{\ell^1} \neq \emptyset$, S. Kim and I. Kim \cite{KK14} give a new definition of the volume of representations as follows. Given a representation $\rho : \pi_1(M) \rightarrow G$, $\rho$ induces a canonical pullback map $\rho^*_b : H^*_{c,b}(G,\mathbb{R}) \rightarrow H^*_b(\pi_1(M),\mathbb{R})\cong H^*_b(M,\mathbb R)$ in continuous bounded cohomology. Hence for any continuous bounded volume class $\omega_b \in H^n_{c,b}(G,\mathbb R)$, we obtain a bounded cohomology class $\rho^*_b(\omega_b)\in H^n_b(M,\mathbb{R})$. Then, the bounded cohomology class $\rho^*_b(\omega_b)$ can be evaluated on $\ell^1$-homology classes in $H^{\ell^1}_n(M,\mathbb{R})$ by the Kronecker product $$ \langle\cdot ,\cdot \rangle : H^*_b(M,\mathbb{R}) \otimes H^{\ell^1}_*(M,\mathbb{R}) \rightarrow \mathbb{R}.$$ For more details, see \cite{KK14}. \begin{definition}[{\bf D3}] We define an invariant $\mathrm{Vol}_3(\rho)$ of $\rho$ by $$\mathrm{Vol}_3(\rho) = \inf \langle \rho^*_b(\omega_b), \alpha \rangle$$ where the infimum is taken over all $\alpha\in [M]^{\ell^1}_\mathrm{Lip}$ and all $\omega_b \in H^n_{c,b}(G,\mathbb R)$ with $c(\omega_b)=\omega_{\mathcal X}$. \end{definition} One advantage of {\bf D3} is that it does not require the isomorphism $H^n_b(\overline M,\partial \overline M,\mathbb R) \rightarrow H^n_b(\overline M,\mathbb R)$. When $M$ admits the isomorphism above, we will verify that the definition {\bf D3} is equivalent to the other definitions of the volume of representations. \begin{lemma}\label{lem:indep} Suppose that $M$ is a noncompact, connected, orientable, aspherical, tame Riemannian manifold with finite Lipschitz simplicial volume and each end of $M$ has amenable fundamental group. Then for any $\alpha \in [M]^{\ell^1}_\mathrm{Lip}$ and any continuous bounded volume class $\omega_b$, $$\langle \rho_b^* (\omega_b), \alpha \rangle = \langle (c\circ (i^*_b)^{-1} \circ \rho^*_b)(\omega_b), [\overline M,\partial \overline M] \rangle.$$ \end{lemma} \begin{proof} When $M$ is a $2$-dimensional manifold, the proof is given in \cite{KK14}. Actually, the proof in the general case is the same. We sketch the proof here for the reader's convenience. Let $K$ be a compact core of $M$.
Note that $K$ is a compact submanifold with boundary that is a deformation retract of $M$. Consider the following commutative diagram, $$ \xymatrixcolsep{2pc}\xymatrix{ C^*_b(M,\mathbb{R}) & C^*_b(\overline M,\mathbb{R}) \ar[l]_-{j_b^*} & C^*_b(\overline M,\partial \overline M,\mathbb{R}) \ar[l]_-{i^*_b} \\ & C^*_b(\overline M, \overline M-K, \mathbb{R}) \ar[u]^-{l_b^*} \ar[ru]_-{q_b^*} & }$$ where every map in the above diagram is induced from the canonical inclusion. Every map in the diagram induces an isomorphism in bounded cohomology in degrees $*\geq2$. Thus, there exists a cocycle $z_b \in C^n_b(\overline M,\overline M-K,\mathbb{R})$ such that $l^*_b([z_b]) = \rho^*_b(\omega_b)$. Let $c=\sum_{i=1}^\infty a_i \sigma_i$ be a locally finite fundamental $\ell^1$-cycle with finite Lipschitz constant representing $\alpha \in [M]^{\ell^1}_\mathrm{Lip}$. Then, we have $$\langle \rho^*_b(\omega_b),\alpha \rangle = \langle l_b^* ([z_b]), \alpha \rangle = \langle z_b, c \rangle.$$ Since $z_b$ vanishes on simplices with image contained in $\overline M-K$, we have $\langle z_b, c \rangle =\langle z_b, c|_K \rangle$, where $c|_K=\sum_{\mathrm{im}\sigma_i \cap K \neq \emptyset} a_i \sigma_i$. It is a standard fact that $c|_K$ represents the relative fundamental class $[\overline M,\overline M-K]$ in $H_n(\overline M, \overline M-K,\mathbb{R})$ (see \cite[Theorem 5.3]{Loh07}). On the other hand, we have \begin{eqnarray*} \langle (c \circ (i^*_b)^{-1} \circ \rho_b^*)(\omega_b), [\overline M,\partial \overline M] \rangle &=& \langle (c\circ q^*_b)([z_b]), [\overline M,\partial \overline M] \rangle \\ &=& \langle [z_b], q_*[\overline M,\partial \overline M] \rangle \\ &=& \langle [z_b], [\overline M,\overline M-K] \rangle=\langle z_b,c|_K \rangle . \end{eqnarray*} Therefore, we finally get the desired identity. \end{proof} By Lemma \ref{lem:indep} we can reformulate the definition {\bf D3} as follows: $$ \mathrm{Vol}_3(\rho)= \inf_{\omega_b} \langle (c\circ (i^*_b)^{-1} \circ \rho_b^*)(\omega_b), [\overline M,\partial \overline M] \rangle,$$ where the infimum is taken over all continuous bounded volume classes. Noting that $[\Theta]_{c,b} \in H^n_{c,b}(G,\mathbb R)$ is a continuous bounded volume class, it is clear that $$\mathrm{Vol}_3(\rho)\leq \mathrm{Vol}_2(\rho).$$ It is conjecturally true that the comparison map $H^n_{c,b}(G,\mathbb R) \rightarrow H^n_c(G,\mathbb R)$ is an isomorphism for any connected semisimple Lie group $G$ with finite center. Hence, conjecturally, $\mathrm{Vol}_2(\rho)=\mathrm{Vol}_3(\rho)$. Although the conjecture has not been proved, we will give a proof of $\mathrm{Vol}_2(\rho)=\mathrm{Vol}_3(\rho)$ by using the definition {\bf D4}. \begin{lemma}\label{lem:extend} Let $\omega_b \in H^n_{c,b}(G,\mathbb R)$ be a continuous bounded volume class. Let $f_b :\mathcal X^{n+1} \rightarrow \mathbb R$ be a continuous bounded alternating $G$-invariant cocycle representing $\omega_b$. Then $f_b$ extends to a bounded alternating $G$-invariant cocycle $\bar f_b : \overline{\mathcal X}^{n+1} \rightarrow \mathbb R$. Furthermore, $\bar f_b$ is uniformly continuous on $\mathcal{X}^n \times \{\xi\}$ for any $\xi \in \partial \mathcal{X}$. \end{lemma} \begin{proof} For any $(\bar x_0, \ldots, \bar x_n) \in \overline{\mathcal X}^{n+1}$, define $$\bar f_b(\bar x_0,\ldots, \bar x_n) = \lim_{t\rightarrow \infty} f_b(c_0(t),\ldots,c_n(t)),$$ where each $c_i(t)$ is a geodesic ray toward $\bar x_i$.
Here, for $x \in \mathcal{X}$, we say that $c : [0,\infty) \rightarrow \mathcal{X}$ is a geodesic ray toward $x$ if there exists $t\in [0,\infty)$ such that the restriction $c|_{[0,t]}$ of $c$ to $[0,t]$ is a geodesic with $c(t)=x$ and $c|_{[t,\infty)}$ is constant equal to $x$. Then it is clear that $\bar f_b(x_0,\ldots,x_n)=f_b(x_0,\ldots, x_n)$ for $(x_0,\ldots,x_n) \in \mathcal X^{n+1}$. To see the well-definedness of $\bar f_b$, we need to show that for other geodesic rays $c_i'(t)$ toward $\bar x_i$, \begin{eqnarray}\label{eqn:welldefine} \lim_{t\rightarrow \infty} f_b(c_0(t),\ldots,c_n(t))=\lim_{t\rightarrow \infty} f_b(c_0'(t),\ldots,c_n'(t)).\end{eqnarray} Note that the limit always exists because $f_b$ is bounded. In the rank $1$ case, the distance between two geodesic rays with the same endpoint decays exponentially to $0$ as they go to the endpoint. Moreover, since $f_b$ is $G$-invariant and $G$ acts transitively on $\mathcal{X}$, $f_b$ is uniformly continuous on $\mathcal{X}^{n+1}$. Thus, for any $\epsilon>0$ there exists some number $T>0$ such that $$ | f_b(c_0(t),\ldots,c_n(t)) - f_b(c_0'(t),\ldots,c_n'(t)) | <\epsilon$$ for all $t>T$. This implies (\ref{eqn:welldefine}) and hence $\bar f_b$ is well-defined. The alternating property of $\bar f_b$ is inherited from $f_b$: due to the alternating property of $f_b$, we have \begin{eqnarray*} \bar f_b(\bar x_0, \ldots, \bar x_i,\ldots, \bar x_j,\ldots, \bar x_n) &=& \lim_{t\rightarrow \infty} f_b(c_0(t),\ldots,c_i(t),\ldots,c_j(t),\ldots, c_n(t)) \\ &=&\lim_{t\rightarrow \infty} -f_b(c_0(t),\ldots,c_j(t),\ldots,c_i(t),\ldots, c_n(t))\\ &=&-\bar f_b(\bar x_0,\ldots, \bar x_j,\ldots, \bar x_i,\ldots, \bar x_n). \end{eqnarray*} Therefore we conclude that $\bar f_b$ is alternating. The boundedness and $G$-invariance of $\bar f_b$ immediately follow from the boundedness and $G$-invariance of $f_b$. Furthermore, it is easy to check by a direct computation that $\bar f_b$ is a cocycle. Now it remains to prove that $\bar f_b$ is uniformly continuous on $\mathcal{X}^n\times \{\xi\}$. It is obvious that $\bar f_b$ is continuous on $\mathcal{X}^n\times \{\xi\}$. Noting that the parabolic subgroup of $G$ stabilizing $\xi$ acts transitively on $\mathcal{X}$, it can be easily seen that $\bar f_b$ is uniformly continuous on $\mathcal{X}^n\times \{\xi\}$. \end{proof} The existence of $\bar f_b$ allows us to reformulate $\mathrm{Vol}_3$ in terms of $\mathrm{Vol}_4$. Following the proof of Proposition \ref{prop:1}, we get \begin{eqnarray}\label{eqn:A} \langle (c\circ (i^*_b)^{-1} \circ \rho_b^*)(\omega_b), [\overline M,\partial \overline M] \rangle = \langle \widehat D^* [\bar f_b], [\widehat M] \rangle. \end{eqnarray} The last term $\langle \widehat D^* [\bar f_b], [\widehat M] \rangle$ above is computed by $\langle \widehat D^*\bar f_b, \widehat c \rangle$ for any equivariant map $\widehat D$ and any fundamental cycle $\widehat c$ of $\widehat M$. By choosing a suitable equivariant map and fundamental cycle, we will show that $\langle \widehat D^* [\bar f_b], [\widehat M] \rangle$ does not depend on the choice of continuous bounded volume class. \begin{proposition}\label{lem:indepwb} Let $\omega_b$ and $\omega_b'$ be continuous bounded volume classes. Let $\bar f_b$ and $\bar f_b'$ be the bounded alternating $G$-invariant cocycles in $F^n_{\mathrm{alt},b}(\overline{\mathcal X},\mathbb R)^G$ associated with $\omega_b$ and $\omega_b'$, respectively, as in Lemma \ref{lem:extend}.
Then $$\langle \widehat D^* [\bar f_b], [\widehat M] \rangle=\langle \widehat D^* [\bar f_b'], [\widehat M] \rangle.$$ \end{proposition} \begin{proof} It suffices to prove that for some $\rho$-equivariant map $\widehat D :\widehat{\widetilde M} \rightarrow \overline \mathcal{X}$ and some fundamental cycle $\widehat c$ of $\widehat M$, $$\langle \widehat D^*\bar f_b, \widehat c\rangle = \langle \widehat D^*\bar f_b', \widehat c\rangle.$$ To show this, we will prove that for some sequence $(\widehat c_k)_{k\in \mathbb N}$ of fundamental cycles of $\widehat M$, $$\lim_{k\rightarrow \infty} \left( \langle \widehat D^*\bar f_b, \widehat c_k \rangle - \langle \widehat D^*\bar f_b', \widehat c_k\rangle \right)=0.$$ Let $v_1,\ldots,v_s$ be the ideal points of $M$. As in Section \ref{sec:pseudo}, fix a product structure $T_{v_i} \times [0,\infty)$ on the end relative to $v_i$ for each $i=1,\ldots,s$ and then lift such structures to the universal cover. We stick to the notation used in Section \ref{sec:pseudo}. Set $$M_k = M-\cup_{i=1}^s T_{v_i} \times (k,\infty).$$ Then $(M_k)_{k\in \mathbb N}$ is an exhausting sequence of compact cores of $M$. The boundary $\partial M_k$ of $M_k$ consists of $\cup_{i=1}^s T_{v_i} \times \{k\}$. Let $\mathcal T_0$ be a triangulation of $M_0$. Then we extend it to a triangulation of $\widehat M$ as follows. First note that $\mathcal T_0$ induces a triangulation on each $T_{v_i}$. Let $\tau$ be an $(n-1)$-simplex of the induced triangulation on $T_{v_i}$ for some $i \in \{1,\ldots,s\}$. Then we attach $\pi(\tau \times [0,\infty])$ to $T_{v_i}\times\{0\}$ along $\tau \times \{0\}$, where $\pi :\overline M \rightarrow \widehat M$ is the collapsing map. Since $\pi$ is an embedding on $\tau \times [0,\infty)$ and $\pi$ maps $\tau \times \{\infty\}$ to the ideal point $v_i$, it can be easily seen that $cone(\tau):=\pi(\tau \times [0,\infty])$ is an $n$-simplex. Hence we can obtain a triangulation of $\widehat M$ by attaching each $cone(\tau)$ to $\partial M_0$; denote it by $\widehat{\mathcal T_0}$. Next, we extend $\mathcal T_0$ to a triangulation of $M_k$. In fact, $M_k$ decomposes as follows: $$M_k = M_0 \cup \bigcup_{i=1}^s T_{v_i} \times [0,k].$$ Hence attach each $\tau \times [0,k]$ to $M_0$ along $\tau \times\{0\}$ and then triangulate $\tau \times [0,k]$ by using the prism operator \cite[Chapter 2.1]{Hatcher}. Via this process, we obtain a triangulation of $M_k$, denoted by $\mathcal T_k$. Note that $\mathcal T_0$ and $\mathcal T_k$ induce the same triangulation on each $T_{v_i}$. In addition, one can obtain a triangulation $\widehat{\mathcal T_k}$ of $\widehat M$ from $\mathcal T_k$ in the same way that $\widehat{\mathcal T_0}$ is obtained from $\mathcal T_0$ above. Let $c_k$ be the relative fundamental cycle of $(M_k,\partial M_k)$ induced from $\mathcal T_k$. Then it can be seen that $$\widehat c_k = c_k + (-1)^{n+1}cone(\partial c_k)$$ is the fundamental cycle of $\widehat M$ induced from $\widehat{\mathcal T_k}$. Any simplex occurring in $c_k$ is contained in $M_k$. Now we choose a pseudo-developing map $\widehat D :\widehat{\widetilde M} \rightarrow \overline{\mathcal X}$. Let $\tilde v_i$ be a lift of $v_i$ to $\widehat{\widetilde M}$. Let $P_{\tilde v_i} \times [0,\infty]$ be the cone structure of a neighborhood of $\tilde v_i$, where $P_{\tilde v_i} $ covers $T_{v_i}$ and $P_{\tilde v_i}\times \{\infty\}$ is just the ideal point $\tilde v_i$. We may assume that $\widehat D$ is a cone map on each $P_{\tilde v_i} \times [0,\infty]$.
Let $\tilde c_k$ be a lift of $c_k$ to a chain in $\widetilde M$ and $\widetilde{\partial c_k}$ be a lift of $\partial c_k$. Let $\tau \times \{0\}$ be an $(n-1)$-simplex in $T_{v_i} \times \{0\}$ occurring in $\partial c_0$ and $\tilde \tau$ be a lift of $\tau$ to $P_{\tilde v_i}$. Then $\tilde \tau \times \{k\}$ is a lift of the simplex $\tau \times \{k\}$ occurring in $\partial c_k$. Since $\widehat D$ is a cone map on $P_{\tilde v_i} \times [0,\infty]$, $\widehat D(\tilde \tau \times [0,\infty])$ is the geodesic cone over $D(\tilde \tau \times \{0\})$ with top point $\widehat D(\tilde v_i)$ in $\overline{\mathcal X}$. Hence the diameter of $D(\tilde \tau \times \{k\})$ decays exponentially to $0$ as $k \rightarrow \infty$ for each $\tau$. By a direct computation, we have \begin{eqnarray*} \langle \widehat D^*\bar f_b - \widehat D^*\bar f_b' , \widehat c_k\rangle &=& \langle \widehat D^*\bar f_b - \widehat D^*\bar f_b', \tilde c_k \rangle + (-1)^{n+1}\langle \widehat D^*\bar f_b - \widehat D^*\bar f_b', cone(\widetilde{\partial c_k}) \rangle \\ &=& \langle \bar f_b - \bar f_b', \widehat D_*(\tilde c_k) \rangle + (-1)^{n+1}\langle \bar f_b - \bar f_b', \widehat D_*(cone(\widetilde{\partial c_k}))\rangle \\ &=& \langle f_b - f_b', D_*(\tilde c_k) \rangle + (-1)^{n+1}\langle \bar f_b - \bar f_b', \widehat D_*(cone(\widetilde{\partial c_k}))\rangle. \end{eqnarray*} The last equality comes from the fact that $\widehat D_*(\tilde c_k)$ is a singular chain in $\mathcal{X}$. Since $f_b$ and $f_b'$ are continuous bounded alternating cocycles representing the continuous volume class $\omega_{\mathcal X} \in H^n_c(G,\mathbb R)$, there is a continuous alternating $G$-invariant function $\beta : \mathcal{X}^n \rightarrow \mathbb R$ such that $f_b -f_b' =\delta \beta$. Hence $$\langle f_b - f_b', D_*(\tilde c_k) \rangle = \langle \delta \beta, D_*(\tilde c_k) \rangle = \langle \beta, \partial D_*(\tilde c_k) \rangle = \langle \beta, D_*(\widetilde{\partial c_k}) \rangle.$$ As observed before, since the diameter of all simplices occurring in $D_*(\widetilde{\partial c_k})$ decays to $0$ as $k \rightarrow \infty$ and, moreover, $\beta$ is uniformly continuous on $\mathcal{X}^n$, we have $$\lim_{k\rightarrow \infty} \langle \beta, D_*(\widetilde{\partial c_k}) \rangle =0.$$ Note that $\widehat D(cone(\tilde \tau \times \{k\}))$ is the geodesic cone over $D(\tilde \tau \times \{k\})$ with top point $\widehat D(\tilde v_i)$. By Lemma \ref{lem:extend}, both $\bar f_b$ and $\bar f_b'$ are uniformly continuous on $\mathcal{X}^n \times \{\widehat D(\tilde v_i)\}$. Since the diameter of $D(\tilde \tau \times \{k\})$ decays to $0$ as $k\rightarrow \infty$, $$\lim_{k \rightarrow \infty} \langle \bar f_b, \widehat D_*(cone(\tilde \tau \times \{k\})) \rangle =\lim_{k \rightarrow \infty} \langle \bar f_b', \widehat D_*(cone(\tilde \tau \times \{k\})) \rangle = 0.$$ Applying this to each $\tau$, we can conclude that $$\lim_{k \rightarrow \infty} \langle \bar f_b,\widehat D_*(cone(\widetilde{\partial c_k})) \rangle =\lim_{k \rightarrow \infty} \langle \bar f_b', \widehat D_*(cone(\widetilde{\partial c_k}))\rangle =0.$$ In the end, it follows that $$\lim_{k\rightarrow \infty} \langle \widehat D^*\bar f_b - \widehat D^*\bar f_b' , \widehat c_k \rangle=0.$$ As we mentioned, the value on the left-hand side above does not depend on the choice of $\widehat c_k$. Thus we can conclude that $\langle \widehat D^*\bar f_b - \widehat D^*\bar f_b' , \widehat c_k \rangle=0$.
This implies that $\langle \widehat D^*\bar f_b, \widehat c \rangle = \langle \widehat D^*\bar f_b', \widehat c\rangle$ for any fundamental cycle $\widehat c$ of $\widehat M$, which completes the proof. \end{proof} Combining Proposition \ref{lem:indepwb} with (\ref{eqn:A}), Proposition \ref{prop:indepwb} immediately follows. \begin{proposition} The definitions {\bf D3} and {\bf D4} are equivalent. \end{proposition} \begin{proof} By Lemma \ref{lem:indep}, (\ref{eqn:A}), Proposition \ref{lem:indepwb} and Proposition \ref{prop:1}, we have \begin{eqnarray*} \mathrm{Vol}_3(\rho) &=& \inf \{ \langle \rho^*_b(\omega_b),\alpha \rangle \ | \ c(\omega_b)=\omega_{\mathcal{X}} \text{ and } \alpha\in [M]_\mathrm{Lip}^{\ell^1} \} \\ &=& \inf \{ \langle (c\circ (i^*_b)^{-1} \circ \rho^*_b)(\omega_b), [\overline M,\partial \overline M] \rangle \ | \ c(\omega_b)=\omega_\mathcal{X} \} \\ &=& \inf \{ \langle \widehat D^* [\bar f_b], [\widehat M] \rangle \ | \ c(\omega_b)=\omega_\mathcal{X} \} \\ &=& \langle \widehat D^* [\overline \Theta], [\widehat M] \rangle \\ &=& \mathrm{Vol}_4(\rho), \end{eqnarray*} which completes the proof. \end{proof}
\section{Introduction} In the past decade, with the widespread adoption of consumer-friendly and affordable hardware, Virtual Reality (VR) has gained a larger role in the Architecture, Engineering, and Construction (AEC) community. Studies suggest that immersive environments, which are not limited to visual immersion, enable better spatial understanding when compared to 2D or non-immersive 3D representations \cite{Schnabel_Kvan_2003,paes2017immersive}, enhance collaboration and team engagement among stakeholders \cite{bassanino2010impact,berg2017industry,Fernando2013}, and allow designers and researchers to conduct virtual building occupant studies \cite{kuliga2015virtual, adi2014using, Heydarian_Carneiro_Gerber_Becerik-Gerber_2015, Heydarian_Pantazis_Carneiro_Gerber_Becerik-Gerber_2015}. In this context, immersive visualization can be integrated in the design process as a tool that supports decision-making tasks and design modifications, and provides information on their resulting impact through building performance simulation \cite{caldas2019design}. In performance-oriented sustainable design, daylighting is considered a major driver of energy consumption and occupant well-being, particularly in large commercial buildings. The U.S. Energy Information Administration (EIA) \cite{center2020annual} estimates that the residential and commercial sectors combined use about 8\% of their total electricity consumption for lighting. In this domain, simulation tools incorporate visual and photometric metrics to study occupant experience and visual comfort. However, daylight assessment of a design or a building is not limited to quantitative metrics; other factors such as geometry and visual qualities play an extensive role in this process. Such properties have been a cornerstone of daylight research, with previous studies proposing tools for objective-driven daylight form-finding \cite{Caldas_Santos_2016,caldas2008generation,Caldas_Norford_2002}, merging the spatial and visual qualities that daylight can offer with numeric, goal-oriented generative design strategies. Although virtual immersive environments have been widely incorporated in various design and engineering tasks, some important limitations of the current state of the technology can result in critical drawbacks in design decision-making processes, particularly if the design process involves daylighting assessments and analysis. In daylighting design, the user depends heavily on visual feedback and rendered information; therefore, graphical and display limitations can lead to misleading visual representations provided by the system. As a result, it is vital that daylighting design VR tools identify and address this limitation. For daylight simulation and graphics rendering, ray tracing has been a widely accepted method in computer graphics and radiometric simulation. Since the introduction of the rendering equation by Kajiya \cite{Kajiya_1986}, many raytracing methods have been developed to simulate light behavior and optical effects. Tools for simulating daylighting performance metrics, such as Radiance and Velux Daylight Visualizer, take advantage of such ray-tracing techniques and have been validated through numerous studies. As a result, these tools are broadly used in building performance design and analysis, assisting architects and building engineers in evaluating daylight behavior in different phases of the design process.
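In one commonly cited form, Kajiya's rendering equation expresses the outgoing radiance $L_o$ at a surface point $x$ in direction $\omega_o$ as the emitted radiance plus the incoming radiance reflected toward $\omega_o$, integrated over the hemisphere $\Omega$ of incident directions: $$L_o(x,\omega_o)=L_e(x,\omega_o)+\int_{\Omega} f_r(x,\omega_i,\omega_o)\, L_i(x,\omega_i)\,(\omega_i \cdot n)\, \mathrm{d}\omega_i,$$ where $f_r$ is the bidirectional reflectance distribution function (BRDF) and $n$ is the surface normal at $x$. Ray-tracing methods differ mainly in how they sample and approximate this integral.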
\begin{figure*} \centering \includegraphics[width=1.9\columnwidth]{figures/Workflow} \caption{Workflow of RadVR. The system takes a 3D model with material properties as input. By incorporating Radiance as its calculation engine, RadVR simultaneously encompasses the qualitative immersive presence of VR and the quantitative, physically correct daylighting calculations of Radiance by overlaying simulation data onto spatial immersive experiences.}~\label{fig:flowchart} \end{figure*} However, implementing real-time raytracing \cite{cook1984distributed} methods in virtual environments is challenging due to the graphics processing limitations of current conventional hardware. This has resulted in the inability of such systems to produce physically correct renderings at high refresh rates. In order to experience six degrees of freedom (6DOF) and avoid user discomfort within immersive environments, rendered information displayed in Head-Mounted Displays (HMDs) is required to update at a framerate of at least 90 Hz to match the pose and field of view of the user. Rendering at such high frequencies requires substantial graphical computation power, which the GPUs of current conventional hardware are unable to provide. In addition to updating pose estimation, the wide field of view experienced in virtual environments requires high-resolution output, adding complexity and computational load to the rendering process. While there is ongoing research focused on producing high-frequency, physically correct rendering, which could eventually overcome the problem of misleading renderings for daylighting designers in VR, the ability to dynamically evaluate and analyze modeled spaces using quantitative metrics can still be considered an open challenge in the daylighting design process. Therefore, current game engines, which are the main development platforms for virtual reality applications, take advantage of biased rendering methods to achieve faster scene processing \cite{Gregory_2018}. Such methods limit the number of ray samples and their corresponding bounce counts from the camera to the light sources (or vice versa), resulting in unrealistic lighting representations of the target environment. Since light bounces are limited in such methods, they cannot produce accurate illuminance values for a given viewpoint; ambient lighting of surfaces is not achieved, and lighting computation is mainly limited to shadow and occlusion calculations of a scene. Many methods have been introduced to overcome this limitation \cite{williams1983pyramidal,crow1984summed,Segal_Foran_1992}, for example, visual illusion techniques or pre-rendered texture maps, in which pre-baked light textures are efficiently mapped to the corresponding geometry in the scene, decreasing the real-time rendering load of the model. However, in applications where lighting conditions change, such methods cannot be used. Additionally, display limitations of current HMD systems can also decrease the fidelity required for daylighting design and decision making. Although there are several studies that propose prototypes of high dynamic range displays, current consumer HMD hardware such as the Oculus Rift and the HTC Vive has a maximum brightness of no more than 190.5 cd/m$^2$ \cite{mehrfard2019comparative}. Therefore, relying on the visual outputs of current VR rendering pipelines might be misleading and counterproductive for daylighting design processes.
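To put this display limitation in perspective, consider a rough, illustrative calculation (the illuminance and reflectance values are assumptions chosen for the example, not measurements): a diffuse (Lambertian) surface with reflectance $\rho = 0.8$ under a daylight illuminance of $E = 10{,}000$ lux has a luminance of $$L = \frac{\rho E}{\pi} \approx 2{,}500 \ \mathrm{cd/m^2},$$ more than an order of magnitude beyond the peak brightness such headsets can physically reproduce, so the displayed image is necessarily a tone-mapped compression of the simulated luminous environment.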
Hence, it is important to inform the user of possible errors and photometric mismatches through an extended visualization medium that allows the user to compare and analyze rendered information against quantitative values in the form of common daylighting metrics. Radiance \cite{Ward_J._1994} is the most widely used, validated, physically-based raytracing program in lighting and daylight simulation of buildings \cite{ochoa2012state, reinhart2011daylight}. Although it was one of the first backward raytracing programs developed for light and building analysis, the incremental improvement and extension of Radiance’s raytracing capabilities to both bi-directional \cite{mcneil2013validation, geisler2016validation} and forward raytracing \cite{schregle2004daylight, grobe2019photon, grobe2019photonimage} led to it being widely regarded as the “gold standard” for lighting simulation \cite{santos2018comparison}. As a result, Radiance has been used in several inter-program comparisons and validation studies \cite{bellia2015impact, jones2017experimental,reinhart2011daylight,reinhart2009experimental}. Despite the recent advancements in integrating Radiance in current digital building design workflows, there are few works that use Radiance as an ancillary analysis tool in immersive environments \cite{wasilewski2017, Jones_2017}. Hence, to address the challenge of using accurate quantitative daylight information in immersive environments, this work proposes an end-to-end 6DOF virtual reality tool, RadVR, that uses Radiance as its calculation engine. RadVR simultaneously encompasses the qualitative immersive presence of VR and the quantitative, physically correct daylighting calculations of Radiance by overlaying simulation data onto spatial immersive experiences. The simulation accuracy can be customized by the user, from fast direct-light analysis to progressively accurate daylighting simulations with higher levels of detail. With an end-to-end system architecture, RadVR integrates with 3D modeling software that uses conventional 2D GUI environments, such as Rhino3D, and provides an immersive virtual reality framework for the designer to simulate and explore various daylighting strategies. By establishing a bi-directional data pipeline between the virtual experience and Radiance, daylighting analysis can be practiced in earlier stages of design, without the need to transfer models back and forth between platforms. In our proposed methodology, overlaying quantitative calculations of various daylighting metrics within the rendered virtual space provides additional informative value to the design process. In addition, providing the user with time-navigation and geometric manipulation tools, specifically developed for daylighting-based design scenarios, can further facilitate the analysis process in VR. \thispagestyle{empty} \section{Background} \subsection{Radiance and Daylighting Simulation} Radiance \cite{Ward_J._1994} is an open-source simulation engine that uses text-based input/output files and does not provide a Graphical User Interface (GUI). At the beginning of its development, Radiance was unable to interface with digital design tools such as Computer-Aided Design (CAD) and Building Information Modeling (BIM) programs. This forced designers and daylight analysts to use Radiance’s 3D modeling operations to describe a scene for simulation.
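As an illustration of this text-based workflow, a minimal hand-authored Radiance scene description might look as follows (the material values and geometry are placeholder examples for illustration, not taken from any particular tool):
\begin{verbatim}
# material primitive: modifier type identifier, followed by the
# number of string, integer and real arguments on separate lines
void plastic white_wall
0
0
5 0.7 0.7 0.7 0 0

# geometry primitive: a 4 m x 4 m floor polygon referencing the
# material above (12 reals = 4 vertices x 3 coordinates each)
white_wall polygon floor
0
0
12
    0 0 0
    4 0 0
    4 4 0
    0 4 0
\end{verbatim}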
Nevertheless, in the mid-1990s, Ward \cite{ward_obj2rad} proposed the obj2rad Radiance subroutine to facilitate the export of the geometry produced by CAD and BIM tools to Radiance. Since then, several researchers and software developers have proposed Radiance-based tools that promote the integration of design tools with the simulation engine. ADELINE \cite{christoffersen1994adeline, erhorn1994documentation} was one of the first tools to integrate Radiance with a CAD tool. A small built-in CAD program, the Scribe-Modeler, provided ADELINE with CAD and 3D modeling capabilities. However, ADELINE and Scribe-Modeler had several constraints regarding modeling operations and the geometric complexity of the building models; e.g., ADELINE only supported a limited number of mesh faces in a given model. ECOTECT \cite{roberts2001ecotect} was an analysis program for both thermal and daylight analysis of buildings that used Radiance to complement its limited abilities in accurately predicting daylight levels in buildings \cite{vangimalla2011validation}. The software facilitated the use of Radiance through a sophisticated GUI and became particularly popular in the early 2000s. However, the software is no longer developed and distributed. DAYSIM \cite{reinhart2001validation}, SPOT \cite{rogers2006daylighting}, and COMFEN \cite{hitchcock2008comfen} are Radiance-based tools that also emerged at the turn of the century. Although DAYSIM and SPOT extend Radiance calculation capabilities to Climate-Based Daylighting Modeling (CBDM), they do not provide a user-friendly GUI. Thus, the standalone versions of these tools are seldom used by architects in their design workflows. Regarding COMFEN, its modeling and simulation abilities are extremely limited because the tool was specifically developed for initial analyses that focus on glazing selection and the design of simplified static shading systems. DIVA for Rhino \cite{jakubiec2011diva} is a daylight and thermal analysis tool that largely contributed to the integration of Radiance-based analysis in building design workflows. DIVA is fully integrated with Rhinoceros (Rhino) \cite{mcneel2015rhinoceros}, a popular Non-Uniform Rational Basis Spline (NURBS) CAD software among architects. DIVA easily exports the geometry modeled in Rhino to Radiance and DAYSIM, and provides an easy-to-use GUI that interfaces with both simulation engines. The user can also access DIVA through Grasshopper, a Visual Programming Language (VPL) for Rhino, to perform more advanced Radiance simulations. Ladybug+Honeybee \cite{roudsari2013ladybug} enables a complete use of DAYSIM and Radiance-based techniques, including bi-directional raytracing techniques \cite{mcneil2013validation, geisler2016validation}, through the Grasshopper and Dynamo VPLs. The Ladybug+Honeybee version for Dynamo allows the use of Radiance and DAYSIM in the Revit BIM program. Autodesk Insight is another tool that supports cloud-based Radiance calculations for BIM design workflows. As briefly reviewed above, there have been continuous efforts to migrate the Radiance engine toward new, intuitive interfaces with design and analysis features that allow users to conduct daylighting simulation and informative analysis within performance-based design workflows.
Nevertheless, the integration of Radiance in performance-based design processes supported by Virtual and Augmented Reality interfaces is still in its early phases. It is important to investigate new user interfaces in Virtual Reality and Augmented Reality that are supported by state-of-the-art daylighting simulation engines such as Radiance for two reasons: (1) it is foreseeable that VR and AR will play an important role in the design of the built environment; and (2) daylighting design workflows supported by AR and VR tools require the robust and reliable predictions provided by state-of-the-art simulation engines such as Radiance. This work was developed in this research direction, introducing a novel 6DOF virtual reality interface to the Radiance engine for simulation, visualization, and analysis in daylighting-based design tasks. \thispagestyle{empty} \subsection{Virtual Reality and Design Task Performance} In the AEC and design community, virtual reality platforms are being gradually adopted as new mediums that can potentially enhance the sense of presence, scale, and depth for various stakeholders of building projects. Several methods have been introduced to study the relationship between spatial perception and user task performance in immersive environments. Witmer and Singer \cite{Witmer_Singer_1998} define presence as the subjective experience of being in one place or environment, even when one is physically situated in another. By developing presence questionnaires, they argue that a consistent positive relationship can be found between presence and task performance in virtual environments. Since then, similar questionnaires have been applied in several VR studies in AEC and related fields \cite{Castronovo_Nikolic_Liu_Messner_2013, Kalisperis_Muramoto_Balakrishnan_Nikolic_Zikic_2006, keshavarzi2019affordance}, with Faas et al. specifically investigating whether immersion and presence can produce better architectural design outcomes at early design stages \cite{Faas_Bao_Frey_Yang_2014}. For skill transfer and decision-making tasks, Waller et al. show that sufficient exposure to a virtual training environment has the potential to surpass a real-world training environment \cite{Waller_Hunt_Knapp_1998}. Heydarian et al. conclude that users perform similarly in daily office activities (object identification, reading speed, and comprehension) in immersive virtual environments and in benchmarked physical environments \cite{Heydarian_Carneiro_Gerber_Becerik-Gerber_Hayes_Wood_2015}. Moreover, other studies investigated whether virtual environments enhance occupant navigation in buildings when compared to 2D screens; some indicate a significant improvement \cite{Robertson_Czerwinski_van_Dantzich_1997, Ruddle_Payne_Jones_1999}, while others report no significant differences \cite{mizell2002comparing,Sousa_Santos_Dias_Pimentel_Baggerman_Ferreira_Silva_Madeira_2009}. In addition to individual design task procedures, the ability to conduct productive virtual collaboration between various stakeholders in a building project is an important factor that can impact multiple stages of the design process. Such capabilities of immersive environments have been broadly investigated for collaborative review purposes, which usually occur in the last phases of the design process. In these phases, critical evaluation and analysis can impact construction cost and speed \cite{Eastman_Teicholz_Sacks_Liston_2011}.
Identifying missing elements, drawing errors, and design conflicts can help avoid unwanted costs and allocation of resources. Commercial software such as Unity Reflect, Autodesk Revit Live, and IrisVR enable virtual walkthroughs and facilitate visualization of conventional 3D and BIM file formats. Some of these platforms also allow the designer to update the BIM model directly within the immersive environment, or vice versa. \subsection{Building Performance Visualization in Virtual Reality} Multiple studies have explored the visualization of building performance simulation results in VR, either by using pre-calculated data or by using the VR interface to perform simulations and visualize their outputs. For example, Nytsch-Geusen et al. developed a VR simulation environment using bi-directional data exchange between Unity and Modelica/Dymola \cite{nytsch2017buildingsystems_vr}. Rysanek et al. developed a workflow for managing building information and performance data in VR with equirectangular image labeling methods \cite{rysanek2017workflow}. For augmenting data on existing buildings, Malkawi et al. developed a Human Building Interaction system that uses Augmented Reality (AR) to visualize CFD simulations \cite{Malkawi_Srinivasan_2005}. Augmented and virtual reality interfaces have also been applied to structural investigations and finite element method simulations. In \cite{Hambli_Chamekh_Bel_Salah_2006}, the authors use artificial neural networks (ANN) and approximation methods to expedite the simulation process and achieve real-time interaction in the study of complex structures. Carneiro et al. \cite{carneiro2019influencing} evaluate how spatiotemporal information visualization in VR can impact user design choices. They report that participants reconsider initial choices when informed of better alternatives through data visualization overlaid in virtual environments. For performance-based generative design systems, the work of \cite{keshavarzi2020v} enables users to analyze and narrow down generative design solution spaces in virtual reality. Their proposed system utilizes a hybrid workflow in which a spatial stochastic search approach is combined with a recommender system, allowing users to pick desired candidates and eliminate undesired ones iteratively in an immersive fashion. \subsection{Daylight Analysis in Virtual Reality} In the study of daylighting, immersive environments have been widely used as an end-user tool to study daylight performance and collect occupant feedback. In this regard, Heydarian et al. study the lighting preferences of building occupants through their control of blinds and artificial lights in a virtual environment \cite{Heydarian_Pantazis_Carneiro_Gerber_Becerik-Gerber_2015}. Rockcastle et al. used VR headsets to collect subjective evaluations of rendered daylit architectural scenes \cite{Rockcastle_Chamilothori_Andersen_2017}. Using similar settings, Chamilothori et al. studied the effect of façade patterns on the perceptual impressions and responses of individuals to a simulated daylit space in virtual reality \cite{chamilothori2019subjective}. Carneiro et al. \cite{carneiro2019understanding} use virtual environments to study how the time of day influences participants' lighting choices for different window orientations. Similar to our study, they use real-time HDR rendering to visualize lighting conditions in VR.
However, their study is limited to only three times of the day (9am, 1pm, 5pm), and no real-time physically-correct simulation takes place to inform users with quantitative daylighting metrics of the target virtual environments. Instead, they use pre-calculated animations to visualize lighting levels. There is also a body of research which examines the validity of using VR to represent the impressions of a scene under various lighting conditions. Chen et al. \cite{chen2019virtual} compare participants' subjective feelings towards a physical lighting environment with a virtual reality reproduction, a video reproduction and a photographic reproduction. They show that human subjects are most satisfied with the VR reproduction, with a coefficient of 0.886. Chamilothori et al. \cite{chamilothori2019adequacy} used 360-degree physically-based renders visualized in a VR headset to evaluate and compare perceived pleasantness, interest, excitement, complexity, and satisfaction in daylit spaces against a real-world setting. While such an approach can be heavily dependent on the tone-mapping method used, their results indicate no significant differences between the real and virtual environments on the studied evaluations. Another similar validation is seen in the work of Abd-Alhamid et al. \cite{abd2019developing}, where they investigate subjective (luminous environment appearance, and high-order perceptions) and objective (contrast-sensitivity and colour-discrimination) visual responses in both real and VR environments of an office. They also report no significant differences in the studied parameters between the two environments and show a high level of perceptual accuracy of appearance and high-order perceptions in VR when compared to the real-world space. Finally, for real-time quantitative simulation and visualization, Jones \cite{Jones_2017} developed Accelerad, a GPU-accelerated version of Radiance for global illumination simulation with parallel multiple-bounce irradiance caching. The system allows faster renderings when compared to the CPU version of Radiance and can therefore facilitate higher refresh rates for VR environments. However, the VR implementation of this method currently does not provide high-frequency 6 degrees-of-freedom (6DOF) renderings, thus being limited in providing an enhanced sense of presence and of scale to the user. Our work, in contrast, utilizes qualitative 6DOF rendering using HDR pipelines and integrates Radiance as a backbone for quantitative simulations. Such a combination provides smooth (high refresh rate) spatial exploration within the immersive environment while allowing the user to overlay various daylighting simulations for quantitative analysis. In addition, we introduce novel interaction modules to facilitate the daylighting design task in VR. \begin{figure*} \centering \includegraphics[width=1.3\columnwidth]{figures/GHCompMerg.jpg} \caption{RadVR import plug-in for Grasshopper, a visual programming language for Rhinoceros 3D. Using the \emph{Assign Material} component, different material types (glazing, plastic, translucent, electrochromic glazing, etc.) can be assigned to geometry and provide a semantic input for RadVR}~\label{fig:ghComp} \end{figure*} \section{Methodology} \thispagestyle{empty} \subsection{System Architecture} Figure~\ref{fig:flowchart} shows the workflow of RadVR’s end-to-end processing pipeline. The system takes semantic 3D geometry as input and automatically converts it to a Radiance geometry description with the corresponding material properties.
When the user runs RadVR within the virtual environment, the Radiance engine runs an initial simulation in the background to prepare the primary scene within VR. This loads the entire geometry with its defined materials into VR, allowing the designer to explore, simulate and review the multiple daylighting functions of the tool. From this moment on, the virtual reality software integrates the Radiance engine to perform different simulations in a bi-directional manner, allowing the user to trigger the simulations directly in VR. The following subsections first describe the core issues addressed in the design of the system architecture: semantic 3D geometry input, octree preparation, Radiance integration, and game engine implementation. Second, we discuss the different functionalities of RadVR, simulation types, visualization, and output metrics. Finally, we describe the design approach of the user interaction of RadVR and the implementation of the different modules of the system. \subsubsection{Semantic Geometry Input} As the daylighting performance of a building is highly dependent on the material properties of the target space, the procedure of importing geometry should be intertwined with a semantic material selection to build a correct scene for daylight simulations. However, at early design stages, the material assignment for certain surfaces may not be finalized. Thus, to allow early-stage daylighting analysis, the authors developed a RadVR plug-in for Grasshopper - a visual programming environment for Rhinoceros3D. With this plug-in, the user directly assigns the corresponding material to every geometry prepared to be exported, including surfaces or meshes modeled in Rhino or parametric geometry produced by a Grasshopper algorithm. The RadVR plug-in produces the required data files for both the game engine and the simulation engine (Radiance) in two separate target directories. Such a method allows the user to interact with one unified input module in the Rhino/Grasshopper 2D GUI before transferring to a virtual immersive environment. Moreover, the plug-in serves as a bridge between 3D modeling and parametric practices and performance analysis in VR systems. The plug-in also provides a predefined material list from which the user can choose a material or modify its main parameters by using other Grasshopper functions in the pipeline. The plug-in is not a required component of the RadVR system, but rather a tool that facilitates importing data into the RadVR interface. Figure \ref{fig:ghComp} shows an example of how a semantic 3D model is exported via the RadVR plug-in components. \begin{figure*} \centering \includegraphics[width=1.8\columnwidth]{figures/ControllerDiagramMerged_Small.jpg} \caption{Changing the time of the year using virtual reality touch controllers. By pressing up/down the month of the year is modified and by pressing left/right the hour of the day is modified}~\label{fig:4pics} \end{figure*} \subsubsection{Octree Preparation} To prepare the input building description for daylighting simulations within VR, the system labels each instance of the input geometry with the corresponding material property and assigns a sky model to the scene, depending first on the simulation type and, if necessary, on location, day, and hour. If the user requests a daylight factor simulation the system automatically generates a CIE overcast sky, whereas if the user requests an illuminance analysis it generates a CIE clear sky model based on location, day, and hour.
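As an illustration of this step, such a sky description can be produced with Radiance's \emph{gensky} program, which natively supports both sky types. The following Python sketch is only indicative of the calls involved; the helper function and file names are hypothetical and do not correspond to the actual RadVR code base, which drives Radiance from C\# scripts: \begin{verbatim}
import subprocess

def make_sky(month, day, hour, lat, lon, meridian, overcast, out="sky.rad"):
    # CIE overcast sky (-c) for Daylight Factor runs; CIE clear sky with
    # sun (+s) for illuminance runs.  gensky expects degrees WEST of
    # Greenwich for -o (longitude) and -m (standard meridian).
    cmd = ["gensky", str(month), str(day), str(hour),
           "-a", str(lat), "-o", str(lon), "-m", str(meridian),
           "-c" if overcast else "+s"]
    sky = subprocess.run(cmd, capture_output=True, text=True,
                         check=True).stdout
    # Append the glow source that makes the sky hemisphere visible.
    sky += ("\nskyfunc glow sky_glow 0 0 4 1 1 1 0\n"
            "sky_glow source sky 0 0 4 0 0 1 180\n")
    with open(out, "w") as fh:
        fh.write(sky)

# The labelled geometry, materials and sky can then be frozen into one
# octree, e.g.:
# subprocess.run(["oconv", "materials.rad", "geometry.rad", "sky.rad"],
#                stdout=open("scene.oct", "wb"), check=True)
\end{verbatim}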
The sky model is stored in a dedicated text file, the labelled geometry in a Radiance file (*.rad), and another text file describes the materials' optical properties. All the files are combined into a single octree. If needed, the user can modify the material file by editing or writing their own materials using dedicated Radiance shaders. \begin{figure*} \centering \includegraphics[width=2\columnwidth]{figures/9point} \caption{9 point-in-time matrices in RadVR. While choosing each date in the matrix, the sun position instantly updates to construct the corresponding shadows and daylighting effects.}~\label{fig:9point} \end{figure*} \thispagestyle{empty} \subsubsection{Radiance Integration} For raycasting-based daylight simulation, RadVR uses Radiance \cite{Ward_J._1994} as its calculation engine. Radiance is a validated daylighting simulation tool \cite{mardaljevic1995validation, reinhart2000simulation, reinhart2006development} developed by Greg Ward and composed of a collection of console-based programs. To simulate irradiance or illuminance values at individual sensors, the system uses Radiance’s subprogram \emph{rtrace}. These sensors may form a grid over a work plane, or they may represent individual view directions for pixels of an image. However, instead of calculating color values for the output pixels of a scene, \emph{rtrace} sensors can be placed in a wide range of spatial distributions, covering target locations and directions in an efficient manner. This minimizes computation time by limiting ray-tracing calculations to specific targets and avoids calculating large images as one directional array. When a simulation is triggered in RadVR, a designated C\# script is activated to communicate with Radiance \emph{rtrace} through the native command console. The required input of every simulation is provided according to the virtual state of the user and the time of the year defined in the GUI of RadVR. Moreover, the \emph{rtrace} simulation runs as a background process without the user viewing the simulation console or process. Once the simulation is complete, a virtual window notifies the user of the completion and the scene is updated with the simulation visualization. The results are stored in memory and can later be parsed and visualized if called by the user. \subsection{Game Engine Implementation} As discussed in the introduction, the ability to output high-frequency renderings in an efficient manner is the main objective of modern game engines. RadVR uses the Unity3D game engine and libraries as its main development platform. Like many other game engines, Unity is currently incapable of real-time raytracing for VR applications and implements a variety of biased-rendering methods to simulate global lighting. For material visualization, a library of Unity material files was manually developed using the Standard Shader. The parameters of the shaders were calibrated by the authors to visually correspond to the properties of the listed materials available in the Grasshopper plug-in. In addition, these properties can later be modified in RadVR and are visually updated during runtime. \thispagestyle{empty} \subsection{RadVR User Modules} \subsubsection{Direct Sunlight Position Analysis} One important aspect of daylight analysis is understanding the relationship between time, sun position, and building geometry.
Hence, we implemented a module that, given a building location (latitude and longitude), a day of the year and an hour of the day, correctly positions the sun for direct sunlight studies in buildings. In RadVR, an interactive 3D version of the stereographic sun path diagram was developed using the calculations of the US National Oceanic and Atmospheric Administration (NOAA) Sunrise/Sunset and Solar Position Calculators. These calculations are based on equations from Astronomical Algorithms, by Jean Meeus \cite{Meeus_1998}. Each arc represents a month of the year and each analemma represents an hour of the day. The authors implemented a C\# script to translate the NOAA equations into functions that operate within the Unity3D environment. This script calculates the zenith and azimuth of the sun based on longitude, latitude and time of the year, and controls the position and rotation components of a direct light object in the VR environment. The mentioned inputs are accessible from the GUI of the program, both through user interface menu options and VR controller input. \thispagestyle{empty} To avoid non-corresponding arcs throughout the months, the representative days of each month differ and are as follows: January 21, February 18, March 20, May 21, June 21, July 21, August 23, September 22, October 22, November 21, December 22. In addition, monthly arcs are color-coded based on their season, with the winter solstice (December 22 in the northern hemisphere and June 21 in the southern hemisphere) visualized in blue, and the summer solstice arc (June 21 in the northern hemisphere and December 22 in the southern hemisphere) color-coded in orange. Monthly arcs in between correspond to a gradient of blue and orange based on their seasonality. The observer location of the sun path diagram is set to the center eyepoint (mid-point between the virtual left and right eyes). The diagram moves with the user, with its center always positioned at the observer location. When the user turns their head or executes virtual locomotion within the immersive environment, the sunpath diagram location is updated. This feature of the software also allows users to determine whether direct sun illumination is visible from the observer’s position throughout the year. If a section of the sun-path diagram is visible through the building openings, direct sunlight penetration will occur at the corresponding time at the observer’s location. The user controls the time of day using two different input methods. The first is by using the VR controllers and changing the time with joystick movements. Moving the joystick from left to right increases the time of the day on a constant day of the year, and vice versa. The joystick movement results in the sun’s movement along its arc from sunrise to sunset. In contrast, moving the joystick from down to up increases the day of the year at a constant time of the day, which results in movement along the corresponding analemma in the sun path diagram. The speed of the movement can be adjusted through the RadVR settings, allowing users to control their preferred sun path movement for daylight analysis. Moreover, to adjust the time in hourly steps and avoid the smooth transition in minutes, the authors implemented a \emph{SnapTime} function to assist designers with the time-of-day controls. This function also extends to the day of the year, with snaps happening on the 21st of the month only.
Designed as an optional feature, which can be turned on or off using the RadVR menu, \emph{SnapTime} allows users to quickly and efficiently round the time of the year to hourly and monthly values for sunlight analysis. The second input method is an immersive menu, which is loaded when the user holds the trigger. Using a raycasting function, the user can point toward different buttons and sliders and select the intended time of the day and day of the year. When the time of day and date of the year are changed by the user, lighting conditions and shadows are updated on the corresponding building model. However, in many cases the user wants to locate the position of the sun relative to the building, but due to the specific geometry of the model, the sun is blocked by solid obstructions. To resolve this issue, we implemented the \emph{Transparent Model} function in the workflow, which adds a see-through effect to the model to allow observing the sun position relative to the point of view. To achieve this, the \emph{Transparent Model} function replaces all solid and translucent materials with a transparent material. While conventional daylighting analysis tools such as DIVA also provide visualization of the sunpath diagram in their 2D interfaces, we believe our human-centric immersive approach further facilitates the task of correlating the 3D attributes of a designed building with the 3D properties of the sunpath. In this context, our goal was to develop an easy-to-control sunpath interaction module where the user can intuitively inspect in real time the relationship between the position of the sun, which depends on time, the building, and the resulting direct light pattern, while freely moving around in the building. Our approach also exceeds the sunpath simulators of current BIM visualization software in VR, such as IrisVR and Unity Reflect, by rendering analemmas, generating the 9 point-in-time matrix (i.e., 9 relevant hours that cover the equinoxes and solstices), and triggering transparent material modes to allow a better understanding of the sunpath movement and its relation to the building. \subsubsection{The 9 point-in-time matrix}\label{sec:9Point} In addition to the manual configuration of time, assessing a 9 point-in-time matrix is also a useful method in daylight studies. The analysis of the morning (9:00am), noon (12:00pm) and afternoon (3:00pm) conditions for the solstices and equinoxes is a fast way to evaluate and compare daylight patterns throughout the year. Users access the RadVR version of the point-in-time matrix through the corresponding UI menu, which contains 9 captioned buttons that represent the 9 points in time. By clicking on each button, the time is updated in the surrounding environment, resulting in the repositioning of the sun, shadows, etc. In contrast to the conventional 9 point-in-time matrix, where a single viewpoint of the building is rendered at 9 different times of the year to form a 3x3 matrix of the rendered viewpoints in one frame, RadVR’s 9 point-in-time matrix is a set of nine 360-degree 6DOF viewpoints that are individually accessed and updated through the 3x3 user interface shortcut. Therefore, the evaluation of these times can be done with a much wider field of view covering all surroundings and not just one specific camera angle.
This may result in a more comprehensive daylight comparison of the building space, as designers can simultaneously identify geometrical properties of daylight in multiple viewpoints of the building. However, not being able to view all renderings in one frame can be viewed as a drawback compared to the conventional 9 point-in-time render matrix. For an in-depth comparison of the conventional 2D point-in-time matrix vs RadVR, see the results of the user studies in Section \ref{sec:user}. \thispagestyle{empty} \subsubsection{Quantitative Simulations} One of the main design objectives of RadVR is to allow the user to spatially map the daylighting performance of the building to its geometrical properties while immersed in the virtual model of the building itself. When visualizing on-demand simulations in the surrounding immersive space, the user is able to perceive which geometrical properties are impacting the results by visually inspecting the building with 6DOF movement (see Fig.~\ref{fig:visualization}). While qualitative renderings of the daylit scene are produced directly from the game engine rendering pipeline, the physically correct quantitative simulations of conventional daylighting metrics are achieved by triggering Radiance simulations through the front-end user interface of RadVR. By defining simulation settings such as simulation type, sensor array resolution, and ambient bounce count through user-centric interaction modules, the designer can run, visualize, compare and navigate through different types of daylighting simulations within the immersive environment of RadVR. The following explains the different components of the quantitative simulation front-end module. A standard workflow for illuminance-based simulations, which is also used in other daylighting analysis software such as DIVA, is to define an array of planar sensors and measure the illuminance at each sensor. The sensor description includes a spatial position given by Cartesian 3D coordinates and a vector that defines its direction. For daylighting simulation, the sun location and sky conditions define the lighting environment; therefore, the time of the desired simulation and its corresponding sky model are input parameters for the simulation. \begin{figure*} \centering \includegraphics[width=2\columnwidth]{figures/TimeSeriesPPOutputHQ_LQ.jpg} \caption{Comparison of the 9 point-in-time matrix between the extracted screenshot panorama of RadVR (left) and Diva4Rhino (right).}~\label{fig:comp} \end{figure*} To construct the sensor arrays in RadVR, a floating transparent plane - the \emph{SimulationPlane} - is instantiated when the user is active in the simulation mode. This plane follows the user within the virtual space during all types of virtual locomotion (teleporting, touchpad-walking, flying), allowing the user to place the \emph{SimulationPlane} based on their own position in space. The size and height of the \emph{SimulationPlane} can be adjusted using the corresponding sliders. This type of interaction allows the VR user to locate the simulation sensors wherever they intend in the virtual environment. In contrast to conventional 3D modeling software, in which visual feedback is inherited from bird's-eye views and orbiting transformations are the main navigation interaction, immersive experiences are highly effective when designed around human-scale experiences and user-centric interactions.
Therefore, instead of expecting the user to use flying locomotion and accurate point selection to generate the sensor grid, the \emph{SimulationPlane} automatically adjusts its position relative to the user position. The distance between sensor points can also be adjusted by the user in both the X and Y directions. Such a property allows the designer to control the simulation time for different testing scenarios or to allocate different sensor resolutions to different locations. If studying a certain area of the virtual space requires more resolution, the user can adjust the \emph{SimulationPlane} size, height and sensor spacing, while modifying the same parameters for another simulation which can later be overlaid or visualized in the same virtual space. In addition to sensor resolution, the number of ambient bounces of the light source rays is another determining factor in the accuracy of ray tracing simulations. While the default value for RadVR simulations is set to 2 ambient bounces, this parameter can be modified through the corresponding UI slider to increase simulation accuracy in illuminating the scene. However, such an increase results in slower calculations, a trade-off which the user can adjust based on the objective of each simulation. The time and the corresponding sun location of each simulation are based on the latest time settings controlled by the user in the RadVR runtime. By using the touchpad controller to navigate the month of the year and hour of the day, or by accessing any of the given timestamps of the 9 point-in-time matrix, the user can modify the time of the year for the simulation setting. Moreover, longitude and latitude values can be accessed through the RadVR menu, allowing comparative analysis for different locations. The current version of RadVR offers quantitative simulations of two daylighting metrics: (a) point-in-time illuminance $(E)$, and (b) Daylight Factor ($DF$). In the following sections, we provide a short description of each metric and elaborate on why these metrics were prioritized in the development of the system. \begin{figure*} \centering \includegraphics[width=1.6\columnwidth]{figures/SimulationFigureLQ.jpg} \caption{Example visualization of a Daylight Factor $(DF)$ simulation within RadVR. Values are plotted at the location of each sensor node. A three-color gradient palette is implemented where blue is considered the minimum value, yellow the median, and red the maximum value. The range of the heatmap can be modified through the RadVR menus.}~\label{fig:visualization} \end{figure*} \thispagestyle{empty} \subsubsection{Point-in-time Illuminance Simulation} The illuminance at a point $P$ $(E_P)$ of a given surface is the ratio between the luminous flux $(\upphi)$ incident on an infinitesimal surface in the neighborhood of $P$ and the area of that surface $(A_{rec})$. The measurement units of $E$ are lux, in the International System of Units (SI), and foot-candles (fc), in the Imperial System of Units (IP). Illuminance measures how much the incident light illuminates a surface in terms of human brightness perception.
The mathematical formula is \cite{carlucci2015review}: $$E_P = \frac{d\upphi}{dA_{rec}} \text{ [lux or fc]}$$ \thispagestyle{empty} Since $E$ measurements are local and are assessed in a point-in-time fashion, illuminance simulations have the advantage of a faster calculation process when compared to other daylighting metrics and can deliver an accurate measurement at an instantaneous moment for a specific spatial location within a given luminous environment. Such a property allows RadVR users to trigger and visualize simulations in a fast and iterative manner, while navigating and inspecting the results. However, illuminance is a point-in-time (static) metric and does not measure daylighting quality over a given period of time. Therefore, a fast and effective user workflow should be established to allow the user to iterate and compare between different times of the year and provide efficient feedback to the design process. Such an approach is followed in RadVR: by easily changing the time of the year with the touchpad controllers or the 9 point-in-time matrix menu, the user can modify the simulation time settings in an immediate manner. The calculation of illuminance levels over a grid of sensor points involves the use of Radiance's \emph{rtrace} subprogram \cite{ward1996radiance}. For more details on using \emph{rtrace} and other complementary routines to calculate illuminance, the authors refer the reader to the tutorials of Compagnon \cite{compagnon1997radiance} and Jacobs \cite{jacobs2014radiance}. \thispagestyle{empty} \subsubsection{Daylight Factor Simulation} The Daylight Factor at a point $P$ $({DF}_P)$ is the ratio of the daylight illumination at a given point on a given plane, due to the light received directly or indirectly from a sky of assumed or known luminance distribution, to the illumination on a horizontal plane due to an unobstructed hemisphere of this sky. Direct sunlight is excluded from both interior and exterior values of illumination. The following expression calculates ${DF}_P$ \cite{hopkinson1966daylight}: $${DF}_P = \frac{E_{P,\mathrm{obs}}}{E_{P,\mathrm{unobs}}}$$ where $E_{P,\mathrm{obs}}$ is the horizontal illuminance at a point $P$ due to the presence of a room that obstructs the view of the sky and $E_{P,\mathrm{unobs}}$ is the horizontal illuminance measured at the same point $P$ if the view of the sky is unobstructed by the room. $DF$ was first proposed by Trotter in 1895 \cite{walsh1951early} and is defined as a ratio to avoid the dependency of daylight performance assessment on instantaneous sky conditions \cite{reinhart2006dynamic}. $DF$ assumes that the sky has a uniform luminance; therefore, it uses an overcast sky model in the simulation process. Such an assumption results in fast simulation times and can be representative of an entire year. As $DF$ cannot properly represent daylight illumination conditions that differ from the overcast sky \cite{mardaljevic2009daylight} and is insensitive to building orientation \cite{kota2009historical}, its combination with point-in-time illuminance simulations is useful for a broader understanding of the daylighting performance of the space, while maintaining a fairly low simulation run-time and computation process. The calculation of the Daylight Factor at a given point also uses \emph{rtrace} \cite{ward1996radiance}. For more details on using \emph{rtrace} and other complementary routines to compute the Daylight Factor, the authors refer the reader to the tutorials previously mentioned in the calculation of illuminance \cite{compagnon1997radiance,jacobs2014radiance}.
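To make the sensor-based workflow concrete, the sketch below illustrates how a work-plane sensor grid, such as the one spanned by the \emph{SimulationPlane}, could be passed to \emph{rtrace} in irradiance mode and converted to illuminance with Radiance's standard photopic weighting, $179\times(0.265R+0.670G+0.065B)$. The helper functions and file names are hypothetical and only indicative; RadVR itself performs the equivalent calls from C\# scripts: \begin{verbatim}
import subprocess

def grid_sensors(x0, y0, z, nx, ny, dx, dy):
    # Upward-facing work-plane sensors, one "x y z dx dy dz" line each
    return "\n".join(f"{x0 + i*dx} {y0 + j*dy} {z} 0 0 1"
                     for j in range(ny) for i in range(nx))

def illuminance(octree, sensors, ambient_bounces=2):
    # -I+ switches rtrace to irradiance mode; -h suppresses the header
    cmd = ["rtrace", "-h", "-I+", "-ab", str(ambient_bounces), octree]
    out = subprocess.run(cmd, input=sensors, capture_output=True,
                         text=True, check=True).stdout
    lux = []
    for line in out.strip().splitlines():
        r, g, b = (float(v) for v in line.split()[:3])
        # Radiance photopic conversion, 179 lm/W luminous efficacy
        lux.append(179.0 * (0.265*r + 0.670*g + 0.065*b))
    return lux

# e.g. a 0.8 m work plane sampled every 1 m over a 10 m x 10 m area:
# values = illuminance("scene.oct", grid_sensors(0, 0, 0.8, 10, 10, 1, 1))
\end{verbatim} Running the same sensors against an octree built with an overcast sky, the Daylight Factor at each sensor then follows as the ratio of the obstructed to the unobstructed horizontal illuminance.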
\begin{figure*} \centering \includegraphics[width=2\columnwidth]{figures/OtherGroups.jpg} \caption{Interior panoramic screenshot of a single frame using the RadVR system. Renderings are executed in real time, providing a 6 degrees of freedom immersive experience.}~\label{fig:othergroup} \end{figure*} \thispagestyle{empty} \subsection{Visualization of Simulation Results} Finally, after the completion of the simulation, RadVR automatically plots the results on the corresponding \emph{SimulationPlane} with a heatmap representation, where each sensor is located at the center of a colored cell (see Fig.~\ref{fig:visualization}). RadVR implements a three-color gradient palette where blue (RGB 0, 0, 255) is considered the minimum value, yellow (RGB 255, 255, 0) the middle value, and red (RGB 255, 0, 0) the maximum value. For point-in-time illuminance simulations, the system automatically extracts the minimum and maximum values from the simulation results, whereas in the Daylight Factor simulations the default range goes from 0 to 10. The user can later modify the minimum and maximum bounds of the visualization by accessing the corresponding range-slider in the RadVR simulation menu. \section{Case Study Applications}\label{sec:user} To capture additional user feedback and assess how daylighting analysis tasks can be executed in RadVR, we applied the proposed tool in two case studies with two separate user groups. These studies provide anecdotal evidence about the effectiveness of using the tool in building design and analysis. The first group provided ongoing empirical feedback throughout the development of the RadVR software. Features such as the dynamic sunpath, locomotion, and the simulation experience were explored, with design feedback and usability testing of each of the developed functions. The second group was involved in a one-time study comparing RadVR with a conventional 2D-display daylight analysis tool, DIVA for Rhino. In this anecdotal study, we explored how RadVR helped users understand the relationship between the sun, time, and the building in a sunlight study, in addition to perceiving the simulation results. Below we describe additional details of the two anecdotal studies. \subsection{Empirical Testing} The goal of these studies was to receive ongoing user feedback during the development of RadVR. Feature and usability testing, along with design discussions, were conducted over the course of 15 weeks. Eleven students of a graduate-level course - ARCH 249: Physical, Digital, Virtual - at UC Berkeley's Department of Architecture participated in this testing. The course itself was oriented towards designing interactive modules for virtual reality experiences. During weekly meetings, students tested new features of RadVR and provided empirical feedback on the developed functionalities. Features such as the direct sunlight position analysis, the 9-point matrix, and the locomotion modules, which involved many UI elements, were discussed during these sessions. Students of this group did not necessarily have design knowledge in daylighting and therefore mainly commented on the usability aspects of the virtual reality experience itself. The group provided ideas and ways of improvement for the studied features. The results of the discussions were gradually embedded into the final user experience design of RadVR.
\subsection{Preliminary Comparative Study} To evaluate the potential of our approach, we conducted a pilot study of RadVR with students who had previously used a popular Radiance and DAYSIM front-end for Rhino, DIVA \cite{jakubiec2011diva}. This study used the design work produced by architecture students for a daylighting assignment conducted in a graduate-level course - ARCH 240: Advanced Topics in Energy and Environment - at UC Berkeley's Department of Architecture. Participation in the study was optional for students, and 16 out of 40 students volunteered for this case study. Only 5 of the 16 participants had experienced a 6DOF virtual reality experience before. The study entailed the following phases: (1) using DIVA to improve the daylight performance of the student’s design previously done for the ARCH 240 class; (2) conducting daylight analysis in RadVR with the same purpose of improving daylight performance; (3) completing an exit survey comparing the two approaches to daylighting analysis and design. During their previous ARCH 240 assignment, students had been asked to design a 25 m x 40 m swimming pool facility in San Francisco, California, with a variable building height. The goal of the design was to achieve a coherent and well-defined daylight concept for the building that addresses both the diffuse and direct components of light. The ARCH 240 instructors advised students to consider relevant daylight strategies, including top lighting, side lighting, view out, relation with solar gains and borrowed light. Students used the DIVA for Rhino tool to assess and refine the daylighting strategies implemented in the design task phase. Daylight Factor analysis and 9 point-in-time matrix visualizations had been conducted in this phase and reported as part of the assignment deliverables. The students positioned an equally spaced analysis sensor grid, with sensors placed 0.6 m from each other, at 0.8 m from the ground floor. Radiance’s ambient bounce (-ab) parameter was set to 6. Using the RadVR Grasshopper plug-in (Figure \ref{fig:ghComp}), the Rhino models were exported to RadVR. The assigned materials followed those chosen by the students. Students conducted daylight analysis in RadVR, performing two tasks during a 15-minute session. First, students studied the relationship between the sun, time, and the building in a sunlight study. For this, they initially navigated and inspected their designs using the two implemented locomotion functions, teleportation and flying. In order to observe time variations in direct light patterns, students used the time controllers that control the sun position according to the hour of the year and the location of the design (i.e., latitude and longitude). The students also used the 9 point-in-time matrix module of RadVR to study the direct light patterns in different seasons and at different times of day. Second, the students simulated the Daylight Factor through the RadVR menu. As in DIVA, they positioned the sensor grid at 0.8 m from the ground floor. However, to reduce computation time and provide quicker performance feedback, they used a 1 x 1 m sensor grid and 2 ambient bounces. After the simulation, the participants navigated through the results and evaluated the building design by correlating the key design features that affect the simulation results. The users could color-map the results and control the value gradient by accessing the visualization menus and changing the range to their preferred domain.
Before each task, the users followed a brief tutorial on how to use RadVR that took approximately 4 minutes. \thispagestyle{empty} Upon completion, the users filled out an exit survey that evaluated their experience in RadVR compared to DIVA for Rhino. The questions of the survey were designed to identify tasks where the immersive experience in RadVR can bring additional insight or improve the usability of digital simulation tools in current daylighting design workflows. The survey had three parts. The first part evaluated the user experience in conducting the first task, which focuses on studying direct daylight patterns over time. The second part focused on using RadVR for illuminance-based simulations and assessed the user experience in producing and navigating through simulation results in VR. The last part consisted of an overall evaluation (e.g., comfort, learning curve) of the RadVR software compared to DIVA for Rhino. Each question covered a specific activity of the two daylight analysis tasks. To answer each question, the students needed to choose a value on a linear 5-point scale comparing the performance of the mentioned activity between the two tools. To avoid any confusion, the words \textit{RadVR} and \textit{Diva for Rhino} were colored in different colors and the comparison adjectives (significantly, slightly, same) were displayed in bold text. \subsection{Equipment} When working with RadVR, participants used an Oculus Rift head-mounted display and controllers connected to a computer with a Core i7 2.40 GHz processor and an NVIDIA GeForce GTX 1060 graphics card. The maximum measured luminance of the Oculus Rift display has been reported to be 80 cd/m$^2$ for a white scene RGB (255, 255, 255) \cite{Rockcastle_Chamilothori_Andersen_2017}. The Oculus Rift has a 110° field-of-view display, with an OLED display for each eye with a resolution of 1080×1200 pixels and a refresh rate of 90 Hz. \thispagestyle{empty} \section{Results} Table \ref{fig:userTable}.a presents the first part of the survey results, focusing on understanding the relationship between time, the sun and the building. The majority of the responses show that RadVR can be potentially helpful in understanding the variation of direct light patterns through time. As time navigation in RadVR is achieved using the VR controller joysticks, the direct sunlight penetration smoothly updates in the scene, allowing a quick understanding of how the variation of the hour and date impacts the sun location relative to the building. However, some participants pointed out the system’s inability to capture diffuse lighting, which is a result of using biased rendering methods to achieve real-time visualizations. The participants also found the ability to move around in the building very useful, particularly to correlate design features and daylight performance. This highlights the importance of 6DOF movement being available in the immersive design tool. Yet, we observed that some users experienced initial difficulties when using the available locomotion techniques (virtual flying and walking) to inspect parts of their designs in VR. This was the main drawback when compared with 2D GUI zoom and pan functionalities. Moreover, as mentioned in Section \ref{sec:9Point}, RadVR’s 9-point matrix is not a grid of 9 rendered images of different times of the year but a matrix of 9 buttons that change the time of year of the surrounding environment.
Thus, it is only possible to compare the sunlight patterns of different hours and seasons by switching from one point in time to another. The responses emphasize that comparison mostly takes place between two modes of daylighting conditions and heavily relies on visual memory. Nevertheless, the participants found that not being limited to a specific point of view and being able to easily change light conditions by controlling time and location are advantages of RadVR over daylighting analysis processes based on 2D still renderings. \thispagestyle{empty} \begin{table*} \caption{Results of post-experiment surveys with the focus of (a) understanding the relationship between time, the sun and the building, (b) understanding simulations, and (c) usability experiences. Participants were asked to indicate which software created a better workflow for the questioned activities on a five-point scale.}~\label{fig:userTable} \centering \includegraphics[width=1.8\columnwidth]{figures/TableReformattedNoPer_Mod-PS.pdf} \end{table*} \thispagestyle{empty} \begin{figure} \centering \includegraphics[width=1\columnwidth]{figures/user} \caption{User experiments of RadVR while performing Daylight Factor simulations on designed spaces}~\label{fig:userPic} \end{figure} In the second part of the survey (Table \ref{fig:userTable}.b), the questions aimed to assess the ability of RadVR to aid in understanding simulation results, and their relationship with building geometry, when compared with the participants' previous experience with DIVA. Approximately two thirds of the answers preferred RadVR as a simulation visualization tool, while the remainder indicated that the proposed system does not perform substantially better in this regard. With the \emph{SimulationPlane} located below eye level, participants could instantly locate over-lit or under-lit areas and virtually teleport to the areas that were outside the preferred 4-6\% daylight factor range. We also observed that students were able to locate the building elements (side openings, skylights, etc.) that affected the results by instantly changing their point of view within the building and the simulation map. In some cases, participants accessed the Gradient Change slider from the RadVR menu to change the default range (0\%-10\%) of the heatmap (for example, to 2\%-4\%) as a way to narrow down their objective results. Finally, in the third part of the experiment, the software’s general usability experience was studied (Table \ref{fig:userTable}.c). In terms of the learning curve, a significant majority of the participants reported that RadVR was easier to learn than DIVA for Rhino. This may be a result of designing the time navigation and locomotion functions with minimal controller interactions. However, since DIVA for Rhino was taught to the students first, many of the key concepts of software-based daylighting analysis had been previously understood, which may have eased the learning process for RadVR. Other functions are accessed through immersive menus; due to the large field of view compared to 2D screens, every menu window can contain many functions. During the experiment, if the subject asked how to execute a specific function, the author would assist vocally while the subject had the headset on. \thispagestyle{empty} All responses show that RadVR enabled a more enjoyable experience than DIVA for Rhino.
However, since most participants (11/16) had never experienced 6DOF VR, there might be a bias towards RadVR due to its novelty and engaging environment. Many participants seemed to enjoy the new immersive experience of walking and navigating through their designed spaces with 6DOF technology, an approach which may be associated with play and in which movement is key. When asked about why RadVR could play an effective role as a teaching tool, some participants noted that the real-time update of direct light and shadows while navigating through different time events helped their understanding of the sun movement at different times of the year. After the session, a number of groups showed intent to modify their design strategies, citing their improved understanding of the geometry and its impact on daylighting performance. \section{Conclusion} \thispagestyle{empty} This work introduces a 6DOF virtual reality tool for daylighting design and analysis, RadVR. The tool combines qualitative immersive renderings with quantitative, physically correct daylighting calculations. With a user-centric design approach and an end-to-end workflow, RadVR allows users to (1) observe direct sunlight penetration through time, based on the design’s location, by smoothly updating the sun’s position, (2) interact with a 9-point matrix of illuminance calculations for the nine most representative times of the year, (3) simulate, visualize and compare Radiance raytracing simulations of point-in-time illuminance and daylight factor directly through the system, and (4) access various simulation settings for different analysis strategies through the front-end virtual reality user interface. This work includes a preliminary user-based assessment study of RadVR performance conducted with students who had previously used DIVA for Rhino. The survey results show that RadVR can potentially facilitate spatial understanding tasks, navigation and sun position analysis as a complement to current 2D daylight analysis software. Additionally, participants report that they can better identify which building elements impact simulation results within virtual reality. However, as the preliminary studies do not follow a fully randomized procedure with identical tasks and conditions for both RadVR and DIVA for Rhino, the purpose is not to determine whether RadVR outperforms DIVA for Rhino in the aforementioned tasks, but to identify tasks where RadVR can complement current two-dimensional daylight computer analysis. In fact, to fully compare the different tools, a more comprehensive user study is required as future work to evaluate the effectiveness of RadVR compared with other daylighting simulation interfaces, using a randomized population of users with various skill levels in daylighting design tools. Since students had initially learned the concepts of daylighting analysis through DIVA, the comparative results do not necessarily guarantee that RadVR could be a substitute for DIVA, or other 2D display software, as a standalone daylighting analysis tool. Instead, we believe our proposed system indicates new directions for daylighting simulation interfaces for building design, provides additional usability, proposes new analytical procedures, and complements current daylighting analysis workflows. \thispagestyle{empty} One of the main contributions of this work is establishing a stable bi-directional data pipeline between Unity3D and third-party building performance simulation tools.
While many building performance simulation engines do not have native graphical user interfaces and can only be accessed through console-based systems, the development of a virtual reality GUI would allow architects and other building designers to conduct pre-construction analysis of various performance metrics in 1:1 scale immersive environments. Moreover, with the integration of VR design methodologies in CAD-based software, it will be possible to use such analysis in earlier stages of design, all within immersive environments and without the need to transfer models back and forth between platforms. However, despite the spatial immersion and presence provided by RadVR, the tool comes with a number of limitations. For example, due to the limited power of its real-time rendering system, many spatial qualities of daylit spaces cannot be captured, resulting in flat renderings and unrealistic qualitative outputs. Such a limitation is most visible when indirect lighting strategies are implemented, since the biased rendering methods used in current game engine real-time rendering systems cannot fully capture ray bouncing and light scattering effects. Additionally, reading results on a large-scale heatmap from a human-scale point of view is difficult, particularly with the visualized work plane usually set at 0.8 m and the eye height around 1.7 m. Users reported this limitation was largely resolved in flying mode, since they could observe results from a bird's-eye view. Repositioning to the right point of view is another limitation, since it is time consuming compared to the fast orbit interactions of 3D modeling environments in 2D user interfaces. Nevertheless, after identifying over-lit or under-lit areas, users were able to teleport to the exact location and investigate which architectural element is responsible for them. Future work on the development of this tool encompasses four main tasks. First, improve the graphics quality by implementing state-of-the-art rendering shaders and recent GPU-based techniques to improve the ability of the system to better capture ambient lighting in real time. Second, expand RadVR's current sky model palette (the CIE overcast sky and the CIE clear sky) by including the all-weather Perez sky models \cite{perez1993all}, which use Typical Meteorological Year hourly data to express the typical sky conditions of a given location at any given hour. The inclusion of all-weather Perez skies is also an important task for expanding the RadVR tool to handle climate-based daylight metrics, including Daylight Autonomy and Useful Daylight Illuminance. Third, improve data visualization by exploring other types of data representation that better suit 3D immersive spaces. Such an approach can enhance data reasoning tasks in VR and can benefit from visual properties such as stereoscopic depth, gaze and color maps. We also intend to conduct a comprehensive comparison between the RadVR sun path visualizer, other digital sun path tools, and physical heliodons to further refine the RadVR sun path tool. Finally, we plan to extend the current RadVR locomotion abilities with redirected walking and tunneling techniques to reduce potential motion sickness resulting from artificial locomotion in VR. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{intro} There are many searches for new physics being carried out at the Large Hadron Collider (LHC), but for the moment only a scalar particle that resembles the last missing piece of the Standard Model (SM) has been discovered, the Higgs~\cite{Aad:2012tfa,Chatrchyan:2012ufa}. The fact that no clear sign of new physics emerges from these collider searches implies that, as phenomenologists, we should pay attention to the small but coordinated deviations from the SM behavior that may arise in different search channels as possible hints of a specific type of new physics. Interestingly, there are two particular, somewhat recent searches in which this behavior seems to be happening. On one hand, in the ditop search performed by the CMS collaboration~\cite{Sirunyan:2019wph} at a center-of-mass energy $\sqrt{s}=13$ TeV and a luminosity of $\mathcal{L}=35.9$ fb$^{-1}$, with top quarks decaying into single and dilepton final states, deviations from the SM behavior arise at the 3.5 $\sigma$ level locally (1.9 $\sigma$ after the look-elsewhere effect) that can be accounted for by a pseudoscalar with a mass around 400 GeV that couples at least to top quarks and gluons. On the other hand, in the ditau search done by the ATLAS collaboration~\cite{Aad:2020zxo} at $\sqrt{s}=13$ TeV and $\mathcal{L}=139$ fb$^{-1}$, deviations from the SM at the 2$\sigma$ level are observed which, interestingly enough, can be accounted for by a pseudoscalar coupling to tau leptons, bottom quarks and gluons, once again with a mass around 400 GeV. Furthermore, whatever new physics may account simultaneously for these possible hints should also be consistent with the other collider searches to which it could contribute~\cite{Aad:2014ioa,Khachatryan:2015qba,Aaboud:2017yyg,Aaboud:2017uhw,Aaboud:2017cxo,Sirunyan:2018taj,Sirunyan:2019xls,Sirunyan:2019xjg,Sirunyan:2019wrn,Aad:2020klt}. In this work we initially consider, from a purely phenomenological bottom-up approach, the possibility that a pseudoscalar with a mass of 400 GeV is indeed responsible simultaneously for these observed hints while at the same time satisfying the constraints coming from the other final states that it could contribute to and that are searched for at the LHC. An idea in this direction was also considered in~\cite{Richard:2020cav}. Considering the interactions of the pseudoscalar up to dimension-5 operators, we find that there exists a well-defined region of couplings in which the hints as well as the constraints can be satisfied at the same time. We then study the implications for the new physics of possible gauge-invariant models in which a pseudoscalar with that particular mass and couplings can be obtained. We find that, given the parameter space consistent with the measurements, it turns out to be highly unlikely that the pseudoscalar could be accommodated in a two-Higgs-doublet model (2HDM), even in more general possible versions of it~\cite{Egana-Ugrinovic:2019dqu}. However, we show that if the pseudoscalar is associated with a pseudo-Nambu-Goldstone boson (pNGB) of a broken global symmetry in composite Higgs models~\cite{Kaplan:1983fs,Kaplan:1983sm,Georgi:1984ef,Georgi:1984af,Dugan:1984hq,Contino:2003ve,Agashe:2004rs}, one can accommodate its mass and couplings in an $SO(6)/SO(5)$ model~\cite{Gripaios:2009pe,Chala:2017sjk} consistently with the measurements at the 1$\sigma$ level, and in an $SO(5)\times U(1)_{P}\times U(1)_{X}/SO(4)\times U(1)_X$ model~\cite{Chala:2017sjk} at the 2$\sigma$ level.
The remainder of the work is organized as follows: in Section~\ref{exp-searches} we summarize the main experimental searches, carried out by ATLAS and CMS, for heavy resonances that give rise to small deviations from the SM expectations or that could restrict the possible parameter space. Section~\ref{pheno} is devoted, from a general effective-field-theory approach, to the phenomenology of a 400-GeV pseudoscalar boson that can account for these experimental signatures. In Section~\ref{num-results} we analyze numerically the range of couplings allowed and excluded by the experimental data, while Section~\ref{UVcomp} is dedicated to the UV completions that could give rise to this 400-GeV pseudoscalar boson with the allowed couplings. Finally, we present our main conclusions in Section~\ref{conclu}. \section{Experimental hints and constraints} \label{exp-searches} In the present work we consider several searches carried out at the LHC by the ATLAS and CMS collaborations that either show a hint of or lead to a constraint on a pseudoscalar, and that allow us to determine the effective couplings of the proposed new state. A brief summary of the searches and the parameters used is presented in this section. \subsection{Final state $t\bar t$} A moderate excess in top quark pair production has been found by the CMS collaboration \cite{Sirunyan:2019wph}. This excess has been shown to be compatible with an intermediate state consisting of a scalar or pseudoscalar boson with a mass of $400$ GeV, created via gluon fusion and decaying into a top-antitop quark pair. In Ref.~\cite{Sirunyan:2019wph} the CMS collaboration presented a search for heavy Higgs bosons decaying into a top quark pair, in single and dilepton final states, within a data set corresponding to an integrated luminosity of 35.9 fb$^{-1}$ and a center-of-mass energy of $13$ TeV. The masses of the hypothetical scalar and pseudoscalar bosons were probed within the range $400$ GeV to $750$ GeV, with a total relative width from $0.5$ to $25\%$ of the mass. The largest deviation from the SM background was observed for a pseudoscalar boson with a mass of $400$ GeV and a total relative width of $4\%$, with a local significance of $3.5\pm 0.3$ standard deviations. The significance of the excess became $1.9$ standard deviations after accounting for the look-elsewhere effect in the mass, total width and CP state of the new resonance. The analysis considered tree-level interactions with the top quark only, as well as interactions with gluons induced at the one-loop level. Under those assumptions it was found that, for $m_A=400$ GeV and $\Gamma_A/m_A=0.04$, a top coupling $g_{At\bar t}\approx 0.9$ yields the maximum likelihood ratio between the hypothesis of the existence of the pseudoscalar boson and the SM scenario, as can be seen in Fig. 7 of that work.~\footnote{We are using the same notation as Ref.~\cite{Sirunyan:2019wph} in this subsection.} For $0.6\lesssim g_{At\bar t}\lesssim 1$ the model exceeds the range of compatibility with the SM by more than two standard deviations, is consistent with the measurement at one standard deviation, and is below a critical value which would yield a nonphysical scenario, in which the partial decay width of the pseudoscalar boson into a top-antitop pair would exceed the assumed total decay width. In the present work we consider cross sections for the $t\bar t$ channel in agreement with those values of $g_{At\bar t}$.
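The consistency of this coupling range with the assumed total width can be checked with the standard expression for the $t\bar t$ partial width of a pseudoscalar with SM-normalized coupling, $\Gamma(A\to t\bar t)=3\, G_F\, m_t^2\, m_A\, \beta\, g_{At\bar t}^2/(4\sqrt{2}\pi)$ with $\beta=\sqrt{1-4m_t^2/m_A^2}$. The short Python sketch below is an illustrative numerical check, not part of the experimental analysis; it reproduces the critical coupling above which the $t\bar t$ partial width alone would exceed the assumed $4\%$ total relative width: \begin{verbatim}
import math

G_F, m_t, m_A = 1.1664e-5, 172.5, 400.0   # G_F in GeV^-2, masses in GeV
width_frac = 0.04                          # assumed Gamma_A / m_A

def gamma_tt(g):
    # Pseudoscalar -> t tbar partial width for SM-normalized coupling g
    beta = math.sqrt(1.0 - 4.0 * m_t**2 / m_A**2)
    return (3.0 * G_F * m_t**2 * m_A * beta * g**2
            / (4.0 * math.sqrt(2.0) * math.pi))

for g in (0.6, 0.9, 1.0):
    print(f"g_Att = {g}: Gamma_tt = {gamma_tt(g):.1f} GeV"
          f" ({gamma_tt(g) / m_A:.1%} of m_A)")

# Critical coupling at which Gamma_tt alone saturates the total width:
print("g_crit =", round(math.sqrt(width_frac * m_A / gamma_tt(1.0)), 2))
\end{verbatim} For $g_{At\bar t}=0.9$ this gives a $t\bar t$ partial width of roughly $10$ GeV, about $2.4\%$ of $m_A$, safely below the assumed $4\%$ total width, with the critical coupling landing slightly above 1, in line with the range quoted above.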
\subsection{Final state $\tau \tau$} In Ref.~\cite{Aad:2020zxo} the ATLAS collaboration reported a search for heavy scalar and pseudoscalar bosons performed with data corresponding to Run 2 of the LHC, with an integrated luminosity of $139$ fb$^{-1}$ at a center-of-mass energy of $\sqrt{s}=13$ TeV. The search for heavy resonances was performed within the mass range $0.2-2.5$ TeV, in the $\tau^+\tau^-$ decay channel. The relevant data for this work are presented in plots of $\sigma_{bb}\times BR_{\tau \tau}$ vs $\sigma_{gg}\times BR_{\tau \tau}$, which show ellipses for one and two standard deviations containing the observed value at the center. For a mass of 400 GeV the SM scenario lies more than two standard deviations away from the observed value. In this work we demand that $\sigma \times BR$ lie within the area contained by the $1\sigma$ ellipses (except where explicitly stated). \subsection{Final state $b\bar b$} The CMS collaboration has searched for Higgs bosons decaying into a bottom-antibottom quark pair accompanied by at least one additional bottom quark, with data corresponding to a center-of-mass energy of $13$ TeV and an integrated luminosity of 35.7 fb$^{-1}$~\cite{Sirunyan:2018taj}. The analysis considered scalar and pseudoscalar bosons with masses ranging from $300$ to $1300$ GeV, finding no significant deviation from the SM. An upper limit for the cross section times branching fraction is reported: $\sigma_{bb} \times BR_{bb}=5.7$ pb at 95\% confidence level (CL), for a mass of 400 GeV. \subsection{Final state $t\bar tt\bar t$} Another decay channel of interest for the present work consists of the decay of the pseudoscalar boson into four tops. We consider the result reported in Ref.~\cite{Aad:2020klt} by the ATLAS collaboration, where the production cross section for two top-antitop pairs was found to be $24^{+7}_{-6}$ fb. Interestingly, notice that this value is roughly two standard deviations above the SM prediction. \subsection{Final state $Zh$} A pseudoscalar boson decaying into a $Z$ boson and a neutral SM Higgs boson has been probed by the ATLAS and CMS collaborations in multiple searches, finding no evidence of any significant deviation from the SM background. In the present work we consider the constraints on $\sigma\times BR$ reported by the ATLAS collaboration~\cite{Aaboud:2017cxo}, which sets limits for the gluon fusion and bottom fusion initial states; for a mass of 400 GeV the 95\% CL bounds are roughly $0.22$ pb and $0.25$ pb, respectively. \subsection{Final state $\gamma\gamma$} Searches for new heavy particles decaying into two photons within mass ranges containing $400$ GeV have been carried out by the ATLAS and CMS collaborations. No excess has been found by any search and upper limits were set on the production cross section times branching ratio. In Ref.~\cite{Aad:2014ioa} the ATLAS collaboration presented a search for scalar particles decaying via narrow resonances into two photons, with masses ranging from $65$ to $600$ GeV, using $20.3$ fb$^{-1}$ at $\sqrt{s}=8$ TeV, finding no evidence for the existence of these particles and setting an upper limit of $\sigma\times BR_{\gamma\gamma}\lesssim 2-3$ fb at 95\% CL. Another search performed by the ATLAS collaboration is presented in Ref.~\cite{Aaboud:2017yyg}, with an integrated luminosity of $36.7$ fb$^{-1}$ at $\sqrt{s}=13$ TeV. This search included spin-0 particles decaying into a final state consisting of two photons within a mass range of $200$ to $2700$ GeV.
For 400 GeV the upper limit was found to be $2-3$ fb at 95\% CL. In Ref.~\cite{Khachatryan:2015qba} the CMS collaboration explored the diphoton mass spectrum from $150$ to $850$ GeV with an integrated luminosity of $19.7$ fb$^{-1}$, at a center-of-mass energy of $\sqrt{s}=8$ TeV. Assuming a total decay width between $0.1$~GeV and $40$~GeV, the CMS collaboration reported an upper limit $\lesssim 4-15$~fb at 95\% CL. \subsection{Final state $VV$} The $VV$ final state, with $V=W,Z$, has been explored by the ATLAS and CMS collaborations at the LHC in numerous searches, finding no excesses in any of them. A search for heavy neutral resonances decaying into a $W$ boson pair was carried out by the ATLAS collaboration in Ref.~\cite{ATLAS:2017uhp} using a data set corresponding to an integrated luminosity of 36.1 fb$^{-1}$ at a center-of-mass energy of $13$ TeV. The search focuses on the decay channel $(WW\rightarrow e\nu\mu\nu)$ and provides an upper limit for $\sigma\times BR_{WW}$ as a function of the mass of the resonance, ranging from $200$ GeV to $5$ TeV. For the analysis, various benchmark models were considered, including Higgs-like scalars in different width scenarios. In Ref.~\cite{ATLAS:2020tlo}, the ATLAS collaboration presents a search for heavy resonances decaying into a pair of $Z$ bosons leading to two lepton-antilepton pairs, or to a lepton-antilepton pair and a neutrino pair, for masses ranging from $200$ to $2000$ GeV. A similar search was published by the CMS collaboration in the range $130$ GeV to $3$ TeV~\cite{CMS:2018amk}. \subsection{Final state $Z\gamma$} A search for the Higgs boson and for narrow high-mass resonances decaying into $Z\gamma$ is presented by the ATLAS collaboration in Ref.~\cite{Aaboud:2017uhw} using $36.1$ fb$^{-1}$ of $pp$ collisions at $\sqrt{s}=13$ TeV. The search for high-mass resonances focuses on spin-0 and spin-2 interpretations. The results are found to be consistent with the SM, and upper limits on the production cross section times the branching ratio are reported, with observed values varying between $88$ fb and $2.8$ fb for masses within $250$-$2400$ GeV in the case of spin-0 resonances. \section{Phenomenology} \label{pheno} Considering the experimental hints described in the previous section as a possible sign of new physics at an invariant mass around 400 GeV, we initially propose, from a purely phenomenological perspective, an analysis in which we study the compatibility of some of these hints with the introduction of a new pseudoscalar state $a$ with a mass $m_a=400$ GeV, that interacts solely with $3^{rd}$ generation quarks and charged leptons, the SM gauge bosons $Z$, gluons, photons, and $H$, as: \begin{align}\label{eq-Lint} {\cal L}_{\rm int}&=\frac{g_{gg}}{4}\ a\ G_{\mu\nu}\tilde G^{\mu\nu} + \frac{g_{\gamma\gamma}}{4}\ a\ F_{\mu\nu}\tilde F^{\mu\nu} + g_{Zh} (a\overleftrightarrow\partial_\mu h) Z^\mu + i g_{t}\ a\ \bar t\gamma^5 t + i g_{b}\ a\ \bar b\gamma^5 b + i g_{\tau}\ a\ \bar\tau\gamma^5\tau \ , \end{align} with $\tilde F^{\mu\nu}=(1/2)\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}$, and similarly for $\tilde G^{\mu\nu}$. Here $g_{t}$, $g_{b}$, $g_{\tau}$, and $g_{Zh}$ are dimensionless couplings, whereas $g_{gg}$ and $g_{\gamma\gamma}$ have dimensions of an inverse mass scale; the former are expected to be present at tree level, whereas the latter are expected to be induced at the one-loop level.
This is the most general CP-invariant interaction Lagrangian linear in the pseudoscalar state $a$ that can be written up to dimension-5 operators, neglecting pseudoscalar interactions with the kinetic terms of the massive electroweak gauge bosons, $\tilde{Z}_{\mu\nu}Z^{\mu\nu}$ and $\tilde{W}^{\dagger}_{\mu\nu}W^{\mu\nu}$, and with $\tilde{F}_{\mu\nu}Z^{\mu\nu}$; though these are of the same (one-loop) order as the interaction with photons, they turn out to be irrelevant for the phenomenology we want to address. In fact, due to their connection via electroweak (EW) symmetry, the pseudoscalar coupling to EW massive gauge bosons and to $Z\gamma$ should be similar in nature to the diphoton coupling. Rescaling the photon coupling by the corresponding factors and modifying the decay rates by the appropriate phase spaces we obtain, in the interesting region of couplings, $VV$ and $Z\gamma$ cross sections that are one to two orders of magnitude below the bounds discussed in the previous section. In order to separate the contributions to $g_{gg}$ and $g_{\gamma\gamma}$ of possible heavy colored and/or electromagnetically charged beyond the SM (BSM) states, which have been integrated out from our effective theory, from those of the $3^{rd}$ generation quarks and leptons, we explicitly write $g_{gg}$ and $g_{\gamma\gamma}$ as \begin{eqnarray} g_{gg}=\frac{3\alpha_s}{12\pi m_t}\times 4.2 (x+ g_{t})\; , \quad g_{\gamma\gamma}=\frac{3\alpha}{2\pi m_t}\times \left(\frac{2}{3}\right)^2 4.2 \left(z+ g_{t}+ x\left(\frac{Q_x}{2/3}\right)^2\right) \ , \end{eqnarray} with $\alpha_s=g^2_s/(4\pi)$ and $\alpha= e^2/(4\pi)$, where we assume that, as far as the SM quark contributions to Eq.~(\ref{eq-Lint}) are concerned, only the top quark contributes sizably to the loop-induced couplings, and we have used explicitly that the loop function $A^{a}_{1/2}(m_a^2/(4m_t^2))\approx 4.2$ for $m_a=400$ GeV. Furthermore, we parameterize the contributions from heavy BSM states that have been integrated out as coming from vector-like fermions in the fundamental representation of $SU(3)_c$ with EM charge $Q_x$, whose contribution is accounted for by the dimensionless parameter $x$, and a separate BSM contribution from colorless vector-like fermions charged under EM, accounted for by the dimensionless parameter $z$. These loop contributions are normalized with respect to the top quark contribution such that, for example, in the presence of a color-triplet heavy fermion $F$ with EM charge $Q_x=Q_F$, one has: \begin{align} x=x_F= \frac{g_{F}\ m_t}{m_F}\times \frac{2}{4.2} \ , \end{align} where we used $A^a_{1/2}(m_a^2/(4m_F^2))\approx 2$ for $m_F\gg m_a$, and $g_{F}$ is the vector-like coupling of $a$ to the heavy fermion $F$. Analogously, for a colorless vector-like fermion $E$ with EM charge $Q_E$ we would obtain \begin{align} z=z_E= \frac{1}{3}\left(\frac{Q_E}{2/3}\right)^2 \frac{g_{E}\ m_t}{m_E}\times \frac{2}{4.2} \ . \end{align} We restrict ourselves to the case in which the pseudoscalar behaves as a narrow resonance, so that the narrow width approximation (NWA) applies, with a total width $\Gamma_a/m_a\leq 8 \%$.
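As a purely illustrative aid (our own sketch, not code from the analysis; the values of $\alpha_s$, $\alpha$ and $m_t$ below are assumptions), the effective couplings above can be evaluated numerically as follows:
\begin{verbatim}
import math

# Illustrative evaluation of the effective couplings g_gg and g_aa (GeV^-1).
# Input values are assumptions, not taken from the paper.
ALPHA_S = 0.096     # strong coupling at mu ~ 400 GeV (assumed)
ALPHA   = 1/127.9   # EM coupling at the EW scale (assumed)
M_TOP   = 172.5     # top mass in GeV (assumed)
A_HALF  = 4.2       # loop function A^a_{1/2}(m_a^2/(4 m_t^2)) at m_a = 400 GeV

def g_gluons(g_t, x):
    # g_gg = 3 alpha_s / (12 pi m_t) * 4.2 * (x + g_t)
    return 3 * ALPHA_S / (12 * math.pi * M_TOP) * A_HALF * (x + g_t)

def g_photons(g_t, x, z, Q_x=2/3):
    # g_aa = 3 alpha / (2 pi m_t) * (2/3)^2 * 4.2 * (z + g_t + x (Q_x/(2/3))^2)
    return (3 * ALPHA / (2 * math.pi * M_TOP) * (2/3)**2 * A_HALF
            * (z + g_t + x * (Q_x / (2/3))**2))

# illustrative input g_t = 0.78, x = z = 0 (the first benchmark point used later)
print(g_gluons(0.78, 0.0))            # ~1.5e-4 GeV^-1
print(g_photons(0.78, 0.0, 0.0))
\end{verbatim}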
The dominant 2-body partial decay widths of $a$ take the form \begin{eqnarray} \Gamma_{a\to f\bar{f}}&=&N_c\frac{g^2_{f}m_a}{8\pi}\sqrt{1-4\frac{m^2_f}{m^2_a}} \ ,\\ \Gamma_{a\to Zh}&=&\frac{g^2_{Zh}}{16\pi }\frac{m^3_a}{m^2_Z}\left[1+\left(\frac{m^2_h-m^2_Z}{m^2_a}\right)^2-2\left(\frac{m^2_h+m^2_Z}{m^2_a}\right)\right]^{3/2} \ ,\\ \Gamma_{a\to gg}&=&\frac{ g^2_{gg }m^3_a}{8\pi} \ ,\\ \Gamma_{a\to \gamma\gamma}&=&\frac{ g^2_{\gamma\gamma}m^3_a }{8\pi} \ , \end{eqnarray} where $f$ represents the fermions of interest, $f=t,b,\tau$, with $N_c=3$ for $f=t,b$ and $N_c=1$ for $\tau$. Regarding the production of the pseudoscalar, the main channels are gluon and bottom fusion. Following Ref.~\cite{Franceschini:2015kwy}, the hadronic production cross section from parton pairs ${\cal P}\bar{\cal P}$ at an energy $\sqrt{s}$ can be written as \begin{equation} \sigma(pp\to a) = \frac{1}{s\times m_a}\sum_{\cal P} C_{{\cal P}\bar{\cal P}}\Gamma_{a\to{\cal P}\bar{\cal P}} \ . \end{equation} For the LHC at $\sqrt{s}=8,\,13$ TeV, considering only couplings to gluons and bottom-quarks, we obtain the following coefficients: \begin{center} \begin{tabular}{| c | c | c |} \hline $\sqrt{s}$ & $C_{gg}$ & $C_{b\bar b}$ \\ \hline 8 TeV & 4592 & 29 \\ \hline 13 TeV & 40187 & 278 \\ \hline \end{tabular} \end{center} The typical values of the production cross sections at the LHC for $\sqrt{s}=13$ TeV that we find to be consistent with the experimental hints are $\sim 20$ pb and $\sim 10$ pb for gluon and bottom fusion, respectively. Having all these elements at hand, we can calculate, under the NWA, the contribution of $a$ to the different final states that either show a possible hint of BSM physics or otherwise put a constraint on the couplings of the pseudoscalar to SM particles. We do this in the following section and comment on our findings. In particular, we focus on the cases in which there could be a UV contribution to the gluonic operator ($x\neq 0$) and in which such a contribution vanishes ($x=0$). \section{Numerical results} \label{num-results} In this section we study the cross sections of the different production modes of the 400-GeV pseudoscalar, taking into account their dominant decay channels, as a function of two parameters at a time, fixing the rest of the parameters to two benchmark points, one with $x$ = 0 and another one with $x$ $\neq$ 0, defined as follows: \begin{itemize} \item $g_t$ = 0.78, $g_b$ = 0.39, $g_\tau$ = 0.04, $x$ = 0, \item $g_t$ = 0.7, $g_b$ = 0.37, $g_\tau$ = 0.04, $x$ = 0.13. \end{itemize} All the cross sections have been computed with the expressions detailed in the previous section, except the $t \bar t t \bar t$ production, which has been calculated with {\tt MadGraph\_aMC@NLO 2.5.2}~\cite{Alwall:2014hca} at leading order. For the pseudoscalar production via gluon fusion and in association with bottom-quarks we have considered $K$-factors of 2~\cite{CMS:2019pzc} and 1.24~\cite{Bonvini:2016fgf}, respectively, while a $K$-factor of 1.27~\cite{ATLAS:2020hpj}, with a 20\% uncertainty, has been used for $t \bar t t \bar t$ production. Likewise, for each final cross section we consider 1$\sigma$ uncertainties, as detailed in Section~\ref{exp-searches}. We present the results as contour plots of the different cross sections as a function of a pair of couplings, where each contour line is labeled with the corresponding cross-section value at $\pm$1$\sigma$.
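Before describing the plots, a short numerical sketch (ours; the masses and $\alpha_s$ below are assumed values, $g_{Zh}$ is set to zero, and the $K$-factors quoted above are not applied) illustrates how the widths and NWA cross sections combine for the first benchmark point:
\begin{verbatim}
import math

# Widths and NWA cross sections at sqrt(s) = 13 TeV for the benchmark
# g_t = 0.78, g_b = 0.39, g_tau = 0.04, x = 0. Assumed inputs in GeV.
M_A, M_T, M_B, M_TAU = 400.0, 172.5, 4.18, 1.777
ALPHA_S, A_HALF = 0.096, 4.2
GEV2_TO_PB = 3.894e8          # 1 GeV^-2 = 3.894e8 pb
S13 = 13000.0**2              # GeV^2
C_GG, C_BB = 40187, 278       # coefficients from the table above

def gamma_ff(g, m_f, n_c):
    return n_c * g**2 * M_A / (8*math.pi) * math.sqrt(1 - 4*m_f**2/M_A**2)

g_t, g_b, g_tau, x = 0.78, 0.39, 0.04, 0.0
g_gg = 3*ALPHA_S/(12*math.pi*M_T) * A_HALF * (x + g_t)
widths = {'tt':  gamma_ff(g_t, M_T, 3),
          'bb':  gamma_ff(g_b, M_B, 3),
          'tau': gamma_ff(g_tau, M_TAU, 1),
          'gg':  g_gg**2 * M_A**3 / (8*math.pi)}
total = sum(widths.values())
print('Gamma/m_a = %.3f' % (total/M_A))

sigma_gg = C_GG*widths['gg']/(S13*M_A) * GEV2_TO_PB   # gluon fusion, pb
sigma_bb = C_BB*widths['bb']/(S13*M_A) * GEV2_TO_PB   # bottom fusion, pb
print('sigma(gg) ~ %.0f pb, sigma(bb) ~ %.0f pb' % (sigma_gg, sigma_bb))
\end{verbatim}
With these inputs one obtains $\Gamma_a/m_a\simeq 5.5\%$ and cross sections of order 10 pb in both channels before $K$-factors, of the same order as the typical values quoted above.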
The red shaded areas correspond to cross-section values lower than the reference value minus its 1$\sigma$ uncertainty, and the blue ones to values higher than the reference value plus 1$\sigma$. In the case of the ditau channel, we demand consistency at the 1$\sigma$ level with the experimental measurements (given by an ellipse in the gluon and bottom fusion channels, see Section~\ref{exp-searches}). Therefore, both blue and red shaded areas are excluded by data, and only the white area would be allowed by the LHC experimental measurements and consistent with the possible hints of new physics. Recall also that the only channels that show slight excesses or deviations from the SM predictions are $t \bar t$ and $\tau^+ \tau^-$, as well as 4-tops, although in this last case a new pseudoscalar is not particularly favored; the other channels only impose exclusion limits on the total cross sections at 95\% CL. In addition, it is important to note that the channels not shown in the following contour plots are omitted because they impose softer constraints on our parameter space. In particular, we find that the coupling to a $Z$ and a Higgs is restricted to $g_{Zh}\lesssim 0.03$, roughly independently of the other couplings, while the new physics contribution to diphotons, denoted $z$, shows a stronger dependence on $g_t$ and $x$, with larger values allowed for smaller $g_t$ and $x$ ($z\lesssim 1.2$ for $g_t=0.7$ and $x=0$) and smaller values for larger $g_t$ and $x$ ($z\lesssim 0.5$ for $g_t=0.78$ and $x=0.13$). Neither $g_{Zh}$ nor $z$ determines the shape of the white regions in the plots we show. Concerning the branching ratios of the dominant decay channels of the 400-GeV pseudoscalar, we find that in the parameter region consistent with the experimental measurements they are dominated by either ditops or dibottoms: the pseudoscalar coupling to ditops is large but, more interestingly, consistency with the measurements implies a large coupling to dibottoms, up to 30 times the Higgs bottom-quark Yukawa coupling. For the benchmark scenario with $x$ = 0, the maximum value of BR($a \to t \bar t$) allowed by our analysis is 0.970, the minimum one is 0.270 and the average value is 0.675, while the maximum value of BR($a \to b \bar b$) is 0.726, the minimum one is 0.022 and the average value is 0.319. On the other hand, within the benchmark scenario with $x$ = 0.13, the maximum, minimum, and average values allowed by our analysis for the $t \bar t$ ($b \bar b$) decay modes are 0.806, 0.541, and 0.696 (0.450, 0.187, and 0.296), respectively. Notice also that the contribution of the bottom-quark loop to the production of the 400-GeV pseudoscalar via gluon fusion, for the favored values of the $g_b$ and $g_t$ couplings, is of order 6-8\% of that of the top-quark loop.~\footnote{For comparison, in the production of the SM Higgs boson the $b$-quark contribution to gluon fusion is approximately 3-4\% of that of the $t$-quark; note that the coupling of the Higgs to bottoms is much smaller than that of the 400-GeV pseudoscalar.} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{gt-gb-x0-gtau004_kf_v2.png} \includegraphics[width=0.45\textwidth]{gt-gb-x013-gtau004_kf_v2.png} \caption{Production cross-section predictions of relevant 400-GeV pseudoscalar channels with 1$\sigma$ uncertainties in the [$g_t$, $g_b$] plane, with $x$ = 0 (left panel) and $x$ = 0.13 (right panel). In both plots $g_\tau$ is fixed to 0.04.
Each contour line is labeled with the 1$\sigma$ cross-section value of the corresponding channel. Blue and red shaded areas are excluded by data.} \label{contour-plots_gt-gb} \end{figure} In the left panel of Fig.~\ref{contour-plots_gt-gb} we show the values of the cross sections with an uncertainty of 1$\sigma$ of the channels $t \bar t$, $t \bar t t \bar t$, and $\tau^+\tau^-$ in the [$g_t$, $g_b$] plane with $x$ = 0 and $g_\tau$ = 0.04. Each contour line is labeled with the corresponding value at 1$\sigma$, marking the blue (+1$\sigma$) and red (-1$\sigma$) shaded areas that are excluded by data. Therefore, only $g_t$ values ranging from about 0.7 to 0.85 would be allowed by the LHC experimental measurements, provided that $g_b$ lies roughly within the range [0.28, 0.4]. In this favored region the total width varies between 4\% and 6\% of the mass, dominated by the $t \bar t$ decay channel, and the main branching fractions take values in the following ranges: BR($a$ $\to$ $t \bar t$) $\sim$ 0.6-0.8, BR($a$ $\to$ $b \bar b$) $\sim$ 0.2-0.4, BR($a$ $\to$ $\tau^+ \tau^-$) $\sim$ 0.001, and BR($a$ $\to$ $gg$) $\sim$ 0.035-0.045. These latter values of BR($a$ $\to$ $gg$) hardly change as a function of the couplings considered and will not be shown again. The right panel of Fig.~\ref{contour-plots_gt-gb} is devoted to the same analysis but with the coupling $x$ switched on, with a value of 0.13. Since the coupling of the pseudoscalar to gluons now contributes to the $t \bar t$ channel, the constraints on $g_t$ are softer and this coupling can take lower values, close to 0.6. The maximum $g_t$ values are also reduced, to approximately 0.8. The range of allowed values of $g_b$ is likewise reduced, to values from roughly 0.32 to 0.37. In this case the width varies between 3.5\% and 6\% of the mass, and the branching ratios of the dominant channels are BR($a$ $\to$ $t \bar t$) $\sim$ 0.55-0.75, BR($a$ $\to$ $b \bar b$) $\sim$ 0.25-0.45, and BR($a$ $\to$ $\tau^+ \tau^-$) $\sim$ 0.001. The reduction of the white area when going from the left panel ($x=0$) to the right one ($x=0.13$) is driven by the tightened constraint from the ditau channel; the constraint from the ditop channel is in fact relaxed, which by itself would naively have produced a larger white area. This tightening of the ditau constraint can be understood by looking at the right panel of Fig.~\ref{contour-plots_gt-gtau-x}, where we see that the ditau ellipse allows larger values of $g_t$ at $x=0$ than at $x=0.13$; this is sensible, since both $x$ and $g_t$ contribute to the gluon fusion diagram that enters the ditau channel. Coming back to Fig.~\ref{contour-plots_gt-gb}, the preferred values for $g_t$ and $g_b$, calculated with our effective model and compatible with the ATLAS and CMS experimental measurements, lie in the white areas and are therefore able to explain the new physics hints while remaining consistent with the data at 68\% CL. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{gt-gtau-x0-gb039_kf_v2.png} \includegraphics[width=0.45\textwidth]{gt-x-gtau004-gb039_kf_v2.png} \caption{Production cross-section predictions of relevant 400-GeV pseudoscalar channels with 1$\sigma$ uncertainties in the [$g_t$, $g_\tau$] (with $x$ = 0 and $g_b$ = 0.39) and [$g_t$, $x$] ($g_{\tau}$ = 0.04 and $g_b$ = 0.39) planes, left and right panels, respectively. Each contour line is labeled with the 1$\sigma$ cross-section value of the corresponding channel.
Blue and red shaded areas are excluded by data.} \label{contour-plots_gt-gtau-x} \end{figure} The left panel of Fig.~\ref{contour-plots_gt-gtau-x} is dedicated to the pseudoscalar cross-section predictions in the [$g_t$, $g_\tau$] plane, with $x$ = 0 and $g_b$ = 0.39. Contour lines label the values of these cross sections with 1$\sigma$ uncertainty, indicating the blue and red shaded areas not allowed by data. The parameter space region allowed by data is centered at $g_t$ $\sim$ 0.78 and $g_\tau$ $\sim$ 0.040, with $g_t$ varying between 0.74 and 0.82, and $g_\tau$ between 0.033 and 0.047, approximately. In the favored region the width varies between 5.5\% and 6\% of the mass, increasing mainly with $g_t$, since the branching fraction to $\tau$-lepton pairs is small. The main branching fractions take the following values: BR($a$ $\to$ $t \bar t$) $\sim$ 0.65-0.7, BR($a$ $\to$ $b \bar b$) $\sim$ 0.3-0.35, and BR($a$ $\to$ $\tau^+ \tau^-$) $\sim$ 0.001-0.002. It is important to remark here that these non-excluded values of $g_t$ are in accordance with the allowed values in the [$g_t$, $g_b$] plane of Fig.~\ref{contour-plots_gt-gb}. Contrary to the previous contour plots, once the restrictions from the other channels are satisfied, the $b \bar b$ channel does not impose any restriction on the parameter space of the [$g_t$, $g_\tau$] plane, with the rest of the couplings set to the values indicated above; the most restrictive channels are only $t \bar t$, $\tau^+ \tau^-$, and $t \bar t t \bar t$. In the right panel of Fig.~\ref{contour-plots_gt-gtau-x} we show the values of the cross sections with an uncertainty of 1$\sigma$ of the channels $t \bar t$, $t \bar t t \bar t$, and $\tau^+\tau^-$ ($gg$ $\to$ $a$ $\to$ $\tau^+ \tau^-$) in the [$g_t$, $x$] plane. Only $g_t$ values ranging from about 0.68 to 0.83 would be allowed by the LHC experimental measurements, provided that $x$ is less than 0.13. In this favored region the width varies between 4.5\% and 6\% of the mass, dominated by the $t \bar t$ decay channel, since the branching ratio to gluons is small; the branching ratios of the dominant decay channels are BR($a$ $\to$ $t \bar t$) $\sim$ 0.61-0.68, BR($a$ $\to$ $b \bar b$) $\sim$ 0.31-0.38, and BR($a$ $\to$ $\tau^+ \tau^-$) $\sim$ 0.001. In this plane, the $b \bar b$ channel comes back into play by excluding a small portion of the parameter space that the $t \bar t$, $\tau^+ \tau^-$, and $t \bar t t \bar t$ channels would otherwise allow. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{gtau-gb-x0-gt078_kf_v2.png} \includegraphics[width=0.45\textwidth]{gb-x-gt075-gtau004_kf_v2.png} \caption{Production cross-section predictions of relevant 400-GeV pseudoscalar channels with 1$\sigma$ uncertainties in the [$g_\tau$, $g_b$] (with $g_t =$ 0.78 and $x$ = 0) and [$g_b$, $x$] (with $g_t =$ 0.75 and $g_{\tau}$ = 0.04) planes, left and right, respectively. Each contour line is labeled with the 1$\sigma$ cross-section value of the corresponding channel. Blue and red shaded areas are excluded by data.} \label{contour-plots_gtau-gb-x} \end{figure} In the left panel of Fig.~\ref{contour-plots_gtau-gb-x} the 1$\sigma$ cross-section values of the $t \bar t$, $t \bar t t \bar t$, and $b \bar b b \bar b$ channels are displayed in the [$g_\tau$, $g_b$] plane. The allowed values of $g_\tau$ (white area) vary between 0.035 and 0.046, depending on the value of $g_b$, which is constrained to the range [0.37, 0.42].
In the favored region the width varies between 5.5\% and 6\% of the mass, increasing mainly with $g_b$, since the branching ratio to $\tau^+ \tau^-$ is small. The branching fractions of the dominant channels are here BR($a$ $\to$ $t \bar t$) $\sim$ 0.65-0.70, BR($a$ $\to$ $b \bar b$) $\sim$ 0.3-0.35, and BR($a$ $\to$ $\tau^+ \tau^-$) $\sim$ 0.001. These values of $g_\tau$ correspond to those allowed in the left panel of Fig.~\ref{contour-plots_gt-gtau-x}, indicating good agreement of these data at 68\% CL. The contour lines of the cross sections of the $t \bar t$, $b \bar b$, and $\tau^+\tau^-$ channels, with 1$\sigma$ of uncertainty, are shown in the right panel of Fig.~\ref{contour-plots_gtau-gb-x} in the [$g_b$, $x$] plane. Allowed values of $g_b$ vary between 0.33 and 0.4, while $x$ can reach allowed values up to 0.10, depending strongly on $g_b$. In the favored region the width varies between 4.5\% and 5.5\% of the mass, increasing mainly with $g_b$, since the branching fraction to gluons is small. The dominant branching ratios are in this case BR($a$ $\to$ $t \bar t$) $\sim$ 0.64-0.72, BR($a$ $\to$ $b \bar b$) $\sim$ 0.28-0.36, and BR($a$ $\to$ $\tau^+ \tau^-$) $\sim$ 0.001. Again, the values of $x$ and $g_b$ are in agreement with the allowed values of Figs.~\ref{contour-plots_gt-gb} and~\ref{contour-plots_gt-gtau-x}. One significant conclusion is that the most restrictive channels are $t \bar t$, $t \bar t t \bar t$, and $\tau^+ \tau^-$, in which the production of the pseudoscalar and/or its decays are governed by $g_t$, together with the $b \bar b$ channel when the values of $g_b$ are significant and competitive with respect to those of $g_t$. Recall also that in the scenario with $x$ = 0.13 the bounds imposed by the $t \bar t t \bar t$ channel are weaker than those from the other channels. Finally, it is important to keep in mind that the white areas of the six plots shown in this section are compatible with each other and show the values allowed by data for each of the effective couplings of a 400-GeV pseudoscalar that can produce the slight excesses in $t \bar t$ and $\tau^+ \tau^-$ reported by CMS and ATLAS, respectively. \section{UV completions} \label{UVcomp} In this section we describe several models and analyze whether they can reproduce the phenomenology of the previous section. We start with 2HDMs, finding that usually they cannot pass the constraints from flavor violating decays. Then we consider models containing a composite pseudoscalar singlet, and study the case where this state is a pNGB. \subsection{Two Higgs doublet models} These models contain a neutral pseudoscalar that could reproduce the collider phenomenology. In models of Type-I, II, III and IV the couplings with the SM fermions are proportional to the SM Yukawa couplings, up to the factor $t_\beta$ or its inverse, with $t_\beta$ the ratio of the vacuum expectation values (vevs) of the two neutral doublets. In Type-I models all the couplings are given by $\pm t_\beta^{-1}$, in units of the SM Yukawa. In Type-II, up-quark couplings are given by $t_\beta^{-1}$, whereas down-quark and charged-lepton couplings are given by $t_\beta$. In Type-III (also known as lepton-specific or L 2HDM), quarks have a factor $\pm t_\beta^{-1}$ and charged leptons $t_\beta$. In Type-IV (also known as flipped or F 2HDM), up-quarks and charged leptons have a factor $\pm t_\beta^{-1}$ and down-quarks a factor $t_\beta$. These structures of couplings cannot accommodate the excesses.
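A quick numerical illustration of this point (our own sketch, with assumed quark masses and no running-mass refinements) fixes $t_\beta$ in a Type-II model to reproduce the benchmark top coupling and shows how far the resulting bottom coupling falls from the values favored in Section~\ref{num-results}:
\begin{verbatim}
# Sketch (assumptions: m_t = 172.5 GeV, m_b = 4.18 GeV, v = 246 GeV).
# In a Type-II 2HDM the pseudoscalar couples as g_t = (m_t/v)/t_beta and
# g_b = (m_b/v)*t_beta. Fixing t_beta from the benchmark g_t = 0.78:
M_T, M_B, V = 172.5, 4.18, 246.0

t_beta = (M_T / V) / 0.78      # ~0.90
g_b = (M_B / V) * t_beta       # ~0.015, far below the required ~0.39
print(t_beta, g_b)
\end{verbatim}
Conversely, raising $t_\beta$ until $g_b\sim 0.39$ would suppress $g_t$ by more than an order of magnitude, so no choice of $t_\beta$ accommodates both hints simultaneously.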
A generalization of the couplings in the Higgs sector of a 2HDM has been considered in Ref.~\cite{Egana-Ugrinovic:2019dqu} that allows one to enhance the interactions with up- or down-type quarks, while suppressing flavor changing neutral currents via flavor alignment. The authors refer to this framework as Spontaneous Flavor Violation (SFV). Particularly interesting for our study is the case of the up-type SFV 2HDM, where in the mass basis the up-quark diagonal Yukawa matrix is rescaled by a real parameter $\xi$, whereas each down-type Yukawa is rescaled by an independent factor $\kappa_i$, namely: $Y_u={\rm diag}(y_u,y_c,y_t)\to\xi Y_u$ and $Y_d={\rm diag}(y_d,y_s,y_b)\to{\rm diag}(\kappa_d,\kappa_s,\kappa_b)$. A similar framework corresponds to the down-type SFV 2HDM, exchanging the roles of up- and down-type quarks. In the sector of charged leptons: $Y_\ell={\rm diag}(y_e,y_\mu,y_\tau)\to\xi_\ell Y_\ell$. The new parameters are independent, and can be suitably chosen to reproduce the collider phenomenology we wish to describe. The problem is that this model receives strong constraints from flavor physics. Since the neutral Yukawa couplings are aligned, flavor violation requires the presence of a charged Higgs boson. For large $\kappa_b$ this state contributes to $C_7^{bs}$ at one loop, inducing $B\to s\gamma$. Following Ref.~\cite{Egana-Ugrinovic:2019dqu}, for the values of $\xi$ and $\kappa_b$ that can fit the results of Section~\ref{num-results}, we obtain a strong lower bound on the mass of the charged Higgs: $m_H\gtrsim 3.4$~TeV. But in 2HDMs $m_H$ and $m_A$ are related by $m_A^2=m_H^2+v^2(\lambda_4-\lambda_5)/2$, with $V\supset \lambda_4(H_2^\dagger H_1)(H_1^\dagger H_2)+\lambda_5[(H_1^\dagger H_2)^2+{\rm h.c.}]/2$, thus one would have to require $(\lambda_4-\lambda_5)/2=(m_A^2-m_H^2)/v^2\simeq -190$, i.e., of modulus ${\cal O}(100)$, for $m_A\sim 400$~GeV, losing perturbative control of the theory. \subsection{Composite models with a pseudoscalar singlet} An interesting possibility is the presence of a new strongly interacting sector, with a mass gap at an infrared scale of order a few TeV, that leads to bound states, to which we will refer as resonances, one of them being a pseudoscalar. We will consider the case where the pseudoscalar is a singlet under the SM gauge symmetry. Moreover, we will take it as a composite NGB of the new sector, generated by the spontaneous breaking of a global symmetry, such that it can be lighter than the rest of the resonances. In this case the shift symmetry of the NGB singlet forbids a potential, making this state massless; but if the symmetry is explicitly broken by the interactions with the SM, a potential is generated at the radiative level. The SM gauge interactions can provide this explicit breaking of the global symmetry; the interactions with the SM fermions can also do so, the latter being model dependent.
One of the most popular examples of the interactions with fermions is partial compositeness, where at a high UV scale $\Lambda_{\rm UV}\gg$~TeV linear interactions between the SM fermions and the composite operators are generated: ${\cal L}_{\rm UV}=\lambda\bar\psi^{\rm SM}{\cal O}$.~\footnote{$\Lambda_{\rm UV}$ can be, for example, of order $M_{\rm Pl}$ or $M_{\rm GUT}$.} The shift symmetry can also be broken by anomalous interactions with the SM gauge fields, as in the case of the $\eta'$ of QCD, or by anomalies in the new sector, which are independent of the former~\cite{Gripaios:2007tk,Gripaios:2008ei}. It is interesting to consider the case in which the Higgs also arises as a pNGB~\cite{Kaplan:1983fs,Kaplan:1983sm,Georgi:1984ef,Georgi:1984af,Dugan:1984hq,Contino:2003ve,Agashe:2004rs}, which provides an explanation for its mass being smaller than the masses of the other resonances, although tuning is still required~\cite{Agashe:2005dk}. Scenarios with the pseudoscalar singlet and the Higgs both being composite pNGBs have been considered, for example, in Refs.~\cite{Gripaios:2009pe,Franceschini:2015kwy,Gripaios:2016mmi,Chala:2017sjk}, giving estimates of the potential and couplings with the SM sector, as well as building specific realizations. In the following we will describe some general properties of a pseudoscalar singlet that is a composite pNGB, and in the next subsections we will consider some realizations of these ideas for the hints at 400 GeV. In the simplest example the resonances of the strongly interacting sector are determined by one mass scale, namely the decay constant of the NGBs, $f$, which sets the scale of their self-interactions, and one coupling, $g_*$, which characterizes all the interactions between resonances. These quantities can be thought of in analogy with $f_\pi$ and $g_\rho$ for the QCD mesons. We will consider $1\ll g_*\ll4\pi$ and the resonance masses given by $m_*=g_*f$. There can be departures from this very simplified picture: for example, if the NGBs arise from a non-simple group their decay constants can differ, $f_H$ for the Higgs and $f_P$ for the singlet; there can also be different couplings and deviations from the estimate of $m_*$. For simplicity we will assume the one-scale-one-coupling approach, except where explicitly stated. For the interactions with fermions we will consider $\Lambda_{\rm UV}\gg m_*$ and approximate scale invariance in a large window of energies, such that the running of the linear coupling $\lambda$ is driven by the anomalous dimension of ${\cal O}$, leading to an exponentially suppressed coupling for an irrelevant operator and to $\lambda\sim g_*$ for a relevant one~\cite{0406257}. At energies of order $m_*$ the linear interactions lead to a mixing of the elementary and composite fermions, such that the massless fermions, before EW symmetry breaking, are partially composite, with a degree of compositeness $\epsilon_\psi\sim\lambda_\psi(m_*)/g_*$. The Yukawa couplings are modulated by $\epsilon$ as: \begin{equation} y_\psi\sim \epsilon_{\psi_L} g_*\epsilon_{\psi_R} \ . \end{equation} In the case of three generations of resonances, $\epsilon$ and $g_*$ are square matrices of dimension three. When the breaking of the shift symmetry is dominated by partial compositeness, the masses of the (pseudo)scalars can be estimated as: \begin{equation}\label{eq-approx-masses} m_h^2\sim N_c y_t^2 \frac{g_*^2}{(4\pi)^2} v^2 \ ; \qquad m_P^2\sim N_c y_t^2 \frac{g_*^2}{(4\pi)^2} f^2 \ .
\end{equation} The separation between $v$ and $f$ typically requires a tuning of order $\xi\equiv v^2/f^2$, and allows a splitting between $m_h$ and $m_P$. For $f\simeq 800$~GeV one can obtain $m_P\simeq 400$~GeV, although these estimates hold only up to factors of ${\cal O}(1)$.\footnote{Contributions to the pseudoscalar mass from anomalies in the new sector can be estimated as $m_P^2\sim \tilde N_fg_*^2/(4\pi)^2 f^2$, with $\tilde N_f$ determined by the number of fermions responsible for the anomaly.} At energies below $m_*$, one gets a theory with the SM fields and the pNGBs $H$ and $P$. The $P$ interactions with SM fermions and gauge bosons require at least dimension-5 operators \begin{equation} {\cal L}_{\rm eff}\supset \frac{g_{PAA}}{\Lambda}\ P\ A_{\mu\nu}\tilde A^{\mu\nu}+i \frac{g_{P\psi\psi'}}{\Lambda}P \bar \psi_L H \psi'_R\ , \end{equation} where $A_{\mu\nu}$ stands for a generic SM gauge field strength tensor, $\psi_L$ and $\psi'_R$ are SM fermions, and $\Lambda$ is the scale at which these operators are generated, which for simplicity is taken to be the same for all the operators. One could also include terms such as $\partial_\mu P\bar\psi\gamma^\mu\psi$; however, they can be absorbed into the Yukawa and anomalous terms after field redefinitions~\cite{Bellazzini:2015nxw}. The coupling with the gauge bosons, $PA\tilde A$, breaks the shift symmetry of the NGBs, requiring either insertions of the fermion mixings that explicitly break the symmetry, or an anomalous breaking, generating topological terms such as the Wess-Zumino-Witten term~\cite{Wess:1971yu,Witten:1983tw}.~\footnote{See also~\cite{Davighi:2018xwn} for a discussion of topological terms.} To leading order in $1/f_P$ one gets: \begin{equation}\label{eq-FFtilde-cp} {\cal L}_{\rm eff}\supset \frac{P}{16\pi^2 f_P}\left(c_s g_s^2 G\tilde G+c_W g^2 W\tilde W+c_B g'^2 B\tilde B\right) \ . \end{equation} The coefficients can be estimated as: \begin{equation} \left.c_i\right|_{\rm pc}\sim N_c \frac{y_\psi^2}{g_*^2} \ , \qquad \left.c_i\right|_{\rm anom}\sim N_f \ . \end{equation} In both cases the interaction is generated at loop level; in the first case it is modulated by the Yukawa couplings of the SM fermions, whereas in the second case $c_i$ measures the number of fermions $N_f$ generating the anomaly. Rewriting Eq.~(\ref{eq-FFtilde-cp}) in the basis of mass eigenstates, one obtains the interaction with photons, with coefficient $e^2(c_W+c_B)/(16\pi^2f_P)$. As we will show in the next subsections, it is interesting to write $g_{P\psi\psi}$ as a function of $y_\psi$, the Yukawa coupling matrix of the Higgs, in the following form: \begin{equation}\label{eq-AB} \frac{g_{P\psi\psi}}{\Lambda}= \frac{A_\psi y_\psi}{f_P} +\frac{y_\psi B_\psi}{f_P} \ , \end{equation} where $A_\psi$ and $B_\psi$ are $3\times 3$ matrices, determined by the embeddings of the Left- and Right-handed fermions, respectively. After rotation of the SM fermions to the mass basis, $\psi_{L/R}\to U_{L/R} \psi_{L/R}$, one gets: \begin{equation} \frac{g_{P\psi\psi}}{\Lambda}\to U_L^\dagger A_\psi U_L \frac{m_\psi}{f_P} +\frac{m_\psi}{f_P} U_R^\dagger B_\psi U_R\ , \end{equation} where $m_\psi$ is the diagonal fermion mass matrix. As expected, for $A$ and $B$ proportional to the identity the couplings are aligned with the masses, whereas in other cases they are not, and lead to flavor transitions. A better determination of the flavor violating couplings requires assumptions about the flavor structure of the new sector.
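To make this structure concrete, the following small numerical sketch (ours; the matrices below are illustrative placeholders, not fitted to data) rotates couplings of the form of Eq.~(\ref{eq-AB}) to the mass basis and shows how a non-universal $B_\psi$ generates off-diagonal, flavor violating entries:
\begin{verbatim}
import numpy as np

# Illustrative check of Eq. (eq-AB): diagonal but non-universal A, B
# generate flavor violation after rotating to the mass basis.
rng = np.random.default_rng(1)
f_P = 1000.0                               # GeV (assumed)
m_psi = np.diag([0.0025, 0.047, 2.4])      # down-type masses in GeV (assumed)
A = np.eye(3)                              # universal: no FV from this term
B = np.diag([0.5, 0.5, 3.0])               # non-universal third entry

# random orthogonal matrices as stand-ins for U_L, U_R (transpose = dagger)
U_L, _ = np.linalg.qr(rng.normal(size=(3, 3)))
U_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))

g = (U_L.T @ A @ U_L) @ m_psi / f_P + m_psi / f_P @ (U_R.T @ B @ U_R)
print(np.round(g, 6))   # nonzero off-diagonal entries come from the B term only
\end{verbatim}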
We will consider the case of flavor anarchy~\cite{Agashe:2004cp}, where the couplings between fermionic and (pseudo)scalar resonances are given by matrices with all coefficients of order $g_*$, such that flavor transitions between resonances have no suppression. It is well known that in this case bounds from flavor changing neutral currents (FCNC) require $m_*\gtrsim 10-20$~TeV~\cite{Agashe:2004cp,Csaki:2008zd}. In anarchic partial compositeness the size of the coefficients of the rotation matrices is determined by the degree of compositeness. Ordering the flavors by increasing mixing leads to $(U_{\psi_L})_{ij}\sim\epsilon_{q^\psi}^i/\epsilon_{q^\psi}^j$ and $(U_{\psi_R})_{ij}\sim\epsilon_{\psi}^i/\epsilon_{\psi}^j$, with $i<j$. For quarks, assuming that $(U_{u_L})_{ij}\sim(U_{d_L})_{ij}$~\footnote{This assumption is not necessary, since the only condition is $U^\dagger_{u_L}U_{d_L}=V_{\rm CKM}$; it is a simple hypothesis under which we can estimate the mixings. It is also possible to mix the elementary fermions with more than one composite operator, leading to less trivial situations.}, we get: \begin{align} &\epsilon_{q}^i\sim (V_{\rm CKM})_{i3}\epsilon_{q}^3 \ , \qquad i<3 \ ,\\ &\epsilon_\psi^i\sim\frac{m_\psi^i}{v}\frac{1}{(V_{\rm CKM})_{i3}g_*\epsilon_{q}^3} \ ,\qquad \psi=u,d \ . \end{align} These equations determine the degree of compositeness of the chiral quarks in terms of physical parameters such as the CKM angles and the masses. The only parameters left undetermined are $\epsilon_{q}^3$ and $g_*$, although the masses give lower bounds on $\epsilon_{q}^3$ when the Right-handed mixings are saturated: $\epsilon_q^3\gtrsim 1/g_*$. Similar estimates can be made for leptons, although in this case $U_{\rm PMNS}$ depends on the realization of neutrino masses. Assuming that the large mixing angles of the PMNS matrix are generated in the neutrino sector, bounds from flavor violation in the lepton sector are minimized around the Left-Right symmetric limit, $\epsilon_{l}^i\sim\epsilon_e^i$~\cite{Panico:2015jxa}, leading to: \begin{equation} \epsilon_{l,e}^i=\sqrt{\frac{m_\ell^i}{g_*v}} \ . \end{equation} In this framework the coefficients of the rotation matrices can be estimated as in Table~\ref{table-URij}. \begin{table}[ht!] \centering \begin{tabular}{|c|c|c|c|} \hline\rule{0mm}{5mm} & $u$ & $d$ & $e$ \\ \hline\rule{0mm}{5mm} $(U_R)_{12}$ & $\frac{m_u}{m_c\lambda_C}\sim 0.01$ & $\frac{m_d}{m_s\lambda_C}\sim 0.2$ & $\sqrt{\frac{m_e}{m_\mu}}\sim 0.07$ \\ \hline\rule{0mm}{5mm} $(U_R)_{23}$ & $\frac{m_c}{m_t\lambda_C^2}\sim 0.07$ & $\frac{m_s}{m_b\lambda_C^2}\sim 0.4$ & $\sqrt{\frac{m_\mu}{m_\tau}}\sim 0.2$ \\ \hline\rule{0mm}{5mm} $(U_R)_{13}$ & $\frac{m_u}{m_t\lambda_C^3}\sim 7\times 10^{-4}$ & $\frac{m_d}{m_b\lambda_C^3}\sim 0.08$ & $\sqrt{\frac{m_e}{m_\tau}}\sim 0.02$ \\ \hline \end{tabular} \caption{Estimates of the coefficients of the unitary Right-handed matrices using partial compositeness, taking the masses at the TeV scale.} \label{table-URij} \end{table} Integrating out $P$ leads to FCNC through the four-fermion interactions~\cite{Gripaios:2009pe}, \begin{align} & {\cal L}_{\rm eff}=C_\psi^{ijkl}(\bar\psi^i_L\psi^j_R)(\bar \psi^k_L\psi^l_R) \ ,\label{eq-fcnc1}\\ & C_\psi^{ijkl}=g_{P\psi^i\psi^j}g_{P\psi^k\psi^l}\frac{v^2}{2\Lambda^2}\frac{1}{m_P^2} \ . \label{eq-fcnc2} \end{align} The $\Delta F=2$ bounds from meson mixing give constraints on the size of the flavor violating coefficients, $C_{2}^{ij}=C_\psi^{ijij}$.
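The entries of Table~\ref{table-URij} can be reproduced with a few lines (our own cross-check; the running masses at the TeV scale, in GeV, and $\lambda_C\simeq 0.225$ are assumed inputs):
\begin{verbatim}
import math

# Cross-check of the (U_R)_ij estimates; masses in GeV at ~TeV (assumed).
m = {'u': 1.1e-3, 'c': 0.53, 't': 150.0,
     'd': 2.5e-3, 's': 0.047, 'b': 2.4,
     'e': 0.511e-3, 'mu': 0.1057, 'tau': 1.777}
lC = 0.225   # Cabibbo angle

# (U_R)_12, (U_R)_23, (U_R)_13 for the up, down and lepton sectors
print('u:', m['u']/(m['c']*lC), m['c']/(m['t']*lC**2), m['u']/(m['t']*lC**3))
print('d:', m['d']/(m['s']*lC), m['s']/(m['b']*lC**2), m['d']/(m['b']*lC**3))
print('e:', math.sqrt(m['e']/m['mu']), math.sqrt(m['mu']/m['tau']),
      math.sqrt(m['e']/m['tau']))
# output ~ (0.009, 0.07, 6e-4), (0.24, 0.39, 0.09), (0.07, 0.24, 0.017),
# matching the table up to rounding
\end{verbatim}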
There can also be dimension-5 operators with covariant derivatives that can violate flavor, such as $P\bar\psi_{L/R} i\not\!\!D \gamma^5\psi_{L/R}$. These operators can be rewritten in terms of $P\bar\psi_L\gamma^5 H\psi_R'$ and its hermitian conjugate after field redefinitions~\cite{Aguilar-Saavedra:2009ygx,Agashe:2009di}. \subsection{A model with $P$ from SO(6)/SO(5)} Refs.~\cite{Gripaios:2009pe,Chala:2017sjk} have considered the spontaneous breaking of SO(6) to SO(5), which delivers five pNGBs transforming as a ${\bf 5}$ of SO(5). Using SO(4)$\sim$SU(2)$_L\times$SU(2)$_R$, under this group ${\bf 5}\sim({\bf 2},{\bf 2})\oplus({\bf 1},{\bf 1})$, leading to a model with custodial symmetry that generates a Higgs and a pseudoscalar singlet. An extra conserved U(1)$_X$ factor must be added to accommodate the hypercharge of the SM fermions, with $Y=T^3_R+X$. The NGB unitary matrix is given by: \begin{equation} U={\rm exp}(i\sqrt{2}\Pi/f) \ ; \qquad \Pi= h^{\hat a}T^{\hat a} + P T_P \ ; \end{equation} where $T^{\hat a}$ and $T_P$ are the broken generators associated with the Higgs and the pseudoscalar. To determine the interactions of the NGBs with the SM fermions one has to choose the representations of the fermionic composite operators ${\cal O}$ that mix with the elementary fermions. To avoid explicit breaking of the SM gauge symmetry, these SO(6) representations, when decomposed under the SM gauge symmetry, must contain multiplets transforming as the SM fermions; in other words, the EW representations of the SM fermions are embedded into representations of SO(6)$\times$U(1)$_X$. These embeddings are not unique, and the phenomenology of $P$, in many aspects, depends strongly on this choice. We will consider the following embeddings: \begin{equation}\label{eq-embeddings1} {\cal O}_{q^u}, {\cal O}_u \sim {\bf 6}_{2/3} \ ,\qquad {\cal O}_{q^d},{\cal O}_d\sim {\bf 6}_{-1/3} \ , \qquad \psi_l, \psi_e \sim {\bf 6}_{-1} \ . \end{equation} The presence of two embeddings for $q$, $q^u$ and $q^d$, means that it interacts linearly with two different operators: ${\cal L}\supset \bar q(\lambda_{q^u}{\cal O}_{q^u}+\lambda_{q^d}{\cal O}_{q^d})$. The ${\bf6}$ is the smallest-dimensional representation that allows one to protect the $Zb_L\bar b_L$ coupling, which demands embedding $q$ in a $({\bf 2},{\bf 2})_{2/3}$ of SO(4)$\times$U(1)$_X$~\cite{0605341}. $u$ is embedded in the same representation, but since there are two SO(4) singlets, ${\bf 6}\sim{\bf5}\oplus{\bf1}\sim({\bf 2},{\bf 2})\oplus({\bf 1},{\bf 1})\oplus({\bf 1},{\bf 1})$, there is a free parameter $c_u\equiv\cos\theta_u$ that determines the projection over the SO(5) singlet, whereas $s_u\equiv\sin\theta_u$ determines its projection over the ${\bf5}$; in fact there is one $\theta_u$ for each generation. Since ${\bf 6}_{2/3}$ does not contain a $-1/3$ singlet, the down sector is embedded in a ${\bf 6}_{-1/3}$, with a new parameter $\theta_d$, similarly to the up sector. Finally, the leptons are embedded in a ${\bf 6}_{-1}$, with a $\theta_e$ for the lepton singlet. Since the SM fermions do not fill the embeddings, the linear interactions that lead to partial compositeness explicitly break the global symmetry and generate a potential for the NGBs, whose coefficients can be estimated as we did for the quadratic terms in Eq.~(\ref{eq-approx-masses})~\cite{Chala:2017sjk}. We assume that at low energies, of order a few to ten TeV, the strong dynamics generates resonances that mix with the elementary fermions.
Integrating them out, one obtains to leading order: \begin{equation}\label{eq-lyso6} {\cal L}_{\rm eff}\supset\sum_{\psi=u,d,\ell}\bar \psi_L \frac{m_\psi}{v}\left(h+i\frac{P}{f}\cot_\psi\right)\psi_R + {\rm h.c.} \ , \end{equation} where generation indices are understood, such that $\cot_\psi$ is a matrix in generation space: \begin{equation} \cot_\psi={\rm Diag}(\cot\theta_{\psi}^1,\cot\theta_{\psi}^2,\cot\theta_{\psi}^3) \ ,\qquad \cot\theta_{\psi}=c_\psi/s_\psi \ . \end{equation} Comparing with Eq.~(\ref{eq-AB}), in this model we obtain $A=I$ and $B=\cot_\psi$, a diagonal matrix. Rotating the chiral fermions to the mass basis, the diagonal coefficients of the second term of Eq.~(\ref{eq-lyso6}) lead to: \begin{equation}\label{eq-yso6} g_{P\psi^i\psi^i}\frac{v}{\Lambda}=\frac{m_{\psi^i}}{f}\sum_j|U_{\psi_R}|_{ji}^2\cot\theta_\psi^j\simeq\frac{m_{\psi^i}}{f}\cot\theta_\psi^i \ , \end{equation} where on the right-hand side we have assumed that the term with $j=i$ dominates, which is the case when the mixing angles are small and the ratio $\cot\theta_\psi^j/\cot\theta_\psi^i$ does not compensate the smallness of those mixings. Although $\cot_\psi$ is diagonal, if its elements are not universal it does not commute with $U_{\psi_R}$ and generates flavor violating interactions $y_{P\psi^i\psi^j}$. For $i\neq j$: \begin{equation} (U_{\psi_R}^\dagger\cot_\psi U_{\psi_R})_{ij}=(\cot_{\psi_2}-\cot_{\psi_1})(U_{\psi_R}^*)_{2i}(U_{\psi_R})_{2j}+(\cot_{\psi_3}-\cot_{\psi_1})(U_{\psi_R}^*)_{3i}(U_{\psi_R})_{3j} \ . \end{equation} Since SO(6)$\sim$SU(4), it contains anomalous representations. Fermions of the composite sector transforming under these representations contribute to the anomalous interactions with the fields of SO(6): \begin{equation} c_s|_{\rm anom}=0 \ ,\qquad c_W|_{\rm anom}=-c_B|_{\rm anom}=n \ , \end{equation} where $n$ is an integer. Unfortunately, in this model there are no anomalous contributions to the interactions with gluons, which could have increased $\sigma(gg\to P)$ with a small impact on $\sigma_{4t}$, nor with photons, which could have boosted a signal in the clean $\gamma\gamma$ decay channel. \subsubsection{Matching} Making use of the previous results we can obtain the couplings of Eq.~(\ref{eq-Lint}) in the SO(6)/SO(5) model. From Eqs.~(\ref{eq-lyso6}) and (\ref{eq-yso6}) we get: \begin{equation} g_{\psi^i} \simeq \frac{m_{\psi^i}}{f} \cot\theta_\psi^i \ , \qquad g_{gg} \simeq 4.2\frac{g_s^2}{16\pi^2f} \cot\theta_t\ . \end{equation} Using the benchmark point with $x=0$, described in Section~\ref{num-results}, we get for the third generation: \begin{align} & \cot\theta_t \sim 4 \frac{f}{\rm TeV} \ , \qquad \cot\theta_b \sim 145 \frac{f}{\rm TeV} \ , \qquad \cot\theta_\tau \sim 23 \frac{f}{\rm TeV} \ . \end{align} Notice that although $\cot\theta_b\sim 100$, it does not lead to large couplings with the pseudoscalar, since these couplings are proportional to the quark masses. In the case of a flavor universal $\cot\theta$ one gets $g_{d}\sim 2\times 10^{-4}$, $g_{s}\sim 2.5\times 10^{-3}$ and $g_{b}\sim 0.3$. Thus, by choosing these values of $\cot\theta$, the SO(6)/SO(5) model is able to reproduce the phenomenology described in the previous sections, in the case without anomalous interactions from the new sector.
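These $\cot\theta$ values follow directly from inverting $g_{\psi}\simeq (m_\psi/f)\cot\theta_\psi$; the short sketch below (ours; the running masses are assumed values, so the top entry comes out slightly above the quoted estimate) makes the check explicit:
\begin{verbatim}
# Inverting g_psi ~ (m_psi/f) cot(theta_psi) for the x = 0 benchmark,
# with f = 1 TeV. Masses in GeV are assumed (mass-scheme choices shift
# the top entry between ~4 and ~5).
f = 1000.0
g = {'t': 0.78, 'b': 0.39, 'tau': 0.04}
m = {'t': 172.5, 'b': 2.7, 'tau': 1.777}

for psi in g:
    print(psi, g[psi] * f / m[psi])   # ~4.5, ~144, ~23, cf. 4, 145, 23 above
\end{verbatim}
\subsubsection{Flavor transitions} Let us now study the FCNCs induced by $P$-exchange.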
The Wilson coefficient of Eq.~(\ref{eq-fcnc2}) is given by \begin{equation} C_\psi^{ijkl}=\frac{1}{m_P^2}\frac{m_\psi^i}{f}\frac{m_\psi^k}{f}(U_{\psi_R}^\dagger\cot_\psi U_{\psi_R})_{ij}(U_{\psi_R}^\dagger\cot_\psi U_{\psi_R})_{kl} \ . \end{equation} Using the estimates of Table~\ref{table-URij} and Ref.~\cite{0707.0636} we can obtain bounds on \begin{equation} c_\psi^{(ij)} = \cot_\psi^i-\cot_\psi^j \ . \end{equation} Taking $f=1$~TeV and writing the Wilson coefficient as $C=(c/{\rm TeV})^2$, we get the predictions and bounds of Table~\ref{table-bounds-q}. \begin{table}[ht!] \centering \begin{tabular}{|c|c|c|c|} \hline\rule{0mm}{5mm} &\multicolumn{1}{c}{Model prediction}&\multicolumn{2}{|c|}{Bounds} \\ & $c$ & $\sqrt{{\rm Re}(c^2)}$ & $\sqrt{{\rm Im}(c^2)}$ \\ \hline\rule{0mm}{7mm} $K^0$ & $2\times 10^{-5}c_d^{(21)}+3\times 10^{-6}c_d^{(31)}$ & $1.4\times 10^{-4}$ & $1\times 10^{-5}$ \\ \hline\rule{0mm}{7mm} $D^0$ & $10^{-5}c_u^{(21)}+5\times 10^{-8}c_u^{(31)}$ & $4.4\times 10^{-4}$ & $4.4\times 10^{-4}$ \\ \hline\rule{0mm}{7mm} $B_d^0$ & $4\times 10^{-4}(c_d^{(21)}+c_d^{(31)})$ & $8\times 10^{-4}$ & $8\times 10^{-4}$ \\ \hline\rule{0mm}{7mm} $B_s^0$ & $2\times 10^{-3}(c_d^{(21)}+c_d^{(31)})$ & $8\times 10^{-3}$ & $8\times 10^{-3}$ \\ \hline \end{tabular} \caption{Wilson coefficients for meson mixing. Defining the Wilson coefficient as $C=(c/{\rm TeV})^2$ and taking $f=1$~TeV, the second column shows the estimates for $c$ in the model, while the third and fourth columns show the bounds on the real and imaginary parts of these coefficients~\cite{0707.0636}.} \label{table-bounds-q} \end{table} From Table~\ref{table-bounds-q}, and assuming real Wilson coefficients, we obtain that the values $c_d^{(21)}\sim 5$ and $c_d^{(31)}\sim 34$ saturate the bound from the $K$-meson system, whereas the $B_d$ bound requires $c_d^{(21)}+c_d^{(31)}\lesssim 2$ and the $B_s$ bound $c_d^{(21)}+c_d^{(31)}\lesssim 4$. Since $\cot\theta_b\sim 100$, constraints from $B$-meson mixing require a cancellation in $c_d^{(ij)}$ of order a few percent, with one more order of magnitude required for complex coefficients with arbitrary phases. Bounds from $D$ mesons allow $c_u^{(21)}\sim 44$, while constraints on $c_u^{(31)}$ are very soft. Since $\cot\theta_t\sim 5$, no tuning is needed to satisfy these bounds. For leptons, there are strong bounds from $\mu\to 3e$ and $\tau\to \ell\ell'\ell''$, with $\ell$ being muons and electrons. However, since the couplings involved are suppressed by the initial and final lepton masses, for $\cot\theta_e^i\lesssim{\cal O}(30)$ the predicted flavor violating BRs are several orders of magnitude below the bounds. Summarizing, departures from universality must be below the few-percent (permil) level for the down sector in the case of real (complex) coefficients, demanding either tuning or flavor symmetries and a departure from anarchy, whereas they can be of order one for the up sector and the leptons. \subsection{A model with $P$ from a broken U(1)} The coset SO(5)$\times$U(1)$_P\times$U(1)$_X$/SO(4)$\times$U(1)$_X$ also delivers the Higgs and the pseudoscalar as NGBs of a strongly interacting sector.~\footnote{A conserved SU(3) factor accounting for color is understood.} The Higgs emerges from the same coset as in the well-known minimal composite Higgs model, namely SO(5)/SO(4), whereas $P$ emerges from the spontaneous breaking of U(1)$_P$. The extra U(1)$_X$ factor is not spontaneously broken; it is required to accommodate the hypercharge of the SM fermions, with $Y=T^3_R+X$.
Since the NGBs arise from different factors of the symmetry group, $H$ and $P$ can have different decay constants. We assume that the same dynamics is responsible for the spontaneous breaking of both factors and take the decay constants to be of the same order. The NGB unitary matrices are given by: \begin{equation} U_H={\rm exp}(i\sqrt{2}h^{\hat a}T^{\hat a}/f_H) \ ; \qquad U_P= {\rm exp}(i\sqrt{2}PQ_P/f_P) \ ; \end{equation} where $T^{\hat a}$ and $Q_P$ are the broken generators of SO(5) and U(1)$_P$. For a state with well-defined charge $Q_P$, $U_P$ is just a phase. In order to determine the interactions of the NGBs with the SM fermions, one has to choose an embedding into representations of SO(5)$\times$U(1)$_P\times$U(1)$_X$. A simple and realistic case can be obtained by considering the following: \begin{equation}\label{eq-embeddings} {\cal O}_{q,u}\sim {\bf 5}_{2/3,p_{q,u}} \ ,\qquad {\cal O}_d\sim {\bf 10}_{2/3,p_d}, \qquad {\cal O}_l\sim {\bf 5}_{0,p_l} \ ,\qquad {\cal O}_e\sim {\bf 10}_{0,p_e} \ . \end{equation} The first and second subscripts in Eq.~(\ref{eq-embeddings}) are the U(1)$_X$ and U(1)$_P$ charges, respectively. The values of $Q_P$ are {\it a priori} arbitrary. Again, the linear interactions of partial compositeness explicitly break the global symmetry and generate a potential for the NGBs. Ref.~\cite{Chala:2017sjk} has shown that, to generate a potential for $P$, at least one SM fermion must interact with two operators of the new sector with different $p$-charges. Following that reference, we introduce two embeddings for $u_R$, with different charges under U(1)$_P$: $p_u^1$ and $p_u^2$. At energies below the TeV scale, where the massive resonances can be integrated out, one obtains the Yukawa interactions: \begin{align}\label{eq-yukawa} {\cal L}_{\rm eff}\supset &\bar u_L u_R\left[\frac{h}{v}m_u +i\sqrt{2}\frac{P}{f_P}\left(p_qm_u -m_u p_u\right)\right] + \bar d_L d_R\left[\frac{h}{v}m_d +i\sqrt{2}\frac{P}{f_P}(p_qm_d - m_dp_d)\right] \nonumber\\ &+ \bar e_L e_R\left[\frac{h}{v}m_e +i\sqrt{2}\frac{P}{f_P}(p_lm_e-m_ep_e)\right] \end{align} with $p_u=(p_u^1+p_u^2)/2$. In the case of only one generation, to leading order the $P$ couplings are proportional to the Higgs ones, being determined by the $Q_P$ charges and $f_P$. In the case of three generations the $p_\psi$ are diagonal matrices, thus the alignment with the Higgs couplings depends on whether $Q_P$ is universal for the three generations or not. Comparing with Eq.~(\ref{eq-AB}), in this model we obtain $A=\sqrt{2}p_{\psi_L}$ and $B=-\sqrt{2}p_{\psi_R}$. If the diagonal terms dominate the sums, the flavor conserving couplings can be approximated by: \begin{equation}\label{eq-yso5u1} g_{P\psi^i\psi^i}\frac{v}{\Lambda}=\sqrt{2}\frac{m_{\psi^i}}{f}\sum_j(|U_{\psi_L}|_{ji}^2p_{\psi_L}^{(j)}-|U_{\psi_R}|_{ji}^2p_{\psi_R}^{(j)})\simeq\sqrt{2}\frac{m_{\psi^i}}{f}(p_{\psi_L}^{(i)}-p_{\psi_R}^{(i)}) \ . \end{equation} The flavor violating couplings now have contributions from both the Left- and Right-handed unitary matrices; for $i\neq j$: \begin{align}\label{eq-gijso5u1} g_{P\psi^i\psi^j}\frac{v}{\Lambda}= &[(p_{\psi_L}^2-p_{\psi_L}^1)(U_{\psi_L}^*)_{2i}(U_{\psi_L})_{2j}+(p_{\psi_L}^3-p_{\psi_L}^1)(U_{\psi_L}^*)_{3i}(U_{\psi_L})_{3j}]\frac{m_\psi^j}{f_P} \nonumber \\ &-[(p_{\psi_R}^2-p_{\psi_R}^1)(U_{\psi_R}^*)_{2i}(U_{\psi_R})_{2j}+(p_{\psi_R}^3-p_{\psi_R}^1)(U_{\psi_R}^*)_{3i}(U_{\psi_R})_{3j}]\frac{m_\psi^i}{f_P} \ .
\end{align} The triangle anomaly with one U(1)$_P$ and two SU(3)$_c$ or SO(5) generators gives anomalous couplings~\cite{Franceschini:2015kwy}; for a multiplet of composite fermions $f$: \begin{equation}\label{eq-c-matching-cp} c_s|_{\rm anom}=\sum_f p_f d_2^{(f)} I_3^{(f)} \ , \quad c_W|_{\rm anom}=\sum_f p_f d_3^{(f)} I_2^{(f)} \ , \quad c_B|_{\rm anom}=\sum_f p_f d_2^{(f)} d_3^{(f)} (Y^{(f)})^2 \ , \end{equation} where $d_2$ and $d_3$ ($I_2$ and $I_3$) are the dimensions (indices) of the representations under SU(2)$_L$ and SU(3)$_c$, respectively. In the present example, for each generation of fermionic resonances transforming as in Eq.~(\ref{eq-embeddings}), taking $p_f=1$ for all the fermions, one obtains: $c_s\simeq 12$, $c_W\simeq 22$ and $c_B\simeq 14$. \subsubsection{Matching} To reproduce the phenomenology, we match the couplings of Eq.~(\ref{eq-Lint}) in the SO(5)$\times$U(1)/SO(4) model. From Eqs.~(\ref{eq-yukawa}) and~(\ref{eq-yso5u1}) we obtain: \begin{equation} g_{t} \simeq \sqrt{2}\frac{m_{t}}{f_P} (p_q^{(3)}-p_u^{(3)}) \ , \qquad g_{b} \simeq \sqrt{2}\frac{m_{b}}{f_P} (p_q^{(3)}-p_d^{(3)}) \ , \qquad g_{\tau} \simeq \sqrt{2}\frac{m_{\tau}}{f_P} (p_l^{(3)}-p_e^{(3)}) \ . \end{equation} Taking into account Eq.~(\ref{eq-FFtilde-cp}), we obtain for the anomalous coupling to gluons \begin{align} g_{gg} \simeq \frac{g_s^2}{16\pi^2f_P} \left[4.2\sqrt{2}(p_q^{(3)}-p_u^{(3)}) + 4 c_s|_{\rm anom}\right] \ , \end{align} with $c_s|_{\rm anom}$ given in Eq.~(\ref{eq-c-matching-cp}). Similar expressions can be derived for the EW gauge bosons. For the benchmark points we get: \begin{align}\label{eq-pvalues} p_q^{(3)}-p_u^{(3)} \simeq 2.6 \frac{f_P}{\rm TeV} \ , \qquad p_q^{(3)}-p_d^{(3)} \sim 10^2 \frac{f_P}{\rm TeV} \ , \qquad p_l^{(3)}-p_e^{(3)} \sim 20 \frac{f_P}{\rm TeV} \ . \end{align} The bottom-quark coupling thus requires charges of ${\cal O}(100)$. The global current associated with the symmetry U(1)$_P$ is expected to create spin-one composite states, which can couple to the composite fermions with coupling $\tilde g_*$ and charge $Q_P$. A well-defined perturbative description of the theory of resonances requires $\tilde g_* Q_P\ll 4\pi$, thus for charges of ${\cal O}(100)$ one has to demand $\tilde g_* \ll 0.1$. However, since the mass of the spin-one state is estimated as $\sim \tilde g_* f_P$, for such a small U(1)$_P$ coupling one would expect a very light resonance. This is in fact expected, given that for such a large charge the UV running would imply a Landau pole at low energies, signaled by the presence of low-mass resonances. We consider this to be a more serious problem than the flavor constraints discussed in the next subsection, which depend on several assumptions about the UV. Therefore, this model is in strong tension with the large coupling $g_b$ that is required at 68\% CL by the $\tau$-channel with pseudoscalar production via bottom fusion. However, this strong tension can be alleviated by considering the 95\% CL limits, which contain a region with $g_b=0$ and $g_\tau\gtrsim 0.023$, requiring $(p_l^{(3)}-p_e^{(3)})\sim 10$. For the second benchmark point, where an anomalous contribution to the gluon coupling is included, taking $x\sim 0.13\pm 0.05$ we obtain: \begin{equation} \sum_f p_f d_2^{(f)} \simeq (1.4-2.7) \frac{f_P}{\rm TeV} \ . \end{equation} \subsubsection{Flavor} Using the couplings of Eq.~(\ref{eq-gijso5u1}) in Eq.~(\ref{eq-fcnc2}) one obtains the Wilson coefficients induced by $P$-exchange in this model.
The bounds on deviations from universality of $p_{\psi_R}^{(i)}$ are as in Table~\ref{table-bounds-q}, replacing $c_\psi^{(ij)}$ by $\Delta p_{\psi_R}^{(ij)}\equiv(p_{\psi_R}^{(j)}-p_{\psi_R}^{(i)})$. Thus the constraints on $c_\psi^{(ij)}$ apply also to $\Delta p_{\psi_R}^{(ij)}$. For leptons, in the Left-Right symmetric limit, constraints on $\Delta p_{l}^{(ij)}$ are similar to those on $\Delta p_{e}^{(ij)}$. In this model there are also bounds on $\Delta p_{\psi_L}^{(ij)}$, which can be obtained by a similar calculation, taking into account that the Left-handed angles are of CKM size. Assuming real Wilson coefficients, for the Kaon system we get $\Delta p_q^{(21)}\lesssim 7$ and $\Delta p_q^{(31)}\lesssim 3\times10^3$, for $B_d$: $\Delta p_q^{(21)},\Delta p_q^{(31)}\lesssim 17$, and for $B_s$: $\Delta p_q^{(21)},\Delta p_q^{(31)}\lesssim 34$. Since the estimate of Eq.~(\ref{eq-pvalues}) gives $(p_q^{(3)}-p_d^{(3)})\gtrsim 10^2$, the constraints from meson mixing require cancellations at the few-percent level, whereas for complex coefficients the tuning is at the permil level. For $D$ mesons $\Delta p_q^{(21)}\lesssim 2$, while constraints on $\Delta p_q^{(31)}$ are very soft. \section{Conclusions} \label{conclu} Several experimental results from the LHC could be pointing to a new pseudoscalar state at 400 GeV, mostly coupled to the third generation of fermions. Results from CMS in the $t\bar t$ final state, as well as results from ATLAS in the $\tau^+\tau^-$ channel, favor a neutral pseudoscalar particle that is produced via gluon fusion or in association with bottom-quarks. We have analyzed from a phenomenological perspective, initially in a bottom-up approach, the couplings that such a state should have with those fermions and gauge bosons in order to reproduce the different excesses, while at the same time keeping the related channels that show no deviations with respect to the SM below their bounds. Scanning over the parameter space of the most general CP-invariant interaction Lagrangian linear in the pseudoscalar state $a$ that can be written up to dimension-5 operators, we have found regions in which the new physics hints, as well as all experimental constraints, can be satisfied, and in which the pseudoscalar coupling to gluons can be induced by the SM top quark, which gives by far the dominant contribution, though there can also be room for contributions from heavy new states. We found that the couplings of the pseudoscalar to top quarks and $\tau$ leptons are of the same order of magnitude as the Higgs ones, while the coupling to bottom-quarks is required to be $\sim 20-30$ times larger than that of the Higgs. Though our low-energy phenomenological effective model satisfies all current experimental bounds and provides an explanation for the hints in the ditop and ditau final states for the regions of parameter space considered, one may wonder in which of these and/or other channels one could expect to find a signal of the presence of the pseudoscalar in future measurements. In this respect, one of the channels in which one would expect to be able to probe the previous scenario in the future is the production of 4-tops, in which there is currently an excess that is around twice the SM prediction. Furthermore, given the hint of an excess in the $\tau^+\tau^-$ channel initiated by $b$-quarks, one could also expect the pseudoscalar to provide an excess in the $b\bar b$ final states in future measurements.
On the other hand, diboson channels are more model dependent due to their possible UV contributions, so there are no robust predictions for diboson final states, though one would expect any possible signal to show up first in the diphoton channel due to its cleanness. We have also considered a set of gauge-invariant models, analyzing their capability to reproduce the collider phenomenology related to the hints at 400 GeV. We have found that, although the usual 2HDMs of Type I-IV cannot reproduce the pseudoscalar couplings required by the phenomenology, more sophisticated models such as the SFV 2HDM could in principle do so~\cite{Egana-Ugrinovic:2019dqu}. However, the contributions of this model to flavor-changing processes, induced at the radiative level by exchange of a charged Higgs, require the masses of these states to be above $\sim 3$~TeV, demanding quartic couplings beyond the perturbative regime to split the pseudoscalar mass $m_A$ from the CP-even heavy Higgs mass $m_H$. In a different direction, models with a new strongly interacting sector leading to a light pseudoscalar singlet could also potentially reproduce the LHC phenomenology we wish to address. We have analyzed two specific realizations that have already been considered in the literature, although in a different context. One of the models contains a composite state that is a pseudo Nambu-Goldstone boson arising from a spontaneous breaking of SO(6) to SO(5), whose Yukawa couplings are aligned with the Higgs ones. In this model SO(6) anomalies can generate contributions to the couplings with massive EW gauge bosons, but not with gluons or photons. Under the assumption of flavor anarchy, bounds from mixing in the Kaon and $B$-meson systems require tuning to obtain an approximately universal factor in the down sector; another interesting possibility, which would require a dedicated analysis, is the introduction of flavor symmetries. The other model we consider is based on the coset $SO(5)\times U(1)_P\times U(1)_X /SO(4)\times U(1)_X$, in which the light state arises from the spontaneous breaking of a $U(1)_P$ symmetry of the new sector, which can also generate anomalous contributions to the couplings with gluons and photons. However, at the $1\sigma$ level the coupling of the bottom-quark demands U(1)$_P$ charges of the new composite fermions of ${\cal O}(100)$, introducing some tension. At the $2\sigma$ level this requirement is relaxed, since the $\tau^+\tau^-$ channel with initial $b$-quarks is compatible with the SM. Lastly, it would be interesting to study models with a pseudoscalar arising from other cosets, in particular from unified groups, where one could expect to obtain contributions to the gluon coupling from the anomaly, as well as models with elementary weakly-coupled states. \section*{Acknowledgments} The authors would like to thank Ezequiel \'Alvarez for help with MGME and V\'{\i}ctor Mart\'{\i}n-Lozano for collaboration during the early stages of this article and for fruitful discussions. The work of EA is partially supported by the ``Atracci\'on de Talento'' program (Modalidad 1) of the Comunidad de Madrid (Spain) under the grant number 2019-T1/TIC-14019 and by the Spanish Research Agency (Agencia Estatal de Investigaci\'on) through the grant IFT Centro de Excelencia Severo Ochoa SEV-2016-0597. The work is also partially supported by CONICET and ANPCyT under projects PICT 2016-0164, PICT 2017-2751, PICT 2017-0802, PICT 2017-2765 and PICT 2018-03682. \bibliographystyle{JHEP}
\section{Introduction} Let $\sigma:U \rightarrow [n]$ be a permutation of elements of an $n$-set $U$. For two disjoint subsets $A,B$ of $U$, we say $A \prec_{\sigma} B$ when every element of $A$ precedes every element of $B$ in $\sigma$, i.e., $\sigma(a) < \sigma(b), \forall (a,b) \in A \times B$. Otherwise, we say $A \nprec_{\sigma} B$. We say that $\sigma$ {\em separates} $A$ and $B$ if either $A \prec_{\sigma} B$ or $B \prec_{\sigma} A$. We use $a \prec_{\sigma} b$ to denote $\{a\} \prec_{\sigma} \{b\}$. For two subsets $A, B$ of $U$, we say $A \preceq_{\sigma} B$ when $A \setminus B \prec_{\sigma} A\cap B \prec_{\sigma} B \setminus A$. In this paper, we introduce and study a notion called \emph{pairwise suitable family of permutations} for a hypergraph $H$. \begin{definition} \label{definitionPairwiseSuitable} A family $\mathcal{F}$ of permutations of $V(H)$ is \emph{pairwise suitable} for a hypergraph $H$ if, for every two disjoint edges $e,f \in E(H)$, there exists a permutation $\sigma \in \mathcal{F}$ which separates $e$ and $f$. The cardinality of a smallest family of permutations that is pairwise suitable for $H$ is called the {\em separation dimension} of $H$ and is denoted by $\pi(H)$. \end{definition} A family $\mathcal{F} = \{\sigma_1, \ldots, \sigma_k\}$ of permutations of a set $V$ can be seen as an embedding of $V$ into $\mathbb{R}^k$ with the $i$-th coordinate of $v \in V$ being the rank of $v$ in $\sigma_i$. Similarly, given any embedding of $V$ in $\mathbb{R}^k$, we can construct $k$ permutations by projecting the points onto each of the $k$ axes and then reading them along each axis, breaking ties arbitrarily. From this, it is easy to see that $\pi(H)$ is the smallest natural number $k$ so that the vertices of $H$ can be embedded into $\mathbb{R}^k$ such that any two disjoint edges of $H$ can be separated by a hyperplane normal to one of the axes. This motivates us to call such an embedding a {\em separating embedding} of $H$ and $\pi(H)$ the {\em separation dimension} of $H$. The study of similar families of permutations dates back to the work of Ben Dushnik in 1947, where he introduced the notion of \emph{$k$-suitability} \cite{dushnik}. A family $\mathcal{F}$ of permutations of $[n]$ is \emph{$k$-suitable} if, for every $k$-set $A \subseteq [n]$ and for every $a \in A$, there exists a $\sigma \in \mathcal{F}$ such that $A \preceq_{\sigma} \{a\}$. Let $N(n,k)$ denote the cardinality of a smallest family of permutations that is $k$-suitable for $[n]$. In 1971, Spencer \cite{scramble} proved that $\log \log n \leq N(n,3) \leq N(n,k) \leq k2^k\log \log n$. He also showed that $N(n,3) < \log \log n + \frac{1}{2} \log \log \log n + \log (\sqrt{2}\pi) + o(1)$. Fishburn and Trotter, in 1992, defined the \emph{dimension} of a hypergraph on the vertex set $[n]$ to be the minimum size of a family $\mathcal{F}$ of permutations of $[n]$ such that every edge of the hypergraph is an intersection of \emph{initial segments} of $\mathcal{F}$ \cite{FishburnTrotter1992}. It is easy to see that an edge $e$ is an intersection of initial segments of $\mathcal{F}$ if and only if for every $v \in [n] \setminus e$, there exists a permutation $\sigma \in \mathcal{F}$ such that $e \prec_{\sigma} \{v\}$. F\"{u}redi, in 1996, studied the notion of a \emph{$3$-mixing} family of permutations \cite{Furedi1996}.
A family $\mathcal{F}$ of permutations of $[n]$ is called $3$-mixing if for every $3$-set $\{a, b, c\} \subseteq [n]$ and a designated element $a$ in that set, one of the permutations in $\mathcal{F}$ places the element $a$ between $b$ and $c$. It is clear that $a$ is between $b$ and $c$ in a permutation $\sigma$ if and only if $\{a,b\} \preceq_{\sigma} \{a,c\}$ or $\{a,c\} \preceq_{\sigma} \{a,b\}$. Such families of permutations with small sizes have found applications in showing upper bounds for many combinatorial parameters like poset dimension \cite{kierstead1996order}, product dimension \cite{FurediPrague}, boxicity \cite{RogSunSiv} etc. The notion of separation dimension introduced here seems very natural but, to the best of our knowledge, it has not been studied in this generality before. Apart from that, a major motivation for us to study this notion of separation is its interesting connection with a certain well studied geometric representation of graphs. In fact, we show that $\pi(H)$ is the same as the \emph{boxicity} of the intersection graph of the edge set of $H$, i.e., the line graph of $H$. An axis-parallel $k$-dimensional box or a \emph{$k$-box} is a Cartesian product $R_1 \times \cdots \times R_k$, where each $R_i$ is a closed interval on the real line. For example, a line segment lying parallel to the $X$ axis is a $1$-box, a rectangle with its sides parallel to the $X$ and $Y$ axes is a $2$-box, a rectangular cuboid with its sides parallel to the $X$, $Y$, and $Z$ axes is a $3$-box and so on. A {\em box representation} of a graph $G$ is a geometric representation of $G$ using axis-parallel boxes as follows. \begin{definition} \label{definitionBoxicity} A \emph{$k$-box representation} of a graph $G$ is a function $f$ that maps each vertex in $G$ to a $k$-box in $\mathbb{R}^k$ such that, for all vertices $u,v$ in $G$, the pair $\{u,v\}$ is an edge if and only if $f(u)$ intersects $f(v)$. The \emph{boxicity} of a graph $G$, denoted by $\operatorname{boxicity}(G)$, is the minimum positive integer $k$ such that $G$ has a $k$-box representation. \end{definition} Box representation is a generalisation of the interval representation of \emph{interval graphs} (intersection graphs of closed intervals on the real line). From the definition of boxicity, it is easy to see that interval graphs are precisely the graphs with boxicity $1$. The concept of boxicity was introduced by F.S. Roberts in 1969 \cite{Roberts}. He showed that every graph on $n$ vertices has an $\floor{ n/2 }$-box representation. The $n$-vertex graph whose complement is a perfect matching is an example of a graph whose boxicity is equal to $n/2$. Upper bounds for boxicity in terms of other graph parameters like maximum degree, treewidth, minimum vertex cover, degeneracy etc. are available in the literature. Adiga, Bhowmick, and Chandran showed that the boxicity of a graph with maximum degree $\Delta$ is $O(\Delta \log^2 \Delta)$ \cite{DiptAdiga}. Chandran and Sivadasan proved that the boxicity of a graph with treewidth $t$ is at most $t+2$ \cite{CN05}. It was shown by Adiga, Chandran and Mathew that the boxicity of a $k$-degenerate graph on $n$ vertices is $O(k \log n)$ \cite{RogSunAbh}. Boxicity has also been studied in relation with other dimensional parameters of graphs like partial order dimension and threshold dimension \cite{DiptAdiga,Yan1}. Studies on box representations of special graph classes too are available in abundance.
Scheinerman showed that every outerplanar graph has a $2$-box representation \cite{Scheiner} while Thomassen showed that every planar graph has a $3$-box representation \cite{Thoma1}. Results on the boxicity of series-parallel graphs \cite{CRB1}, Halin graphs \cite{halinbox}, chordal graphs, AT-free graphs, permutation graphs \cite{CN05}, circular arc graphs \cite{Dipt}, chordal bipartite graphs \cite{SunMatRog} etc. can be found in the literature. Here we are interested in the boxicity of the line graph of hypergraphs. \begin{definition} \label{definitionLineGraph} The \emph{line graph} of a hypergraph $H$, denoted by $L(H)$, is the graph with vertex set $V(L(H)) = E(H)$ and edge set $E(L(H)) = \{\{e,f\} : e, f \in E(H), e \cap f \neq \emptyset \}$. \end{definition} For the line graph of a graph $G$ with maximum degree $\Delta$, it was shown by Chandran, Mathew and Sivadasan that its boxicity is $\order{\Delta \log\log \Delta}$ \cite{RogSunSiv}. It was in their attempt to improve this result that the authors stumbled upon pairwise suitable families of permutations and their relation with the boxicity of the line graph of $G$. Perhaps we should mention in passing that, though line graphs of graphs form a proper subclass of graphs, every graph is the line graph of some hypergraph. \subsection{Summary of results} \newcounter{counterResult} Some of the results in this paper are interesting because of their consequences. Some are interesting because of their connections with other questions in combinatorics, which are exploited to good effect in their proofs. Hence, in this section summarising our results, we indicate those connections along with the consequences. The definitions of the parameters mentioned are given in the appropriate sections. As noted earlier, the motivating result for this paper is the following: \begin{enumerate} \item For any hypergraph $H$, $\pi(H)$ is precisely the boxicity of the line graph of $H$, i.e., $$\pi(H) = \operatorname{boxicity}(L(H)) \mbox{\hspace{5ex} (Theorem \ref{theoremConnectionBoxliPermutation})}.$$ \setcounter{counterResult}{\value{enumi}} \end{enumerate} It is the discovery of this intriguing connection that aroused our interest in the study of pairwise suitable families of permutations. This immediately makes every result in the area of boxicity applicable to separation dimension. For example, any hypergraph with $m$ edges can be separated in $\mathbb{R}^{\floor{m/2}}$; for every $m \in \mathbb{N}$, there exist hypergraphs with $m$ edges which cannot be separated in any proper subspace of $\mathbb{R}^{\floor{m/2}}$; every hypergraph whose line graph is planar can be separated in $\mathbb{R}^3$; every hypergraph whose line graph has treewidth at most $t$ can be separated in $\mathbb{R}^{t+2}$; hypergraphs separable in $\mathbb{R}^1$ are precisely those whose line graphs are interval graphs; and so on. Further, algorithmic and hardness results from boxicity carry over to separation dimension, since constructing the line graph of a hypergraph can be done in quadratic time. We mention just two of them. Deciding if the separation dimension is at most $k$ is NP-Complete for every $k \geq 2$ \cite{Coz,Kratochvil} and, unless NP = ZPP, for any $\epsilon >0$, there does not exist a polynomial time algorithm to approximate the separation dimension of a hypergraph within a factor of $m^{1/2 -\epsilon}$ where $m = |E(H)|$ \cite{DiptAdigaHardness} \footnote{A recent preprint claims that the inapproximability factor can be improved to $m^{1 - \epsilon}$, which is essentially tight.
\cite{chalermsookgraph}.}. In this work, we have tried to find bounds on the separation dimension of a hypergraph in terms of natural invariants of the hypergraph, such as maximum degree and rank. The next two results are for rank-$r$ hypergraphs. \begin{enumerate} \setcounter{enumi}{\value{counterResult}} \item For any rank-$r$ hypergraph $H$ on $n$ vertices $$\pi(H) \leq \frac{e \ln 2}{\pi \sqrt{2}} 4^r \sqrt{r} \log n \mbox{\hspace{5ex} (Theorem \ref{theoremHypergraphSizeUpperbound})}. $$ The bound is obtained by direct probabilistic arguments. The next result shows that this bound is tight up to a factor of a constant times $r$. \item Let $K_n^r$ denote the complete $r$-uniform hypergraph on $n$ vertices with $r > 2$. Then $$ c_1 \frac{4^r}{\sqrt{r-2}} \log n \leq \pi(K_n^r) \leq c_2 4^r \sqrt{r} \log n,$$ for $n$ sufficiently large compared to $r$, where $c_1 = \frac{1}{2^7}$ and $c_2 = \frac{e\ln2}{\pi\sqrt{2}} < \frac{1}{2}$ (Theorem \ref{theoremHypergraphSizeLowerbound}). The lower bound is obtained by first proving that the separation dimension of $K_n$, the complete graph on $n$ vertices, is in $\orderatleast{\log n}$ and then showing that, given any separating embedding of $K_n^r$ in $\mathbb{R}^d$, the space $\mathbb{R}^d$ contains ${2r-4 \choose r-2}$ orthogonal subspaces such that the projection of the given embedding on to these subspaces gives a separating embedding of a $K_{n-2r+4}$. \item For any rank-$r$ hypergraph $H$ of maximum degree $D$, $$\pi(H) \in \order{rD \log^2(rD)} \mbox{\hspace{5ex} (Corollary \ref{corollaryHypergraphMaxDegree})}. $$ This is a direct consequence of the nontrivial fact that $\operatorname{boxicity}(G) \in \order{\Delta \log^2 \Delta}$ for any graph $G$ of maximum degree $\Delta$ \cite{DiptAdiga}. Further, using the fact, again a nontrivial one, that there exist graphs of maximum degree $\Delta$ with boxicity $\orderatleast{\Delta \log \Delta}$ \cite{DiptAdiga}, we show that there exist rank-$r$ hypergraphs of maximum degree $2$ with separation dimension in $\orderatleast{r \log r}$. It is trivial to see that the separation dimension of hypergraphs with maximum degree $1$ cannot be more than $1$. \setcounter{counterResult}{\value{enumi}} \end{enumerate} Below we highlight the main results in this paper when we restrict $H$ to be a graph. Every graph has a non-crossing straight-line 3D drawing, which is nothing but an embedding of the vertices of the graph into $\mathbb{R}^3$ such that any two disjoint edges can be separated by a plane. Hence if we allow separating hyperplanes of all orientations, then we can have a separating embedding of every graph in $\mathbb{R}^3$. But if we demand that all the separating hyperplanes be normal to one of the coordinate axes, then the story changes. \vspace{1ex} \noindent For a graph $G$ on $n$ vertices, we show the following upper bounds. \begin{enumerate} \setcounter{enumi}{\value{counterResult}} \item $\pi(G) \leq 6.84 \log n$ (Theorem \ref{theoremBoxliSize}). This bound is obtained by simple probabilistic arguments. We also prove that this bound is tight up to constant factors by showing that the complete graph $K_n$ on $n$ vertices has $\pi(K_n) \geq \log \floor{n/2}$. \item $\pi(G) \leq 2^{9 \operatorname{log^{\star}} \Delta} \Delta$, where $\Delta$ denotes the maximum degree of $G$ (Theorem \ref{theoremBoxliDelta}). This is an improvement over the upper bound of $\order{\Delta \log \log \Delta}$ for the boxicity of the line graph of $G$ proved in \cite{RogSunSiv}.
The proof technique works by recursively partitioning the graph into $\order{\Delta / \log \Delta}$ parts such that no vertex has more than $\frac{1}{2} \log \Delta$ neighbours in any part and then attacking all possible pairs of these parts. \item $\pi(G) \in O(k \log \log n)$, where $k$ is the degeneracy of $G$ (Theorem \ref{theoremBoxliDegeneracy}). This is proved by decomposing $G$ into $2k$ star forests and using $3$-suitable permutations of the stars in every forest and the leaves in every such star simultaneously. We also show that the $\log \log n$ factor in this bound cannot be improved in general by demonstrating that for the fully subdivided clique $K_n^{1/2}$, which is a $2$-degenerate graph, $\pi(K_n^{1/2}) \in \orderexactly{\log \log n}$. \item $\pi(G) \in \order{\log(t+1)}$, where $t$ denotes the treewidth of $G$ (Theorem \ref{theoremBoxliTreewidth}). This is proved by adjoining a family of pairwise suitable permutations of the colour classes of a minimal chordal supergraph of $G$ with $2\log (t+1)$ more ``colour sensitive'' permutations based on a DFS traversal of the tree. This bound is also seen to be tight up to constant factors because the clique $K_n$, whose treewidth is $n-1$, has $\pi(K_n) \geq \log \floor{n/2}$. \item $\pi(G) \leq 2\chi_a + 13.68\log \chi_a$ and $\pi(G) \leq \chi_s + 13.68\log \chi_s$, where $\chi_a$ and $\chi_s$ denote, respectively, the acyclic chromatic number and star chromatic number of $G$ (Theorem \ref{theoremBoxliAcyclicStar}). Both bounds are obtained by exploiting the structure of the graph induced on a pair of colour classes. These bounds, when combined with certain results from the literature, immediately give a few more upper bounds (Corollary \ref{corollaryBoxliGenus}): (i) $\pi(G) \in O(g^{4/7})$, where $g$ is the Euler genus of $G$; and (ii) $\pi(G) \in O(t^2 \log t)$, if $G$ has no $K_t$ minor. \item $\pi(G) \leq 3$, if $G$ is planar (Theorem \ref{theoremBoxliPlanar}). This is proved using Schnyder's celebrated result on planar drawing \cite{schnyder1990embedding}. This bound is the best possible since the separation dimension of $K_4$ is $3$. \item $\pi(G^{1/2}) \leq (1 + o(1)) \log \log (\chi - 1) + 2$, where $G^{1/2}$ is the graph obtained by subdividing every edge of $G$ and $\chi$ is the chromatic number of $G$ (Corollary \ref{corollarySubdivisionChromaticNumber}). This is proved by associating with every graph $G$ an interval order whose dimension is at least $\pi(G^{1/2})$ and whose height is less than the chromatic number of $G$. The tightness, up to a factor of $2$, of the above bound follows from our result that $\pi(K_n^{1/2}) \geq \frac{1}{2} \floor{\log\log (n-1)}$. \item If $G$ is the $d$-dimensional hypercube $Q_d$, then $$ \frac{1}{2} \floor{\log\log(d-1)} \leq \pi(Q_d) \leq (1 + o(1)) \log\log d $$ (Theorem \ref{theoremHypercube}). The lower bound follows since $K_d^{1/2}$ is contained as a subgraph of $Q_d$. The upper bound is obtained by taking a $3$-suitable family of permutations of the $d$ positions of the binary strings and defining an order on the strings themselves based on that. \setcounter{counterResult}{\value{enumi}} \end{enumerate} The main lower bounding strategy that we employ in this paper is the following result, which we prove in Theorem \ref{theoremBoxliLowerBound}. \begin{enumerate} \setcounter{enumi}{\value{counterResult}} \item For a graph $G$, let $V_1, V_2 \subsetneq V(G)$ be such that $V_1 \cap V_2 = \emptyset$.
If there exists an edge between every $s_1$-subset of $V_1$ and every $s_2$-subset of $V_2$, then $\pi(G) \geq \min \left\{ \log \frac{|V_1|}{s_1}, \log \frac{|V_2|}{s_2} \right\}$. This immediately shows that $\pi(K_{n,n}) \geq \log n$, $\pi(K_n) \geq \log \floor{n/2}$ and that for any graph $G$, $\pi(G) \geq \log \floor{\omega/2}$, where $\omega$ denotes the size of a largest clique in $G$. It also forms a key ingredient in showing the lower bound on the separation dimension of the complete $r$-uniform hypergraph. Finally, it is used to derive the following lower bound for random graphs. \item For a graph $G \in \mathcal{G}(n,p)$, $\pi(G) \geq \log(np) - \log \log(np) - 2.5$ asymptotically almost surely (Theorem \ref{theoremBoxliLowerBoundRandom}). \setcounter{counterResult}{\value{enumi}} \end{enumerate} The last result in the paper is the following lower bound on the separation dimension of fully subdivided cliques (Theorem \ref{theoremKnHalf}). \begin{enumerate} \setcounter{enumi}{\value{counterResult}} \item Let $K_n^{1/2}$ denote the graph obtained by subdividing every edge of $K_n$ exactly once. Then, $$\pi(K_n^{1/2}) \geq \frac{1}{2} \floor{\log\log(n-1) }.$$ This is proved by using the Erd\H{o}s-Szekeres theorem to extract a large enough set of vertices of the underlying $K_n$ that are ordered essentially the same by every permutation in the selected family and then showing that separating the edges incident on those vertices can be modelled as the problem of finding a realiser for a canonical open interval order of the same size. This lower bound is used to show the tightness of two of the upper bounds above. \setcounter{counterResult}{\value{enumi}} \end{enumerate} \subsection{Outline of the paper} The remainder of this paper is organised as follows. A brief note on some standard terms and notations used throughout this paper is given in Section \ref{sectionNotation}. Section \ref{sectionConnection} demonstrates the equivalence of the separation dimension of a hypergraph $H$ and the boxicity of the line graph of $H$. All the upper bounds are stated and proved in Section \ref{sectionUpperBounds}. The tightness of the upper bounds, where known, is mentioned alongside the bounds, but their proofs and discussion are postponed to the subsequent section (Section \ref{sectionLowerBound}). Finally, in Section \ref{sectionOpenProblems}, we conclude with a discussion of a few open problems that we find interesting. \subsection{Notational note} \label{sectionNotation} A {\em hypergraph} $H$ is a pair $(V, E)$ where $V$, called the {\em vertex set}, is any set and $E$, called the {\em edge set}, is a collection of subsets of $V$. The vertex set and edge set of a hypergraph $H$ are denoted respectively by $V(H)$ and $E(H)$. The {\em rank} of a hypergraph $H$ is $\max_{e \in E(H)}|e|$ and $H$ is called {\em $k$-uniform} if $|e| = k, \forall e \in E(H)$. The {\em degree} of a vertex $v$ in $H$ is the number of edges of $H$ which contain $v$. The {\em maximum degree} of $H$, denoted by $\Delta(H)$, is the maximum degree over all vertices of $H$. All the hypergraphs considered in this paper are finite. A {\em graph} is a $2$-uniform hypergraph. For a graph $G$ and any $S \subseteq V(G)$, the subgraph of $G$ induced on the vertex set $S$ is denoted by $G[S]$. For any $v \in V(G)$, we use $N_G(v)$ to denote the neighbourhood of $v$ in $G$, i.e., $N_G(v) = \{u \in V(G) : \{v,u\} \in E(G)\}$.
A \emph{closed interval} on the real line, denoted as $[i,j]$ where $i,j \in \mathbb{R}$ and $i\leq j$, is the set $\{x\in \mathbb{R} : i\leq x\leq j\}$. Given an interval $X=[i,j]$, define $l(X)=i$ and $r(X)=j$. We say that the closed interval $X$ has \emph{left end-point} $l(X)$ and \emph{right end-point} $r(X)$. For any two intervals $[i_1, j_1], [i_2,j_2]$ on the real line, we say that $[i_1, j_1] < [i_2,j_2]$ if $j_1 < i_2$. For any finite positive integer $n$, we shall use $[n]$ to denote the set $\{1,\ldots , n\}$. A permutation of a finite set $V$ is a bijection from $V$ to $[|V|]$. The logarithms of any positive real number $x$ to the bases $2$ and $e$ are denoted respectively by $\log(x)$ and $\ln(x)$, while $\operatorname{log^{\star}}(x)$ denotes the iterated logarithm of $x$ to the base $2$, i.e., the number of times the logarithm function (to the base $2$) should be applied so that the result is less than or equal to $1$. \section{Pairwise suitable family of permutations and a box representation} \label{sectionConnection} In this section we show that a family of permutations of cardinality $k$ is pairwise suitable for a hypergraph $H$ (Definition \ref{definitionPairwiseSuitable}) if and only if the line graph of $H$ (Definition \ref{definitionLineGraph}) has a $k$-box representation (Definition \ref{definitionBoxicity}). Before we proceed to prove it, let us state an equivalent but more combinatorial definition of boxicity. We have already noted that interval graphs are precisely the graphs with boxicity $1$. Given a $k$-box representation of a graph $G$, orthogonally projecting the $k$-boxes onto each of the $k$ axes in $\mathbb{R}^k$ gives $k$ families of intervals. Each one of these families can be thought of as an interval representation of some interval graph. Thus we get $k$ interval graphs. It is not difficult to observe that a pair of vertices is adjacent in $G$ if and only if the pair is adjacent in each of the $k$ interval graphs obtained. The following lemma, due to Roberts \cite{Roberts}, formalises this relation between box representations and interval graphs. \begin{lemma}[Roberts \cite{Roberts}] \label{lemmaRoberts} For every graph $G$, $\operatorname{boxicity}(G) \leq k$ if and only if there exist $k$ interval graphs $I_1, \ldots, I_k$, with $V(I_1) = \cdots = V(I_k) = V(G)$ such that $G = I_1 \cap \cdots \cap I_k$. \end{lemma} From the above lemma, we get an equivalent definition of boxicity. \begin{definition} \label{definitionBoxicityInterval} The \emph{boxicity} of a graph $G$ is the minimum positive integer $k$ for which there exist $k$ interval graphs $I_1,\ldots, I_k$ such that $G = I_1 \cap \cdots \cap I_k$. \end{definition} Note that if $G = I_1 \cap \cdots \cap I_k$, then each $I_i$ is a supergraph of $G$. Moreover, for every pair of vertices $u,v \in V(G)$ with $\{u,v\} \notin E(G)$, there exists some $i \in [k]$ such that $\{u,v\} \notin E(I_i)$. Now we are ready to prove the main theorem of this section. \begin{theorem} For a hypergraph $H$, $\pi(H) = \operatorname{boxicity}(L(H))$. \label{theoremConnectionBoxliPermutation} \end{theorem} \begin{proof} First we show that $\pi(H) \leq \operatorname{boxicity}(L(H))$. Let $\operatorname{boxicity}(L(H)) = b$. Then, by Lemma \ref{lemmaRoberts}, there exists a collection of $b$ interval graphs, say $\mathcal{I} = \{I_1, \ldots, I_b\}$, whose intersection is $L(H)$. For each $i \in [b]$, let $f_i$ be an interval representation of $I_i$.
For each $u \in V(H)$, let $E_H(u) = \{e \in E(H) : u \in e\}$ be the set of edges of $H$ containing $u$. Consider an $i \in [b]$ and a vertex $u \in V(H)$. The closed interval $C_i(u) = \bigcap_{e \in E_H(u)} f_i(e)$ is called the {\em clique region} of $u$ in $f_i$. Since any two edges in $E_H(u)$ are adjacent in $L(H)$, the corresponding intervals have non-empty intersection in $f_i$. By the Helly property of intervals, $C_i(u)$ is non-empty. We define a permutation $\sigma_i$ of $V(H)$ from $f_i$ such that, $\forall u,v \in V(H)$, $C_i(u) < C_i(v) \implies u \prec_{\sigma_i} v$. It suffices to prove that $\{\sigma_1, \ldots, \sigma_b\}$ is a family of permutations that is pairwise suitable for $H$. Consider two disjoint edges $e, e'$ in $H$. Hence $\{e, e' \} \notin E(L(H))$ and, since $L(H) = \bigcap_{i=1}^b I_i$, there exists an interval graph, say $I_i \in \mathcal{I}$, such that $\{e, e'\} \notin E(I_i)$, i.e., $f_i(e) \cap f_i(e') = \emptyset$. Without loss of generality, assume $f_i(e) < f_i(e')$. For any $v \in e$ and any $v' \in e'$, since $C_i(v) \subseteq f_i(e)$ and $C_i(v') \subseteq f_i(e')$, we have $C_i(v) < C_i(v')$, i.e., $v \prec_{\sigma_i} v'$. Hence $e \prec_{\sigma_i} e'$. Thus the family $\{ \sigma_1, \ldots, \sigma_b \}$ of permutations is pairwise suitable for $H$. Next we show that $\operatorname{boxicity}(L(H)) \leq \pi(H)$. Let $\pi(H) = p$ and let $\mathcal{F} = \{\sigma_1, \ldots, \sigma_p\}$ be a pairwise suitable family of permutations for $H$. From each permutation $\sigma_i$, we shall construct an interval graph $I_i$ such that $L(H) = \bigcap_{i=1}^{p}I_i$. Then, by Lemma \ref{lemmaRoberts}, $\operatorname{boxicity}(L(H)) \leq \pi(H)$. For a given $i \in [p]$, to each edge $e \in E(H)$, we associate the closed interval $$f_i(e) = \left[ \min_{v \in e}\sigma_i(v) ~,~ \max_{v \in e}\sigma_i(v) \right],$$ and let $I_i$ be the intersection graph of the intervals $f_i(e), e \in E(H)$. Let $e, e' \in V(L(H))$. If $e$ and $e'$ are adjacent in $L(H)$, let $v \in e \cap e'$. Then $\sigma_i(v) \in f_i(e) \cap f_i(e'),~\forall i \in [p]$. Hence $e$ and $e'$ are adjacent in $I_i$ for every $i \in [p]$. If $e$ and $e'$ are not adjacent in $L(H)$, then there is a permutation $\sigma_i \in \mathcal{F}$ such that either $e \prec_{\sigma_i} e'$ or $e' \prec_{\sigma_i} e$. Hence, by construction, $f_i(e) \cap f_i(e') = \emptyset$ and so $e$ and $e'$ are not adjacent in $I_i$. This completes the proof. \end{proof} \section{Upper bounds} \label{sectionUpperBounds} For graphs, we sometimes work with a notion of suitability that is stronger than the pairwise suitability of Definition \ref{definitionPairwiseSuitable}. This will facilitate easy proofs of some results to come later in this article. \begin{definition} \label{definition3Mixing} For a graph $G$, a family $\mathcal{F}$ of permutations of $V(G)$ is \emph{$3$-mixing} if, for every two adjacent edges $\{a,b\}, \{a,c\} \in E(G)$, there exists a permutation $\sigma \in \mathcal{F}$ such that either $b \prec_{\sigma} a \prec_{\sigma} c$ or $c \prec_{\sigma} a \prec_{\sigma} b$. \end{definition} Notice that a family of permutations $\mathcal{F}$ of $V(G)$ is pairwise suitable and $3$-mixing for $G$ if, for every two edges $e,f \in E(G)$, there exists a permutation $\sigma \in \mathcal{F}$ such that either $e \preceq_{\sigma} f$ or $f \preceq_{\sigma} e$. Let $\boxli^{\star}(G)$ denote the cardinality of a smallest family of permutations that is pairwise suitable and $3$-mixing for $G$.
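The combined condition above is easy to verify by brute force on small instances. The following sketch (in Python; the function names are ours and purely illustrative) checks, for a graph given by its edge list and a candidate family of permutations, whether every pair of distinct edges $e,f$ admits a permutation with $e \preceq_{\sigma} f$ or $f \preceq_{\sigma} e$:

\begin{verbatim}
from itertools import combinations

def weakly_precedes(sigma, e, f):
    # True if e <= f in sigma, i.e. the blocks e-f, e&f, f-e
    # appear in this order (empty blocks are skipped).
    pos = {v: i for i, v in enumerate(sigma)}
    blocks = [b for b in (e - f, e & f, f - e) if b]
    return all(max(pos[v] for v in X) < min(pos[w] for w in Y)
               for X, Y in zip(blocks, blocks[1:]))

def suitable_and_mixing(edges, family):
    # True if the family is pairwise suitable and 3-mixing:
    # every pair of distinct edges is comparable under <= in
    # at least one permutation of the family.
    E = [frozenset(e) for e in edges]
    return all(any(weakly_precedes(s, e, f) or weakly_precedes(s, f, e)
                   for s in family)
               for e, f in combinations(E, 2))
\end{verbatim}

For the path on the vertices $1,2,3$, for example, the single identity permutation already passes this test; on small graphs, exhaustive search over such families determines $\pi$ and $\boxli^{\star}$ exactly, though of course only for very small instances.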
From their definitions, $\pi(G) \leq \boxli^{\star}(G)$. We begin with the following two straightforward observations. \begin{observation} \label{observationMonotonicity} $\pi(G)$ and $\boxli^{\star}(G)$ are monotone increasing properties, i.e., $\pi(G') \leq \pi(G)$ and $\boxli^{\star}(G') \leq \boxli^{\star}(G)$ for every subgraph $G'$ of $G$. \end{observation} \begin{observation} \label{observationDisjointComponents} Let $G_1, \ldots, G_r$ be a collection of disjoint components that form a graph $G$, i.e., $V(G) = \biguplus_{i=1}^r V(G_i)$ and $E(G) = \biguplus_{i=1}^r E(G_i)$. If $\pi(G_i) \geq 1$ for some $i \in [r]$, then $\pi(G) = \max_{i \in [r]} \pi(G_i)$. \end{observation} A nontrivial generalisation of Observation \ref{observationDisjointComponents}, when there are edges across the parts, is given in Lemma \ref{lemmaMaxPairsParts}. Now we show an upper bound on the separation dimension of a hypergraph in terms of the number of its vertices. \subsection{Separation dimension and size of a hypergraph} \label{sectionBoxliSize} \begin{theorem} \label{theoremHypergraphSizeUpperbound} For any rank-$r$ hypergraph $H$ on $n$ vertices $$ \pi(H) \leq \frac{e \ln 2}{\pi \sqrt{2}} 4^r \sqrt{r} \log n.$$ \end{theorem} \begin{proof} Consider a family $\mathcal{F}$ of $m$ permutations of $[n]$ chosen independently and uniformly from the $n!$ possible ones. For an arbitrary pair of disjoint edges $e, f \in E(H)$, the probability $q$ that $e$ and $f$ are separated in a given permutation $\sigma \in \mathcal{F}$ is at least $2 (r!)^2 / (2r)!$. Using Stirling's bounds $\sqrt{2\pi} k^{k + 1/2} e^{-k} \leq k! \leq e k^{k + 1/2} e^{-k}$, we get $q \geq \frac{2\pi\sqrt{2}}{e} \sqrt{r} / {4^r}$. The probability of the (bad) event that $e$ and $f$ are not separated in any of the $m$ permutations in $\mathcal{F}$ is at most $(1 - q)^m$. Since the number of non-empty edges in $H$ is less than $n^r$, by the union bound, the probability $p$ that there exists some pair of edges which is not separated in any of the permutations in $\mathcal{F}$ is less than $n^{2r}(1-q)^m \leq e^{2r \ln n}e^{-qm}$. Hence if $2r \ln n \leq qm$, then $p < 1$ and there will exist some family $\mathcal{F}$ of size $m$ such that every pair of edges is separated by some permutation in $\mathcal{F}$. So any $m \geq \frac{2r}{q} \ln n$ suffices, and hence $\pi(H) \leq \frac{e}{\pi\sqrt{2}} 4^r \sqrt{r} \ln n$. \end{proof} \subsubsection*{Tightness of Theorem \ref{theoremHypergraphSizeUpperbound}} Let $K_n^r$ denote the complete $r$-uniform hypergraph on $n$ vertices. Then by Theorem \ref{theoremHypergraphSizeLowerbound}, $\pi(K_n^r) \geq \frac{1}{2^7} \frac{4^r}{\sqrt{r-2}} \log n$ for $n$ sufficiently larger than $r$. Hence the bound in Theorem \ref{theoremHypergraphSizeUpperbound} is tight up to a factor of $64 r$. \begin{theorem} \label{theoremBoxliSize} For a graph $G$ on $n$ vertices, $\pi(G) \leq \boxli^{\star}(G) \leq 6.84 \log n$. \end{theorem} \begin{proof} From the definitions of $\pi(G)$ and $\boxli^{\star}(G)$ and Observation \ref{observationMonotonicity}, we have $\pi(G) \leq \boxli^{\star}(G) \leq \boxli^{\star}(K_n)$, where $K_n$ denotes the complete graph on $n$ vertices. Here we prove that $\boxli^{\star}(K_n) \leq 6.84 \log n$. Choose $r$ permutations, $\sigma_1, \ldots, \sigma_r$, independently and uniformly at random from the $n!$ distinct permutations of $[n]$. Let $e$, $f$ be two distinct edges of $K_n$. The probability that $e \preceq_{\sigma_i} f$ is $1/6$ for each $i \in [r]$. ($4$ out of $4!$ outcomes are favourable when $e$ and $f$ are non-adjacent and $1$ out of $3!$ outcomes is favourable otherwise.)
\begin{eqnarray*} Pr[(e \preceq_{\sigma_i} f)~or~(f \preceq_{\sigma_i} e)] & = & Pr[(e \preceq_{\sigma_i} f)] + Pr[(f \preceq_{\sigma_i} e)] \\ & = & \frac{1}{6} + \frac{1}{6} \\ & = & \frac{1}{3} \end{eqnarray*} Therefore, \begin{eqnarray*} Pr[\bigcap_{i=1}^r\left((e \npreceq_{\sigma_i} f) \cap (f \npreceq_{\sigma_i} e)\right)] & = & \left(Pr[(e \npreceq_{\sigma_i} f) \cap (f \npreceq_{\sigma_i} e)]\right)^r \\ & = & \left(1-\frac{1}{3}\right)^r \\ & = & \left(\frac{2}{3}\right)^r \end{eqnarray*} Hence, by the union bound over the fewer than $n^4$ pairs of distinct edges, \begin{eqnarray*} Pr[\bigcup_{\mbox{pairs of distinct edges }e,f}\left(\bigcap_{i=1}^r\left((e \npreceq_{\sigma_i} f) \cap (f \npreceq_{\sigma_i} e)\right)\right)] < n^4 \left(\frac{2}{3}\right)^r \end{eqnarray*} Substituting $r= 6.84\log n$ in the above inequality, we get \begin{eqnarray*} Pr[\bigcup_{\mbox{pairs of distinct edges }e,f}\left(\bigcap_{i=1}^r\left((e \npreceq_{\sigma_i} f) \cap (f \npreceq_{\sigma_i} e)\right)\right)] < 1 \end{eqnarray*} That is, there exists a family of permutations of $V(K_n)$ of cardinality at most $6.84\log n$ which is pairwise suitable and $3$-mixing for $K_n$. \end{proof} \subsubsection*{Tightness of Theorem \ref{theoremBoxliSize}} Let $K_n$ denote the complete graph on $n$ vertices. Since $\omega(K_n) = n$, it follows from Corollary \ref{corollaryBoxliOmega} that $\pi(K_n) \geq \log \floor{n/2}$. Hence the bound proved in Theorem \ref{theoremBoxliSize} is tight up to a constant factor. \subsubsection*{An auxiliary lemma} Using Theorem \ref{theoremBoxliSize}, we shall now prove a lemma that will be used later in proving bounds for $\pi(G)$ in terms of maximum degree, star chromatic number, and acyclic chromatic number. \begin{lemma} \label{lemmaMaxPairsParts} Let $P_G=\{V_1, \ldots, V_r\}$ be a partitioning of the vertices of a graph $G$, i.e., $V(G) = V_1 \uplus \cdots \uplus V_r$. Let $\hat{\pi}(P_G) = \max_{i,j \in [r]} \pi(G[V_i \cup V_j])$. Then, $\pi(G) \leq 13.68 \log r + \hat{\pi}(P_G) r$. \end{lemma} \begin{proof} Let $H$ be a complete graph with $V(H) = \{h_1, \ldots , h_r\}$. Let $\mathcal{M} = \{M_1, \ldots ,M_r \}$ be a collection of matchings of $H$ such that each edge of $H$ is present in at least one matching $M_i$. It is easy to see that such a collection exists (by Vizing's theorem on edge colouring; Theorem $5.3.2$ in \cite{Diest}). For each $i \in [r]$, let $G_i$ be the subgraph of $G$ with $V(G_i) = V(G)$ in which an edge $\{u,v\} \in E(G)$ with $u \in V_a$ and $v \in V_b$ belongs to $E(G_i)$ if and only if $a=b$ or $\{h_a,h_b\} \in M_i$. Note that $G_i$ is a vertex-disjoint union of subgraphs, each contained in $G[V_a \cup V_b]$ for some pair $a,b$. Let $\mathcal{F}_i$ be a family of permutations that is pairwise suitable for $G_i$ such that $|\mathcal{F}_i| = \pi(G_i)$. By Observation \ref{observationDisjointComponents}, we have $|\mathcal{F}_i| \leq \hat{\pi}(P_G)$. From Theorem \ref{theoremBoxliSize}, $\boxli^{\star}(H) \leq 6.84 \log r$. Let $\mathcal{E}$ be a family of permutations that is pairwise suitable and $3$-mixing for $H$ such that $|\mathcal{E}| = \boxli^{\star}(H) \leq 6.84 \log r$. We construct two families of permutations, namely $\mathcal{F}_{r+1}$ and $\mathcal{F}_{r+2}$, of $V(G)$ from $\mathcal{E}$ such that $|\mathcal{F}_{r+1}| = |\mathcal{F}_{r+2}| = |\mathcal{E}|$. Corresponding to each permutation $\sigma \in \mathcal{E}$, we construct $\tau_{\sigma} \in \mathcal{F}_{r+1}$ and $\kappa_{\sigma} \in \mathcal{F}_{r+2}$ as follows. If $h_i \prec_{\sigma} h_j$, then we have $V_i \prec_{\tau_{\sigma}} V_j$ and $V_i \prec_{\kappa_{\sigma}} V_j$.
Moreover, for each $i \in [r]$ and for distinct $v,v' \in V_i$, $v \prec_{\tau_{\sigma}} v'\iff v' \prec_{\kappa_{\sigma}} v$. \begin{claim} \label{claimMaxPairsParts} $\mathcal{F} = \bigcup_{i=1}^{r+2}\mathcal{F}_i$ is a pairwise-suitable family of permutations for $G$. \end{claim} We prove the claim by showing that for every pair of disjoint edges $e, e' \in E(G)$, there is a $\sigma \in \mathcal{F}$ such that $e \prec_{\sigma} e'$ or $e' \prec_{\sigma} e$. We call an edge $e$ in $G$ a \emph{crossing edge} if there exist distinct $i,j \in [r]$ such that $e$ has its endpoints in $V_i$ and $V_j$. Otherwise $e$ is called a \emph{non-crossing edge}. Consider any two disjoint edges $\{a,b\}, \{c,d\}$ in $G$. Let $a \in V_i, b \in V_j, c \in V_k$ and $d \in V_l$. If $|\{i,j,k,l\}| \leq 2$, then both the edges belong to some $G_p, p \in [r]$, and hence are separated by a permutation in $\mathcal{F}_p$. If $|\{i,j,k,l\}| = 3$, then the two edges are separated by a permutation in $\mathcal{F}_{r+1}$ or $\mathcal{F}_{r+2}$ since $\mathcal{E}$ is $3$-mixing for $H$. If $|\{i,j,k,l\}| = 4$, then the two edges are separated by a permutation in both $\mathcal{F}_{r+1}$ and $\mathcal{F}_{r+2}$ since $\mathcal{E}$ is pairwise suitable for $H$. Details follow. \setcounter{case}{0} \begin{case}[both $\{a,b\}$ and $\{c,d\}$ are crossing edges] If $i,j,k$ and $l$ are distinct, then from the definition of $\mathcal{E}$ there exists a permutation $\sigma \in \mathcal{E}$ such that $\{h_i,h_j\} \prec_{\sigma} \{h_k, h_l\}$ or $\{h_k, h_l\} \prec_{\sigma} \{h_i,h_j\}$. Without loss of generality, assume $\{h_i,h_j\} \prec_{\sigma} \{h_k, h_l\}$. Therefore, in the permutations $\tau_{\sigma}$ and $\kappa_{\sigma}$ constructed from $\sigma$, we have $\{a,b\} \prec_{\tau_{\sigma}} \{c,d\}$ and $\{a,b\} \prec_{\kappa_{\sigma}} \{c,d\}$. Recall that $\mathcal{E}$ is a pairwise suitable and $3$-mixing family of permutations for $H$. If $i=k$ and $i,j,l$ are distinct, then there exists a permutation $\sigma \in \mathcal{E}$ such that $h_j \prec_{\sigma} h_i \prec_{\sigma} h_l$ or $h_l \prec_{\sigma} h_i \prec_{\sigma} h_j$. Without loss of generality, assume $h_j \prec_{\sigma} h_i \prec_{\sigma} h_l$. Now it is easy to see that either $\{a,b\} \prec_{\tau_{\sigma}} \{c,d\}$ or $\{a,b\} \prec_{\kappa_{\sigma}} \{c,d\}$. The cases where the only coincidence is $i=l$, $j=k$ or $j=l$ are symmetric to the above case where $i=k$ and $i,j,l$ are distinct. Consider the case when $i=k$ and $j=l$ (with $i \neq j$). In this case, both $\{a,b\}$ and $\{c,d\}$ have their endpoints in $V_i$ and $V_j$. Then there exists some $p \in [r]$ such that $\{a,b\}, \{c,d\} \in E(G_p)$. Since $\mathcal{F}_p$ is a pairwise suitable family of permutations for $G_p$, there exists a $\sigma \in \mathcal{F}_p$ such that $\{a,b\} \prec_{\sigma} \{c,d\}$ or $\{c,d\} \prec_{\sigma} \{a,b\}$. The case when $i=l$ and $j=k$ are distinct is similar. \end{case} \begin{case}[only $\{a,b\}$ is a crossing edge] Let $a \in V_i, b \in V_j$ and $c,d \in V_k$. If $i,j,k$ are distinct, then there exists a permutation $\sigma$ in $\mathcal{E}$ such that either $h_i \prec_{\sigma} h_j \prec_{\sigma} h_k$ or $h_k \prec_{\sigma} h_j \prec_{\sigma} h_i$. Without loss of generality, assume $h_i \prec_{\sigma} h_j \prec_{\sigma} h_k$. Now it is easy to see that both $\{a,b\} \prec_{\tau_{\sigma}} \{c,d\}$ and $\{a,b\} \prec_{\kappa_{\sigma}} \{c,d\}$. If $i=k \neq j$, then both $\{a,b\}$ and $\{c,d\}$ have their endpoints in $V_i \cup V_j$.
Then there exists some $p \in [r]$ such that $\{a,b\}, \{c,d\} \in E(G_p)$. Since $\mathcal{F}_p$ is a pairwise suitable family of permutations for $G_p$, there exists a $\sigma \in \mathcal{F}_p$ such that $\{a,b\} \prec_{\sigma} \{c,d\}$ or $\{c,d\} \prec_{\sigma} \{a,b\}$. The case when $j=k \neq i$ is similar. \end{case} \begin{case}[only $\{c,d\}$ is a crossing edge] Similar to the case above. \end{case} \begin{case}[both $\{a,b\}$ and $\{c,d\}$ are non-crossing edges] Then, for each $p \in [r]$, $\{a,b\}, \{c,d\} \in E(G_p)$. Since $\mathcal{F}_p$ is a pairwise suitable family of permutations for $G_p$, there exists a $\sigma \in \mathcal{F}_p$ such that $\{a,b\} \prec_{\sigma} \{c,d\}$ or $\{c,d\} \prec_{\sigma} \{a,b\}$. \end{case} Thus, we prove Claim \ref{claimMaxPairsParts}. Hence, we have $\pi(G) \leq |\mathcal{F}| = \sum_{i=1}^{r}|\mathcal{F}_i| + |\mathcal{F}_{r+1}| + |\mathcal{F}_{r+2}| \leq \hat{\pi}(P_G) r + 13.68 \log r$. \end{proof} \subsection{Maximum degree} Adiga, Bhowmick, and Chandran have shown that the boxicity of a graph $G$ of maximum degree $\Delta$ is in $O(\Delta \log^2 \Delta)$ \cite{DiptAdiga}. For any hypergraph $H$ of rank $r$ and maximum degree $D$, the maximum degree of $L(H)$ is at most $r(D-1)$. Hence the next bound follows immediately from Theorem \ref{theoremConnectionBoxliPermutation}. \begin{corollary} \label{corollaryHypergraphMaxDegree} For any hypergraph $H$ of rank $r$ and maximum degree $D$, $$ \pi(H) \in \order{rD \log^2(rD)}. $$ \end{corollary} It is known that there exist graphs of maximum degree $\Delta$ whose boxicity can be as high as $c \Delta \log \Delta$ \cite{DiptAdiga}, where $c$ is a small enough positive constant. Let $G$ be one such graph. Consider the following hypergraph $H$ constructed from $G$. Let $V(H) = E(G)$ and $E(H) = \{E_v : v \in V(G)\}$, where $E_v$ is the set of edges incident on the vertex $v$ in $G$. It is clear that $G = L(H)$. Hence $\pi(H) = \operatorname{boxicity}(G) \geq c \Delta(G) \log \Delta(G)$. Note that the rank of $H$ is $r = \Delta(G)$ and the maximum degree of $H$ is $2$. Thus $\pi(H) \geq c r \log(r)$ and hence the dependence on $r$ in the upper bound in Corollary \ref{corollaryHypergraphMaxDegree} cannot be considerably brought down in general. We improve the above upper bound in the case of graphs using the auxiliary lemma from the previous section. For a graph $G$ with maximum degree $\Delta$, it is easy to see from Lemma \ref{lemmaMaxPairsParts} that $\pi(G) \in \order{\Delta^2}$. Consider $P_G$ to be the partition of $V(G)$ corresponding to the colour classes in a distance-two colouring of $G$, i.e., a vertex colouring of $G$ in which no two vertices of $G$ which are at a distance at most $2$ from each other are given the same colour. Then the subgraph induced by any pair of colour classes is a collection of disjoint edges and isolated vertices, and hence $\hat{\pi}(P_G) \leq 1$. It is easy to see that a distance-two colouring can be done using $\Delta^2 + 1$ colours, and hence the bound. Corollary \ref{corollaryHypergraphMaxDegree} improves it to $\order{\Delta \log^2 \Delta}$. It was shown in \cite{RogSunSiv}, using $3$-suitable families of permutations, that $\pi(G) \in \order{\Delta \log\log \Delta}$. Here we improve this bound and show that $\pi(G) \leq 2^{9 \operatorname{log^{\star}} \Delta} \Delta$ (Theorem \ref{theoremBoxliDelta}).
The idea employed is to recursively partition $V(G)$ into $\order{\Delta / \log \Delta}$ parts such that the subgraph induced by any pair of parts has maximum degree at most $\log \Delta$, and then apply Lemma \ref{lemmaMaxPairsParts}. Existence of such a partition is guaranteed by Lemma \ref{lemmaLogDegreePartition} below, which in turn is proved by an application of the powerful Lov\'{a}sz local lemma. \begin{lemma}[Lov\'{a}sz local lemma, Erd\H{o}s and Lov\'{a}sz \cite{lovaszlocallemma}] \label{lemmaLovaszLocal} Let $G$ be a graph on vertex set $[n]$ with maximum degree $d$ and let $A_1, \ldots , A_n$ be events defined on some probability space such that for each $i$, $$Pr[A_i] \leq \frac{1}{4d}.$$ Suppose further that each $A_i$ is jointly independent of the events $A_j$ for which $\{i,j\} \notin E(G)$. Then $Pr[\overline{A_1} \cap \cdots \cap \overline{A_n}] > 0$. \end{lemma} The following lemma is similar to Lemma $4.2$ in \cite{FurediKahn} and shall be used in proving an upper bound for $\pi(G)$ in terms of the maximum degree of $G$. \begin{lemma} \label{lemmaLogDegreePartition} For a graph $G$ with maximum degree $\Delta \geq 2^{64}$, there exists a partitioning of $V(G)$ into $\ceil{ 400 \Delta / \log \Delta }$ parts such that for every vertex $v \in V(G)$ and for every part $V_i, \, i \in \big[ \ceil{ 400 \Delta / \log \Delta } \big],\, \lvert N_G(v) \cap V_i \rvert \leq \frac{1}{2} \log \Delta$. \end{lemma} \begin{proof} Since we can have a $\Delta$-regular supergraph (with possibly more vertices) of $G$, we may as well assume that $G$ is $\Delta$-regular. Let $r = \ceil{ \frac{400 \Delta}{\log \Delta} } \leq \frac{401\Delta}{\log \Delta}$. Partition $V(G)$ into $V_1 , \ldots , V_r$ using the following procedure: for each $v \in V(G)$, independently assign $v$ to a set $V_i$ chosen uniformly at random from $V_1, \ldots , V_r$. We use the following well-known multiplicative form of the Chernoff bound (Theorem 4.4 in \cite{mitzenmacher}). Let $X$ be a sum of mutually independent indicator random variables with $\mu = E[X]$. Then for any $\delta > 0$, $$Pr[X \geq (1+\delta) \mu] \leq c_{\delta}^{\mu},$$ where $c_{\delta} = e^{\delta} / (1 + \delta)^{(1 + \delta)}$. Let $d_i(v)$ be a random variable that denotes the number of neighbours of $v$ in $V_i$. Then $\mu_{i,v} = E[d_i(v)] = \frac{\Delta}{r} \leq \frac{1}{400} \log \Delta$. For each $v \in V(G), i \in [r]$, let $E_{i,v}$ denote the event $d_i(v) \geq \frac{1}{2}\log \Delta$. Then, applying the above Chernoff bound with $\delta = 199$, we have $Pr[E_{i,v}] = Pr[d_i(v) \geq 200 \frac{\log \Delta}{400}] \leq 2^{-3.1 \log \Delta} = \Delta^{-3.1}$. Consider the collection of ``bad'' events $E_{i,v}$, $i \in [r], v \in V(G)$. In order to apply Lemma \ref{lemmaLovaszLocal}, we construct a dependency graph $H$ whose vertices are the events $E_{i,v}$ and in which two vertices are adjacent if and only if the corresponding two events are dependent. Since $E_{i,v}$ depends only on where the neighbours of $v$ went in the random partitioning, it is easy to see that the maximum degree of $H$, denoted by $d_H$, is at most $(1 + \Delta + \Delta(\Delta-1))r = (1+\Delta^2)r \leq \frac{402 \Delta^3}{\log \Delta}$. For each $i \in [r], v \in V(G)$, $Pr[E_{i,v}] \leq \frac{1}{\Delta^{3.1}} \leq \frac{\log \Delta}{1608 \Delta^3} \leq \frac{1}{4d_H}$. Therefore, by Lemma \ref{lemmaLovaszLocal}, we have $Pr[\bigcap_{i \in [r], v \in V(G)} \overline{E_{i,v}}] > 0$. Hence there exists a partition satisfying our requirements.
\end{proof} \begin{theorem} For a graph $G$ with maximum degree $\Delta$, $\pi(G) \leq 2^{9 \operatorname{log^{\star}} \Delta} \Delta$. \label{theoremBoxliDelta} \end{theorem} \begin{proof} Let $\pi(\Delta) := \max\{\pi(H) : H \textnormal{ is a graph with maximum degree at most } \Delta \}$. Then, clearly $\pi(G) \leq \pi(\Delta)$. If $\Delta \leq 1$, then $G$ is a collection of matching edges and isolated vertices and therefore $\pi(1) = 1$. When $\Delta > 1$, it was shown in Theorem $10$ of \cite{RogSunSiv} that $\pi(\Delta) \leq (4\Delta - 4)(\ceil{ \log \log (2\Delta - 2) } + 3) + 1$. For every $1 < \Delta < 2^{64}$, it can be verified that $(4\Delta - 4)(\ceil{ \log \log (2\Delta - 2) } + 3) + 1 \leq 2^{9 \operatorname{log^{\star}} \Delta} \Delta$. Therefore, the statement of the theorem is true for every $\Delta < 2^{64}$. For $\Delta \geq 2^{64}$, let $P_G$ be a partition of $V(G)$ into $V_1 \uplus \cdots \uplus V_{r}$ where $r=\ceil{ 400 \Delta / \log \Delta }$ and $|N_G(v) \cap V_i| \leq \frac{1}{2} \log \Delta, ~ \forall v \in V(G), i \in [r]$. Existence of such a partition is guaranteed by Lemma \ref{lemmaLogDegreePartition}. From Lemma \ref{lemmaMaxPairsParts}, we have $\pi(G) \leq 13.68 \log r + \hat{\pi}(P_G) r$ where $\hat{\pi}(P_G) = \max_{i,j \in [r]} \pi(G[V_i \cup V_j])$. Since $|N_G(v) \cap V_i| \leq \frac{1}{2} \log \Delta$ for every $v \in V(G), i \in [r]$, the maximum degree of the graph $G[V_i \cup V_j]$ is at most $\log \Delta$ for every $i, j \in [r]$. Therefore, $\hat{\pi}(P_G) \leq \pi(\log \Delta)$. Thus we have \begin{eqnarray} \label{equationRecurrencePi} \pi(\Delta) & \leq & \ceil{ \frac{400 \Delta}{\log \Delta} } \pi(\log \Delta) + 13.68\log \ceil{ \frac{400 \Delta}{\log \Delta}} \nonumber \\ & \leq & 2^9 \frac{\Delta}{\log \Delta} \pi(\log \Delta), \, \textnormal{where } \Delta \geq 2^{64}. \end{eqnarray} Now we complete the proof by using induction on $\Delta$. The statement is true for all values of $\Delta < 2^{64}$ and we have the recurrence relation of Equation (\ref{equationRecurrencePi}) for larger values of $\Delta$. For an arbitrary $\Delta \geq 2^{64}$, we assume inductively that the bound in the statement of the theorem is true for all smaller values of $\Delta$. Now, since $\Delta \geq 2^{64}$, we can apply the recurrence in Equation (\ref{equationRecurrencePi}). Therefore \begin{eqnarray*} \label{equationInductionPi} \pi(\Delta) & \leq & 2^9 \frac{\Delta}{\log \Delta} \pi(\log \Delta) \\ & \leq & 2^9 \frac{\Delta}{\log \Delta} 2^{9 \operatorname{log^{\star}}(\log \Delta)} \log \Delta, \, \textnormal{(by induction)} \\ & = & 2^{9 \operatorname{log^{\star}} \Delta} \Delta. \end{eqnarray*} \end{proof} We believe that the bound proved above can be improved; see the discussion in Section \ref{sectionOpenProblems}. \subsection{Degeneracy} \label{sectionDegeneracy} \begin{definition} \label{definitionDegeneracy} For a non-negative integer $k$, a graph $G$ is \emph{$k$-degenerate} if the vertices of $G$ can be enumerated in such a way that every vertex is succeeded by at most $k$ of its neighbours. The least number $k$ such that $G$ is $k$-degenerate is called the \emph{degeneracy} of $G$ and any such enumeration is referred to as a \emph{degeneracy order} of $V(G)$. \end{definition} For example, trees and forests are 1-degenerate and planar graphs are 5-degenerate. Series-parallel graphs, outerplanar graphs, non-regular cubic graphs, circle graphs of girth at least 5 etc. are 2-degenerate.
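A degeneracy order, together with the degeneracy itself, can be computed by the standard procedure of repeatedly deleting a vertex of minimum degree. The following sketch (in Python; the function names are ours and purely illustrative) does this naively, and also orients every edge from its earlier endpoint to the later one, which yields the acyclic orientation with out-degree at most $k$ used in the proof of Lemma \ref{lemmaStarArboricityDegeneracy} below:

\begin{verbatim}
def degeneracy_order(vertices, adj):
    # Repeatedly remove a vertex of minimum degree among the
    # remaining vertices.  The removal order is a degeneracy order
    # and the largest degree seen at removal time is the degeneracy.
    deg = {v: len(adj[v]) for v in vertices}
    alive = set(vertices)
    order, k = [], 0
    while alive:
        v = min(alive, key=deg.get)   # naive; a bucket queue gives O(n+m)
        k = max(k, deg[v])
        order.append(v)
        alive.remove(v)
        for u in adj[v]:
            if u in alive:
                deg[u] -= 1
    return order, k

def orient_forward(order, edges):
    # Orient each edge from its earlier endpoint in the degeneracy
    # order to the later one; every vertex then has out-degree <= k.
    pos = {v: i for i, v in enumerate(order)}
    return [(u, v) if pos[u] < pos[v] else (v, u) for (u, v) in edges]
\end{verbatim}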
For any non-negative integer $n$, a \emph{star} $S_n$ is a rooted tree on $n+1$ nodes with one root and $n$ leaves connected to the root. In other words, a star is a tree with at most one vertex whose degree is not one. A \emph{star forest} is a disjoint union of stars. \begin{definition} \label{definitionArboricty} The \emph{arboricity} of a graph $G$, denoted by $\mathcal{A}(G)$, is the minimum number of spanning forests whose union covers all the edges of $G$. The \emph{star arboricity} of a graph $G$, denoted by $\mathcal{S}(G)$, is the minimum number of spanning star forests whose union covers all the edges of $G$. \end{definition} Clearly, $\mathcal{S}(G) \geq \mathcal{A}(G)$ by definition. Furthermore, since any tree can be covered by two star forests, $\mathcal{S}(G) \leq 2\mathcal{A}(G)$. For the sake of completeness, we give a proof of the following known lemma. \begin{lemma} \label{lemmaStarArboricityDegeneracy} For a $k$-degenerate graph $G$, $\mathcal{S}(G) \leq 2k$. \end{lemma} \begin{proof} By following the degeneracy order, the edges of $G$ can be oriented acyclically such that each vertex has out-degree at most $k$. Now the edges of $G$ can be partitioned into $k$ spanning forests by assigning the (at most $k$) outgoing edges of each vertex to $k$ different forests. Thus, $\mathcal{A}(G) \leq k$ and $\mathcal{S}(G) \leq 2k$. \end{proof} \begin{theorem} \label{theoremBoxliDegeneracy} For a $k$-degenerate graph $G$ on $n$ vertices, $\pi(G) \in O(k \log \log n)$. \end{theorem} \begin{proof} Let $B = \{b_1, \ldots , b_n\}$ and let $r = \floor{ \log \log n + \frac{1}{2}\log \log \log n + \log(\sqrt{2} \pi) + o(1) }$. From \cite{scramble}, we know that there exists a family $\mathcal{E} =\{\sigma^1, \ldots , \sigma^r\}$ of permutations of $B$ that is $3$-suitable for $B$. Recall that a family $\mathcal{E}$ of permutations of $[n]$ is called $3$-suitable if for every $a, b_1, b_2 \in [n]$ there exists a permutation $\sigma \in \mathcal{E}$ such that $\{b_1, b_2\} \prec_{\sigma} \{a\}$. By Lemma \ref{lemmaStarArboricityDegeneracy}, we can partition the edges of $G$ into a collection of $2k$ spanning star forests. Let $\mathcal{C} = \{C_1, \ldots , C_{2k} \}$ be one such collection. Each star in each star forest has exactly one root vertex, which is a highest degree vertex in the star (ties resolved arbitrarily). Consider a spanning star forest $C_i$, $i \in [2k]$. We construct a family $\mathcal{F}_i = \{\sigma_i^1 , \ldots, \sigma_i^r , \overline{\sigma}_i^1 , \ldots, \overline{\sigma}_i^r\}$ of permutations of $V(G)$ from $C_i$ as follows. In the permutation $\sigma_i^j$, the vertices of the same star of $C_i$ come together as a block, the blocks are ordered according to the permutation $\sigma^j$; within every block the root vertex comes last; and the leaves are ordered according to $\sigma^j$. The permutation $\overline\sigma_i^j$ is similar to $\sigma_i^j$ except that the blocks are ordered in the reverse order. This is formalised in Construction \ref{constructionBoxliDegeneracy}. Let $L_i$ and $l_i$, $i \in [2k]$, be functions from $V(G)$ to $B$ such that the following two properties hold. \setcounter{property}{0} \begin{property} \label{property1Degeneracy} $L_i(u) = L_i(v)$ if and only if $u$ and $v$ belong to the same star in $C_i$. \end{property} \begin{property} \label{property2Degeneracy} If $u$ and $v$ belong to the same star in $C_i$, then $l_i(u) \neq l_i(v)$. \end{property} It is straightforward to construct such functions.
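For instance (a minimal sketch in Python, with names of our own choosing), one may let $L_i$ map every vertex to the label of the root of its star and let $l_i$ enumerate the members of each star:

\begin{verbatim}
def star_labels(stars, B):
    # stars: list of (root, leaves) pairs covering V(G);
    # B: the label set, with |B| = n.
    # L is constant exactly on the stars (Property 1) and
    # l is injective within every star (Property 2).
    L, l = {}, {}
    for s, (root, leaves) in enumerate(stars):
        for j, v in enumerate([root] + list(leaves)):
            L[v] = B[s]
            l[v] = B[j]
    return L, l
\end{verbatim}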
\begin{construction}(Constructing $\sigma_i^j$ and $\overline{\sigma}_i^j$). \label{constructionBoxliDegeneracy} \begin{algorithmic} \vspace{1ex} \STATE{For any distinct $u,v \in V(G)$, } \IF{$L_i(u) \neq L_i(v)$} \STATE{/*$u$ and $v$ belong to different stars in $C_i$ */} \STATE{$u \prec_{\sigma_i^j} v \iff L_i(u) \prec_{\sigma^j} L_i(v)$} \STATE{$u \prec_{\overline{\sigma}_i^j} v \iff L_i(v) \prec_{\sigma^j} L_i(u)$} \ELSE \STATE{/*$u$ and $v$ belong to the same star in $C_i$ */} \IF{$u$ is the root vertex of its star in $C_i$} \STATE{$v \prec_{\sigma_i^j} u$} \STATE{$v \prec_{\overline{\sigma}_i^j} u$} \ELSIF{$v$ is the root vertex of its star in $C_i$} \STATE{$u \prec_{\sigma_i^j} v$} \STATE{$u \prec_{\overline{\sigma}_i^j} v$} \ELSE \STATE{$u \prec_{\sigma_i^j} v \iff l_i(u) \prec_{\sigma^j} l_i(v)$} \STATE{$u \prec_{\overline{\sigma}_i^j} v \iff l_i(u) \prec_{\sigma^j} l_i(v)$} \ENDIF \ENDIF \end{algorithmic} \end{construction} \begin{claim} \label{claimBoxliDegeneracy} $\mathcal{F} = \bigcup_{i=1}^{2k} \mathcal{F}_i$ is a pairwise-suitable family of permutations for $G$. \end{claim} Let $\{a,b\}, \{c,d\}$ be two disjoint edges in $G$. Let $C_i$ be the star forest which contains the edge $\{a, b \}$. We will show that one of the permutations in $\mathcal{F}_i$ constructed above separates these two edges. Since the edge $\{a, b\}$ is present in $C_i$ for some $i \in [2k]$, the vertices $a$ and $b$ belong to the same star, say $S$, of $C_i$, with one of them, say $a$, as the root of $S$. If the vertices $c$ and $d$ are not in $S$, then $3$-suitability among the stars (blocks) is sufficient to separate the two edges. If $c$ and $d$ are in $S$, then the $3$-suitability within the leaves of $S$ suffices. If only one of $c$ or $d$ is in $S$, then the $3$-suitability among the leaves is sufficient to realise the separation of the two edges in one of the two corresponding permutations of the blocks. The details follow. \setcounter{case}{0} \begin{case}[$c,d \in V(S)$] Then by Property \ref{property1Degeneracy}, $L_i(a) = L_i(b) = L_i(c) = L_i(d)$. Since $\mathcal{E} = \{\sigma^1 , \ldots , \sigma^r\}$ is a $3$-suitable family of permutations for $B= \{b_1, \ldots , b_n \}$, there exists a permutation, say $\sigma^j \in \mathcal{E}$, such that $\{l_i(c), l_i(d)\} \prec_{\sigma^j} \{l_i(b)\}$. Then, from Construction \ref{constructionBoxliDegeneracy}, we have $\{c,d\} \prec_{\sigma_i^j} b$. Since $a$ is the root vertex of the star $S$ in $C_i$, we also have $u \prec_{\sigma_i^j} a$ for all $u \in V(S) \setminus \{a\}$. Thus, $\{c,d\} \prec_{\sigma_i^j} \{a,b\}$. \end{case} \begin{case}[only $c \in V(S)$] Then, by Property \ref{property1Degeneracy}, $L_i(a) = L_i(b) = L_i(c)$ and $L_i(c) \neq L_i(d)$. Moreover, by Property \ref{property2Degeneracy}, $l_i(a)$, $l_i(b)$ and $l_i(c)$ are distinct. Since $\mathcal{E}$ is a $3$-suitable family of permutations for $B$, there exists a $\sigma^j \in \mathcal{E}$ such that $l_i(c) \prec_{\sigma^j} l_i(b)$. Combining this with the fact that $a$ is the root vertex of $S$, using Construction \ref{constructionBoxliDegeneracy}, we get $c \prec_{\sigma_i^j} b \prec_{\sigma_i^j} a$ and $c \prec_{\overline{\sigma}_i^j} b \prec_{\overline{\sigma}_i^j} a$. Recall that $L_i(c) \neq L_i(d)$. If $L_i(d) \prec_{\sigma^j} L_i(c)$, then we get $d \prec_{\sigma_i^j} c \prec_{\sigma_i^j} b \prec_{\sigma_i^j} a$. Otherwise, we get $d \prec_{\overline{\sigma}_i^j} c \prec_{\overline{\sigma}_i^j} b \prec_{\overline{\sigma}_i^j} a$.
\end{case} \begin{case}[only $d \in V(S)$] This is similar to the previous case. \end{case} \begin{case}[$c,d \notin V(S)$] If $c$ and $d$ belong to the same star in $C_i$, say $S'$, then by Property \ref{property1Degeneracy}, we have $L_i(a) = L_i(b)$, $L_i(c) = L_i(d)$, and $L_i(a) \neq L_i(c)$. Then for any $j \in [r]$, either $L_i(a) \prec_{\sigma^j} L_i(c)$ or $L_i(c) \prec_{\sigma^j} L_i(a)$. Therefore, either $\{a,b\} \prec_{\sigma_i^j} \{c,d\}$ or $\{c,d\} \prec_{\sigma_i^j} \{a,b\}$. If $c$ and $d$ belong to different stars in $C_i$, then Property \ref{property1Degeneracy} ensures that $L_i(c)$, $L_i(d)$ and $L_i(a)$ are distinct. Since $\mathcal{E}$ is a $3$-suitable family of permutations for $B$, there exists a $\sigma^j \in \mathcal{E}$ such that $\{L_i(c), L_i(d)\} \prec_{\sigma^j} L_i(a)$. This, combined with Construction \ref{constructionBoxliDegeneracy}, implies that $\{c,d\} \prec_{\sigma_i^j} \{a,b\}$. \end{case} This proves Claim \ref{claimBoxliDegeneracy}, and consequently $\pi(G) \leq |\mathcal{F}| = \sum_{i=1}^{2k}|\mathcal{F}_i| = 4kr = 4k\floor{ \log \log n + \frac{1}{2}\log \log \log n + \log(\sqrt{2} \pi) + o(1) }$. \end{proof} \subsubsection*{Tightness of Theorem \ref{theoremBoxliDegeneracy}} Let $K_n^{1/2}$ denote the graph obtained by subdividing every edge of a complete graph on $n$ vertices. Note that $K_n^{1/2}$ is $2$-degenerate. In Theorem \ref{theoremKnHalf} in Section \ref{sectionSubdividedClique}, it is shown that $\pi(K_n^{1/2}) \in \Theta(\log \log n)$. Hence the $\log \log n$ factor in Theorem \ref{theoremBoxliDegeneracy} cannot be brought down in general. \subsection{Treewidth} \label{sectionTreewidth} \begin{definition} \label{defintionTreewdith} A \emph{tree decomposition} of a graph $G$ is a pair $(\{X_i : i \in I\}, T)$, where $I$ is an index set, $\{X_i : i \in I\}$ is a collection of subsets of $V(G)$, and $T$ is a tree on $I$ such that \begin{enumerate}[(i)] \item $\bigcup_{i \in I} X_i = V(G)$, \item $\forall \{u, v\} \in E(G), \exists i \in I$ such that $u, v \in X_i$, and \item $\forall i, j, k \in I$: if $j$ is on the path in $T$ from $i$ to $k$, then $X_i \cap X_k \subseteq X_j$. \end{enumerate} The {\em width} of a tree decomposition $(\{X_i~:~i \in I\}, T)$ is $\max_{i \in I} |X_i| -1$. The \emph{treewidth} of $G$ is the minimum width over all tree decompositions of $G$ and is denoted by $\operatorname{tw}(G)$. \end{definition} \begin{definition} \label{definitionOrderedTreeDecomposition} A tree decomposition $(\{X_i\}_{i \in V(T)}, T)$ of a graph $G$, such that $T$ has a designated root, denoted by $root(T)$, and a fixed ordering on the children of every node is called an {\em ordered tree decomposition}. By $preorder(i)$ and $postorder(i)$ we denote, respectively, the first and last time that a node $i \in V(T)$ is visited by a depth-first traversal of $T$ starting from $root(T)$. For every node $i \in V(T)$, the distance from $root(T)$ in $T$ is called its {\em level} and denoted by $level(i)$. For a vertex $v \in V(G)$, $bag(v)$ denotes the node $i \in V(T)$ at the smallest level such that $v \in X_i$. Finally, $T(v)$ denotes the subtree of $T$ induced by $bag(v)$ and all its descendants. \end{definition} It follows from the above definition that for every $u, v \in V(G)$ either $T(u)$ and $T(v)$ are disjoint or one is contained in the other, depending on whether or not one of $bag(u)$ and $bag(v)$ is a descendant of the other. Hence the following observation is immediate. We use $T(u) \subseteq T(v)$ to denote that $T(u)$ is contained in $T(v)$.
\begin{observation} \label{observationSubtrees} Let $(\{X_i\}_{i \in V(T)}, T)$ be an ordered tree decomposition of a graph $G$. For every $\{u, v\} \in E(G)$, either $T(u) \subseteq T(v)$ or $T(v) \subseteq T(u)$. \end{observation} \begin{definition} \label{definitionPSplittingPreorder} Let $\mathcal{T} = (\{X_i\}_{i \in V(T)}, T)$ be an ordered tree decomposition of a graph $G$ and let $P = (V_1, V_2)$ be a bipartition of $V(G)$, i.e., $V_1 \uplus V_2 = V(G)$. We define a function $f: V(G) \rightarrow \mathbb{N}$ as follows. \[ f(v) = \begin{cases} preorder(bag(v)), & \textnormal{if } v \in V_1, \\ postorder(bag(v)), & \textnormal{if } v \in V_2. \end{cases} \] A permutation $\sigma$ of $V(G)$ is called {\em $P$-splitting} if $f(u) < f(v) \implies u \prec_{\sigma} v$. \end{definition} \begin{theorem} \label{theoremBoxliTreewidth} Let $G$ be a graph of treewidth $t$. Then $\pi(G) \leq 15.68 \ceil{\log(t+1)} + 2$. \end{theorem} \begin{proof} Let $\mathcal{T} = (\{X_i\}_{i \in V(T)}, T)$ be an ordered tree decomposition of $G$ of width $t$. Let $G'$ be a supergraph of $G$ obtained by adding an edge between every pair of vertices that appear together in some bag $X_i, i \in V(T)$. Hence the treewidth of $G'$ is also $t$ and so its chromatic number is at most $t+1$. Let $c : V(G') \rightarrow [t+1]$ be a proper colouring of $G'$. In what follows, we prove the theorem for $G'$; since $G'$ is a supergraph of $G$, the theorem then follows by Observation \ref{observationMonotonicity}. Let $K_{t+1}$ be the complete graph on the vertex set $[t+1]$ and let $\mathcal{E}$ be a smallest family of permutations that is pairwise suitable and $3$-mixing for $K_{t+1}$. By Theorem \ref{theoremBoxliSize}, we know that $|\mathcal{E}| \leq 6.84 \log(t+1)$. For $\sigma \in \mathcal{E}$, let $(V(G), \lhd_{\sigma})$ be the partial order in which $u \lhd_{\sigma} v \iff c(u) \prec_{\sigma} c(v)$. Let $\tau(\sigma)$ and $\tau'(\sigma)$ be two linear extensions of $(V(G), \lhd_{\sigma})$ such that for two distinct vertices $u, v \in V(G)$ with $c(u) = c(v)$, we have $u \prec_{\tau(\sigma)} v \iff v \prec_{\tau'(\sigma)} u$. Let $\mathcal{F}_1 = \{\tau(\sigma), \tau'(\sigma)\}_{\sigma \in \mathcal{E}}$. Consider two disjoint edges $\{u_1, u_2\}$ and $\{u_3, u_4\}$ of $G'$. Let $C = \{c(u_i): i \in [4]\}$. If $|C| = 4$, that is, if all the four end points have different colours, then consider the permutation $\sigma \in \mathcal{E}$ that separates $\{c(u_1), c(u_2)\}$ from $\{c(u_3), c(u_4)\}$. It is easy to see that $\{u_1, u_2\}$ is separated from $\{u_3, u_4\}$ in both $\tau(\sigma)$ and $\tau'(\sigma)$. If $|C| = 3$, then without loss of generality, we can assume that $c(u_1) = c(u_3)$. Since $\mathcal{E}$ is $3$-mixing for $K_{t+1}$, there exists a permutation $\sigma \in \mathcal{E}$ such that $c(u_1)$ is between $c(u_2)$ and $c(u_4)$ in $\sigma$. Hence $\{u_1, u_2\}$ and $\{u_3, u_4\}$ are separated in exactly one of $\tau(\sigma)$ or $\tau'(\sigma)$. It remains to consider the case $|C| = 2$. In this case, we construct a different family of permutations. Let $\mathcal{P}$ be a family of bipartitions of $V(G)$ such that for every pair of distinct colours $i, j \in [t+1]$, there exists a partition $(V_1, V_2) \in \mathcal{P}$ with $c^{-1}(i) \subseteq V_1$ and $c^{-1}(j) \subseteq V_2$. It is easy to see that we can have such a family of size $2 \ceil{ \log(t+1)}$ by partitioning $V(G)$ based on the bits of a binary encoding of the colours.
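As an illustration, here is a minimal Python sketch (ours, not part of the proof) of this bit-based construction of $\mathcal{P}$:
\begin{verbatim}
# Build the family P of bipartitions from the bits of the colours c(v) in [t+1].
from math import ceil, log2

def bit_partitions(vertices, c, t):
    P = []
    for b in range(ceil(log2(t + 1))):
        V1 = {v for v in vertices if ((c(v) - 1) >> b) & 1 == 0}
        V2 = set(vertices) - V1
        P.append((V1, V2))  # colours whose b-th bit is 0 land in V1
        P.append((V2, V1))  # the reversed partition, to cover both orders
    return P                # |P| = 2 * ceil(log2(t+1))
\end{verbatim}
For any two distinct colours $i, j \in [t+1]$, their binary encodings differ in some bit $b$, and one of the two partitions recorded for $b$ places $c^{-1}(i)$ inside $V_1$ and $c^{-1}(j)$ inside $V_2$, as required.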
For a bipartition $P$ of $V(G)$, let $\sigma(P)$ denote the $P$-splitting permutation of $V(G)$ as in Definition \ref{definitionPSplittingPreorder}. In particular, $\sigma_{pre} = \sigma((V(G), \emptyset))$ and $\sigma_{post} = \sigma((\emptyset, V(G)))$. Finally, let $\mathcal{F}_2 = \{\sigma((V_1, V_2)) : (V_1, V_2) \in \mathcal{P} \} \cup \{\sigma_{pre}, \sigma_{post} \}$. Since $|C| = 2$ we can assume without loss of generality that $c(u_1) = c(u_3) = i$ and $c(u_2) = c(u_4) = j$, $i \neq j$. Let $(V_1, V_2) \in \mathcal{P}$ be the bipartition such that $c^{-1}(i) \subseteq V_1$ and $c^{-1}(j) \subseteq V_2$. Similarly let $(U_1, U_2) \in \mathcal{P}$ be the partition such that $c^{-1}(i) \subseteq U_2$ and $c^{-1}(j) \subseteq U_1$. Let $\sigma_{ij} = \sigma((V_1, V_2))$ and $\sigma_{ji} = \sigma((U_1, U_2))$. We claim that one of the permutations from $\{\sigma_{ij}, \sigma_{ji}, \sigma_{pre}, \sigma_{post} \}$ will separate $\{u_1, u_2\}$ from $\{u_3, u_4\}$. Without loss of generality we can assume that $level(bag(u_1)) \leq level(bag(u_i)),~ \forall i \in [4]$. So $T(u_2) \subseteq T(u_1)$ (Observation \ref{observationSubtrees}). If $T(u_3) \cup T(u_4)$ is disjoint from $T(u_1)$ then $\sigma_{pre}$ separates $\{u_1, u_2\}$ from $\{u_3, u_4\}$. So we can assume $T(u_3) \cup T(u_4) \subseteq T(u_1)$. If $preorder(bag(u_2)) < preorder(bag(u_i)),~\forall i \in \{3, 4 \}$, then $\sigma_{pre}$ will separate $\{u_1, u_2\}$ from $\{u_3, u_4\}$. Similarly if $postorder(bag(u_2)) > postorder(bag(u_i)),~\forall i \in \{3,4\}$, then $\sigma_{post}$ will separate them. Hence we can further assume that $T(u_2) \subseteq T(u_3) \cup T(u_4)$. Since $u_1$ and $u_2$ are adjacent and $c(u_1) = c(u_3)$, we cannot have $T(u_2) \subseteq T(u_3)$: every bag containing both $u_1$ and $u_2$ lies in $T(u_2)$, and if $T(u_2) \subseteq T(u_3)$ held, the path in $T$ from such a bag to $bag(u_1)$ would pass through $bag(u_3)$, forcing $u_1 \in X_{bag(u_3)}$ and making $u_1$ and $u_3$ adjacent in $G'$, which contradicts $c(u_1) = c(u_3)$. Since $u_3$ and $u_4$ are adjacent, we get $T(u_2), T(u_3) \subsetneq T(u_4) \subseteq T(u_1)$ with $T(u_2) \not\subseteq T(u_3)$. Since $c(u_2) = c(u_4)$, by a similar argument, $T(u_3) \not\subseteq T(u_2)$. Hence we can conclude that $T(u_2) \cap T(u_3) = \emptyset$. Now if $postorder(bag(u_2)) < preorder(bag(u_3))$, then $\sigma_{ij}$ separates $\{u_1, u_2\}$ from $\{u_3, u_4\}$. Otherwise, $postorder(bag(u_3)) < preorder(bag(u_2))$ and therefore $\sigma_{ji}$ does the required separation. Hence we conclude that $\mathcal{F}_1 \cup \mathcal{F}_2$ is a pairwise suitable family of permutations for $G'$ and hence for $G$. Therefore $\pi(G) \leq 15.68 \ceil{\log(t+1)} + 2$. \end{proof} \subsubsection*{Tightness of Theorem \ref{theoremBoxliTreewidth}} For a complete graph $K_n$, $\operatorname{tw}(K_n) = n-1$. By Corollary \ref{corollaryBoxliOmega} in Section \ref{sectionLowerBound}, we have $\pi(K_n) \geq \log \floor {n/2}$. Hence, Theorem \ref{theoremBoxliTreewidth} is tight up to a constant factor. \subsection{Acyclic and star chromatic number} \begin{definition} \label{definitionAcyclicStarChromatic} The \emph{acyclic chromatic number} of a graph $G$, denoted by $\chi_a(G)$, is the minimum number of colours needed to do a proper colouring of the vertices of $G$ such that the graph induced on the vertices of every pair of colour classes is acyclic. The \emph{star chromatic number} of a graph $G$, denoted by $\chi_s(G)$, is the minimum number of colours needed to do a proper colouring of the vertices of $G$ such that the graph induced on the vertices of every pair of colour classes is a star forest.
\end{definition} Recall (from Section \ref{sectionDegeneracy}) that a star forest is a disjoint union of stars. Clearly, $\chi_s(G) \geq \chi_a(G) \geq \chi(G)$, where $\chi(G)$ denotes the chromatic number of $G$. In order to bound $\pi(G)$ in terms of $\chi_a(G)$ and $\chi_s(G)$, we first bound $\pi(G)$ for forests and star forests. The required result then follows from an application of Lemma \ref{lemmaMaxPairsParts} from Section \ref{sectionBoxliSize}. \begin{lemma} \label{lemmaBoxliStars} For a star forest $G$, $\pi(G) = 1$. \end{lemma} \begin{proof} Let $S_1, \ldots , S_r$ be the collection of stars that form $G$. Let $\sigma$ be a permutation of $V(G)$ which satisfies $V(S_1) \prec_{\sigma} \cdots \prec_{\sigma} V(S_r)$. It is easy to verify that $\{\sigma\}$ is pairwise suitable for $G$. \end{proof} \begin{lemma} \label{lemmaBoxliForests} For a forest $G$, $\pi(G) \leq 2$. \end{lemma} \begin{proof} Let $T_1, \ldots , T_r$ be the collection of trees that form $G$. Convert each tree $T_i$ to an ordered tree by arbitrarily choosing a root vertex for $T_i$ and assigning an arbitrary order to the children of each vertex. Let $\sigma_1, \sigma_2$ be two permutations of $V(G)$ defined as explained below. Consider a vertex $u \in V(T_i)$ and a vertex $v \in V(T_j)$, where $i,j \in [r]$. If $i \neq j$, then $u \prec_{\sigma_1} v \iff i < j$ and $u \prec_{\sigma_2} v \iff i < j$. Otherwise, $u \prec_{\sigma_1} v$ if and only if $u$ precedes $v$ in a preorder traversal of the ordered tree $T_i$, and $u \prec_{\sigma_2} v$ if and only if $u$ precedes $v$ in a postorder traversal of the ordered tree $T_i$. It is left to the reader to verify that $\{\sigma_1, \sigma_2\}$ forms a pairwise suitable family of permutations for $G$. \end{proof} \begin{theorem} \label{theoremBoxliAcyclicStar} For a graph $G$, $\pi(G) \leq 2\chi_a(G) + 13.68\log(\chi_a(G))$. Further, $\pi(G) \leq \chi_s(G) + 13.68\log(\chi_s(G))$. \end{theorem} \begin{proof} The theorem follows directly from Lemma \ref{lemmaMaxPairsParts}, Lemma \ref{lemmaBoxliForests}, and Lemma \ref{lemmaBoxliStars}. \end{proof} This, together with some existing results from the literature, gives us a few easy corollaries. Alon, Mohar, and Sanders showed that a graph embeddable in a surface of Euler genus $g$ has an acyclic chromatic number in $O(g^{4/7})$ \cite{alonacyclicgenus}. It is noted by Esperet and Joret in \cite{esperet2011boxicity}, using results of Nesetril, Ossona de Mendez, Kostochka, and Thomason, that graphs with no $K_t$ minor have an acyclic chromatic number in $\order{t^2 \log t}$. Hence the following corollary. \begin{corollary} \label{corollaryBoxliGenus} \begin{enumerate}[(i)] \item For a graph $G$ with Euler genus $g$, $\pi(G) \in O(g^{4/7})$, and \item for a graph $G$ with no $K_t$ minor, $\pi(G) \in O(t^2 \log t)$. \end{enumerate} \end{corollary} \subsection{Planar graphs} Since planar graphs have acyclic chromatic number at most $5$ \cite{borodin1979acyclic}, it follows from Theorem \ref{theoremBoxliAcyclicStar} that, for every planar graph $G$, $\pi(G) \leq 42$. Using Schnyder's celebrated result on non-crossing straight line plane drawings of planar graphs, we improve this bound to the best possible. \begin{theorem}[Schnyder, Theorem $1.1$ in \cite{schnyder1990embedding}] \label{theoremSchnyder} Let $\lambda_1$, $\lambda_2$, $\lambda_3$ be three pairwise non-parallel straight lines in the plane.
Then each plane graph has a straight line embedding in which any two disjoint edges are separated by a straight line parallel to $\lambda_1$, $\lambda_2$ or $\lambda_3$. \end{theorem} This immediately gives us the following tight bound for planar graphs. \begin{theorem} \label{theoremBoxliPlanar} The separation dimension of a planar graph is at most $3$. Moreover, there exist planar graphs with separation dimension $3$. \end{theorem} \begin{proof} Consider the following three pairwise non-parallel lines in $\mathbb{R}^2$: $\lambda_1 = \{(x,y) : y = 0, x \in \mathbb{R} \} $, $\lambda_2 = \{(x,y) : x = 0, y \in \mathbb{R} \}$ and $\lambda_3 = \{(x,y): x,y \in \mathbb{R}, x+y = 0\}$. Let $f: V(G) \rightarrow \mathbb{R}^2$ be an embedding such that any two disjoint edges in $G$ are separated by a straight line parallel to $\lambda_1$, $\lambda_2$ or $\lambda_3$. For every vertex $v$, let $v_x$ and $v_y$ denote the projections of $f(v)$ onto the $x$ and $y$ axes respectively. Construct $3$ permutations $\sigma_1, \sigma_2, \sigma_3$ such that $u_x < v_x \implies u \prec_{\sigma_1} v$, $u_y < v_y \implies u \prec_{\sigma_2} v$, and $u_x + u_y < v_x + v_y \implies u \prec_{\sigma_3} v$, with ties broken arbitrarily. Now it is easy to verify that any two disjoint edges of $G$ separated by a straight line parallel to $\lambda_i$ in the embedding $f$ will be separated in $\sigma_i$. Tightness of the theorem follows by considering $K_4$, the complete graph on $4$ vertices, which is planar. Any single permutation of its $4$ vertices separates exactly one pair of disjoint edges. Since $K_4$ has $3$ pairs of disjoint edges, we need exactly $3$ permutations. \end{proof} \subsection{Subdivisions of graphs} \begin{definition} A graph $G'$ is called a {\em subdivision} of a graph $G$ if $G'$ is obtained from $G$ by replacing a subset of edges of $G$ with independent paths between their ends such that none of these new paths has an inner vertex on another path or in $G$. A subdivision of $G$ where every edge of $G$ is replaced by a $k$-length path is denoted as $G^{1/k}$. The graph $G^{1/2}$ is called {\em fully subdivided} $G$. \end{definition} The main result in this section is an upper bound for $\pi(G^{1/2})$ in terms of $\chi(G)$, where $\chi(G)$ denotes the chromatic number of $G$. It is easy to see that the acyclic chromatic number of $G^{1/k}$ for $k \geq 3$ is at most $3$ for any graph $G$ (use the first two colours to properly colour the internal vertices in every path introduced by the subdivision and give the third colour to all the original vertices) \cite{wood2005acyclic}. Hence, by Theorem \ref{theoremBoxliAcyclicStar}, $\pi(G^{1/k}) \in \order{1}, \forall k > 2$. The acyclic chromatic number of $G^{1/2}$ is at most $\max \{ \chi(G), 3 \}$ \cite{wood2005acyclic} and hence $\pi(G^{1/2}) \in \order{\chi(G)}$ by Theorem \ref{theoremBoxliAcyclicStar}. We improve this easy upper bound considerably and show that $\pi(G^{1/2}) \leq (1 + o(1))\log\log \chi(G)$. In Section \ref{sectionSubdividedClique}, we come up with a different strategy to show that $\pi(K_n^{1/2}) \geq \frac{1}{2} \floor{\log \log (n-1)}$, thereby demonstrating the tightness of the above upper bound. The upper bound on $\pi(G^{1/2})$ is obtained by constructing, based on $G$, an interval order of height $\chi(G) - 1$ and then showing that its poset dimension, plus $2$, is an upper bound on $\pi(G^{1/2})$. We need some more definitions and notation before proceeding.
\begin{definition}[Poset dimension] Let $(\mathcal{P}, \lhd)$ be a poset (partially ordered set). A {\em linear extension} $L$ of $\mathcal{P}$ is a total order which satisfies $(x \lhd y \in \mathcal{P}) \implies (x \lhd y \in L)$. A {\em realiser} of $\mathcal{P}$ is a set of linear extensions of $\mathcal{P}$, say $\mathcal{R}$, which satisfies the following condition: for any two distinct elements $x$ and $y$, $x\lhd y \in \mathcal{P}$ if and only if $x \lhd y \in L$, $\forall L \in \mathcal{R}$. The \emph{poset dimension} of $\mathcal{P}$, denoted by $\operatorname{dim}(\mathcal{P})$, is the minimum integer $k$ such that there exists a realiser of $\mathcal{P}$ of cardinality $k$. \end{definition} \begin{definition}[Interval dimension] An \emph{open interval} on the real line, denoted as $(a,b)$, where $a,b \in \mathbb{R}$ and $a < b$, is the set $\{x \in \mathbb{R} : a < x < b\}$. For a collection $C$ of open intervals on the real line, the partial order $(C, \lhd)$ defined by the relation $(a,b) \lhd (c,d)$ if $b \leq c$ in $\mathbb{R}$ is called the {\em interval order} corresponding to $C$. The poset dimension of this interval order $(C,\lhd)$ is called the \emph{interval dimension} of $C$ and is denoted by $\operatorname{dim}(C)$. \end{definition} \begin{theorem} \label{theoremSubdivisionIntervalOrder} For any graph $G$ and a permutation $\sigma$ of $V(G)$, let $C_{G, \sigma}$ denote the collection of open intervals $(\sigma(u), \sigma(v)), \{u,v\} \in E(G), u \prec_{\sigma} v$. Then, $$ \pi(G^{1/2}) \leq \min_{\sigma} \operatorname{dim} ( C_{G,\sigma} ) + 2,$$ where the minimisation is done over all possible permutations $\sigma$ of $V(G)$. \end{theorem} \begin{proof} Let $\sigma$ be any permutation of $V(G)$. We relabel the vertices of $G$ so that $v_1 \prec_{\sigma} \cdots \prec_{\sigma} v_n$, where $n = |V(G)|$. For every edge $e = \{v_i, v_j\} \in E(G), i < j$, the new vertex in $G^{1/2}$ introduced by subdividing $e$ is denoted as $u_{ij}$. For a new vertex $u_{ij}$, its two neighbours $v_i$ and $v_j$ will be called the {\em left neighbour} and the {\em right neighbour} of $u_{ij}$, respectively. We call an edge of the form $\{v_i, u_{ij}\}$ a {\em left edge} and one of the form $\{u_{ij}, v_j\}$ a {\em right edge}. Let $\mathcal{R} = \{L_1, \ldots, L_d\}$ be a realiser for $(C_{G, \sigma}, \lhd)$ such that $d = \operatorname{dim}(C_{G,\sigma})$. For each total order $L_p, p \in [d]$, we construct a permutation $\sigma_p$ of $V(G^{1/2})$ as follows. First, the subdivided vertices are ordered from left to right as the corresponding intervals are ordered in $L_p$, i.e., $u_{ij} \prec_{\sigma_p} u_{kl} \iff (i,j) \prec_{L_p} (k,l)$. Next the original vertices are introduced into the order one by one as follows. The vertex $v_1$ is placed as the leftmost vertex. Once all the vertices $v_i, i < j$ are placed, we place $v_j$ at the leftmost possible position so that $v_{j-1} \prec_{\sigma_p} v_j$ and $u_{ij} \prec_{\sigma_p} v_j, \forall i <j$. This ensures that $v_j \prec_{\sigma_p} u_{jk}, \forall k >j$, because $u_{ij'} \prec_{\sigma_p} u_{jk}, \forall j' \leq j$ (since $(i,j') \lhd (j,k)$). Now we construct two more permutations $\sigma_{d+1}$ and $\sigma_{d+2}$ as follows. In both of them, first the original vertices are ordered as $v_1 \prec \cdots \prec v_n$. In $\sigma_{d+1}$, each subdivided vertex is placed immediately after its left neighbour, i.e., $v_i \prec_{\sigma_{d+1}} u_{ij} \prec_{\sigma_{d+1}} v_{i+1}$ for all $\{i, j \} \in E(G)$.
In $\sigma_{d+2}$, each subdivided vertex is placed immediately before its right neighbour, i.e., $v_{j-1} \prec_{\sigma_{d+2}} u_{ij} \prec_{\sigma_{d+2}} v_{j}$ for all $\{i, j \} \in E(G)$. Notice that in all the permutations constructed so far, the left (right) neighbour of every subdivided vertex is placed to its left (right). We complete the proof by showing that $\mathcal{F} = \{\sigma_1, \ldots, \sigma_{d+2}\}$ is pairwise suitable for $G^{1/2}$ by analysing the following cases. Any two disjoint left edges are separated in $\sigma_{d+1}$ and any two disjoint right edges are separated in $\sigma_{d+2}$. If $(i,j) \lhd (k,l)$, then every pair of disjoint edges among those incident on $u_{ij}$ or $u_{kl}$ is separated in every permutation in $\mathcal{F}$. Hence the only non-trivial case is when we have a left edge $\{v_i, u_{ij}\}$ and a right edge $\{u_{kl}, v_l\}$ such that $(i,j) \cap (k,l) \neq \emptyset$. Since $(i,j)$ and $(k,l)$ are incomparable in $(C_{G, \sigma}, \lhd)$, there exists a permutation $\sigma_p, p \in [d]$, such that $u_{ij} \prec_{\sigma_p} u_{kl}$. Since $v_i$ is before $u_{ij}$ and $v_l$ is after $u_{kl}$ in every permutation, $\sigma_p$ separates $\{v_i, u_{ij}\}$ from $\{u_{kl}, v_l\}$. \end{proof} The {\em height} of a partial order is the size of a largest chain in it. It was shown by F\"{u}redi, Hajnal, R\"{o}dl and Trotter \cite{furedi1991interval} that the dimension of an interval order of height $h$ is at most $\log\log h + (\frac{1}{2} + o(1))\log\log\log h$ (see also Theorem $9.6$ in \cite{trotter1997new}). The next corollary uses this result along with Theorem \ref{theoremSubdivisionIntervalOrder}. \begin{corollary} \label{corollarySubdivisionChromaticNumber} For a graph $G$ with chromatic number $\chi(G)$, $$ \pi(G^{1/2}) \leq \log\log (\chi(G)-1) + \left( \frac{1}{2} + o(1) \right) \log\log\log (\chi(G)-1) + 2. $$ \end{corollary} \begin{proof} Let $V_1, \ldots, V_{\chi(G)}$ be the colour classes of an optimal proper colouring of $G$. Let $\sigma$ be a permutation of $V(G)$ such that $V_1 \prec_{\sigma} \cdots \prec_{\sigma} V_{\chi(G)}$. Now it is easy to see that the longest chain in $(C_{G,\sigma}, \lhd)$ is of length at most $\chi(G) - 1$. Hence the result follows from that of F\"{u}redi et al. \cite{furedi1991interval} and Theorem \ref{theoremSubdivisionIntervalOrder} above. \end{proof} \subsubsection*{Tightness of Corollary \ref{corollarySubdivisionChromaticNumber}} Theorem \ref{theoremKnHalf} in Section \ref{sectionSubdividedClique} proves that $\pi(K_n^{1/2}) \geq \frac{1}{2} \floor{\log\log(n-1) }$. Hence the upper bound in Corollary \ref{corollarySubdivisionChromaticNumber} is tight up to a constant factor. \subsection{Hypercube} \label{sectionHypercube} \begin{definition} \label{definitionHypercube} For a positive integer $d$, the {\em $d$-dimensional hypercube} $Q_d$ is the graph with $2^d$ vertices where each vertex $v$ corresponds to a distinct $d$-bit binary string $g(v)$ such that two vertices $u, v \in V(Q_d)$ are adjacent if and only if $g(u)$ differs from $g(v)$ at exactly one bit position. Let $g_i(v)$ denote the $i$-th bit from the right in $g(v)$, where $i \in [d]$. The number of ones in $g(v)$ is called the {\em Hamming weight} of $v$ and is denoted by $h(v)$.
\end{definition} \begin{observation} \label{observationHypercubeNonadjacentEdges} Let $a,b,c,$ and $d$ be four distinct vertices in the hypercube $Q_d$ with $\{a,b\}, \{c,d\} \in E(Q_d)$ such that $g(a)$ and $g(b)$ differ only in the $i$-th bit position from the right and $g(c)$ and $g(d)$ differ only in the $j$-th position from the right. Then there exists some $k \in [d] \setminus \{i,j\}$ such that $g_k(a)$ ($=g_k(b)$) differs from $g_k(c)$ ($=g_k(d)$). \end{observation} \begin{proof} Assume for contradiction that, for every $k \in [d] \setminus \{i,j\}$, $g_k(a) = g_k(b) = g_k(c) = g_k(d)$. If $i=j$, then there can be only $2$ distinct binary strings among $\{g(a), g(b), g(c), g(d) \}$. If $i \neq j$, then there can be only $3$ distinct binary strings among $\{g(a), g(b), g(c), g(d) \}$, since the $i$-th and $j$-th bit positions from the right cannot simultaneously be $1 - g_i(c)$ and $1-g_j(a)$ respectively for any of the $4$ strings in the set. This contradicts the distinctness of $a,b,c,$ and $d$. \end{proof} \begin{theorem} \label{theoremHypercube} For the $d$-dimensional hypercube $Q_d$, $$ \frac{1}{2} \floor{\log\log(d-1)} \leq \pi(Q_d) \leq \floor{ \log\log d + \frac{1}{2} \log\log\log d + \log(\sqrt{2}\pi) + o(1) } .$$ \end{theorem} \begin{proof} Let $H$ be the subgraph of $Q_d$ induced on the vertex set $V(H) = \{v \in V(Q_d)~:~h(v) \in \{1,2\}\}$. Observe that $H$ is isomorphic to $K_d^{1/2}$ and therefore, by Theorem \ref{theoremKnHalf}, $\pi(H) \geq \frac{1}{2}\floor{ \log\log (d-1) }$. Hence the lower bound follows by Observation \ref{observationMonotonicity}. Next we show the upper bound by using $3$-suitable permutations of the bit positions. Let $\mathcal{E} = \{ \sigma_1, \ldots , \sigma_r \}$ be a smallest $3$-suitable family of permutations of $[d]$. From \cite{scramble}, we know that $r \leq \floor{ \log \log d + \frac{1}{2}\log \log \log d + \log(\sqrt{2} \pi) + o(1) }$. For a permutation $\sigma \in \mathcal{E}$ and a pair $u,v \in V(Q_d)$, let $i_{\sigma}(u,v)$ denote the index $i \in [d]$ with $g_i(u) \neq g_i(v)$ for which $\sigma(i)$ is largest, i.e., the rightmost bit position at which $u$ and $v$ differ once the bit positions are permuted according to $\sigma$. From $\mathcal{E}$, we construct a family of permutations $\mathcal{F} = \{\tau_1, \ldots , \tau_r\}$ that is pairwise suitable for $Q_d$. The permutation $\tau_j$ is constructed by first permuting the bit positions of all the binary strings according to $\sigma_j$ and then reading out the vertices in the right-to-left lexicographic order of the bit strings. That is, for $u, v \in V(Q_d)$, $u \prec_{\tau_j} v$ if $g_i(u) < g_i(v)$, where $i = i_{\sigma_j}(u,v)$. In order to show that $\mathcal{F}$ is a pairwise suitable family of permutations for $Q_d$, consider two disjoint edges $\{a,b\}$, $\{c,d\}$ in $Q_d$ such that $g(a)$ and $g(b)$ differ only in the $l$-th position from the right and $g(c)$ and $g(d)$ differ only in the $m$-th position from the right. Then, from Observation \ref{observationHypercubeNonadjacentEdges}, we know that there exists a $k \in [d] \setminus \{l,m\}$ such that $g_k(a)$ ($=g_k(b)$) differs from $g_k(c)$ ($=g_k(d)$). Since $\mathcal{E}$ is a $3$-suitable family of permutations for $[d]$, there exists a $\sigma_s \in \mathcal{E}$ such that $ \{l,m\} \prec_{\sigma_s} k$. That is, $\sigma_s(l) < \sigma_s(k)$ and $\sigma_s(m) < \sigma_s(k)$. Hence, $\sigma_s(i_{\sigma_s}(u,v)) \geq \sigma_s(k)$ for all $u \in \{a,b\}$ and $v \in \{c,d\}$.
It then follows from the definition of $\tau_s$ that either $\{a,b\} \prec_{\tau_s} \{c,d\}$ or $\{c,d\} \prec_{\tau_s} \{a,b\}$. \end{proof} \section{Lower bounds} \label{sectionLowerBound} The tightness of many of the upper bounds we showed in the previous section relies on the lower bounds we derive in this section. First, we show that if a graph contains a uniform bipartite subgraph, then it needs a large separation dimension. This immediately gives a lower bound on the separation dimension for complete bipartite graphs and hence a lower bound for every graph $G$ in terms of $\omega(G)$. The same is used to obtain a lower bound on the separation dimension for random graphs of all densities. Finally, it is used as a critical ingredient in proving a lower bound on the separation dimension for complete $r$-uniform hypergraphs. Before we close this section we give a lower bound on the separation dimension of $K_n^{1/2}$ using the Erd\H{o}s-Szekeres Theorem and a lower bound on the poset dimension of canonical interval orders. \subsection{Uniform bipartitions} \label{sectionLowerBoundBipartition} \begin{theorem} \label{theoremBoxliLowerBound} For a graph $G$, let $V_1, V_2 \subsetneq V(G)$ be such that $V_1 \cap V_2 = \emptyset$. If there exists an edge between every $s_1$-subset of $V_1$ and every $s_2$-subset of $V_2$, then $\pi(G) \geq \min \left\{ \log \frac{|V_1|}{s_1}, \log \frac{|V_2|}{s_2} \right\}$. \end{theorem} \begin{proof} Let $\mathcal{F}$ be a family of permutations of $V(G)$ that is pairwise suitable for $G$. Let $r = |\mathcal{F}|$. We claim that, for any $\sigma \in \mathcal{F}$, there always exist an $S_1 \subseteq V_1$ and an $S_2 \subseteq V_2$ such that $|S_1| \geq \ceil{ |V_1|/2 }, |S_2| \geq \ceil{ |V_2|/2 }$ and $S_1 \prec_{\sigma} S_2$ or $S_2 \prec_{\sigma} S_1$. To see this, scan $V(G)$ in the order of $\sigma$ until we have seen $\ceil{|V_1|/2 }$ elements from $V_1$ or $\ceil{|V_2|/2}$ elements of $V_2$, whichever happens first. In the former case the first $\ceil {|V_1|/2 }$ elements of $V_1$ precede at least $\ceil{ |V_2|/2 }$ elements of $V_2$, and in the latter case the first $\ceil{|V_2|/2}$ elements of $V_2$ precede at least $\ceil{|V_1|/2}$ elements of $V_1$. Extending this claim recursively to all permutations in $\mathcal{F}$, we see that there always exist a $T_1 \subseteq V_1$ and a $T_2 \subseteq V_2$ such that $|T_1| \geq |V_1|/2^r, |T_2| \geq |V_2|/2^r$ and, $\forall \sigma \in \mathcal{F}$, either $T_1 \prec_{\sigma} T_2$ or $T_2 \prec_{\sigma} T_1$. We now claim that either $|T_1| \leq s_1$ or $|T_2| \leq s_2$. Suppose, for contradiction, $|T_1| \geq s_1+1$ and $|T_2| \geq s_2+1$. Then by the hypothesis of the theorem, there exists an edge $e = \{v_1, v_2\}$ of $G$ such that $v_1 \in T_1$ and $v_2 \in T_2$, and a second edge $f$ between $T_1 \setminus \{v_1\}$ and $T_2 \setminus \{v_2\}$. Since $T_1$ and $T_2$ are separated in every permutation of $\mathcal{F}$, no permutation in $\mathcal{F}$ separates the disjoint edges $e$ and $f$ between $T_1$ and $T_2$. This contradicts the fact that $\mathcal{F}$ is a pairwise suitable family for $G$. Hence, either $|V_1| / 2^r \leq |T_1| \leq s_1$ or $|V_2|/2^r \leq |T_2| \leq s_2$ or both. That is, $r \geq \min \left\{ \log \frac{|V_1|}{s_1}, \log \frac{|V_2|}{s_2} \right\}$. \end{proof} The next two corollaries are immediate. \begin{corollary} \label{corollaryCompleteBipartiteLowerBound} For a complete bipartite graph $K_{m,n}$ with $m \leq n$, $\pi(K_{m,n}) \geq \log(m)$.
\end{corollary} \begin{corollary} \label{corollaryBoxliOmega} For a graph $G$, $$\pi(G) \geq \log \floor{\frac{\omega}{2}},$$ where $\omega$ is the size of a largest clique in $G$. \end{corollary} \subsection{Random graphs} \label{sectionRandomGraphs} \begin{definition}[Erd\H{o}s-R\'{e}nyi model] $\mathcal{G}(n,p)$, $n \in \mathbb{N}$ and $0 \leq p \leq 1$, is the discrete probability space of all simple undirected graphs $G$ on $n$ vertices with each pair of vertices of $G$ being joined by an edge with probability $p$, independently of the choice for every other pair of vertices. \end{definition} \begin{definition} A property $P$ is said to hold for $\mathcal{G}(n,p)$ {\em asymptotically almost surely (a.a.s.)} if the probability that $P$ holds for $G \in \mathcal{G}(n,p)$ tends to $1$ as $n$ tends to $\infty$. \end{definition} \begin{theorem} \label{theoremBoxliLowerBoundRandom} For $G \in \mathcal{G}(n,p(n))$ $$\pi(G) \geq \log(np(n)) - \log\log(np(n)) - 2.5 \mbox{ a.a.s}.$$ \end{theorem} \begin{proof} If $np(n) \leq e^{e/4}$, then $\log(np(n)) - \log\log(np(n)) - 2.5 \leq 0$, and hence the statement is trivially true. So we can assume that $p(n) > e^{e/4} / n$. Let $s(n) = 2 \ln (np(n)) / p(n)$. Since $p(n) > e^{e/4} / n$ by assumption, $\ln (np(n)) > e/4$ and hence if $\lim_{n \rightarrow \infty} p(n) = 0$, we get $\lim_{n \rightarrow \infty} s(n) = \infty$. Otherwise, that is, when $\liminf_{n \rightarrow \infty} p(n) > 0$, we have $s(n) \geq 2 \ln(np(n))$ (as $p(n) \leq 1$), which tends to $\infty$ as $n \rightarrow \infty$. Hence in every case $\lim_{n \rightarrow \infty} s(n) = \infty$. Let $V(G) = V_1 \uplus V_2$ be a balanced partition of $V(G)$, i.e., $V_1 \cap V_2 = \emptyset$ and $|V_1|, |V_2| \geq \floor{ n/2 }$. Let $S_1 \subseteq V_1$ and $S_2 \subseteq V_2$ be such that $|S_1| = |S_2| = s(n)$. The probability that there is no edge in $G$ between $S_1$ and $S_2$ is $(1-p(n))^{s(n)^2} \leq \exp(-p(n)s(n)^2)$. Hence the probability $q(n)$ that there exist an $s(n)$-sized set from $V_1$ and an $s(n)$-sized set from $V_2$ with no edge between them is bounded above by ${n/2 \choose s(n)}^2 \exp(-p(n)s(n)^2)$. Hence, using the bound ${n \choose k} \leq (ne/k)^k$, we get \begin{eqnarray*} q(n) & \leq & \left( \frac{ne}{2s(n)} \right)^{2s(n)} \exp( -p(n)s(n)^2) \\ & = & \exp \left(2s(n) \ln \left( \frac{ne}{2s(n)} \right) - p(n)s(n)^2 \right) \\ & = & \exp \left(s(n) \left( 2\ln \left( \frac{np(n)e}{4\ln(np(n))}\right) - 2 \ln(np(n)) \right) \right) \\ & = & \exp \left(s(n) \left( 2\ln \frac{e}{4} - 2 \ln\ln(np(n)) \right) \right) \\ & = & \exp \left(-2s(n)\left(\ln\ln(np(n)) - \ln \frac{e}{4} \right) \right) \end{eqnarray*} Since $p(n) > e^{e/4} / n$, $\ln\ln(np(n)) > \ln(e/4)$, and since $\lim_{n \rightarrow \infty} s(n) = \infty$, we conclude that $\lim_{n \rightarrow \infty} q(n) = 0$. With probability at least $1 - q(n)$, every pair of subsets $S_1 \subseteq V_1$ and $S_2 \subseteq V_2$, each of size $s(n)$, has at least one edge between them. So by Theorem \ref{theoremBoxliLowerBound}, $\pi(G) \geq \log \left( \floor{n/2} / s(n) \right) \geq \log(np(n)) - \log\log (np(n)) - 2.5$ with probability at least $1 - q(n)$. Hence the theorem. \end{proof} Note that the expected average degree of a graph in $\mathcal{G}(n,p)$ is $\mathbb{E}_p[\bar{d}] = (n-1)p$. Hence the above bound can be written as $\log \mathbb{E}_p[\bar{d}] - \log\log \mathbb{E}_p[\bar{d}] - 2.5$. \subsection{Hypergraphs} \label{sectionLowerBoundHypergraphs} Now we illustrate one method of extending the above lower bounding technique from graphs to hypergraphs.
Let $K_n^r$ denote the complete $r$-uniform hypergraph on $n$ vertices. We show that the upper bound of $\order{4^r \sqrt{r} \log n}$ obtained for $K_n^r$ from Theorem \ref{theoremHypergraphSizeUpperbound} is tight up to a factor of $r$. The lower bound argument below is motivated by an argument used by Radhakrishnan to prove a lower bound on the size of a family of scrambling permutations \cite{radhakrishnan2003note}. \begin{theorem} \label{theoremHypergraphSizeLowerbound} Let $K_n^r$ denote the complete $r$-uniform hypergraph on $n$ vertices with $r > 2$. Then $$ c_1 \frac{4^r}{\sqrt{r-2}} \log n \leq \pi(K_n^r) \leq c_2 4^r \sqrt{r} \log n,$$ for $n$ sufficiently large compared to $r$, where $c_1 = \frac{1}{2^7}$ and $c_2 = \frac{e\ln2}{\pi\sqrt{2}} < \frac{1}{2}$. \end{theorem} \begin{proof} The upper bound follows from Theorem \ref{theoremHypergraphSizeUpperbound} and so it suffices to prove the lower bound. Let $\mathcal{F}$ be a family of pairwise suitable permutations for $K_n^r$. Let $\mathcal{S}$ be a maximal family of $(r-2)$-sized subsets of $[2r-4]$ such that if $S \in \mathcal{S}$, then $[2r-4] \setminus S \notin \mathcal{S}$. Hence $|\mathcal{S}| = \frac{1}{2} {2r-4 \choose r-2} \geq 2^{-6} 4^r / \sqrt{r-2}$ (using the fact that $\sqrt{k} {2k \choose k} \geq 2^{2k-1}$). Notice that for any permutation $\sigma \in \mathcal{F}$, if $S \in \mathcal{S}$ and $[2r-4] \setminus S$ are separated in $\sigma$, then no other $S' \in \mathcal{S}$ and $[2r-4] \setminus S'$ are separated in $\sigma$. Hence we can split $\mathcal{F}$ into $|\mathcal{S}|$ disjoint sub-families $\{\mathcal{F}_S\}_{S \in \mathcal{S}}$ such that every $\sigma$ that separates $S$ and $[2r-4] \setminus S$ lies in $\mathcal{F}_S$ (permutations that separate no such pair may be placed arbitrarily). We claim that each $\mathcal{F}_S$ is pairwise suitable for the complete graph on the vertex set $\{2r-3, \ldots, n\}$, i.e., for any distinct $a, b, c, d \in \{2r-3, \ldots, n\}$ there exists some $\sigma \in \mathcal{F}_S$ which separates $\{a, b\}$ from $\{c, d\}$. This is because the permutation $\sigma \in \mathcal{F}$ which separates the $r$-sets $S \cup \{a,b\}$ and $([2r-4] \setminus S) \cup \{c, d\}$ lies in $\mathcal{F}_S$. Hence by Corollary \ref{corollaryBoxliOmega}, we have $|\mathcal{F}_S| \geq \log \floor{(n - 2r +4)/2}$. Since $\mathcal{F} = \biguplus_{S \in \mathcal{S}} \mathcal{F}_S$, we have $|\mathcal{F}| \geq |\mathcal{S}| \min_{S \in \mathcal{S}} |\mathcal{F}_S| \geq 2^{-6} \frac{4^r}{\sqrt{r-2}} \log \floor{ (n-2r+4)/2 }$, which is at least $2^{-7} \frac{4^r}{\sqrt{r-2}} \log n$ for $n$ sufficiently large compared to $r$. \end{proof} \subsection{Fully subdivided clique} \label{sectionSubdividedClique} It easily follows from Corollary \ref{corollarySubdivisionChromaticNumber} that $\pi(K_n^{1/2}) \in O(\log \log n)$. In this section we prove that $\pi(K_n^{1/2}) \geq \frac{1}{2} \log \log (n-1)$, showing the near tightness of that upper bound. We give a brief outline of the proof below. (Definitions of the new terms are given before the formal proof.) First, we use the Erd\H{o}s-Szekeres Theorem \cite{ErdosSzekeres} to argue that for any family $\mathcal{F}$ of permutations of $V(K_n^{1/2})$ with $|\mathcal{F}| < \frac{1}{2} \log \log n$, a subset $V'$ of original vertices of $K_n^{1/2}$, with $n' = |V'| \approx 2^{\sqrt{\log n}}$, is ordered essentially in the same way by every permutation in $\mathcal{F}$.
Since the ordering of the vertices in $V'$ is fixed, the only way for $\mathcal{F}$ to realise pairwise suitability among the edges in the subdivided paths between vertices in $V'$ is to find suitable positions for the new vertices (those introduced by subdivisions) inside the fixed order of $V'$. We then show that this amounts to constructing a realiser for the canonical open interval order $(C_{n'}, \lhd)$, and hence $|\mathcal{F}|$, in this case, is lower bounded by the poset dimension of $(C_{n'}, \lhd)$, which is known to be at least $\log \log (n'-1) = \frac{1}{2} \log \log (n-1)$. \begin{definition}[Canonical open interval order] \label{definitionCanonicalOpenInterval} For a positive integer $n$, let $C_n = \{(a,b) : a, b \in [n], a < b \}$ be the collection of all the ${n \choose 2}$ open intervals which have their endpoints in $[n]$. Then $(C_n,\lhd)$, the interval order corresponding to the collection $C_n$, is called the {\em canonical open interval order}. \end{definition} Usually the canonical interval order is defined over closed intervals. For a positive integer $n$, let $I_{n} = \{[a,b]: a, b \in [n], a \leq b \}$ be the collection of all the ${n+1 \choose 2}$ closed intervals which have their endpoints in $[n]$. The poset $(I_n,\lhd')$, where $[i, j] \lhd' [k,l] \iff j < k$, is called the {\em canonical (closed) interval order} in the literature. It is easy to see that $f: (C_n, \lhd) \rightarrow (I_{n-1}, \lhd')$, with $f((i,j)) = [i, j-1]$, is an isomorphism. It is well known that the dimension of $(I_{n-1}, \lhd')$, and hence of $(C_n, \lhd)$, is at most $\log\log (n-1) + (\frac{1}{2} + o(1))\log\log\log (n-1)$. We state the lower bound below for later reference. \begin{theorem}[F\"{u}redi, Hajnal, R\"{o}dl, Trotter \cite{furedi1991interval}] \label{theoremIdimCanonicalLowerBound} $$\operatorname{dim}(C_n) \geq \log\log(n-1).$$ \end{theorem} \begin{theorem} \label{theoremKnHalf} Let $K_n^{1/2}$ denote the graph obtained by fully subdividing $K_n$. Then, $$ \frac{1}{2} \floor{\log\log(n-1)} \leq \pi(K_n^{1/2}) \leq (1 + o(1))\log\log (n-1).$$ \end{theorem} \begin{proof} The upper bound follows from Corollary \ref{corollarySubdivisionChromaticNumber}. So it suffices to show the lower bound. Let $v_1, \ldots , v_n$ denote the \textit{original vertices} (the vertices of degree $n-1$) in $K_n^{1/2}$ and let $u_{ij}$, $i, j \in [n], i < j$, denote the new vertex of degree $2$ introduced when the edge $\{i,j\}$ of $K_n$ was subdivided. Let $\mathcal{F}$ be a family of permutations that is pairwise suitable for $K_n^{1/2}$ such that $|\mathcal{F}| = r = \pi(K_n^{1/2})$. For convenience, let us assume that $n$ is exactly one more than a power of a power of $2$, i.e., $\log\log (n-1) \in \mathbb{N}$. Otherwise, the floor in the lower bound provides the necessary correction once we replace $n$ by the largest such number below it. Let $p=(n-1)^{1/2^r} + 1$. By the Erd\H{o}s-Szekeres Theorem \cite{ErdosSzekeres}, we know that if $\tau$ and $\tau'$ are two permutations of $[n^2 + 1]$, then there exists some $X \subseteq [n^2 + 1]$ with $|X|=n+1$ such that the permutations $\tau$ and $\tau'$, when restricted to $X$, are the same or the reverse of each other. By repeated application of this argument, we can see that there exists a set $X$ of $p$ original vertices of $K_n^{1/2}$ such that, for each $\sigma, \sigma' \in \mathcal{F}$, the permutation of $X$ obtained by restricting $\sigma$ to $X$ is the same or the reverse of the permutation obtained by restricting $\sigma'$ to $X$.
Without loss of generality, let $X = \{v_1, \ldots , v_p\}$ be such that, for each $\sigma \in \mathcal{F}$, either $v_1 \prec_{\sigma} \cdots \prec_{\sigma} v_p$ or $v_p \prec_{\sigma} \cdots \prec_{\sigma} v_1$. Now we ``massage'' $\mathcal{F}$ to give it two nice properties without changing its cardinality or sacrificing its pairwise suitability for $K_n^{1/2}$. Note that if a family of permutations is pairwise suitable for a graph, then the family retains this property even if any of the permutations in the family is reversed. Hence we can assume the following property without loss of generality. \setcounter{property}{0} \begin{property} \label{property1KnHalf} $v_1 \prec_{\sigma} \cdots \prec_{\sigma} v_p, \forall \sigma \in \mathcal{F}$. \end{property} Consider any $i,j \in [p], i < j$. For each $\sigma \in \mathcal{F}$, it is safe to assume that $v_i \prec_{\sigma} u_{ij} \prec_{\sigma} v_j$. Otherwise, we can modify the permutation $\sigma$ so that $\mathcal{F}$ is still a pairwise suitable family of permutations for $K_n^{1/2}$. To demonstrate this, suppose $v_i \prec_{\sigma} v_j \prec_{\sigma} u_{ij}$. Then we modify $\sigma$ such that $u_{ij}$ is the immediate predecessor of $v_j$. It is easy to verify that, for each pair of disjoint edges $e,f \in E(K_n^{1/2})$, if $e \prec_{\sigma} f$ or $f \prec_{\sigma} e$, then the same holds in the modified $\sigma$ too. Similarly, if $u_{ij} \prec_{\sigma} v_i \prec_{\sigma} v_j$, then we modify $\sigma$ such that $u_{ij}$ is the immediate successor of $v_i$. Hence we can also assume the next property without loss of generality. \begin{property} \label{property2KnHalf} $v_i \prec_{\sigma} u_{ij} \prec_{\sigma} v_j, \forall i, j \in [p], i <j,~ \forall \sigma \in \mathcal{F}$. \end{property} These two properties ensure that, for any two open intervals $(i,j)$ and $(k,l)$ in $C_p$, if $(i,j) \lhd (k,l)$ then $u_{ij} \prec_{\sigma} u_{kl}, \forall \sigma \in \mathcal{F}$. In the other case, i.e., when $(i,j) \cap (k,l) \neq \emptyset$, we make the following claim. \begin{claim} \label{claim1KnHalf} Let $i,j,k,l \in [p]$ be such that $(i,j) \cap (k,l) \neq \emptyset$. Then there exist $\sigma_a, \sigma_b \in \mathcal{F}$ such that $u_{ij} \prec_{\sigma_a} u_{kl}$ and $u_{kl} \prec_{\sigma_b} u_{ij}$. \end{claim} Since $(i,j) \cap (k,l) \neq \emptyset$, we have $k < j$ and $i < l$. Hence by Property \ref{property1KnHalf}, $\forall \sigma \in \mathcal{F}$, $v_k \prec_{\sigma} v_j$ and $v_i \prec_{\sigma} v_l$. Now we prove the claim by contradiction. If $u_{ij} \prec_{\sigma} u_{kl}$ for every $\sigma \in \mathcal{F}$ then, together with the fact that $v_k \prec_{\sigma} v_j, \forall \sigma \in \mathcal{F}$, we see that no $\sigma \in \mathcal{F}$ can separate the edges $\{v_j, u_{ij}\}$ and $\{v_k, u_{kl}\}$. But this contradicts the fact that $\mathcal{F}$ is a pairwise suitable family of permutations for $K_n^{1/2}$. Similarly, if $u_{kl} \prec_{\sigma} u_{ij}$ for every $\sigma \in \mathcal{F}$ then, together with the fact that $v_i \prec_{\sigma} v_l, \forall \sigma \in \mathcal{F}$, we see that no $\sigma \in \mathcal{F}$ can separate $\{v_i, u_{ij}\}$ and $\{v_l, u_{kl}\}$. But this too contradicts the pairwise suitability of $\mathcal{F}$. This proves Claim \ref{claim1KnHalf}. With these two properties and the claim above, we are ready to prove the following claim. \begin{claim} \label{claim2KnHalf} $|\mathcal{F}| \geq \operatorname{dim}((C_p, \lhd))$.
\end{claim} For every $\sigma \in \mathcal{F}$, construct a total order $L_{\sigma}$ of $C_p$ such that $(i,j) \lhd (k,l) \in L_{\sigma} \iff u_{ij} \prec_{\sigma} u_{kl}$. By Property \ref{property1KnHalf} and Property \ref{property2KnHalf}, $L_{\sigma}$ is a linear extension of $(C_p, \lhd)$. Further, Claim \ref{claim1KnHalf} ensures that $\mathcal{R} = \{L_{\sigma}\}_{\sigma \in \mathcal{F}}$ is a realiser of $(C_p, \lhd)$. Hence $|\mathcal{F}| = |\mathcal{R}| \geq \operatorname{dim}((C_p, \lhd))$. Now we are ready to show the final claim, which settles the lower bound. \begin{claim} \label{claim3KnHalf} $|\mathcal{F}| \geq \frac{1}{2}\log \log (n-1)$. \end{claim} Suppose for contradiction that $|\mathcal{F}| = r < \frac{1}{2}\log \log (n-1)$. Then, by Claim \ref{claim2KnHalf}, $r \geq \operatorname{dim}((C_p, \lhd))$, where $p = (n-1)^{1/2^r} + 1 > 2^{\sqrt{\log (n-1)}} + 1$. But then, by Theorem \ref{theoremIdimCanonicalLowerBound}, we have $r \geq \log \log (p-1) > \log \log (2^{\sqrt{\log (n-1)}}) = \frac{1}{2}\log \log (n-1)$, which contradicts our starting assumption. \end{proof} \section{Discussion and open problems} \label{sectionOpenProblems} For a graph $G$, we have given upper bounds for $\pi(G)$ exclusively in terms of $|V(G)|$, $\Delta(G)$, $\operatorname{tw}(G)$, $\chi_a(G)$ and $\chi_s(G)$. Hence it is natural to ask if a lower bound can be given for $\pi(G)$ exclusively in terms of any of these parameters. The answer turns out to be negative, at least for the first three. An empty graph $E_n$ on $n$ vertices has $\pi(E_n) = 0$. The star $S_{n-1}$ on $n-1$ leaves has $\Delta(S_{n-1}) = n-1$, but $\pi(S_{n-1}) = 0$. The $n \times n$ square grid $G$ on the plane has treewidth $n$ but a bounded $\pi(G)$ since it is planar. In fact $\pi(G) = 2$, since a plane drawing of $G$ as an axis-parallel grid is a $2$-box representation of $L(G)$. As for $\chi_a(G)$, and hence $\chi_s(G)$, we cannot hope to get an exclusive lower bound for $\pi(G)$ of a larger order than $\log \log \chi_a(G)$. This is because if $G$ is the graph obtained by replacing every edge of $K_n$ with $n-1$ parallel paths of length $2$, then it is easy to see (by two applications of the pigeonhole principle) that $\chi_a(G) \geq n$ \cite{kostochka1976note}. But since $G$ is $2$-degenerate, we know that $\pi(G) \in \order{\log \log |G|}$ and $|G| \leq n^3$. In view of the above, it is natural to ask what other graph parameters, apart from $\omega(G)$, have the potential to give an exclusive lower bound for $\pi(G)$. Two parameters that we have tried are the Hadwiger number $\eta(G)$ and the chromatic number $\chi(G)$. The Hadwiger number of a graph $G$ is the size of a largest clique minor in $G$. Note that $\operatorname{tw}(G) + 1 \geq \eta(G)$ and, if the Hadwiger conjecture is true, then $\eta(G) \geq \chi(G) \geq \omega(G)$. The possibility of getting an exclusive lower bound for $\pi(G)$ in terms of $\eta(G)$ is ruled out because the double $n \times n$ square grid $G$, i.e., the graph obtained by taking two identical $n \times n$ square grids and connecting the identical nodes with an edge, has $\eta(G) \geq n$ \cite{chandran2007hadwiger} but $\pi(G) \leq 3$. Here again, an axis-parallel $3$-dimensional drawing of $G$ is a $3$-box representation. We have shown that $\pi(G) \geq \log \floor{\omega(G)/2}$. But we could not arrive at a similar bound in terms of $\chi(G)$, and hence we pose the following question. \begin{openproblem} For any graph $G$, is $\pi(G) \geq \log \chi(G) - c$, for some constant $c$?
\end{openproblem} The answer is positive for graphs like perfect graphs where $\chi(G) = \omega(G)$. Notice that we cannot have an upper bound for $\pi(G)$ exclusively in terms of $\chi(G)$, since the complete bipartite graph $K_{n,n}$ has $\pi(K_{n,n}) \geq \log n$ but $\chi(K_{n,n}) = 2$. Among the upper bounds obtained in this paper for which we do not know any reasonable tightness, the one based on $\Delta(G)$ (Theorem \ref{theoremBoxliDelta}) is the one that has engaged us the most. We saw that $\pi(G) \leq 2^{9 \log^{\star} \Delta(G)} \Delta(G)$. For a chordal graph $G$, by Theorem \ref{theoremBoxliTreewidth}, we have $\pi(G) \in O(\log \Delta(G))$, since $\omega(G) -1 \leq \Delta(G)$. For a graph $G$ with $\Delta(G)$ of order at least $\log n$, by Theorem \ref{theoremBoxliSize} (on $|V(G)|$), we have $\pi(G) \in \order{\Delta(G)}$. On the other hand, the examples of sparse graphs that we have studied, together with the monotonicity of $\pi(G)$, tempt us to make the following conjecture. \begin{conjecture} For a graph $G$ with maximum degree $\Delta(G)$, $\pi(G) \in \order{\Delta(G)}$. \end{conjecture} Since $\pi(G)$ is the boxicity of the line graph of $G$, it is interesting to see how it is related to the boxicity of $G$ itself. But unlike separation dimension, boxicity is not a monotone parameter. For example, the boxicity of $K_n$ is $1$, but deleting a perfect matching from $K_n$, if $n$ is even, blows up its boxicity to $n/2$. Yet we could not find any graph $G$ such that $\operatorname{boxicity}(G) > 2^{\pi(G)}$. Hence we are curious about the following question. \begin{openproblem} Does there exist a function $f : \mathbb{N} \rightarrow \mathbb{N}$ such that $\operatorname{boxicity}(G) \leq f(\pi(G))$? \end{openproblem} Note that the analogous question for $\boxli^{\star}(G)$ has an affirmative answer. If there exists a vertex $v$ of degree $d$ in $G$, then any $3$-mixing family of permutations of $V(G)$ should contain at least $\log d$ different permutations, because any single permutation will leave $\ceil{d/2}$ neighbours of $v$ on the same side of $v$. Hence $\log \Delta(G) \leq \boxli^{\star}(G)$. From \cite{DiptAdiga}, we know that $\operatorname{boxicity}(G) \in \order{\Delta(G) \log^2 \Delta(G)}$ and hence $\operatorname{boxicity}(G) \in \order{2^{\boxli^{\star}(G)} (\boxli^{\star}(G))^2}$. The upper and lower bounds for $\pi(K_n^r)$, given by Theorem \ref{theoremHypergraphSizeLowerbound}, differ by a factor of $r$. Estimating the exact order of growth of $\pi(K_n^r)$ is a challenging question. A similar gap of $r^2$ is present in the upper and lower bounds for the size of a smallest family of completely $r$-scrambling permutations of $[n]$ (see \cite{radhakrishnan2003note}). \begin{openproblem} What is the exact order of growth of $\pi(K_n^r)$? \end{openproblem} Another interesting direction of enquiry is to find the maximum number of hyperedges (edges) possible in a hypergraph (graph) $H$ on $n$ vertices with $\pi(H) \leq k$. Such an extremal hypergraph $H$ with $\pi(H) = 0$ is seen to be a maximum-sized intersecting family of subsets of $[n]$. A similar question for the order dimension of a graph has been studied \cite{agnarsson1999maximum,agnarsson2002extremal} and has found applications in ring theory. We can also ask a three-dimensional analogue of the question answered by Schnyder's theorem in two dimensions.
Given a collection $P$ of non-parallel planes in $\mathbb{R}^3$, can we embed a graph $G$ in $\mathbb{R}^3$ so that every pair of disjoint edges is separated by a plane parallel to one in $P$? For this to be possible, $|P|$ has to be at least $\pi(G)$, because the permutations induced by projecting such an embedding onto the normals to the planes in $P$ give a pairwise suitable family of permutations of $V(G)$ of size $|P|$. Can $|P|$ be upper bounded by a function of $\pi(G)$? \bibliographystyle{plain}
\section{Introduction} The concepts of market liquidity, price impact, information asymmetry and adverse selection have always been at the center of market microstructure research. The seminal paper by Kyle \cite{kyle85} made connections between all these concepts in a simple and tractable framework and became a cornerstone in the literature of this field. In his model, Kyle described a game between three types of players: an informed trader (insider), noise traders and a market maker. A risky asset is traded over one period, where the insider has exclusive information on the asset's price at the end of the period. This is often referred to as the fundamental price. Based on this information, the insider decides on the size of her market order. At the same time the noise traders also submit their market orders without any information about the price dynamics, so the total size of their orders is modelled as a centred random variable. The sum of all these orders, which is the order flow, is then revealed to the market maker without the possibility to disentangle its components. Based on this observation, the market maker decides on the mid-price of the asset and clears the orders. The presence of noise traders helps the insider to obscure her position from the market maker. Under Gaussian assumptions on the distribution of the fundamental price and the noise traders' orders, Kyle proved that this game has an equilibrium, in which the insider's strategy is linear with respect to the fundamental price, and the market maker's pricing rule is linear with respect to the order flow. Kyle also extended the proof to a multi-period version of this model. The simple setting of Kyle's model reveals some fundamental connections between key concepts in market microstructure. In equilibrium the insider adjusts the order size according to the fundamental price, while taking into account the price impact of her order. Moreover, the dependence of the market maker's pricing on the parameters of the fundamental price distribution provides insights into the effect of asymmetric information, and more specifically adverse selection, on pricing strategies. Numerous extensions to Kyle's model have been studied; we briefly survey just a few of them. Subrahmanyam \cite{Subrahmanyam:1991aa} studied an extension of Kyle's model where both the informed trader and the market maker are risk averse. Nishide \cite{Nishide2006InsiderTW} investigated a version of the model with competing market makers. Boulatov and Bernhardt \cite{Boulatov:2015aa} considered the robustness of the linear Kyle equilibrium with respect to small perturbations in the payoffs of the agents. Molino et al. \cite{Garcia-del-Molino:2020aa} studied the case where the market maker sets the price of $n$ correlated securities. A neural networks approach to Kyle's single-period model was developed by Friedrich and Teichmann \cite{Teichmann20}. They showed that the agents' strategies converge to the linear equilibrium, which Kyle proved in the Gaussian case, also for various other types of fundamental price distributions. A continuous-time version of Kyle's model was first proposed by Back \cite{Back:1992aa}. Collin-Dufresne and Fos \cite{Collin16} extended Back's work to the case where the liquidity provided by noise traders follows a general stochastic process.
A significant amount of work on the mathematical foundations of the continuous-time Kyle model, in the context of filtering, enlargement of filtrations and Markov bridges, is described in the lecture notes by \c{C}etin \cite{cetin-notes} and the references therein. The main purpose of market makers is to add liquidity to markets by standing ready to buy and sell assets at any time during the trading day. As a result, market makers also determine the spread between the bid and ask prices (i.e. the difference between the price quotes for market buy and sell orders), and even if it is only a few cents, they can profit by executing thousands of trades in a single day. None of the extensions of Kyle's model mentioned earlier takes into account the fact that market makers also decide on the bid-ask spread and that their profits depend on this decision. The market maker's spread is also considered a measure of asset liquidity: spreads tend to be tighter in more actively traded assets, and in those that have more available market makers. The size of the spread is also one of the main components of traders' transaction costs. A related work by El Euch et al. \cite{rosen2020} proposed a model for an exchange (or a regulator) who is aiming to attract liquidity to the market. The exchange was looking for the best make–take fees policy to offer to market makers in order to maximize its utility. As mentioned earlier, market makers earn money by selling assets to investors and traders at the ask price and buying assets from them at the lower bid price. The wider the spread, the more potential profit the market maker can make. On the other hand, the competition among market makers can keep spreads tight. Therefore, a key addition to Kyle's model is to introduce the revenue of the market maker due to the spread and to capture the trade-off between quoting a competitive price and earning money from the spread. We include in our model both the market maker's revenue and her decision on the size of the spread, as described in Section \ref{sec-model}. We then derive the maximizer of the informed trader's revenue and give sufficient conditions for equilibrium in the game. We use neural network methods to verify that this equilibrium indeed holds, and show that it experiences phase transitions as we increase the relative weight of the revenue term with respect to the price efficiency in the market maker's performance function. As presented in Figure \ref{fig-phase}, the equilibrium price in this game has three phases: the "Kyle phase" where the spread is zero and the mid-price is linear, the "linear mid-price with spread" phase, and the "spread phase" where the market maker does not use any price rule other than the bid-ask spread (see also Figure \ref{phase2}). \paragraph{Organization of the paper:} In Section \ref{sec-model} we define a new extension of Kyle's model that takes into account the market maker's revenue from creating a spread along with price efficiency. In Section \ref{sec-res} we present our main results, which include the existence of a unique solution to the insider's optimization problem and sufficient conditions for the equilibrium in the game. We also provide a neural network algorithm that solves the market maker's optimization problem and hence derives the equilibrium. At the end of this section we prove the existence of a metastable equilibrium, which is derived in closed form. In Section \ref{section-neural} we provide a detailed description of the neural network methodology.
Sections \ref{sec-prf}--\ref{sec-pfs-3} are dedicated to the proofs of the main theoretical results. Finally, in Section \ref{sec-form} we give some explicit formulas for the equilibrium points of the game. \section{The Model} \label{sec-model} We consider a one-period model that consists of three types of agents: an informed trader (insider), noise traders and a market maker. We assume that the future price (or fundamental price) at the end of the period is predicted by the informed trader, and that it is a random variable $\tilde v$ with mean $p_{0}$ and variance $\sigma_{\tilde v}^2$. The noise traders have no prediction on the price move, and we denote by $\tilde u$ the total amount that they trade, which is a symmetric random variable with variance $\sigma_{\tilde u}^2$ and with a continuous probability density function $f_{\tilde u}$. It is assumed that $\tilde u$ and $\tilde v$ are independent random variables. Finally, we denote by $\tilde x$ the amount traded by the insider and by $\tilde p$ the execution price, which is determined by the market maker. As in \cite{kyle85} we describe the trading as a two-step procedure. First, the values of $\tilde v$ and $\tilde u$ are realized and the insider chooses the size of her market order $\tilde x$. Note that when choosing $\tilde x$, the insider knows $\tilde v$ but not $\tilde u$. We define $\tilde x = X(\tilde v)$, where $X$ is a measurable function. In the second step, the market maker determines the traded (or execution) price $\tilde p$, while observing only the total order-flow $\tilde x+\tilde u$. Our main objective is to reflect the revenue of the market maker in her performance function. Since this revenue is directly linked to the bid-ask spread, we enlarge the class of linear prices proposed by Kyle. Motivated by Madhavan et al. \cite{madhav96} we assume that $\tilde p = P(\tilde x+\tilde u)$, where the price function $P$ is in the class of functions: \begin{equation} \label{admis-p} \mathscr{P} = \{P(x)=\lambda x + \theta\, \textrm{sign}(x)+p_{0}, \, \lambda, \theta > 0\}. \end{equation} \begin{remark} The choice of $\mathscr{P}$ in \eqref{admis-p} is the simplest way to define a price with a symmetric bid-ask spread, where the size of the spread is $\theta$. Our choice is consistent with Madhavan et al. \cite[Section 2]{madhav96}, where we note that the conditional expectation of the indicators in their one-period model can be replaced by the sign of $\tilde x+ \tilde u$. In Section \ref{sec-en-admis} we provide numerical evidence that extending the class $\mathscr{P}$ by adding higher order terms will still lead to an equilibrium price in $\mathscr{P}$. \end{remark} The profit of the insider, $\tilde \pi$, is given by $\tilde \pi = (\tilde v -\tilde p)\tilde x$. Note that $\tilde \pi= \tilde \pi(X,P)$. \begin{definition}[Equilibrium] An equilibrium between the market maker and the insider is a pair $(X,P)$ such that the following two conditions hold. \begin{itemize} \item[\textbf{(i)}] \textbf{Profit maximization}: for any other strategy $X'$ and for any $v \in \mathds{R}$, \begin{equation} \label{trade-opt} \mathbb E\big[ \tilde \pi(X,P) | \tilde v =v \big] \geq \mathbb E\big[ \tilde \pi(X',P) | \tilde v =v \big].
\end{equation} \item[\textbf{(ii)}] \textbf{Market Efficiency and Revenue}: the pricing rule $P$, with spread parameter $\theta$, solves \begin{equation} \label{equi-mm} \min_{P\in \mathscr{P}} \big\{ \mathbb E\big[ (\tilde v - \tilde p)^{2} \big] - \gamma\theta\, \mathbb E\big[|X(\tilde v) + \tilde u|\big] \big\}, \end{equation} where $\tilde p = P(\tilde x+\tilde u)$ and $\gamma>0$ is a fixed risk-aversion constant. \end{itemize} \end{definition} \begin{remark} In Kyle's paper \cite{kyle85} the market maker's efficiency criterion was given by \begin{equation} \label{eff-kyle} \tilde p=P(\tilde x+\tilde u) = {\mathbb{E}}[\tilde v| \tilde x+\tilde u ]. \end{equation} Note that in the setting of Theorem 1 in \cite{kyle85}, minimising $\mathbb E\big[ (\tilde v - \tilde p)^{2}\big]$, which is our \emph{market efficiency and revenue} criterion (\ref{equi-mm}) with $\gamma=0$, is equivalent to (\ref{eff-kyle}). In our model we incorporate the revenue of the market maker, so we add the term $\theta\, \mathbb E\big[|\tilde x + \tilde u|\big]$, which reflects this revenue, as it is proportional to the size of the spread and to the total order flow. In Proposition \ref{prop-lin-eq} we prove that this term is essential in order to get a difference between the buy and sell prices. We call $\gamma$ the risk-aversion parameter since it describes the trade-off between keeping an efficient price and making profits. The latter may clearly create additional risk, by deterring the insider from trading large orders and pushing her to trade in other venues. \end{remark} \section{Main Results} \label{sec-res} We first present our theoretical results, where we solve the trader's profit maximisation problem. We also give a sufficient condition for finding the equilibrium. Then we construct a neural network that allows us to numerically find the equilibrium. We also prove the existence of a metastable equilibrium and derive it in a closed form. \subsection{Solution of the insider's problem} In the next proposition we show the existence of an optimal strategy for the insider. We also provide some insights on the properties of this strategy. We recall that the fundamental price at the end of the period $\tilde v$ is a random variable with mean $p_{0}$ and variance $\sigma_{\tilde v}^2$. The noise traders' order flow $\tilde u$ is a symmetric random variable with variance $\sigma^2_{\tilde u}$ and a continuous probability density function $f_{\tilde u}$. We denote by $F_{\tilde u}$ the cumulative distribution function of $\tilde u$. Moreover, it is assumed that $\tilde u$ and $\tilde v$ are independent random variables. Note that at this point we do not specify the distributions of $\tilde v$ and $\tilde u$. We postpone the proofs of all the theoretical results of this section to Section \ref{sec-prf}. \begin{proposition} \label{lemma-min-trader} For any $v\in \mathds{R}$ there exists a unique $x^{*}= x^{*}(v)$ that maximizes the expected profit of the insider \begin{equation} \label{inside-prob} R_{v}(x) := \mathbb E\big[ \tilde \pi(X,P) | \tilde v =v \big]. \end{equation} The maximizer $x^{*}$ satisfies the following properties: \begin{itemize} \item When $v=p_{0}$ we have $x^{*}=0$ and $R_{p_{0}}(x^{*})=0$. \item When $v \not =p_{0}$, then $x^{*}(v)$ is a solution to the equation \begin{equation} \label{deriv-cond} x\big(\lambda+\theta f_{\tilde u}(-x)\big) - \theta F_{\tilde u}(-x) = \kappa(v), \end{equation} where $\kappa(v) =(v-p_{0}-\theta)/2$. We moreover have $R_{v}(x^{*}(v)) >0$ and $\textrm{sign}(x^{*}(v)) = \textrm{sign}(v-p_{0})$.
\end{itemize} \end{proposition} \begin{remark} The proof of Proposition \ref{lemma-min-trader} suggests that if \begin{equation} \label{2nd-cond} \frac{d^2}{dx^2}(xF_{\tilde u}(-x)) < 0, \quad \textrm{for all } x \in \mathds{R} \setminus\{0\}, \end{equation} then $R_v$ is concave and (\ref{deriv-cond}) has a unique solution. An example for that is when $\tilde u$ has a centred Laplace distribution. \end{remark} \begin{remark} Note that in the case where there is no bid-ask spread, i.e. $\theta =0$, we recover the result of Theorem 1 in \cite{kyle85} and get that \begin{equation} \label{x-no-sprd} x^{*}(v) = \frac{v-p_{0}}{2\lambda}. \end{equation} \end{remark} \subsubsection{Solution to the Gaussian noise case} We specialise to the case where the total order flow of the noise traders $\tilde u$ is a mean-zero Gaussian random variable. Denote by $\Phi$ (respectively $\phi$) the cumulative distribution function (respectively the probability density function) of a standard Gaussian. \begin{corollary} Assume the same hypothesis as in Proposition \ref{lemma-min-trader}, only now let $\tilde u$ be a mean-zero Gaussian with variance $\sigma^2_{\tilde u}$. Then (\ref{deriv-cond}) is given by \begin{equation} \label{normal-eqn} x\Big(\lambda+\frac{\theta}{\sigma_{\tilde u}}\phi\big(\frac{x}{\sigma_{\tilde u}}\big)\Big) -\theta\Phi\big(-\frac{x}{\sigma_{\tilde u}}\big) = \kappa(v). \end{equation} \end{corollary} The following proposition characterises the global maximum for the informed trader's problem under the Gaussian noise assumption. \begin{proposition}\label{gaus} Let $\tilde u$ be a mean-zero Gaussian random variable, then there are two possible cases: \begin{enumerate} \item[\textbf{(a)}] there exists a unique solution $x^*$ to (\ref{normal-eqn}), and it is the maximizer of $R_{v}$. \item[\textbf{(b)}] there exist three solutions $x_1^*<x^*_2<x_3^*$ to (\ref{normal-eqn}), and the global maximizer of $R_{v}$ is either $x_1^*$ or $x^*_3$. \end{enumerate} \end{proposition} \subsubsection{Solution to the Uniform noise case} \label{unif-equi} We study in greater detail the case where $\tilde u$ is a Uniform random variable on $[-1,1]$. \begin{proposition} \label{prop-unif} Assume the same hypothesis as in Proposition \ref{lemma-min-trader}, only now let $\tilde u$ be Uniform on $[-1,1]$. The unique maximizer $x^{*}= x^{*}(v)$ of the expected profit of the trader in \eqref{inside-prob} is given, for $v-p_0>0$ (the case $v-p_0<0$ follows by symmetry), by \begin{itemize} \item[\textbf{(i)}] $x^*(v) = \frac{v-p_0}{2(\lambda +\theta)}$, for $0<v-p_0 \leq \lambda +\theta +\sqrt{(\lambda +\theta)\lambda}$, \item[\textbf{(ii)}] $x^*(v) = \frac{v-p_0-\theta}{2\lambda }$, for $v-p_0>\lambda +\theta +\sqrt{(\lambda +\theta)\lambda}$. \end{itemize} \end{proposition} \subsection{Sufficient conditions for equilibrium} In this section we provide sufficient conditions for the existence of an equilibrium. We continue to assume that the future price $\tilde v$ is a random variable with mean $p_{0}$ and variance $\sigma_{\tilde v}^2$ and that the noise traders' order flow $\tilde u$ is a symmetric random variable with variance $\sigma_{\tilde u}^2$ and continuous density $f_{\tilde u}$. Also here we do not specify the distributions of $\tilde v$ and $\tilde u$. The proofs of the theoretical results in this section are given in Section \ref{sec-pfs2}. We consider $x^*(v)$ from Proposition \ref{lemma-min-trader}, which is the maximizer of the insider's expected profit \eqref{inside-prob}. Before stating our main result, we introduce the following notation.
Let \begin{eqnarray*} \ell_{p,x^{*}}&=& {\mathbb{E}}\big[|x^{*}(\tilde v)+\tilde u|^{p}\big], \quad p=1,2, \\ \mu_{x^{*}} &=&{\mathbb{E}}\big[x^{*}(\tilde v)( \tilde v- p_{0})\big], \\ \kappa_{x^{*}} &=&{\mathbb{E}}\big[ \textrm{sign}(x^{*}(\tilde v)+\tilde u)( \tilde v- p_{0})\big]. \end{eqnarray*} We often write $\ell_{p}, \mu, \kappa$ to simplify the notation. Note that $\ell_{p,x^{*}}, \mu_{x^{*}}, \kappa_{x^{*}}$ are all functions of $(\lambda, \theta)$. In the next theorem we characterise the equilibrium between the market maker and the insider. \begin{theorem} [sufficient condition] \label{thm-equil1} Assume that $\tilde v$ is a random variable with mean $p_{0}$ and variance $\sigma_{\tilde v}^2$ and that $\tilde u$ is a symmetric random variable with variance $\sigma_{\tilde u}^2$ and a continuous density. For any $x^{*}(v)$, which is given in Proposition \ref{lemma-min-trader}, if the following system \begin{equation} \label{lam-th-eq} \lambda = \frac{\mu -(\kappa +\gamma \ell_{1}/2) \ell_{1}}{\ell_{2}-\ell_{1}^{2}}, \quad \theta = \kappa+\frac{\gamma}{2}\ell_{1}- \frac{\ell_{1}\big(\mu -\kappa \ell_{1}-\gamma\ell_{1}^{2}/2\big)}{\ell_{2}-\ell_{1}^{2}}, \end{equation} has a non-negative solution $(\lambda^*, \theta^*)$, then the optimal price that minimizes the market maker's objective function \eqref{equi-mm} is given by $$P^*(x)=\lambda^{*} x + \theta^{*}\, \textrm{sign}(x)+p_{0}.$$ Moreover, $(x^{*}(\cdot), \lambda^*, \theta^*)$ is an equilibrium of the game. \end{theorem} \begin{remark} Note that finding a solution to equation (\ref{lam-th-eq}) is a difficult task, since $\ell_{i}, \kappa$ and $\mu$ depend on $(\lambda,\theta)$. In Section \ref{section-find-eq} we provide a numerical method, based on an ad hoc neural network, which can find the equilibrium point $(\lambda^*,\theta^*)$. Proving the uniqueness of the equilibrium seems to be out of reach due to the complexity of (\ref{lam-th-eq}). Nevertheless, our numerical approach provides evidence that uniqueness indeed holds. \end{remark} \begin{remark} We observe that in the case where we restrict to pricing rules with $\theta =0$ (i.e. zero spread), $\tilde u \sim N(0,\sigma_{\tilde u}^{2})$ and $\tilde v\sim N(p_{0}, \sigma_{\tilde v}^{2})$, then $x^{*}(v)$ is given by (\ref{x-no-sprd}), $\kappa = \frac{\ell_{1}\mu}{\ell_{2}}$ and $$\lambda^* =\frac{\mu}{\ell_{2}} =\frac{\beta\sigma_{\tilde v}^{2}}{\beta^{2}\sigma^{2}_{\tilde v}+\sigma_{\tilde u}^{2}},$$ where $\beta = \frac{1}{2\lambda}$. It follows that $P^{*}(x)$ coincides with the price at equilibrium in Theorem 1 of \cite{kyle85}. \end{remark} In the following proposition we prove that when $\gamma$ in \eqref{equi-mm} is set to zero, we recover the classical Kyle equilibrium without a bid-ask spread (i.e. $\theta =0$). \begin{proposition} \label{prop-lin-eq} Assume that $\tilde v - p_{0}$ is a centred Gaussian with variance $\sigma_{\tilde v}^2$ and that $\tilde u$ is either a centred Gaussian or centred Uniform with variance $\sigma_{\tilde u}^{2}$. If the risk-aversion parameter $\gamma$ in \eqref{equi-mm} is zero, then there exists an equilibrium in which $X$ and $P$ are linear functions that are given by $$ X(v) = \beta^{*}(v-p_{0}), \quad P(x) = p_{0}+ \lambda^{*} x, $$ where $\beta^{*} = \frac{\sigma_{\tilde u}}{\sigma_{\tilde v}}$ and $\lambda^{*} = \frac{\sigma_{\tilde v}}{2\sigma_{\tilde u}}$.
\end{proposition} \subsection{Numerical results: finding the equilibrium} \label{section-find-eq} In this section we find the equilibrium points of the game under the assumption that $\tilde v - p_{0}$ is a centred Gaussian with variance $\sigma_{\tilde v}^{2}$ and that $\tilde u$ is either a centred Gaussian with variance $\sigma_{\tilde u}^{2}$ or Uniform on $[-1,1]$. In order to derive the equilibrium we design an ad hoc neural network, which is described in detail in Section \ref{section-neural}. In Figure \ref{gamma-unif} we plot the optimal $\lambda^{*}$ and $\theta^{*}$ as a function of the risk-aversion parameter $\gamma$ for the Gaussian (left panel) and Uniform (right panel) cases. We also show the insider's expected optimal market-order size, her optimal revenue, and the market maker's value function as functions of the risk-aversion parameter. As expected, in both cases, when the risk-aversion parameter increases, $\lambda^*$ decreases and the size of the spread $\theta^*$ increases. In addition, we observe that when $\gamma$ increases, the market maker gives more weight to revenue, and the insider increases the order size. This, however, does not necessarily imply an increase in the revenue. From the market maker's point of view, we observe a logarithmic increase in her performance function \eqref{equi-mm} as $\gamma$ increases. This is due to the increase in the trader's order size, along with the increase in the revenue made from the spread. \begin{figure} [h!] \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth, trim=12mm 0 10mm 0, clip]{all_asafunctionofGammaNormal.png} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth, trim=12mm 0 10mm 0, clip]{all_asafunctionofGammaUniform.png} \end{subfigure} \caption{Plot of the equilibrium price parameters $\lambda^*$ (red) and $\theta^*$ (blue) as a function of the risk aversion $\gamma$. We also show the expected insider's transaction size (green), expected insider profit (purple) and the market maker's performance functional (black). The Gaussian noise case is presented on the left panel and the Uniform noise case on the right panel. } \label{gamma-unif} \end{figure} In Figure \ref{oreder-fig} we fix $\gamma=0.5$ and plot the insider's optimal order size, her revenue and the corresponding total order flow as a function of the price $v$ at equilibrium, both in the Gaussian and Uniform cases. Following our theoretical results, we observe that the optimal trade size $x^{*}(v)$ is antisymmetric around $p_{0}$, and it is nonlinear in $v$. We observe that in both cases, when the future price $|\tilde v|$ is roughly larger than one, the insider trades more aggressively, even if her position is detected by the market maker. A similar plot is presented in Figure \ref{oreder-fig2} for the cases where $\gamma =10$ and $\gamma =16$. \begin{figure} [h!] \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\textwidth, trim=10mm 0 10mm 0, clip]{all_asafunctionofVnormal_g005.png} \caption{Gaussian Noise} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\textwidth, trim=10mm 0 10mm 0, clip]{all_asafunctionofVuniformg005.png} \caption{Uniform Noise } \end{subfigure} \caption{$\gamma=0.5$: the insider's expected revenue (red) and transaction size $x^{*}$ (blue) in equilibrium ($y$-axis) vs. $v$ (on the $x$-axis) in the "linear mid-price with a bid-ask spread phase".
The total order flow $x^{*}(v)+\tilde u$ is shown in grey. }\label{oreder-fig} \end{figure} \begin{figure} [h!] \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\textwidth, trim=10mm 0 10mm 0, clip]{all_asafunctionofVnormalg10.png} \caption{Gaussian Noise} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1\textwidth, trim=10mm 0 10mm 0, clip]{all_asafunctionofVuniformg16.png} \caption{Uniform Noise } \end{subfigure} \caption{$\gamma=10$ (left) and $\gamma=16$ (right): the insider's expected revenue (red) and transaction size $x^{*}$ (blue) in equilibrium ($y$-axis) vs. $v$ (on the $x$-axis) in the "bid-ask spread phase". }\label{oreder-fig2} \end{figure} We now discuss the effect of the risk-aversion parameter $\gamma$ on the type of the equilibrium in the model. As Proposition \ref{prop-lin-eq} suggests, when $\gamma =0$ we have the classical Kyle equilibrium without a bid-ask spread. We can numerically show that, both for Gaussian and Uniform noise, an equilibrium exists for an interval of positive $\gamma$'s. More precisely, there exists $\gamma_{LBid}>0$ such that for every $0<\gamma < \gamma_{LBid}$ the price in equilibrium is of the form $P^{*}(x) = \lambda^{*}x + \theta^{*}\,\textrm{sign}(x)$, where both $\theta^{*}$ and $\lambda^{*}$ are positive. For the Gaussian case $\gamma_{LBid} \approx 0.7$ and for the Uniform case $\gamma_{LBid} \approx 1$. Moreover, there exists $\gamma_{Bid}>\gamma_{LBid}$ such that for any $\gamma_{LBid} <\gamma < \gamma_{Bid}$ no equilibrium was found, where in the Gaussian case $\gamma_{Bid} \approx 10$ and in the Uniform case $\gamma_{Bid} \approx 16$. Finally, in the third phase, where $\gamma> \gamma_{Bid}$, we have an equilibrium with a bid-ask spread only, namely $\lambda^{*}=0$ and $\theta^{*}>0$. The equilibrium in this regime is a \emph{metastable state}: once the algorithm arrives at it, it remains there with probability asymptotically close to one. However, it does not satisfy the classical definition of an equilibrium point. We state and prove the precise result on the metastable equilibrium in Section \ref{sec-meta}. These results are summarised in Figures \ref{fig-phase} and \ref{phase2}. \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{gammaEqui.PNG} \caption{Equilibrium phase transitions. When $0<\gamma<\gamma_{LBid}$ the equilibrium price is $P^{*}(x)=\lambda^{*} x + \theta^{*}\, \textrm{sign}(x)$. For $\gamma>\gamma_{Bid}$ the equilibrium price function is $P^{*}(x)= \theta^{*}\, \textrm{sign}(x)$. When $\gamma_{LBid}<\gamma<\gamma_{Bid}$ no equilibrium was found. } \label{fig-phase} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{priceAsquantityDifferentequi.png} \caption{Plot of the price at equilibrium for the Gaussian case at the three different phases: for $\gamma =0$ (black), $\gamma =0.3$ (blue) and $\gamma =10$ (green).} \label{phase2} \end{figure} \subsection{Existence of a metastable equilibrium} \label{sec-meta} In this section we state and prove the precise results on the metastable equilibrium which was found numerically in Section \ref{section-find-eq} for $\gamma >\gamma_{Bid}$. In order to define this equilibrium we first describe our search algorithm for the equilibrium. We first demonstrate the algorithm on the classical Kyle model, where $\gamma$ in \eqref{equi-mm} is set to zero and therefore the equilibrium price has a zero bid-ask spread.
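In the Gaussian setting, both steps of the algorithm below admit closed forms (see the proof of Proposition \ref{prop:alg-kyle-0}), so the whole scheme collapses to the scalar recursion \eqref{lam-n}. The following minimal Python sketch of this recursion is purely illustrative; the function name, tolerance and parameter values are our own choices and are not part of the model.
\begin{verbatim}
def kyle_fixed_point(sigma_v, sigma_u, lam0=1.0, tol=1e-12, max_iter=10000):
    """Iterate Algorithm 1 in its closed Gaussian form.

    Step 2 (trader):       x_n(v) = v / (2 * lam_n).
    Step 3 (market maker): regressing v on x_n(v) + u gives
        lam_{n+1} = 2 * lam_n * sigma_v**2
                    / (sigma_v**2 + 4 * lam_n**2 * sigma_u**2).
    """
    lam = lam0
    for _ in range(max_iter):
        lam_next = 2 * lam * sigma_v**2 / (sigma_v**2 + 4 * lam**2 * sigma_u**2)
        if abs(lam_next - lam) < tol:
            break
        lam = lam_next
    return lam_next

# Converges to lambda* = sigma_v / (2 * sigma_u), here 0.25:
print(kyle_fixed_point(sigma_v=1.0, sigma_u=2.0))
\end{verbatim}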
\begin{algorithm}[H] \caption{Equilibrium price for the classical Kyle model}\label{alg-kyle-0} \begin{algorithmic}[1] \State Initialise the price function $P(x) = \lambda_0x+b_0$ with some arbitrary weights $\lambda_0, b_0$. \State Given that $P(x) = \lambda_nx+b_n$, find the function $x_n(v)$ which maximizes the trader's expected profit: \begin{displaymath} R_{v}(x) =\mathbb E\big[ \big(\tilde v -(\lambda_n (x+\tilde u)+b_n)\big)\, x \,\big|\, \tilde v =v \big]. \end{displaymath} \State Find $\lambda_{n+1}, b_{n+1}$ which minimize the market maker's cost function: $$ \mathbb E\big[ \big(\tilde v - (\lambda_{n+1}(x_n(\tilde v)+\tilde u)+b_{n+1})\big)^{2} \big]. $$ \State \textbf{goto} 2. \end{algorithmic} \end{algorithm} In the following proposition we prove the convergence of the output of Algorithm \ref{alg-kyle-0} to the well-known Kyle equilibrium. \begin{proposition}\label{prop:alg-kyle-0} Under the setting of Algorithm \ref{alg-kyle-0}, with $p_{0}=0$ without loss of generality, for any initial values $(\lambda_0,b_0)$ with $\lambda_{0}>0$ we have $$ \lim_{n\rightarrow \infty } (\lambda_n,b_n) = (\lambda^*, 0), \quad \textrm{and } \lim_{n\rightarrow\infty} x_n(v) = x^*(v) = \frac{v}{2\lambda^*}, $$ where \begin{equation} \label{lam-st-kyle} \lambda^*= \frac{\sigma_{\tilde v}}{2\sigma_{\tilde u}}. \end{equation} \end{proposition} The proof of Proposition \ref{prop:alg-kyle-0} is given in Section \ref{sec-pfs-3}. Next we present an algorithm for which $\gamma>0$ in \eqref{equi-mm} and therefore the equilibrium price has a bid-ask spread. Using this algorithm we derive $(\lambda^*, \theta^*)$ from (\ref{lam-th-eq}). \begin{algorithm}[H] \caption{Equilibrium price for the Kyle model with bid-ask spread}\label{alg-kyle-bid-ask} \begin{algorithmic}[1] \State Initialise the price function $P(x)$ with some arbitrary weights $(\lambda_0,b_0,\theta_0)$. \State Given that $P(x) = \lambda_nx+\theta_n\,\textrm{sign}(x) + b_n$, find $x_n(v)$ that maximizes the trader's expected profit \begin{displaymath} R_{v}(x) =\mathbb E\big[ \big(\tilde v -(\lambda_n(x+\tilde u)+\theta_n\,\textrm{sign}(x+\tilde u)+b_n)\big)\, x \,\big|\, \tilde v =v \big]. \end{displaymath} \State Find $(\lambda_{n+1},b_{n+1}, \theta_{n+1})$ that minimize the market maker's cost function, $$ C_{n+1}(\theta_{n+1},\lambda_{n+1}) := \mathbb E\big[ \big(\tilde v - (\lambda_{n+1}(x_n(\tilde v)+\tilde u)+\theta_{n+1}\,\textrm{sign}(x_n(\tilde v)+\tilde u)+b_{n+1})\big)^{2} \big]-\gamma \theta_{n+1} \mathbb E\big[|x_n(\tilde v)+\tilde u| \big]. $$ \State \textbf{goto} 2. \end{algorithmic} \end{algorithm} Now we are ready to define the notion of metastable equilibrium. \begin{definition} [metastable equilibrium] We say that $(x^{*}, \lambda^{*}, \theta^{*})$ is a metastable equilibrium if for any $\alpha \in (0,1)$ there exists $\gamma(\alpha)>0$ such that, for every $\gamma \geq \gamma(\alpha)$ and every $n \geq 0$, if $(x_{n}, \lambda_{n}, \theta_{n}) = (x^{*}, \lambda^{*}, \theta^{*})$, then \begin{equation} \label{met-stable} P\big((x_{n+1}, \lambda_{n+1}, \theta_{n+1}) = (x^{*}, \lambda^{*}, \theta^{*})\big) >\alpha. \end{equation} \end{definition} In the following proposition we prove that there exists a metastable equilibrium for our game and specify it. \begin{proposition} \label{prop-big-gamma} Assume the same hypothesis as in Proposition \ref{lemma-min-trader}, only now let $\tilde v - p_{0}$ be standard Gaussian and $\tilde u$ be Uniform on $[-1,1]$.
Then, there exists a metastable equilibrium in which $X$ is linear and $P$ consists only of a bid-ask spread, that is, $$ X(v) = \frac{1}{2 \theta^{*}} (v-p_{0}), \quad P(x) = p_{0}+ \theta^{*}\, \textrm{sign}(x), $$ where $\theta^{*}$ is the unique root of the function \begin{equation*} H(\theta):= -\frac{1}{\theta} \text{erf}(\theta \sqrt{2})- \frac{\gamma}{2} \left( \text{erf}(\theta \sqrt{2}) \cdot \left(1+\frac{1}{4\theta^2}\right)+ \frac{1}{\theta\sqrt{2\pi}}e^{-2\theta^2}\right)+ 2\theta . \end{equation*} \end{proposition} The proof of Proposition \ref{prop-big-gamma} is given in Section \ref{sec-pfs-3}. \section{Algorithm for finding the equilibrium}\label{section-neural} In this section we describe the implementation of Algorithm \ref{alg-kyle-bid-ask} for finding the equilibrium. We consider the two previously used settings: a standard Gaussian future price $\tilde v$ with standard Gaussian noise $\tilde u$, or a standard Gaussian future price with Uniform noise on $[-1,1]$. In order to solve the basic optimization problems we use the SciPy optimize Python package, and for the neural network we use the PyTorch library. In both cases, we use Python 3 on an office CPU with an i7-4930k processor and we choose a large sampling size of $N=10^5$. The average total running times of the Gaussian-Uniform algorithm and the Gaussian-Gaussian algorithm are $3.7$s and $18.3$s, respectively. The complexity of the Gaussian-Gaussian case is much higher, as we do not have a closed-form formula for the insider's optimiser; thus, for each price value $v$, we need to solve an additional optimization problem. \subsection{Designing a neural network to find the equilibrium} In step 3 of Algorithm \ref{alg-kyle-bid-ask} we need to find the parameters of the price function $P$ that attain the minimum of the market maker's cost function $C$; this is typically the kind of task that neural networks perform. Here is a minimalist description of this class of approximators; for additional information the reader is referred to \cite{vapnik} and \cite{Goodfellow}. Neural networks are parametrized functions, mapping a $K$-dimensional vector of \emph{inputs} $X$ to a vector of \emph{outputs} $Y$. In our case the output will be a vector of length $2$, namely $(\lambda^*,\theta^*)$. In order to produce the output, inputs are first mapped to a \emph{hidden layer of $h$ neurons}, by linearly combining the inputs via \emph{weights} $(w_{i,k})_{i,k}$ and \emph{biases} $(b_i)_i$, and then applying an \emph{activation function} $\phi$ to this combination: $X\mapsto (\phi(\sum_{k=1}^K w_{i,k} X_k + b_i))_{1\leq i\leq h}$. This operation is repeated several times. Ultimately the last hidden layer is mapped to the output in a similar way. One of the main features of neural networks is that the weights and biases of each layer can be \emph{trained} to minimize a \emph{loss function}, thanks to automatic differentiation methods (see \cite{geeraert2017mini} for detailed applications of Adjoint Algorithmic Differentiation in finance). Once a loss function is specified, the theory of statistical learning studies how minimizing the expectation of the loss function over a distribution can be performed on a sample of this distribution (see for instance \cite{vayatis1999distribution} or more recently \cite{choromanska2015loss} for related work on deep neural networks).
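To make the preceding description concrete, we note that the price rule \eqref{admis-p} can itself be encoded as such a parametrized function of the order flow, with trainable weights $(\lambda,\theta,b)$. The following minimal PyTorch sketch of this encoding, together with one gradient step on the market maker's empirical cost, is purely illustrative; the class name, initial weights, learning rate and stand-in sample data are our own choices.
\begin{verbatim}
import torch

class SpreadPrice(torch.nn.Module):
    """Two-layer price network P(x) = lam * x + theta * sign(x) + b."""
    def __init__(self):
        super().__init__()
        self.lam = torch.nn.Parameter(torch.tensor(1.0))
        self.theta = torch.nn.Parameter(torch.tensor(0.1))
        self.b = torch.nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        # sign(x) is a fixed feature of the order flow, so theta
        # receives a well-defined gradient as its linear coefficient.
        return self.lam * x + self.theta * torch.sign(x) + self.b

gamma = 0.5
price = SpreadPrice()
opt = torch.optim.Adam(price.parameters(), lr=1e-2)

v = torch.randn(1000)                          # stand-in for v_1,...,v_N
flow = v / 2.0 + 2.0 * torch.rand(1000) - 1.0  # stand-in for x_n(v_j) + u_j
cost = ((v - price(flow)) ** 2).mean() \
       - gamma * price.theta * flow.abs().mean()
opt.zero_grad()
cost.backward()
opt.step()
\end{verbatim}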
One typical \emph{empirical loss function} is the well-known $L^{2}$ loss function: \begin{equation} \label{loss} Loss = \frac{1}{N} \sum_{j=1}^N\big\|f_{W,b}(x_j)- y_j \big\|^2 \xrightarrow{N\rightarrow \infty} \mathbb{E}\big\|f_{W,b}(\mathbf{x})- \mathbf{y} \big\|^2, \end{equation} where $\mathbf{x}$ and $\mathbf{y}$ are random variables with the same law as the inputs and the outputs, respectively. Clearly, convergence to the expectation takes place only under certain assumptions on the distributions of the inputs and the outputs. We will encode step 3 of Algorithm \ref{alg-kyle-bid-ask} in the architecture of a neural network in order to leverage its learning capabilities (via automatic differentiation). Note that in our case the distributions of the datasets are known, hence we can generate very large samples using Monte-Carlo methods. The convergence of the empirical loss function to the theoretical one in \eqref{loss} is thus guaranteed. Moreover, note that step 2 of Algorithm \ref{alg-kyle-bid-ask} can be solved efficiently by means of Propositions \ref{gaus} and \ref{prop-unif}. The iterations between step 2 and step 3 of Algorithm \ref{alg-kyle-bid-ask} can be seen as an \emph{adversarial approach}, using the language of the machine learning community (see \cite{cao2020connecting} for connections between adversarial learning methods and mean field games). In Stackelberg games one player computes her optimal control for multiple different scenarios, and then the other player chooses the scenario that is the best for her (see \cite{moon2018linear} for details). In our Kyle game, the insider computes her optimal response for any value of $\lambda$ and $\theta$, and then the market maker chooses the $\lambda$ and $\theta$ that minimize her costs. The market maker's choice is thus adversarial to the informed trader, and it will be implemented via a neural network. We are thus iterating sequences of (1) learning of the neural network, and (2) adversarial choice by the insider, up to convergence. This is compatible with the definition of \emph{adversarial learning}. Figure \ref{fig:twoLayer} describes the two-layer network architecture which is used to solve the optimisation problem in step 3 of Algorithm \ref{alg-kyle-bid-ask}. In the first layer we have two neurons, one of which multiplies the order flow by $\lambda$ and adds a bias parameter $b$. The other neuron receives the order flow input and applies the sign activation function to it. In the second layer, the output of the bottom neuron is multiplied by the parameter $\theta$ and combined with the output of the top neuron. Overall the output takes the form $y=\lambda x +\theta\, \textrm{sign}(x)+b$, which is compatible with \eqref{admis-p}. \begin{figure}[h!] \centering \includegraphics[scale=0.38]{NN2.PNG} \caption{Two-layer network for solving the market maker's optimization problem} \label{fig:twoLayer} \end{figure} Based on this neural network, we present an algorithm which derives the equilibrium price via finding $(\lambda^{*},\theta^{*})$ and the optimal market order of the insider $x^{*}(v)$. \begin{algorithm}[H] \caption{Two-layer network for the market maker's price with bid-ask spread}\label{alg-kyleBidAsk-NN} \begin{algorithmic}[1] \State Initialise the price function $P(x)$ with some arbitrary weights $(\lambda_0,b_0,\theta_0)$. \State Sample $v_1,...,v_N$ i.i.d. distributed according to the law of $\tilde v$.
\State Find $x_n(v_{i})$, $i=1,...,N$, that maximize the trader's optimisation problem: \begin{displaymath} R_{v_{i}}(x) =\mathbb E\big[ \big(\tilde v -(\lambda_n (x+\tilde u)+\theta_n\, \textrm{sign}(x+\tilde u)+b_n)\big)\, x \,\big|\, \tilde v =v_{i} \big]. \end{displaymath} \State Sample a set of $u_1,...,u_{N}$ i.i.d. distributed according to the law of $\tilde u$. \State Train the neural network presented in Figure \ref{fig:twoLayer} with inputs $\{(u_1+x_n(v_{1}), v_1),...,(u_{N}+x_n(v_{N}),v_N)\}$ and $(\lambda_{n},b_{n},\theta_n)$ as the initial weights. \State Extract the weights $(\lambda_{n+1},b_{n+1},\theta_{n+1})$ that minimize the loss function \begin{displaymath} \frac{1}{N} \sum_{j=1}^N\big(P(x_n(v_{j}) +u_j) - v_j \big)^2 -\gamma \theta\frac{1}{N}\sum_{j=1}^N |x_n(v_{j}) +u_j|. \end{displaymath} \State Update the price function $P$ according to the new weights: $$ P(x)=\lambda_{n+1}x+\theta_{n+1}\, \textrm{sign}(x) + b_{n+1}. $$ \State \textbf{goto} 2. \end{algorithmic} \end{algorithm} In Figure \ref{conv-alg} we illustrate the convergence of our neural network algorithm for $\gamma=0.5$. We plot $(\lambda_{n},\theta_{n})$ as a function of $n$. We observe that the convergence of the algorithm is very fast, as we achieve convergence to equilibrium with accuracy of $10^{-6}$ after only $14$ iterations in the Gaussian noise case and after $9$ iterations in the Uniform noise case. The number of required iterations also depends on the risk-aversion parameter. Our results suggest that when the risk-aversion parameter is close to zero it takes an average of $9$ iterations to converge, and when the risk-aversion parameter is close to $1$ it takes an average of $13$ iterations to converge. \begin{figure}[h!] \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1.1\textwidth]{lamAndthetAsepochNormal.png} \caption{Normal Noise Case} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=1.1\textwidth]{lamAndthetAsepochUniform.png} \caption{Uniform Noise Case} \end{subfigure} \caption{Plot of $\lambda_{n}$ and $\theta_{n}$ from Algorithm \ref{alg-kyleBidAsk-NN} as a function of $n$, the number of iterations (i.e. the number of times running from step 2 to step 8). The risk-aversion parameter is $\gamma=0.5$.} \label{conv-alg} \end{figure} \subsection{Enlarging the class of admissible prices} \label{sec-en-admis} A possible generalisation of Algorithm \ref{alg-kyleBidAsk-NN} would be to derive the price $P$ from an arbitrary neural network, as described in Algorithm \ref{alg-kyle-3}. This would allow us to depart from the class of admissible prices $\mathscr{P}$ in \eqref{admis-p}. However, if the price function $P$ is general, deriving the insider's strategy becomes more involved, as the results of Propositions \ref{gaus} and \ref{prop-unif} do not apply. As a result, the implementation of such an algorithm requires solving numerically, at each step, a large number of optimisation problems. This creates additional discretization errors that are detrimental to the convergence to equilibrium. \begin{algorithm}[H] \caption{Multi-layer network for the market maker price}\label{alg-kyle-3} \begin{algorithmic}[1] \State Initialise the price function $P(x)$ with a random seed. \State Sample $v_1,...,v_N$ i.i.d. distributed according to the law of $\tilde v$, and independently sample a matrix $U=(u_{i,j})_{i=1,...,M, \, j=1,...,N}$ where the $u_{i,j}$ are i.i.d. distributed according to the law of $\tilde u$.
\State Solve the optimization problem \begin{displaymath} \min_{x_n(v_{j})} \frac{1}{M} \sum_{i=1}^M\big(P(x_{n}(v_{j})+u_{i,j}) - v_j\big)x_n(v_{j}), \quad \textrm{for any } j=1,...,N. \end{displaymath} \State Sample a new set of $u_1,...,u_N$ i.i.d. distributed according to the law of $\tilde u$. \State Train the neural network with $\{(u_1+x_n(v_{1}), v_1),...,(u_N+x_n(v_{N}),v_N)\}$ as inputs, using as initial weights the ones obtained at the previous iteration. For the training, use the following loss function \begin{displaymath} \frac{1}{N} \sum_{j=1}^N\big(P(x_{n}(v_{j}) +u_j) - v_j \big)^2 -\gamma \theta\frac{1}{N}\sum_{j=1}^N |x_n(v_{j}) +u_j|. \end{displaymath} \State Update the price function $P$ according to the weights extracted in the previous step. \State \textbf{goto} 2. \end{algorithmic} \end{algorithm} In order to enlarge the class of admissible price functions while preserving the complexity of Algorithm \ref{alg-kyleBidAsk-NN}, we tested this algorithm with higher degree polynomials in addition to the sign function. More precisely, we tested Algorithm \ref{alg-kyleBidAsk-NN} with the following classes of price functions \begin{align*} &\mathscr{P}_3 = \{P(x)=\lambda_1 x + \lambda_3 x^3 + \theta\, \textrm{sign}(x)+p_{0}, \, \lambda_1,\lambda_3, \theta > 0\}, \\& \mathscr{P}_5 = \{P(x)=\lambda_1 x + \lambda_3 x^3 +\lambda_5 x^5+ \theta\, \textrm{sign}(x)+p_{0}, \, \lambda_1,\lambda_3,\lambda_5, \theta > 0\}, \\& \mathscr{P}_7 = \{P(x)=\lambda_1 x + \lambda_3 x^3 +\lambda_5 x^5+\lambda_7 x^7+ \theta\, \textrm{sign}(x)+p_{0}, \, \lambda_1,\lambda_3,\lambda_5,\lambda_7, \theta > 0\}. \end{align*} In all cases, the weights besides $\lambda_1$ and $\theta$ converged to zero. This leads us to the following conjecture. \begin{conjecture} Let $\mathscr{P}_{n}$ be the class of polynomial price functions of order $n$ which incorporate a bid-ask spread, that is, $$ {\mathscr{P}}_{n} = \{P(x)= \mathrm{Poly}_{n}(x)+ \theta\, \textrm{sign}(x)+p_{0} \}. $$ Then, for $\gamma \in (\gamma_{LBid}, \gamma_{Bid})$, there exists an equilibrium with all coefficients equal to zero except for a linear coefficient and the spread $\theta$. \end{conjecture} \section{Proofs of Propositions \ref{lemma-min-trader}, \ref{gaus} and \ref{prop-unif}} \label{sec-prf} This section is dedicated to the proofs of Propositions \ref{lemma-min-trader}, \ref{gaus} and \ref{prop-unif}, which we prove in turn. \begin{proof}[Proof of Proposition \ref{lemma-min-trader}] Let $P \in \mathscr P$, and fix $v \in \mathds{R}$. From (\ref{admis-p}) and since $\tilde p =P(\tilde x +\tilde u)$ we have \begin{equation} \label{r1} \begin{aligned} R_{v}(x) &:=\mathbb E\big[ \tilde \pi(X,P) | \tilde v =v \big] \\ &= {\mathbb{E}}\big[ \big(\tilde v -\lambda (x+\tilde u)-\theta\, \textrm{sign}( x +\tilde u)-p_{0}\big)x| \tilde v =v \big] \\ &= \big( v -\lambda x-\theta {\mathbb{E}}[ \textrm{sign}( x +\tilde u)]-p_{0}\big)x, \end{aligned} \end{equation} where $\lambda, \theta >0$. Here we used the facts that $\mathbb E[\tilde u]=0$ and that $\tilde v$ and $\tilde u$ are independent. Recall that $F_{\tilde u}$ is the cumulative distribution function of $\tilde u$. Since $\tilde u$ is symmetric we have $$ {\mathbb{E}}[ \textrm{sign}( x +\tilde u)] = 1-2F_{\tilde u}(-x). $$ Therefore (\ref{r1}) becomes \begin{equation}\label{r2} \begin{aligned} R_{v}(x) &= \big( v -\lambda x-\theta(1-2F_{\tilde u}(-x))-p_{0}\big)x \\ &=-\lambda x^2+(v-p_0-\theta)x +2\theta xF_{\tilde u}(-x).
\end{aligned} \end{equation} Note that $R_v(0) =0$. Now assume that $v-p_{0} >0$. Since $F_{\tilde u}(0) = 1/2$, it is easy to verify that $R'_v(0)>0$, and therefore there exists $x>0$ such that $R_v(x)>0$. Moreover, note that $\lim_{x\rightarrow \pm \infty } R_{v}(x) = -\infty$, and that for any $x>0$ we have $R_v(x) >R_v(-x)$. It follows that there exists $0<x^*(v)<\infty $ that maximizes $R_{v}$. Moreover, $x^{*}(v)$ is a solution to the equation $R'_{v}(x)=0$, which is equivalent to \eqref{deriv-cond}. For the case $v-p_{0}=0$ we have that $R_{v}(x)<0$ when $x\not =0$, and therefore $x^*(v) = 0$ is the unique maximizer of $R_v$. In the case when $v-p_0 <0$ we have $R'_v(0)<0$, and by repeating the same steps as in the case $v-p_0 >0$, it follows that there exists $-\infty<x^*(v)<0 $ that maximizes $R_{v}$. Moreover, $x^{*}(v)$ is a solution to the equation $R'_{v}(x)=0$. We conclude that when $v-p_{0}\not = 0$ there exists a unique maximum of (\ref{r1}) on $(-\infty, \infty)$, which we denote by $x^{*}=x^{*}(v)$. We also have that $R_{v}(x^{*}(v))>0$ and $\textrm{sign}(x^*(v)) = \textrm{sign}(v-p_0)$. Moreover, when $v-p_{0}=0$, the unique maximum of (\ref{r1}) is $x^{*}=0$, for which we have $R_{0}(0)=0$. \end{proof} \begin{proof}[Proof of Proposition \ref{gaus}] Again we assume that $v-p_0>0$; the case $v-p_0<0$ can be handled similarly. The existence of a unique maximizer $x^*(v)$ of $R_v$ is known from Proposition \ref{lemma-min-trader}. It is also known that $x^*(v)$ satisfies $R'_v(x^*(v))=0$, which in this case is given by \eqref{normal-eqn}. Hence it remains to identify the zeros of $R'_v$. Recall that $\tilde u$ is a mean-zero Gaussian with variance $\sigma^2_{\tilde u}$. From \eqref{r2} it follows that the second derivative of $R_{v}$ is given by \begin{equation*} R''_{v}(x) = -2\lambda -4\theta f_{\tilde{u}}(-x)+ 2\theta x f_{\tilde{u}}^{'}(-x). \end{equation*} Without loss of generality, we assume that $\sigma^2_{\tilde u}=1$, so we have \begin{equation*} R''_{v}(x) = -2\lambda -2\theta \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\left (2-x^{2} \right ). \end{equation*} It can be easily verified that $R''_{v}(x)$ is monotone increasing on $[0,2)$ and then decreasing on $[2,\infty)$. Since $R''_{v}(0)<0$, we get that $R''_{v}$ satisfies one of the following two cases on $[0,\infty)$: (i) it is negative everywhere; (ii) it changes sign twice, being negative, then positive, then negative. Combining this with the fact that $ R'_{v}(0)>0$ and $\lim_{x\rightarrow \infty } R'_{v}(x) = -\infty$, we get that in case (i) there is only one solution to $R'_{v}(x)=0$, which is the global maximum. In case (ii) there are either one or three solutions ($0<x_1<x_2<x_3$) to $R'_{v}(x)=0$. The one-solution case is clearly a global maximum, as in case (i). If in case (ii) there are three solutions, then either $x_1$ or $x_3$ must be the unique global maximum. \end{proof} \begin{proof} [Proof of Proposition \ref{prop-unif} ] By Proposition \ref{lemma-min-trader}, if $v-p_0>0$ then $x^{*}>0$. We also note that when $\tilde u$ is distributed uniformly on $[-1,1]$, \eqref{r1} is given by \begin{equation} \label{r-unif} R_{v}(x) = \begin{cases} -(\lambda+\theta)x^2 +(v-p_0)x, & \text{for } 0\leq x \leq 1, \\%[1ex] -\lambda x^2+(v-p_0-\theta)x, & \text{for } x>1. \end{cases} \end{equation} Define $R_{1}(x) =-(\lambda+\theta)x^2 +(v-p_0)x$ and $R_{2}(x)=-\lambda x^2+(v-p_0-\theta)x$, where we note that both $R_1$ and $R_2$ are concave parabolas.
The maxima $x_i$ of $R_i$, $i=1,2$, are attained at \begin{equation} \label{x-i} x_1 = \frac{v-p_0}{2(\lambda+\theta)}, \quad x_2 = \frac{v-p_0-\theta}{2\lambda}, \end{equation} and we have \begin{equation}\label{r-i} R_1(x_1) = \frac{(v-p_0)^2}{4(\lambda+\theta)}, \quad R_2(x_2) = \frac{(v-p_0-\theta)^2}{4\lambda}. \end{equation} Note that the points $(x_i, R_{i}(x_{i}))$, $i=1,2$, both appear on the graph of $R_{v}$ if $0\leq x_1 \leq 1$ and $x_2\geq 1$, which translates to $v-p_0 \leq 2(\lambda+\theta)$ and $v-p_0 \geq 2\lambda +\theta$, respectively. It follows that in order to find the global maximum $x^*$ when $2\lambda +\theta \leq v-p_0 \leq 2(\lambda+\theta)$, we need to compare $R_1(x_1)$ to $R_2(x_2)$. We get that $R_1(x_1) > R_2(x_2)$, i.e. $x^*=x_1$, when $2\lambda +\theta \leq v-p_0 \leq \bar z$, where $\bar z = \lambda+\theta + \sqrt{(\lambda+\theta)\lambda}$. Moreover, when $\bar z \leq v-p_0\leq 2(\lambda+\theta)$, then $R_1(x_1) \leq R_2(x_2)$, which means that $x^*=x_2$. In order to complete the proof we need to show that \begin{equation} \label{unif-verf} \begin{aligned} x^*=x_1, & \quad \textrm{when } 0< v-p_0 \leq2\lambda +\theta, \\ x^*=x_2, & \quad \textrm{when } 2(\lambda+\theta)< v-p_0. \end{aligned} \end{equation} Note that if $0< v-p_0 \leq2\lambda +\theta$ then $(x_1,R_1(x_1))$ appears on the graph of $R_v$ but $(x_2,R_2(x_2))$ does not. From (\ref{r-unif}) it follows that $R_v$ is decreasing for $x\geq 1$. Since $x_1$ is the maximum of $R_v$ on $[0,1]$, it follows that $x^*=x_1$. If $2(\lambda+\theta)< v-p_0$, then $(x_2,R_2(x_2))$ appears on the graph of $R_v$ but $(x_1,R_1(x_1))$ does not. From (\ref{r-unif}) it follows that $R_v$ is increasing on $[0,1]$. Since $x_2\geq 1$ is the maximum of $R_2$, it is also the maximum of $R_v$ and $x^*=x_2$, which verifies (\ref{unif-verf}). \end{proof} \section{Proofs of Theorem \ref{thm-equil1} and Proposition \ref{prop-lin-eq}} \label{sec-pfs2} \begin{proof}[Proof of Theorem \ref{thm-equil1}] For $x^*(v)$ as in Proposition \ref{lemma-min-trader} define \begin{equation} \label{c-lam} C(\theta,\lambda)= {\mathbb{E}}\big[ \big( \tilde v- p_{0} - \lambda (x^{*}(\tilde v)+\tilde u) - \theta\, \textrm{sign}(x^{*}(\tilde v)+\tilde u) \big)^{2}\big]- \gamma\theta\, \mathbb E\big[|x^{*}(\tilde v) + \tilde u|\big] . \end{equation} From (\ref{admis-p}) and (\ref{equi-mm}) it follows that we need to solve the minimization problem \begin{displaymath} \min_{(\theta,\lambda) \in \mathds{R}^{2}_{+}} C(\theta,\lambda), \end{displaymath} where $\mathds{R}^{2}_{+}$ denotes the first quadrant of $\mathds{R}^{2}$.
Using the independence of $\tilde v$ and $\tilde u$ we get the following first order conditions: \begin{equation} \label{partial-c} \begin{aligned} \partial_{\lambda} C(\theta,\lambda)& = -2{\mathbb{E}}\big[(x^{*}(\tilde v)+\tilde u) \big( \tilde v- p_{0} - \lambda (x^{*}(\tilde v)+\tilde u) - \theta\, \textrm{sign}(x^{*}(\tilde v)+\tilde u) \big)\big] \\ &= -2{\mathbb{E}}\big[x^{*}(\tilde v)( \tilde v- p_{0}) \big]+2\lambda{\mathbb{E}}\big[(x^{*}(\tilde v)+\tilde u)^{2}\big] +2\theta {\mathbb{E}}\big[|x^{*}(\tilde v)+\tilde u|\big] \\ &=0, \end{aligned} \end{equation} and \begin{equation} \label{partial-c2} \begin{aligned} \partial_{\theta} C(\theta,\lambda) =& -2{\mathbb{E}}\big[ \textrm{sign}(x^{*}(\tilde v)+\tilde u) \big( \tilde v- p_{0} - \lambda (x^{*}(\tilde v)+\tilde u) - \theta\, \textrm{sign}(x^{*}(\tilde v)+\tilde u) \big)\big] \\ & - \gamma\, \mathbb E\big[|x^{*}(\tilde v) + \tilde u|\big] \\ =& -2{\mathbb{E}}\big[ \textrm{sign}(x^{*}(\tilde v)+\tilde u)( \tilde v- p_{0})\big]+(2\lambda-\gamma) {\mathbb{E}}\big[|x^{*}(\tilde v)+\tilde u |\big]+ 2\theta \\ =&0. \end{aligned} \end{equation} We arrive at the following linear system of equations: \begin{displaymath} \begin{pmatrix} \ell_{2} & \ell_{1} \\ \ell_{1} & 1 \\ \end{pmatrix} \cdot \begin{pmatrix} \lambda \\ \theta \\ \end{pmatrix} = \begin{pmatrix} \mu\\ \kappa +\gamma \ell_{1}/2 \\ \end{pmatrix}. \end{displaymath} It follows that \begin{equation} \label{opt-eqn} \lambda^{*} = \frac{\mu -(\kappa +\gamma \ell_{1}/2) \ell_{1}}{\ell_{2}-\ell_{1}^{2}}, \quad \theta^{*} = \kappa+\frac{\gamma}{2}\ell_{1}- \frac{\ell_{1}\big(\mu -\kappa \ell_{1}-\gamma\ell_{1}^{2}/2\big)}{\ell_{2}-\ell_{1}^{2}}. \end{equation} The Hessian matrix of $C(\theta,\lambda)$, $$ H(C(\theta,\lambda)) = \begin{pmatrix} 2\ell_{2} & 2\ell_{1} \\ 2\ell_{1} & 2 \end{pmatrix}, $$ is positive definite, hence $(\lambda^{*},\theta^{*})$ is a global minimum. \end{proof} Before we prove Proposition \ref{prop-lin-eq} we introduce the following lemma. \begin{lemma} \label{lemma-z-u} Let $Y$ and $Z$ be independent random variables, such that $Z$ is a centred Gaussian with variance $\sigma_{\tilde v}^2$. Assume further that one of the following assumptions holds: \begin{itemize} \item[\textbf{(a)}] $Y$ is a centred Gaussian with variance $\sigma^2_{\tilde v}$. \item[\textbf{(b)}] $Y$ is a Uniform random variable on $[-b,b]$, for some $b\ge\sqrt{3}\sigma_{\tilde v}$. \end{itemize} Then we have \begin{equation} \label{z-y-lemma} {\mathbb{E}}\big[|Z+ Y |\big] \geq 2 {\mathbb{E}}\big[ \textrm{sign}(Z+ Y)Z\big]. \end{equation} \end{lemma} \begin{proof} \textbf{(a)} Note that \begin{eqnarray*} {\mathbb{E}}[|Z+ Y |] = {\mathbb{E}}[ (Z+Y) \mathds{1}_{\{Z+Y>0\}}] - {\mathbb{E}}[ (Z+Y) \mathds{1}_{\{Z+Y \leq 0\}}]. \end{eqnarray*} On the other hand, \begin{eqnarray*} {\mathbb{E}}\big[ \textrm{sign}(Z+ Y)Z\big] = {\mathbb{E}}[ Z \mathds{1}_{\{Z+Y>0\}}] - {\mathbb{E}}[ Z \mathds{1}_{\{Z+Y \leq 0\}}]. \end{eqnarray*} Hence in order to prove (\ref{z-y-lemma}) we need to show that \begin{equation} \label{xy1} {\mathbb{E}}[ Y \mathds{1}_{\{Z+Y>0\}}] - {\mathbb{E}}[ Y \mathds{1}_{\{Z+Y \leq 0\}}] \geq {\mathbb{E}}[ Z \mathds{1}_{\{Z+Y>0\}}] - {\mathbb{E}}[ Z \mathds{1}_{\{Z+Y \leq 0\}}], \end{equation} or, equivalently, \begin{equation*} {\mathbb{E}}\big[ \textrm{sign}(Z+ Y)Y\big] \geq {\mathbb{E}}\big[ \textrm{sign}(Z+ Y)Z\big]. \end{equation*} Since $Z$ and $Y$ are independent and have the same law, the inequality above trivially holds, with equality.
\textbf{(b)} Note that \begin{displaymath} \begin{aligned} &{\mathbb{E}}\big[|Z+ Y |\big] - 2 {\mathbb{E}}\big[ \textrm{sign}(Z+ Y)Z\big] \\ &= \int \int_{z+y>0}(z+y)f_{Z}(z)f_{Y}(y)dzdy-\int \int_{z+y\leq0}(z+y)f_{Z}(z)f_{Y}(y)dzdy \\ & \quad - 2 \int \int_{z+y>0}zf_{Z}(z)f_{Y}(y)dzdy+2\int\int_{z+y \leq 0}zf_{Z}(z)f_{Y}(y)dzdy, \end{aligned} \end{displaymath} where $f_{Y}$ and $f_{Z}$ are the probability densities of $Y$ and $Z$. Since $Y$ is uniformly distributed on $[-b,b]$ and $Z$ is a centred Gaussian with variance $\sigma_{\tilde v}^{2}$, we get \begin{eqnarray*} &&{\mathbb{E}}\big[|Z+ Y |\big] - 2 {\mathbb{E}}\big[ \textrm{sign}(Z+ Y)Z\big] \\ &&= \frac{1}{2b}\int_{-\infty}^{\infty}\int_{-b}^{b}1_{\{z+y>0\}}(y-z) f_{Z}(z)dydz+\frac{1}{2b}\int_{-\infty}^{\infty}\int_{-b}^{b}1_{\{z+y\leq 0\}}(z-y)f_{Z}(z)dydz \\ &&= \frac{1}{b}\int_{-b}^{b}\int_{-y}^{\infty}(y-z) f_{Z}(z)dzdy \\ &&= \frac{1}{\sqrt{2\pi} \sigma_{\tilde v}} \frac{1}{b}\int_{-b}^{b}\int_{-y}^{\infty}(y-z) e^{-z^{2}/(2\sigma^{2}_{\tilde v})}dzdy. \end{eqnarray*} Calculation of the above integral gives: \begin{eqnarray*} &&{\mathbb{E}}\big[|Z+ Y |\big] - 2 {\mathbb{E}}\big[ \textrm{sign}(Z+ Y)Z\big] \\ &&= \frac{1}{b}\int_{-b}^{b} \left(\frac{1}{2}y\left( \text{erf}\left(\frac{y}{\sqrt{2}\sigma_{\tilde v}}\right)+1\right) - \frac{\sigma_{\tilde v}}{\sqrt{2 \pi}} e^{-y^{2}/(2\sigma^{2}_{\tilde v})} \right)dy \\ &&= \frac{1}{2b} (b^{2}-\sigma_{\tilde v}^{2})\text{erf}\left(\frac{b}{\sqrt{2}\sigma_{\tilde v}}\right)+ \frac{\sigma_{\tilde v}}{ \sqrt{2\pi}} e^{-b^{2}/(2\sigma_{\tilde v }^{2})} - \frac{\sigma_{\tilde v}^{2}}{b} \text{erf}\left(\frac{b}{\sqrt{2}\sigma_{\tilde v}}\right) \\ &&= \frac{1}{2b}\left(b^{2} -3 \sigma_{\tilde v}^{2} \right) \text{erf}\left(\frac{b}{\sqrt{2}\sigma_{\tilde v}}\right) + \frac{\sigma_{\tilde v}}{ \sqrt{2\pi}} e^{-b^{2}/(2\sigma_{\tilde v }^{2})}. \end{eqnarray*} Thus, when $b\ge\sqrt{3}\sigma_{\tilde v}$, the right-hand side is non-negative, so \eqref{z-y-lemma} holds and the result follows. \end{proof} \begin{proof}[Proof of Proposition \ref{prop-lin-eq}] Recall that $C(\theta,\lambda)$ was introduced in (\ref{c-lam}). In order to prove Proposition \ref{prop-lin-eq} we show that $(\theta^{*}, \lambda^{*})= (0,\frac{\sigma_{\tilde v}}{2\sigma_{ \tilde u}})$ minimizes $C(\theta,\lambda)$ for $\gamma =0$. In the proof of Theorem \ref{thm-equil1} we showed that $C(\theta,\lambda)$ is convex. Therefore, it is enough to show that \begin{equation} \partial_{\lambda} C(0,\lambda^{*}) =0, \quad \textrm{and }\partial_{\theta} C(0,\lambda^{*}) \geq 0. \end{equation} Recall that the trader's optimal order size is $x^*(v)=\frac{v-p_0}{2\lambda}$. From (\ref{c-lam}) we get \begin{displaymath} \begin{aligned} \partial_{\lambda} C(0,\lambda) &= -2{\mathbb{E}}\big[x^{*}(\tilde v)( \tilde v- p_{0}) \big]+2\lambda{\mathbb{E}}\big[(x^{*}(\tilde v)+\tilde u)^{2}\big] \\ &=-\frac{1}{\lambda}{\mathbb{E}}\big[( \tilde v- p_{0})^{2} \big]+2\lambda{\mathbb{E}}\Big[\Big(\frac{\tilde v-p_{0}}{2\lambda}+\tilde u\Big)^{2}\Big] \\ &=-\frac{1}{2\lambda}{\mathbb{E}}\big[( \tilde v- p_{0})^{2} \big]+2\lambda\, \mathbb E\big[\tilde u^{2}\big]. \end{aligned} \end{displaymath} Since $(\lambda^{*})^{2}= \sigma_{\tilde v}^{2}/(4\sigma_{\tilde u}^{2})$, we therefore get that $\partial_{\lambda} C(0,\lambda^{*}) =0$.
From (\ref{c-lam}) we also have \begin{displaymath} \begin{aligned} \partial_{\theta} C(0,\lambda) =& -2{\mathbb{E}}\big[ \textrm{sign}(x^{*}(\tilde v)+\tilde u)( \tilde v- p_{0})\big]+2\lambda {\mathbb{E}}\big[|x^{*}(\tilde v)+\tilde u |\big] \\ =& -2{\mathbb{E}}\Big[ \textrm{sign}\Big(\frac{\tilde v-p_{0}}{2\lambda}+\tilde u\Big)( \tilde v- p_{0})\Big]+2\lambda {\mathbb{E}}\Big[\Big|\frac{\tilde v-p_{0}}{2\lambda}+\tilde u \Big|\Big]. \end{aligned} \end{displaymath} Hence in order to prove that $\partial_{\theta} C(0,\lambda^{*}) \geq 0$ we need to show \begin{equation} \label{rf1} {\mathbb{E}}\big[|\tilde v-p_{0}+2\lambda^* \tilde u |\big] \geq 2 {\mathbb{E}}\big[ \textrm{sign}(\tilde v-p_{0}+2\lambda^* \tilde u)( \tilde v- p_{0})\big]. \end{equation} Since $\tilde v-p_{0}$ is a centred Gaussian with variance $\sigma^{2}_{\tilde v}$, \eqref{rf1} follows immediately from Lemma \ref{lemma-z-u}(a) for the case where $\tilde u$ is a centred Gaussian with variance $\sigma^{2}_{\tilde u}$, since $2\lambda^* \tilde u=\frac{\sigma_{\tilde v}}{\sigma_{ \tilde u}} \tilde u$ is then a centred Gaussian with variance $\sigma_{\tilde v}^{2}$. When $\tilde u$ is a centred Uniform with variance $\sigma_{\tilde u}^{2}$, then $2\lambda^* \tilde u = \frac{\sigma_{\tilde v}}{\sigma_{ \tilde u}} \tilde u$ is distributed uniformly on $[-\sqrt{3} \sigma_{\tilde v} , \sqrt{3} \sigma_{\tilde v}]$ and \eqref{rf1} follows from Lemma \ref{lemma-z-u}(b). \end{proof} \section{Proofs of Propositions \ref{prop:alg-kyle-0} and \ref{prop-big-gamma}} \label{sec-pfs-3} \begin{proof} [Proof of Proposition \ref{prop:alg-kyle-0}] Without loss of generality we assume that $p_0=0$. Let $\lambda_0>0$ and solve the trader's problem in step 2 of Algorithm \ref{alg-kyle-0} by using (\ref{x-no-sprd}) to get $$ x_0(v) = \beta_0v, $$ where $\beta_0= \frac{1}{2\lambda_0}$. Note also that, since $\tilde v$ and $x_n(\tilde v)+\tilde u$ are centred, the optimal intercept in step 3 is $b_{n+1}=0$ for every $n\geq 0$. Now solve the optimization in step 3 (see e.g. equation (2.8) in \cite{kyle85}) to get \begin{eqnarray*} \lambda_1 &=& \frac{\beta_0 \sigma_{\tilde v}^2}{\beta^2_0\sigma_{\tilde v}^2+\sigma_{\tilde u}^2} \\ &=& \frac{2\lambda_0 \sigma_{\tilde v}^2}{ \sigma_{\tilde v}^2+4\lambda_0^2\sigma_{\tilde u}^2}. \end{eqnarray*} Repeating this procedure for $n$ steps, we have \begin{equation} \label{lam-n} \lambda_n = \frac{2\lambda_{n-1} \sigma_{\tilde v}^2}{ \sigma_{\tilde v}^2+4\lambda_{n-1}^2\sigma_{\tilde u}^2}, \quad \textrm{and } x_n(v) = \frac{v}{2\lambda_n}. \end{equation} Write \eqref{lam-n} as $\lambda_n = g(\lambda_{n-1})$ with $g(\lambda) = \frac{2\lambda \sigma_{\tilde v}^2}{\sigma_{\tilde v}^2+4\lambda^2\sigma_{\tilde u}^2}$. By the inequality of arithmetic and geometric means we have $\sigma_{\tilde v}^2+4\lambda^2\sigma_{\tilde u}^2 \geq 4\lambda \sigma_{\tilde v}\sigma_{\tilde u}$ for every $\lambda>0$, hence, using (\ref{lam-st-kyle}), $$ g(\lambda) \leq \frac{2\lambda\sigma_{\tilde v}^{2}}{4\lambda\sigma_{\tilde v}\sigma_{\tilde u}} = \frac{\sigma_{\tilde v}}{2\sigma_{\tilde u}} = \lambda^{*}, \quad \textrm{for all } \lambda>0, $$ so that $\lambda_n \leq \lambda^*$ for every $n\geq 1$. Moreover, $g(\lambda)\geq \lambda$ if and only if $\lambda \leq \lambda^*$, so $\{\lambda_n\}_{n\geq 1}$ is a non-decreasing sequence which is bounded above by $\lambda^*$ and below by $\lambda_{1}>0$. Hence the limit $\lambda_\infty = \lim_{n\rightarrow \infty}\lambda_n$ exists and is positive.
From (\ref{lam-n}) it follows that \begin{displaymath} \lambda_\infty = \frac{2\lambda_{\infty} \sigma_{\tilde v}^2}{ \sigma_{\tilde v}^2+4\lambda_{\infty}^2\sigma_{\tilde u}^2}, \end{displaymath} and since $\lambda_{\infty}>0$ we conclude that $\lambda_{\infty} = \lambda^*= \frac{\sigma_{\tilde v}}{2\sigma_{\tilde u}}$. We also have $\lim_{n\rightarrow\infty} x_n(v) = \frac{v}{2\lambda_\infty} = \frac{v}{2\lambda^*}$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop-big-gamma}] Without loss of generality we assume that $p_0=0$. Let $\gamma >0$. Assume that $\lambda=0$ and $x(\tilde v)=\frac{\tilde{v}}{2\theta}$ for some $\theta>0$ to be determined. We will first show that there exists $\theta^{*}>0$ such that \begin{equation} \label{der-cond} \partial_{\lambda} C(\theta^{*},0) > 0, \qquad \partial_{\theta} C(\theta^{*},0) =0. \end{equation} Note that from \eqref{partial-c} we have in this case, \begin{equation}\label{partial-c_uniform} \partial_{\lambda} C(\theta,0) = -2{\mathbb{E}}\left[\frac{\tilde{v}}{2\theta}\tilde{v}\right]+ 2\theta {\mathbb{E}}\left[\left|\frac{\tilde{v}}{2\theta}+\tilde u\right|\right] = -\frac{1}{\theta}+ 2\theta {\mathbb{E}}\left[\left|\frac{\tilde{v}}{2\theta}+\tilde u\right|\right]. \end{equation} Recall that $\tilde v$ is a standard Gaussian. Using the tower property we have, \begin{eqnarray*} {\mathbb{E}}\left[\left|\frac{\tilde{v}}{2\theta}+\tilde u\right|\right] &=& {\mathbb{E}}\left[ {\mathbb{E}}\left[\left|\frac{\tilde{v}}{2\theta}+\tilde u\right| \, \bigg|\tilde u\right] \right] \\ &=&{\mathbb{E}}\left[\frac{1}{\sqrt{2\pi}\theta}e^{-2\tilde{u}^2 \theta^2}+ \tilde{u}\cdot \text{erf}(\tilde{u}\theta \sqrt{2}) \right]. \end{eqnarray*} Since $\tilde{u}$ is uniformly distributed on $[-1,1]$ we get that \begin{equation}\label{EabsOFuniform} {\mathbb{E}}\left[\left|\frac{\tilde{v}}{2\theta}+\tilde u\right|\right] = \frac{1}{2}\left(\frac{1}{\theta\sqrt{2\pi}}e^{-2\theta^2}+ \text{erf}(\theta \sqrt{2}) \cdot \left(1+\frac{1}{4\theta^2}\right) \right). \end{equation} Plugging it into \eqref{partial-c_uniform} we have \begin{equation*} \partial_{\lambda} C(\theta,0) = -\frac{1}{\theta} + \text{erf}(\theta \sqrt{2}) \cdot \left(\theta+\frac{1}{4\theta}\right)+ \frac{1}{\sqrt{2\pi}}e^{-2\theta^2}. \end{equation*} It is easy to check that $\partial_{\lambda} C(\theta,0)$ is monotone increasing in $\theta$ on $\theta>0$, and that for $\theta > 1$ it is strictly positive. From \eqref{partial-c2} with $x(\tilde v)=\frac{\tilde{v}}{2\theta}$ we have \begin{equation}\label{partial_theta_0} \partial_{\theta} C(\theta,0) = -2{\mathbb{E}}\left[ \tilde{v}\cdot \textrm{sign}\left(\frac{\tilde{v}}{2\theta}+\tilde u\right)\right]-\gamma {\mathbb{E}}\left[\left|\frac{\tilde{v}}{2\theta}+\tilde u \right|\right]+ 2\theta. \end{equation} Note that \begin{align*} {\mathbb{E}}\left[ \tilde{v}\cdot \textrm{sign}\left(\frac{\tilde{v}}{2\theta}+\tilde u\right)\right]=& \int_{-1}^{1}\int_{-\infty}^{\infty}v (\mathds{1}_{\{v>-2u\theta\}}-\mathds{1}_{\{v<-2u\theta\}})\frac{1}{2} \frac{1}{\sqrt{2\pi}}e^{-v^2/2} dv du \\&= \frac{1}{\sqrt{2\pi}}\int_{-1}^{1}\int_{-2u\theta}^{\infty}v e^{-v^2/2} dv du \\&= \frac{1}{\sqrt{2\pi}}\int_{-1}^{1} e^{-2 \theta^2 u^2} du \\&= \frac{1}{2\theta} \text{erf}(\theta \sqrt{2}).
\end{align*} Using this and \eqref{EabsOFuniform} in \eqref{partial_theta_0} we get \begin{equation*} \partial_{\theta} C(\theta,0) = -\frac{1}{\theta} \text{erf}(\theta \sqrt{2})- \frac{\gamma}{2} \left( \text{erf}(\theta \sqrt{2}) \cdot \left(1+\frac{1}{4\theta^2}\right)+ \frac{1}{\theta\sqrt{2\pi}}e^{-2\theta^2}\right)+ 2\theta . \end{equation*} Define \begin{equation*} H(\theta)= -\frac{1}{\theta} \text{erf}(\theta \sqrt{2})- \frac{\gamma}{2} \left( \text{erf}(\theta \sqrt{2}) \cdot \left(1+\frac{1}{4\theta^2}\right)+ \frac{1}{\theta\sqrt{2\pi}}e^{-2\theta^2}\right)+ 2\theta . \end{equation*} Note that $H$ is continuous and monotone increasing in $\theta$ on $(0,\infty)$. Moreover, $\lim_{\theta \rightarrow 0} H(\theta) = -\infty $ and $\lim_{\theta \rightarrow \infty} H(\theta)= \infty$. It follows that $H$ has a unique zero $\theta^* = \theta^*(\gamma)$, which is clearly monotone increasing in $\gamma$ and satisfies $\lim_{\gamma \rightarrow \infty } \theta^*(\gamma) = \infty$. We have therefore shown that for any $\gamma >0$ there exists $\theta^{*}>0$ such that \eqref{der-cond} holds. Let $\varepsilon>0$ be arbitrarily small. Choose $\gamma$ large enough so that $\theta^*$ satisfies $$ P(|\tilde v|< \theta^* ) > 1-\varepsilon. $$ Define $(x_0(v),\lambda_0,\theta_0) = \left( \frac{v}{2\theta^*}, 0, \theta^*\right)$. From Proposition \ref{prop-unif} it follows that $x_0(v) = \frac{v}{2\theta^*}$ solves the insider optimisation problem in step 2 of Algorithm \ref{alg-kyle-bid-ask} if $|\tilde v| < \theta^* $. Moreover, since $\theta^*$ satisfies \eqref{der-cond} and $C(\lambda,\theta)$ is convex, it follows that $(\lambda_0,\theta_0)$ minimises $C(\lambda,\theta)$, hence it is the output of step 3 in Algorithm \ref{alg-kyle-bid-ask}. We get that $$ (x_1(v),\lambda_1,\theta_1) = \left( \frac{v}{2\theta^*}, 0, \theta^* \right), \quad \textrm{ with probability larger than } 1-\varepsilon. $$ Repeating this argument we get \eqref{met-stable} for any $n\geq 1$. \end{proof} \section{Formulas for equilibrium points} \label{sec-form} In this section we derive simplified formulas for $\ell_{p,x^{*}}, \ p=1,2$, $\mu_{x^{*}} $ and $\kappa_{x^{*}}$ from Theorem \ref{thm-equil1}, for the case where $\tilde v-p_0$ is a standard Gaussian and $\tilde u$ is distributed uniformly on $[-1,1]$. We recall that in this case $x^*(v)$ is given by Proposition \ref{prop-unif}. Note that the expressions obtained for $\ell_{2,x^*}$ and $\mu_{x^*}$ are given in closed form. The formulas for $\kappa_{x^{*}}$ and $\ell_{1,x^{*}}$ are given as integrals which can easily be evaluated by standard numerical schemes. We first introduce some notation. \paragraph{Notation.} Recall that $\phi$ and $\Phi$ are the probability density function and cumulative distribution function of the standard Gaussian distribution, respectively. For any nonnegative $\lambda$ and $\theta$ let $$ \beta(\lambda,\theta)= \lambda+\theta +\sqrt{\lambda+\theta}. $$ For any integrable functions $f,g:\mathds{R} \rightarrow \mathds{R}$ we define: \begin{eqnarray*} F_1(\lambda,\theta) &=& \int_{0}^{\beta(\lambda,\theta)}z^2\phi(z)dz= \frac{1}{2} \text{erf}\left(\frac{\beta(\lambda,\theta)}{\sqrt{2}} \right)-\frac{\beta(\lambda,\theta)}{\sqrt{2\pi}}e^{-\beta(\lambda,\theta)^2/2},
\\ F_2(\lambda,\theta ; [f],[g]) &=&\int_{0}^{\infty} f(z) (1 \wedge g(z) +1)_{+} \mathds{1}_{\{z >\beta(\lambda,\theta)\}} \phi(z) dz, \\ \overline F_2(\lambda,\theta ; [f],[g]) &=&\int_{0}^{\infty} f(z) (1 \wedge g(z) +1)_{+} \mathds{1}_{\{z \leq\beta(\lambda,\theta)\}} \phi(z) dz, \\ F_3(\lambda, \theta; [f]) &=&\int_{0}^{\infty}\big(1-(1 \wedge f(z))^2\big)\mathds{1}_{\{f(z)>-1\}} \mathds{1}_{\{z > \beta(\lambda,\theta)\}} \phi(z) dz, \\ \overline F_3(\lambda, \theta; [f]) &=&\int_{0}^{\infty}\big(1-(1 \wedge f(z))^2\big)\mathds{1}_{\{f(z)>-1\}} \mathds{1}_{\{z \leq \beta(\lambda,\theta)\}} \phi(z) dz. \end{eqnarray*} We start with the expression for $\ell_{2,x^{*}}$. \begin{lemma} \label{lem-l2} Under the assumptions of Proposition \ref{prop-unif} we have \begin{eqnarray*} \ell_{2,x^{*}} &=& \frac{1}{2\lambda^2}\left(\frac{1}{2}-F_1(\lambda,\theta)-2\theta \frac{1}{\sqrt{2\pi}}e^{-\beta(\lambda,\theta)^2/2} +\theta^2 \big(1-\Phi(\beta(\lambda,\theta))\big) \right)\\ &&+ \frac{1}{2(\lambda+\theta)^2} F_1(\lambda,\theta). \end{eqnarray*} \end{lemma} \begin{proof} From the independence of $\tilde v$ and $\tilde u$, and since $\tilde u$ is centred, we have \begin{eqnarray*} \ell_{2,x^{*}} &=& {\mathbb{E}}\big[(x^*(\tilde v) +\tilde u)^2\big] \\ &=& {\mathbb{E}}[x^*(\tilde v)^2] +\sigma_{\tilde u}^2. \end{eqnarray*} Recall that by Proposition \ref{prop-unif}, $x^*(v)$ is symmetric around $p_0$. Using the explicit formula for $x^*$ and since $\tilde v- p_0$ is a standard Gaussian we have \begin{eqnarray*} {\mathbb{E}}[x^*(\tilde v)^2] &=&2\frac{1}{4(\lambda+\theta)^2} {\mathbb{E}}\big[(\tilde v- p_0)^2\mathds{1}_{\{0\leq \tilde v-p_0 \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \big]\\ &&+ 2\frac{1}{4\lambda^2} {\mathbb{E}}\big[(\tilde v-p_0-\theta)^2 \mathds{1}_{\{\tilde v-p_0 > \lambda+\theta +\sqrt{\lambda+\theta}\}}\big] \\ &=& \frac{1}{2\lambda^2}\left(\frac{1}{2}-F_1(\lambda,\theta)-2\theta \frac{1}{\sqrt{2\pi}}e^{-\beta(\lambda,\theta)^2/2} +\theta^2 \big(1-\Phi(\beta(\lambda,\theta))\big) \right)\\ &&+ \frac{1}{2(\lambda+\theta)^2} F_1(\lambda,\theta). \end{eqnarray*} \end{proof} Next, we derive an expression for $\mu_{x^{*}} $. \begin{lemma} \label{lem-mu} Under the assumptions of Proposition \ref{prop-unif} we have $$ \mu_{x^{*}} =\frac{1}{(\lambda+\theta)} F_1(\lambda,\theta) + \frac{1}{\lambda}\left(\frac{1}{2}-F_1(\lambda,\theta) -\theta \frac{1}{\sqrt{2\pi}}e^{-\beta(\lambda,\theta)^2/2} \right ). $$ \end{lemma} \begin{proof} From Proposition \ref{prop-unif}, the symmetry of $x^*(v)$ around $p_0$ and since $\tilde v- p_0$ is a standard Gaussian we have \begin{eqnarray*} \mu_{x^{*}} &=& {\mathbb{E}}\big[x^*(\tilde v)(\tilde v-p_0)\big] \\ &=&2\frac{1}{2(\lambda+\theta)} {\mathbb{E}}\big[(\tilde v- p_0)^2\mathds{1}_{\{0\leq \tilde v-p_0 \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \big]\\ &&+ 2\frac{1}{2\lambda} {\mathbb{E}}\big[(\tilde v-p_0-\theta)(\tilde v- p_0) \mathds{1}_{\{\tilde v-p_0 > \lambda+\theta +\sqrt{\lambda+\theta}\}}\big] \\ &=& \frac{1}{(\lambda+\theta)} F_1(\lambda,\theta) + \frac{1}{\lambda}\left(\frac{1}{2}-F_1(\lambda,\theta) -\theta \frac{1}{\sqrt{2\pi}}e^{-\beta(\lambda,\theta)^2/2} \right ). \end{eqnarray*} \end{proof} Next we compute $\kappa_{x^{*}}$. \begin{lemma} \label{lem-kappa} Under the assumptions of Proposition \ref{prop-unif} we have $$ \kappa_{x^{*}} =2\overline F_{2}\Big(\lambda,\theta;[z], \Big[\frac{z}{2(\lambda+\theta)}\Big]\Big ) + \left(F_{2}\Big(\lambda,\theta;[z], \Big[\frac{z-\theta}{2\lambda}\Big]\Big ) + F_{2}\Big(\lambda,\theta;[z],\Big[\frac{z+\theta}{2\lambda}\Big]\Big) \right) .
$$ \end{lemma} \begin{proof} Note that \begin{eqnarray*} \kappa_{x^{*}} &=& {\mathbb{E}}\big[\textrm{sign}(x^*(\tilde v) +\tilde u)(\tilde v -p_0) \big] \\ &=&{\mathbb{E}}\big[(\tilde v -p_0) \mathds{1}_{\{x^*(\tilde v) +\tilde u \geq 0\}}\big] - {\mathbb{E}}\big[ (\tilde v -p_0) \mathds{1}_{\{x^*(\tilde v) +\tilde u <0\}}\big] \\ &=&:I_1-I_2. \end{eqnarray*} We denote $(x)_{+} = \max\{0,x\}$. Using Proposition \ref{prop-unif} and the symmetry of $x^*(v)$ around $p_0$ we have \begin{eqnarray*} I_1 &=& 2{\mathbb{E}}\big[ (\tilde v -p_0) \mathds{1}_{\{x^*(\tilde v) +\tilde u \geq 0\}}\mathds{1}_{\{0\leq \tilde v-p_0 \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \big] \\ &&+2{\mathbb{E}}\big[ (\tilde v -p_0) \mathds{1}_{\{x^*(\tilde v) +\tilde u \geq 0\}}\mathds{1}_{\{\tilde v-p_0 > \lambda+\theta +\sqrt{\lambda+\theta}\}} \big] \\ &=& \int_{0}^{\infty} \int_{-1}^{1}z \mathds{1}_{\{u \geq -\frac{z}{2(\lambda+\theta)}\}}\mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) du dz \\ &&+ \int_{0}^{\infty}\int_{-1}^{1} z \mathds{1}_{\{u \geq -\frac{z-\theta}{2\lambda}\}}\mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) du dz \\ &=&\int_{0}^{\infty} z\Big(1-(-1) \vee \Big(-\frac{z}{2(\lambda+\theta)}\Big)\Big)_{+} \mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+\int_{0}^{\infty} z \Big(1-(-1)\vee \Big(- \frac{z-\theta}{2\lambda}\Big)\Big)_{+} \mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &=&\int_{0}^{\infty} z\Big(1+ 1 \wedge \Big(\frac{z}{2(\lambda+\theta)}\Big) \Big)_{+} \mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+\int_{0}^{\infty} z \Big(1+1\wedge \Big( \frac{z-\theta}{2\lambda}\Big)\Big)_{+} \mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz, \end{eqnarray*} where we have used the identity $(-x)\vee (-y) = -(x\wedge y)$ in the last equality. On the other hand, \begin{eqnarray*} I_2 &=& 2{\mathbb{E}}\big[(\tilde v -p_0) \mathds{1}_{\{x^*(\tilde v) +\tilde u < 0\}}\mathds{1}_{\{0\leq \tilde v-p_0 \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \big] \\ &&+2{\mathbb{E}}\big[(\tilde v -p_0) \mathds{1}_{\{x^*(\tilde v) +\tilde u < 0\}}\mathds{1}_{\{\tilde v-p_0 > \lambda+\theta +\sqrt{\lambda+\theta}\}} \big] \\ &=& \int_{0}^{\infty} \int_{-1}^{1}z \mathds{1}_{\{u \leq -\frac{z}{2(\lambda+\theta)}\}}\mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) du dz \\ &&+ \int_{0}^{\infty}\int_{-1}^{1} z \mathds{1}_{\{u \leq -\frac{z-\theta}{2\lambda}\}}\mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) du dz \\ &=& \int_{0}^{\infty} z\Big(1 \wedge \Big(-\frac{z}{2(\lambda+\theta)}\Big) +1\Big)_{+} \mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \int_{0}^{\infty} z \Big(1 \wedge \Big(-\frac{z-\theta}{2\lambda}\Big) +1\Big)_{+} \mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz. \end{eqnarray*} By a change of variable it follows that \begin{eqnarray*} &&\kappa_{x^{*}} \\ && = 2\int_{0}^{\infty} z\Big(1+ 1 \wedge \Big(\frac{z}{2(\lambda+\theta)}\Big) \Big)_{+} \mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz\\ &&+\int_{0}^{\infty} z \Big[ \Big(1 \wedge \Big(\frac{z-\theta}{2\lambda}\Big) +1\Big)_{+}+\Big(1 \wedge \Big(\frac{z+\theta}{2\lambda}\Big) +1\Big)_{+}\Big] \mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz.
\end{eqnarray*} \end{proof} \begin{lemma} Under the assumptions of Proposition \ref{prop-unif} we have \begin{eqnarray*} \ell_{1,x^*} &=& 2\overline F_{2}\Big(\lambda,\theta;\Big[\frac{z}{2(\lambda+\theta)}\Big],\Big[\frac{z}{2(\lambda+\theta)}\Big] \Big) + \overline F_{3}\Big(\lambda,\theta; \Big[\frac{z}{2(\lambda+\theta)}\Big]\Big)\\ &&+ F_{2}\Big(\lambda,\theta;\Big[\frac{z-\theta }{2\lambda}\Big],\Big[\frac{z-\theta }{2\lambda}\Big] \Big ) + F_{2}\Big(\lambda,\theta;\Big[\frac{z+\theta }{2\lambda}\Big],\Big[\frac{z+\theta }{2\lambda}\Big] \Big) \\ &&+\frac{1}{2} \left(F_{3}\Big(\lambda,\theta; \Big[\frac{z-\theta}{2\lambda}\Big]\Big)+F_{3}\Big(\lambda,\theta;\Big[- \frac{z-\theta}{2\lambda}\Big]\Big) \right). \end{eqnarray*} \end{lemma} \begin{proof} \begin{eqnarray*} \ell_{1,x^{*}} &=& {\mathbb{E}}\big[|x^*(\tilde v) +\tilde u|\big] \\ &=&{\mathbb{E}}\big[(x^*(\tilde v) +\tilde u) \mathds{1}_{\{x^*(\tilde v) +\tilde u \geq 0\}}\big] - {\mathbb{E}}\big[ (x^*(\tilde v) +\tilde u)\mathds{1}_{\{x^*(\tilde v) +\tilde u <0\}}\big] \\ &=&:I_1-I_2. \end{eqnarray*} Using Proposition \ref{prop-unif} and the symmetry of $x^*(v)$ around $p_0$ we have \begin{eqnarray*} I_1 &=& 2{\mathbb{E}}\big[ (x^*(\tilde v) +\tilde u) \mathds{1}_{\{x^*(\tilde v) +\tilde u \geq 0\}}\mathds{1}_{\{0\leq \tilde v-p_0 \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \big] \\ &&+2{\mathbb{E}}\big[ (x^*(\tilde v) +\tilde u) \mathds{1}_{\{x^*(\tilde v) +\tilde u \geq 0\}}\mathds{1}_{\{\tilde v-p_0 > \lambda+\theta +\sqrt{\lambda+\theta}\}} \big] \\ &=&\int_{0}^{\infty} \int_{-1}^{1}\Big( \frac{z}{2(\lambda+\theta)} +u\Big)\mathds{1}_{\{u \geq -\frac{z}{2(\lambda+\theta)}\}}\mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) du dz \\ &&+\int_{0}^{\infty}\int_{-1}^{1} \Big( \frac{z-\theta}{2\lambda}+ u\Big)\mathds{1}_{\{u \geq -\frac{z-\theta}{2\lambda}\}}\mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) du dz \\ &=& \int_{0}^{\infty} \frac{z}{2(\lambda+\theta)}\Big(1-(-1) \vee \Big(-\frac{z}{2(\lambda+\theta)}\Big)\Big)_+\mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+\frac{1}{2} \int_{0}^{\infty}\Big(1-\Big((-1) \vee \Big(-\frac{z}{2(\lambda+\theta)}\Big)\Big)^2\Big)\mathds{1}_{\{-\frac{z}{2(\lambda+\theta)}<1\}} \mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \int_{0}^{\infty} \frac{z-\theta }{2\lambda}\Big(1-(-1) \vee \Big(-\frac{z-\theta}{2\lambda}\Big)\Big)_+\mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \frac{1}{2}\int_{0}^{\infty}\Big(1-\Big((-1) \vee \Big(-\frac{z-\theta}{2\lambda}\Big)\Big)^2\Big)\mathds{1}_{\{-\frac{z-\theta}{2\lambda}<1\}} \mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz.
\end{eqnarray*} Since $(-x)\vee (-y) = -(x\wedge y)$ it follows that \begin{eqnarray*} I_1 &=& \int_{0}^{\infty} \frac{z}{2(\lambda+\theta)}\Big(1 \wedge \Big(\frac{z}{2(\lambda+\theta)}\Big)+1\Big)_+\mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \frac{1}{2}\int_{0}^{\infty}\Big(1-\Big(1 \wedge \Big(\frac{z}{2(\lambda+\theta)}\Big)\Big)^2\Big)\mathds{1}_{\{-\frac{z}{2(\lambda+\theta)}<1\}} \mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+\int_{0}^{\infty} \frac{z-\theta }{2\lambda}\Big(1+1 \wedge \Big(\frac{z-\theta}{2\lambda}\Big)\Big)_+\mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \frac{1}{2}\int_{0}^{\infty}\Big(1-\Big(1 \wedge \Big(\frac{z-\theta}{2\lambda}\Big)\Big)^2\Big)\mathds{1}_{\{-\frac{z-\theta}{2\lambda}<1\}} \mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz. \end{eqnarray*} On the other hand, \begin{eqnarray*} I_2 &=& 2{\mathbb{E}}\big[ (x^*(\tilde v) +\tilde u) \mathds{1}_{\{x^*(\tilde v) +\tilde u < 0\}}\mathds{1}_{\{0\leq \tilde v-p_0 \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \big] \\ &&+2{\mathbb{E}}\big[ (x^*(\tilde v) +\tilde u) \mathds{1}_{\{x^*(\tilde v) +\tilde u < 0\}}\mathds{1}_{\{\tilde v-p_0 > \lambda+\theta +\sqrt{\lambda+\theta}\}} \big] \\ &=& \int_{0}^{\infty} \int_{-1}^{1}\Big( \frac{z}{2(\lambda+\theta)} +u\Big)\mathds{1}_{\{u < -\frac{z}{2(\lambda+\theta)}\}}\mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) du dz \\ &&+ \int_{0}^{\infty}\int_{-1}^{1} \Big( \frac{z-\theta}{2\lambda}+ u\Big)\mathds{1}_{\{u < -\frac{z-\theta}{2\lambda}\}}\mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) du dz \\ &=& \int_{0}^{\infty} \frac{z}{2(\lambda+\theta)}\Big(1 \wedge \Big(-\frac{z}{2(\lambda+\theta)}\Big)+1\Big)_+\mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \frac{1}{2}\int_{0}^{\infty}\Big(\Big(1 \wedge \Big(-\frac{z}{2(\lambda+\theta)}\Big)\Big)^2-1\Big)\mathds{1}_{\{-\frac{z}{2(\lambda+\theta)}>-1\}} \mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz\\ &&+\int_{0}^{\infty} \frac{z-\theta }{2\lambda}\Big(1 \wedge \Big(-\frac{z-\theta}{2\lambda}\Big)+1\Big)_+\mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \frac{1}{2}\int_{0}^{\infty}\Big(\Big(1 \wedge \Big(-\frac{z-\theta}{2\lambda}\Big)\Big)^2-1\Big)\mathds{1}_{\{-\frac{z-\theta}{2\lambda}>-1\}} \mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz. \end{eqnarray*} By a change of variable we get, \begin{eqnarray*} I_2 &=&- \int_{0}^{\infty} \frac{z}{2(\lambda+\theta)}\Big(1 \wedge \Big(\frac{z}{2(\lambda+\theta)}\Big)+1\Big)_+\mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+\frac{1}{2}\int_{0}^{\infty}\Big(\Big(1 \wedge \Big(\frac{z}{2(\lambda+\theta)}\Big)\Big)^2-1\Big)\mathds{1}_{\{\frac{z}{2(\lambda+\theta)}>-1\}} \mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \int_{0}^{\infty} \frac{z-\theta }{2\lambda}\Big(1 \wedge \Big(-\frac{z-\theta}{2\lambda}\Big)+1\Big)_+\mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \frac{1}{2}\int_{0}^{\infty}\Big(\Big(1 \wedge \Big(-\frac{z-\theta}{2\lambda}\Big)\Big)^2-1\Big)\mathds{1}_{\{-\frac{z-\theta}{2\lambda}>-1\}} \mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz.
\end{eqnarray*} It follows that \begin{eqnarray*} &&\ell_{1,x^{*}} \\ &&= I_1 -I_2 \\ &&=2\int_{0}^{\infty} \frac{z}{2(\lambda+\theta)}\Big(1 \wedge \Big(\frac{z}{2(\lambda+\theta)}\Big)+1\Big)_+\mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \int_{0}^{\infty}\Big(1-\Big(1 \wedge \Big(\frac{z}{2(\lambda+\theta)}\Big)\Big)^2\Big)\mathds{1}_{\{\frac{z}{2(\lambda+\theta)}>-1\}} \mathds{1}_{\{z \leq \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz\\ &&+ \int_{0}^{\infty} \Big[\frac{z-\theta }{2\lambda}\Big(1+1 \wedge \Big(\frac{z-\theta}{2\lambda}\Big)\Big)_+ + \frac{z+\theta }{2\lambda}\Big(1 \wedge \Big(\frac{z+\theta}{2\lambda}\Big)+1\Big)_+\Big]\\ && \qquad \times \mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \frac{1}{2}\int_{0}^{\infty}\Big(1-\Big(1 \wedge \Big(\frac{z-\theta}{2\lambda}\Big)\Big)^2\Big)\mathds{1}_{\{\frac{z-\theta}{2\lambda}>-1\}} \mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &&+ \frac{1}{2}\int_{0}^{\infty}\Big(1-\Big(1 \wedge \Big(-\frac{z-\theta}{2\lambda}\Big)\Big)^2\Big)\mathds{1}_{\{-\frac{z-\theta}{2\lambda}>-1\}} \mathds{1}_{\{z > \lambda+\theta +\sqrt{\lambda+\theta}\}} \phi(z) dz \\ &=& 2 \overline F_{2}\Big(\lambda,\theta;\Big[\frac{z}{2(\lambda+\theta)}\Big],\Big[\frac{z}{2(\lambda+\theta)}\Big] \Big) +\overline F_{3}\Big(\lambda,\theta; \Big[\frac{z}{2(\lambda+\theta)}\Big]\Big)\\ &&+ \Big[F_{2}\Big(\lambda,\theta;\Big[\frac{z-\theta }{2\lambda}\Big],\Big[\frac{z-\theta }{2\lambda}\Big] \Big ) + F_{2}\Big(\lambda,\theta;\Big[\frac{z+\theta }{2\lambda}\Big],\Big[\frac{z+\theta }{2\lambda}\Big] \Big) \Big] \\ &&+\frac{1}{2} \Big[F_{3}\Big(\lambda,\theta; \Big[\frac{z-\theta}{2\lambda}\Big]\Big)+F_{3}\Big(\lambda,\theta;\Big[- \frac{z-\theta}{2\lambda}\Big]\Big) \Big]. \end{eqnarray*} \end{proof} \bigskip \bibliographystyle{plain}
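As a quick numerical illustration of Proposition \ref{prop:alg-kyle-0}, the recursion (\ref{lam-n}) can be iterated directly. The following minimal Python sketch is an informal aside rather than part of the formal argument; the parameter values $\sigma_{\tilde v}=2$ and $\sigma_{\tilde u}=1$ (for which $\lambda^*=1$) are purely illustrative.
\begin{verbatim}
# Fixed-point iteration of the price-impact recursion above;
# sigma_v = 2, sigma_u = 1 are illustrative values only,
# for which lambda* = sigma_v / (2 sigma_u) = 1.
sigma_v, sigma_u = 2.0, 1.0
lam = 5.0                                   # any lambda_0 > 0
for n in range(30):
    lam = 2.0 * lam * sigma_v**2 / (sigma_v**2 + 4.0 * lam**2 * sigma_u**2)
print(lam)                                  # -> 1.0 = lambda*
\end{verbatim}
Starting from any $\lambda_0>0$, the iterates settle at $\lambda^*$ after a handful of steps, in line with the convergence proved above.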
\section{Introduction} Studying the large-scale environments of Active Galactic Nuclei (AGNs) is important for understanding the growth of supermassive black holes (SMBHs) and how they coevolve with their host galaxies \citep[e.g.,][]{Kormendy:2013}. Clustering is a powerful tool in statistically determining the typical dark matter halo in which AGN reside, as well as how they occupy their halos. Coupled with a sensible model of halo mass assembly, this can constrain fueling mechanisms (i.e., mergers versus secular evolution) and feedback scenarios, provided selection effects are properly taken into account. Previous studies of AGN clustering using soft X-ray and optically selected samples have found somewhat discrepant results for the typical host halo mass of AGN. Luminous quasars drawn from wide-area optical surveys appear to lie in smaller halos ($M_{\rm{h}}\sim 10^{12.5}$ $M_{\odot}h^{-1}$, where $h=H_0/100$ km s$^{-1}$ Mpc$^{-1}$) than moderate-luminosity X-ray AGN found in deeper surveys ($M_{\rm{h}}\sim 10^{13}$ $M_{\odot}h^{-1}$) across a wide range of redshift \citep[e.g.,][]{Croom:2005,Gilli:2005,Gilli:2009,Krumpe:2012,Ross:2009,Shen:2009,Allevato:2011,Allevato:2014}. Additionally, it is not clear whether unobscured and obscured AGN (either defined by their column density or optical classification) have the same clustering statistics, in accordance with the unified model, or if they tend to reside in different environments due to different accretion modes or due to being two stages of one evolutionary track, as claimed by recent (but discordant) studies \citep{Allevato:2014,Villarroel:2014,Mendez:2016,DiPompeo:2017}. However, they all probe different volumes, host galaxy properties, and luminosity ranges, making comparison between studies difficult (see, e.g., \citealt{Mendez:2016}). Additionally, the picture may be confused because a large number of obscured AGN have been missed in optical and soft X-ray surveys due to dust and gas obscuration from the torus and/or host galaxy. Population synthesis models of the cosmic X-ray background indicate that a significant fraction of SMBH accretion occurs in obscured environments \citep{Treister:2004,Treister:2012}, meaning obscured AGN are a vital population to consider in a full model of halo, galaxy, and SMBH (co-)evolution. Hard X-ray selection ($>10$ keV) can remedy this obscuration-related bias, as the majority of energetic photons are able to pass through large columns of gas and dust, up to Compton-thick levels (\nh$\approx10^{24}$\nhunit; \citealt{Ricci:2015}). In addition, hard X-ray selection is extremely efficient, as there are very few contaminating sources, even from the host galaxy. The Burst Alert Telescope (BAT; \citealt{Barthelmy:2005,Krimm:2013}) instrument on the {\it Swift} satellite \citep{Gehrels:2004} has surveyed the entire sky to unprecedented sensitivity in the 14-195 keV band \citep{Baumgartner:2013,Oh:2018}. Local AGN detected by BAT include the obscured and/or low-luminosity AGN missed by optical detection, as well as the rare high-luminosity AGN only found in wide-area surveys, so that BAT AGN can address some of the aforementioned issues with previous AGN clustering studies. \cite{Cappelluti:2010} were the first to measure the clustering of \emph{Swift}/BAT AGN using a sample of 199 AGN in the 36-month catalog \citep{Ajello:2009}, but had uncertain results due to the small sample size.
While they did find a dependence on X-ray luminosity, it was most likely a selection effect due to the strong redshift dependence inherent in any small flux-limited sample. In this study, we have more than doubled the sample by using the 70-month {\it Swift}/BAT AGN catalog \citep{Baumgartner:2013}, along with spectroscopic information from the {\it Swift}/BAT Spectroscopic Survey (BASS; \citealp{Koss:2017}), to constrain the AGN halo occupation distribution (HOD) for 499 BASS AGN in the redshift range $0.01<z<0.1$. To improve the statistics, we cross-correlate the AGN with local 2MASS galaxies that trace the underlying dark matter distribution. Additionally, we investigate the environmental dependence of AGN parameters like obscuring column density and black hole mass, while matching distributions in X-ray luminosity, redshift, stellar mass, and Eddington ratio. \cite{Krumpe:2017} recently published a similar, independent clustering analysis of \emph{Swift}/BAT AGN, in which they analytically fit the cross-correlation function with 2MASS galaxies. They also divided their sample by optical classification (Type 1 or Type 2) from \cite{Baumgartner:2013}, as well as by observed X-ray luminosity. However, detailed X-ray spectral fitting \citep{Ricci:2017B} allows us to estimate the \emph{intrinsic} absorption-corrected luminosity for each AGN, which differs strongly from the observed value at $N_{\rm H}>10^{23.5}$ cm$^{-2}$. The BASS DR1 release \citep{Koss:2017} also includes 46 new redshifts for a spectroscopic completeness of over 95\%, and provides column densities for each of the 836 AGN, which are the key measurements for determining whether an AGN is obscured. Our study also differs from \cite{Krumpe:2017} in how we fit models; namely, we populate dark matter halos statistically from $N$-body simulations (using the \texttt{Halotools} software package; \citealt{Hearin:2017}). Because this allows a straightforward correction for catalog incompleteness, we use an extended redshift range ($z<0.1$ rather than $ z<0.037$) for better number statistics (499 AGN compared to 274 in \citeauthor{Krumpe:2017}), and we do not have to rely on assumptions from analytic models. The simulation-based approach also allows us to look beyond halo mass to other halo parameters like halo concentration, in order to investigate effects such as assembly bias. In this paper we challenge the idea that AGN clustering is driven only by the typical mass of their dark matter halos. This paper is organized as follows: we describe the data selection of the BASS AGN and 2MASS galaxies in Section 2; our method for measuring the correlation function and fitting it with a halo model is described in Section 3; Section 4 presents the results for the full AGN sample, as well as the dependence on obscuration and black hole mass; we discuss our findings in Section 5, and summarize them in Section 6. We assume flat $\Lambda$CDM cosmology ($\Omega_{m}=0.3$, $\Omega_{\Lambda}=0.7$, $H_{0}=100~h$ km s$^{-1}$ Mpc$^{-1}$, $h=0.7$), and errors quoted are $1 \sigma$ unless otherwise stated. \section{Data} \subsection{AGN Sample} BASS consists of 836 local AGN from the {\it Swift}/BAT 70-month catalog \citep{Koss:2017,Ricci:2017B}, selected by their hard X-ray emission ($14-195$ keV), which has the benefit of being unbiased with respect to obscuration. BASS comprises the largest, most unbiased sample of local AGN to date, and there is an abundance of complementary multiwavelength ancillary data\footnote{www.bass-survey.com}.
Each AGN has soft X-ray data from {\it Chandra}, {\it XMM-Newton}, {\it Suzaku}, or {\it Swift}/XRT, so that the full X-ray spectra have been modeled ($0.3-150$ keV; \citealt{Ricci:2017B}). These give the obscuring column ($N_{\rm H}$) and intrinsic X-ray flux for each AGN, from which bolometric luminosities have been estimated using a fixed hard X-ray bolometric correction to the $14-195$ keV luminosities ($L_{\rm{bol}}=8~L_{14-195~keV}$; \citealt{Koss:2017}). Optical spectroscopy has been obtained for 641 unbeamed AGN, providing spectroscopic redshifts that allow for 3D clustering analyses. We assume that the AGN without spectra ($5\%$) do not systematically affect the clustering of the overall population, as we verified negligible differences between the flux distributions and angular correlation functions with and without their inclusion. Black hole masses have been estimated for 429 AGN, of which 54\% are unobscured and 46\% are obscured. Black hole masses for unobscured AGN were estimated from the FWHM of the broad H$\beta$ and/or H$\alpha$ lines \citep{Kaspi:2000,Greene:2005,Bentz:2009,Trakhtenbrot:2012,MR:2016}; these have uncertainties of $0.3-0.4$ dex \cite[e.g.,][]{Shen:2013,Peterson:2014}. For obscured AGN without broad lines, we relied on the $M_{\rm BH}-\sigma_{*}$ relation \citep{Kormendy:2013}, where $\sigma_{*}$ was measured by fitting the spectra with host galaxy stellar templates. These black hole mass estimates have slightly larger uncertainties of $\sim0.5$ dex \citep{Xiao:2011}. Eddington ratios ($\lambda_{\rm{Edd}}\equiv L_{\rm{bol}}/L_{\rm{Edd}}$) were derived from the bolometric luminosities and black hole masses via $\lambda_{\rm{Edd}} = L_{\rm{bol}}/\big(1.3\times 10^{38}~{\rm erg~s^{-1}}\,[M_{\rm{BH}}/M_{\odot}]\big)$. The uncertainties on $\lambda_{\rm{Edd}}$ are driven by the large systematic uncertainties on both the $M_{\rm{BH}}$ determinations (up to $\sim$0.5 dex, see above) and the bolometric corrections. The latter may amount to roughly 0.2-0.3 dex, and perhaps involve more complicated uncertainties related to possible trends with luminosity and/or $\lambda_{\rm{Edd}}$ itself \citep{Marconi:2004,Vasudevan:2007,Jin:2012}. More details of the optical spectral analysis can be found in \cite{Koss:2017}. Stellar masses of the BAT AGN host galaxies were derived by separating the AGN emission from the stellar emission via SED decomposition. We combined near-IR data from 2MASS, which is more sensitive to stellar emission, with mid-IR data from the AllWISE catalog \citep{Wright:2010}, which is more sensitive to AGN emission. Where available, isophotal near-IR magnitudes from the 2MASS XSC were used to capture as much of the stellar emission as possible, and the corresponding AllWISE elliptical magnitudes were used. We then converted the magnitudes to the AB system, and corrected for Galactic reddening using $E(B-V)$ estimates from \citet{Schlafly:2011}. We used the low-resolution SED templates from \citet{Assef:2010} to decompose the BAT AGN host galaxies into a linear combination of an AGN template plus early-type (E), continuously star-forming (Sbc), and starburst (Im) galaxy templates. To convert the luminosities of the galaxy components to masses, we obtained their stellar mass coefficients by fitting them with the \citet{Blanton:2007} stellar population synthesis templates. The templates were convolved with the 2MASS/WISE system responses, and fit to the data via weighted non-negative least squares, where for the weights we use the inverse variances of the data, as sketched below.
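As an illustration of this fitting step, a weighted non-negative least-squares fit can be carried out with a standard routine; the following is a minimal Python sketch in which the function and array names are hypothetical:
\begin{verbatim}
# Minimal sketch of the weighted non-negative least-squares template
# fit (illustrative only; names and shapes are hypothetical).
import numpy as np
from scipy.optimize import nnls

def fit_templates(flux, sigma, templates):
    # flux, sigma:  (n_bands,) observed photometry and 1-sigma errors
    # templates:    (n_bands, n_templates) template fluxes (AGN, E,
    #               Sbc, Im) convolved with the 2MASS/WISE responses
    w = 1.0 / sigma                 # scaling rows by 1/sigma makes
    A = templates * w[:, None]      # ||Ax - b||^2 equal to chi^2,
    b = flux * w                    # i.e. inverse-variance weighting
    coeffs, rnorm = nnls(A, b)      # non-negative template amplitudes
    return coeffs, rnorm**2         # amplitudes and chi^2 of the fit
\end{verbatim}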
Finally, we include AGN reddening by performing the SED decompositions along a logarithmically spaced grid of $E(B-V)$ values, choosing the value that yields the lowest $\chi^2$. To estimate the random errors on the stellar masses, we re-fit each source several times, each time removing one of the seven photometric data points (jackknife resampling) and perturbing the remaining magnitudes by their uncertainties. This produced random errors of about 0.06 dex. There is also a component of scatter introduced by using the \citet{Assef:2010} templates, which have three stellar components, instead of the five original \citet{Blanton:2007} stellar population templates. By fitting the NASA-Sloan Atlas photometry with the \citet{Assef:2010} templates, we estimate that there is an additional scatter term of about 0.08 dex, which we add in quadrature to the random error term provided above. Finally, the absolute stellar mass uncertainty for masses estimated using near-IR photometry is approximately a factor of two \citep{BelldeJong:2001}. We therefore estimate that our stellar mass uncertainties are about 0.32 dex, on average. We selected AGN in the redshift range $0.01<z<0.1$ with intrinsic (i.e., absorption-corrected) $L_{2-10~keV}>10^{42.5}$ $\rm erg \,s^{-1}$, to remove any bias from the peculiar velocities of low-redshift objects, as well as to improve the AGN completeness over this luminosity range. The upper redshift limit was imposed to match the maximum redshift of the galaxy sample that we cross-correlated with the AGN. After these cuts, the final number of AGN in our sample is 499, and their distribution of redshift versus X-ray luminosity is shown in Figure~\ref{fig:Lz}. The luminosities are comparable to those of well-studied, higher redshift AGN from pencil-beam X-ray surveys (e.g., COSMOS; \citealt{Civano:2016,Marchesi:2016}). In addition to this full sample, we made subsamples in two bins of $N_{\rm{H}}$ (threshold $=10^{22}$ cm$^{-2}$) and two bins of black hole mass (threshold $=10^{8}$ $M_{\odot}$). Because the statistics are insufficient to use volume-limited samples, different luminosity subsamples automatically probe different volumes and host galaxy stellar masses. We therefore do not examine the clustering dependence on X-ray luminosity. Additionally, to avoid selection effects between the two bins of $N_{\rm{H}}$ and $M_{\rm{BH}}$, we matched the subsamples in their distributions of redshift and X-ray luminosity. Specifically, we defined five bins of $z$; in each bin, we randomly selected AGN from the subsample with the larger number of sources in that bin, so as to match the number of sources ($N$) in the other subsample. We then repeated the process for luminosity, with 10 bins of $\log L_{2-10~keV}$. The total numbers were 186 AGN in each bin of $N_{\rm{H}}$ and 102 AGN in each bin of $M_{\rm BH}$. Each random selection provided consistent results, as did using the derived bolometric luminosities rather than $L_X$. The characteristics of these subsamples are summarized in Table \ref{table:dtable}. \begin{figure} \centering \includegraphics[width=.5\textwidth]{f1.pdf} \caption{Log of the intrinsic $2-10$ keV luminosity versus redshift for the 499 BASS AGN at redshift $0.01<z<0.1$. The sample spans two decades in luminosity, but as in all flux-limited samples, there is a strong redshift-luminosity correlation.
} \label{fig:Lz} \end{figure} \begin{table} \centering \begin{tabular}{c c c c c c} \hline \hline AGN Sample & Threshold & N & $\tilde{M}_{\rm{bh}}$ & $ \tilde{L}_{2-10~keV}$ & $\langle z\rangle$ \\ \hline $L$-limited (Full) & $L_{X}>10^{42.5}$ erg s$^{-1}$ & 499 & 8.0 & 43.4 & 0.04 \\ $\lambda_{\rm{Edd}}$-limited & $ \lambda_{\rm{Edd}}>0.01$ & 245 & 7.9 & 43.5 & 0.04 \\ Obscured & $N_{\rm{H}}\geq 10^{22}$ cm$^{-2}$& 186 & 8.2 & 43.4 & 0.04 \\ Unobscured & $ N_{\rm{H}}<10^{22}$ cm$^{-2}$& 186 & 7.7 & 43.4 & 0.04 \\ Small $M_{\rm{bh}}$ & $M_{\rm{bh}}\leq 10^{8}~M_{\odot}$ & 102 & $7.6$ & 43.4 & 0.04 \\ Large $M_{\rm{bh}}$ & $M_{\rm{bh}}>10^{8}~M_{\odot}$& 102 & $8.4$& 43.4 & 0.04 \\ & & \\ \hline \end{tabular} \caption{AGN subsamples and their characteristics, including the number of AGN, the median black hole mass, the median 2-10 keV luminosity (after correcting for absorption), and the average redshift of each. Black hole mass is in log units of $M_{\odot}$, and luminosity is in log units of erg s$^{-1}$.} \label{table:dtable} \end{table} \subsection{Galaxy Sample} Using a dense sample of galaxies as tracers of the underlying dark matter distribution greatly boosts AGN clustering statistics \citep[e.g.,][]{Coil:2009}. We therefore cross-correlated our AGN sample with galaxies from the 2MASS Redshift Survey (\citealt{Huchra:2012}), as the redshift range is close to that of the AGN sample ($z_{\rm peak} \sim 0.03$). Selected based on their $K$-band magnitude, $K_{\rm s}\leq 11.75$, the galaxies are spectroscopically complete and cover $91\%$ of the sky (the Galactic plane is excluded; $|b|>8^{\circ}$). We estimated stellar masses of the 2MASS galaxies by employing a universal mass-to-light ratio ($M/L$) between $K$-band luminosity and stellar mass, as $K$-band $M/L$ does not significantly vary with mass at $z=0$ \citep{Lacey:2008}, nor is it sensitive to dust content. We use an absolute solar $K_{\rm{S}}$-band magnitude of 3.29 \citep{Blanton:2007} to obtain the luminosities and fit our measured autocorrelation function for $M_{*}/L_{K_{S}}$, as described in Section 3.3. The random error associated with using a single $M/L$ ratio is about 0.3 dex \citep{BelldeJong:2001}. However, we only use the resulting distribution of stellar mass in our model, and we verified that convolving the distribution with this error does not change our results. We used the full flux-limited sample for maximal statistics, and corrected for incompleteness as a function of stellar mass when modeling the galaxies via the process described in Section 3. We excluded 2MASS galaxies that are also in the BASS AGN catalog (to within $3^{\prime\prime}$; 361 sources) so that the cross-correlation measurement was between two independent catalogs. In total, we used 38,567 galaxies in the redshift range $0.01<z<0.1$. \section{Method} \subsection{Correlation Function Measurement} The quantitative measure of clustering is the two-point correlation function, which quantifies the excess probability over a random distribution that a pair of objects is separated by a given distance ($\vec{r}$). We used the Landy$-$Szalay estimator \citep{Landy:1993}: \begin{equation} \xi(\vec{r}) = \frac{D_{1}D_{2}(\vec{r})-D_{1}R_{2}(\vec{r}) - R_{1}D_{2}(\vec{r}) + R_{1}R_{2}(\vec{r})}{R_{1}R_{2}(\vec{r})} , \end{equation} where $D_{1}D_{2}$, $D_{1}R_{2}$, $R_{1}D_{2}$, and $R_{1}R_{2}$ correspond to the data$-$data, data$-$random, random$-$data, and random$-$random pair counts, respectively.
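For concreteness, the estimator amounts to the following elementwise operation on binned pair counts; this is a minimal Python sketch with array names of our own, and it assumes each input has been normalized by its total number of (cross-)pairs:
\begin{verbatim}
import numpy as np

def landy_szalay(D1D2, D1R2, R1D2, R1R2):
    # Landy-Szalay estimator per separation bin; each input is an
    # array of pair counts, normalized by its total number of pairs
    # (e.g. D1D2 divided by N_D1 * N_D2).
    return (D1D2 - D1R2 - R1D2 + R1R2) / R1R2
\end{verbatim}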
For an autocorrelation (ACF) measurement, the subscripts correspond to the same dataset, while they represent two different datasets for a cross-correlation. The random catalogs for each dataset have the same selection function as the corresponding survey. Rather than using the AGN ACF, which has large uncertainties because of the rather small AGN sample, we cross-correlated the AGN with the larger galaxy sample to improve statistics. We created a random AGN sample with the same selection as the BASS survey by using the \emph{Swift}/BAT sensitivity map. We first randomized the position of each random AGN on the sky, and then assigned it a flux drawn from the flux distribution of the data. If the flux was greater than the sensitivity at that position, we kept that specific randomly generated AGN; otherwise we omitted it. We then assigned it a redshift drawn from the redshift distribution of the data, smoothed with a Gaussian kernel with $\sigma_{z}=0.2$. We repeated this process for each AGN subsample (e.g., each bin in black hole mass or in absorbing column density). Due to the low number density of the data, we made each random AGN sample $\sim 100$ times larger than the corresponding BASS sample. For the galaxy random catalog, we assumed that the sensitivity is uniform across the sky and randomized the angular positions, excluding the Galactic plane ($|b| < 8^{\circ}$). We assigned each random galaxy a redshift drawn from the distribution in the real data, also smoothed with a Gaussian kernel with $\sigma_{z}=0.2$. The redshift distributions of the galaxies, AGN, and random samples are shown in Figure \ref{fig:zdist}. The number of random galaxies is 20 times the number of 2MASS Redshift Survey galaxies. \begin{figure} \centering \includegraphics[width=.5\textwidth]{f2.pdf} \caption{Normalized redshift distributions of the AGN and galaxy samples, along with their respective random catalogs. The redshift distribution of each population is well matched by its randomly positioned counterparts.} \label{fig:zdist} \end{figure} We measured $\xi$ in bins of $r_{\rm{p}}$ and $\pi$ (distances perpendicular and parallel to the line of sight, respectively) using the pair counter from the publicly available software \texttt{CorrFunc} \citep{Sinha:2017}, which counts the number of pairs of galaxies in a catalog separated by $r_{\rm{p}}$ and $\pi$. We then integrated along the line of sight to eliminate redshift-space distortions, obtaining the projected correlation function: \begin{equation} w_{p} = 2 \int^{\pi_{\rm{max}}}_{0} \xi(r_{p},\pi) d\pi . \end{equation} The value of $\pi_{\rm{max}}$ was chosen such that the amplitude of the projected correlation function converges, as it only becomes noisier for larger values. We found this to be $60$ Mpc $h^{-1}$ for our sample, which is a commonly used value for $\pi_{\rm{max}}$. We calculated the covariance matrix via the jackknife resampling method: \begin{equation} \begin{split} C_{i,j} = \frac{M}{M-1} \sum_{k=1}^{M} \Big[w_{k}(r_{p,i}) - \langle w(r_{p,i})\rangle\Big]\\ \times\Big[w_{k}(r_{p,j}) - \langle w(r_{p,j})\rangle \Big] ~~, \end{split} \end{equation} \noindent where we split the sample into $M=25$ sections of the sky, and computed the cross-correlation function when excluding each section ($w_{k}$). We chose $M=25$ so that the patches were large enough to probe the largest $r_{\rm{p}}$ scale at our minimum redshift, yet numerous enough that the jackknife estimates are approximately normally distributed.
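Schematically, the projection and the jackknife covariance above can be computed as follows (a minimal Python sketch; the grid layout and array names are assumptions):
\begin{verbatim}
import numpy as np

def project_xi(xi_grid, dpi):
    # w_p(r_p) = 2 * sum_j xi(r_p, pi_j) * dpi, for xi measured on an
    # (n_rp, n_pi) grid with linear pi bins of width dpi (Mpc/h)
    return 2.0 * xi_grid.sum(axis=1) * dpi

def jackknife_covariance(w_jk):
    # w_jk: (M, n_rp) array; row k is w_p with sky section k excluded.
    # Prefactor M/(M-1) as written in the covariance formula above.
    M = w_jk.shape[0]
    delta = w_jk - w_jk.mean(axis=0)
    return M / (M - 1.0) * delta.T @ delta
\end{verbatim}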
We quote the errors on our measurement as the square roots of the diagonal elements: $\sigma_{i} = \sqrt{C_{i,i}}$. \subsection{Model Formulation} In the hierarchical model of structure formation, galaxies reside in dark matter halos, which have gravitationally collapsed at the peaks of the underlying dark matter distribution. In this context, clustering statistics of galaxies depend only on the cosmology (how dark matter halos cluster; the two-halo term, dominant on scales $\gtrsim$1 Mpc h$^{-1}$) and how the galaxies occupy their dark matter halos (one-halo term; $\lesssim$1 Mpc h$^{-1}$), which depends on their formation and evolution. We consider two kinds of models to describe the latter: a HOD model and a subhalo model, described in the following sections. In both cases, we used \texttt{Halotools} \citep{Hearin:2017} to compute the model cross-correlation functions. This software populates dark matter halos from an $N$-body simulation according to a given model and computes the two-point statistics for the resulting galaxy mock catalog. Because we cross-correlated AGN with galaxies, we first created a mock sample with the same clustering statistics as the 2MASS galaxies (described in Section 3.3). We then used this simulated galaxy sample to cross-correlate with the AGN mock derived from the model. The average and median halo masses of the AGN sample were calculated empirically from the AGN mock. We did the analysis with two $z=0$ halo catalogs (based on the ROCKSTAR halo-finder; \citealt{Behroozi:2013}) from different simulations, both included in \texttt{Halotools}: first, the Bolshoi--Planck simulation \citep{Riebe:2011}, which has a resolution of $1.35\times 10^8 M_{\odot}h^{-1}$ and a box size $L=$250~Mpc h$^{-1}$ using Planck 2013 cosmological parameters \citep{Planck:2015}; second, the Consuelo simulation, which has a larger volume ($L=420$~Mpc h$^{-1}$) but poorer resolution ($2\times 10^9 M_{\odot}h^{-1}$), with WMAP5 cosmology \citep{wmap5:2009}. The results are consistent with each other; however, since the Bolshoi--Planck simulation is complete down to halos of mass $M_{\rm{vir}}\sim 10^{11} M_{\odot}$, we quote results from that analysis, which is better able to constrain minimum halo mass. \subsubsection{HOD Model} The HOD formalism \citep[e.g.,][]{Cooray:2002} describes the probability that $N$ galaxies (or AGN) reside in a host halo of mass $M_{h}$. To first order, this can be described as the average number of galaxies per host halo as a function of halo mass, $\langle N\rangle(M_{h})$. The HOD can be disaggregated into central and satellite galaxies, where the total HOD is the sum of the two components $\langle N\rangle = \langle N_{c}\rangle + \langle N_{s}\rangle$. We used a simple parametrization for the AGN HOD, derived from the \cite{Zheng:2007} model: \begin{equation} \langle N_{c}\rangle(M_{\rm{h}}) \propto \Theta( M_{\rm{h}} - M_{\rm{min}})~, \end{equation} \begin{equation} \langle N_{s}\rangle(M_{\rm{h}}) \propto \Big(\frac{M_{\rm{h}}-M_{\rm{min}}}{M_{1}}\Big)^{\alpha} , \end{equation} \noindent where $\Theta$ is the Heaviside step function, $M_{\rm{min}}$ is the minimum halo mass to host a central AGN, $M_{1}$ is the typical halo mass at which halos start hosting satellites, and $\alpha$ is the power-law slope of the satellites. We assumed $\log(M_{1}/M_{\rm{min}})=1.2$, which is the case for galaxies with $M_r<-20$~mag \citep{Zehavi:2011}, and we left $M_{\rm{min}}$ and $\alpha$ as two free model parameters. The normalization of the HOD is not constrained by the correlation function.
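For reference, the occupation functions above can be written down directly; the following minimal Python sketch (with hypothetical argument names) returns the mean central and satellite occupations up to the overall normalization:
\begin{verbatim}
import numpy as np

LOG_M1_OVER_MMIN = 1.2  # fixed, as for M_r < -20 galaxies

def mean_occupation(log_Mh, log_Mmin, alpha):
    # Mean central and satellite occupations versus halo mass, up to
    # the overall normalization (not constrained by the clustering).
    Mh = 10.0 ** np.atleast_1d(log_Mh)
    Mmin = 10.0 ** log_Mmin
    M1 = 10.0 ** (log_Mmin + LOG_M1_OVER_MMIN)
    n_cen = (Mh >= Mmin).astype(float)        # Heaviside step
    x = np.clip((Mh - Mmin) / M1, 0.0, None)  # satellite power law,
    n_sat = np.zeros_like(x)                  # zero below M_min
    pos = x > 0
    n_sat[pos] = x[pos] ** alpha
    return n_cen, n_sat
\end{verbatim}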
We searched for the best-fit model by stepping through $\log M_{\rm{min}}-\alpha$ parameter space in 0.1 unit increments ($11.2<\log M_{\rm{min}}<12.8$; $-0.5<\alpha<1.5$), where at each step we averaged five model realizations, and found where the correlated $\chi^{2}$ was minimized: \begin{equation} \begin{split} \chi^{2} = \sum_{i,j} \Big[ w_{\rm{obs}}(r_{\rm{p},i}) - w_{\rm{mod}}(r_{\rm{p},i})\Big] \times~C^{-1}_{\rm{eff},i,j}\\ \times\Big[w_{\rm{obs}}(r_{\rm{p},j}) - w_{\rm{mod}}(r_{\rm{p},j})\Big] ~, \end{split} \end{equation} \noindent where $w_{\rm{obs}}$ and $w_{\rm{mod}}$ correspond to the correlation functions of the real and mock data. Because the model has sample variance uncertainty from the finite simulation volume, $C_{\rm eff}$ is the sum of the covariance matrices from the data and simulation \citep{Zheng:2016}. The simulation covariance matrix was also estimated via jackknife resampling, by splitting the simulation box into 125 cubes. We report the best-fit parameters in Section 4. For each realization of the HOD model, \texttt{Halotools} populates the host halos with the mock central galaxies and adds satellites according to an NFW profile \citep{NFW:1996}. This is done only for the largest virialized halos in the catalog (i.e., ignoring the subhalo information). \subsubsection{Subhalo Model} The second type of model assumes a one-to-one relation between the galaxies and all halos and subhalos. We used the \cite{Behroozi:2010} model based on abundance matching, which assumes that stellar mass predominantly determines the clustering of the sample via the stellar-to-(sub)halo mass relation. This model has been calibrated and tested with galaxy observations, and so it provides an additional check to see if AGN occupy halos in the same way as inactive galaxies, i.e., based primarily on their stellar mass. For this model, the \texttt{Halotools} software populates a mock galaxy at the center of each halo {\it and} subhalo, and assigns it a stellar mass based on the peak mass of that (sub)halo. The mock galaxies in the center of each host halo correspond to the central galaxies, while the mocks in the subhalos correspond to the satellites. This method allows us to correct for the incompleteness of the AGN catalog in the following way: we populated the halos from our halo catalog with galaxies according to the Behroozi model, and then divided the stellar mass distribution of the resulting galaxy mock sample by the stellar mass distribution of the BASS AGN. We normalized it to obtain the incompleteness fraction as a function of stellar mass. We then assigned random values between 0 and 1 to the mock galaxies, and masked out any mock whose value fell below the incompleteness fraction for its assigned stellar mass. Consequently, the resulting mock sample had the same stellar mass distribution as the data. This subhalo-based model is approximately equivalent to the HOD model that assumes $\alpha=1$, as the number of subhalos (and hence satellite galaxies above a threshold luminosity) scales with halo mass. However, it is not biased by incompleteness in flux-limited catalogs. For this model there are no parameters to fit; rather, we simply assess how well the model agrees with the data. \subsection{Galaxy Mock Creation} \begin{figure} \centering \includegraphics[width=.5\textwidth]{f3.pdf} \caption{Projected autocorrelation function of 2MASS galaxies compared with the mock sample created using the \cite{Behroozi:2010} model and 2MASS selection function.
} \label{fig:wp_gg} \end{figure} We used the subhalo model to create a mock galaxy sample with the same stellar mass distribution as the full flux-limited 2MASS galaxy catalog. We fit for the $K_{s}$-band $M/L$ by comparing the resulting mock autocorrelation function with the data. The best-fit value was found to be 0.6 in solar units ($\chi^{2}_{\nu}=0.75$). The masked autocorrelation function using the Bolshoi--Planck halo catalog with the best-fit $M/L$ ratio is shown in Figure \ref{fig:wp_gg}, along with the autocorrelation function of the 2MASS galaxies. We found that using $M/L$ ratios at the upper and lower bounds of the 99\% confidence interval of the fit does not significantly change our results. Both simulations produced consistent results. \section{Results} The results for both models are summarized in Table 2 for each AGN sample. \subsection{Full AGN Sample} \begin{figure*} \centering \includegraphics[width=.48\textwidth]{f4.pdf} \includegraphics[width=.48\textwidth]{f5.pdf} \caption{Left: projected cross-correlation function of 2MASS galaxies and BASS AGN (blue points), with the best-fit HOD model (gray solid line) and subhalo model (black dotted line) for the $2-10$ keV luminosity-limited sample. The lower panel shows the data divided by the HOD model. An AGN sample limited by Eddington ratio ($L/L_{\rm{Edd}} > 0.01$; light blue points) is consistent with the same models. Right: contour map of the HOD fit, showing the $\Delta \chi^{2}=2.3$ and 6.2 levels, for the Bolshoi--Planck catalog (solid lines) and the Consuelo halo catalog (dotted lines).} \label{fig:wp_all} \end{figure*} The left panel of Figure \ref{fig:wp_all} shows the projected cross-correlation function for the full AGN sample and the corresponding HOD model fit: $\log M_{\rm{min}}/M_{\odot}h^{-1} = 12.4^{+0.2}_{-0.3}$, $\alpha_{AGN}=0.8^{+0.2}_{-0.5}$. We find that the average dark matter halo mass in which AGN reside is $\log M_{h}/M_{\odot}h^{-1}=13.4\pm0.2$, and the median mass is $\log M_{h}/M_{\odot}h^{-1} = 12.8\pm0.2$, measured empirically from the mocks within the $1\sigma$ best-fit HOD region. This is consistent with the measurement in \cite{Cappelluti:2010}, as well as in \cite{Krumpe:2017}. The smoothed contour map of the fit to the two-parameter HOD is shown in the right panel. While the associated significances of the contour levels should be taken with caution, we verified that the projected probability distributions for each parameter are nearly Gaussian, which we use to quote the errors on the best-fit parameters. The satellite power-law slope is consistent with that of the local inactive galaxy population ($\alpha \sim 1$), but favors $\alpha<1$. We stress that this HOD fit uses the full flux-limited sample, which is incomplete for all AGN luminosities. Thus the derived HOD pertains to AGN with median bolometric luminosity of $10^{44.7}$~erg~s$^{-1}$ at $z\sim 0.04$. However, we are able to compare our full cross-correlation measurement with an Eddington ratio-limited sample ($\lambda_{\rm{Edd}}>0.01$), which has been suggested to yield a less biased HOD than a luminosity-limited sample \citep{Jones:2017}. Although the statistics are poorer because only a fraction of the sources have black hole mass estimates, we find that it also agrees with our best-fit HOD model (Figure \ref{fig:wp_all}). Figure \ref{fig:wp_all} also shows the results from our subhalo model analysis, which agrees well with the data ($\chi^{2}_{\nu}=1.6$), despite there being no free parameters.
The advantage of the subhalo model is that it takes into account the incompleteness of the catalog. The median halo mass, $\log~M_{h}/M_{\odot}h^{-1}\sim 12.3$, is lower than that found with the HOD model (12.8) because of the proper treatment of incompleteness (i.e., taking into account the smaller mass galaxies and halos that were missed). We therefore conclude that AGN, on average, do not live in special environments compared with the overall galaxy population, as our only assumption was that the host galaxy stellar mass distribution of the AGN sample drives its clustering via the stellar-to-subhalo mass relation. \begin{table*} \centering \begin{tabular}{c | c c c c c | c c c} & & & HOD Model & & & & Subhalo Model & \\ \hline AGN Sample & $\tilde{M_{h}}$ & $\langle M_{h}\rangle$ & $M_{\rm{min}}$ & $\alpha$ & $\chi^{2}_{\nu}$ & $\tilde{M_{h}}$& $\langle M_{h}\rangle$ & $\chi^{2}_{\nu}$ \\ \hline \hline Full & $12.8^{+0.2}_{-0.1}$ & $13.4^{+0.1}_{-0.3}$ & $12.4^{+0.2}_{-0.3}$ & $0.8^{+0.2}_{-0.5}$ & 1.5 & 12.3 & 13.3 & 1.6\\ Obscured & $12.9^{+0.3}_{-0.7}$ & $13.5^{+0.2}_{-0.2}$ & $12.5^{+0.2}_{-0.8}$&$1.1^{+0.4}_{-0.2}$ & 1.9 & 12.3&13.3 & 2.1\\ Unobscured & $12.0^{+0.2}_{-0.3}$ & $12.6^{+0.2}_{-0.3}$ & $11.4\pm0.2$ & $0.4^{+0.2}_{-0.4}$&1.0 &12.3 &13.3& 4.5\\ Small $M_{\rm{bh}}$ & $12.6^{+0.2}_{-0.9}$ & $13.4^{+0.2}_{-0.9}$ & $12.1^{+0.4}_{-1.0}$ & $0.9^{+0.2}_{-0.4}$ & 2.6 & 12.2&13.3 &2.1\\ Large $M_{\rm{bh}}$ & $12.8^{+0.2}_{-0.4}$ & $13.2^{+0.2}_{-0.3}$ & $12.4^{+0.2}_{-0.4}$ & $0.2^{+0.5}_{-0.4}$ & 0.6 & 12.4&13.3 &1.6\\ \hline \end{tabular} \caption{Halo model parameters for each AGN subsample, for both the HOD and subhalo models. All masses are in log units of $M_{\odot} h^{-1}$.} \label{table:t2} \end{table*} \subsection{Clustering versus Obscuration} \begin{figure*} \centering \includegraphics[width=.49\textwidth]{f6.pdf} \includegraphics[width=.49\textwidth]{f7.pdf} \includegraphics[width=.75\textwidth]{f8.pdf} \caption{Upper panels: projected cross-correlation function of obscured (red) versus unobscured (blue) AGN and corresponding HOD model fits (solid lines) and subhalo models (dotted lines). While their two-halo terms are consistent with each other, obscured AGN appear more clustered on scales of the one-halo term. Upper right: $\Delta \chi^{2}$ contour maps of the HOD fit for unobscured (blue) and obscured (red) AGN are completely distinct, suggesting some difference between the two populations. Lower panels: matched subsamples have similar distributions of (left) log of the $L_{2-10~keV}$ luminosity, (middle) redshift, and (right) log of the host galaxy stellar mass.} \label{fig:nH} \end{figure*} Figure \ref{fig:nH} shows the cross-correlation function of unabsorbed ($N_{\rm{H}}<10^{22}$ cm$^{-2}$) versus absorbed ($N_{\rm{H}}\geq 10^{22}$ cm$^{-2}$) AGN with their corresponding HOD fits. The luminosity and redshift distributions of the two bins are shown in the lower panels. While the two-halo terms of the data seem consistent with each other, the obscured AGN appear more clustered on small scales (by $\sim 3\sigma$), consistent with recent studies of narrow- versus broad-line AGN in the Sloan Digital Sky Survey (SDSS; \citealt{Jiang:2016}) and in {\it Swift}/BAT AGN \citep{Krumpe:2017}. This was also seen using the full sample (without matching luminosity distributions) and using different $N_{\rm{H}}$ thresholds up to (but not including) $10^{23}$ cm$^{-2}$. The stellar mass distributions shown in Figure \ref{fig:nH} are very similar, so this cannot cause the difference in clustering.
The subhalo model for unobscured AGN (dotted blue line) is inconsistent with the data (blue points; $\chi^{2}_{\nu} > 4$), which is another indication that factors beyond host galaxy stellar mass are determining the clustering signal. The $\Delta \chi^{2}$ contour plots of the separate HOD fits for obscured and unobscured AGN are also shown in Figure \ref{fig:nH}; the shapes of the HODs differ by more than $4\sigma$. $M_{\rm{min}}$ and $\alpha$ differ for the two populations: unobscured AGN have a smaller minimum halo mass and a shallower satellite-term slope. This would suggest that unobscured AGN tend to be in central galaxies while obscured AGN are more likely to be in satellites. The corresponding average dark matter halo masses are $\log M_{h}/M_{\odot}h^{-1}=13.5\pm 0.2$ for obscured AGN and $\log M_{h}/M_{\odot}h^{-1}=12.6\pm 0.3$ for unobscured AGN. The finding that obscured AGN live in larger mass halos than their unobscured counterparts agrees with recent results of angular clustering studies of infrared-selected \emph{WISE} AGN \citep{Hickox:2009,DiPompeo:2014,DiPompeo:2017}. It is inconsistent, however, with clustering studies of Type 1 vs. Type 2 X-ray-selected AGN in COSMOS \citep{Allevato:2014}, although these studies probed AGN at higher redshift ($z\sim 1$) and different luminosity ranges. This inconsistency could also be due to the host galaxies; the Type 1 sample had systematically higher luminosities, indicating they most likely had larger host galaxy stellar masses, which may explain why a larger bias for Type 1 AGN was found. The clustering properties of unobscured AGN are also consistent with the halo masses found for optical quasar samples across a wide range of redshift \cite[e.g.,][]{Croom:2005,Ross:2009,Shen:2009}. The distinctly different halo masses of obscured and unobscured AGN could be due to intrinsic differences between the two types. It has been suggested that (Compton-thin) obscured AGN tend to have lower Eddington ratios than unobscured AGN, since the covering factor depends on mass-normalized accretion rate \citep[e.g.,][]{Ricci:2017}. This would cause our sample of obscured AGN to have systematically larger black hole masses than the unobscured AGN since we matched their luminosities, which we verified with their $M_{\rm BH}$ distributions. To test if this is biasing the result, we considered the objects that have black hole mass and accretion rate estimates ($\sim 75\%$ of the AGN sample analyzed), and measured the clustering of Compton-thin ($N_{\rm H}<10^{23.5}$ cm$^{-2}$) obscured and unobscured AGN after matching distributions of Eddington ratio rather than luminosity (Figure \ref{fig:t1t2_edd}). The differences between the two types are still present, suggesting that something else determines the environmental differences. However, it should be noted that the black hole mass determination for obscured AGN is less precise than for unobscured AGN. \begin{figure} \centering \includegraphics[width=.5\textwidth]{f9.pdf} \caption{Projected cross-correlation function for Compton-thin obscured (red) and unobscured (blue) AGN with matched distributions of Eddington ratio (inset).
Although this analysis involved only half as many AGN as in Figure \ref{fig:nH}, the difference is similar, indicating that black hole mass (and its possible relation to halo mass) is not causing the difference in clustering.} \label{fig:t1t2_edd} \end{figure} \begin{figure} \centering \includegraphics[width=.5\textwidth]{f10.pdf} \caption{The clustering of obscured (red) and unobscured (blue) AGN subsamples is well reproduced by a toy subhalo model split by host halo concentration ($c<13.5$, blue; $c>10.0$, red). This differs from the HOD interpretation that each population has distinct occupation statistics; rather, each population could reside in halos of statistically different concentrations (and hence different assembly histories).} \label{fig:conc} \end{figure} \subsubsection{Role of Assembly Bias} Another possible difference of clustering between obscured and unobscured AGN could be related to the host halo assembly history rather than the total halo mass, an effect known as assembly bias \citep[e.g.,][]{Gao:2005,Dalal:2008}. In general, there could be a connection between the mass assembly onto the host halo and the mass assembly onto the central black hole, and additionally, a link between obscuration and halo age can constrain whether obscuration is an evolutionary AGN phase. Also, if mergers are significant for AGN obscuration, it is possible that the merging of subhalos, which traces the assembly of the host halo, would leave an imprint on the clustering signal. It has been shown that the amount of substructure in halos of a given mass depends on formation epoch \citep[e.g.,][]{Gao:2005,Dalal:2008}, as subhalos in early-forming hosts have had more time to fall toward the center, making such hosts more centrally concentrated. Therefore, if unobscured AGN reside preferentially in halos formed late, then both the one-halo and two-halo terms of their correlation function would be reduced. This is quite a different explanation from that suggested by the HOD analysis (that they are preferentially central galaxies). We investigated this scenario with a simple toy model: we populated the halo catalog with our subhalo model and then split the sample by the NFW concentration of their host halos ($c\equiv r_{\rm vir}/r_{s}$, where $r_{\rm vir}$ is the virial radius and $r_{s}$ is the NFW scale radius), which correlates with halo formation time \citep[e.g.,][]{Wechsler:2002}. We assumed there is a maximum threshold concentration to host an unobscured AGN, and a minimum threshold concentration to host an obscured AGN. While reality is likely to be more complicated, this simple model can explain the overall trend. We found that the obscured sample is best fit by $c>10\pm 2$ ($\chi^2_{\nu}=1.8$), and the unobscured sample is best fit by $c<13.5\pm 2$ ($\chi^2_{\nu}=0.9$; the median concentration of the mock sample is $c\sim 10$). Figure \ref{fig:conc} shows the projected cross-correlation function of both models, compared to the obscured and unobscured AGN samples, and Table \ref{table:ctable} summarizes the best-fit parameters. There is good agreement with the data, even with such a simple model. The average halo concentrations of each mock sample are $c\sim 8.5$ (unobscured) and $c\sim 27$ (obscured), which correspond to the formation epochs of $z\sim 1$ and $z\sim 5.5$, respectively (see \citealt{Wechsler:2002}, who define the formation epoch as the time when the halo mass accretion rate, $d\log M_{h}/d\log a$, falls below 2).
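Operationally, this toy model amounts to nothing more than a concentration cut on the mock sample, as in the following minimal Python sketch (the array name is an assumption; the default thresholds reflect the best-fit values quoted above):
\begin{verbatim}
import numpy as np

def concentration_split(conc, c_max_unobsc=13.5, c_min_obsc=10.0):
    # conc: NFW concentration (r_vir / r_s) of each mock galaxy's
    # host halo. Unobscured AGN are only allowed in hosts below a
    # maximum concentration (late-forming halos); obscured AGN only
    # above a minimum concentration (early-forming halos).
    conc = np.asarray(conc)
    return conc < c_max_unobsc, conc > c_min_obsc
\end{verbatim}
The two boolean masks select the mock subsamples whose cross-correlation functions are then compared with the unobscured and obscured measurements, respectively.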
From this exercise, we see that obscured AGN do not necessarily reside in more massive halos than unobscured AGN (concentration is inversely proportional to mass); rather, it is possible that unobscured AGN instead prefer halos with low concentration and/or later formation epochs. Evidence of this was seen in a comparison between narrow- and broad-line AGN in SDSS; Type 2 AGN seem to reside in groups that are more centrally concentrated \citep{Jiang:2016}. It remains unclear whether the distribution of satellites or the halo formation epoch would be driving this preference. Note that these results are the opposite of what we would expect for the evolutionary picture in which a merger-triggered AGN is first obscured and then evolves into an unobscured phase; in that case, the obscured AGN would reside in the most recently formed halos, with a much smaller difference in average host halo formation epoch between the samples. \begin{table} \centering \begin{tabular}{c c c c c c c} \hline \hline AGN Sample & $c_{\rm min}$ & $c_{\rm max}$ & $\langle c\rangle$ & $\tilde{M}_{h}$ & $\langle M_{h} \rangle$ & $\chi^{2}_{\nu}$ \\ \hline Obscured & $10.0^{+1.5}_{-2.0}$ & - & 27.0 & $12.3$ & $13.4$ & 1.8 \\ Unobscured & - & $13.5^{+2.0}_{-2.5}$ & 8.5 & $12.3$ & $13.1$ & 0.9 \\ \hline \end{tabular} \caption{Parameters for the best-fit subhalo models, which assume a threshold halo concentration (a maximum for the unobscured sample, and a minimum for the obscured sample). Halo masses are quoted as $\log M_{h}/M_{\odot}h^{-1}$, with $\tilde{M}_{h}$ the median and $\langle M_{h}\rangle$ the mean.} \label{table:ctable} \end{table} \subsection{Clustering versus Black Hole Mass} Figure \ref{fig:mbh} shows the results of the correlation function measurements and HOD fitting for the AGN sample divided into two bins of black hole mass. We again randomly down-sampled each bin in order to avoid selection effects. The differences between the two samples are not significant; there is a $\sim 1\sigma$ difference in $\alpha$, in the sense that the satellite slope is shallower for large black hole masses than for the smaller ones, with best-fit values $0.2\pm0.5$ and $0.9\pm0.3$, respectively. A satellite power-law slope of 0 is consistent with the population residing purely in central galaxies; this is within the uncertainties, given such large black hole mass bin sizes and the limited sample size. (While the $\chi^2_{\nu}$ is large for the small black hole bin (2.6), it should be noted that it becomes 1.3 with the same best-fit parameters after removing one data point.) While the correlation between black hole mass and halo mass \citep[e.g.,][]{Silk:1998,El-Zant:2003,Booth:2010} would predict that the larger black hole bin would have a larger bias, as was found in \cite{Krumpe:2015}, we find no significant difference. The median halo masses for each bin are $\log~M_{h}/M_{\odot}h^{-1}=12.6\pm0.3$ and $12.8\pm0.3$, for small and large black holes, respectively. Our results may suggest that larger black holes are less likely to reside in satellite galaxies, which would make sense assuming a correlation between the mass of the black hole and the mass of its host \emph{subhalo}. More data are needed to conclusively confirm this. \begin{figure*} \centering \includegraphics[width=.49\textwidth]{f11.pdf} \includegraphics[width=.49\textwidth]{f12.pdf} \includegraphics[width=.75\textwidth]{f13.pdf} \caption{Upper left: projected cross-correlation function in two bins of black hole mass, $M_{\rm bh}<10^{8} M_{\odot}$ (cyan) and $M_{\rm bh}>10^{8} M_{\odot}$ (purple), with corresponding HOD model fits (solid lines) and subhalo models (dotted lines).
Upper right: $\Delta \chi^{2}$ contour map of the HOD fit for each mass bin. Lower panels: distributions of the log of the X-ray luminosity (matched), redshift, and log of the host galaxy stellar mass.} \label{fig:mbh} \end{figure*} \section{Discussion} \subsection{Environments of Local AGN} We have cross-correlated hard X-ray selected AGN with 2MASS near-infrared-selected galaxies to constrain how an unbiased sample of local AGN occupies its host halos. Analyzing the sample in terms of an HOD model, we find that the number of AGN hosted in a halo roughly scales with halo mass, as is the case for the overall galaxy population. This is inconsistent with the notion that AGN are predominantly in central galaxies \citep[e.g.,][]{Starikova:2011,Richardson:2013}, as our results suggest a significant fraction of AGN are in satellites. This agrees with several recent studies \citep{Allevato:2012,Oh:2014,Silverman:2014}. Additionally, using a subhalo-based model that corrects for catalog incompleteness, we find that the host galaxy stellar mass distribution can determine the environments of AGN on average, via the stellar mass--subhalo mass relation \citep{Behroozi:2010}. This was also found when comparing predictions of this model with the weak gravitational lensing signal of X-ray-selected AGN in COSMOS \citep{Leauthaud:2015}. The typical halo mass found for the BASS AGN with our HOD analysis, $\log M_h/M_{\odot}~h^{-1} = 12.8$, lies between those typically found for soft X-ray-selected AGN ($\log M_h/M_{\odot}~h^{-1} \sim 13$) and optically selected AGN ($\log M_h/M_{\odot}~h^{-1} \sim 12.5$), and thus is broadly consistent with earlier results from AGN clustering studies across a large range of luminosity and redshift \citep{Croom:2005,Gilli:2005,Gilli:2009,Ross:2009,Shen:2009,Allevato:2011,Krumpe:2012,Allevato:2014}. The typical halo masses of BASS AGN correspond to galaxy group environments. \subsection{Obscured versus Unobscured Environments} We split our sample into two bins of $N_{\rm H}$ to test whether AGN with different column densities (i.e., obscured versus unobscured) live in different environments, for samples matched in luminosity and redshift, in order to avoid bias in the observed volume; we note that the host galaxy stellar mass distributions are also similar (Figure \ref{fig:t1t2_edd}). We find differences in their correlation functions, predominantly on small scales. Our HOD fits suggest that obscured AGN live in more massive halos and in denser environments than unobscured AGN. The simplest unification models attribute obscuration to the circumnuclear material absorbing the radiation produced in the broad-line region. In that case, whether the AGN is observed as obscured or unobscured depends only on viewing angle \citep{Urry:1995}, which means the halo-scale environments should be the same (statistically) for both populations. Although we now know that circumnuclear geometry is not the only factor, as the covering factor depends on luminosity and $\lambda_{\rm Edd}$ \citep[e.g.,][]{Ricci:2017}, the analysis of our matched samples shows that these factors are not biasing our results. Large column densities can also come from the host galaxy; for example, from a random molecular cloud that happens to lie along the line of sight or from the orientation of the galaxy disk. In the present case, only about 5\% of the sample lies in an edge-on host galaxy, and the results do not change when those AGN are removed.
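As an aside on methodology, the matched-sample construction used throughout this section (matching in luminosity and redshift here, and in Eddington ratio for Figure \ref{fig:t1t2_edd}) can be realized by histogram-based random down-sampling. The following is a minimal sketch of such a procedure; the function name and binning choices are ours and not part of the published analysis.
\begin{verbatim}
import numpy as np

def match_by_downsampling(x_a, x_b, bins, seed=0):
    """Randomly down-sample two samples so their histograms in the
    matching variable x (e.g., log L_X or log Eddington ratio) agree
    bin by bin; returns index arrays into each input sample."""
    rng = np.random.default_rng(seed)
    bin_a = np.digitize(x_a, bins)
    bin_b = np.digitize(x_b, bins)
    keep_a, keep_b = [], []
    for k in range(1, len(bins)):
        in_a = np.flatnonzero(bin_a == k)
        in_b = np.flatnonzero(bin_b == k)
        n = min(in_a.size, in_b.size)   # common count in this bin
        if n == 0:
            continue
        keep_a.append(rng.choice(in_a, n, replace=False))
        keep_b.append(rng.choice(in_b, n, replace=False))
    return np.concatenate(keep_a), np.concatenate(keep_b)
\end{verbatim}
Down-sampling to the common count per bin sacrifices statistics but guarantees that any residual clustering difference cannot be attributed to the matched variable.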
Another possibility is obscuration by inflowing gas following a merger \citep[e.g.,][]{Hopkins:2008,Kocevski:2015,Ricci:2017C}. In general, because the probability of galaxy interactions depends on the environment \citep[e.g.,][]{Shen:2009,Jian:2012}, it is possible that either major mergers or smaller galaxy interactions play a role in causing the clustering difference. \cite{Dipompeo:2017B} found a similar clustering difference on large scales with \emph{WISE} infrared-selected AGN at $z \sim 1$. They interpreted their results as obscuration being an evolutionary phase of merger-driven quasar fueling, in which the quasar is first obscured, followed by an unobscured phase after gas `blow-out'. In this model, the observation that obscured AGN live in larger halos would be a selection effect arising from the use of luminosity-limited samples. Assuming this scenario for our AGN sample, the halo mass differences should be minimal at these low luminosities ($\sim 0.2$ dex in $\log M_{h}/M_{\odot}h^{-1}$)---much less than our results based on the HOD analysis. It is unlikely that major mergers trigger these low-luminosity, low-redshift AGN. Indeed, only 8\% of BAT AGN are in the final phases of major mergers, where obscuration is found to peak \citep{Koss:2010}. The shallow satellite power-law slope of unobscured AGN, $\alpha$, obtained from the HOD analysis would suggest that the fraction of galaxies that host unobscured AGN drops as a function of halo mass. This would mean that unobscured AGN avoid the richest clusters. Because high-velocity encounters in the largest clusters disfavor galaxy mergers, perhaps a higher fraction of unobscured AGN were triggered by an earlier major merger, such that there was sufficient time to clear the surrounding gas and dust. Using the analytical function of the instantaneous galaxy merger rate from \cite{Shen:2009B}, we estimate that major mergers occur roughly four times more often in halos of the average mass hosting unobscured AGN than in halos hosting obscured AGN around $z\sim 0.1$. However, we compared the halo masses for obscured and unobscured AGN from the 2MASS group catalog \citep{Lu:2016} for the sources with counterparts in 2MASS, and found no evidence that obscured AGN live in preferentially larger halos. Alternatively, we have shown that a difference in halo concentration, as opposed to differing halo occupation distributions and/or typical halo masses, fits the data equally well. Highly concentrated halos of a given mass would have a high concentration of satellite galaxies, and therefore have a higher probability of galaxy interactions (i.e., minor mergers and encounters, as opposed to major mergers that predominantly occurred at high redshifts). Indeed, \cite{Jiang:2016} found that SDSS Type 2 satellites were more centrally concentrated than Type 1 satellites, and \cite{Villarroel:2014} calculated an enhanced number of SDSS Type 2 vs. Type 1 companions around Type 2 AGN. The excess of Compton-thick BASS AGN in mergers would also support this scenario \citep{Koss:2016}. However, after removing clear cases of mergers and interactions in the obscured sample by visual inspection, the clustering differences slightly increased rather than decreased --- the opposite of what this scenario would predict. Additionally, the unobscured sample is the one that is more inconsistent with the clustering statistics expected for its stellar mass distribution, suggesting unobscured AGN have stronger environmental dependencies than obscured AGN.
Instead, the observed difference between the clustering of obscured and unobscured AGN may be due to a difference in their host halo assembly histories. Halo concentration correlates with formation epoch, and so unobscured AGN tend to reside in halos that were assembled more recently in cosmic time than halos hosting obscured AGN. This means that the merging of their subhalos, and hence the merging of the galaxies within these subhalos, occurred around $z\sim 1$, as opposed to higher redshift for obscured AGN host halos. Therefore, the progenitors of $z=0$ unobscured AGN underwent major merger events statistically more recently than those of obscured AGN. If the major mergers triggered a powerful quasar that blew away much of the surrounding gas and dust, this would explain the lower column densities we see in AGN in recently formed halos. Perhaps obscured AGN host halos had, on average, a more quiescent history dominated by secular processes, allowing nuclear obscuring material to remain. This scenario, in which the host halo histories rather than the AGN triggering processes differ, explains the distinct clustering signatures we see for unobscured and obscured AGN at $z\sim0$. However, it is uncertain whether this explanation is consistent with higher redshift studies \citep[e.g.,][]{Allevato:2014}; an investigation of obscured vs. unobscured AGN clustering with samples of matched stellar mass distributions across a wide range of redshift (and luminosity) is needed. \subsection{Possible Dependence of Environment on Black Hole Mass} There is a small ($\sim 1\sigma$) difference between the clustering of AGN with black holes of mass $<10^{8} M_{\odot}$ and $>10^{8} M_{\odot}$. The flatter satellite power-law slope indicated by our analysis may suggest that larger black holes tend to lie in central galaxies rather than satellites, while smaller black holes tend to lie in satellites. A correlation between the mass of the SMBH and the mass of its host subhalo would go in the right direction. However, more data are clearly necessary to confirm this weak signal. \section{Summary} In this study, we characterized the environments of a sample of accreting SMBHs unbiased toward obscuration by measuring the cross-correlation function of BASS AGN and 2MASS galaxies. We compared our results to mock samples created from simulations in order to model how AGN occupy their host dark matter halos. From fitting an HOD model to the cross-correlation function of the full sample, and by comparing with a subhalo model that assumed only stellar mass determines clustering statistics, we concluded that BASS AGN, on average, occupy dark matter halos in a manner consistent with the overall inactive galaxy population. However, subsamples based on column density and black hole mass have differing clustering statistics. We found that absorbed AGN reside in denser environments than unabsorbed AGN, despite no significant difference in their luminosity, redshift, or stellar mass distributions. Our subhalo model analysis suggests they may reside in halos with statistically different concentrations/assembly histories. The alternative interpretation from the HOD analysis --- that they have systematically different halo occupation distributions and host halo masses --- seems to contradict the finding that stellar mass drives the clustering amplitude. Lastly, we found a hint that a larger fraction of high-mass black holes ($M>10^{8} ~M_{\odot}$) reside in central galaxies than of lower mass black holes. \acknowledgments M.P.
would like to thank Andrew Hearin for helpful discussions. M.P., N.C., and C.M.U. acknowledge support from NASA-SWIFT GI: Nr. 80NSSC18K0505, NSF grant 1715512, NASA CT Space Grant, and Yale University. M.K. acknowledges support from NASA through ADAP award NNH16CT03C, and C.R. acknowledges support from FONDECYT 1141218, CONICYT PAI77170080, Basal-CATA PFB--06/2007, and the China-CONICYT fund. \software{CorrFunc \citep{Sinha:2017}, Halotools \citep{Hearin:2017}, Astropy \citep{Astropy:2013}, Matplotlib \citep{matplotlib:2007}.} \bibliographystyle{yahapj}
\section{Background} Experiments designed to study the effect of electric current on domain wall motion in magnetic nanowires show that domain walls move over large distances with a velocity proportional to the applied current.\cite{Koo:2002,Tsoi:2003,Klaui:2003,Grollier:2003,Vernier:2004,Yamaguchi:2004,Lim:2004,Klaui:2005,Hayashi:2006,Beach:2006} Most theories ascribe this behavior to the interplay between {\it spin-transfer} (the quantum mechanical transfer of spin angular momentum between conduction electrons and the sample magnetization) and magnetization damping of the Gilbert type.\cite{Gilbert} Contrary to the second point, we argue in this paper that Landau-Lifshitz damping\cite{LL} provides the most natural description of the dynamics. This conclusion is based on the premises that damping should always reduce magnetic free energy and that microscopic calculations must be consistent with statistical and thermodynamic considerations. Theoretical studies of current-induced domain wall motion typically focus on one-dimensional models where current flows in the $x$-direction through a magnetization ${\bf M}(x)=M\hat{\bf M}(x)$. When $M$ is constant, the equation of motion is \begin{equation} \label{DSZ1} {\dot{\bf M}} = -\gamma {\bf M} \times {\bf H} + {\bf N}_{\rm ST} +{\bf D}. \end{equation} The precession torque $-\gamma {\bf M} \times {\bf H}$ depends on the gyromagnetic ratio $\gamma$ and an effective field $\mu_0{\bf H}=-\delta F/\delta {\bf M}$ which accounts for external fields, anisotropies, and any other effects that can be modelled by a free energy $F[{\bf M}]$ ($\mu_0$ is the magnetic constant). The spin-transfer torque ${\bf N}_{\rm ST}$ is not derivable from a potential, but its form is fixed by symmetry arguments and model calculations.\cite{Berger:1978,Bazaliy:1998,Ansermet:2004,Tatara:2004,Waintal:2004,ZL:2004,Thiaville:2005,Dugaev:2005,XZS:2006} A local approximation \cite{caveat} (for current in the $x$-direction) is \begin{equation} \label{DSZ2} {\bf N}_{\rm ST} = -\upsilon \left[\partial_x{\bf M} - \beta \hat{\bf M}\times\partial_x{\bf M}\right] . \end{equation} The first term in (\ref{DSZ2}) occurs when the spin current follows the domain wall magnetization adiabatically, {\it i.e.}, when the electron spins remain largely aligned (or antialigned) with the magnetization as they propagate through the wall. The constant $\upsilon$ is a velocity. If $P$ is the spin polarization of the current, $j$ is the current density, and $\mu_B$ is the Bohr magneton, \begin{equation} \label{speed} \upsilon={-Pj\mu_B\over eM}. \end{equation} The second term in (\ref{DSZ2}) arises from non-adiabatic effects. The constant $\beta$ is model dependent. The damping torque ${\bf D}$ in (\ref{DSZ1}) accounts for dissipative processes, see \onlinecite{Heinrich} for a review. Two phenomenological forms for ${\bf D}$ are employed commonly: the Landau-Lifshitz form\cite{LL} with damping constant $\lambda$, \begin{equation} \label{DL} {\bf D}_L= - \lambda {\bf\hat{ M}} \times \left( {\bf M} \times {\bf H} \right), \end{equation} and the Gilbert form\cite{Gilbert} with damping constant $\alpha$, \begin{equation} \label{DG} {\bf D}_G= \alpha {\bf \hat{M}} \times {\dot{\bf M}}. 
\end{equation} The difference between the two is usually very small and almost all theoretical and simulation studies of current-induced domain wall motion solve (\ref{DSZ1}) with the Gilbert form of damping.\cite{ZL:2004,Thiaville:2005,Li:2004,Thiaville:2004,He:2005,Dugaev:2005,Tatara:2005,He:2006} This is significant because, as we now discuss, Gilbert damping and Landau-Lifshitz damping produce quite different results for this problem when the same spin transfer torque is used. Consider a N\'{e}el wall where ${\bf M}$ lies entirely in the plane of a thin film when the current is zero. By definition, $\hat{\bf M}\times{\bf H}=0$ if we choose ${\bf M}(x)$ as the equilibrium structure which minimizes the free energy $F[{\bf M}]$. The wall distorts if $\hat{\bf M}\times{\bf H}\neq 0$ for any reason. The theoretical literature cited above shows that, with damping omitted, the N\'{e}el wall moves undistorted at the speed $\upsilon$ [see (\ref{speed})] when $\beta=0$ in (\ref{DSZ2}). Gilbert damping brings this motion to a stop because ${\bf D}_G$ rotates ${\bf M}(x)$ out-of-plane until the torque from magnetostatic shape anisotropy cancels the spin-transfer torque. However, if the non-adiabatic term in (\ref{DSZ2}) is non-zero, steady wall motion occurs at speed $\beta \upsilon /\alpha$. Using this information, two recent experiments\cite{Hayashi:2006,Beach:2006} used their observations of average domain wall velocities very near $\upsilon$ to infer that $\beta \approx \alpha$ for permalloy nanowires. This is consistent with microscopic calculations (which include disorder-induced spin-flip scattering) that report $\beta=\alpha$ \cite{Tserkovnyak:2006} or $\beta\approx\alpha$ \cite{Kohno:2006} for realistic band models of an itinerant ferromagnet. On the other hand, calculations for ``s-d'' models of ferromagnets with localized moments find little numerical relationship between $\beta$ and $\alpha$ \cite{Kohno:2006,Duine:2007}. A rather different interpretation of the data follows from a discussion of current-driven domain wall motion in the s-d model offered by Barnes and Maekawa.\cite{Barnes:2005} These authors argue that there should be no damping of the magnetization when a wall which corresponds to a minimum of the free energy $F[{\bf M}]$ simply translates at constant speed. This is true of ${\bf D}_L$ in (\ref{DL}) because ${\bf M}\times{\bf H}=0$ but it is {\it not} true of ${\bf D}_G$ in (\ref{DG}) because ${\bf \dot{M}}\neq 0$ when ${\bf N}_{\rm ST}\neq 0$. From this point of view, the ``correct'' equation of motion is \begin{equation} \label{correct} {\dot{\bf M}} = -\gamma {\bf M} \times {\bf H} -\upsilon \partial_x {\bf M} - \lambda {\bf\hat{ M}} \times \left( {\bf M} \times {\bf H} \right), \end{equation} because it reduces (for energy-minimizing walls) to \begin{equation} \label{mini} {\dot {\bf M}}= -\upsilon\partial_x {\bf M}. \end{equation} In the absence of extrinsic pinning, this argument identifies the experimental observation of long-distance wall motion with a uniformly translating solution ${\bf M}(x-\upsilon t)$ of (\ref{mini}) with minimum energy. As we discuss below, it is possible to convert between descriptions with Landau-Lifshitz and Gilbert dampings by concurrently changing the value of the non-adiabatic spin-transfer torque. The Landau-Lifshitz description in Eq.~\ref{correct} is equivalent to one with Gilbert damping with $\beta=\alpha$. 
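To make concrete how close the two damping forms are for simple motions, the following minimal macrospin integration (a toy check of our own, not the micromagnetic simulations of Sec.~II) evolves a single spin in a static field with the Landau-Lifshitz term (\ref{DL}), taking $\lambda=\alpha\gamma$, and with the Gilbert term (\ref{DG}) in its standard explicit form, using arbitrary units $\gamma=M=|{\bf H}|=1$. The two trajectories agree up to $O(\alpha^2)$ effects.
\begin{verbatim}
import numpy as np

gamma, alpha = 1.0, 0.02
H = np.array([0.0, 0.0, 1.0])

def rhs_ll(m):
    # Landau-Lifshitz damping, Eq. (4), with lambda = alpha * gamma.
    mhat = m / np.linalg.norm(m)
    return (-gamma * np.cross(m, H)
            - alpha * gamma * np.cross(mhat, np.cross(m, H)))

def rhs_gilbert(m):
    # Explicit form of the Gilbert equation, Eq. (5):
    # dM/dt = -gamma/(1+a^2) [ M x H + a Mhat x (M x H) ].
    mhat = m / np.linalg.norm(m)
    return -gamma / (1 + alpha**2) * (
        np.cross(m, H) + alpha * np.cross(mhat, np.cross(m, H)))

def rk4_step(f, m, dt):
    k1 = f(m); k2 = f(m + 0.5 * dt * k1)
    k3 = f(m + 0.5 * dt * k2); k4 = f(m + dt * k3)
    return m + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

m0 = np.array([1.0, 0.0, 0.2])
m_ll = m_g = m0 / np.linalg.norm(m0)
dt = 0.01
for _ in range(20000):
    m_ll = rk4_step(rhs_ll, m_ll, dt)
    m_g = rk4_step(rhs_gilbert, m_g, dt)

# Both spins spiral toward H; the accumulated difference between the
# two trajectories is small, of order alpha^2 per unit time.
print(np.abs(m_ll - m_g).max())
\end{verbatim}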
The goal of this paper is to argue that there are conceptual reasons to prefer the description with Landau-Lifshitz damping even when $\beta\ne\alpha$. Section II presents micromagnetic simulations that confirm the discussion above and describes further details. Then, the remainder of this paper provides three theoretical arguments which support the use of Landau-Lifshitz damping for current-driven domain wall motion (in particular) and for other magnetization dynamics problems (in general). First, we reconcile our preference for Landau-Lifshitz damping with the explicit microscopic calculations of Gilbert damping and non-adiabatic spin torque reported in Refs.~\onlinecite{Tserkovnyak:2006,Kohno:2006,Duine:2007}. Second, we show that Gilbert damping can increase the magnetic free energy in the presence of spin-transfer torques. Finally, we show that Landau-Lifshitz damping is uniquely selected for magnetization dynamics when the assumptions of non-equilibrium thermodynamics are valid. \section{Micromagnetics} Our analysis begins with a check on the robustness of the foregoing model predictions using full three-dimensional micromagnetic simulations of current-driven domain wall motion.\cite{OOMMF} We studied nanowires 12~nm thick and 100~nm wide with material parameters chosen to simulate ${\rm Ni}_{80}{\rm Fe}_{20}$. At zero current, this geometry and material system support in-plane magnetization with stable domain walls of transverse type.\cite{Donahue:1997} Figure~\ref{transverse} shows the wall position as a function of time for a transverse domain wall for several values of applied current density $j$. The curves labelled Gilbert ($\alpha=0.02$) show that wall motion comes quickly to a halt. Examination of the magnetization patterns confirms the torque cancellation mechanism outlined above. The curves labelled Landau-Lifshitz show that the wall moves uniformly with the velocity given by (\ref{speed}), which is independent of the damping parameter $\lambda$.\cite{caveat2} \begin{figure} \centerline{\includegraphics[width=0.5\textwidth]{transverse.eps}} \caption{Position versus time for a transverse domain wall and several values of the applied current density computed with adiabatic spin torques ($\beta=0$) and the two forms of damping in (\ref{DL}) and (\ref{DG}). } \label{transverse} \end{figure} The sudden turn-on of the current, and hence of the Oersted magnetic field, at $t=0$ generates the small-amplitude undulations of the curves in Figure~\ref{transverse} but otherwise has little effect on the dynamics. An initial state of a stable vortex wall in a 300 nm wide wire produces similar results, except that under the Gilbert formulation the vortex wall moves about twenty times farther before stopping as compared to the transverse wall in the 100 nm wire. We conclude from these simulations that the basic picture of domain wall dynamics gleaned from one-dimensional models is correct. The magnetic free energy behaves differently in simulations depending on whether Landau-Lifshitz or Gilbert damping is used. Before the current is turned on, the domain wall is in a configuration that is a local minimum of the energy. For Landau-Lifshitz damping, the energy remains largely constant near this minimum and is exactly constant if the Oersted fields are ignored. For Gilbert damping, the energy increases when the current is turned on and the walls distort. For a transverse wall, the distortion is largely an out-of-plane tilting.
Initially, the energy increases at a rate proportional to the damping parameter (ignoring higher order corrections discussed in the next section). The details of this behavior are somewhat obscured by the oscillations due to the Oersted magnetic field, but are quite apparent in simulations in which this field is omitted. As the wall tilts out of plane, the torque due to the magnetostatic field opposes the wall motion and the wall slows down. Eventually the torque balances the adiabatic spin transfer torque and the wall stops. In simulations using Gilbert damping, the change in magnetic free energy between the initial and final configurations is independent of the damping parameter as it is determined by the balance between the magnetostatic torque and the adiabatic spin transfer torque. However, the amount of time before the wall stops and the distance the wall moves are inversely proportional to the damping parameter. The Gilbert damping torque is responsible for this increase in energy as can be seen from analyzing the directions of the other torques. Precessional torques, like those due to the exchange and the magnetostatic interactions that are important in these simulations, by their nature are directed in constant energy directions and do not change the magnetic free energy. The adiabatic spin transfer torque is in a direction that translates the domain wall and does not change the energy in systems where the energy does not depend on the position of the wall. Thus, in simulations of ideal domain wall motion without Oersted fields, the Gilbert damping torque is the only torque that changes the energy. Throughout these simulations, the Gilbert damping torque is in a direction that increases rather than decreases the magnetic free energy. \section {Magnetic Damping With Spin-Transfer Torque} When ${\bf N}_{\rm ST}=0$, it is well known that a few lines of algebra convert the equation of motion (\ref{DSZ1}) with Gilbert damping into (\ref{DSZ1}) with Landau-Lifshitz damping (and vice-versa) with suitable redefinitions of the precession constant $\gamma$ and the damping constants $\lambda$ and $\alpha$.\cite{Bertotti} The same algebraic manipulations \cite{Tserkovnyak:2006} show that (\ref{correct}) is mathematically equivalent to a Gilbert-type equation with $\alpha=\lambda/\gamma$: \begin{equation} \label{equivG} \begin{array}{l} {\dot {\bf M}}=-\gamma(1+\alpha^2) {\bf M}\times {\bf H}+\alpha{\bf \hat{M}}\times \dot{\bf M} \nonumber \\ \\ \hspace{.35in} -\upsilon \left [\partial_x {\bf M}-\alpha {\bf \hat{M}}\times \partial_x {\bf M}\right]. \\ \end{array} \end{equation} To analyze (\ref{equivG}), we first ignore spin-transfer (put $\upsilon=0$) and note that this re-written Landau-Lifshitz equation differs from the conventional Gilbert equation only by an $O(\alpha^2)$ renormalization of the gyromagnetic ratio. Consequently, first-principles derivations of any equation of motion for the magnetization must be carried to second order in the putative damping parameter if one hopes to distinguish Landau-Lifshitz damping from Gilbert damping. This observation shows that papers that derive Gilbert damping\cite{Kambersky:1970,pump,Tserkovnyak:2006,Kohno:2006,Duine:2007,Koopmans:2005} or Landau-Lifshitz damping\cite{Callen:1958,Fredkin:2000} from microscopic calculations carried out only to first order in $\alpha$ cannot be used to justify one form of damping over the other.
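For completeness, the algebra behind this equivalence (with ${\bf N}_{\rm ST}=0$) is short. Starting from the Gilbert form, crossing with $\hat{\bf M}$ and using $\hat{\bf M}\cdot{\dot{\bf M}}=0$, so that $\hat{\bf M}\times(\hat{\bf M}\times{\dot{\bf M}})=-{\dot{\bf M}}$, one finds
\begin{eqnarray}
{\dot{\bf M}} &=& -\gamma\,{\bf M}\times{\bf H}+\alpha\,{\bf \hat{M}}\times{\dot{\bf M}}, \nonumber \\
{\bf \hat{M}}\times{\dot{\bf M}} &=& -\gamma\,{\bf \hat{M}}\times({\bf M}\times{\bf H})-\alpha\,{\dot{\bf M}}, \nonumber \\
(1+\alpha^{2})\,{\dot{\bf M}} &=& -\gamma\,{\bf M}\times{\bf H}-\alpha\gamma\,{\bf \hat{M}}\times({\bf M}\times{\bf H}),
\end{eqnarray}
which is of Landau-Lifshitz form with rescaled constants; conversely, a Landau-Lifshitz equation with $\lambda=\alpha\gamma$ reproduces the spin-transfer-free part of (\ref{equivG}).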
Now restore the spin-transfer terms in (\ref{equivG}) and note that the transformation to this equation from (\ref{correct}) automatically generates a non-adiabatic torque with $\beta=\alpha$. This transformation means that to lowest order in $\alpha$ and $\beta$, an equation of motion with Gilbert damping and a non-adiabatic coefficient $\beta_{\rm G}$ is equivalent to an equation of motion with Landau-Lifshitz damping with non-adiabatic coefficient $\beta_{\rm LL}= \beta_{\rm G}-\alpha$. This shows that equivalent equations of motion can be written using either form of damping, albeit with rather different descriptions of current induced domain wall motion. Nevertheless, as we argue below, there are conceptual advantages to the Landau-Lifshitz form. \section{Landau-Lifshitz Damping Uniquely Reduces Magnetic Free Energy} Landau-Lifshitz damping irreversibly reduces magnetic free energy when spin-transfer torque is present. The same statement is not true for Gilbert damping. This can be seen from the situation described in Section~II where Gilbert damping causes a minimum energy domain wall configuration to distort and tilt out of plane. Nothing prevents an increase in magnetic free energy for this open system, but it is clearly preferable if changes in the magnetic configuration that increase $F[{\bf M}]$ come from the effects of spin-transfer torque rather than from the effects of a torque intended to model dissipative processes. This is an important reason to prefer ${\bf D}_L$ in (\ref{DL}) to ${\bf D}_G$ in (\ref{DG}). This argument depends crucially on the fact that the adiabatic spin-transfer torque is {\it not} derivable from a free energy, as we now discuss. The field ${\bf H}$ in Eq.~(\ref{DSZ1}) is the (negative) gradient of the magnetic free energy. The component of this gradient in the direction that does not change the size of the magnetization is $- \hat{\bf M}\times[{\bf M}\times{\bf H}]$. Since this direction is exactly that of the Landau-Lifshitz form of the damping, Eq.~(\ref{DL}), it follows that this form of the damping always reduces this magnetic free energy. When the Gilbert form of the damping, Eq.~(\ref{DG}), is used in Eq.~(\ref{DSZ1}), it is possible to rewrite the damping term as $D_{\rm G} = -\alpha\gamma \hat{\bf M}\times[{\bf M}\times{\bf H}-(1/\gamma){\bf N}_{\rm ST}]+{\cal O}(\alpha^2)$. Further, one can always write ${\bf N}_{\rm ST}= -\gamma{\bf M}\times{\bf H}_{\rm ST}$ where ${\bf H}_{\rm ST}$ is an effective ``spin transfer magnetic field''. However, unlike the field $\mu_0 {\bf H}=-\delta {\bf F}/\delta {\bf M}$ in (\ref{DSZ1}), there is no ``spin transfer free energy'' $F_{\rm ST}$ which gives ${\bf H}_{\rm ST}$ as its gradient: \begin{equation} \label{nfe} \mu_0 {\bf H}_{\rm ST} = -{\delta F_{\rm ST}\over\delta {\bf M}}~~~~~~~~~~({\rm not~correct}). \end{equation} If (\ref{nfe}) were true, the lowest order (in $\alpha$) Gilbert damping term $-\alpha\gamma \hat{\bf M}\times[{\bf M}\times({\bf H}+{\bf H}_{\rm ST})]$ would indeed always lower the sum $F+F_{\rm ST}$. Unfortunately, a clear and convincing demonstration of the non-conservative nature of the spin-transfer torque is not easy to find. Therefore, in what follows, we focus on the adiabatic contribution to (\ref{DSZ2}) and show that a contradiction arises if (\ref{nfe}) and its equivalent, \begin{equation} \label{fake} dF_{\rm ST}=-\mu_0 {\bf H}_{\rm ST}\cdot d{\bf M}, \end{equation} are true.
\begin{figure} \centerline{\includegraphics[width=0.5\textwidth]{DWModel.eps}} \caption{A one-dimensional N\'{e}el domain wall with magnetization ${\bf M}(x)$.} \label{DomainWall} \end{figure} For this argument we consider a simpler model than that discussed in Section~II. Figure~\ref{DomainWall} shows the magnetization ${\bf M}(x)$ for a one-dimensional N\'{e}el wall in a system with uniaxial anisotropy along the $x$-direction. The domain wall of width $w$ is centered at $x=0$ and the plane of the magnetization is tilted out of the $x$-$y$ plane by an angle $\phi$. A convenient parameterization of the in-plane rotation angle $\theta(x)$ is \begin{equation} \label{wall} \theta(x)=\pi/2+\sin^{-1}[\tanh(x/w)]. \end{equation} Therefore, \begin{equation} \label{Mlabel} {\bf M}=M[\cos\theta(x),\sin\theta(x)\cos\phi,\sin\theta(x)\sin\phi], \end{equation} where $\cos\theta(x) = -\tanh(x/w)$ and \begin{equation} \label{sine} \sin\theta(x) = {\rm sech}(x/w). \end{equation} The magnetic free energy of this domain wall is independent of both its position and its orientation (angle $\phi$). For electron flow in the $x$-direction, (\ref{DSZ2}) shows that the adiabatic piece of the spin-transfer torque lies entirely in the plane of the magnetization: \begin{equation} \label{nad} {\bf N}^{\rm ad}_{\rm ST}\propto \theta'(x)(-\sin\theta,\cos\theta\cos\phi,\cos\theta\sin\phi). \end{equation} This torque rotates the magnetization in a manner which produces uniform translation of the wall in the $x$-direction with no change in $\phi$. Since \begin{equation} \label{dt} \theta'(x)= (1/w) {\rm sech}(x/w), \end{equation} comparison with (\ref{sine}) shows that ${\bf N}^{\rm ad}_{\rm ST}=0$ outside the wall as expected. The magnetic free energy of the domain wall does not change as the wall is translated. Now, as indicated above (\ref{nfe}), we are free to interpret the foregoing wall translation as resulting from local precession of ${\bf M}(x)$ around an effective field ${\bf H}_{\rm ST}(x)$ directed perpendicular to the plane of the domain wall. Specifically, \begin{equation} \label{perp} {\bf H}_{\rm ST}(x) \propto \theta'(x)(0,-\sin\phi,\cos\phi). \end{equation} However, if (\ref{nfe}) and thus (\ref{fake}) are assumed to be correct, the magnitude and direction of ${\bf H}_{\rm ST}$ imply that the putative free energy $F_{\rm ST}$ decreases when ${\bf M}(x)$ rotates rigidly around the $x$-axis in the direction of increasing $\phi$.\cite{porridge} On the other hand, the free energy must return to its original value when $\phi$ rotates through $2\pi$. Since the gradient (\ref{nfe}) can never increase the free energy, we are forced to conclude that our assumption that $F_{\rm ST}$ exists is incorrect. \section{A Langevin Equation for the Magnetization} The neglected work of Iwata\cite{Iwata} treats magnetization dynamics from the point of view of the thermodynamics of irreversible processes.\cite{Prig} His non-perturbative calculations uniquely generate the Landau-Lifshitz form of damping. In this section, we make equivalent assumptions but go farther and derive an expression for the damping constant. Mori and co-workers did this using a projection operator method.\cite{Mori} Our more accessible discussion follows Reif's derivation of a Langevin equation for Brownian motion.\cite{Reif} We begin by taking the energy change in a unit volume \begin{equation} dE=- \mu_0 H_{\alpha}dM_{\alpha}, \label{thermo} \end{equation} where the repeated index $\alpha$ implies a sum over Cartesian coordinates.
It is crucial to note that the magnitude $\vert {\bf M}\vert = M$ is fixed so only rotations of ${\bf M}$ toward the effective field ${\bf H}$ change the energy of the system. The interaction with the environment enters the equation of motion for the magnetization through a fluctuating torque $N'_{\alpha}$: \begin{equation} \frac{dM_{\alpha}}{dt}=-\gamma({\bf M}\times{\bf H})_{\alpha}+N'_{\alpha}. \label{eqmot} \end{equation} The torque ${\bf N}'$ is perpendicular to ${\bf M}$ since $|{\bf M}|=M$. We consider the evolution of the magnetization over a time interval $\Delta t$ which is much less than the precession period, but much greater than the characteristic time scale for the fluctuations $\tau^{*}$. After this time interval, the statistical average of the change in magnetization $\Delta M_{\alpha}=M_{\alpha}(t+\Delta t)-M_{\alpha}(t)$ is \begin{equation} \Delta M_{\alpha}=-\gamma({\bf M}\times{\bf H})_{\alpha}(\Delta t)+\int_t^{t+\Delta t}dt' <N'_{\alpha}(t')>. \label{Delta_m} \end{equation} The equilibrium Boltzmann weighting factor $W_0$ gives $<N'_{\alpha}(t')>_0=0$. However, $<N'_{\alpha}(t')>\ne0$ when the magnetization is out of equilibrium. Indeed, this method derives the damping term precisely from the bias built into the fluctuations due to the changes $\Delta E= -\mu_0 H_\nu \Delta M_\nu$ in the energy of the magnetic system. The Boltzmann weight used to calculate $<N'_{\alpha}(t')>$ is $W=W_0\exp(- \Delta E/(k_{\rm B}T))$ where (assuming that ${\bf H}$ does not change much over the integration interval), \begin{eqnarray} \Delta E(t') &=&-\mu_0 H_\nu(t')\int_{t}^{t'}\frac{dM_\nu(t'')}{dt''}dt'' \nonumber\\ &\approx& -\mu_0 H_\nu(t)\int_{t}^{t'}N'_\nu(t'')dt''. \label{Delta_E} \end{eqnarray} Note that precession does not contribute to $\Delta E(t')$. Only motions of the magnetization that change the energy of the magnetic subsystem produce bias in the torque fluctuations. Therefore, since $W=W_0(1- \Delta E/(k_{\rm B}T))$ for small $\Delta E/(k_{\rm B}T)$, the last term in (\ref{Delta_m}) now involves only an average over the equilibrium ensemble: \begin{eqnarray} \label{almost} \Delta M_{\alpha}&\approx&-\gamma({\bf M}\times{\bf H})_{\alpha}(\Delta t) \\ &+&\frac{\mu_0 H_{\nu}(t)}{k_{\rm B}T} \int_{t}^{t+\Delta t}dt' \int_{t}^{t'}dt''<N'_{\alpha}(t')N'_{\nu}(t'')>_{0}.\nonumber \end{eqnarray} We recall now that the torque fluctuations are correlated over a microscopic time $\tau^{*}$ that is much shorter than the small but macroscopic time-interval over which we integrate. Therefore, to the extent that memory effects are negligible, we define the damping constant $\lambda$ (a type of fluctuation-dissipation result) from \begin{equation} \int_{t}^{t'}dt''<N'_{\alpha}(t')N'_{\nu}(t'')>~\approx~ \lambda (k_{\rm B}T M/\mu_0) \delta^\perp_{\alpha\nu}, \label{correlateT} \end{equation} for $|t'- t|\ge\tau^{*}$ and with $\delta^\perp_{\alpha\nu}= \delta_{\alpha\nu}-\hat{M}_\alpha \hat{M}_\nu$, which restricts the fluctuations to be transverse to the magnetization, but otherwise uncorrelated. This approximation reduces the last term in (\ref{almost}) to $\lambda MH_{\perp\alpha} \Delta t$, where ${\bf H}_\perp=-\hat{\bf M}\times(\hat{\bf M}\times {\bf H})$ is the piece of ${\bf H}$ which is perpendicular to ${\bf M}$. Substituting (\ref{correlateT}) into (\ref{almost}) gives the final result in the form \begin{equation} \label{final} \frac{d{\bf M}}{dt}\approx-\gamma({\bf M}\times{\bf H})-\lambda \hat{\bf M}\times({\bf M}\times{\bf H}).
\end{equation} Equation~(\ref{final}) is the Landau-Lifshitz equation for the statistically averaged magnetization. It becomes a Langevin equation when we add a (now) unbiased random torque to the right hand side. The procedure outlined above generates higher order terms in $\lambda$ from the expansion of the thermal weighting to higher order in $\Delta E$. The second order terms involve an equilibrium average of three powers of $N'$. These are zero for Gaussian fluctuations. The third order terms involve an average of four powers of $N'$, and are non-zero. They lead to a term proportional to $\lambda^{2}H^{2}_{\perp}{\bf H}_{\perp}$, which we expect to be small and to modify only large-angle motions of the magnetization. \section{Summary} In this paper, we analyzed current-driven domain wall motion using both Gilbert-type and Landau-Lifshitz-type damping of the magnetization motion. Equivalent equations of motion can be written with either type of damping, but the implied description of the dynamics (and the relative importance of adiabatic and non-adiabatic effects) is very different in the two cases. With Landau-Lifshitz damping assumed, adiabatic spin transfer torque dominates and produces uniform translation of the wall. Non-adiabatic contributions to the spin transfer torque distort the wall, raise its magnetic energy, and thus produce a magnetostatic torque which perturbs the wall velocity. Damping always acts to reduce the distortion back towards the original minimum-energy wall configuration. With Gilbert damping assumed, the damping torque itself distorts and thereby raises the magnetic energy of the moving wall. The distortion-induced magnetostatic torque stops domain wall motion altogether. Additional wall distortions produced by non-adiabatic spin-transfer torque are needed to produce wall motion. In our view, Landau-Lifshitz damping is always preferable to Gilbert damping. When spin-transfer torque is present, this form of damping inexorably moves the magnetic free energy toward a local minimum. Gilbert damping does not. Even in the absence of spin-transfer torque, arguments based on irreversible thermodynamics show that the Landau-Lifshitz form of damping is uniquely selected for a macroscopic description.\cite{Iwata} Here, we proceeded equivalently and derived the Landau-Lifshitz equation of motion as the unique Langevin equation for the statistical average of a fluctuating magnetization with fixed spin length. A.Z. and W.M.S. gratefully acknowledge support from the U.S. Department of Energy under contracts DE-FG02-04ER46170 and DE-FG02-06ER46278. We thank R. A. Duine, H. Kohno, R. D. McMichael, J. Sinova, N. Smith, G. Tatara, and Y. Tserkovnyak for useful discussions.
\section{Coulomb gauge at leading order} The first part of this talk concerns the construction of a leading order truncation to the Dyson-Schwinger equations of Coulomb gauge QCD \cite{Watson:2011kv}. Let us begin by considering the following (standard) functional integral \begin{equation} Z=\int{\cal D}\Phi e^{\imath{\cal S}},\;\;\;\;{\cal S}={\cal S}_q+\int dx(E^2-B^2)/2, \end{equation} where the action (${\cal S}$) is split into a quark component, ${\cal S}_q$, and the Yang-Mills part. ${\cal D}\Phi$ generically denotes the integration measure. The chromomagnetic field, $\vec{B}$, will not concern us in the following. The chromoelectric field, $\vec{E}$, is given by (superscript indices $a,\ldots$ denote the color index in the adjoint representation) \begin{equation} \vec{E}^a=-\partial_0\vec{A}^a-\vec{D}^{ab}A_0^b,\;\;\;\;\vec{D}^{ab}=\delta^{ab}\div-gf^{acb}\vec{A}^c, \end{equation} where $\vec{D}$ is the spatial component of the covariant derivative in the adjoint color representation (the $f$ are the usual $SU(N_c)$ structure constants). We work in Coulomb gauge ($\div\cdot\vec{A}=0$), for which the corresponding Faddeev-Popov (FP) operator is $-\div\cdot\vec{D}$. There are two important points: the FP operator involves purely spatial operators and the chromoelectric field is linear in the temporal component of the gauge field, $A_0$. We now convert to the first order formalism by introducing an auxiliary field $\vec{\pi}$ via the identity \begin{equation} \exp{\left\{\imath\int dx\,E^2/2\right\}}=\int{\cal D}\vec{\pi}\exp{\left\{\imath\int dx\left[-\pi^2/2-\vec{\pi}^a\cdot\vec{E}^a\right]\right\}}. \end{equation} The field $\vec{\pi}$ is split into transverse ($\div\cdot\vec{\pi}^\perp=0$, henceforth we drop the $\perp$) and longitudinal ($\div\phi$) parts. Since the action is now linear in $A_0$, we can integrate it out, to give \begin{equation} Z=\!\int\!\!{\cal D}\Phi\delta\left(\div\cdot\vec{A}\right) \delta\left(\div\cdot\vec{\pi}\right)\mbox{Det}\left[-\div\cdot\vec{D}\right]\delta\left(\div\cdot\vec{D}\phi+\rho\right)e^{\imath{\cal S}'},\;\; \rho^a=gf^{abc}\vec{A}^b\cdot\vec{\pi}^c+g\ov{q}\left[\gamma^0T^a\right]q, \end{equation} where $\rho$ is the color charge (including the quark component, with quark field $q$ and the Hermitian color generator $T^a$). The $\phi$-field can also be integrated out to cancel the FP determinant (Coulomb gauge is formally ghost free). However, noting the temporal zero modes of the FP operator, i.e., those spatially independent fields for which $-\div\cdot\vec{D}\phi(x_0)=0$, one is left with \cite{Reinhardt:2008pr} \begin{equation} Z=\!\!\int\!\!\!{\cal D}\Phi\delta\!\left(\div\cdot\vec{A}\right)\!\delta\!\left(\div\cdot\vec{\pi}\right)\!\delta\!\left(\int d\vec{x}\rho\!\!\right)e^{\imath{\cal S}''},\; {\cal S}''\sim\int\!\!dx\left[\ldots-\rho^a\hat{F}^{ab}\rho^b/2\right]. \end{equation} In the above, one sees that there are two transverse degrees of freedom for the gluon and the total color charge is conserved and vanishing. The Coulomb kernel $\hat{F}=[-\div\cdot\vec{D}]^{-1}(-\nabla^2)[-\div\cdot\vec{D}]^{-1}$ is nonlocal in $\vec{A}$, so we make the leading order truncation whereby it is replaced by its expectation value, which is related to the temporal component of the gluon propagator: $\hat{F}\rightarrow\ev{\hat{F}}\sim W_{00}$. 
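As an aside, the auxiliary-field identity introduced above is the standard Gaussian (Hubbard-Stratonovich) trick: shifting $\vec{\pi}\rightarrow\vec{\pi}-\vec{E}$ in the exponent gives
\begin{equation}
-\frac{1}{2}\left(\vec{\pi}-\vec{E}\right)^{2}-\left(\vec{\pi}-\vec{E}\right)\cdot\vec{E}=-\frac{1}{2}\pi^{2}+\frac{1}{2}E^{2},
\end{equation}
so the residual Gaussian integral over $\vec{\pi}$ is field independent and can be absorbed into the normalization of the measure.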
It is known that in Coulomb gauge on the lattice, $W_{00}$ is infrared (IR) enhanced, going like $\sigma/\vec{q}^4$ but with a coefficient $\sigma$ somewhat larger than the Wilson string tension (see, e.g., Refs.~\cite{Iritani:2010mu,Zwanziger:2002sh}). The charge conservation term is rewritten in the limiting form of a Gaussian, mimicking the Coulomb term: \begin{equation} \delta\left(\int d\vec{x}\rho\right)\sim\lim_{{\cal C}\rightarrow\infty}{\cal N}\exp{\left\{-\imath/2\int dx\,dy\rho_x^a\delta^{ab}{\cal C}\delta(x_0-y_0)\rho_y^b\right\}}, \end{equation} such that we now have instantaneous four-point interaction terms including $\Gamma_{AA\pi\pi}$ and $\Gamma_{\ov{q}q\ov{q}q}$: \begin{equation} {\cal S}_{\mbox{int}}\sim\int dx\,dy\,\left[-\rho_x^a\delta^{ab}\tilde{F}_{xy}\rho_y^b/2\right],\;\;\;\; g^2C_F\tilde{F}(\vec{q})=(2\pi)^3{\cal C}\delta(\vec{q})+8\pi\sigma/\vec{q}^4 \end{equation} ($C_F=(N_c^2-1)/2N_c$). This interaction contains the charge constraint and leads directly to a linearly rising potential with a string tension $\sigma$. To complete the leading order truncation scheme, we restrict to one-loop terms in the following equations and disregard all but the $\tilde{F}$ interaction terms. \section{Gluon gap equation} In the first order formalism, the transverse spatial gluon degrees of freedom ($\vec{A}$, $\vec{\pi}$) have been separated such that there are three propagators $W_{AAij}$, $W_{\pi\pi ij}$ and $W_{A\pi ij}$ ($i,j$ are the spatial indices), correspondingly with three proper functions $\Gamma_{AAij}$, $\Gamma_{\pi\pi ij}$ and $\Gamma_{A\pi ij}$ related via a matrix inversion structure (see, e.g., Ref.~\cite{Watson:2006yq}). Since the interaction content of our truncated system is instantaneous, the energy dependence of these functions is trivial (and the mixed functions will play no role in the discussion here). There are two scalar dressing functions of interest, both functions of spatial momentum: $\Gamma_{AA}(\vec{k}^2)$ and $\Gamma_{\pi\pi}(\vec{k}^2)$. The spatial gluon propagator $W_{AA}$ has the form ($W_{\pi\pi}$ is similar) \begin{equation} W_{AAij}(k)=\imath t_{ij}(\vec{k})\frac{\Gamma_{\pi\pi}(\vec{k}^2)}{[k_0^2-\vec{k}^2\Gamma_{AA}(\vec{k}^2)\Gamma_{\pi\pi}(\vec{k}^2)+\imath0_+]} \end{equation} ($t_{ij}$ is the transverse spatial momentum projector). The truncated Dyson-Schwinger equations have the mnemonic form (omitting kinematical factors etc.) \begin{equation} \Gamma_{\pi\pi}(\vec{p}^2)\sim1+\int dk\tilde{F}(\vec{p}-\vec{k})W_{AA}(k),\;\;\;\; \Gamma_{AA}(\vec{p}^2)\sim1+\int dk\tilde{F}(\vec{p}-\vec{k})W_{\pi\pi}(k). \end{equation} The charge constraint term of the interaction $\tilde{F}$ (i.e., the term $\sim{\cal C}\delta(\vec{p}-\vec{k})$) immediately tells us that both $\Gamma_{AA}$ and $\Gamma_{\pi\pi}$ are divergent as ${\cal C}\rightarrow\infty$ (there is also an IR divergence), meaning that the gluon self-energy is infinite and the propagator poles are shifted to infinity. This has the natural interpretation that one requires infinite energy to create a (colored) gluon from the (colorless) vacuum.
If, however, we consider the static gluon propagator $W_{AA}^{(s)}$, written as \begin{equation} W_{AAij}^{(s)}(\vec{k})=\int\frac{dk_0}{2\pi}W_{AAij}(k)=t_{ij}(\vec{k})\frac{\sqrt{G_k}}{2|\vec{k}|},\;\;\;\;G_k=\frac{\Gamma_{\pi\pi}(\vec{k}^2)}{\Gamma_{AA}(\vec{k}^2)} \end{equation} then we can combine the Dyson-Schwinger equations to get the gluon gap equation \begin{equation} G_p=1+\frac{g^2N_c}{4}\int\frac{d\vec{k}\,\tilde{F}(\vec{p}-\vec{k})}{(2\pi)^3|\vec{k}|}t_{ji}(\vec{p})t_{ij}(\vec{k})\left[\sqrt{G_k}-\frac{\vec{k}^2}{\vec{p\,}^2}\frac{G_p}{\sqrt{G_k}}\right]. \end{equation} This equation has previously been derived in the Coulomb gauge Hamiltonian approach \cite{Szczepaniak:2001rg}. The dressing function for the static propagator, $G$, is IR finite and independent of the charge constraint. Solving numerically (in units of $\sigma$), one sees that the solution has the form $G_x=x/(x+\kappa_x)$ for an IR-constant `mass' function $\kappa(x)$, where $x=\vec{k}^2$. $\kappa_x$ is logarithmically dependent on the numerical ultraviolet (UV) cutoff $\Lambda$ (dimensions of $[\mbox{mass}]^2$), despite the fact that the interaction has the form $1/\vec{q}^4$. $\kappa$ is plotted in the left panel of Fig.~\ref{fig:hap0}. However, defining $a=\kappa(x=0)$ and introducing the scaled variable $x'=x/a$, one finds that $\ov{\kappa}(x')=\kappa(x=x'a)-a$ is independent of $\Lambda$, shown in the right panel of Fig.~\ref{fig:hap0}. It turns out in general that by simply writing all dimensionful quantities in units of the (dynamically generated) gluon mass function at some point, one may construct $\Lambda$-independent dressing functions (and subsequently $G$) without introducing a renormalization constant \cite{Watson:2011kv}. \begin{figure}[t] \vspace{0.8cm} \begin{center} \includegraphics[width=0.45\linewidth]{hap0.eps} \hspace{0.5cm} \includegraphics[width=0.45\linewidth]{hap1.eps} \end{center} \vspace{0.3cm} \caption{\label{fig:hap0}[left panel] $\kappa_x$ as a function of $x=\vec{k}^2$ and [right panel] $\ov{\kappa}(x')$ as a function of $x'=x/a$ for various values of the UV-cutoff $\Lambda$. All dimensionful quantities are in appropriate units of the string tension, $\sigma$. See text for details.} \end{figure} \section{Quark gap equation} Given that the interaction content arising from the Coulomb term couples to the gluon and quark charges in the same manner, the quark sector turns out to be very similar to the gluon sector within the leading order truncation scheme considered here. The instantaneous character of the interaction leads immediately to the following form for the quark propagator in terms of two dressing functions $A$ and $B$ (both functions of $\vec{k\,}^2$): \begin{equation} W_{\ov{q}q}(k)=(-\imath)\frac{\gamma^0k_0-\vec{\gamma}\cdot\vec{k}A_k+B_k}{[k_0^2-\vec{k}^2A_k^2-B_k^2+\imath0_+]}. \end{equation} A possible term $\sim\gamma^0k_0\vec{\gamma}\cdot\vec{k}$ does not appear, just as in the perturbative case \cite{Popovici:2008ty}. The static quark propagator, $W_{\ov{q}q}^{(s)}$, can be written in terms of the mass function, $M$, and quasiparticle energy, $\omega$: \begin{equation} W_{\ov{q}q}^{(s)}(\vec{k})=\int\frac{dk_0}{2\pi}W_{\ov{q}q}(\vec{k})=\frac{\vec{\gamma}\cdot\vec{k}-M_k}{2\omega_k},\;\;\;\;M_k=\frac{B_k}{A_k},\;\;\omega_k^2=\vec{k}^2+M_k^2.
\end{equation} The Dyson-Schwinger equations for the dressing functions $A$ and $B$ have the mnemonic form \begin{equation} A_p\sim1+\int d\vec{k}\tilde{F}(\vec{p}-\vec{k})/\omega_k,\;\;\;\;B_p\sim m+\int d\vec{k}\tilde{F}(\vec{p}-\vec{k})M_k/\omega_k, \end{equation} showing us via the charge constraint that the quark self-energy is divergent (like for the gluon) and one requires infinite energy to extract a single quark from the vacuum. However, combining the equations in terms of the mass function, $M$, leads to the Adler-Davis gap equation \cite{Adler:1984ri} \begin{equation} M_p=m+\frac{1}{2}g^2C_F\int\frac{d\vec{k}\,\tilde{F}(\vec{p}-\vec{k})}{(2\pi)^3\omega_k}\left[M_k-\frac{\vec{p}\cdot\vec{k}}{\vec{p\,}^2}M_p\right]. \end{equation} The mass function is IR finite and independent of the charge constraint. While the above equation was originally derived for chiral quarks (in the Hamiltonian formalism), in the leading order truncation scheme presented here one can show \cite{Watson:2011kv} that it also reproduces the Coulomb gauge heavy quark limit (in the absence of pure Yang-Mills corrections) \cite{Popovici:2010mb}. The mass function is plotted on the left panel of Fig.~\ref{fig:mfunc}. One can see that chiral symmetry is dynamically broken, although the chiral condensate is too small \cite{Watson:2012ht} (this can be improved by considering the spatial quark-gluon vertex \cite{Pak:2011wu}). In the right panel of Fig.~\ref{fig:mfunc}, the dressing $M(x)-m$ is plotted. As the quark mass increases, the dressing initially also increases, but for heavier quarks it becomes smaller; in the heavy quark limit, $M\rightarrow m$. \begin{figure}[t] \vspace{0.8cm} \begin{center} \includegraphics[width=0.45\linewidth]{eap0.eps} \hspace{0.5cm} \includegraphics[width=0.45\linewidth]{eap1.eps} \end{center} \vspace{0.3cm} \caption{\label{fig:mfunc}[left panel] Quark mass function, $M(x)$, and [right panel] dressing, $M(x)-m$, plotted as functions of $x=\vec{p\,}^2$ for a range of quark masses. All dimensionful quantities are in appropriate units of the string tension, $\sigma$. See text for details.} \end{figure} \section{Bethe-Salpeter equation} Within this leading order truncation scheme, it is possible to study the quark-antiquark Bethe-Salpeter equation for (color singlet, flavor nonsinglet) pseudoscalar and vector mesons with arbitrary quark masses \cite{Watson:2012ht}. The pseudoscalar case will be discussed here -- the vector case is similar. In the Coulomb gauge rest frame, the Bethe-Salpeter vertex for a pseudoscalar meson can be written (omitting flavor factors) \begin{equation} \Gamma_{PS}(\vec{p};P_0)=\gamma^5\left[\Gamma_0+P_0\gamma^0\Gamma_1+\vec{\gamma}\cdot\vec{p}\,\Gamma_2+P_0\gamma^0\vec{\gamma}\cdot\vec{p}\,\Gamma_3\right], \end{equation} where $P_0^2=M_{PS}^2$ is the total energy squared (at resonance) for the quark-antiquark pair and $\vec{p}$ the spatial momentum flowing along the quark line. The dressing functions $\Gamma_i$ all have the argument $\vec{p\,}^2$.
There are two basic quantities of initial interest (trace over Dirac matrices): \begin{equation} \left\{\begin{array}{c}f_{PS}\\h_{PS}\end{array}\right\}=\frac{N_c}{M_{PS}^2}\mbox{Tr}_d\int\frac{dk}{(2\pi)^4} \left\{\begin{array}{c}\gamma^5P_0\gamma^0\\M_{PS}^2\gamma^5\end{array}\right\} W_{\ov{q}q}^+(k^+)\Gamma_{PS}(\vec{k};P_0)W_{\ov{q}q}^-(k^-), \label{eq:fhps} \end{equation} where $k^\pm$ represents the energy and spatial momentum argument $(k^0\pm P_0/2,\vec{k})$ and the two quark propagators $W_{\ov{q}q}^\pm$ correspond to bare quark masses $m^\pm$. $f_{PS}$ is the pseudoscalar meson leptonic decay constant. $h_{PS}$ is related to $f_{PS}$ via the axialvector Ward-Takahashi identity (AXWTI) \cite{Watson:2012ht,Maris:1997hd} and this can be compared to the Gell-Mann-Oakes-Renner relation in the chiral limit \begin{equation} M_{PS}^2f_{PS}=(m^++m^-)h_{PS},\;\;\;\;h_{PS}\stackrel{m^\pm\rightarrow0}{\longrightarrow}-\ev{\ov{q}q}/f_{PS}, \end{equation} indicating that $h_{PS}$ is a generalization of the chiral condensate to finite, arbitrary mass quarks. Evaluating the trace and energy integrals for the right-hand side of \eq{eq:fhps}, one obtains spatial integrals over a combination of terms involving IR divergent quantities such as $A$. However, $f_{PS}$ and $h_{PS}$ must be IR finite. Assuming the form \begin{equation} f_{PS}=2\imath N_c\int\frac{d\vec{k}}{(2\pi)^3\omega_k^+\omega_k^-}\frac{[M_k^++M_k^-]}{[\omega_k^++\omega_k^-]}f_k,\;\;h_{PS}=2\imath N_c\int\frac{d\vec{k}}{(2\pi)^3\omega_k^+\omega_k^-}[\omega_k^++\omega_k^-]h_k, \end{equation} the combinations of divergent factors are then contained within the two functions $f$ and $h$. Here is where Coulomb gauge does something special: when one expands the truncated Bethe-Salpeter equation \begin{equation} \Gamma_{PS}(\vec{p}\,;P^0)=-\imath g^2C_F\int\frac{dk}{(2\pi)^4}\tilde{F}(\vec{p}-\vec{k})\gamma^0W_{\ov{q}q}^+(k^+)\Gamma_{PS}(\vec{k};P_0)W_{\ov{q}q}^-(k^-)\gamma^0, \end{equation} the right-hand side takes the mnemonic form $\Gamma_i\sim\int\tilde{F}[\ldots][f_k\;\mbox{or}\;h_k]$ where the terms represented by the dots ($[\ldots]$) involve combinations of only the finite functions $M_k^\pm$ or $\omega_k^\pm$. The Bethe-Salpeter equation can thus be rewritten in terms of only $f$ and $h$. The equal mass case is \begin{align} h_p&=\frac{P_0^2}{4\omega_p^2}f_p +\frac{1}{2}g^2C_F\int\frac{d\vec{k}\tilde{F}(\vec{p}-\vec{k})}{(2\pi)^3\omega_k} \left\{h_k-h_p\frac{\s{\vec{p}}{\vec{k}}}{\vec{p\,}^2}\right\}, \nonumber\\ f_p&=h_p +\frac{1}{2}g^2C_F\int\frac{d\vec{k}\tilde{F}(\vec{p}-\vec{k})}{(2\pi)^3\omega_k} \left\{ f_k\frac{\left[\s{\vec{p}}{\vec{k}}+M_pM_k\right]} {\left[\vec{k}^2+M_k^2\right]} -f_p\frac{\s{\vec{p}}{\vec{k}}}{\vec{p\,}^2} \right\}. \label{eq:psbsee} \end{align} The arbitrary mass case has a similar form. The corresponding vector meson equation is also similar, but involves four functions. One can see that the above form for the Bethe-Salpeter equation thus behaves like the previously discussed gap equations for $G$ and $M$, where the charge constraint and IR divergences cancel and, despite the interaction, the functions $f$ and $h$ are finite. The equations can be compared to those of, for example, Ref.~\cite{Govaerts:1983ft}. Turning to the results, the normalized (see Ref.~\cite{Watson:2012ht}) pseudoscalar and vector meson dressing functions are plotted for the chiral quark case in Fig.~\ref{fig:eqvert} and one sees that indeed, the functions are all finite.
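Numerically, all of the above equations (the gap equations for $G$ and $M$ and the Bethe-Salpeter system) are solved by discretizing the spatial momentum and iterating to a fixed point. The sketch below illustrates only this iteration strategy on a toy gap equation with a contact kernel and UV cutoff (an NJL-like stand-in chosen by us); the actual Coulomb kernel $\tilde{F}$ requires a careful treatment of the cancelling IR divergences and is not attempted here.
\begin{verbatim}
import numpy as np

# Fixed-point iteration for a toy gap equation
#   M = m + G * int_0^Lambda dk k^2 M / sqrt(k^2 + M^2),
# a contact-kernel stand-in for the equations in the text. With a
# contact kernel the mass function is momentum independent, so the
# unknown reduces to a single number M.
m, G, Lam = 0.0, 4.0, 1.0      # chiral limit; G above the critical coupling
k = np.linspace(1e-4, Lam, 400)

M = 0.3                        # initial guess
for _ in range(10000):
    M_new = m + G * np.trapz(k**2 * M / np.sqrt(k**2 + M**2), k)
    if abs(M_new - M) < 1e-12:
        break
    M = M_new
print("dynamically generated mass:", M)   # nonzero even for m = 0
\end{verbatim}
The same monotone iteration, applied pointwise on a momentum grid, is the basic strategy behind solving the momentum-dependent equations above.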
\begin{figure}[t] \vspace{0.8cm} \begin{center} \includegraphics[width=0.45\linewidth]{reseqvertps.eps} \hspace{0.5cm} \includegraphics[width=0.45\linewidth]{reseqvertv.eps} \end{center} \vspace{0.3cm} \caption{\label{fig:eqvert}[left panel] Pseudoscalar and [right panel] vector normalized vertex functions with (equal) chiral quarks, plotted as a function of $x=\vec{k}^2$. All dimensionful quantities are in appropriate units of the string tension, $\sigma$. See text for details.} \end{figure} In Fig.~\ref{fig:eqm}, the meson masses and leptonic decay constants for equal quark masses are plotted (in units of the string tension $\sigma$). Inserting typical values for $\sigma$ \cite{Watson:2012ht}, it becomes obvious that whilst dynamical chiral symmetry breaking is visible ($M_{PS}\sim\sqrt{m}$ as $m\rightarrow0$), the leptonic decay constants are too small, as is the mass-splitting between states for larger quark masses. \begin{figure}[t] \vspace{0.8cm} \begin{center} \includegraphics[width=0.45\linewidth]{reseqml.eps} \hspace{0.5cm} \includegraphics[width=0.45\linewidth]{reseqfl.eps} \end{center} \vspace{0.3cm} \caption{\label{fig:eqm}Pseudoscalar and vector meson masses [left panel] and leptonic decay constants [right panel] with equal mass quarks, plotted as a function of the quark mass. All dimensionful quantities are in appropriate units of the string tension, $\sigma$. See text for details.} \end{figure} Looking at the case for one fixed chiral quark plotted in Fig.~\ref{fig:hlm}, one sees that both the pattern for chiral symmetry breaking ($M_{PS}\sim\sqrt{m}$ as $m\rightarrow0$) and the leading order heavy quark limit ($f_{PS}\sqrt{M_{PS}},f_{V}\sqrt{M_{V}}\sim\mbox{const.}$ as $m\rightarrow\infty$) are present. The leading order Coulomb gauge truncation scheme thus qualitatively accommodates both chiral and heavy quark physics. \begin{figure}[t] \vspace{0.5cm} \begin{center} \includegraphics[width=0.45\linewidth]{reshlm.eps} \hspace{0.5cm} \includegraphics[width=0.45\linewidth]{reshlfm.eps} \end{center} \vspace{0.0cm} \caption{\label{fig:hlm}Pseudoscalar and vector meson masses [left panel] and $f_{PS}\sqrt{M_{PS}}$, $f_V\sqrt{M_V}$ [right panel] with one fixed chiral quark, plotted as a function of the other quark mass. All dimensionful quantities are in appropriate units of the string tension, $\sigma$. See text for details.} \end{figure} \begin{acknowledgments} It is a pleasure to thank the organizers for a most enjoyable and stimulating conference. \end{acknowledgments}
\section{Introduction} The antinomy of two equally justified principles makes the essence of tragedy in literature (Hegel). In the same way, the competition of ordering principles in complex systems adds a new enthralling element to their macroscopic phenomenology. The purest manifestation of this interplay consists of two ordered phases in a physical system that either intertwine or mutually exclude each other \cite{Fisher73,PhysRevB.13.412,PhysRevB.8.4270,PhysRevLett.33.813,PhysRevLett.88.059703,Pelissetto2002549,Calabrese:2003ia,PhysRevB.67.054505,PhysRevE.78.041124,PhysRevE.88.042141,PhysRevE.90.052129}. Corresponding examples can be found everywhere in physics, including magnetic systems \cite{PhysRevLett.34.1638,PhysRevB.18.6165,PhysRevLett.95.217202,PhysRevB.22.1429,PhysRevB.24.1244}, high-temperature superconductors \cite{Zhang21021997,PhysRevB.60.13070,PhysRevB.66.094501,PhysRevB.81.235108,PhysRevB.83.155125,PhysRevB.89.121116}, graphene \cite{PhysRevLett.97.146401,PhysRevB.84.113404,PhysRevB.90.041413,Classen:2015ssa}, and dense quark matter \cite{Berges:1998rc,Strodthoff:2011tz,Fukushima:2013rx}. The theoretical description of such situations is complicated by the need to accurately resolve both universal and non-universal features of the particular phase structure. In fact, many aspects of competing orders are already universally dictated by the theory of critical phenomena. For this, consider a physical system with two distinct macroscopically ordered phases, each of which is separated from the disordered phase by a second order phase transition. The corresponding phase transitions are approached by separately fine-tuning two parameters, $g_1$ and $g_2$, which can be temperature, coupling strength, etc. One may now, for instance, want to know whether there is a second order multicritical point $(g_{1\rm c},g_{2\rm c})$ in the phase diagram where both transition lines meet, or whether there is a coexistence phase of both orders. If a second order multicritical point exists, the system in its vicinity will be described by a critical quantum field theory for the two order parameter fields. The properties of this field theory also determine whether there can be a coexistence phase. Whereas classical multicritical phenomena are typically captured by a euclidean O($N$)+O($M$)-model, bosonic self-energy effects or the presence of gapless fermions can complicate the setting for quantum phase transitions \cite{ZinnJustinBook,SachdevBook,HerbutBook}. Furthermore, even if a multicritical point exists, the microscopic parameters of the model may not lie in the basin of attraction of this fixed point, or there may be energetic reasons for the second order lines to actually become first order lines before they meet. In these cases a coexistence phase is excluded as well. Thus the theoretical modelling of competing orders requires a resolution of both universal properties of a given system under consideration, e.g. critical phenomena, and non-universal ones, e.g. the effective potential needed to resolve first order phase transitions. Furthermore, the method should be applicable beyond the realm of classical phase transitions. A versatile tool which qualifies here is provided by the Functional Renormalization Group (FRG) \cite{Wetterich:1992yh,Morris:1993qb,Berges:2000ew,Gies:2006wv,Schaefer:2006sr,Pawlowski:2005xe,Delamotte:2007pf,Kopietz2010,RevModPhys.84.299,Braun:2011pp,Boettcher201263}.
It naturally captures universal aspects and critical phenomena in the long-wavelength limit, but at the same time allows one to resolve system-specific properties at intermediate scales. In the following we focus on classical multicritical phenomena in O($N$)+O($M$)-models. These are the key building blocks for more involved setups, since the bosonic two-field model always appears as a subsector of the corresponding set of beta functions. In the vicinity of one of the second order lines described above, say the one associated with tuning $g_1$, the corresponding order parameter fluctuations are captured by an effective action or free energy functional $\Gamma$ for the order parameter field $\vec{\Phi}$, which we assume to be O($N$)-symmetric. We approximate here the effective action within the \emph{local potential approximation} including a wave-function renormalization (called LPA$^\prime$ in the following) and write \begin{align} \label{Int1} \Gamma^N[\vec{\Phi}] \simeq \int \mbox{d}^dx \Bigl( -Z_{\Phi} \vec{\Phi}\cdot \nabla^2 \vec{\Phi}+ V(\Phi)\Bigr), \end{align} where $\vec{\Phi}$ is an $N$-component vector and $\Phi=|\vec{\Phi}|$. $V(\Phi)$ is called the effective potential. The translation invariant ansatz is also applicable in the low-energy limit for spin systems close to the phase transition. The critical point of the effective action in Eq. (\ref{Int1}) is approached by fine-tuning a parameter $\sigma \to\sigma_{\rm c}$. Close to the transition line we have a linear relation $g_1-g_{\rm c}\propto \sigma-\sigma_{\rm c}$ which links the actual physical system to the corresponding O($N$)-model. In the same way, the vicinity of the multicritical point is captured by means of an O($N$)+O($M$)-model, which we approximate within LPA$^\prime$ according to the effective action \begin{align} \nonumber \Gamma^{N,M}[\vec{\Phi},\vec{\Psi}] \simeq \int \mbox{d}^dx \Bigl(&-Z_{\Phi} \vec{\Phi}\cdot \nabla^2 \vec{\Phi} -Z_{\Psi} \vec{\Psi}\cdot \nabla^2 \vec{\Psi}\\ \label{Int2} &+V(\Phi,\Psi)\Bigr). \end{align} Herein, $\vec{\Phi}$ and $\vec{\Psi}$ are $N$- and $M$-component vectors, respectively. Furthermore we restrict to the three-dimensional case in the following, $d=3$. A reasonable candidate for a second order multicritical point of the two-field model is given by the decoupled fixed point (DFP) solution. In this case $\Gamma^{N,M}[\vec{\Phi},\vec{\Psi}]=\Gamma^N[\vec{\Phi}]+\Gamma^M[\vec{\Psi}]$, such that the critical properties are inherited from the individual single-field models. This solution represents a stable fixed point of the theory provided that mixed terms in the effective action, like $\int \mbox{d}^dx \lambda_{\Phi\Psi}\Phi^2\Psi^2$, become irrelevant in the infrared. In particular, the scaling dimension of the coupling $\lambda_{\Phi\Psi}$, denoted by $\theta_3$ in the following, is negative for a stable DFP. Aharony's exact scaling relation \cite{PhysRevLett.88.059703,PhysRevLett.51.2386} states that $\theta_3$ satisfies \begin{align} \label{Int3} \theta_3^{(\rm scal.)} = \frac{1}{\nu_1} + \frac{1}{\nu_2}-d \end{align} at the DFP. Herein, $\nu_1$ and $\nu_2$ are the usual correlation length critical exponents of the associated O($N$)- and O($M$)-models in $d$ dimensions, respectively. The latter are well-studied in the literature \cite{Pelissetto2002549}, from which the possibility of a stable DFP can be deduced immediately. Note that the DFP solution is such that it allows for a coexistence phase of both orders \cite{PhysRevE.88.042141}.
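As a concrete illustration, inserting the Monte Carlo values for the O(1)- and O(2)-models quoted in Tab.~\ref{TableONref} below, $\nu_1\simeq0.630$ and $\nu_2\simeq0.672$, into Eq. (\ref{Int3}) gives
\begin{equation}
\theta_3^{(\rm scal.)}=\frac{1}{0.630}+\frac{1}{0.672}-3\simeq0.08>0,
\end{equation}
so that, taken at face value, the DFP of the O(1)+O(2)-model is unstable; we return to this case in Secs.~\ref{SecONM} and \ref{SecAmb}.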
It turns out that for the important cases $N,M\leq3$ the value of $\theta_3$ is rather small. Thus, quantitatively small errors in the $\nu_i$ or violations of the scaling relation in the two-field model might turn a stable fixed point into a seemingly unstable one. Previous studies of O($N$)+O($M$)- and O($N_1$)+O($N_2$)+O($N_3$)-models with the FRG within LPA$^\prime$ indicate a small but visible violation of the scaling relation \cite{PhysRevE.88.042141,Wetzlar,PhysRevE.90.052129}. We resolve this issue and show that it is related to a truncation-related ambiguity in defining the stability matrix of the critical theory. We demonstrate that the scaling relation is valid within LPA$^\prime$ for a commonly used regularization scheme. We argue that although the scaling relation might be violated, the correct fixed point is approached during the renormalization group flow. This is of great importance for computing the phase structure of more complicated models. This paper is organized as follows. In Sec. \ref{SecON} we discuss critical phenomena of O($N$)-models in three dimensions by determining the associated scaling potentials. Those are then used in Sec. \ref{SecONM} to study the stability of the decoupled and the isotropic fixed point in O($N$)+O($M$)-models. In particular, we compute the critical exponents of the associated multicritical points and show that Aharony's scaling relation is valid within LPA$^\prime$. In Sec. \ref{SecAmb} we comment on an ambiguity in defining the stability matrix for the truncated system, and show how it influences both critical and multicritical phenomena. We summarize our main findings in Sec. \ref{SecCon} and give an outlook on possible extensions of our approach. In App. \ref{AppRec} we derive explicit algebraic expressions for the beta functions of the single-field model which are used throughout the work. Some formulas for the two-field model which are used in Sec. \ref{SecAmb} are collected in App. \ref{AppTwo}. \section{Scaling potentials for O(N)-models}\label{SecON} We compute the critical scaling effective actions $\Gamma^N$ and $\Gamma^{N+M}$ in Eqs. (\ref{Int1}) and (\ref{Int2}) by means of the FRG. The latter is formulated in terms of the effective average action, $\Gamma_k$, where $k$ is a momentum scale. The effective average action interpolates between the microscopic action at large scales and $\Gamma_{k=0}=\Gamma$ \cite{Morris:1993qb,Wetterich:2001kra}. Its evolution with $k$ is given by the exact Wetterich flow equation \begin{align} \label{ONa} \partial_k \Gamma_k = \frac{1}{2} \mbox{Tr} \Bigl( \frac{1}{\Gamma^{(2)}_k+R_k}\partial_k R_k\Bigr). \end{align} For an introduction to the method we refer to Refs. \cite{Wetterich:1992yh,Morris:1993qb,Berges:2000ew,Gies:2006wv,Schaefer:2006sr,Pawlowski:2005xe,Delamotte:2007pf,Kopietz2010,RevModPhys.84.299,Braun:2011pp,Boettcher201263}. Here we note that the second functional derivative $\Gamma_k^{(2)}$ appearing on the right hand side makes the equation highly coupled and non-linear. Furthermore, the setting requires specifying a regulator $R_k$, which regularizes the infrared properties of the functional integral. Here we employ the so-called \emph{optimized regulator} for the individual O($N$)-fields \cite{PhysRevD.64.105007,Litim:2001fd}. It is diagonal in field space and given in momentum space by \begin{align} \label{ONb} R_{k}(\vec{q}^2) = Z_\Phi(k^2-\vec{q}^2)\Theta(k^2-\vec{q}^2), \end{align} where $\Theta$ is the step function and $\vec{q}$ is the euclidean momentum.
We seek scaling solutions of the Wetterich equation, which describe the system at its critical point. We introduce our notation for the single-field model here, but all notions generalize in a straightforward way to the two-field model. By defining the dimensionless renormalized field $ \vec{\phi} = \frac{1}{Z_\Phi^{1/2} k^{(d-2)/2}}\vec{\Phi}$, the dimensionless effective potential \begin{align} \label{ONc} v_k(\phi) =k^{-d} V_k(\Phi) \end{align} is a function of $\phi=|\vec{\phi}|$ due to O($N$)-symmetry. The scaling solution $v(\phi)$ satisfies $\dot{v}(\phi)=0$ for all $\phi$, where the dot denotes a derivative with respect to renormalization group time $t=\log(k)$, i.e. $\dot{v}_k = k \partial_k v_k$. The scale dependence of the wave function renormalization is encoded in the anomalous dimension $\eta$. We have \begin{align} \label{ONd} \eta = - \frac{1}{Z_\Phi} \dot{Z}_\Phi. \end{align} In the following we resolve the functional form of the effective potential $v(\phi)$, but approximate the $\phi$-dependence of $\eta$ by evaluating it at the minimum of $v(\phi)$, see Eq. (\ref{ON4}). This constitutes a truncation of the more general function $\eta(\phi)$. With the described parametrization and regularization scheme we truncate the exact equation (\ref{ONa}), such that exact relations will typically be violated. On the other hand, this allows for an approximate solution of the flow equation. The scaling potential for the O($N$)-model within LPA$^\prime$ from the FRG with the optimized regulator is found as the solution of \begin{align} \label{ON1} 0 = -d v(\phi) + a \phi v'(\phi) + \frac{b}{1+v''(\phi)} + \frac{b(N-1)}{1+\phi^{-1}v'(\phi)}, \end{align} where \begin{align} \label{ON2} a &= \frac{d-2+\eta}{2},\ b= \frac{4v_d}{d} \Bigl(1-\frac{\eta}{d+2}\Bigr),\\ \label{ON3} v_d &= \frac{1}{2^{d+1}\pi^{d/2}\Gamma(d/2)}, \end{align} and the dimension is set by $d=3$. The value of the anomalous dimension $\eta$ has to be determined self-consistently such as to satisfy \begin{align} \label{ON4} \eta = \frac{8v_d}{d}\frac{\lambda^2}{\kappa^2(1+\lambda)^2}, \end{align} where $\kappa$ is the minimum of $v(\phi)$ and $\lambda=v''(\kappa)$. The boundary conditions for the potential at the origin are $v'(0)=0$ and $v''(0)=\sigma$. Herein, $\sigma$ is a relevant parameter which has to be fine-tuned such that there exists a solution of Eq. (\ref{ON1}) for all $\phi$. Of course, $v''(0)=\sigma$ translates to $v(0)=bN/[d(1+\sigma)]$, as follows from evaluating Eq. (\ref{ON1}) at $\phi=0$. One could subtract a suitable term from Eq. (\ref{ON1}) such that $v(0)=0$ since $v(0)$ has no physical significance. Eq. (\ref{ON1}) has a trivial constant solution with $\eta=\sigma=0$, which corresponds to the Gaussian fixed point. The constant solution, however, is unstable with respect to perturbations due to the operators $\phi^2$ and $\phi^4$. The number of relevant (dangerous) perturbations of the solution decides on the likelihood of the latter to be realized in a physical system. We seek solutions to Eq. (\ref{ON1}) which have only one relevant direction. In three dimensions, such a solution exists for all $N$. It corresponds to the Wilson--Fisher fixed point of the renormalization group flow. To simplify the following discussion, we refer to the ``solution'' of Eq. (\ref{ON1}) as the one solution $v_\star(\phi)$ which has only one relevant direction. The latter requires fine-tuning of $\sigma$. Further solutions might exist, but are not of interest to us here. For large field amplitudes the solution of Eq.
(\ref{ON1}) behaves as $v(\phi) \sim \phi^{d/a}$. This scaling regime, however, cannot persist to small values of $\phi$ as the term $1/(1+v''(\phi))$ breaks the invariance of $v(\phi)$ with respect to a rescaling of $\phi$. Furthermore, the minimum $\kappa$ turns out to be such that $\kappa < 0.5$ for all relevant cases. Hence it lies in the region of small field amplitudes, and, accordingly, also $\lambda$ and $\eta$ can be deduced from a sufficiently precise resolution of the function $v(\phi)$ for small $\phi$. This observation provides the basis for Taylor expansion schemes based on the shooting method \cite{Morris:1994ie,Morris:1994ki,Codello:2012sc,PhysRevLett.110.141601}. For the shooting method, Eq. (\ref{ON1}) is treated as the evolution of $v(\phi)$ in the formal time-variable $\phi$ with initial conditions $v'(0)=0$ and $v''(0)=\sigma$. For a detailed introduction to the method we refer to Ref. \cite{Codello:2012sc} and restrict to a simple sketch here. Given a value for the parameter $\eta$, say $\eta=0$, there exists exactly one $\sigma=\sigma(\eta)<0$ such that Eq. (\ref{ON1}) can be integrated up to very large $\phi$, possibly $\phi\to\infty$. In practice, the right $\sigma(\eta)$ can be found to arbitrary precision by scanning possible candidates for $\sigma$ in nested intervals. The obtained potential $v(\phi,\eta)$ will typically not satisfy Eq. (\ref{ON4}) for the anomalous dimension. However, by iterating this step while using $\eta$ from the previous step, the solution converges to the scaling solution $v_\star(\phi) = v(\phi,\eta_\star)$ rather quickly in three dimensions. \begin{figure}[t] \centering \includegraphics[width=8cm]{figure_potential} \caption{Scaling potential for the Ising model ($N=1$) determined from Eq. (\ref{ON1}). The solid (blue) line shows $v_\star(\phi)$ obtained with the shooting method. The dashed (red) lines display the corresponding Taylor expansion with $\sigma_{\rm c}$ found from the shooting solution. We show expansions to order $\phi^2$, $\phi^4$, and $\phi^6$. The polynomial ansatz to order $\phi^8$ cannot be distinguished from the solid line within the resolution of this plot. The inset shows $\phi^{-d/a}v(\phi)$ approaching a constant value for large field amplitudes.} \label{FigPotO1} \end{figure} The scaling potentials for O($N$)-models found from the shooting method are numerically exact within LPA$^\prime$. We show an example for $N=1$ in Fig. \ref{FigPotO1}. However, the important region around the minimum $\kappa$, which determines $\eta$, is also well-captured by a Taylor expansion \begin{align} \label{ON5} v(\phi) = \sum_{n=0}^L \frac{v_n}{n!} \phi^n \end{align} around $\phi=0$ with sufficiently large $L$. The coefficients $v_n$ can be obtained from inserting this ansatz into Eq. (\ref{ON1}) with the right $\eta$ and $\sigma_{\rm c}$ known from shooting. In fact, the coefficient $v_n$ can be expressed as an explicit polynomial in the coefficients $v_{n-1},\dots,v_0$, see Eq. (\ref{Rec8}). As a consequence, the full set of coefficients $\{v_n\}$ can recursively be determined from $v_2=\sigma_{\rm c}$, which allows for very large $L$ \cite{Morris:1994ki}. Our procedure for a given O($N$)-model is thus as follows: \begin{itemize} \item[(1)] Determine $\sigma_{\rm c}$ and $\eta$ with the shooting method (see the sketch after this list). \item[(2)] Compute the coefficients $v_n$ from the algebraic recursion relation (\ref{Rec8}) starting from $v_1=0$ and $v_2=\sigma_{\rm c}$.
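\end{itemize}
A minimal sketch of step (1) is given below, for $N=1$ and one fixed value of $\eta$ (a single step of the $\eta$-iteration described above). The nested intervals are implemented as a bisection; the binary classifier used here -- the sign of $v'$ where the integration breaks down -- is an assumption of this sketch and may need to be adapted in practice, cf. Ref.~\cite{Codello:2012sc}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import gamma as Gamma

d, N, eta = 3.0, 1.0, 0.0
a = (d - 2 + eta) / 2.0
v_d = 1.0 / (2**(d + 1) * np.pi**(d / 2) * Gamma(d / 2))
b = (4.0 * v_d / d) * (1.0 - eta / (d + 2))

def rhs(phi, y):
    v, vp = y                                # y = (v, v')
    denom = d * v - a * phi * vp - b * (N - 1) / (1.0 + vp / phi)
    return [vp, b / denom - 1.0]             # Eq. (ON1) solved for v''

def blowup(phi, y):                          # stop once v' runs away
    return abs(y[1]) - 1e6
blowup.terminal = True

def shoot(sigma, phi_max=10.0):
    phi0 = 1e-4                              # start slightly off phi = 0
    y0 = [b * N / (d * (1 + sigma)) + 0.5 * sigma * phi0**2, sigma * phi0]
    return solve_ivp(rhs, (phi0, phi_max), y0, events=blowup,
                     rtol=1e-10, atol=1e-12)

lo, hi = -0.5, -0.01                         # bracket for sigma_c
for _ in range(60):                          # nested intervals
    mid = 0.5 * (lo + hi)
    sol = shoot(mid)
    if sol.y[1, -1] > 0:                     # assumed failure classifier
        lo = mid
    else:
        hi = mid
print("sigma_c ~", 0.5 * (lo + hi))          # compare -0.16574 in Tab. I
\end{verbatim}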
The explicit recursion formula (\ref{Rec8}) for the coefficients $v_n$ allows for a fully algebraic solution of Eq. (\ref{ON1}) as well \cite{Litim:2002cf}. We name this the Taylor shooting method. In this approach we determine $\sigma(\eta)$ such that the (squared) right hand side of Eq. (\ref{ON1}), after inserting the polynomial ansatz (\ref{ON5}) with a given $\sigma$, is smaller than a certain $\varepsilon>0$ for all $\phi \leq \phi_{\rm max}$. For large enough $L$, the result for $\sigma$ converges to $\sigma_{\rm c}$ as obtained from the shooting method. Accordingly, both methods yield the same Taylor expansion (\ref{ON5}) and the same critical exponents. Although the Taylor shooting method is conceptually interesting, as it is fully algebraic, the shooting method is much more efficient in finding $\sigma_{\rm c}$ for practical purposes. The flow equation for the coefficient $v_n$ is given by $\beta_n=(\partial^n\beta/\partial\phi^n)_{\phi=0}$, where $\beta(\phi)$ is the right hand side of Eq. (\ref{ON1}). The same algebraic recursion relations which determine the set $\{v_n\}$ can be used to find $\{\beta_n\}$, see Eq. (\ref{Rec6}). We define the stability matrix $\mathcal{M}$ of the set of differential equations for $\{v_n\}$ by its entries \begin{align} \label{ON6} \mathcal{M}_{nm}[v_\star] = -\frac{\partial \beta_n}{\partial v_m}\Bigr|_{v=v_\star}. \end{align} The derivative is applied for fixed $\eta$. Due to the overall minus sign, relevant infrared directions are signalled by positive eigenvalues of $\mathcal{M}$. We order the set of eigenvalues $\{\theta_i\}$ of $\mathcal{M}$ such that $\theta_1 > \theta_2 > \dots > \theta_L$. For sufficiently large $L$ we have \begin{align} \label{ON7} \nu = \frac{1}{\theta_1}, \end{align} where $\nu$ is the usual correlation length exponent. We display the values of $\eta \equiv \eta_\star$ and $\nu$ found from the shooting method in Tab. \ref{TableON}. For comparison we display reference values from Monte Carlo simulations and the $\varepsilon$-expansion with $\varepsilon=4-d$ in Tab. \ref{TableONref}. The Gaussian fixed point with $v(\phi)=v_0$ yields $\nu=1/2$ and $\eta=0$. In this case $\mathcal{M}$ has eigenvalues $(2,1,0,-1,-2,\dots)$. \begin{table} \begin{tabular}[t]{|c||c|c|c|c|} \hline Model & $\sigma_{\rm c}$ & $\eta$ & $\theta_1$ & $\nu=\frac{1}{\theta_1}$ \\ \hline\hline O(1) & -0.16574 & 0.0443 & 1.545 & 0.647\\ O(2) & -0.20460 & 0.0437 & 1.435 & 0.697\\ O(3) & -0.23588 & 0.0409 & 1.347 & 0.742\\ \hline \end{tabular} \caption{Critical exponents for three-dimensional O($N$)-models within LPA$^\prime$ from the FRG with the optimized regulator. The stability matrix eigenvalue $\theta_1$ and the critical exponent $\nu$ are evaluated from the stability matrix of a polynomial ansatz to order $L=30$ in Eq. (\ref{ON5}). The results are well-converged at this order. The value of $\sigma_{\rm c}$ can be computed efficiently with the shooting method.
For demonstration purposes we show the leading digits of $\sigma_{\rm c}$, although we take into account more digits for the numerical evaluation.} \label{TableON} \end{table} \begin{table} \begin{tabular}[t]{|c||c|c|c|c|} \hline \ & \multicolumn{2}{|c|}{Monte Carlo} & \multicolumn{2}{|c|}{$\varepsilon$-Expansion} \\ \hline\hline Model & $\nu$ & $\eta$ & $\nu$ & $\eta$ \\ \hline O(1) & 0.63002(10) & 0.03627(10) & 0.6290(25) & 0.0360(50) \\ O(2) & 0.6717(1) & 0.0381(2) & 0.6680(35) & 0.0380(50) \\ O(3) & 0.7112(5) & 0.0375(5) & 0.7045(55) & 0.0375(45) \\ \hline \end{tabular} \caption{Reference values for the critical exponents $\nu$ and $\eta$ of the O($N$)-model from Monte Carlo simulations and the $\varepsilon$-expansion with $\varepsilon=4-d$. The Monte Carlo values for $N=1, 2, 3$ are taken from Refs. \cite{PhysRevB.82.174433}, \cite{PhysRevB.74.144506}, \cite{PhysRevB.65.144520}. The results of the $\varepsilon$-expansion are from Ref. \cite{Guida:1998bx} (labelled as ``free'' therein). The deviation of the reference critical exponents from the ones obtained in this work is not related to the shooting method, but is rooted in the simplified LPA$^\prime$ ansatz for the scaling effective action in Eq. (\ref{Int1}).} \label{TableONref} \end{table} \section{Stability of fixed points for O(N)+O(M)-models}\label{SecONM} The scaling solutions found for O($N$)-models can be used to obtain information on multicritical phenomena in O($N$)+O($M$)-models. More precisely, we can deduce the stability of the isotropic fixed point (IFP) and decoupled fixed point (DFP) for the two-field model. The input for this consists of the values of $\sigma_{\rm c}(N)$ and $\eta(N)$ from the O($N$)-models. They allow us to construct the $v_n$ of the O($N$)-model scaling solution $v_\star(\phi)$, and from this we compute the stability matrix of the O($N$)+O($M$)-model at the IFP and DFP, respectively. In the following we consider the two-field model along the lines of the single-field model, with an analogous notation. For the two dimensionless renormalized fields we write $\psi$ and $\chi$ to distinguish them from the field $\phi$ of the single-field model. We also label them by ``1'' and ``2''. The beta function for the effective potential $v(\psi,\chi)$ of the O($N$)+O($M$)-model from the FRG within LPA$^{\prime}$ \cite{PhysRevE.88.042141} is given by \begin{align} \nonumber \beta(\psi,\chi) = &\mbox{ }-d v + a_1 \psi v_\psi + a_2 \chi v_\chi \\ \nonumber &\mbox{ }+ \frac{b_1(1+v_{\chi\chi})+b_2(1+v_{\psi\psi})}{(1+v_{\psi\psi})(1+v_{\chi\chi})-v_{\psi\chi}^2} \\ \label{ONM3} &\mbox{ }+ \frac{b_1(N-1)}{1+\psi^{-1}v_\psi}+\frac{b_2(M-1)}{1+\chi^{-1}v_\chi}, \end{align} where a subscript denotes a partial derivative with respect to the corresponding variable. The expressions for $a_{1,2}$ and $b_{1,2}$ coincide with those for $a$ and $b$ in Eq. (\ref{ON2}) when replacing $\eta \to \eta_{1,2}$. Finding the scaling solution $v_\star(\psi,\chi)$ from $\beta(\psi,\chi)=0$ for all $\psi,\chi$ is a rather involved task. However, two candidates which immediately come to mind are the ones associated with the IFP and the DFP. The scaling potential $v_\star(\psi,\chi)$ of the two-field model at the IFP is given by \begin{align} \label{ONM1} v_\star(\psi,\chi) = v_\star^{N+M}(\phi), \end{align} where $\phi^2 = \psi^2 + \chi^2$ and $v_\star^{N+M}(\phi)$ is the scaling potential of the O($N+M$)-model. The system then possesses an enhanced symmetry.
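At quartic order, for example, the enhanced symmetry ties the mixed coupling to the pure ones: with $v_\star^{N+M}(\phi)\supset\frac{v_4}{4!}\phi^4$ one has
\begin{equation}
\phi^4=(\psi^2+\chi^2)^2=\psi^4+2\psi^2\chi^2+\chi^4,
\end{equation}
so that the coefficient of $\psi^2\chi^2$ is fixed to $v_4/12$, i.e. $v_{22}=v_4/3$ in the parametrization introduced below.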
At the DFP, instead, we have \begin{align} \label{ONM2} v_\star(\psi,\chi) = v^N_\star(\psi) + v^M_\star(\chi), \end{align} where $v_\star^N$ and $v_\star^M$ are the scaling solutions for the single-field O($N$)- and O($M$)-models. Both field variables are then independent of each other. At the IFP we have $\eta_1=\eta_2=\eta^{N+M}$, whereas $\eta_1=\eta^N$ and $\eta_2=\eta^M$ at the DFP. In order to decide on the stability of a scaling potential $v_\star(\psi,\chi)$ we compute the corresponding stability matrix $\mathcal{M}$ of the two-field model. A stable fixed point has only two relevant directions and thus there are exactly two positive eigenvalues of $\mathcal{M}$. The sign of the third eigenvalue, $\theta_3$, decides on the stability of $v_\star(\psi,\chi)$. To compute the stability matrix, we insert the formal ansatz \begin{align} \label{ONM4} v(\psi,\chi) = \sum_{n=0}^L \sum_{m=0}^{L-n} \frac{v_{nm}}{n!m!} \psi^n \chi^m \end{align} into Eq. (\ref{ONM3}) to obtain the beta function for the coefficient $v_{nm}$ as \begin{align} \label{ONM5} \beta_{nm} = \frac{\partial^n}{\partial \psi^n} \frac{\partial^m}{\partial \chi^m} \beta(\psi,\chi) \Bigr|_{\psi=\chi=0}. \end{align} The entries of the stability matrix associated to the solution $v_\star(\psi,\chi)$ are given by \begin{align} \label{ONM6} \mathcal{M}_{nm,n'm'}[v_\star] = - \frac{\partial\beta_{nm}}{\partial v_{n'm'}}\Bigr|_{v=v_\star}. \end{align} As in the single-field case, we keep $\eta_1$ and $\eta_2$ fixed when applying the derivative. We write $v_\star(\phi)=\sum_n \frac{v_n^N}{n!}\phi^n$ for the scaling solution of the single-field O($N$)-model. All $v_n$ with odd $n$ vanish. The scaling potential at the IFP with $N=M$ then has coefficients \begin{align} \label{ONM7} v_{2n,2m} = \frac{(2n)!(2m)!(n+m)!}{(2n+2m)!n!m!} v^{2N}_{2n+2m}. \end{align} At the DFP we have \begin{align} \label{ONM8} v_{nm} = v^N_n \delta_{m0} +v^M_m\delta_{n0} \end{align} with Kronecker delta $\delta_{nm}$. We summarize the results for the eigenvalues of the stability matrix at the IFP and DFP in Tabs. \ref{TableIFP} and \ref{TableDFP}. We confirm the expectation that $\theta_2$ at the IFP and $\theta_1$ and $\theta_2$ at the DFP can be deduced from the knowledge of the eigenvalues of the critical single-field models. (The same is true for further irrelevant eigenvalues, which we do not display here, see for instance \cite{PhysRevE.88.042141}.) The remaining eigenvalues, however, are genuinely determined from the two-field model, and are thus a consequence of the approximations which lead to Eq. (\ref{ONM3}) for the beta function of the O($N$)+O($M$)-model. \begin{table} \begin{tabular}[t]{|c||c|c|c|} \hline Model & $\theta_1$ & $\theta_2$ & $\theta_3$ \\ \hline\hline O(1)+O(1) & 1.756 & \emph{1.435} & -0.042\\ O(1.15)+O(1.15)& 1.767 & \emph{1.406} & -0.001\\ O(1.25)+O(1.25) & 1.774 & \emph{1.388} & 0.025 \\ O(1.5)+O(1.5) & 1.790 & \emph{1.347} & 0.086 \\ \hline \end{tabular} \caption{Stability of the IFP for the three-dimensional O($N$)+O($M$)-model. As the associated scaling potential only depends on the solution of the single-field O($N+M$)-model, we can set $N=M$ for simplicity. We display the three largest eigenvalues of the stability matrix of the two-field model at the IFP. The second one (in italics) coincides with $\theta_1$ from the O($N+M$)-model.
Within our approximation the IFP becomes unstable at a critical number of field components $(N+M)_{\rm c}\simeq 2.3$, where $\theta_3$ becomes positive.} \label{TableIFP} \end{table} \begin{table} \begin{tabular}[t]{|c||c|c|c||c|} \hline Model & $\theta_1$ & $\theta_2$ & $\theta_3$ & $\theta_3^{(\rm scal.)}$ \\ \hline\hline O(1)+O(1) & \emph{1.545} & \emph{1.545} & 0.090 & 0.090\\ O(1)+O(2) & \emph{1.545} & \emph{1.435} & -0.020 & -0.020\\ O(1)+O(3) & \emph{1.545} & \emph{1.347} & -0.108 & -0.108\\ \hline \end{tabular} \caption{Stability of the DFP for the three-dimensional O($N$)+O($M$)-model. The eigenvalues $\theta_i$ are derived from the stability matrix of the two-field model at the DFP. The first two of them (in italics) coincide with the largest eigenvalue of the corresponding O($N$)- and O($M$)-model, see Tab. \ref{TableON}. It is a nontrivial finding that $\theta_3$ coincides with $\theta_3^{(\rm scal.)}=\theta_1+\theta_2-3$ from the scaling relation (\ref{Int3}). This shows the consistent treatment of critical and multicritical phenomena with the FRG within LPA$^\prime$. The scaling relation was found to be valid for all cases which have been tested, also including non-integer $N$ and $M$. } \label{TableDFP} \end{table} In the case of the IFP the enhanced symmetry corresponding to the O($N+M$)-group allows us to choose $N=M$ without loss of generality. We find the critical number of field components for the stability of the IFP to be given by $(N+M)_{\rm c}\approx 2.3$ in agreement with the FRG-study in Ref. \cite{PhysRevE.88.042141}. Using an exponential regulator function, this critical value has been found to be $(N+M)_{\rm c}=3.1$ with the FRG \cite{PhysRevB.65.140402}. This may be compared to $(N+M)_{\rm c}=2.89(4)$ and $(N+M)_{\rm c}=2.87(5)$ from resummed six-loop perturbation theory and constrained five-loop $\varepsilon$-expansion \cite{PhysRevB.61.15136}. These results strongly suggest that the O(1)+O(1)-model has a stable IFP. If $N+M$ slightly exceeds the critical value, the stable fixed point is given by the biconical fixed point (BFP) instead \cite{PhysRevB.67.054505,PhysRevE.88.042141}. The BFP supports a coexistence phase and is linked to tetracritical behavior. The corresponding critical exponents at the multicritical point differ from those of the IFP and DFP. The stability region of the DFP in the $(N,M)$-plane is bounded from below by the sign change of the third eigenvalue, $\theta_3$, of the stability matrix of the two-field model. We find that $\theta_3$ at the DFP indeed agrees with the prediction for $\theta_3^{(\rm scal.)}$ from the scaling relation in Eq. (\ref{Int3}). Here we say that the scaling relation is satisfied if it is true to the number of significant digits displayed in Tab. \ref{TableDFP}. This accuracy is probably sufficient for deciding on the stability of fixed points in all relevant physical cases. Indeed, from the results presented in Tab. \ref{TableDFP} it appears to be very unlikely that the fixed point of an actual physical system is such that $|\theta_3|<0.001$. Although the present analysis indicates that $\theta_3=\theta_3^{(\rm scal.)}$ is valid to higher accuracy, an extensive numerical analysis of the associated convergence is outside the scope of this work. Within LPA$^\prime$ we find that O($N$)+O($M$)-models with integer values of $N+M>2$ support a stable DFP. Due to the validity of the scaling relation, this information can also be deduced from the critical exponents $\nu_i$ in Tab. \ref{TableON}.
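For instance, for the O(1)+O(2)-model the LPA$^\prime$ values of Tab.~\ref{TableON} give
\begin{equation}
\theta_3^{(\rm scal.)}=\frac{1}{0.647}+\frac{1}{0.697}-3\simeq-0.020,
\end{equation}
in agreement with the value of $\theta_3$ listed in Tab.~\ref{TableDFP}.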
The slight deviation of the $\nu_i$ from the world's best values \cite{Pelissetto2002549} due to our approximation can have profound implications for the (apparent) stability of the DFP in a given model. In particular, for the O($1$)+O($2$)-model we find the DFP to be the stable fixed point. In contrast, from the Monte Carlo reference values in Tab. \ref{TableONref} we find $\theta_3^{(\rm scal.)}\simeq0.08$ for the O(1)+O(2)-case, hence suggesting an unstable DFP. For a discussion of the current status of understanding of the stable fixed point in this model we refer to Ref. \cite{PhysRevE.88.042141}. Whereas the scaling potentials of the IFP and DFP are constructed from the solutions of the single-field O($N$)-models, the determination of scaling potentials which are genuinely multicritical, such as the one related to the BFP, requires to solve the partial differential equation $\beta(\psi,\chi)=0$ for $v(\psi,\chi)$ in both variables, $\psi$ and $\chi$. However, inserting the Taylor expansion (\ref{ONM4}) into this differential equation does not yield a recursion relation for the $v_{nm}$. Hence the Taylor shooting method cannot be applied in a straightforward fashion. This also suggests that a direct application of the shooting method fails. The generalization of the shooting method to find generic scaling potentials $v_\star(\psi,\chi)$ in both field variables constitutes an interesting and important direction for future work on multicritical phenomena with the FRG. \section{Truncated stability matrices}\label{SecAmb} In this section we discuss a truncation-related ambiguity in the definition of the stability matrix $\mathcal{M}$, which leads to an apparent violation of the scaling relation (\ref{Int3}) in one case. We further show that this violation of the scaling relation does not conflict with predictions on the stability of the IFP and DFP since in both cases $\theta_3$ turns out to be independent of the corresponding definition. For this we recall that in Eq. (\ref{ON6}) we defined $\mathcal{M}_{nm}=-\partial \beta_n/\partial v_m$ by means of a derivative where the anomalous dimension $\eta$ is kept fixed. However, due to Eq. (\ref{ON4}), $\eta$ depends on $\kappa$ and $\lambda=v''(\kappa)$. With this, $\eta$ implicitly depends on the coefficients $v_n$, and the $v_n$-derivative may be applied to $\eta(\kappa,\lambda)$ appearing in $\beta_n$ as well. In fact, both approaches for computing the stability matrix are applied in the literature. To make the following discussion more transparent we introduce $u(\rho)=v(\sqrt{2\rho})$ and the expansion \begin{align} \label{Amb1} u(\rho) = \sum_{n=2}^{K} \frac{u_n}{n!} (\rho-\rho_0)^n, \end{align} where $\rho=\phi^2/2$ and $\rho_0=\kappa^2/2$ such that $u'(\rho_0)=0$. The running couplings of the system are \begin{align} \{g_n\}=(\rho_0,u_2,\dots,u_{K}). \end{align} The corresponding flow equations are given by $\dot{\rho}_0 = -\beta'(\rho_0)/u_2,\ \dot{u}_n = \beta^{(n)}(\rho_0)+u_{n+1}\dot{\rho}_0$. Here $\beta(\rho)$ is the right hand side of Eq. (\ref{ON1}) expressed in terms of $u$. We have \begin{align} \nonumber \beta(\rho) = & -d u(\rho) +2a \rho u'(\rho)+\frac{b}{1+u'(\rho)+2\rho u''(\rho)}\\ \label{Amb3} &+\frac{b(N-1)}{1+u'(\rho)}. \end{align} The term $u_{n+1} \dot{\rho}_0$ originates from the implicit dependence of $u_n$ on the expansion point $\rho_0$. The anomalous dimension (\ref{ON4}) in this parametrization reads \begin{align} \label{Amb4} \eta = \frac{16 v_d}{d} \frac{\rho_0 u_2^2}{(1+2u_2\rho_0)^2}.
\end{align} The coefficients $u_n$ of the scaling solution $u_\star(\rho)$ are easily derived from the $v_n$ of $v_\star(\phi)$. The expressions for the beta functions of the O($N$)+O($M$)-model in terms of $u$ are presented in App. \ref{AppTwo}. Our goal is to compare results for critical exponents from two different definitions of the stability matrix for O($N$)- and O($N$)+O($M$)-models, which we denote by (A) and (B) in the following. In the single-field case we either compute the stability matrix for fixed anomalous dimension, i.e. \begin{align} \label{AmbA} (A):\ \mathcal{M}_{nm} = -\Bigl(\frac{\partial \beta_n}{\partial g_m}\Bigr)_\eta, \end{align} or we account for the implicit running coupling dependence of $\eta$ via \begin{align} \label{AmbB} (B):\ \mathcal{M}_{nm} = -\Bigl(\frac{\partial \beta_n}{\partial g_m}\Bigr)_\eta - \Bigl(\frac{\partial \beta_n}{\partial \eta}\Bigr)_{u}\ \frac{\partial \eta}{\partial g_m}. \end{align} Both expressions are evaluated for $u_\star$. In Eq. (\ref{AmbB}), the second term yields a nonvanishing contribution for $g_m=\rho_0$ or $g_m=u_2$. In the previous sections we have applied scheme (A). The generalizations of both definitions to the two-field model are given in Eqs. (\ref{AmbATwo}) and (\ref{AmbBTwo}). We find the correlation length exponents $\nu_i$ obtained from (A) and (B) to deviate at the percent level, see Tab. \ref{TableAmb}. As a consequence, we obtain different results for $\theta_3^{(\rm scal.)}$ from Eq. (\ref{Int3}) in O($N$)+O($M$)-models at the DFP. Most dramatically, applying the scaling relation yields contradictory results on the stability of the DFP for the O(1)+O(2)-model. \begin{table} \begin{tabular}[t]{|c||c|c||c|c||c|c|} \hline Model & \multicolumn{2}{|c||}{LPA} & \multicolumn{2}{|c||}{LPA$^\prime$, (A)} & \multicolumn{2}{|c|}{LPA$^\prime$, (B)} \\ \hline\hline & $\eta$ & $\nu$ & $\eta$ & $\nu$ & $\eta$ & $\nu$ \\ \hline O(1) & 0 & 0.650 & 0.0443 & 0.647 & 0.0443 & 0.637\\ O(2) & 0 & 0.708 & 0.0437 & 0.697 & 0.0437 & 0.686\\ O(3) & 0 & 0.761 & 0.0409 & 0.742 & 0.0409 & 0.732\\ \hline\hline DFP & $\theta_3$ & $\theta_3^{(\rm scal.)}$ & $\theta_3$ & $\theta_3^{(\rm scal.)}$ & $\theta_3$ & $\theta_3^{(\rm scal.)}$ \\ \hline O(1)+O(1) & 0.079 & 0.079 & 0.090 & 0.090 & 0.090 & 0.141 \\ O(1)+O(2) & -0.048 & -0.048 & -0.020 & -0.020 & -0.020 & 0.028 \\ O(1)+O(3) & -0.147 & -0.147 & -0.108 & -0.108 & -0.108 & -0.063\\ \hline \end{tabular} \caption{We compare critical exponents computed from truncated stability matrices where the variation of the anomalous dimension is either neglected or respected, labelled (A) and (B). They correspond to Eqs. (\ref{AmbA}),(\ref{AmbATwo}) and (\ref{AmbB}),(\ref{AmbBTwo}), respectively. Within scheme (A), which has been applied in the previous sections, and which also includes LPA as a special case, the scaling relation $\theta_3=\theta_1+\theta_2-d$ is satisfied at the DFP. In contrast, the scaling relation is violated when applying (B). However, the eigenvalue $\theta_3$ found from the two-field model coincides in both cases.} \label{TableAmb} \end{table} In the two-field model, the ambiguity in defining $\mathcal{M}$ seems to afflict only those eigenvalues which are inherited from the single-field models. Those have been highlighted in italics in Tabs. \ref{TableIFP} and \ref{TableDFP}. Indeed, at the IFP, the results for $\theta_1$ and $\theta_3$ computed with (A) agree with the results of Ref. \cite{PhysRevE.88.042141} computed with (B).
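As an aside, the generic difference between the two prescriptions is easily demonstrated with a toy finite-difference computation; all functions below are invented for illustration and are not the system above.
\begin{verbatim}
import numpy as np

def eta(g):                  # toy stand-in for eta(rho_0, u_2), Eq. (Amb4)
    return g[0] * g[1]**2 / (1.0 + g[0] * g[1])**2

def beta(g, et):             # toy beta functions at fixed eta
    return np.array([-g[0] + et * g[1], g[1]**2 - et * g[0]])

def jac(f, x, eps=1e-6):     # central-difference Jacobian
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

g = np.array([0.5, 0.3])                      # toy "fixed point"
M_A = -jac(lambda x: beta(x, eta(g)), g)      # scheme (A): eta held fixed
M_B = -jac(lambda x: beta(x, eta(x)), g)      # scheme (B): eta varies with g
print(np.linalg.eigvals(M_A))                 # the exponents generically
print(np.linalg.eigvals(M_B))                 # differ between the schemes
\end{verbatim}
This toy example mirrors the situation in the two-field model, where only some of the eigenvalues are sensitive to the choice of prescription.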
In contrast, the value of $\theta_2$, which is identical to $1/\nu$ of the corresponding O($N+M$)-model, disagrees due to the difference in the correlation length exponents. A similar behavior is found at the DFP. We find the first two eigenvalues $\theta_1$ and $\theta_2$ computed within (A) and (B) to coincide with $1/\nu_i$ of the individual single-field models computed within (A) and (B), respectively. For (A), this is shown in Tab. \ref{TableDFP}. The third exponent at the DFP, $\theta_3$, is found to be independent of the prescription for computing the stability matrix. Consequently, the scaling relation $\theta_3=\theta_1+\theta_2-d$ is satisfied for (A), whereas it is violated for (B). Within a truncation which neglects the anomalous dimension, referred to as LPA, the scaling relation is also satisfied. These findings are summarized in Tab. \ref{TableAmb}. In the analysis put forward so far we focussed on solving fixed point equations. The corresponding set of equations arises in the $k\to0$ limit of the renormalization group flow of $\Gamma_k$ for a system tuned to criticality. During the evolution with $k>0$, the apparent ambiguity between (A) and (B) is lifted. The running of couplings \emph{does} resolve the $\kappa$- and $\lambda$-dependence of the anomalous dimension. Variations in $\kappa_k$ and $\lambda_k$ will influence $\eta_k=\eta(\kappa_k,\lambda_k)$ and might drive the system away from criticality. Hence the scaling exponents of eigenperturbations which appear during the flow are determined by scheme (B). Accordingly, the scaling relation is violated within LPA$^\prime$ during the flow for a Taylor expansion of the effective potential. It is an important finding of our investigation that the scaling dimension $\theta_3$ of the operator $\Phi^2\Psi^2$ at the IFP and DFP is independent of the definition of the stability matrix. As a result, when considering a particular physical system by means of the flow equation (\ref{ONa}), the underlying O($N$)+O($M$)-model for low $k$ resolves the stability of the IFP or DFP (i.e. the value of $\theta_3$) in a unique fashion. The scaling relation (\ref{Int3}) allows one to \emph{predict} the value of $\theta_3$ at the DFP by inserting the $\nu_i$ of the O($N$)-model as obtained from scheme (A). \section{Conclusions and outlook}\label{SecCon} In this work we have investigated scaling solutions for O($N$)- and O($N$)+O($M$)-models in three dimensions within the framework of Functional Renormalization. The results have been obtained for a specific, but commonly applied truncation and regularization scheme for the effective average action. Our main findings are summarized in the following list. \begin{itemize} \item[(1)] By a combination of shooting and algebraic recursion techniques it is possible to efficiently determine scaling solutions for O($N$)-models and to decide on the stability of the IFP and DFP for multicritical O($N$)+O($M$)-models. \item[(2)] Aharony's scaling relation (\ref{Int3}) is valid at the DFP within LPA$^\prime$. In particular, the value of $\theta_3$ at the DFP within our truncation can be deduced from critical exponents of O($N$)-models. \item[(3)] Previously found violations of the scaling relation are related to an ambiguity in defining the stability matrix of the truncated system. The scaling relation is violated during the renormalization group flow of running couplings. \item[(4)] The value of $\theta_3$ at both the IFP and DFP is not affected by this ambiguity.
Therefore, the stability of multicritical points is faithfully captured by the running of couplings. The violation of the scaling relation during the flow is thus reduced to a mere little blemish. \end{itemize} We conclude that Functional Renormalization provides a consistent picture of both critical and multicritical phenomena for scalar theories. In particular, the present truncation scheme with a Taylor expansion of the effective potential is simple enough to be applied to more complicated physical systems. We do not expect the regularization scheme to qualitatively change any of the above statements. To reach higher quantitative precision for critical exponents in O($N$)- and O($N$)+O($M$)-models, further improvements of the truncation are required. For one, the Taylor expansion of $v(\phi)$ is expected to have a finite radius of convergence. It thus fails to resolve the asymptotic scaling behavior for large $\phi$. Although the latter is captured by the shooting method, this information is eventually lost when defining the stability matrix in terms of the associated Taylor coefficients $v_n$. A possible way around this problem is to use spectral methods \cite{Borchardt:2015rxa}, such as projection onto a complete set of orthogonal polynomials, to resolve the full functional form of $v(\phi)$. The corresponding techniques can also be used to determine the scaling dimension of eigenperturbations $\delta v(\phi)$ of the system. It will be interesting to study whether approximation schemes beyond a Taylor expansion fulfill the scaling relation also within (B), and thus eliminate the above-mentioned blemish. Another direction of improving the present truncation consists in the inclusion of a field-dependent wave function renormalization, or kinetial, $Z_\Phi(\Phi)$. Among other changes on the right hand side of Eq. (\ref{ON1}), we will have a field-dependent anomalous dimension $\eta(\phi)$. The scaling solution then consists of two functions, $v_\star(\phi)$ and $\eta_\star(\phi)$. The corresponding field-dependence can now be computed with the same methods as introduced above. In particular, we may hope to resolve the ambiguity between (A) and (B) in this way. Indeed, it probably originates from the rather crude approximation $\eta(\phi) \approx \eta(\kappa)$ for all $\phi$. The remaining dependence of $\eta(\kappa)$ on the running couplings is very limited in its ways to react to perturbations. Therefore, defining the stability matrix according to (B) need not be a consistent improvement of the truncation, and (A) is the safer choice, as it can be seen as an expansion in $\eta\ll1$. Exciting applications of the extension by means of $\eta(\phi)$ are found in lower-dimensional systems \cite{PhysRevLett.75.378,PhysRevB.64.054513,PhysRevE.90.062105}. The results of this work provide a solid basis for studies of competing ordering phenomena in the realm of fermionic quantum phase transitions with the FRG. Therein, the beta functions for the O($N$)+O($M$)-model appear as a subset of the larger set of flow equations for the whole system. For instance, such a system could be given by the Gross--Neveu--Yukawa-model with O(1)- and O(3)-symmetric order parameter fields in 2+1 dimensions, describing multicriticality of gapless Dirac fermions in graphene \cite{Classen:2015ssa}. Within LPA$^\prime$ it will be exciting to see whether associated scaling relations are still satisfied within (A).
Furthermore, our finding of the scheme-independence of $\theta_3$ for both the IFP and DFP -- if it persists in the presence of fermions -- allows one to unambiguously resolve the corresponding value of $\theta_3$ and thus the stability of multicritical points from the flow. Note that fermion-boson couplings are in general also $\phi$-dependent, e.g. given by a Yukawa coupling $h(\phi)$, which introduces new challenges for solving the scaling equations \cite{Pawlowski:2014zaa,Vacca:2015nta}. Again, Taylor expansion schemes or spectral methods qualify as candidates to address such questions. \begin{center} \textbf{Acknowledgements} \end{center} The author thanks M. M. Scherer, C. Wetterich, and S. Wetzel for inspiring discussions. This work is supported by the Graduate Academy Heidelberg and ERC Advanced Grant No. 290623.
\section*{Acknowledgments} The work was partially supported by the National Science Foundation (NSF CNS--1822848 and NSF DGE--2039610). Portions of this work were conducted with the advanced computing resources provided by Texas A\&M High Performance Research Computing. \bibliographystyle{unsrt} \section{DETERRENT: Detecting Trojans using Reinforcement Learning} \label{sec:proposed_framework} We now formulate the trigger activation problem as an RL problem; this simple formulation, however, suffers from challenges related to scalability, efficiency, and performance. We then address these challenges and devise a final RL agent that outperforms all existing techniques. \subsection{A Simple Formulation} \label{sec:initial_formulation} As shown in Figure~\ref{fig:Trojan}, to activate the trigger, the defender has to apply an input pattern that forces all four rare nets to take their rare values simultaneously,\footnote{For the sake of conciseness, henceforth, we shall use the phrase ``activate the rare nets'' instead of ``force the rare nets to take their rare values.''} but the defender does not know which four rare nets constitute the trigger. A na\"ive solution is to generate one input pattern for each combination of four rare nets. Such an approach would require up to ${^r}C_{4}$ test patterns ($r$ is the total number of rare nets), which would be infeasible to employ in practice. However, one input pattern can activate multiple different combinations of rare nets simultaneously. So, we need to find a minimal set of input patterns that can collectively activate all combinations of rare nets. This problem is a variant of the set-cover problem, which is NP-complete~\cite{cormen2009introduction}. We call a set of rare nets \textit{compatible} if there exists an input pattern that can activate all the rare nets in the set simultaneously. Thus, our objective is to develop an RL agent that generates maximal sets of compatible rare nets. We now map the trigger activation problem into an RL problem by formulating it as a Markov decision process. \begin{itemize}[leftmargin=*] \item \textbf{States $\mathcal{S}$} is the set of all subsets of the rare nets. An individual state $s_t$ represents the set of compatible rare nets at time $t$. \item \textbf{Actions $\mathcal{A}$} is the set of all rare nets. An individual action $a_t$ is the rare net chosen by the agent at time $t$. \item \textbf{State transition $P(s_{t+1}|a_t,s_t)$} is the probability that action $a_t$ in state $s_t$ leads to the state $s_{t+1}$. In our case, if the chosen rare net (i.e., the action) is compatible with the current set of rare nets (i.e., the current state), we add the chosen rare net to the set of compatible rare nets (i.e., the next state). Otherwise, the next state remains the same as the current state. Thus, in our case, the state transition is deterministic, as shown below. \[ s_{t+1}= \begin{dcases} \{a_t\} \cup s_t, & \text{if } a_t\text{ is compatible with }s_t\\ s_t, & \text{otherwise} \end{dcases} \] \item \textbf{Reward function $R(s_t,a_t)$} is equal to the square of the size of the next state for compatible actions, and 0 otherwise. \[ R(s_{t},a_t)= \begin{dcases} |\{a_t\} \cup s_t|^2 = |s_{t+1}|^2, & \text{if } a_t\text{ is compatible with }s_t\\ 0, & \text{otherwise} \end{dcases} \] The reward is designed so that the agent tries to maximize the size of the state, i.e., the number of compatible rare nets.
We square the reward at each step, but any power greater than 1 would be appropriate: we want the reward function to be convex because the difficulty of finding a new compatible rare net increases as the size of the state grows. \item \textbf{Discount factor $\gamma$} $(0 \leq \gamma \leq 1)$ indicates the importance of future rewards relative to the current reward. \end{itemize} The initial state $s_0$ is a singleton set containing a randomly chosen rare net. At each step $t$, the agent in state $s_t$ chooses an action $a_t$, arrives in the next state $s_{t+1}$ according to the state transition rules, and receives a reward $r_{t}$. This cycle of state, action, reward, and next state is repeated $T$ times, and this constitutes one \textit{episode}. At the end of each episode, the state of the agent reflects the rare nets that are compatible.\footnote{For software implementation, we represent the states (which are defined as sets) as binary vectors, with each element on the vector indicating whether the corresponding rare net is present in the state or not.} Since the state and action spaces are discrete, we train our agent using the Proximal Policy Optimization (PPO) algorithm with default parameters unless specified otherwise~\cite{schulman2017proximal}. Once the agent returns the maximal sets of compatible rare nets after training, we pick the $k$ largest distinct sets and generate the test patterns corresponding to those sets using a Boolean satisfiability (SAT) solver. $k$ is a hyperparameter of our technique. Our experiments indicate that this simple agent performs well on small benchmarks. However, for larger benchmarks like the \texttt{MIPS} processor from OpenCores~\cite{OpenCores_MIPS}, we obtain low trigger coverage ($\approx$70\% after training for 12 hours). We analyzed the basic architecture in detail and identified certain challenges, which are presented next. \begin{figure}[htb] \centering \includegraphics[width=0.475\textwidth,trim={0.2cm 0.4cm 0 0},clip]{figures/figure2.pdf} \caption{Combinations of reward and masking methods for \texttt{MIPS}. Eoe: End-of-episode, M: Masking, NM: No masking } \label{fig:challenge_sol2} \end{figure} \subsection{End-of-Episode Reward Computation} \noindent\textbf{Challenge 1: Large training time.} The basic architecture requires computing the reward for each time step, which involves checking if the selected action is compatible with the current state or not. For a large benchmark like the \texttt{MIPS} processor, the check takes a few seconds (because of the large number of gates in the benchmark) each time, and the agent requires millions of steps to learn. Hence, the training time becomes prohibitively large. \noindent\textbf{Solution 1.} To address challenge 1, we reduce the frequency of reward computation by computing it only at the end of the episode. At all intermediate steps, the reward is set to 0. While this approach speeds up the training by a factor of $\approx86\times$, the rewards become sparse, and this affects the performance of our agent. However, the impact on performance is only $5.6\%$, as shown in Table~\ref{tab:challenge_sol1}. \begin{table}[htb] \caption{Comparison of training rates for the reward methods for the \texttt{MIPS} benchmark: all steps vs. end-of-episode.} \label{tab:challenge_sol1} \resizebox{0.45\textwidth}{!}{% \begin{tabular}{cccc} \toprule \multirow{2}{*}{Method} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Max.
\# compatible \\ rare nets\end{tabular}} & \multicolumn{2}{c}{Rate} \\ \cmidrule(lr){3-4} & & (steps/min) & (eps./min) \\ \midrule Reward at all steps & 53 & 108 & 0.72\\ End-of-episode reward & 50 & 9387 & 63\\ Improvement & -5.6\% & 86.91$\times$ & 87.5$\times$\\ \bottomrule \end{tabular} } \end{table} \subsection{Masking Actions for Efficiency} \noindent\textbf{Challenge 2: Wasted efforts in choosing actions.} Another challenge that the basic architecture suffers from is inefficiency in choosing actions. At each step, the actions available to the agent remain the same, irrespective of the state of the agent. This leads to situations where the agent chooses an action that has already been chosen in the past, or that is known to be incompatible with at least one of the rare nets in the current state. Hence, the time spent by the agent on such steps is wasted. \begin{figure}[tb] \centering \includegraphics[width=0.475\textwidth,trim={0 0.35cm 0 0.35cm},clip]{figures/figure3.pdf} \caption{Total loss trends in \texttt{c2670}: default vs. boosted exploration.} \label{fig:challenge_sol3} \end{figure} \begin{figure*}[tb] \centering \includegraphics[width=\textwidth,trim={0.75cm 0.73cm 0.6cm 0},clip]{figures/figure4.pdf} \caption{Architecture of DETERRENT.} \label{fig:RL_pipeline} \end{figure*} \noindent\textbf{Solution 2.} To increase the efficiency of the agent in choosing actions, we mask the actions available to the agent based on the state at any given time step. This ensures that at each time step, the agent only chooses actions that lead it to a new state. Additionally, rewards become less sparse because episode lengths shrink due to masking (an episode ends when no actions are available). Since we are eliminating actions from the action space, one may wonder if this approach may eliminate optimal actions. We now prove that this is not possible for our problem formulation. \begin{theorem} Masking actions does not prevent our agent from learning anything that it could have learned otherwise. \end{theorem} \begin{proof} Let $\mathcal{P}'$ and $\mathcal{P}$ denote an agent that masks and does not mask actions, respectively. Suppose both $\mathcal{P}$ and $\mathcal{P}'$ are in state $s$. Let $\mathcal{A}$ denote the complete set of actions, and $\mathcal{A}_s$ denote the set of masked actions for state $s$. So, $\mathcal{A}_s = \{i|i \text{ is compatible with }s \text{ and } i \notin s\}$ and $\mathcal{A}_s \subseteq \mathcal{A}$. If $\mathcal{P}$ chooses an action $a \in \mathcal{A} \setminus \mathcal{A}_s$ (i.e., an action in the set difference), then $\mathcal{P}$ will stay in the same state because the rare net corresponding to such an action $a$ would either be incompatible with $s$ or it would already be in $s$. On the other hand, for any action $a' \in \mathcal{A}_s$ chosen by $\mathcal{P}$, agent $\mathcal{P}'$ can also choose the same action $a'$ since it is in $\mathcal{A}_s$. Hence, masking does not prevent our agent from learning anything that the corresponding unmasked agent could have learned. \end{proof} To enable masking, we compute pairwise compatibility of all rare nets using a SAT solver before training. Since the compatibility computation for each unique pair is independent, we parallelize it across 64 processes to reduce the runtime.
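A minimal sketch of the masking step is shown below. It assumes the offline pairwise-compatibility results are stored in a symmetric boolean matrix \texttt{C} (an assumed data layout, not necessarily ours), and keeps exactly the actions that can lead to a new state. Note that pairwise compatibility is a necessary, not sufficient, condition for joint compatibility, which is why the mask acts as a filter.
\begin{verbatim}
import numpy as np

def action_mask(C: np.ndarray, state: np.ndarray) -> np.ndarray:
    """C[i, j] = True iff rare nets i and j are pairwise compatible
    (computed offline with a SAT solver); state is a binary vector
    over rare nets (1 = net is already in the state)."""
    in_state = state.astype(bool)
    # keep nets that are pairwise compatible with every net in the state
    compatible_with_all = np.all(C[:, in_state], axis=1)
    return compatible_with_all & ~in_state   # and not already chosen

# Example: 4 rare nets, nets 0 and 2 incompatible; state contains net 0.
C = np.ones((4, 4), dtype=bool); C[0, 2] = C[2, 0] = False
print(action_mask(C, np.array([1, 0, 0, 0])))  # [False True False True]
\end{verbatim}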
During training, for a given state $s$ (i.e., the set of compatible rare nets at the current step), all actions (i.e., rare nets) that are incompatible with at least one of the rare nets in $s$, or that are already in $s$, are masked off and hence cannot be chosen. To design the best architecture, we implemented agents with all combinations of reward methods (at all steps and end-of-episode) and masking (with and without). The results in Figure~\ref{fig:challenge_sol2} demonstrate that, to obtain the maximum number of compatible rare nets, the optimal architecture should mask actions based on the state and provide rewards at each time step.
\subsection{Boosting Exploration}
\noindent\textbf{Challenge 3: Convergence to local optima.} Since the agent's objective is to generate maximal sets of rare nets, for certain benchmarks (for instance, \texttt{c2670}), the agent gets stuck in local optima. In other words, the agent quickly learns to capitalize on sub-optimal sets of compatible rare nets, thereby missing out on the diversity of the sets of compatible rare nets, resulting in poor trigger coverage.
\noindent\textbf{Solution 3.} To force the agent to explore, we (1) include an entropy term in the loss function of the agent and (2) control the smoothing parameter that affects the variance of the loss calculation. To implement (1), we modify the total loss function to $l = l_{\pi} + c_\epsilon \times l_{\epsilon} + c_v \times l_{v}$, where $l$ is the total loss, $l_{\pi}$ is the loss of the policy network, $l_\epsilon$ is the entropy loss, $l_v$ is the value loss, and $c_\epsilon$ and $c_v$ are the coefficients for the entropy and value losses, respectively. We set $c_{\epsilon} = 1$. The entropy loss is inversely proportional to the randomness in the choice of actions. To implement (2), we set the smoothing parameter $\lambda$ of the policy loss $l_{\pi}$ in PPO to $0.99$. This increases the variance in the loss calculation and hence in the actions chosen by the agent. In effect, we penalize the agent for having low variance in its choice of actions. Hence, the agent is forced to explore more and is likely to converge to a better state, i.e., a state with more compatible rare nets. Figure~\ref{fig:challenge_sol3} shows that, with the modified loss function and smoothing parameter in PPO, the loss does not become $0$ quickly, forcing the agent to explore more.
\subsection{Putting it All Together} \label{sec:final}
The final architecture of DETERRENT is illustrated in Figure~\ref{fig:RL_pipeline}. In an offline phase, we find the rare nets of the design and generate pairwise compatibility information for them in a parallelized manner. Then, for each episode, the agent starts with a random rare net and takes an action according to the policy (a neural network) and the action mask. The masked action is evaluated to produce a reward for the agent, and the agent moves to the next state. This procedure repeats for $T$ steps (i.e., one episode). Internally, after a certain number of episodes, the PPO algorithm translates the rewards into losses (depending on the outputs of the policy network, which generates actions, and the value network, which predicts the expected reward of an action), which are used to update the parameters of the policy and value networks. Eventually, when the agent has learned the task, the losses become negligible, and the reward saturates. Once the RL agent gives us the maximal sets of compatible rare nets, we pick the $k$ largest distinct sets and generate the test patterns, one for each of those sets, using a SAT solver.
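To make the interplay of masking, state transitions, and the per-step reward concrete, the following sketch mimics one episode of the final architecture. It is illustrative only: we assume a state is a Python set of rare-net indices, \texttt{compat} is the pairwise-compatibility dictionary computed offline, and \texttt{policy} stands in for the PPO policy network; the per-step SAT compatibility check and the trainer plumbing are omitted for brevity.
\begin{verbatim}
import random


def action_mask(state, all_nets, compat):
    # Allowed actions: rare nets not already in the state and pairwise
    # compatible with every rare net currently in the state.
    return [a for a in all_nets
            if a not in state
            and all(compat[tuple(sorted((a, s)))] for s in state)]


def run_episode(all_nets, compat, policy, T):
    state = {random.choice(all_nets)}        # s_0: one random rare net
    for _ in range(T):
        allowed = action_mask(state, all_nets, compat)
        if not allowed:                      # no action left: episode ends
            break
        action = policy(state, allowed)      # masked choice (PPO network)
        state.add(action)                    # transition to the next state
        reward = len(state) ** 2             # squared per-step reward
        # (the reward is consumed by the PPO trainer, omitted here)
    return state                             # a maximal compatible set
\end{verbatim}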
\section{Experimental Evaluation}\label{sec:results} \subsection{Experimental Setup} \label{sec:setup} \begin{table*}[ht] \centering \caption{Comparison of trigger coverage (Cov. (\%)) and test length of DETERRENT with random simulations, Synopsys TestMAX~\cite{TestMAX}, TARMAC~\cite{TARMAC_TCAD}, and TGRL~\cite{pan2021automated}. Evaluation is done on 100 random four-width triggered HT-infected netlists.} \label{tab:my-table} \resizebox{\textwidth}{!}{ \begin{tabular}{cccccccccccccc} \toprule \multirow{3}{*}{Design}& \multirow{1}{*}[-0.1cm]{Number} & \multirow{3}{*}{\# Gates} & \multicolumn{2}{c}{Random} & \multicolumn{2}{c}{TestMAX~\cite{TestMAX}} & \multicolumn{2}{c}{TARMAC~\cite{TARMAC_TCAD}} & \multicolumn{2}{c}{TGRL~\cite{pan2021automated}} & \multicolumn{3}{c}{\textbf{DETERRENT (this work)}}\\ \cmidrule(lr){4-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-11} \cmidrule(lr){12-14} & of rare & & Test & Cov. & Test & Cov. & Test & Cov.& Test & Cov. & Test & Patterns Red./ & Cov. \\ & nets & & Length & (\%) & Length & (\%) & Length & (\%) & Length & (\%) & Length & TARMAC \& TGRL & (\%) \\ \midrule \texttt{c2670} & 43 & 775 & \multicolumn{1}{c}{5306} & 10 & \multicolumn{1}{c}{89} & 27 & \multicolumn{1}{c}{5306} & 100 & \multicolumn{1}{c}{5306} & 96 & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{\textbf{663.25$\bm{\times}$}} & 100 \\ \texttt{c5315} & 165 & 2307 & \multicolumn{1}{c}{8066} & 37 & \multicolumn{1}{c}{103} & 5 & \multicolumn{1}{c}{8066} & 61 & \multicolumn{1}{c}{8066} & 94 & \multicolumn{1}{c}{1585} & \multicolumn{1}{c}{\textbf{5.08$\bm{\times}$}} & 99 \\ \texttt{c6288} & 186 & 2416 & \multicolumn{1}{c}{3205} & 54 & \multicolumn{1}{c}{38} & 4 & \multicolumn{1}{c}{3205} & 100 & \multicolumn{1}{c}{3205} & 85 & \multicolumn{1}{c}{2096} & \multicolumn{1}{c}{\textbf{1.52$\bm{\times}$}} & 99 \\ \texttt{c7552} & 282 & 3513 & \multicolumn{1}{c}{9357} & 10 & \multicolumn{1}{c}{137} & 4 & \multicolumn{1}{c}{9357} & 73 & \multicolumn{1}{c}{9357} & 71 & \multicolumn{1}{c}{5910} & \multicolumn{1}{c}{\textbf{1.58$\bm{\times}$}} & 85 \\ \texttt{s13207} & 604 & 1801 & \multicolumn{1}{c}{9659} & 3 & \multicolumn{1}{c}{106} & 4 & \multicolumn{1}{c}{9659} & 80 & \multicolumn{1}{c}{9659} & 5 & \multicolumn{1}{c}{9600} & \multicolumn{1}{c}{\textbf{1.01$\bm{\times}$}} & 80 \\ \texttt{s15850} & 649 & 2412 & \multicolumn{1}{c}{9512} & 3 & \multicolumn{1}{c}{110} & 3 & \multicolumn{1}{c}{9512} & 79 & \multicolumn{1}{c}{9512} & 8 & \multicolumn{1}{c}{6197} & \multicolumn{1}{c}{\textbf{1.53$\bm{\times}$}} & 81 \\ \texttt{s35932} & 1151 & 4736 & \multicolumn{1}{c}{3083} & 99 & \multicolumn{1}{c}{37} & 68 & \multicolumn{1}{c}{3083} & 100 & \multicolumn{1}{c}{3083} & 58 & \multicolumn{1}{c}{6} & \multicolumn{1}{c}{\textbf{513.83$\bm{\times}$}} & 100 \\ \texttt{MIPS} & 1005 & 23511 & \multicolumn{1}{c}{25000} & 0 & \multicolumn{1}{c}{796} & 0 & \multicolumn{1}{c}{25000} & 100 & \multicolumn{1}{c}{---} & --- & \multicolumn{1}{c}{1304} & \multicolumn{1}{c}{\textbf{19.17$\bm{\times}$}} & 97 \\ \cmidrule{1-14} Avg. & 511 & 5184 & \multicolumn{1}{c}{6884} & 27.75${}^\dagger$ & \multicolumn{1}{c}{88.57} & 10${}^\dagger$ & \multicolumn{1}{c}{6884} & 83.5${}^\dagger$ & \multicolumn{1}{c}{6884} & 86.5${}^\dagger$ & \multicolumn{1}{c}{3628.85} & \multicolumn{1}{c}{\textbf{169.68$\bm{\times}$}${}^\ddagger$} & \textbf{95.75}${}^\dagger$ \\ \bottomrule \end{tabular} } \\ [1mm] ${}^\dagger$The coverages are averaged over \texttt{c2670}, \texttt{c5315}, \texttt{c6288}, and \texttt{c7552}. 
${}^\ddagger$The reduction is averaged over all except \texttt{MIPS}. \end{table*}
We implemented our RL agent using \textit{PyTorch 1.6} and trained it on a Linux machine with Intel 2.4 GHz CPUs and an NVIDIA Tesla K80 GPU. We used the SAT solver provided in the \textit{pycosat} library. We implemented the parallelized version of TARMAC in \textit{Python 3.6}. We used \textit{Synopsys VCS} for logic simulations and for evaluating test patterns on HT-infected netlists. Similar to prior works (TARMAC and TGRL), for sequential circuits, we assume full scan access. To enable a fair comparison, we implemented and evaluated all the techniques on the same benchmarks as TARMAC and TGRL, which were provided to us by the authors of TGRL. They also provided us with the TGRL test patterns. We also performed experiments on the \texttt{MIPS} processor from OpenCores~\cite{OpenCores_MIPS} to demonstrate scalability. For \texttt{MIPS}, we use a vectorized environment with 16 parallel processes to speed up the training. For evaluation, we randomly inserted $100$ HTs in each benchmark and verified them to be valid using a Boolean satisfiability check.
\subsection{Trigger Coverage Performance}
In this section, we compare the trigger coverage provided by different techniques (Table~\ref{tab:my-table}). In addition to TARMAC and TGRL, we also compare the performance of DETERRENT with random test patterns and patterns generated from an industry-standard tool, \textit{Synopsys TestMAX}~\cite{TestMAX}. To enable a fair comparison, we used the number of patterns from TGRL as the test length for the random test patterns and for TARMAC. For TestMAX, the number of patterns is determined by the tool in the default setting (\texttt{run\_atpg}). Note that for \texttt{s13207}, \texttt{s15850}, and \texttt{s35932}, the netlists corresponding to the test patterns provided by the authors of TGRL were not available to us at the time of writing the manuscript. Hence, we could only evaluate the TGRL test patterns for those circuits on our benchmarks. Due to this, the trigger coverage of TGRL for these benchmarks is low. Additionally, TGRL does not evaluate on the \texttt{MIPS} benchmark. Hence, the corresponding cells in the table are empty. To enable a fair comparison, we have not included \texttt{s13207}, \texttt{s15850}, \texttt{s35932}, and \texttt{MIPS} in the average trigger coverages, and we have not included \texttt{MIPS} in the average test length, for all techniques in Table~\ref{tab:my-table} (see the table footnotes). The results demonstrate that DETERRENT achieves better trigger coverage than all other techniques while reducing the number of test patterns. On average, DETERRENT improves the trigger coverage over random patterns (by $68\%$), TestMAX (by $85.75\%$), TARMAC (by $12.25\%$), and TGRL (by $9.25\%$), and achieves two orders of magnitude reduction in the number of test patterns over TARMAC and TGRL ($169\times$).
\subsection{Impact of Trigger Width}
Trigger width, i.e., the number of rare nets that constitute the trigger, directly affects the stealth of the HT. As the trigger width increases, the difficulty of activating the trigger increases exponentially. For example, for a rareness threshold of $0.1$, if the trigger width is $4$, the probability of activating the trigger through random simulation is $10^{-4}$, whereas if the trigger width is $12$, the probability drops to $10^{-12}$. Thus, it is necessary to maintain the performance with increasing trigger width.
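More generally, assuming the select nets attain their rare values independently (an idealization), a single random pattern activates a width-$w$ trigger with probability at most $\rho^{w}$, where $\rho$ is the rareness threshold; the two numbers above are the cases $(\rho,w)=(0.1,4)$ and $(0.1,12)$.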
Figure~\ref{fig:my_label1} illustrates the results for \texttt{c6288}; we chose this benchmark because TGRL achieves good trigger coverage on it. With increasing trigger width, the performance of TGRL drops drastically. DETERRENT maintains a steady trigger coverage, demonstrating that it can activate extremely rare trigger conditions.
\begin{figure}[tb] \centering \includegraphics[width=0.475\textwidth,trim={0.3cm 0.3cm 0cm 0.2cm},clip]{figures/figure5.pdf} \caption{Impact of trigger width on the trigger coverage of TGRL~\cite{pan2021automated} and DETERRENT for \texttt{c6288}.} \label{fig:my_label1} \end{figure}
\subsection{Trigger Coverage vs. Number of Patterns}
We now investigate the marginal impact of test patterns on trigger coverage. To do so, we analyze the increase in trigger coverage provided by each test pattern for DETERRENT and TGRL. Figure~\ref{fig:my_label4} demonstrates that DETERRENT obtains the maximum trigger coverage with very few patterns as opposed to TGRL.
\begin{figure}[tb] \centering \includegraphics[width=0.475\textwidth,trim={0 0.4cm 0 0.1cm},clip]{figures/figure6.pdf} \caption{Trigger coverage vs. test patterns comparison.} \label{fig:my_label4} \end{figure}
\subsection{Impact of Rareness Threshold}
The rareness threshold is the probability below which nets are classified as rare, i.e., the logic values of these nets are strongly biased towards 0 or 1. For a given trigger width ($\alpha$), as the rareness threshold increases, the number of rare nets increases (say by a factor of $\beta$), and so the number of possible trigger combinations increases by a factor of $\beta^{\alpha}$, making the trigger much more difficult to activate. Figure~\ref{fig:my_label5} shows that the number of rare nets increases with increasing threshold (leading to up to $64\times$ more potential trigger combinations), but DETERRENT is still able to achieve similar trigger coverage ($\leq 2\%$ drop) with less than $2500$ patterns.\footnote{The authors of TGRL did not provide us with the test patterns for thresholds other than $0.1$. Hence, we do not compare with TGRL for other threshold values.} In another experiment, we trained the agent using rare nets for a threshold of $0.14$ and evaluated the generated test patterns on rare nets with a threshold of $0.1$; the trigger coverage is $99\%$. This hints that we can train the agent for a large set of rare nets and use it to generate patterns for a subset of rare nets.
\begin{figure}[tb] \centering \includegraphics[width=0.475\textwidth,trim={0 0.4cm 0 0.3cm}, clip]{figures/figure7.pdf} \caption{Impact of rareness threshold on the number of rare nets and the trigger coverage of DETERRENT for \texttt{c6288}.} \label{fig:my_label5} \end{figure}
\section{Discussion and Future Work} \label{sec:discussion}
\noindent\textbf{Comparison with TGRL~\cite{pan2021automated}.} Our RL agent architecture is entirely different from that of TGRL. TGRL maximizes a heuristic based on the rareness and testability of nets. In contrast, we identify the problem of trigger activation as a set-cover problem and find maximal sets of compatible rare nets. Moreover, the states and actions of TGRL are test patterns generated by flipping bits probabilistically, whereas our agent's search is more directed: it constructs maximal sets of compatible rare nets. Due to our formulation, we achieve better coverage but with orders of magnitude fewer test patterns than TGRL (see Section~\ref{sec:results}).
\noindent \textbf{Feasibility of using a SAT solver.} We use a SAT solver for the compatibility check during training and for generating test patterns from the maximal sets of compatible rare nets provided by the RL agent. Nevertheless, our technique is scalable to larger designs (as evidenced by our results) because: (i)~during training, we avoid repeated SAT-solver runtime by generating a dictionary containing the compatibility information offline in a parallelized manner; (ii)~when generating the test patterns, we only need to invoke the SAT solver $k$ times, where $k$ is the required number of test patterns. Hence, even for large benchmarks like \texttt{MIPS}, we can generate test patterns that outperform all prior HT detection techniques in less than $12$ hours.
\noindent\textbf{Meta-learning.} We generated test patterns for individual benchmarks using separate agents. Since the training time of our agents for all benchmarks is less than 12 hours, our technique is practical to use. As part of future work, we would like to explore the principles of designing a standalone agent that can be trained on a corpus of benchmarks once and then be used to generate test patterns for unseen benchmarks. To that end, we plan to extend the current framework using principles from meta-learning.
\section{Assumptions and Background} \label{sec:background}
\subsection{Threat Model} \label{sec:threat_model}
We assume the standard threat model used in logic testing-based HT detection~\cite{chakraborty2009mero,TARMAC_TCAD,pan2021automated}. We assume that the adversary inserts HTs in rare nets of the design to remain stealthy. The defender's (i.e., our) objective is to generate a minimal set of test patterns that activate unknown trigger conditions. We generate test patterns using only the golden (i.e., HT-free) netlist.
\subsection{Reinforcement Learning} \label{sec:RL_background}
RL is a machine learning methodology where an intelligent agent learns to navigate an environment to maximize a cumulative reward. It is formalized as a Markov decision process. An RL agent interacts with the environment in discrete time steps. At each step, the agent receives the current state and the reward, and it chooses an action, which is sent to the environment. The environment moves the agent to a new state and provides a reward corresponding to the state transition and action. The aim of the RL agent is to learn a policy $\pi$ that maximizes the expected cumulative reward. The policy maps state-action pairs to the probabilities of taking a given action in a given state. The agent learns an optimal or near-optimal policy through trial and error by interacting with the environment.
\section{Conclusion} \label{sec:Conclusion}
Prior works on trigger activation for HT detection have shown reasonable trigger coverage, but they are either ineffective, not scalable, or require a large number of test patterns. To address these limitations, we develop an RL agent to guide the search for optimal test patterns. However, designing such an agent involves several challenges, such as inefficiency and lack of scalability. We overcome these challenges using features such as action masking and boosted exploration. As a result, the final architecture generates a compact set of test patterns for designs of all sizes, including the \texttt{MIPS} processor. Experimental results demonstrate that our agent reduces the number of test patterns by $169\times$ on average while improving trigger coverage.
Further evaluations show that our agent is robust against increasing complexity. Our agent maintains steady trigger coverage for different trigger widths, whereas the state-of-the-art technique's performance drops drastically. Our agent also maintains its performance against an increasing number of possible trigger combinations. Although this work demonstrates the power of RL for trigger activation, the challenges related to scalability and efficiency are not specific to the current problem. The ways in which we overcame these challenges can be used to develop better defenses for other hardware security problems.
\section{Introduction} \label{sec:Introduction}
Reinforcement learning (RL) enables a computing system (a.k.a. an agent) to learn from its own experience by exploring and exploiting the underlying environment. Over time, the agent learns to take optimal actions in sequence, even with limited or no knowledge of the environment. From a cybersecurity perspective, such RL agents are attractive as they can generate optimal defense techniques in an unknown adversarial environment. Given the latest improvements in RL algorithms, these agents can efficiently navigate high-dimensional search spaces to find optimal actions. Hence, researchers have used RL agents to develop promising approaches for several security problems, including intrusion detection~\cite{RL_intrusion_detection}, fuzzing~\cite{RL_Fuzzing,RL_Fuzzing_USENIX}, and developing secure cyber-physical systems~\cite{RL_CPS1,RL_CPS2,nguyen2019deep_new}. However, research in hardware security has yet to reap the power of RL in developing optimal defenses in adversarial environments. In this work, we showcase how RL can be used to efficiently detect hardware Trojans (HTs).
Among the many problems in hardware security, HT detection presents significant computational challenges: the defender must find HTs in an unknown environment (i.e., an HT-infected design). The increasing cost of integrated circuit (IC) manufacturing has forced semiconductor companies to send their designs to untrusted, off-shore foundries. Malicious components known as HTs inserted during the fabrication stage can leak secret information, degrade performance, or cause a denial of service.
\subsection{Hardware Trojans} \label{sec:HTs}
An HT consists of two components: \textit{trigger} and \textit{payload}. When the trigger is activated, the payload causes a malicious effect in the design. Figure~\ref{fig:Trojan} illustrates an HT that flips an output upon trigger activation. The trigger comprises multiple nets, called \textit{select nets}, in the design. For instance, the adversary can choose the select nets so that the trigger gets activated only under extremely rare conditions. This is achieved by determining a \textit{rareness threshold}\footnote{The rareness threshold is the probability below which nets are classified as rare nets.} and constructing the trigger using the corresponding \textit{rare nets}.
\begin{figure}[tb] \includegraphics[width=0.475\textwidth,trim={0 0.8cm 0 0.6cm},clip]{figures/figure1.pdf} \caption{Example of an HT in a design with $150$ rare nets.} \label{fig:Trojan} \end{figure}
Detecting HTs is difficult since they are designed to be stealthy~\cite{xiao2016Trojan_survey}. Consider the example in Figure~\ref{fig:Trojan} with $150$ rare nets. Four of them are used for the trigger.
Thus, the defender needs to check up to ${}^{150}C_{4} \approx 20\times10^6$ different combinations of rare nets, which is extremely challenging. Such a large space makes it difficult even for conventional automatic test pattern generation (ATPG) tools~\cite{TestMAX} to activate the trigger.
\subsection{Hardware Trojan Detection Techniques} \label{sec:detecting_HTs}
One can classify the HT detection techniques into two broad categories: logic testing and side-channel analysis. Logic testing involves the application of test patterns to the HT-infected design to activate the trigger~\cite{chakraborty2009mero,TARMAC_TCAD,pan2021automated}. However, activating an extremely rare trigger is challenging because the number of possible combinations of rare nets is enormous. On the other hand, side-channel-based detection techniques detect HTs based on the differences in the side-channel measurements (such as power or timing) between the golden (i.e., HT-free) design and an HT-infected design~\cite{narasimhan2012hardware,huang2016mers,huang2018scalable,lyu2019efficient}. However, since HTs have an extremely small footprint compared to the overall size of the design, their impact on side-channel metrics is usually negligible and concealed under process variation and environmental effects~\cite{rai2009performance}. We refer interested readers to~\cite{xiao2016Trojan_survey} for a detailed survey on HTs and HT detection techniques. Note that activating the trigger is not only essential for logic testing techniques but also helpful for side-channel-based techniques because activating the trigger leads to an increase in the side-channel footprint of the HT, making it easier to detect~\cite{TARMAC_TCAD}.
Although activating the trigger is critical, it is difficult to do so efficiently. Consider Figure~\ref{fig:Trojan}; the defender needs up to $20\times 10^6$ test patterns to guarantee trigger activation because the defender does not know which rare nets constitute the trigger. Next, we outline the ideal characteristics required from any technique for activating the trigger. \textbf{(1) High trigger activation rate}: The technique should activate a large number of trigger conditions to detect HTs successfully.\footnote{Trigger activation rate, i.e., the proportion of trigger conditions activated by a set of test patterns, is also called trigger coverage.} \textbf{(2) Small test generation time}: The time required to generate the test patterns should not be large; otherwise, the technique will not be scalable to larger designs. \textbf{(3) Compact set of test patterns}: The number of test patterns required to activate the trigger conditions should be small. A large number of test patterns adversely affects the testing cost. \textbf{(4) Feedback-guided approach}: The technique should analyze the test patterns and their impact on the circuit to generate new test patterns, thereby reducing the test generation time and the size of the test set.
\subsection{Prior Works and Their Limitations} \label{sec:prior_work}
\noindent\textbf{MERO} generates test patterns that activate each rare net $N$ times~\cite{chakraborty2009mero}. The hypothesis is that if all the rare nets are activated $N$ times, the test patterns are likely to activate the trigger. The algorithm starts with a large pool of random test patterns and iteratively performs circuit simulation to keep track of the number of rare nets that get activated. While MERO provides moderate performance for small benchmarks, it fails for large benchmarks.
For instance, the trigger coverage of MERO for the \texttt{MIPS} processor is only 0.2\%~\cite{TARMAC_TCAD}, as it violates characteristics (1), (2), (3), and (4) mentioned above.
\noindent\textbf{TARMAC} overcomes the limitations of MERO by transforming the problem of test pattern generation into a clique cover problem~\cite{TARMAC_TCAD}. It iteratively finds maximal cliques of rare nets that can simultaneously attain their rare values. By not relying on brute force, TARMAC outperforms MERO by a factor of 71$\times$ on average. However, the performance of TARMAC is sensitive to randomness since the algorithm relies on randomly sampled cliques. Although the test generation time for TARMAC is short, it violates characteristics (3) and (4).
\noindent\textbf{TGRL} uses RL along with a combination of rareness and testability measures to overcome the limitations of TARMAC~\cite{pan2021automated}. TGRL achieves better coverage than TARMAC and MERO while reducing the run-time. However, it still violates characteristic (3), as evidenced by our results in Section~\ref{sec:results}.
\subsection{Our Contributions} \label{sec:research_contributions}
As discussed above, all existing techniques for trigger activation fall short on one or more fronts. In this work, we propose a new technique that is designed to satisfy all four ideal characteristics. We model the test generation problem for HT detection as an RL problem because test generation involves searching a large space to find an optimal set of test patterns. This is exactly what RL algorithms do: they navigate large search spaces to find optimal solutions. However, several challenges need to be overcome to realize a practical and scalable RL agent, such as (i)~the large amount of training time required for large designs, (ii)~the need for the agent to choose actions efficiently, and (iii)~the careful fine-tuning required by some challenging benchmarks. We provide further details on how we overcome these challenges in Section~\ref{sec:proposed_framework}. The primary contributions of our work are as follows.
\begin{itemize}[leftmargin=*] \item We develop an RL technique that is efficient in activating rare trigger conditions, thereby addressing the limitations of the state-of-the-art HT detection techniques. \item We overcome several challenges to make our technique scalable to a large design like the \texttt{MIPS} processor. \item We perform an extensive evaluation on diverse benchmarks and demonstrate the capability of our technique, which outperforms the state-of-the-art logic-testing techniques on all benchmarks. \item Our technique provides two orders of magnitude ($169\times$) reduction in the size of the test set compared to existing techniques. \item Our technique maintains similar trigger coverage ($\leq 2\%$ drop) with an increasing number of rare nets, whereas the state-of-the-art technique's performance drops to $0\%$. \item Our technique maintains similar trigger coverage ($\leq 2\%$ drop) for at least $64\times$ more potential trigger conditions. \item We release our benchmarks and test patterns~\cite{DETERRENT-git}. \end{itemize}
\section{Introduction}
Black hole physics has entered a new era since the detection of gravitational waves from a binary black hole merger by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo~\cite{Collaboration2016} and the first picture of the supermassive black hole at the center of the galaxy M87 taken by the Event Horizon Telescope (EHT)~\cite{Collaboration2019,Collaboration2019a,Collaboration2019b,Collaboration2019c,Collaboration2019d,Collaboration2019e}. Recently, the picture of the black hole at the center of our Milky Way was also taken by the EHT~\cite{Collaboration2022,Collaboration2022a,Collaboration2022b,Collaboration2022c,Collaboration2022d,Collaboration2022e}. These breakthroughs provide us with more possibilities to test fundamental physical problems, for example, do singularities exist~\cite{Berti2015,Cardoso2016}? Usually, a spacetime singularity is located at the center of a black hole. However, from the quantum point of view, spacetime should not be singular. In order to mimic black holes classically, various ultracompact objects have been constructed, such as gravastars~\cite{Mazur2001}, boson stars~\cite{Schunck2008}, and wormholes~\cite{Solodukhin2005,Dai:2019mse,Simonetti:2020ivl,Bambi:2021qfo}. For more details, see the review~\cite{Cardoso2019} and references therein. However, these objects usually require exotic matter, and their UV origin is unclear. From the top-down point of view, string theory is regarded as a candidate that can unify quantum theory and gravity. Some horizonless models constructed from string theory, such as fuzzballs~\cite{Gibbons2013}, are similar to black holes up to the Planck scale and have smooth microstate geometries. However, many degrees of freedom in supergravity are needed, and astrophysical observations of these horizonless models are difficult~\cite{Bena2020,Bena2020a,Bena2020b}.
Recently, a five-dimensional nonsingular topological star/black hole model was proposed based on a five-dimensional Einstein-Maxwell theory~\cite{Bah2020,Bah2020a}. The spacetime in this model has advantages in both its microstate geometry (which is smooth) and its macrostate geometry (which is similar to that of a classical black hole). So it is interesting to study its astrophysical signatures. The motion of a charged particle in this nonsingular topological star/black hole model was studied in Ref.~\cite{Lim2021}. The thermodynamic stability of the solutions has been carefully analysed in Ref.~\cite{Bah:2021irr}. Integrating out the extra dimension, a four-dimensional Einstein-Maxwell-dilaton theory can be obtained, and a static spherically symmetric solution was obtained in this background~\cite{Bena2020a,Bena2020b}. Shadows of this black hole were studied in Ref.~\cite{Guo2022}. In this paper, we will study the quasinormal modes (QNMs) of this model.
As the characteristic modes of a dissipative system, QNMs play important roles in many areas of physics. Due to the presence of the event horizon, black holes are natural dissipative systems. For a binary black hole merger, there are three stages: inspiral, merger, and ringdown. In the ringdown stage, the gravitational waves can be regarded as a superposition of QNMs~\cite{Berti2007}. Compared with normal modes, the eigenfunctions of QNMs generally do not form a complete set, and they are not normalizable~\cite{Nollert1998}. The frequencies of QNMs are complex, and the imaginary parts are related to the decay timescale of the perturbation.
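In other words, with the $e^{-i\omega t}$ convention adopted below, a mode with frequency $\omega=\omega_\text{R}+i\omega_\text{I}$ evolves as $e^{-i\omega_\text{R}t}\,e^{\omega_\text{I}t}$, so a stable perturbation has $\omega_\text{I}<0$ and decays on the timescale $\tau=1/|\omega_\text{I}|$.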
One can use the QNMs to infer the mass and angular momentum of a black hole~\cite{Echeverria:1989hg} and to test the validity of the no-hair theorem~\cite{Berti:2005ys,Berti:2007zu,Isi:2019aib}. The echoes in ringdown signals can be used to distinguish black holes from ultracompact objects~\cite{Cardoso2016,Cardoso2019,Cardoso:2017cqb}. Recently, pseudospectrum analyses in gravitational physics showed that the QNM spectrum is unstable for both the fundamental mode and the overtone modes~\cite{Jaramillo:2020tuu,Cheung:2021bol}. Besides, the properties of QNMs can also be used to constrain modified gravity theories~\cite{Wang:2004bv,Blazquez-Salcedo:2016enn,Franciolini:2018uyq,Aragon:2020xtm,Liu:2020qia,Karakasis:2021tqx,Cano:2021myl,Gonzalez:2022upu,Zhao:2022gxl}. The stability of the background spacetime under perturbations can also be partly revealed from the QNM frequencies~\cite{Ishibashi:2003ap,Chowdhury:2022zqg}. Beyond black hole physics, QNMs are also very useful in other dissipative systems, such as leaky resonant cavities~\cite{Kristensen2015} and brane-world theories~\cite{Seahra2005,Seahra2005a,Tan2022}. So, QNMs have been studied widely~\cite{Cai:2015fia,Cardoso:2019mqo,McManus:2019ulj,Cardoso:2020nst,Guo:2021enm}. In this paper, we are interested in the QNMs of the four-dimensional spherically symmetric Bah-Heidmann black hole with a magnetic charge.
The organization of this paper is as follows. In Sec.~\ref{themodel}, we briefly review the Bah-Heidmann black hole and the Kaluza-Klein (KK) reduction. In Sec.~\ref{pereqs}, we study the linear perturbations of the electromagnetic and gravitational fields. Separating the radial parts of the perturbed fields from the angular parts, we derive the perturbation equations. In Sec.~\ref{QNM}, we compute the QNM frequencies using the matrix-valued direct integration method. Finally, we give our conclusions in Sec.~\ref{conclusion}.
\section{The charged black hole with scalar hair}\label{themodel}
In this section we briefly review the black hole proposed by Bah and Heidmann~\cite{Bah2020,Bah2020a}. We start from a five-dimensional Einstein-Maxwell theory. The action is \begin{equation} S=\int d^5x\sqrt{-\hat{g}}\left(\fc{1}{16\pi G_5}\hat{R}-\fc{1}{16\pi}\hat{F}^{MN}\hat{F}_{MN}\right),\label{action5} \end{equation} where $\hat{F}_{MN}$ is the electromagnetic field tensor and $G_{5}$ is the five-dimensional gravitational constant. Quantities with a hat are constructed in the five-dimensional spacetime. The capital Latin letters $M, N,...$ denote the five-dimensional coordinates. The metric with three-dimensional spherical symmetry can be written as~\cite{Stotyn2011} \begin{eqnarray} ds^2&=&-f_S(r)dt^2+f_B(r)dy^2+\frac{1}{f_S(r)f_B(r)}dr^2\nn\\ &+&r^2d\theta^2+r^2\sin^2\theta d\phi^2. \label{metric_five} \end{eqnarray} The extra dimension, denoted by the coordinate $y$, is a warped circle with radius $R_y$. The field strength with a magnetic flux is \be \hat{F}=P\sin\theta d\theta\wedge d\phi.\label{field_strength} \ee The solution with double Wick rotation symmetry is~\cite{Stotyn2011} \beq f_{B}(r)&=&1-\frac{r_B}{r}, \nonumber \\ f_{S}(r)&=&1-\frac{r_S}{r}, \\ P&=&\pm\frac{1}{G_{5}}\sqrt{3r_S r_B}.\nonumber \eeq That is to say, the metric~\eqref{metric_five} is invariant under the rotation ($t$, $y$, $r_S$, $r_B$) $\rightarrow$ ($iy$, $it$, $r_B$, $r_S$). There are two coordinate singularities, located at $r=r_S$ (corresponding to a horizon) and $r=r_B$ (corresponding to a degeneracy of the $y$-circle).
Bah and Heidmann found that, after some coordinate transformations, a smooth bubble is located at $r=r_B$~\cite{Bah2020,Bah2020a}. This provides a smooth end to the spacetime. For $r_S\geq r_{B}$, the bubble is hidden behind the horizon and the metric~\eqref{metric_five} describes a black string. For $r_S<r_{B}$, the spacetime ends at the bubble before reaching the horizon and the metric~\eqref{metric_five} describes a topological star~\cite{Bah2020,Bah2020a}.
The metric~\eqref{metric_five} can be rewritten as \be ds_5^2=e^{2\Phi}ds_4^2+e^{-4\Phi}dy^2, \ee where \beq e^{2\Phi}&=&f_B^{-1/2}\label{solutionphi},\\ ds_4^2&=&f_B^{\frac{1}{2}}\lt(-f_Sdt^2+\frac{dr^2}{f_Bf_S}+r^2d\theta^2+r^2\sin^2\theta d\phi^2\rt). \nn\\ \label{metric_four} \eeq We can integrate out the extra dimension $y$ (this process is called KK reduction). Then, a four-dimensional Einstein-Maxwell-dilaton theory is obtained from the five-dimensional Einstein-Maxwell theory: \beq S_4&=&\int d^4x\sqrt{-g}\Big(\fc{1}{16\pi G_4}R_4-\fc{3}{8\pi G_4}g^{\mu\nu}\pd_{\mu}\Phi\pd_{\nu}\Phi\nn\\ &-& \fc{e^{-2\Phi}}{16\pi e^2}F_{\mu\nu}F^{\mu\nu}\Big)\label{action4}, \eeq where $e^2\equiv\fc{1}{2\pi R_y}$ and $\Phi$ is a dilaton field. The Greek letters $\mu, \nu,...$ denote the four-dimensional coordinates. Here, $g_{\mu\nu}$ and $F_{\mu\nu}$ are the four-dimensional metric~\eqref{metric_four} and the electromagnetic field strength, respectively. The four-dimensional Ricci scalar $R_4$ is determined by the metric $g_{\mu\nu}$, and the four-dimensional gravitational constant is defined as \be G_4=e^2 G_5. \ee Varying the action~\eqref{action4} with respect to the scalar field $\Phi$, the vector potential $A_{\mu}$, and the metric $g_{\mu \nu}$, we obtain the field equations \beq \fc{6}{G_4}\Box\Phi+\fc{e^{-2\Phi}}{e^2}F_{\mu\nu}F^{\mu\nu}&=&0,\label{EFphi}\\ \nabla^{\mu}F_{\mu\nu}&=&0,\label{EFA}\\ R_{\mu\nu}-\fc{1}{2}g_{\mu\nu}R_4&=&8\pi G_4 T_{\mu\nu},\label{EFg} \eeq where $\Box$ is the four-dimensional d'Alembert operator and $T_{\mu\nu}=T^\text{s}_{\mu\nu}+T^\text{m}_{\mu\nu}$ is the energy-momentum tensor containing the contributions of the scalar field and the magnetic field: \beq T^\text{s}_{\mu\nu}&=&\fc{3}{4\pi G_4}\nabla_{\mu}\Phi\nabla_{\nu}\Phi-\fc{3}{8\pi G_4}g_{\mu\nu}\Box\Phi,\\ T^\text{m}_{\mu\nu}&=&\fc{e^{-2\Phi}}{4\pi e^2}F_{\mu\alpha}F_{\nu}^{\alpha}-\fc{e^{-2\Phi}}{16\pi e^2}g_{\mu\nu}F_{\alpha\beta}F^{\alpha\beta}. \eeq The vector potential corresponding to the magnetic field can be solved as \be A_{\mu}=(0,0,0,-\fc{e}{2}\sqrt{\fc{3r_B r_S}{G_4}}\cos\theta)\label{solutionnF}. \ee Thus, the field strength reads \be F_{\mu\nu}=\lt[\begin{array}{cccc} 0&0&0&0\\ 0&0&0&0\\ 0&0&0&\fc{e}{2}\sqrt{\fc{3r_B r_S}{G_4}}\sin\theta\\ 0&0&-\fc{e}{2}\sqrt{\fc{3r_B r_S}{G_4}}\sin\theta&0 \end{array}\rt]. \ee Note that when $r_B=0$, the metric~\eqref{metric_four} reduces to the Schwarzschild one. The parameters $r_S$ and $r_B$ are related to the four-dimensional Arnowitt-Deser-Misner mass $M$ and the magnetic charge $Q_\text{m}$ as \beq M&=&\fc{2r_S+r_B}{4 G_4},\\ Q_\text{m}&=&\fc{1}{2}\sqrt{\fc{3r_B r_S}{G_4}}.
\eeq On the other hand, in terms of $M$ and $Q_\text{m}$, we have \beq r_S^{(1)}&=&G_4(M-M_{\triangle}),~~~r_B^{(1)}=2G_4(M+M_{\triangle});\label{MQ1}\\ r_S^{(2)}&=&G_4(M+M_{\triangle}),~~~~r_B^{(2)}=2G_4(M-M_{\triangle}),\label{MQ2} \eeq where \be M_{\triangle}^2=M^2-\left(\fc{\sqrt{2} Q_\text{m}}{\sqrt{3 G_4 }}\right)^2.\label{Mdelta} \ee Note that in four-dimensional spacetime, when $r<r_B$, $f_B^{1/2}$ becomes imaginary. So $r=r_B$ is the end of the spacetime. This is consistent with the result in five-dimensional spacetime~\cite{Bah2020,Bah2020a}. Usually, a black string scenario suffers from the Gregory-Laflamme instability~\cite{Gregory:1993vy}. However, a compact extra dimension leads to a discrete KK mass spectrum, which makes it possible to avoid this instability. Stotyn and Mann demonstrated that the solution~\eqref{MQ1} is unstable under perturbations, while the solution~\eqref{MQ2} is stable when $R_y>\fc{4\sqrt{3}}{3}Q_{\text{m}}$~\cite{Stotyn2011}. That is to say, the solution~\eqref{MQ2} does not suffer from the Gregory-Laflamme instability. Actually, the spacetime at $r=r_B$ is singular in the four-dimensional description. When $r_B\geq r_S$ the metric~\eqref{metric_four} corresponds to a naked singularity, and when $r_B<r_S$ the metric~\eqref{metric_four} corresponds to a black hole, which is called a charged black hole with scalar hair. In this paper, we will only focus on the case $r_B<r_S$, i.e., the charged black hole with scalar hair.
\section{Perturbation equations}\label{pereqs}
With the background solutions \eqref{solutionphi},~\eqref{metric_four}, and \eqref{solutionnF}, we can derive the equations of motion for the perturbations. The perturbed scalar field, vector potential, and metric can be written as \beq \Phi&=&\bar{\Phi}+\varphi,\\ A_{\mu}&=&\bar{A}_{\mu}+a_{\mu},\\ g_{\mu\nu}&=&\bar{g}_{\mu\nu}+h_{\mu\nu}, \eeq where the quantities with a bar represent the background fields, and $\varphi$, $a_{\mu}$, and $h_{\mu\nu}$ denote the corresponding perturbations. Because the background spacetime is spherically symmetric, the perturbations can be divided into three parts based on their transformations under rotations on the 2-sphere: scalars, two-dimensional vectors, and two-dimensional tensors. The spherical harmonic function $Y_{l,m}(\theta,\phi)$ behaves as a scalar under rotations, so it is the scalar basis. The two-dimensional vector and tensor bases are introduced as follows~\cite{Wheeler,Ruffini,Edmonds,Regge,Chandrasekhar}: \beq (V^1_{l,m})_a&=&\pd_a Y_{l,m}(\theta,\phi), \\ (V^2_{l,m})_a&=&\gamma^{bc}\epsilon_{ac}\pd_b Y_{l,m}(\theta,\phi), \eeq for the vector part, and \beq (T^1_{l,m})_{ab}&=&(Y_{l,m})_{;ab}, \\ (T^2_{l,m})_{ab}&=&Y_{l,m}\gamma_{ab},\\ (T^3_{l,m})_{ab}&=&\fc{1}{2}\lt[\epsilon_a^c(Y_{l,m})_{;cb}+\epsilon_b^c(Y_{l,m})_{;ca}\rt], \eeq for the tensor part. Here, the Latin letters $a, b, c$ denote the angular coordinates $\theta$ and $\phi$, $\gamma$ is the induced metric on the 2-sphere with radius $1$, and $\epsilon$ is the totally antisymmetric tensor in two dimensions. The semicolon denotes the covariant derivative on the 2-sphere. The above quantities behave differently under space inversion, i.e., $(\theta,\phi)\rightarrow(\pi-\theta,\pi+\phi)$. A quantity is called even or polar if it acquires a factor $(-1)^l$ under space inversion, and odd or axial if it acquires a factor $(-1)^{l+1}$.
So the above quantities can be divided into two classes: the even parts $V^1_{l,m}, T^1_{l,m}, T^2_{l,m}$, and the odd parts $V^2_{l,m}, T^3_{l,m}$. Note that the spherical harmonic function $Y_{l,m}(\theta,\phi)$ has even parity. Usually, the perturbation equations do not mix polar and axial contributions. However, we can see from Eqs.~\eqref{solutionphi},~\eqref{metric_four}, and~\eqref{solutionnF} that the background scalar field and metric are even-parity while the background vector potential is odd-parity. So we expect that, at linear order, the scalar perturbation and the even-parity parts of the metric perturbations couple to the odd-parity parts of the electromagnetic perturbations (type-I coupling), while the odd-parity parts of the metric perturbations couple to the even-parity parts of the electromagnetic perturbations (type-II coupling). Note that the scalar perturbation only contains an even part. Similar coupled perturbation equations have been studied in Refs.~\cite{Nomura:2020tpc,Meng:2022oxg}. In this paper, we study the type-II coupling perturbations.
Based on the principle of general covariance, the theory remains covariant under an infinitesimal coordinate transformation. Thus, we can choose a specific gauge to simplify the problem. In the Regge-Wheeler gauge~\cite{Regge}, the odd parts of the perturbation $h_{\mu\nu}$ can be written as \be h_{\mu\nu}=\sum_{l}e^{-i\omega t}\lt[\begin{array}{cccc} 0&0&0&h_0\\ 0&0&0&h_1\\ 0&0&0&0\\ *&*&0&0 \end{array}\rt]\sin\theta\pd_{\theta}Y_{l,0}(\theta).\label{hmunu} \ee The magnetic field also has a gauge freedom. Following Ref.~\cite{Zerilli:1973}, we denote \be \tilde{f}_{\mu\nu}=\partial_{\mu}a_{\nu}-\partial_{\nu}a_{\mu}\label{fmunu}, \ee and the even parts of the perturbation $\tilde{f}_{\mu\nu}$ can be written as \be \tilde{f}_{\mu\nu}=\sum_{l}e^{-i\omega t}\lt[\begin{array}{cccc} 0&f_{01}&f_{02}&0\\ *&0&f_{12}&0\\ 0&*&0&0\\ 0&*&0&0 \end{array}\rt]\sin\theta\pd_{\theta}Y_{l,0}(\theta).\label{ffmunu} \ee Note that we have chosen $m=0$ for simplicity, because the perturbation equations do not depend on the value of $m$~\cite{Regge}. The asterisks denote elements fixed by the (anti)symmetry of the corresponding tensor. The functions $h_0, h_1, f_{01}, f_{02},$ and $f_{12}$ depend only on the coordinate $r$. The perturbation of the vector potential can be expanded as \beq a_t&=&-\sum_{l}e^{-i\omega t}f_{02}Y_{l,0},\\ a_r&=&-\sum_{l}e^{-i\omega t}f_{12}Y_{l,0},\\ a_{\theta}&=&0,\\ a_{\phi}&=&0.
\eeq The field strength $f_{01}$ can be derived from Eq.~\eqref{fmunu} as \be f_{01}=\pd_r f_{02}+i\omega f_{12}.\label{eqf01} \ee Substituting Eqs.~\eqref{hmunu} and \eqref{ffmunu} into the equations of motion \eqref{EFA} and \eqref{EFg}, after some algebraic manipulations we obtain the following master perturbation equations: \beq \fc{d^2\psi_\text{g}}{dr_*^2}+(\omega^2-V_{11})\psi_\text{g}-V_{12}\psi_\text{m}&=&0,\label{mastereq1}\\ \fc{d^2\psi_\text{m}}{dr_*^2}+(\omega^2-V_{22})\psi_\text{m}-V_{21}\psi_\text{g}&=&0,\label{mastereq2} \eeq where \beq \psi_\text{g}&\equiv& f_B^{1/4}f_S\fc{1}{r}h_1,\\ \psi_\text{m}&\equiv& \sqrt{f_B}r^2f_{01}, \eeq $r_*$ is the tortoise coordinate defined as \be dr_*=\fc{1}{\sqrt{f_B}f_S}dr\label{tort}, \ee and \beq V_{11}&=&f_S\lt[\fc{l(l+1)}{r^2}-\fc{3(r_B^2(13r_S-9r)+16r_S r^2)}{16f_B r^5}\rt]\nn\\ &+&f_S\fc{3r_B (2r-7r_S)}{4f_B r^4}, \label{V11}\\ V_{12}&=&-\fc{2i f_S f_B^{1/4}}{el(l+1)r^3}\sqrt{3r_B r_S G_4}\omega , \label{V12} \\ V_{21}&=&\fc{i\sqrt{3 r_B r_S}e f_S}{2\sqrt{G_4}\omega f_B^{1/4}r^3}(l-1)l(l+1)(l+2), \label{V21}\\ V_{22}&=&f_S\lt[\fc{3r_B r_S}{r^4}+\fc{l(l+1)}{r^2}\rt]\label{V22}. \eeq The details of the derivation of the master equations~\eqref{mastereq1} and \eqref{mastereq2} are given in Appendix~\ref{appendix}. Note that when the magnetic charge $Q_\text{m}$ vanishes, i.e., $r_B$ approaches zero, the gravitational perturbation $\psi_\text{g}$ and the magnetic field perturbation $\psi_\text{m}$ decouple. Furthermore, the potential $V_{11}$ then reduces to the Regge-Wheeler potential for the odd-parity gravitational perturbation of the Schwarzschild black hole. Besides, the parameters $e$ and $G_4$ do not affect the quasinormal modes. To see this, we can redefine \beq \tilde\psi_\text{m}\equiv \fc{\sqrt{G_4}}{e}\psi_\text{m} \eeq to eliminate the parameters $e$ and $G_4$ in Eqs.~\eqref{mastereq1} and \eqref{mastereq2}. The corresponding potentials are \beq \tilde{V}_{12}&=&-\fc{2i f_S f_B^{1/4}}{l(l+1)r^3}\sqrt{3r_B r_S}\omega , \label{V12ok}\\ \tilde{V}_{21}&=&\fc{i\sqrt{3 r_B r_S}f_S}{2\omega f_B^{1/4}r^3}(l-1)l(l+1)(l+2).\label{V21ok} \eeq In the following, we use the redefined quantities but omit the tilde.
\section{Quasinormal modes}\label{QNM}
In this section we solve the master perturbation equations \eqref{mastereq1} and \eqref{mastereq2} to obtain the frequencies of the QNMs. We focus on the QNMs of the solution~\eqref{MQ2}, because it is free of the Gregory-Laflamme instability. We know from Eq.~\eqref{Mdelta} that the range of the magnetic charge $Q_\text{m}$ is $[0,\sqrt{\fc{3}{2}G_4}M]$. The range of the electric charge of the Reissner-Nordstr\"{o}m (RN) black hole is $[0,\sqrt{G_4} M]$, so the allowed range of the magnetic charge here is larger than that of the RN electric charge. Note that we only study the charged black hole with scalar hair, that is, $r_B<r_S$. In this situation, the range of the magnetic charge $Q_\text{m}$ is $[0,2\sqrt{\fc{G_4}{3}}M]$, which is still larger than the range of the RN electric charge. The perturbation equations \eqref{mastereq1} and \eqref{mastereq2} are coupled and can be rewritten in the compact form \be \fc{d^2\mathbf{Y}}{dr_*^2}+(\omega^2-\mathbf{V})\mathbf{Y}=0, \ee where \begin{displaymath} \mathbf{Y}= \left( \begin{array}{c} \psi_\text{g} \\ \psi_\text{m} \end{array} \right) \end{displaymath} and $\mathbf{V}$ is a $2\times 2$ matrix with components given by Eqs.~\eqref{V11}, \eqref{V22}, \eqref{V12ok}, and \eqref{V21ok}.
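As a quick sanity check of the $r_B\to 0$ limit noted above, the following short \texttt{sympy} script (our own verification aid, not part of the derivation) confirms that $V_{11}$ in Eq.~\eqref{V11} reduces to the Regge-Wheeler potential when $r_B\to 0$:
\begin{verbatim}
# Verify that V11 of Eq. (V11) reduces to the Regge-Wheeler potential
# of the Schwarzschild black hole in the limit r_B -> 0.
import sympy as sp

r, rS, rB, l = sp.symbols('r r_S r_B l', positive=True)
fS = 1 - rS / r
fB = 1 - rB / r

# V11 transcribed from Eq. (V11)
V11 = (fS * (l*(l + 1)/r**2
             - 3*(rB**2*(13*rS - 9*r) + 16*rS*r**2)/(16*fB*r**5))
       + fS * 3*rB*(2*r - 7*rS)/(4*fB*r**4))

# Regge-Wheeler potential for odd-parity gravitational perturbations
V_RW = fS * (l*(l + 1)/r**2 - 3*rS/r**3)

print(sp.simplify(V11.subs(rB, 0) - V_RW))  # prints 0
\end{verbatim}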
The physical boundary conditions for the QNM problem are purely ingoing waves at the event horizon, \be Y_n\sim b_n e^{-i\omega r_*},~~r_*\rightarrow -\infty, \ee and purely outgoing waves at spatial infinity, \be Y_n\sim B_n e^{i\omega r_*},~~r_*\rightarrow +\infty, \ee where $Y_n$ is the $n$-th component of $\mathbf{Y}$, and $b_n$ and $B_n$ are the coefficients of the boundary conditions. With these boundary conditions, solving for the QNM frequencies is an eigenvalue problem. In this paper, we use the matrix-valued direct integration method; more details can be found in Ref.~\cite{Pani:2013pma}.
\begin{table}[!htb] \begin{center} \begin{tabular}{|c|c|c|c|} \hline $Q_\text{m}/M$ &~Charged~BH~ & $Q/M$~ &~~~~RN~BH \\ \hline ~~ & $\omega_\text{R} M$~~~~$\omega_\text{I} M$~& &$\omega_\text{R} M$~~~~~$\omega_\text{I} M$~\\ \hline 0 & 0.37367~~~~-0.088962 &0& 0.37367~~~~-0.088962\\ \hline 0.2 & 0.37474~~~~-0.089081 &0.2& 0.37474~~~~-0.089075\\ \hline 0.4 & 0.37848~~~~-0.089429 &0.4& 0.37844~~~~-0.089398\\ \hline 0.6 & 0.38641~~~~-0.089982 &0.6& 0.38622~~~~-0.089814\\ \hline 0.8 & 0.40163~~~~-0.090500 &0.8& 0.40122~~~~-0.089643\\ \hline 1 & 0.43219~~~~-0.089574 &0.9999& 0.43134~~~~-0.083460\\ \hline \end{tabular}\\ \caption{The fundamental QNMs for the gravitational field $\psi_\text{g}$ of the charged black hole with scalar hair and the RN black hole for different values of the magnetic charge $Q_\text{m}$ and electric charge $Q$. The angular number $l$ is set to $l=2$.} \label{realpart} \end{center} \end{table}
\begin{table}[!htb] \begin{center} \begin{tabular}{|c|c|c|c|} \hline $Q_\text{m}/M$ &~Charged~BH~ & $Q/M$~ &~~~~RN~BH \\ \hline ~~ & $\omega_\text{R} M$~~~~$\omega_\text{I} M$~& &$\omega_\text{R} M$~~~~~$\omega_\text{I} M$~\\ \hline 0 & 0.45715~~~~-0.094784 &0& 0.45759~~~~-0.095004\\ \hline 0.2 & 0.46295~~~~-0.095377 &0.2& 0.46297~~~~-0.095373\\ \hline 0.4 & 0.47969~~~~-0.096462 &0.4& 0.47993~~~~-0.096442\\ \hline 0.6 & 0.51053~~~~-0.098155 &0.6& 0.51201~~~~-0.098017\\ \hline 0.8 & 0.56316~~~~ -0.10008 &0.8& 0.57013~~~~-0.099069\\ \hline 1 & 0.66161~~~~-0.099878 &0.9999& 0.70430~~~~-0.085973\\ \hline \end{tabular}\\ \caption{The fundamental QNMs for the magnetic field $\psi_\text{m}$ of the charged black hole with scalar hair and the electric field $\psi_\text{e}$ of the RN black hole for different values of the magnetic charge $Q_\text{m}$ and electric charge $Q$. The angular number $l$ is set to $l=2$.} \label{impart} \end{center} \end{table}
\begin{figure*}[htb] \begin{center} \subfigure[~The real parts of the QNMs for the charged black hole with scalar hair.] {\label{TSreals} \includegraphics[width=5.5cm]{TSreals.eps}} \subfigure[~The real parts of the QNMs for the RN black hole.] {\label{RNreals} \includegraphics[width=5.5cm]{RNreals.eps}} \subfigure[~The imaginary parts of the QNMs for the RN black hole.] {\label{RNims} \includegraphics[width=5.5cm]{RNims.eps}} \end{center} \caption{The effects of the magnetic charge $Q_\text{m}$ of the charged black hole with scalar hair and the electric charge $Q$ of the RN black hole on the fundamental QNMs. The solid and dashed lines correspond to the QNMs of the magnetic field $\psi_\text{m}$ (or the electric field $\psi_\text{e}$) and the gravitational field $\psi_\text{g}$, respectively. The black, red, and blue lines correspond to the QNMs with $l=2$, $l=4$, and $l=6$, respectively. (a) The real parts of the QNMs for the charged black hole with scalar hair. (b) The real parts of the QNMs for the RN black hole.
(c) The imaginary parts of the QNMs for the RN black hole.} \label{TSRN} \end{figure*}
\begin{figure*}[htb] \begin{center} \subfigure[~$l=2$] {\label{TSl2im} \includegraphics[width=5.5cm]{TSl2im.eps}} \subfigure[~$l=3$] {\label{TSl3im} \includegraphics[width=5.5cm]{TSl3im.eps}} \subfigure[~$l=4$] {\label{TSl4im} \includegraphics[width=5.5cm]{TSl4im.eps}} \subfigure[~$l=5$] {\label{TSl5im} \includegraphics[width=5.5cm]{TSl5im.eps}} \subfigure[~$l=6$] {\label{NSl6Im} \includegraphics[width=5.5cm]{TSl6im.eps}} \end{center} \caption{The effect of the magnetic charge $Q_\text{m}$ on the imaginary parts of the fundamental QNMs for the charged black hole with scalar hair. The solid and dashed lines correspond to the QNMs of the magnetic field $\psi_\text{m}$ and the gravitational field $\psi_\text{g}$, respectively. The angular number $l$ is set to $l=2,3,4,5,6$.} \label{TSims} \end{figure*}
\begin{figure*}[htb] \begin{center} \subfigure[~The real parts of the QNMs with $l=7,8,9,10$.] {\label{NSl78910Re} \includegraphics[width=5.5cm]{TSreals78910.eps}} \subfigure[~The imaginary parts of the QNMs with $l=7,8$.] {\label{NSl78Im} \includegraphics[width=5.5cm]{TSim78.eps}} \subfigure[~The imaginary parts of the QNMs with $l=9,10$.] {\label{NSl910Im} \includegraphics[width=5.5cm]{TSim910.eps}} \end{center} \caption{The effect of the magnetic charge $Q_\text{m}$ on the frequencies of the fundamental QNMs for the charged black hole with scalar hair. The solid and dashed lines correspond to the QNMs of the magnetic field $\psi_\text{m}$ and the gravitational field $\psi_\text{g}$, respectively. The black, red, blue, and purple lines correspond to the QNMs with $l=7$, $l=8$, $l=9$, and $l=10$, respectively. (a) The real parts of the QNMs with $l=7,8,9,10$. (b) The imaginary parts of the QNMs with $l=7,8$. (c) The imaginary parts of the QNMs with $l=9,10$.} \label{TSl7} \end{figure*}
We solve for the fundamental QNMs numerically, since they dominate the ringdown waveform at late times. The frequencies of the fundamental QNMs of the gravitational field $\psi_\text{g}$ and the magnetic field $\psi_\text{m}$ for different values of the magnetic charge $Q_\text{m}$ with $l=2$ are shown in Tables~\ref{realpart} and \ref{impart}. When $Q_\text{m}=0$, the metric~\eqref{metric_four} reduces to the Schwarzschild metric, and the master equation~\eqref{mastereq1} reduces to the odd-parity gravitational perturbation equation of the Schwarzschild black hole in general relativity. The frequencies of the QNMs are then the same as in the Schwarzschild case. This confirms that our numerical method is valid. Note that the charge of the charged black hole with scalar hair can be seen as a dark charge, which is very different from the U(1) charge of electromagnetism of the RN black hole. Nevertheless, in this paper we would like to compare our results with those of the RN black hole. Comparing the QNMs of the charged black hole with scalar hair and the RN black hole, we can see that the differences in their numerical values are very small, so the two black holes can hardly be distinguished from gravitational wave data. Note that for the extremal RN black hole, the singularity structure of the perturbation equations is different from the nonextremal case. The QNMs of the maximally charged RN black hole were studied in Ref.~\cite{Onozawa:1995vu}; our results for the RN black hole with $Q/M=0.9999$ are taken from that paper.
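For readers who wish to reproduce these numbers, a skeletal illustration of the matrix-valued direct integration is sketched below. This is an illustrative script of our own: the parameter values, integration range, and tolerances are placeholders, and the production calculation also integrates outgoing solutions from large $r$ and locates the QNM frequencies as the complex roots of a matching determinant, following Ref.~\cite{Pani:2013pma}.
\begin{verbatim}
# Sketch of the matrix-valued direct integration for the coupled system
# (mastereq1)-(mastereq2), in units G_4 = 1. Only the ingoing integration
# from the horizon is shown; the outgoing integration from large r and
# the root finding in complex omega are omitted.
import numpy as np
from scipy.integrate import solve_ivp

l, rS, rB = 2, 2.0, 0.5          # sample parameters with r_B < r_S


def fS(r):
    return 1.0 - rS / r


def fB(r):
    return 1.0 - rB / r


def V(r, w):
    """2x2 potential matrix, Eqs. (V11), (V22), (V12ok), (V21ok)."""
    a = np.sqrt(3.0 * rB * rS)
    V11 = (fS(r) * (l*(l+1)/r**2
                    - 3*(rB**2*(13*rS - 9*r) + 16*rS*r**2)/(16*fB(r)*r**5))
           + fS(r) * 3*rB*(2*r - 7*rS)/(4*fB(r)*r**4))
    V22 = fS(r) * (3*rB*rS/r**4 + l*(l+1)/r**2)
    V12 = -2j * fS(r) * fB(r)**0.25 * a * w / (l*(l+1)*r**3)
    V21 = 1j * a * fS(r) * (l-1)*l*(l+1)*(l+2) / (2*w*fB(r)**0.25*r**3)
    return np.array([[V11, V12], [V21, V22]])


def rhs(r, y, w):
    """First-order form in r, using d/dr_* = sqrt(f_B) f_S d/dr."""
    psi, dpsi = y[:2], y[2:]               # dpsi = d psi / d r_*
    F = np.sqrt(fB(r)) * fS(r)
    return np.concatenate([dpsi / F, (V(r, w) @ psi - w**2 * psi) / F])


def ingoing_solutions(w, r0=rS*(1 + 1e-4), r1=60.0):
    """Two independent solutions with psi ~ e^{-i w r_*} at the horizon."""
    sols = []
    for seed in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
        y0 = np.concatenate([seed, -1j * w * seed]).astype(complex)
        sols.append(solve_ivp(rhs, (r0, r1), y0, args=(w,),
                              rtol=1e-8, atol=1e-10).y[:, -1])
    return sols

# example trial frequency: sols = ingoing_solutions(0.37 - 0.089j)
\end{verbatim}
The QNM frequencies are then the complex $\omega$ for which a $2\times 2$ determinant built from these ingoing solutions and the corresponding outgoing solutions vanishes at a matching radius.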
The effects of the magnetic charge $Q_\text{m}$ of the charged black hole with scalar hair and the electric charge $Q$ of the RN black hole on the fundamental QNMs are shown in Figs.~\ref{TSRN}-\ref{TSl7}. From Figs.~\ref{TSreals} and \ref{RNreals}, it can be seen that the real parts of the QNMs for both black holes increase with the magnetic charge $Q_\text{m}$ or the electric charge $Q$. The imaginary parts of the QNMs for the RN black hole first increase and then decrease as the electric charge $Q$ increases, as can be seen from Fig.~\ref{RNims}. However, the situation for the imaginary parts of the charged black hole with scalar hair is different, as shown in Fig.~\ref{TSims}. When $2\leq l\leq 6$, an interesting phenomenon appears: the number of peaks of the imaginary parts of the QNMs for the magnetic field $\psi_\text{m}$ (solid lines in Fig.~\ref{TSims}) exactly equals the value of $l$, while the number of peaks for the gravitational field $\psi_\text{g}$ (dashed lines in Fig.~\ref{TSims}) is always less than that for the magnetic field $\psi_\text{m}$. This phenomenon does not appear for the RN black hole. When $l>6$, this particular phenomenon no longer appears. We can see from Fig.~\ref{TSl7} that, when $l=7$ and $l=8$, the imaginary parts for both the magnetic field $\psi_\text{m}$ and the gravitational field $\psi_\text{g}$ increase with the magnetic charge $Q_\text{m}$. When $l=9$ and $l=10$, the imaginary parts for the magnetic field $\psi_\text{m}$ first decrease and then increase as the magnetic charge $Q_\text{m}$ increases, while the imaginary parts for the gravitational field $\psi_\text{g}$ increase with the magnetic charge $Q_\text{m}$.
\section{Conclusions}\label{conclusion}
In five-dimensional spacetime, based on the Einstein-Maxwell action~\eqref{action5}, Bah and Heidmann proposed a nonsingular black hole/topological star. It resembles a classical black hole in its macrostate geometry; more importantly, it can be constructed from type IIB string theory. Integrating out the extra dimension $y$, the five-dimensional Einstein-Maxwell theory reduces to a four-dimensional Einstein-Maxwell-dilaton theory, which supports a static spherically symmetric black hole/topological star solution with a magnetic charge.
We investigated the QNMs of the charged black hole with scalar hair by studying the linear perturbations of the gravitational and electromagnetic fields. Because of the spherical symmetry of the background spacetime, the radial parts of the perturbed fields can be separated from the angular parts, and the angular parts can be expanded in spherical harmonics. The background scalar field~\eqref{solutionphi} and metric~\eqref{metric_four} have even parity under space inversion, whereas the background magnetic field~\eqref{solutionnF} has odd parity. So the scalar perturbation and the even-parity parts of the metric perturbations couple to the odd-parity parts of the electromagnetic perturbations at linear order, and the odd-parity parts of the metric perturbations couple to the even-parity parts of the electromagnetic perturbations at linear order; we name these type-I and type-II couplings, respectively. For simplicity, we studied the type-II coupling perturbations and obtained the two coupled perturbation equations~\eqref{mastereq1} and \eqref{mastereq2}.
Because the extra dimension radius $R_y$ can be eliminated from the master equations by a transformation of the electromagnetic field $\psi_\text{m}$, it has no effect on the QNMs. Using the matrix-valued direct integration method, we computed the fundamental QNMs for both the gravitational perturbation and the magnetic field perturbation, which dominate the ringdown waveform at late times. The values of the frequencies of the fundamental QNMs for the gravitational field $\psi_\text{g}$ and the magnetic field $\psi_\text{m}$ for different values of the magnetic charge $Q_\text{m}$ with $l=2$ are shown in Tables~\ref{realpart} and \ref{impart}. The differences between the frequencies of the fundamental QNMs of the charged black hole with scalar hair and those of the RN black hole are very small, so the two black holes can hardly be distinguished using gravitational-wave data. The effects of the magnetic charge $Q_\text{m}$ of the charged black hole with scalar hair on the fundamental QNMs are shown in Figs.~\ref{TSreals},~\ref{TSims}, and~\ref{TSl7}. The real parts of the QNMs increase with the magnetic charge $Q_\text{m}$, which is similar to the RN black hole case. An interesting phenomenon, not found for the RN black hole, is that when $2\leq l\leq 6$ the number of peaks of the fundamental QNMs' imaginary parts for the magnetic field $\psi_\text{m}$ exactly equals $l$. We only studied the type-II coupling perturbations, where the scalar field does not couple to the other two fields. So we expect that the type-I coupling perturbations will give us more information about the charged black hole with scalar hair, which we will study in the future. \section{Acknowledgments} We thank Pierre Heidmann for important comments, suggestions, and discussions. This work was supported by the National Key Research and Development Program of China (Grant No. 2020YFC2201503), the National Natural Science Foundation of China (Grants No. 12147166, No. 11875151, No. 12075103, and No. 12247101), the China Postdoctoral Science Foundation (Grant No. 2021M701529), the 111 Project (Grant No. B20063), and Lanzhou City's scientific research funding subsidy to Lanzhou University. \begin{widetext} \begin{appendices} \section{Explicit perturbation equations}\label{appendix} In this appendix we give the details of how to obtain the master perturbation equations~\eqref{mastereq1} and \eqref{mastereq2}. The nonvanishing parts of the perturbed Einstein equations are the $(t,\phi)$, $(r,\phi)$, and $(\theta,\phi)$ components \beq &&2 e \Big(4 f_B \fc{r_S}{r}-f_S\fc{ r_B }{ r}-\fc{f_S r_B^2}{f_B r^2} + 4 l(l+1) + 10 f_S \fc{r_B}{r} + 8 \fc{r_S}{r} + 12 \fc{r_B r_S}{r^2}\Big)h_0-8ef_Bf_S r^2 h_0''\nn\\ &&-4 ie f_S r \omega \left(r f_B'+4 f_B\right)h_1 -8ief_Bf_S r^2 \omega h_1'=-16a\sqrt{G_4}\sqrt{f_B}f_{02}\label{eqtphi}, \eeq \beq &&8 e f_B^2 \left(r^4 \omega ^2-f_S \left(r_S (2 f_B+3 r_B)-2 r f_B r_S+(l(l+1)-2) r^2\right)\right)h_1+4i\fc{e}{r}f_B\omega(4f_B+rf_B')h_0\nn\\ &&-8ier^4f_B^2\omega h_0'=16a\sqrt{G_4}r^2f_B^{5/2}f_Sf_{12}\label{eqrphi}, \eeq \beq 2f_S h_1 f_B f_S'+f_S^2 \left(h_1 f_B'+2 f_B h_1'\right)+2 i h_0 \omega=0\label{eqthetaphi}, \eeq where the constant $a$ is defined as $a\equiv e\sqrt{3r_B r_S}$.
And the nonvanishing parts of the perturbed Maxwell equations are the $t$, $r$, and $\theta$ components \beq f_S r\left( r f_B'+4 f_{B}\right)f_{01}+2 f_B f_S r^2 f_{01}'-2l(l+1)f_{02}&=&\fc{a}{f_B r^2 \sqrt{G_4}}l(l+1)h_0,\label{eqat}\\ 2i\omega \sqrt{f_B}r^4 f_{01}+2f_S\sqrt{f_B}r^2l(l+1)f_{12}&=&-\fc{a}{\sqrt{G_4}} f_S l(l+1)h_1,\label{eqar}\\ 2 f_B^{3/2} f_S \kappa_4 r^3 (f_{12} f_S)'+\sqrt{f_B} r^3 \left(f_{12} f_S^2 f_B'+2 i f_{02} \omega \right)&=&\frac{a}{\sqrt{G_4}} \left(3 f_S h_1-f_B f_S (f_S r h_1)'- \omega r h_0\right).\label{eqatheta} \eeq Actually, among the six perturbed equations, only four are independent. Equation~\eqref{eqtphi} can be derived from Eqs.~\eqref{eqrphi},~\eqref{eqthetaphi}, and \eqref{eqatheta} with the background Einstein equation~\eqref{EFg}. Similarly, Eq.~\eqref{eqatheta} can also be obtained from Eqs.~\eqref{eqat} and \eqref{eqar}. Therefore, we can use four independent equations~\eqref{eqrphi}-\eqref{eqar} and an identity~\eqref{eqf01} to solve for the five independent variables $h_0$, $h_1$, $f_{01}$, $f_{02}$, and $f_{12}$. The variable $h_0$ can be solved from Eq.~\eqref{eqthetaphi} as \beq h_0=\fc{i}{2\omega}\lt[f_S\lt(f_S f_B'+2f_B f_S'\rt)h_1+2f_S^2 f_B h_1'\rt].\label{eqh0} \eeq Using this formula and Eqs.~\eqref{eqat} and \eqref{eqar}, we can obtain $f_{02}$ and $f_{12}$ in terms of $h_1$ and $f_{01}$ as \beq f_{02}&=&\fc{f_S r}{2l(l+1)}\lt[2rf_B f_{01}'+(4f_B+r f_B')f_{01}\rt]-\fc{i a f_S}{4r^2\sqrt{G_4}\omega \sqrt{f_B}}\lt[2f_B(h_1 f_S)'+f_S f_B' h_1\rt],\label{eqf02}\\ f_{12}&=&-\fc{i\omega r^2}{f_S (l+1)l}f_{01}-\fc{a}{2\sqrt{G_4}\sqrt{f_B}r^2}h_1\label{eqf12}. \eeq Substituting Eqs.~\eqref{eqh0}-\eqref{eqf12} into Eqs.~\eqref{eqrphi} and \eqref{eqf01}, we obtain two second-order differential equations in which $h_1$ and $f_{01}$ are coupled \beq &-&\fc{1}{2}\sqrt{f_B}f_S h_1''+\lt[\fc{\sqrt{f_B}}{2r^2}(2rf_S-3r_S)-\fc{r_B f_S}{2r^2\sqrt{f_B}}\rt]h_1'+\Big[\fc{r_B^2f_S}{8r^4f_B^{3/2}}-\fc{\omega^2}{2\sqrt{f_B}f_S}\nn\\ &-&\fc{1}{4r^4\sqrt{f_B}}\big((3r_Br_S-2(l-1)(l+2)r^2)-5r r_B f_S\big)+\fc{\sqrt{f_B}}{2r^4 f_S}(4r r_S f_S-r_S^2)\Big]h_1=\fc{2i \omega a \sqrt{G_4}}{e^2l(l+1)f_S}f_{01},\label{eqh1f01} \eeq \beq &-&\fc{r^2f_Sf_B }{l(l+1)}f_{01}''+\fc{2f_B(r_S+4rf_S)-3r_B f_S}{2l(l+1)}f_{01}'+ \lt[1-\fc{r^2\omega^2}{l(l+1)f_S}+\fc{f_S'(4rf_B+r_B)}{2l(l+1)}-\fc{f_S(5r_B+4rf_Bf_S)}{2l(l+1)r}\rt]f_{01}\nn\\ &=&-\fc{ia\sqrt{f_B}f_S^2}{2r^2\omega \sqrt{G_4}}h_1''-\fc{ia f_S[r_B f_S+f_B(3r_S-2rf_S)]}{2r^4\omega \sqrt{G_4} f_B}h_1'-\fc{ia r_B^2f_S^2}{8r^6\omega\sqrt{G_4} f_B^{3/2}}h_1\nn\\ &+&\lt[\fc{i a r_S\sqrt{f_B}(r_S-3r f_S)}{2r^6\omega\sqrt{G_4}}-\fc{i a (3r r_B f_S^2-2r^4\omega^2-3r_B r_S f_S)}{4r^6\omega\sqrt{G_4}\sqrt{f_B}}\rt]h_1.\label{eqf01h1} \eeq In order to get the Schr\"{o}dinger-like form, we need to define the following master variables \beq \psi_\text{g}\equiv f_B^{1/4}f_S\fc{1}{r}h_1,\\ \psi_\text{m}\equiv \sqrt{f_B}r^2f_{01}. \eeq In the tortoise coordinate $r_*$, Eqs.~\eqref{eqh1f01} and \eqref{eqf01h1} can be rewritten into the form of Eqs.~\eqref{mastereq1} and \eqref{mastereq2}. \end{appendices} \end{widetext}
\section{Introduction} In recent decades, Lorentz invariance has been experimentally reaffirmed to increasingly high precision \cite{kostelecky_data_2011}. Nevertheless, the breakdown in the structure of space-time eventuated by many theories of quantum gravity \cite{garay_quantum_1995,magueijo_lorentz_2002,collins_lorentz_2004} continues to motivate scrutiny of this symmetry of nature \cite{beane_constraints_2014,lambiase_lorentz_2018,brun_detecting_2019}. Of particular present interest are those efforts to reconcile Lorentz invariance with a supposed discreteness of space-time. Some success in this direction has been achieved in loop quantum gravity \cite{rovelli_reconcile_2003} and causal set theory \cite{bombelli_discreteness_2009}. In a classical setting, Regge calculus \cite{regge_general_1961} has also been championed as a discrete, generally covariant theory of gravity \cite{feinberg_lattice_1984}, though its geometries' inequivalence under gauge transformations challenges that view \cite{loll_discrete_1998}. In the present work, we develop a new formalism that reconciles Poincar\'e invariance with a discrete, but otherwise classical, universe. Our own motivation toward this effort arises from a desire to formulate algorithms that exactly conserve energy and momentum in simulations of classical physical systems. Because the space-times of algorithms are necessarily discrete, and the Noether symmetries \cite{noether_invariant_1971} they model necessarily continuous, vital conservation laws are generally broken in any first-principles simulation. This forfeiture of space-time symmetry is a central challenge of computational physics, whose resolution bears upon questions of theoretical physics as well. $\mOne$-loop theory is here introduced as a formalism for a lattice gauge theory of the Poincar\'e group, ${\mP=\mmT\rtimes SO^+(3,1)}$. We adopt an unconventional view of Poincar\'e symmetry that identifies $\mP$ as a gauge group of foreground physical fields, rather than the symmetry group of a background space-time. We correspondingly regard the lattice of $\mOne$-loop theory as a mere graph, rather than an embedding in a continuous space-time possessing dimensionality and extent. The dynamical framework we adopt relinquishes Lagrangian and Hamiltonian formalism, and instead reformulates Yang-Mills \cite{yang_conservation_1954} equations of motion directly in a discrete, gauge-invariant construct we call the $\mOne$-loop. The $\mOne$-loop generalizes the Wilson loop \cite{wilson_confinement_1974} and derives its physics from a conserved current $\mJ$, rather than a Lagrangian $\mL$ or Hamiltonian $\mH$. $\mOne$-loop dynamics are described not by pointwise-defined differential equations---${\ws{E}(\mL)=0}$ or ${\d{}{t}=\{\cdot,\mH\}}$, say---but by finite lattice loops of Lie group elements whose composition evaluates to the identity: ${[g_1\cdots g_n](\mJ)=\mOne}$. Briefly, the basic $\mOne$-loop for a gauge group $G$ is \begin{eqn} \delta\Omega\cdot J&=\mOne. \label{OneLoopFieldEqn} \end{eqn} This relation recovers its Yang-Mills counterpart in the continuous space-time limit, \begin{eqn} \delta\mD A+j&=0. \label{YangMillsFieldEqn} \end{eqn} The current $J$ and holonomy $\Omega$ of Eq.~(\ref{OneLoopFieldEqn}) are $G$-valued lattice loops, and ${\mOne\in G}$ denotes the identity.
$\delta$ in Eq.~(\ref{OneLoopFieldEqn}) is a covariant codifferential redefined as a map between $G$-valued loops, satisfying ${\delta^2\Omega=\mOne}$ and ${(\delta\alpha)^{-1}=\delta(\alpha^{-1})}$ $\forall$ $\alpha$. Thus, ${\delta J=\mOne}$ follows from Eq.~(\ref{OneLoopFieldEqn}). This $\mOne$-loop conservation law forms a lattice counterpart to the Yang-Mills relation ${\delta j=0}$. Whereas Yang-Mills theories are defined for compact, reductive gauge groups, $\mOne$-loop theory is designed for the gauge groups of reductive Cartan geometries \cite{cartan_les_1926,sharpe_differential_1997}, such as $\mP$. To construct a Poincar\'e $\mOne$-loop theory, a $\mP$-valued current $J$ will be defined. In fact, by requiring that currents transform in the adjoint representation, the $\mOne$-loop formalism uniquely determines $J$ from a mere choice of matter field. In this work, $J$ is thereby constructed from a recently defined Poincar\'e representation \cite{glasser_lifting_2019,glasser_restoring_2019}, the 5-vector $\Phi$. The $\mP$-valued holonomy $\Omega$ will likewise be formed from a Poincar\'e gauge field, $A$. Finally, leveraging ideas from Cartan geometry, we will define the operator $\delta$ in a manner comparable to the Wilson loop reconstruction of the covariant derivative $\mD$. The resulting $\mOne$-loop theory constitutes a lattice Poincar\'e gauge theory of gravity. We will demonstrate that in the torsionless continuum limit, this theory recovers Einstein's vacuum equations and its fields evolve along geodesics. In the appropriate limit, therefore, Poincar\'e $\mOne$-loop theory accords with general relativity \cite{einstein_gr_1915} in vacuum. In the presence of matter, however, torsion and angular momentum play important dynamical roles in $\mOne$-loop theory---as they do in most Poincar\'e gauge theories of gravity \cite{kibble_lorentz_1961,sciama_physical_1964,hehl_spin_1973,hehl_spin_1974,hehl_general_1976,hehl_four_1980,trautman_fiber_1980,popov_theory_1975,popov_einstein_1976,tseytlin_poincare_1982,aldrovandi_complete_1984,aldrovandi_natural_1986,aldrovandi_quantization_1988}. We shall contextualize $\mOne$-loop theory within this existing literature. In its continuum limit, $\mOne$-loop theory will be seen to recover the field equations of a less-studied Poincar\'e gauge theory \cite{popov_theory_1975,popov_einstein_1976,tseytlin_poincare_1982,aldrovandi_complete_1984,aldrovandi_natural_1986,aldrovandi_quantization_1988}. The remainder of this paper is organized as follows: section \ref{WilsonLoopSect} motivates the $\mOne$-loop as a natural generalization of the Wilson loop; section \ref{U1Warmup} further motivates $\mOne$-loop theory by studying a $U(1)$ lattice gauge theory; section \ref{Poincare1LoopTheory} introduces Poincar\'e $\mOne$-loop theory and comprises the core of this paper; section \ref{1LoopContinuumLimit} compares the continuum limit of $\mOne$-loop theory with existing gauge theories of gravity; and section \ref{conclusionSect} concludes. \section{A Motivating Aside: Dynamical Variables for Gauge Theories\label{WilsonLoopSect}} We briefly review an argument \cite{wu_concept_1975} for the naturalness of Wilson loops as dynamical variables for gauge fields. The experimentally confirmed Aharonov-Bohm effect \cite{aharonov_significance_1959} demonstrates that the 2-form Faraday tensor ${F(x)\in\Lambda^2[M,\mathfrak{u}(1)]}$ under-describes the effects of the electromagnetic gauge field.
On the other hand, the gauge field ${A(x)\in\Lambda^1[M,\mathfrak{u}(1)]}$ over-describes them: $A(x)$ can be freely gauge-transformed without physical consequence. Physical gauge-theoretic dynamical variables are therefore to be found somewhere `between' $A$ and $F$. For abelian gauge theories, the group-valued Wilson loop \cite{weyl_elektron_1929} \begin{eqn} W_C=P\exp\left[i\oint_CA(x)\right] \label{WilsonLoop} \end{eqn} satisfies both criteria of a dynamical variable; it is gauge-invariant and captures the Aharonov-Bohm effect. In non-abelian gauge theories, however, $W_C$ generally transforms nontrivially under a gauge transformation; although the path-ordering operator $P$ ensures the gauge invariance of $W_C$ at all intermediate points of the loop $C$, its basepoint $x_0$ leads $W_C$ to transform as the adjoint ${g(x_0)W_C g^{-1}(x_0)}$. As a result, the invariant dynamical variable in non-abelian gauge theories is defined by the trace of Eq.~(\ref{WilsonLoop}), that is, $\ws{Tr}[W_C]$. In the present effort, we pursue a slightly different strategy in defining physical variables. In particular, we regard as physical variables only those group-valued loops that evaluate to the identity element, $\mOne$: \begin{eqn} \mOne_C=P\exp\left[i\oint_C(A|j)(x)\right]=\mOne. \label{OneLoop} \end{eqn} In this expression, we have generalized the integrand of Eq.~(\ref{WilsonLoop}), allowing it to be either the gauge field $A(x)$ or the current $j(x)$, depending on the point ${x\in C}$. Like the gauge field, the current $j(x)$ of an arbitrary gauge theory is a $\mg$-valued 1-form \cite{bleecker_gauge_2005}. As such, $\mOne_C$ generalizes the Wilson loop to allow for its dependence on ${j(x)\in\Lambda^1[M,\mg]}$, while restricting it to be identity-valued. In what follows, we refer to a loop in the form of Eq.~(\ref{OneLoop}) as a $\mOne$-loop. When $\mOne$-loops are defined on a lattice, they serve as discrete, gauge-invariant counterparts to classical physics' pointwise-defined differential equations of motion. Unlike $W_C$, the $\mOne$-loop is a suitable physical variable for an arbitrary gauge theory; because of its identity value, $\mOne_C$ is gauge-invariant for abelian and non-abelian groups alike. $\mOne_C$ may also be inverted or cyclically permuted without penalty; neither the orientation nor the basepoint $x_0$ of $C$ affects its evaluation to $\mOne$. We emphasize two additional properties of $\mOne_C$, shared by $W_C$. First, whereas $F(x)$ is evaluated at a point in space-time, the spatio-temporal extent of the loop $C$ is non-vanishing. This motivates Wilson's exploration of gauge theories on a discrete lattice, where loops are necessarily non-vanishing and perhaps more readily defined. Second, $\mOne_C$ and $W_C$ both reveal the naturalness of working with Lie group elements rather than Lie algebra elements. For example, an observed phase difference in the Aharonov-Bohm experiment only fixes the integral in Eq.~(\ref{WilsonLoop}) up to integer multiples of ${2\pi}$. Its Lie algebra value is therefore unobservable and indeterminate, while its corresponding group element, by contrast, is fully specified. \section{Attempting a $\mOne$-loop\\$U(1)$ Lattice Gauge Theory\label{U1Warmup}} We first motivate $\mOne$-loop theory with a familiar gauge theory---scalar QED. 
We begin by recalling its classical equations of motion, derived from the Lagrangian ${\mL=-(\mD^\mu\phi)^*(\mD_\mu\phi)-m^2\phi^*\phi-\frac{1}{4}F^{\mu\nu}F_{\mu\nu}}$ in continuous ${\mR^{3,1}}$ space-time with flat metric ${g_{\mu\nu}(x)=\eta_{\mu\nu}}$ of signature ${(-\text{+++})}$: \begin{subequations} \begin{alignat}{1} \mD^\mu F_{\mu\nu}+ej_\nu&=0 \label{U1ClassicalEOM_field}\\ (\mD^\mu\mD_\mu-m^2)\phi&=0, \label{U1ClassicalEOM_matter} \end{alignat} \end{subequations} where ${F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu}$, ${\mD_\mu\phi=(\partial_\mu-ieA_\mu)\phi}$, and ${A_\mu\md x^\mu\in\Lambda^1[\mR^{3,1},\mathfrak{u}(1)]}$. Like the gauge field, the current ${j_\mu=i\left[(\mD_\mu\phi)^*\phi-\phi^*(\mD_\mu\phi)\right]}$ is a $\mathfrak{u}(1)$-valued 1-form, ${j_\mu\md x^\mu\in\Lambda^1[\mR^{3,1},\mathfrak{u}(1)]}$. For convenience, we recall that ${\mD^\mu F_{\mu\nu}=\partial^\mu F_{\mu\nu}}$ due to the trivial abelian adjoint action of ${g\in U(1)}$ on ${X\in\mathfrak{u}(1)}$, that is, ${\ws{Ad}_gX=gXg^{-1}=X}$. Our strategy requires that all dynamical equations are re-expressed as $\mOne$-loops in the form of Eq.~(\ref{OneLoop}). To that end, we first restate the gauge field equation of motion by formally exponentiating the ${\mathfrak{u}(1)}$-valued Eq.~(\ref{U1ClassicalEOM_field}): \begin{eqn} \exp\big(\mD^\mu F_{\mu\nu}+ej_\nu\big)=\mOne. \label{stepToGroupElements} \end{eqn} While identity-valued, as desired, this relation for the pointwise-defined Faraday tensor $F_{\mu\nu}$ must be further re-expressed as a loop integral of $A_\mu$ and $j_\mu$, as in Eq.~(\ref{OneLoop}). We therefore reconstruct Eq.~(\ref{stepToGroupElements}) on a hypercubic lattice ${\{\v{n}\}=\mZ^4}$, letting Greek indices $\{\alpha,\beta,\dots\}$ correspond to lattice directions $\{t,x,y,z\}$. First, we derive a discrete form of Eq.~(\ref{U1ClassicalEOM_field}) from a lattice action $S$ defined over $\mZ^4$: \begin{eqn} S=\sum\limits_{\v{n}\in\raisebox{-.8pt}{\footnotesize$\mZ^4$}}\left[-\frac{1}{4}F^{\mu\nu}\n F_{\mu\nu}\n+ej_\mu\n A^\mu\n\right], \label{DiscMaxwellAction} \end{eqn} where ${F_{\mu\nu}=\md_\mu^+A_\nu\n-\md_\nu^+A_\mu\n}$ with finite difference operator ${\md_\mu^\pm f\n=\pm(f[\v{n}\pm\hat{\mu}]-f\n)}$. For now, we regard ${j_\mu\n}$ as arbitrary and independent of $A^\nu$. Setting ${\partial S/\partial A^\nu\n=0}$, we derive the following discrete gauge field equation of motion from Eq.~(\ref{DiscMaxwellAction}): \begin{eqn} \eta^{\mu\sigma}\md^-_\mu F_{\sigma\nu}\n+ej_\nu\n=0. \label{discreteGaugeFieldEOM} \end{eqn} We now substitute the left-hand side of Eq.~(\ref{discreteGaugeFieldEOM}) into the exponent of Eq.~(\ref{stepToGroupElements}) to discover the following $\mOne$-loop on $\mZ^4$: \begin{eqn} \mOne_\nu\n&=J_\nu\n\prod\limits_{\mu\neq\nu}G^\mu_{~\nu}\n\\ &=J_\nu\n\prod\limits_{\mu\neq\nu}\Big(U^\mu_{~~\nu}\n U^{-\mu}_{~~~~\nu}\n\Big)=\mOne. \label{groupElementGaugeEOM} \end{eqn} This expression may be compared with Eq.~(\ref{OneLoop}). It is a $\mOne$-loop reformulation of Maxwell's equations in the desired form, ${[g_1\cdots g_n](j)=\mOne}$. In Eq.~(\ref{groupElementGaugeEOM}), $\nu$ is fixed and we have defined \begin{eqn} G^\mu_{~\nu}\n&=U^\mu_{~~\nu}\n U^{-\mu}_{~~~~\nu}\n\\ U^{\pm\mu}_{~~~~\nu}\n&=\exp(\eta^{\mu\mu}\log U_{\pm\mu,\nu}\n)\\ U_{\mu\nu}\n&=U_\nu\n^{-1}U_\mu[\v{n}+\hat{\nu}]^{-1}U_\nu[\v{n}+\hat{\mu}]U_\mu\n\\ U_\mu\n&=\exp\left(iA_\mu\n\right)\\ J_\mu\n&=\exp\left(iej_\mu\n\right). 
\label{DefineLatticeVariables} \end{eqn} We have used the fact that $U(1)$ is abelian to freely factor the exponentiation of Eq.~(\ref{discreteGaugeFieldEOM}). For ${\mu=t}$, we note that ${U^\mu_{~~\nu}\n=U_{\mu\nu}\n^{-1}}$. A depiction of $J_y\n$ and $G^x_{~y}\n$ is rendered in Fig.~\ref{GxyPlot}. \begin{figure}[b!] \vspace*{-60pt} \hbox{\hspace{-5pt}\includegraphics[width=88mm]{GxyLoop.pdf}} \setlength\abovecaptionskip{-35pt} \setlength\belowcaptionskip{-5pt} \caption{Noting the positive spatial signature of the metric $\eta_{\mu\nu}$, we depict ${G^x_{~y}\n=U_{xy}\n U_{-x,y}\n}$ and $J_y\n$. The current ${J_y\n}$ forms a round-trip loop along lattice edge ${[\v{n}|\v{n}+\hat{y}]}$. A $\mOne$-loop is thus formed by ${J_y\n G^t_{~y}\n G^x_{~y}\n G^z_{~y}\n=\mOne}$.} \label{GxyPlot} \end{figure} Two obstructions to a $U(1)$ $\mOne$-loop theory are now seen plain. The first is, in part, aesthetic: The metric structure of Maxwell's equations is rather shoehorned into Eq.~(\ref{DefineLatticeVariables}). Beyond our desire to recover Eq.~(\ref{discreteGaugeFieldEOM}), there is no geometric motivation for the appearance of $\eta^{\mu\mu}$ in $U^{\pm\mu}_{~~~~\nu}\n$, nor is there a readily apparent generalization of Eq.~(\ref{groupElementGaugeEOM}) for a curved metric ${g_{\mu\nu}}$. Second, although the $\mOne$-loop of Eq.~(\ref{groupElementGaugeEOM}) defines the desired dynamics for the $U(1)$ gauge field of lattice scalar QED, a complete theory must also specify dynamics for the matter field $\phi\n$ (and $j_\mu\n$ therewith). A variational Lagrangian approach can derive intuitive lattice discretizations of Eq.~(\ref{U1ClassicalEOM_matter}), such as \cite{shi_simulations_2018} \begin{eqn} \hspace{-5pt}\left[\mOne-S^{-\mu}e^{ieA_\mu\n}\right]_\mu&\left[e^{-ieA_\mu\n}S^\mu-\mOne\right]^\mu\phi\n-m^2\phi\n=0, \label{discreteEOM} \end{eqn} where ${S^{\pm\mu}\left(e^{A_\nu\n}f\n\right)=e^{A_\nu[\v{n}\pm\hat{\mu}]}f[\v{n}\pm\hat{\mu}]}$ defines the shift operator $S^{\pm\mu}$, and a Minkowski-signed sum over $\mu$ is implicit. However, while Eq.~(\ref{groupElementGaugeEOM}) formulates the gauge field equation of motion as a $\mOne$-loop, Eq.~(\ref{discreteEOM}) is not of this form. Indeed, because the matter field equation of motion---Eq.~(\ref{U1ClassicalEOM_matter})---is not Lie-algebra-valued, it offers no such re-expression. To fully define a $\mOne$-loop theory, therefore, we must find an alternative group-valued representation of matter field dynamics. A solution to this impasse is suggested by the following observation: The vanishing divergence of the energy-momentum tensor nearly enforces a system's equations of motion. This may be seen for Klein-Gordon and Dirac field theories as follows: \begin{eqn} \partial^\mu T_{\mu\nu}^\phi&=\partial^\mu\left[\partial_\mu\phi\partial_\nu\phi+\eta_{\mu\nu}\mL^\phi\right]\\ &=\partial_\nu\phi\left[\partial^2\phi-m^2\phi\right]\\ \partial^\mu T_{\mu\nu}^\psi&=\partial^\mu\left[i\bar{\psi}\gamma_\mu\partial_\nu\psi-\eta_{\mu\nu}\mL^\psi\right]\\ &=\partial_\nu\bar{\psi}\left[-i\gamma^\mu\partial_\mu\psi+m\psi\right]+\left[i\partial_\mu\bar{\psi}\gamma^\mu+m\bar{\psi}\right]\partial_\nu\psi, \label{consLawsAreEOM} \end{eqn} where we have defined ${2\mL^\phi=-\partial^\sigma\phi\partial_\sigma\phi-m^2\phi^2}$ and ${\mL^\psi=i\bar{\psi}\gamma^\sigma\partial_\sigma\psi-m\bar{\psi}\psi}$. 
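As an independent check of the scalar identity in Eq.~(\ref{consLawsAreEOM}), the following short \texttt{sympy} sketch (with symbol names of our own choosing, in the signature ${(-\text{+++})}$) verifies one component of ${\partial^\mu T^\phi_{\mu\nu}=\partial_\nu\phi\left[\partial^2\phi-m^2\phi\right]}$ symbolically:
\begin{verbatim}
import sympy as sp

t, x, y, z, m = sp.symbols('t x y z m')
X = (t, x, y, z)
eta = sp.diag(-1, 1, 1, 1)
phi = sp.Function('phi')(*X)
d = lambda f, mu: sp.diff(f, X[mu])

# Lagrangian and energy-momentum tensor of the free scalar field
L = -sp.Rational(1, 2)*(sum(eta[a, b]*d(phi, a)*d(phi, b)
                            for a in range(4) for b in range(4))
                        + m**2*phi**2)
T = [[d(phi, mu)*d(phi, nu) + eta[mu, nu]*L for nu in range(4)]
     for mu in range(4)]

nu = 1  # any fixed component; the identity holds for each nu
divT = sum(eta[mu, rho]*d(T[rho][nu], mu)
           for mu in range(4) for rho in range(4))
box_phi = sum(eta[a, b]*d(d(phi, a), b)
              for a in range(4) for b in range(4))

# vanishes identically, confirming the Klein-Gordon line above
print(sp.expand(divT - d(phi, nu)*(box_phi - m**2*phi)))  # 0
\end{verbatim}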
We conclude from Eq.~(\ref{consLawsAreEOM}) that, wherever the matter fields $\phi$ and $\psi$ are non-constant, their respective equations of motion are enforced by energy-momentum conservation---that is, by ${\mmT}$ translation symmetry. (A similar result obtains in theories of fluids when their dynamical equations are written in conservative form.) In this sense, the energy-momentum tensor contains comparable information to a theory's Lagrangian or Hamiltonian. Thus, for a gauge theory of a group ${G\supset\mmT}$, matter field dynamics may be determined by its $G$-valued conservation law. To address both aforementioned obstructions, therefore, we shall apply the preceding construction of ${U(1)}$ theory toward a $\mOne$-loop lattice gauge theory of the Poincar\'e group, ${\mP=\mmT\rtimes SO^+(3,1)}$. We will form a ten-component, Poincar\'e-valued energy-momentum $\mJ$ from a Poincar\'e representation---the 5-vector $\Phi$. In lieu of applying variational derivatives to derive Euler-Lagrange equations ${\ws{E}[\mL(\Phi)]=0}$, or a Poisson bracket to derive flows ${\dot{\Phi}=\{\Phi,\mH\}}$, we shall construct a $\mOne$-loop to discover the dynamics of $\Phi$ from $\mJ$. We thereby recover matter field dynamics from a ${\mP}$-valued $\mOne$-loop lattice gauge theory, whose gravitational dynamics will be explored in the continuum limit. \section{$\mOne$-loop Poincar\'e Gauge Theory\label{Poincare1LoopTheory}} We first define the Poincar\'e representation $\Phi$ on the hypercubic infinite lattice ${\{\v{n}\}=\mZ^4}$. We let Latin indices ${a,b,{\dots}\in\{t,x,y,z\}}$ correspond to lattice directions and let Greek indices ${\alpha,\beta,{\dots}\in\{0,1,2,3\}}$ denote internal Lorentz degrees of freedom. Following \cite{glasser_lifting_2019,glasser_restoring_2019}, we define the 5-vector $\Phi\n$ to be a 5-component vector at lattice vertex ${\v{n}\in\mZ^4}$ that gauge transforms under local Poincar\'e transformations, as follows: \begin{eqn} \Phi'\n&=g\n\tleft\Phi\n\\ &=(\Lambda,\vphi)\n\tleft\Phi\n\\[3pt] &=\left[\begin{matrix}\Lambda^\mu_{~\nu}&\v{0}\\\vphi_\nu&1\end{matrix}\right]\left[\begin{matrix}\pi^\nu\\\phi\end{matrix}\right]\\[3pt] &=\left[\begin{matrix}\Lambda^\mu_{~\nu}\pi^\nu\\\phi+\vphi_\nu\pi^\nu\end{matrix}\right]. \label{definePhiLatticeFieldAction} \end{eqn} Here, $'$ denotes a gauge transformation, $\tleft$ denotes a left group action, and $\n$ is often omitted for brevity. Thus, ${\Phi\n}$ is a real representation of the Poincar\'e group, ${\mP\rightarrow GL_5\big(\{\Phi\},\mR\big)}$, a transpose of the Bargmann representation \cite{bargmann_unitary_1954} of space-time transformations. The 5-vector formalism was previously adopted in the Duffin-Kemmer system \cite{rabin_homology_1982}; however, the gauge transformation of Eq.~(\ref{definePhiLatticeFieldAction}) has not been identified in such work. We next describe a Poincar\'e gauge field along the lattice links of $\mZ^4$. We let ${\mmp=\ws{Lie}(\mP)}$, ${\mathfrak{h}=\ws{Lie}(H)}$ and ${\mt=\ws{Lie}(\mmT)}$ denote Lie algebras, where ${H=SO^+(3,1)}$. We define the lowered-index vierbein ${e_{\mu a}\n=\eta_{\mu\nu}e^\nu_{~a}\n}$ to be a $\mt$-valued translation gauge field on the link ${[\v{n}|\v{n}+\hat{a}]}$. We likewise define the $\mathfrak{h}$-valued gauge field ${\Gamma^\mu_{~\nu a}\n}$, which couples to ${e_{\mu a}\n}$ through the group action of $\mP$.
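As a concrete illustration, the following \texttt{numpy} sketch realizes the $5\times5$ matrix action of Eq.~(\ref{definePhiLatticeFieldAction}); the particular boost, translation parameters, and helper names are arbitrary choices of ours:
\begin{verbatim}
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def poincare(Lam, varphi):
    # pack (Lambda, varphi) into the 5x5 matrix [[Lambda, 0], [varphi, 1]]
    g = np.zeros((5, 5))
    g[:4, :4] = Lam
    g[4, :4] = varphi
    g[4, 4] = 1.0
    return g

# an x-boost of rapidity 0.3; any Lorentz matrix would do
chi = 0.3
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = np.cosh(chi)
Lam[0, 1] = Lam[1, 0] = np.sinh(chi)
assert np.allclose(Lam.T @ eta @ Lam, eta)  # Lambda preserves eta

g1 = poincare(Lam, np.array([0.2, 0.0, -0.1, 0.5]))
g2 = poincare(Lam, np.array([1.0, 0.3, 0.0, 0.0]))
Phi = np.array([1.0, 0.1, 0.2, 0.3, 0.7])   # Phi = [pi^mu; phi]

# pi' = Lambda pi and phi' = phi + varphi_nu pi^nu in one product;
# acting twice agrees with acting by the group product
assert np.allclose(g1 @ (g2 @ Phi), (g1 @ g2) @ Phi)
print(g1 @ Phi)
\end{verbatim}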
To establish notation, we thereby construct a covariant derivative of the 5-vector as follows: \begin{eqn} \mD^+_a\Phi\n&=\Phi^+_a\n-\Phi\n\\ &=U_a\n^{-1}\cdot\Phi[\v{n}+\hat{a}]-\Phi\n\\ \left[\begin{matrix}\mD^+_a\pi^\mu\n\\[3pt]\mD^+_a\phi\n\end{matrix}\right]&=\left[\begin{matrix}(\pi^+_a)^\mu\n\\\phi^+_a\n\end{matrix}\right]-\left[\begin{matrix}\pi^\mu\n\\\phi\n\end{matrix}\right]\\ &=\exp\hspace{-2pt}\left[\begin{matrix}\Gamma^\mu_{~\nu a}\n & \v{0}\\e_{\nu a}\n & 0\end{matrix}\right]^{-1}\left[\begin{matrix}\pi^\nu\\\phi\end{matrix}\right][\v{n}+\hat{a}]-\left[\begin{matrix}\pi^\mu\\\phi\end{matrix}\right]\n. \label{CovDerivDefine5Vector} \end{eqn} A backward covariant derivative is analogously defined: \begin{eqn} \mD^-_a\Phi\n&=\Phi\n-\Phi^-_a\n\\ &=\Phi\n-U_a[\v{n}-\hat{a}]\cdot\Phi[\v{n}-\hat{a}]. \end{eqn} The parallel transport operator $U_a\n$ gauge transforms as usual: ${U_a'\n=g[\v{n}+\hat{a}]U_a\n g\n^{-1}}$, where ${g\in\mP}$. The $\mOne$-loop dynamics of the 5-vector must derive from a Lie-algebra-valued current ${\mJ\in\Lambda^1[\mZ^4,\mmp]}$. Since $\mOne$-loop theory forgoes a Hamiltonian or Lagrangian structure, the properties of this current must be independently defined. We therefore pause to define $\mJ$ for a general lattice gauge theory in the $\mOne$-loop formalism. \textbf{Definition:} Let ${G\rightarrow GL(V)}$ be a representation of a Lie group $G$ on a matter field ${\xi\n\in V}$ valued in vector space $V$. Let ${\{\latlink\}=V\times V\times GL(V)}$ denote the space of data determining a lattice edge in $\mZ^d$. We define the \emph{$\mOne$-loop current} ${\mJ\in\Lambda^1[\mZ^d,\mg]}$ to be a $G$-equivariant ${\text{$\mg$-valued}}$ 1-form, where ${\mg=\ws{Lie}(G)\subset\mathfrak{gl}(V)}$. In particular, ${\mJ:\{\latlink\}\rightarrow\mg}$ is required to satisfy ${\mJ\circ\Psi_g=\ws{Ad}_g\circ\mJ}$ ${\forall~g\in G}$, where $\Psi_g$ denotes the gauge transformation of a lattice edge, as follows: \begin{eqn} \begin{alignedat}{3} {\Psi_g:}~~~\{\latlink\}~~&\rightarrow&~~&\{\latlink\}\\ (\xi_1,\xi_2,U)~~&\mapsto&&(g\tleft\xi_1,~h\tleft\xi_2,~h\cdot U\cdot g^{-1}) \label{Psi_g_defn} \end{alignedat} \end{eqn} for ${h\in G}$ arbitrary. Thus, an arbitrary gauge transformation on $\mZ^4$ maps ${\mJ_a\n}$ to ${\mJ_a'\n=\mJ(\Psi_g(\latlinkna))=g\n\mJ_a\n g\n^{-1}}$, where ${\mJ_a\n=\mJ(\latlinkna)}$ and ${\latlinkna=\big(\xi\n,\xi[\v{n}+\hat{a}],U_a\n\big)\in\{\latlink\}}$. The preceding definition of the current thereby enforces an adjoint action befitting the red loop depicted in Fig.~\ref{GxyPlot}. This definition uniquely determines the $\mOne$-loop current of 5-vector lattice gauge theory. To see how, we first note that $\mJ_a\n$ can only depend on ${\big\{\Phi\n,\Phi[\v{n}+\hat{a}],U_a\n\big\}}$. Furthermore, since the $G$-equivariance of $\mJ_a\n$ precludes the appearance of ${g[\v{n}+\hat{a}]}$ in its gauge transformations, the dependence of ${\mJ_a\n}$ on ${\{\Phi[\v{n}+\hat{a}],U_a\n\}}$ is limited to the pairing ${\Phi^+_a\n=U_a\n^{-1}\Phi[\v{n}+\hat{a}]}$. (This construction accounts for the permissible arbitrariness of $h$ in the definition of $\Psi_g$ in Eq.~(\ref{Psi_g_defn}).) 
We must therefore solve for a $\mmp$-valued current ${\mJ_a\n=\mJ(\Phi,\Phi^+_a)}$ that transforms in the adjoint Poincar\'e representation---that is \begin{eqn} \ws{Ad}_{(\Lambda,\vphi)}\left[\begin{matrix}\Gamma & \v{0}\\e & 0\end{matrix}\right]&=\left[\begin{matrix}\Lambda & \v{0}\\\vphi & 1\end{matrix}\right]\left[\begin{matrix}\Gamma & \v{0}\\e & 0\end{matrix}\right]\left[\begin{matrix}\Lambda & \v{0}\\\vphi & 1\end{matrix}\right]^{-1}\\[3pt] &=\left[\begin{matrix}\Lambda\Gamma\Lambda^{-1} & \v{0}\\(\vphi\Gamma+e)\Lambda^{-1} & 0\end{matrix}\right]. \label{adjointTransf} \end{eqn} Studying the Poincar\'e transformation of $\Phi$ in Eq.~(\ref{definePhiLatticeFieldAction}), one finds that, up to a multiplicative constant, there is a unique such current: \begin{eqn} \mJ_a\n&=\left[\begin{array}{c|c}L^\mu_{~\nu a}\n & \v{0}\\[3pt]\hline T_{\nu a}\n & 0\end{array}\right]\\[3pt] &=\frac{1}{2}\left[\begin{array}{c|c}\pi^\mu\boxtimes(\pi^+_a)_\nu & \v{0} \\[2pt]\hline\vphantom{\bigg|}\pi_\nu\phi^+_a-\phi(\pi^+_a)_\nu & 0\end{array}\right]. \label{defineTheCurrent} \end{eqn} Here, ${\pi_\nu=\pi^\sigma\eta_{\sigma\nu}}$ and ${\boxtimes:\mt^*\times\mt\rightarrow\mathfrak{h}}$ is the box map \begin{eqn} x^\mu\boxtimes y_\nu&=x\boxtimes(y^T\eta)=\Big[yx^T-xy^T\Big]\eta, \label{boxproduct} \end{eqn} where $\eta$ denotes the ${4\times4}$ Minkowski matrix. ($\boxtimes$ roughly resembles the hat map of a 4-vector cross product.) It is readily confirmed that under the gauge transformation of Eq.~(\ref{definePhiLatticeFieldAction}), ${\mJ_a\n}$ transforms in the adjoint representation of Eq.~(\ref{adjointTransf}), that is, ${\mJ\circ\Psi_g=\ws{Ad}_g\circ\mJ}$ holds. We note that ${\mD_a^+\Phi}$ can be freely substituted for ${\Phi^+_a}$ in Eq.~(\ref{defineTheCurrent}) without affecting its value. In particular, ${L^\mu_{~\nu a}=\frac{1}{2}\pi^\mu\boxtimes\mD_a^+\pi_\nu}$ since ${\pi^\mu\boxtimes\pi_\nu=0}$, and ${T_{\nu a}=\frac{1}{2}(\pi_\nu\mD^+_a\phi-\phi\mD_a^+\pi_\nu)}$ by a simple cancellation. In this form, $T_{\nu a}$ more closely resembles a current equivalent to the energy-momentum ${T^\phi_{\mu\nu}}$ of scalar field theory, as we now show. We recall the two relations that define equivalence classes of nontrivial Noether currents in Lagrangian mechanics \cite{olver_applications_1986}. Two currents ${j_\mu[\phi]\cong\tilde{j}_\mu[\phi]}$ are equivalent (in the sense that their mutual conservation arises from the same Noether symmetry) if they \begin{enumerate}[label=(\roman*)] \item differ by an expression that vanishes on-shell; or \item satisfy ${\partial^\mu(j_\mu[\phi]-\tilde{j}_\mu[\phi])=0}$ off-shell.
\end{enumerate} We use both of these relations in the following calculation: \begin{eqn} T^\phi_{\mu\nu}&=\partial_\mu\phi\partial_\nu\phi-\frac{1}{2}\eta_{\mu\nu}\left(\partial^\sigma\phi\partial_\sigma\phi+m^2\phi^2\right)\\ &\stackrel{\mathclap{\normalfont\mbox{\tiny(i)}}}{\cong}\partial_\mu\phi\partial_\nu\phi-\frac{1}{2}\eta_{\mu\nu}\left(\partial^\sigma\phi\partial_\sigma\phi+\phi\partial^2\phi\right)\\ &\stackrel{\mathclap{\normalfont\mbox{\tiny(ii)}}}{\cong}\frac{1}{2}\Big[\partial_\mu\phi\partial_\nu\phi-\phi\partial_\mu\partial_\nu\phi\Big]\\ &\approx\frac{1}{2}e_\mu^{~a}\Big[\pi_\nu\partial_a\phi-\phi\partial_a\pi_\nu\Big]\approx e_\mu^{~a}T_{\nu a}|_{\mR^{3,1}} \label{scalarTis5vectorT} \end{eqn} where we define ${T_{\nu a}|_{\mR^{3,1}}=\frac{1}{2}(\pi_\nu\partial_a\phi-\phi\partial_a\pi_\nu)}$, and set ${\partial_\mu\approx e_\mu^{~a}\partial_a}$ and ${\pi_\nu\approx\partial_\nu\phi}$ in the continuous, flat space-time limit, as in \cite{glasser_lifting_2019,glasser_restoring_2019}. Therefore, ${T_{\nu a}|_{\mR^{3,1}}}$ forms a continuous analogue of the 5-vector energy-momentum ${T_{\nu a}=\frac{1}{2}(\pi_\nu\mD^+_a\phi-\phi\mD_a^+\pi_\nu)}$. We now take up our central effort: the construction of $\mP$-valued $\mOne$-loops that recover ${\delta\mD A+j=0}$ and ${\delta j=0}$ in the continuum limit. In Eq.~(\ref{defineTheCurrent}), we have already derived a group-valued current suitable for the matter field sector of such $\mOne$-loops. In particular, we define ${J:\{\latlink\}\rightarrow\mP}$ such that \begin{eqn} J(\latlinkna)=\exp(\kappa\mJ_a\n) \end{eqn} for some coupling constant $\kappa$. The first step in building the gauge field sector is similarly immediate: A realization of ${\mD A}$ suitable for $\mOne$-loop theory is found at once in the Wilson loop, a 2-form ${\Omega:\{\latsquare\}\rightarrow\mP}$ defined by ${\Omega(\latsquarenab)=U_{ab}\n}$, as specified in Eq.~(\ref{DefineLatticeVariables}). To complete this construction, we must define a $\mOne$-loop covariant codifferential operator $\delta$ and thereby assemble the desired $\mP$-valued analogue of the Yang-Mills field equation, ${\delta\Omega\cdot J=\mOne}$. We require $\delta$ to satisfy ${\delta^2\Omega=\mOne}$ and ${(\delta\alpha)^{-1}=\delta(\alpha^{-1})}$ for any form $\alpha$. The resulting $\mOne$-loop conservation law ${\delta J=\mOne}$ will then determine matter field dynamics, as anticipated in Eq.~(\ref{consLawsAreEOM}). Taken together, these relations ensure the integrability (or solvability) of $\mOne$-loop theory. It is illuminating to first consider what a $\mOne$-loop conservation law ${\delta J=\mOne}$ could look like. We assume that it holds at each ${\v{n}\in\mZ^4}$, so that currents along all eight lattice links terminating on $\v{n}$ ought to play a role. Whereas ${\mJ_a\n}$ has endpoints at $\v{n}$, ${\mJ_a[\v{n}-\hat{a}]}$ has endpoints at ${\v{n}-\hat{a}}$. To incorporate the link ${[\v{n}-\hat{a}|\v{n}]}$, therefore, we substitute ${\Phi^+_a\rightarrow\Phi^-_a}$ in Eq.~(\ref{defineTheCurrent}), yielding the $\mmp$-valued current ${\mJ_{\bar{a}}\n}$, defined as follows: \begin{eqn} \mJ_{\bar{a}}\n=\mJ_a\n\Big|_{\Phi_a^+\rightarrow\Phi_a^-}=-\ws{Ad}_{U_a{[\v{n}-\hat{a}]}}\mJ_a[\v{n}-\hat{a}]. \label{transvertedCurrent} \end{eqn} The currents ${\mJ_a[\v{n}-\hat{a}]}$ and ${\mJ_{\bar{a}}\n}$ are defined by the same data, ${\latlinkan=\big(\Phi[\v{n}-\hat{a}],\Phi\n,U_a[\v{n}-\hat{a}]\big)}$.
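This adjoint relationship is also straightforward to confirm numerically. The following sketch (with random lattice data and helper names of our own) implements the box map of Eq.~(\ref{boxproduct}) and the current of Eq.~(\ref{defineTheCurrent}), and checks Eq.~(\ref{transvertedCurrent}) on a single link:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def box(x, y):
    # box map: x box y = [y x^T - x y^T] eta
    return (np.outer(y, x) - np.outer(x, y)) @ eta

def current(Phi, Phip):
    # the p-valued current J(Phi, Phi+) on a lattice link
    pi, phi = Phi[:4], Phi[4]
    pip, phip = Phip[:4], Phip[4]
    J = np.zeros((5, 5))
    J[:4, :4] = 0.5*box(pi, pip)                        # L block
    J[4, :4] = 0.5*(phip*(eta @ pi) - phi*(eta @ pip))  # T block
    return J

rng = np.random.default_rng(0)
A = np.zeros((5, 5))                 # a random p-valued connection:
A[:4, :4] = 0.3*box(rng.normal(size=4), rng.normal(size=4))  # so(3,1)
A[4, :4] = 0.3*rng.normal(size=4)                            # translation
U = expm(A)                          # parallel transporter on the link

Phi_m = rng.normal(size=5)           # Phi at n - a
Phi_n = rng.normal(size=5)           # Phi at n

J_link = current(Phi_m, np.linalg.inv(U) @ Phi_n)       # J_a[n-a]
J_bar = current(Phi_n, U @ Phi_m)                       # J_abar[n]
assert np.allclose(J_bar, -U @ J_link @ np.linalg.inv(U))
print("adjoint relation verified on a random link")
\end{verbatim}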
Their adjoint relationship in Eq.~(\ref{transvertedCurrent}), readily confirmed using $\Phi_a^-$ in Eq.~(\ref{defineTheCurrent}), demonstrates their compatibility under parallel transport. Eq.~(\ref{transvertedCurrent}) notably resembles the transformation of the Maurer-Cartan form $\omega_G$ under the inversion map $\iota$ \cite{sharpe_differential_1997}: ${(\iota^*\omega_G)(X_g)=-\ws{Ad}_g(\omega_G(X_g))}$ $\forall$ ${X_g\in T_gG}$. \begin{figure}[h!] \hbox{\hspace{-10pt}\includegraphics[width=95mm]{Current6Loops.pdf}} \caption{Six of the eight elements comprising the $\mOne$-loop conservation law ${\delta J\n=[g_1\cdots g_8](\mJ)=\mOne}$ are depicted at vertex $\v{n}$. In a continuous space-time limit, this conservation law recovers its Yang-Mills analogue, ${\delta j=0}$.} \label{Current6Loops} \end{figure} An intuitive $\mOne$-loop analogue for ${\delta j=0}$, therefore, roughly takes the eight element form ${[g_1\cdots g_8](\mJ)=\mOne}$, as depicted in Fig.~\ref{Current6Loops}. The group elements $\{g_i\}$ must depend on $\mJ$ in a manner we shall make precise. To that end, we next revisit elements of Cartan geometry \cite{sharpe_differential_1997} and reinterpret them in the $\mOne$-loop formalism. Let us recall ${\delta:\Lambda^\ell[M,\mR]\rightarrow\Lambda^{\ell-1}[M,\mR]}$, the codifferential of an $\ell$-form on a semi-Riemannian manifold $M$: \begin{eqn} \delta\alpha=g^{ab}\big[\mathrm{i}_{e_a}(\nabla_{e_b}\alpha)\big]&=\eta^{\mu\nu}\big[\mathrm{i}_{e_\mu}(\nabla_{e_\nu}\alpha)\big]. \label{codiffDef} \end{eqn} Here, $\{e_a\}$ is any local basis of ${TM}$ with inverse metric $g^{ab}$ and $\{e_\mu\}$ any local orthonormal basis. $\mathrm{i}_X$ denotes an interior product and $\nabla_X$ a covariant derivative with respect to ${X\in\mathfrak{X}(M)}$, where ${\mathfrak{X}(M)=\Gamma(TM)}$ denotes the set of vector fields on $M$. Eq.~(\ref{codiffDef}) is equivalent to the more widely used definition, ${\delta=\pm{\star}\md{\star}}$ \cite{eells_selected_1983}. We now specialize $M$ to space-time, specifically, to the four-dimensional base space of a reductive Cartan geometry ${(P,\omega)}$. Here, ${\omega\in\Lambda^1[P,\mmp]}$ is a $\mmp$-valued Cartan connection on the right ${H}$-principal bundle ${P\xrightarrow{\pi}M}$, where ${\mmp=\mathfrak{h}\oplus\mt}$ is a direct sum of ${\ws{Ad}_H}$-modules and ${H=SO^+(3,1)}$. A Cartan connection establishes, by definition, an isomorphism ${\omega_p:T_pP\rightarrow\mmp}$ $\forall$ ${p\in P}$, such that $\omega^{-1}_p$ is everywhere well-defined. As such, the universal covariant derivative with respect to ${A\in\mmp}$ of any function ${f\in\mathcal{C}(P)}$ may be defined as follows \cite{sharpe_differential_1997}: \begin{eqn} \nabla_Af=[\omega^{-1}(A)]f. \label{univCovDeriv} \end{eqn} ${\nabla_A}$ differentiates $f$ with respect to the $\omega$-constant vector field ${\omega^{-1}(A)\in \mathfrak{X}(P)}$. In our notation, the universal $\nabla_A$ is distinguished from $\nabla_X$ merely by the distinct setting of ${A\in\mmp}$ and ${X\in\mathfrak{X}(M)}$. An intuitive picture of this machinery will facilitate a $\mOne$-loop reformulation of $\delta$. First, we reinterpret the operator ${\nabla_{e_\mu}}$ in Eq.~(\ref{codiffDef}) as the universal covariant derivative of Eq.~(\ref{univCovDeriv})---in particular, we regard ${e_\mu\in\mt\subset\mmp}$ as a Lie algebra element. For any such ${e_\mu\in\mt}$, the $\omega$-constant vector field ${\omega^{-1}(e_\mu)}$ generates, by definition, geodesics on $P$. 
${\mathrm{i}_{e_\mu}(\nabla_{e_\mu}\alpha)}$ therefore represents the change in $\alpha$ along the geodesic generated by ${\omega^{-1}(e_\mu)}$. In this way, $\delta\alpha$ sums the change in $\alpha$ over orthonormal geodesics, weighted by a metric factor as in Eq.~(\ref{codiffDef}). In the $\mOne$-loop formalism, this metric structure is not provided by the base manifold (or lattice) but by the Lie algebra $\mmp$. In particular, we introduce the following nondegenerate, ${\ws{Ad}_H}$-invariant metric ${\langle \cdot,\cdot\rangle_\mmp}$ on $\mmp$: \begin{eqn} \langle A,B\rangle_\mmp&=\ws{Tr}\left(A\ms{\eta}B^T\ms{\eta}\right)\hspace{24pt}\text{where }\ms{\eta}=\left[\begin{matrix}\eta & \v{0}\\ \v{0} & 1\end{matrix}\right]\in\mR^{5\times5}\\ &=\ws{Tr}\left(\Gamma_A\eta\Gamma_B^T\eta\right)+e_A\eta e_B^T \label{PoincareMetric} \end{eqn} $\forall$ ${A,B\in\mmp}$. Here, ${A=(\Gamma_A,e_A)\in\mmp}$ denotes a matrix Lie algebra element of the form appearing in Eqs.~(\ref{CovDerivDefine5Vector}) and (\ref{adjointTransf}), with $e_A$ a row vector. $e_A\eta e_B^T$ may be recognized as the semi-Riemannian metric. We now construct the $\mOne$-loop operator $\delta$. We first define a discrete Cartan connection ${\omega:\{\latlink\}\rightarrow\mmp}$ such that ${\exp(\omega_a\n)=U_a\n\in\mP}$. Since $\delta$ aggregates over orthonormal geodesics, we choose an arbitrary basis $\{e_\mu\}$ of ${\mt\subset\mmp}$ that is orthonormal with respect to ${\langle\cdot,\cdot\rangle_\mmp}$. We further define the neighborhood $\mathcal{N}_\v{n}$ of $\v{n}$, comprising $\v{n}$ and its eight nearest neighbors in ${\mZ^4}$. Now, $\mathcal{N}_\v{n}$ is said to be \emph{rectified} if each connection ${\{\omega_a\n,\omega_{\bar{a}}\n\}}$ evaluates to a distinct basis vector in ${\{\pm e_\mu\}}$, e.g. ${\omega_x\n=-\omega_{\bar{x}}\n=e_\mu}$. To rectify $\mathcal{N}_\v{n}$, we apply suitable gauge transformations at each neighbor of $\v{n}$. We require, however, that the chosen transformations ${\{g[\v{n}\pm\hat{a}]\in\mP\}}$ preserve the `isomorphism' of $\omega$ on $\mathcal{N}_\v{n}$. In effect, the vierbeins ${e^\mu_{~a}\n}$ and ${e^\mu_{~\bar{a}}\n}$ should remain nonsingular as they are smoothly rectified toward ($\pm$) the identity matrix. Concretely, this procedure yields transformed comparators of the form ${U_a'\n=g[\v{n}+\hat{a}]U_a\n=\exp(e_\mu)}$, where ${g\n=\mOne}$. Any matter or gauge field data that has already been defined within or adjoining $\mathcal{N}_\v{n}$ must also be gauge-transformed accordingly; therefore, not all lattice neighborhoods can be rectified simultaneously. We note that data as yet undefined need not (and of course cannot) be transformed in this way. We shall denote a rectified neighborhood by ${\bar{\mathcal{N}}_\v{n}}$. Crucially, we observe that $\bar{\mathcal{N}}_\v{n}$ defines a bijection, ${r:\{\pm\mu\}\rightarrow\{a,\bar{a}\}}$. At last, we define ${\delta:\Lambda^\ell[\bar{\mathcal{N}}_\v{n},\mP]\rightarrow\Lambda^{\ell-1}[\bar{\mathcal{N}}_\v{n},\mP]}$ as a map between rectified $\mP$-valued forms. A rectified form ${\alpha\in\Lambda^\ell[\bar{\mathcal{N}}_\v{n},\mP]}$ is a form defined on a rectified neighborhood that transforms under a gauge transformation as ${\alpha'\n=g\n\alpha\n g\n^{-1}}$. (Intuitively, a rectified form is a closed loop with endpoints at $\v{n}$.) Clearly, $J$ and $\Omega$ are rectified forms on $\bar{\mathcal{N}}_\v{n}$, while ${U_a=\exp(\omega_a)}$ is not.
$\delta$ is now readily defined: \begin{eqn} \delta J\n&=\exp\sum\limits_{\nu\in\{\pm\mu\}}\log J^\nu\n\\ \delta\Omega_b\n&=\exp\sum\limits_{\nu\in\{\pm\mu\}}\log\Omega^\nu_{~b}\n. \label{deltaOneLoops} \end{eqn} \begin{figure}[b!] \vspace*{-40pt} \hbox{\includegraphics[width=89mm]{OneLoop_dOJ.pdf}} \setlength\abovecaptionskip{-15pt} \setlength\belowcaptionskip{-5pt} \caption{Four of the six holonomies comprising the $\mOne$-loop ${\delta\Omega_t\n J_t\n=\mOne}$ are depicted with $J_t\n$. ($U_{zt}$ and $U_{\bar{z}t}$ are not shown.) $\delta\Omega_t\n$ can be regarded as a `simultaneous' multiplication of the six holonomies adjoining lattice edge ${[\v{n}|\v{n}+\hat{t}]}$.} \label{OneLoopPlot} \end{figure} The notation of Eq.~(\ref{deltaOneLoops}) requires some clarification. $\exp$ and $\log$ denote matrix exponentials and logarithms, respectively. As in Eq.~(\ref{DefineLatticeVariables}), a raised index indicates a metric factor, such that ${\log J^\mu=\eta^{\mu\mu}\log J_\mu}$. (This metric factor is now seen to arise from its associated $e_\mu$ geodesic.) Furthermore, $J_{\mu}\n$ denotes ${J_{r(\mu)}\n}$, the current on a single link in $\bar{\mathcal{N}}_\v{n}$, as determined by the bijection $r$. Similarly, ${\Omega_{\mu b}\n}$ denotes the holonomy ${\Omega_{r(\mu)b}\n}$ on a single plaquette. As seen in Eq.~(\ref{transvertedCurrent}), the `backward' currents ${\{\mJ_{\bar{a}}\}}$ are implicitly negated, as are the `reversed' holonomies $\{\log\Omega_{\bar{a}b}\}$. Thus, the intuitive picture of ${\delta J}$ (or ${\delta\Omega}$) as the metric-weighted change in $J$ (or $\Omega$) along geodesics is realized. The use of the logarithm in the definition of $\delta$ enables the `simultaneous' multiplication of non-abelian group elements---without preferential ordering. $\delta$ of a rectified 1-form (e.g. $\delta J$) can therefore be imagined as a simultaneous contraction of loops along all edges adjoining a vertex, and $\delta$ of a rectified 2-form (e.g. $\delta\Omega$) as the simultaneous contraction of loops along all plaquettes adjoining an edge. (See Figs.~\ref{Current6Loops} and \ref{OneLoopPlot}.) The absence of ordering in these multiplications ensures that ${\delta^2\Omega=\mOne}$ and ${(\delta\alpha)^{-1}=\delta(\alpha^{-1})}$, properties of $\delta$ that are readily verified with Eq.~(\ref{deltaOneLoops}). The basic $\mOne$-loop of Eq.~(\ref{OneLoopFieldEqn}) is now completely defined. Lattice fields are thus evolved by solving the $\mOne$-loops ${\delta\Omega_a\cdot J_a=\mOne}$ and ${\delta J=\mOne}$ for their unknown data. We assume that this evolution proceeds time-slice by time-slice, and is therefore realizable by the following iterative algorithm: \begin{enumerate}[label=(\roman*)] \item Self-consistently initialize ${\Phi\n}$, ${\Phi[\v{n}+\hat{t}\hspace{1pt}]}$, ${\omega_a\n}$ and ${\omega_b[\v{n}+\hat{t}\hspace{1pt}]}$ $\forall$ ${\v{n}\in\{n_t=0\}}$, ${a\in\{t,\v{x}\}}$ and ${b\in\{\v{x}\}}$. \item Since no $\mOne$-loop is completed by defining a gauge field along a temporal link ${[n_t=1|n_t=2]}$, any such link may be freely specified. Thus, assign the temporal gauge: ${\omega_t\n=e_0}$ $\forall$ ${\v{n}\in\{n_t=1\}}$. \item Solve ${\delta J\n=\mOne}$ for $\Phi[\v{n}+\hat{t}\hspace{1pt}]$ $\forall$ ${\v{n}\in\{n_t=1\}}$. \item Solve ${\delta\Omega_b\n J_b\n=\mOne}$ for ${\omega_b[\v{n}+\hat{t}\hspace{1pt}]}$ $\forall$ ${\v{n}\in\{n_t=1\}}$ and ${b\in\{\v{x}\}}$. \item Return to (ii), assigning ${\omega_t\n=e_0}$ $\forall$ ${\v{n}\in\{n_t=2\}}$. 
\end{enumerate} A conceptually straightforward approach to calculating $\delta$ in steps (iii) and (iv) is to rectify a maximal set of disjoint neighborhoods ${\bar{\mathcal{M}}=\sqcup_\v{n}\{\bar{\mathcal{N}}_\v{n}\}}$ on ${\mZ^4|_{n_t=1}}$, solve for rectified forms on $\bar{\mathcal{M}}$, and then repeat for the as-yet-unrectified neighborhoods on ${\mZ^4|_{n_t=1}\backslash\bar{\mathcal{M}}}$. We note, however, that rectified forms on a neighborhood ${\mathcal{N}_\v{n}}$ are in fact invariant under the `$\v{n}$-adjacent' transformations ${\{g[\v{n}\pm\hat{a}]\}}$ (since their loops terminate at $\v{n}$). In principle, therefore, a more streamlined approach could solve all neighborhoods of $\mZ^4$ in parallel as if they were rectified, and afterward resolve any mismatches of gauge. We observe that the preceding algorithm maximally leverages the gauge-invariance of the $\mOne$-loop; the rectification of ${\mathcal{N}_\v{n}}$ by $\delta$ is made permissible by the gauge invariance of ${\delta J=\mOne}$ and ${\delta\Omega_b\n J_b\n=\mOne}$. Let us consider the solvability of this algorithm. A self-consistent initialization of step (i) requires that ${\delta\Omega_t\n J_t\n=\mOne}$ $\forall$ ${\v{n}\in\{n_t=0\}}$. By specifying ${J_t\n\in\mP}$ first, and then the matter fields ${\Phi\n}$ and ${\Phi^+_t\n}$ comprising it, various suitable initial conditions are readily found. All steps of the algorithm are then immediately solvable, except perhaps step (iii). We note, however, that Eq.~(\ref{defineTheCurrent}) is linear in ${\Phi^+_a}$. Therefore, since every leg of ${\delta J\n}$ shares the same $\Phi\n$, a solution ${\Phi^+_t\n}$ to ${\delta J\n=\mOne}$ must exist. For completeness, we nevertheless note the following conditions on ${\Phi=[\pi^\mu;\phi]}$ necessary for the existence of a solution ${\Phi^+_t}$ to ${\mJ(\Phi,\Phi^+_t)=({\Gamma}^\mu_{~\nu},{e}_\nu)}$: \begin{eqn} \pi^\mu\boxtimes{e}_\nu+\phi{\Gamma}^\mu_{~\nu}&=0\\ \pi^\sigma{\Gamma}^{\tau\mu}+\pi^\tau{\Gamma}^{\mu\sigma}+\pi^\mu{\Gamma}^{\sigma\tau}&=0, \label{consConditions} \end{eqn} where ${({\Gamma}^\mu_{~\nu},{e}_\nu)\in\mmp}$ and ${{\Gamma}^{\sigma\tau}={\Gamma}^\sigma_{~\nu}\eta^{\nu\tau}}$. Furthermore, when the representation $\Phi$ is fully specified, the preceding algorithm is not only solvable but uniquely determined---up to arbitrary choices of gauge. In particular, once the mass and `time direction' of $\Phi$ are fixed (such that ${\pi^\mu\pi_\mu+m^2=0}$ and ${\pi^0>0}$ $\forall$ $\v{n}$, for example), the conservation law ${\delta J=\mOne}$ fully determines its evolution. Likewise, ${\delta\Omega\cdot J=\mOne}$ uniquely determines $\omega_a$. We shall leave a more robust examination of the dynamics of $\Phi$ to future work. For now, having described an algorithm for the evolution of Poincar\'e $\mOne$-loop theory, we examine its physics in the continuum limit. \section{Gravity in the $\mOne$-loop Formalism\label{1LoopContinuumLimit}} We consider the continuum limit of $\mOne$-loop Poincar\'e lattice gauge theory. We denote our gauge field by ${\omega_a\n=A_a\n\in\mmp}$ and define its comparators with a lattice parameter $\Delta$, that is: ${U_a\n=\exp(\Delta A_a\n)}$. Applying the BCH formula---see \cite{kogut_introduction_1979} Eq.~(8.7)---and expanding gauge fields at lattice points away from $\v{n}$---e.g. 
${A_b[\v{n}\pm\hat{a}]=[A_b\pm\Delta\partial_aA_b+\frac{\Delta^2}{2}\partial_a^2A_b+\mO(\Delta^3)]}$---we find, in the ${\Delta\rightarrow0}$ limit: \begin{eqn} \log U_{ab}\n&=\Delta^2F_{ab}+\mO(\Delta^3)\\ \log U_{\bar{a}b}\n+\log U_{ab}\n&=\Delta^3\mD_aF_{ab}+\mO(\Delta^4), \label{DeltaExpansionsOfUab} \end{eqn} where ${F_{ab}=\partial_{[a}A_{b]}-[A_a,A_b]}$ and ${\mD_a=\partial_a-[A_a,\cdot]}$. (Note, no index summation is implied in Eq.~(\ref{DeltaExpansionsOfUab}); we omit the conventional factor of $\frac{1}{2}$ in our notation for the antisymmetrization of indices; and the sign conventions in ${F_{ab}}$ and $\mD_a$ arise because the gauge field has a left action, $\tleft$.) Computing the Lie brackets in the definitions of $F_{ab}$ and $\mD_a$, we explicitly evaluate the fields of Eq.~(\ref{DeltaExpansionsOfUab}) for our $\mmp$-valued connection as follows: \begin{subequations} \begin{alignat}{1} \hspace{-5pt}A_a&=\left[\begin{array}{c|c}\Gamma^\mu_{~\nu a} & \v{0}\\\hline e_{\nu a} & 0\end{array} \right]\label{PoincareA}\\ \hspace{-5pt}F_{ab}&=\left[\begin{array}{c|c}F^\mu_{~\nu ab} & \v{0}\\\hline F_{\nu ab} & 0\end{array}\right]=\left[\begin{array}{c|c}\partial_{[a|}\Gamma^\mu_{~\nu |b]}-\Gamma^\mu_{~\sigma[a|}\Gamma^\sigma_{~\nu|b]} & \v{0}\\\hline\partial_{[a|}e_{\nu|b]}-e_{\sigma[a|}\Gamma^\sigma_{~\nu|b]} & 0\end{array}\right]\label{PoincareF}\\ \hspace{-5pt}\mD_cF_{ab}&=\left[\begin{array}{c|c}\partial_cF^\mu_{~\nu ab}-\Gamma^\mu_{~\sigma c}F^\sigma_{~\nu ab}+F^\mu_{~\sigma ab}\Gamma^\sigma_{~\nu c} & \v{0}\\\hline\partial_cF_{\nu ab}-e_{\sigma c}F^\sigma_{~\nu ab}+F_{\sigma ab}\Gamma^\sigma_{~\nu c} & 0\end{array}\right]. \label{PoincareDF} \end{alignat} \end{subequations} We now substitute Eqs.~(\ref{deltaOneLoops}), (\ref{DeltaExpansionsOfUab}) and (\ref{PoincareDF}) into the $\mOne$-loop ${\delta\Omega_b\n J_b\n=\mOne}$ of Eq.~(\ref{OneLoopFieldEqn}), keeping terms to least order in $\Delta$. Working on $\bar{\mathcal{N}}_\v{n}$, we thus discover \begin{eqn} \mD^aF_{ab}+\kappa\mJ_b=\left[\begin{array}{c|c}\partial^aF^\mu_{~\nu ab}+\kappa L^\mu_{~\nu b} & \v{0}\\\hline\partial^aF_{\nu ab}-e_\sigma^{~a}F^\sigma_{~\nu ab}+\kappa T_{\nu b} & 0\end{array}\right]&=\mZero, \label{continuum1Loop5VT} \end{eqn} where we have set ${\Gamma^\mu_{~\nu a}\n=0}$ and ${g^{ab}\n=\eta^{ab}}$. (In the continuum limit, rectification resembles a local application of Riemann normal coordinates.) ${L^\mu_{~\nu b}}$ and $T_{\nu b}$ in Eq.~(\ref{continuum1Loop5VT}) are assumed to be in the ${\Delta\rightarrow0}$ limit. Restoring $\Gamma^\mu_{~\nu a}$, we may re-express the field equations of Eq.~(\ref{continuum1Loop5VT}) more schematically as \begin{eqn} \partial R-[\Gamma,R]+\kappa L&=0\\ \partial S-[e,R]-[\Gamma,S]+\kappa T&=0 \label{schematicFieldEq} \end{eqn} where $R$ (i.e. $F^\mu_{~\nu ab}$) and $S$ (i.e. $F_{\nu ab}$) roughly represent space-time curvature and torsion, respectively---an interpretation we shall justify in Eq.~(\ref{FinGRLimit}). We emphasize that these field equations comprise the continuous limit of the well-posed discrete algorithm of the previous section. Let us compare this result with existing gauge theories of gravity. The earliest attempt at a modern gauge theory of gravity was made in 1955 by Utiyama \cite{utiyama_invariant_1956}, who identified the Lorentz group as the relevant gauge group. 
${SO^+(3,1)}$ is an instinctive fit for gravity, not least because the Lorentz field strength $F^\mu_{~\nu ab}$ essentially reproduces the Riemann tensor of curved space-time, as in Eq.~(\ref{FinGRLimit}). Utiyama's formalism appears to suggest \cite{hammond_torsion_2002}, however, that the sole Noether current associated with gravity is angular momentum ($L$)---a result that perhaps underrates energy-momentum ($T$) as a source of gravitation. Subsequent efforts were made to incorporate a more complete description of the gauge symmetries and conserved quantities of gravity. An examination of the literature reveals that, since the 1960s, at least two parallel tracks developed in Poincar\'e gauge theories of gravity. These might be called the L (Lagrangian) track \cite{kibble_lorentz_1961,sciama_physical_1964,hehl_spin_1973,hehl_spin_1974,hehl_general_1976,hehl_four_1980,trautman_fiber_1980} and the YM (Yang-Mills) track \cite{popov_theory_1975,popov_einstein_1976,tseytlin_poincare_1982,aldrovandi_complete_1984,aldrovandi_natural_1986,aldrovandi_quantization_1988}. The widely studied L-track originated in the 1960s. Kibble \cite{kibble_lorentz_1961} and Sciama \cite{sciama_physical_1964} extended Utiyama's gauge theory to the Poincar\'e group, yielding Einstein-Cartan-Sciama-Kibble gravity, or $U_4$ theory \cite{hehl_general_1976}. While the Poincar\'e gauge field curvatures of $U_4$ theory are identical to those of Eq.~(\ref{PoincareF}), the matter couplings of its field equations differ considerably from those of Eq.~(\ref{schematicFieldEq}). In its simplest form, $U_4$ theory couples angular momentum ($L$) not with curvature ($R$), but with a non-propagating torsion ($S$). Somewhat unexpectedly, therefore, angular momentum is coupled in $U_4$ theory to the gauge field curvature associated with the translation subgroup ${\mmT\subset\mP}$. The L-track hews to a Lagrangian formalism and, in all of its manifestations, derives from the \emph{terra firma} of a variational principle. The YM-track originated in the 1970s with an attempt by Popov and Daikhin \cite{popov_theory_1975,popov_einstein_1976} to derive a more orthodox gauge theory of gravitation. This branch of Poincar\'e gauge theory is of particular relevance here, because its field equations \cite{tseytlin_poincare_1982,aldrovandi_complete_1984} are precisely recovered in the continuum limit of Poincar\'e $\mOne$-loop theory, as derived in Eqs.~(\ref{continuum1Loop5VT})-(\ref{schematicFieldEq}). Unlike those of the L-track, these field equations couple a propagating torsion ($\partial S$) to energy-momentum ($T$). The YM-track has proven to be underivable from a Lagrangian formalism \cite{tseytlin_poincare_1982,aldrovandi_complete_1984,aldrovandi_natural_1986,aldrovandi_quantization_1988}. It is perhaps unsurprising, then, that a new dynamical formalism such as $\mOne$-loop theory might, in its continuum limit, rediscover it. We shall further characterize key results of the YM-track in our concluding discussion. For now, having contextualized the continuum limit of Poincar\'e $\mOne$-loop theory, we proceed to demonstrate its recovery of Einstein's vacuum equations. We take a general relativistic (GR) limit of Eq.~(\ref{continuum1Loop5VT}) by imposing two additional assumptions upon it, namely, metric compatibility and zero torsion. 
The former---${\mD_cg_{ab}=0}$---may be established by defining a vanishing covariant derivative of the translation gauge field \cite{blagojevic_gravitation_2002}\cmmnt{Eq.~(3.43)}: \begin{eqn} 0&=\mD_ae_{\mu b}\\ &=\partial_ae_{\mu b}+\Gamma^\sigma_{~\mu a}e_{\sigma b}+\Gamma^c_{~ba}e_{\mu c}. \label{CovDivE} \end{eqn} Here, we have introduced the affine connection $\Gamma^c_{~ba}$, whose degrees of freedom are not independent and are fixed in terms of the Poincar\'e gauge fields by Eq.~(\ref{CovDivE}). The Riemann tensor is then defined as usual in terms of this affine connection: \begin{eqn} R^c_{~dab}=\partial_{[a|}\Gamma^c_{~d|b]}-\Gamma^c_{~e[a|}\Gamma^e_{~d|b]}. \end{eqn} The latter assumption, zero torsion, is defined as follows: \begin{eqn} S^c_{~ab}=\Gamma^c_{~[ab]}=0. \label{ZeroTorsion} \end{eqn} We now substitute Eqs.~(\ref{CovDivE})-(\ref{ZeroTorsion}) to eliminate the Lorentz gauge field $\Gamma^\mu_{~\nu a}$ in Eq.~(\ref{PoincareF}). Simplifying, we find that in the GR limit \begin{eqn} F^\mu_{~\nu ab}&=e^{\mu d}e_{\nu c}R^c_{~dba}\\ F_{\nu ab}&=e_{\nu c}S^c_{~ab}=0, \label{FinGRLimit} \end{eqn} where ${e^\mu_{~b}e_\mu^{~a}=\delta^a_b}$ and ${g_{ab}=e^\mu_{~a}\eta_{\mu\nu}e^\nu_{~b}}$. Therefore, the Riemann and torsion tensors are closely related to the gauge field curvatures defined in Eq.~(\ref{PoincareF}), as desired. Since ${F^{\mu\nu}_{~~ab}}$ is antisymmetric in its first two and last two indices, it further follows from Eq.~(\ref{FinGRLimit}) that, in the GR limit, $R^{cd}_{~~ab}$ is as well. Finally, substituting Eq.~(\ref{FinGRLimit}) into the translation components of Eq.~(\ref{continuum1Loop5VT}) and setting ${T_{\nu b}=0}$, we thus recover Einstein's vacuum equations \begin{eqn} e_{\nu c}R^{ca}_{~~ba}=0, \label{EinsteinVacEqn} \end{eqn} as desired. \section{Discussion and Conclusions\label{conclusionSect}} The $\mOne$-loop formalism has been demonstrated to successfully define a lattice gauge theory of the Poincar\'e group. By reinterpreting $\mP$ as an internal gauge group, $\mOne$-loop theory preserves Poincar\'e symmetry on a discrete lattice, and recovers general relativity in its torsionless, continuum vacuum limit. This new formalism comprises several technical innovations: \begin{enumerate}[label=(\roman*)] \item the $\mOne$-loop of Eq.~(\ref{OneLoop})---a relative of the Wilson loop that reconstitutes differential equations of motion; \item the 5-vector $\Phi$ of Eq.~(\ref{definePhiLatticeFieldAction})---a new representation of the Poincar\'e group; \item the definition of $\mOne$-loop current, which uniquely determines the Poincar\'e current ${\mJ_a}$ of Eq.~(\ref{defineTheCurrent}); and \item the lattice covariant codifferential $\delta$ of Eq.~(\ref{deltaOneLoops}), motivated by Cartan geometry. \end{enumerate} The dynamics of the resulting Poincar\'e gauge theory are determined by the basic $\mOne$-loop ${\delta\Omega\cdot J=\mOne}$, as defined in Eq.~(\ref{OneLoopFieldEqn}). This $\mP$-valued analogue of a Yang-Mills field equation defines not only the dynamics of the Poincar\'e gauge field, but matter field dynamics as well. Indeed, matter field equations of motion are superfluous in $\mOne$-loop theory, as they follow from the conservation of the $\mP$-valued Noether current, ${\delta J=\mOne}$, guaranteed in turn by ${\delta^2\Omega=\mOne}$. The $\mOne$-loop formalism thereby defines a computable, exactly-energy-momentum-conserving algorithm for the dynamics of a 5-vector matter field evolving under gravity. 
\section{Discussion and Conclusions\label{conclusionSect}} The $\mOne$-loop formalism has been demonstrated to successfully define a lattice gauge theory of the Poincar\'e group. By reinterpreting $\mP$ as an internal gauge group, $\mOne$-loop theory preserves Poincar\'e symmetry on a discrete lattice, and recovers general relativity in its torsionless, continuum vacuum limit. This new formalism comprises several technical innovations: \begin{enumerate}[label=(\roman*)] \item the $\mOne$-loop of Eq.~(\ref{OneLoop})---a relative of the Wilson loop that reconstitutes differential equations of motion; \item the 5-vector $\Phi$ of Eq.~(\ref{definePhiLatticeFieldAction})---a new representation of the Poincar\'e group; \item the definition of the $\mOne$-loop current, which uniquely determines the Poincar\'e current ${\mJ_a}$ of Eq.~(\ref{defineTheCurrent}); and \item the lattice covariant codifferential $\delta$ of Eq.~(\ref{deltaOneLoops}), motivated by Cartan geometry. \end{enumerate} The dynamics of the resulting Poincar\'e gauge theory are determined by the basic $\mOne$-loop ${\delta\Omega\cdot J=\mOne}$, as defined in Eq.~(\ref{OneLoopFieldEqn}). This $\mP$-valued analogue of a Yang-Mills field equation defines not only the dynamics of the Poincar\'e gauge field, but matter field dynamics as well. Indeed, matter field equations of motion are superfluous in $\mOne$-loop theory, as they follow from the conservation of the $\mP$-valued Noether current, ${\delta J=\mOne}$, guaranteed in turn by ${\delta^2\Omega=\mOne}$. The $\mOne$-loop formalism thereby defines a computable, exactly-energy-momentum-conserving algorithm for the dynamics of a 5-vector matter field evolving under gravity. A $\mOne$-loop theory is decidedly rigid in the sense that very few arbitrary choices are made in its construction. Given a $G$-representation and a reductive Cartan geometry with $\mg$-valued connection $\omega$ and base-space $M$, a corresponding $\mOne$-loop theory is already quite fixed: the hypercubic lattice $\mZ^d$ is defined such that ${d=\dim[M]}$; the holonomy $\Omega$ is fixed by $\omega$; the $\mOne$-loop current $J$ is fixed by the $G$-representation, as demonstrated for $G=\mP$ in Eq.~(\ref{defineTheCurrent}); and the interaction of matter and gauge fields is wholly determined by the $\mOne$-loop ${\delta\Omega\cdot J=\mOne}$. The definition of the operator $\delta$ is itself quite constrained by its need to satisfy ${\delta^2\Omega=\mOne}$. The choice to relinquish a Lagrangian in favor of the $\mOne$-loop formalism was not undertaken without considerable effort by the authors to construct a satisfactory Lagrangian Poincar\'e lattice gauge theory. In a Lagrangian approach, Poincar\'e symmetry generators naturally arise as vector fields on space-time, which are ill-defined on a discrete lattice. An effort to `lift' these generators to vertical gauge symmetries of a discrete Lagrangian apparently requires the introduction of new fields that do not have a clear physical interpretation \cite{glasser_lifting_2019,glasser_restoring_2019}. More abstractly, this earlier work revealed a natural tension between the additive structure of a Lagrangian action---integrated over space-time or summed over lattice vertices---and the multiplicative group structure of the Poincar\'e symmetries. A Hamiltonian approach was also considered; however, operator-based Hamiltonian theories are predicated on the evolution of a continuous time parameter that is unsuitable for computation. Although gauge-compatible splitting methods \cite{glasser_geometric_2020} enable the preservation of gauge structure in discrete-time Hamiltonian algorithms, it is unclear how such a splitting in time can be extended to a `four-dimensional splitting' over a space-time lattice. The $\mOne$-loop formalism was developed to address these challenges. It assumes a multiplicative structure on the lattice, wherein adjacent vertices are related strictly by group-valued fields. The result can be viewed as a discrete realization of the integral formalism \cite{yang_integral_1974} of early gauge theory. By virtue of its manifest gauge invariance, Poincar\'e $\mOne$-loop theory improves upon Regge calculus as a classical, discrete theory of gravity. Its continuum limit, as derived in Eqs.~(\ref{continuum1Loop5VT})-(\ref{schematicFieldEq}), recovers the field equations of the YM-track of Poincar\'e gauge theory \cite{popov_theory_1975,popov_einstein_1976,tseytlin_poincare_1982,aldrovandi_complete_1984,aldrovandi_natural_1986,aldrovandi_quantization_1988}. Although the YM-track exhibits many promising features of a gravitational theory \cite{aldrovandi_complete_1984}, the incompatibility of its field equations with Lagrangian mechanics has led some of its investigators to view the YM-track with disfavor \cite{tseytlin_poincare_1982,aldrovandi_natural_1986,aldrovandi_quantization_1988}. Poincar\'e $\mOne$-loop theory addresses some of the concerns raised in this prior work, as we now discuss. First, the determination of matter couplings in the YM-track has not been well understood; for example, the interpretation of $T$ in Eq.~(\ref{schematicFieldEq}) as an energy-momentum has been in doubt \cite{tseytlin_poincare_1982}.
$\mOne$-loop theory addresses this issue by introducing a new formalism that explicitly defines the properties of a matter current and its coupling to a gauge field. In Eq.~(\ref{defineTheCurrent}), this formalism was shown to uniquely determine $\mJ_a$, the $\mmp$-valued current of the 5-vector field. Second, from a more philosophical point of view, authors of the YM-track express general reservations about its failure to derive from a variational principle \cite{tseytlin_poincare_1982,aldrovandi_natural_1986}. However, the crucial use of Cartan geodesics in the $\mOne$-loop formalism lends it a variational character, even absent a Lagrangian. Lastly, some authors of the YM-track note that, although it has not made unphysical predictions of classical gravitational dynamics, its lack of a Lagrangian complicates its quantization via a path integral approach. This difficulty is understood to render the YM-track unsuitable as a quantum theory of gravity \cite{aldrovandi_quantization_1988}. In the present work, we have aspired to a more modest goal: a classical, computable, and physically sensible algorithm that preserves Poincar\'e symmetry in a discrete universe. $\mOne$-loop theory is essentially a computable physical theory stripped of all considerations except symmetry principles. Its constituents---holonomies and Noether currents---arise directly as representations of a symmetry group, with as little additional structure as possible. With this spare framework, $\mOne$-loop theory demonstrates that there is no essential conflict between discrete space-time and Poincar\'e symmetry. An algorithm for gravitational simulations has thus been developed that, in principle, covariantly conserves energy and momentum to machine precision. In future work, we shall consider the physical effects of discreteness on the gravitational dynamics of Poincar\'e $\mOne$-loop theory. We shall also explore $\mOne$-loop theories for more inclusive gauge groups, such as ${G=\mP\times U(1)}$, and for fermionic $\mP$-representations, whose spin and orbital angular momenta can both be expected to appear in the matter current. \section{Acknowledgments} Thank you to Eugene Kur for illuminating discussions, and to Professor Nathaniel Fisch for his encouragement of this effort. A.S.G. acknowledges the generous support of the Princeton University Charlotte Elizabeth Procter Fellowship. This research was further supported by the U.S. Department of Energy (DE-AC02-09CH11466).
\section{Introduction} The quantum speed limit describes the maximal evolution speed of a quantum system from an initial state to a target state, and it has found applications in the fields of quantum computation \cite{lloyd1999ultimate}, quantum thermodynamics \cite{PhysRevLett.105.170402}, quantum metrology \cite{demkowiczdobrzanski2012} and quantum control \cite{caneva2009optimal}. The minimum evolution time between two distinguishable states of a quantum system is defined as the quantum speed limit time (QSLT) \cite{PhysRevLett.65.1697,americanjournal1.16940,Margolus1998The,PhysRevA.67.052109,PhysRevLett.103.160502}, and has been widely used to characterize the maximal evolution speed. The QSLT, first proposed for a closed quantum system evolving naturally to an orthogonal state, is characterized by unifying the Mandelstam-Tamm (MT) bound \cite{J.Phys.USSR} and the Margolus-Levitin (ML) bound \cite{Margolus1998The}. Owing to the inevitable interaction of quantum systems with their surrounding environment, generalizations of the QSLT to open quantum systems have attracted much attention, and some valuable works have appeared \cite{PhysRevLett.110.050402,PhysRevLett.110.050403,PhysRevLett.111,Zhang2014Quantum} in recent years. In Ref. \cite{PhysRevLett.111}, the authors proposed a unified bound of the QSLT, including both MT and ML types, for non-Markovian dynamics with pure initial states. For a wider range of applications, another unified bound of the QSLT, applicable to both mixed and pure initial states, has been derived by introducing relative purity \cite{Audenaert2014} as the distance measure \cite{Zhang2014Quantum}. These results have stimulated further research on the quantum speed limit. Recently, remarkable progress has been made in analysing the environmental effects on the quantum speed limit for open quantum systems. For example, in Refs. \cite{PhysRevLett.111,Zhang2014Quantum,PhysRevLett.114.233602}, the authors investigated the quantum speed limit for cavity QED systems and found that non-Markovianity can speed up the quantum evolution. Some works have studied the quantum speed limit of a central spin trapped in a spin-chain environment in order to examine the behavior of the QSLT in the critical vicinity \cite{wei2016quantum,hou2015quantum}. The quantum speed limit in spin-boson models has also been studied in Refs. \cite{Zhang2014Quantum,dehdashti2015decoherence}. Furthermore, to realize a controllable quantum evolution speed in open quantum systems, several methods have been proposed, such as dynamical decoupling pulses \cite{Song2017Control}, an external classical driving field \cite{PhysRevA.91.032112} and optimal control \cite{caneva2009optimal}. Most of these existing studies have been restricted to environments at zero temperature, which motivates us to investigate the quantum speed limit in a finite-temperature environment. Besides, how to realize a controllable quantum evolution speed in a finite-temperature environment is another issue that draws our attention. In this paper, we first investigate the quantum evolution speed of a qubit which is locally coupled to its finite-temperature environment with an Ohmic-like spectrum by using the stochastic decoupling method \cite{Shao2004,Shao2010Dissipative,PhysRevE.84.051112,Wu2014Double}. It is shown that the quantum evolution speed of a qubit can be accelerated by high temperature in the strong-coupling regime.
For the weak-coupling case, the bath temperature plays a dual role in affecting the quantum evolution speed, meaning that high temperature leads not only to speed-up but also to speed-down processes. Furthermore, we find that the quantum evolution speed can be controlled by applying bang-bang pulses, and a relatively steady value of the quantum evolution speed is obtained. Interestingly, the bath temperature and the Ohmicity parameter also play dual roles in the strong-coupling regime in the presence of the bang-bang pulses, which is not found in the case without pulses. Secondly, we study the quantum evolution speed of a bare qubit coupled to a nonlinear thermal bath (spin-boson model) \cite{PRL.120401,Huang2010Effect} by applying the hierarchical equations of motion (HEOM) \cite{JPSJ.58.101,JPSJ.75.082001,PhysRevE.75.031107,1.2713104,1.3213013,PhysRevA.85.062323,1367-2630}, an effective numerical method that goes beyond the Born-Markov approximation. It is found that the quantum evolution speed in the strong-coupling regime at low temperature behaves similarly to that in the weak-coupling regime, where the quantum evolution speed can be accelerated by an increase of temperature. However, a rise of temperature induces a speed-down process in the strong-coupling regime at high temperature. As a comparison, the dynamics of quantum coherence is also explored in the different situations. This paper is organized as follows. In Sec. \ref{sec2}, we investigate the quantum evolution speed of a qubit in a bosonic environment by making use of the stochastic decoupling approach. In Sec. \ref{sec3}, we study the quantum evolution speed of a qubit in a nonlinear environment by resorting to the HEOM method. The conclusion of this paper is given in Sec. \ref{sec4}. \section{Quantum evolution speed in the bosonic environment}\label{sec2} In this section, we study the quantum evolution speed of a qubit coupled to its own finite-temperature environment. First, we briefly outline the definition of the quantum speed limit for an open quantum system. The maximal rate of evolution can be characterized by the QSLT, which is defined as the minimal time a quantum system needs to evolve from an initial state to a final state. In open quantum systems, the dynamical evolution is governed by the time-dependent master equation $L_t \rho_t = \dot{\rho}_t$, with $L_t$ being the positive generator of the dynamical semigroup. Based on the relative purity along with the von Neumann trace inequality and the Cauchy-Schwarz inequality, a unified lower bound on the QSLT including both MT and ML types has been derived for arbitrary initial mixed states in open quantum systems \cite{Zhang2014Quantum}, which reads \begin{equation} \tau_{\rm{QSL}}=\max\left\lbrace\frac{1}{\overline{\sum_{i=1}^{n} \sigma_i \rho_i}},\frac{1}{\overline{\sqrt{\sum_{i=1}^{n} \sigma_i^2}}} \right\rbrace\times\left| f_{ t +\tau_{\rm{D}}} -1 \right| Tr (\rho_t^2) \label{qslt} \end{equation} where $\overline{X} = \tau_{\rm{D}}^{-1} \int_{t}^{t+\tau_{\rm{D}}} X(t') \rm{d} t'$, and $\sigma_i$ and $\rho_i$ are the singular values of $L_t \rho_t$ and $\rho_t$, respectively. $\tau_{\rm{D}}$ denotes the driving time. The relative purity between the initial state $\rho_{t}$ and the final state $\rho_{t+\tau_{\rm{D}}}$ of the quantum system is defined as $f_{t+\tau_{\rm{D}}} = Tr\left[\rho_{t+\tau_{\rm{D}}}\rho_{t}\right]/Tr(\rho^2_{t})$.
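For concreteness, the bound in Eq.~(\ref{qslt}) can be evaluated numerically from a sampled trajectory of the density matrix. The following Python sketch is an illustration of ours (the function name and the simple time-averaging over a uniform grid are arbitrary choices, not part of the original derivation):
\begin{verbatim}
import numpy as np

# Illustrative sketch of Eq. (qslt): given density matrices rho(t') and
# their derivatives Ldot(t') = L_{t'} rho_{t'} sampled on a uniform grid
# over [t, t + tau_D], estimate the unified QSLT bound.
def qslt_bound(rhos, Ldots):
    """rhos, Ldots: arrays of shape (m, d, d) on a uniform time grid."""
    sig = np.array([np.linalg.svd(L, compute_uv=False) for L in Ldots])
    rho_sv = np.array([np.linalg.svd(r, compute_uv=False) for r in rhos])
    avg_op = np.mean(np.sum(sig * rho_sv, axis=1))  # avg of sum_i s_i r_i
    avg_hs = np.mean(np.sqrt(np.sum(sig**2, axis=1)))
    purity = np.trace(rhos[0] @ rhos[0]).real
    f = np.trace(rhos[-1] @ rhos[0]).real / purity  # relative purity
    return max(1.0 / avg_op, 1.0 / avg_hs) * abs(f - 1.0) * purity
\end{verbatim}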
The first system under our consideration is a qubit locally coupled to a finite-temperature bosonic environment, also known as the spin-boson model, whose total Hamiltonian is given by ($\hbar=1$) \cite{Palma567} \begin{equation} \mathcal{H}= \frac{\Omega}{2} \sigma_z + \sum_{k} \omega_k b_k^\dagger b_k + \sum_{k} g_k \sigma_z ( b_k^\dagger + b_k) \end{equation} where $\sigma_z$ is the standard Pauli operator in the $z$ direction, and $b_k^\dagger$ and $b_k$ denote the creation and annihilation operators of the $k$th oscillator with frequency $\omega_k$, respectively. $g_k$ represents the coupling strength of the qubit to the finite-temperature bath, which is represented by a set of harmonic oscillators. We investigate the dissipative quantum dynamics of the system by making use of the stochastic decoupling approach \cite{Shao2004,Shao2010Dissipative,PhysRevE.84.051112,Wu2014Double}, which was previously used in the calculation of partition functions and real-time dynamics for many-body systems. Based on the Hubbard–Stratonovich transformation, the dissipative interaction between the qubit and the heat bath is decoupled via stochastic fields, and the separated system and bath then evolve in common white-noise fields. The reduced density matrix comes out as an ensemble average of its random realizations. By resorting to the Girsanov transformation \cite{Shao2004,PhysRevE.84.051112}, a stochastic differential equation for the random density matrix is obtained and can be used to derive the desired master equation. Applied to the system of interest, the master equation is given as \begin{equation} \label{master equation} \frac{\rm{d}}{\rm{d} t} \rho_{\rm{s}} (t)= -\rm{i} \frac{\Omega}{2}\left[ \sigma_z, \rho_{\rm{s}} (t)\right] -\mathcal{D}(t)\left[\sigma_z, \left[\sigma_z, \rho_{\rm{s}} (t)\right]\right], \end{equation} where $\mathcal{D}(t)=\int_{0}^{t}\rm{d} t'C_{\rm{R}}(t')$ with $C_{\rm{R}} (t)$ being the real part of the correlation function $C(t)$. Assuming the bath is in a thermal equilibrium state $\rho_{\rm{b}}(0)=e^{-H_{\rm{b}}T^{-1}}/Tr_{\rm{b}}(e^{-H_{\rm{b}}T^{-1}})$ with the Boltzmann constant $k_{B}=1$, the correlation function for this bosonic bath is given by \begin{equation} \label{correlation function} C(t)= \int \rm{d}\omega J(\omega) \left[ \coth \left(\frac{\omega}{2T}\right) \cos(\omega t) -\rm{i}\sin(\omega t) \right], \end{equation} in which $J(\omega)$ is the bath spectral density function and $T$ represents the temperature. As a result, the reduced density matrix of the qubit can be obtained by solving Eq. (\ref{master equation}): \begin{equation} \label{reduced matrix A} {\rho _{\rm{s}}}(t) = \left( {\begin{array}{*{20}{c}} {{\rho_{\rm{ee}}}(0)}&{{\rho_{\rm{eg}}}(0){e^{\rm{i}{\Omega}t - \Gamma (t)}}} \\ {{\rho_{\rm{ge}}}(0){e^{ - \rm{i}{\Omega}t - \Gamma (t)}}}&{{\rho_{\rm{gg}}}(0)} \end{array}} \right), \end{equation} where \begin{equation} \label{decoherence factor} \Gamma(t)= 4\int_{0}^{\infty} \rm{d}\omega J(\omega) \frac{1-\cos(\omega t)}{\omega^2} \coth \left(\frac{\omega}{2T}\right) \end{equation} is the decoherence factor. In this article, we consider an Ohmic-like spectral density of the environmental modes, $J(\omega)=\Lambda (\omega^s/\omega^{s-1}_{\rm{c}}) e^{-\omega/\omega_{\rm{c}}}$, with $\Lambda$ being the dimensionless coupling constant and $\omega_{\rm{c}}$ the cutoff frequency. Ohmic reservoirs ($s=1$) and sub-Ohmic reservoirs ($s<1$) are obtained by changing the Ohmicity parameter $s$.
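For later reference, the decoherence factor in Eq.~(\ref{decoherence factor}) is straightforward to evaluate numerically. The following Python sketch is our own illustration (the quadrature cutoff and the default parameter values are arbitrary choices):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Illustrative numerical evaluation of the decoherence factor Gamma(t) for
# the Ohmic-like spectral density J(w) = Lam * w^s / w_c^(s-1) * exp(-w/w_c).
# Default parameters and the integration cutoff are arbitrary choices.
def gamma_factor(t, T, Lam=0.2, s=1.0, w_c=50.0):
    def integrand(w):
        J = Lam * w**s / w_c**(s - 1) * np.exp(-w / w_c)
        # coth(w / 2T) = 1 / tanh(w / 2T)
        return 4.0 * J * (1.0 - np.cos(w * t)) / w**2 / np.tanh(w / (2.0 * T))
    val, _ = quad(integrand, 0.0, 50.0 * w_c, limit=400)
    return val
\end{verbatim}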
In the Bloch sphere representation, a generally mixed state $\varrho$ of a qubit can be written in terms of Pauli matrices $\varrho = \frac{1}{2} (I + v_x \sigma_x +v_y \sigma_y + v_z \sigma_z)$, where the coefficients $v_x$, $v_y$, $v_z$ form the Bloch vector, and $I$ is the identity operator of the qubit. The time evolution of the reduced density matrix $\rho_t$ in the Bloch sphere representation can be derived from Eq. (\ref{reduced matrix A}): \begin{equation} {\rho _{\rm{s}}}(t) =\frac{1}{2}\left( {\begin{array}{*{20}{c}} {1 + {v_z}}&{({v_x} - \rm{i}{v_y}){q_t}} \\ {({v_x} + \rm{i}{v_y}){q^{*}_t}}&{1 - {v_z}} \end{array}} \right), \end{equation} where $q_t = e^{\rm{i}{\Omega}t - \Gamma_t}$ with $\Gamma_t$ being the decoherence factor, see Eq. (\ref{decoherence factor}). It is readily found that the excited-state population is unchanged; thus the evolution of the qubit is a dephasing process. According to Eq. (\ref{qslt}), the QSLT for the qubit to evolve from the initial state $\rho_t$ to the final state $\rho_{t +\tau_{\rm{D}}}$ in this dephasing model is given by \begin{equation} \label{qsltcoherence} \tau_{\rm{QSL}} =\frac{\frac{1}{2} \sqrt{v_x^2 +v_y^2} \left|(q_t-q_{t+\tau_{\rm{D}}})q_t^{*} +H.c. \right|}{\frac{1}{\tau_{\rm{D}}} \int_{t}^{t+\tau_{\rm{D}}} \left|\dot{q}_{t'} \right| \rm{d} t'}. \end{equation} \begin{figure}[ht] \centering \subfigure{ \label{figure1a} \includegraphics[width=8cm]{figure1a}} \subfigure{\label{figure1b} \includegraphics[width=8cm]{figure1b}} \caption{(a) The QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ as a function of the initial time parameter $t$ for different temperatures in the strong-coupling regime $\Lambda=0.2$. The inset shows the QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ as a function of the temperature $T$ for $t=0.3$. (b) The QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ as a function of the initial time parameter $t$ for different temperatures in the weak-coupling regime $\Lambda=0.001$. The inset shows the QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ as a function of the temperature $T$ for $t=0.5$ and $t=3$. Here we set the driving time $\tau_{\rm{D}}=1$. Other parameters are $\Omega=1$, $\omega_{\rm{c}}=50$ and $s=1$.} \end{figure} \subsection{The evolution of quantum speed limit} In this section, we give the results for the evolution of the quantum speed limit of a qubit locally coupled to a finite-temperature environment. For simplicity, we assume the initial state of the qubit is $\rho_0=1/2(\left|0\right\rangle \left\langle 0\right|+\left|0\right\rangle \left\langle 1\right|+\left|1\right\rangle \left\langle 0\right|+\left|1\right\rangle \left\langle 1\right|)$ and fix the computational basis $\left\lbrace |0\rangle, |1\rangle \right\rbrace $ as the reference basis, in which $|0\rangle$ and $|1\rangle$ are the ground and excited states of the Pauli operator $\sigma_z$, respectively. In Fig. \ref{figure1a}, we plot the QSLT for the Ohmic dephasing model as a function of the initial time parameter $t$ for different temperatures $T$ in the strong-coupling regime. The inset shows the variation of the QSLT as a function of temperature $T$ for a fixed initial time parameter $t=0.3$. We find that the QSLT of the qubit decays monotonically to zero as time $t$ grows, and the higher the temperature, the smaller the QSLT becomes, which means that high temperature speeds up the evolution of the qubit. This result can be understood from the fact that a higher bath temperature induces more severe decoherence, which leads to the acceleration of the quantum evolution.
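Combining Eq.~(\ref{qsltcoherence}) with the numerical decoherence factor above, the QSLT of the dephasing qubit can be evaluated directly. The following Python sketch is again our own illustration; it reuses the function gamma_factor defined above, and the grid resolution is an arbitrary choice:
\begin{verbatim}
import numpy as np

# Illustrative evaluation of Eq. (qsltcoherence); reuses gamma_factor from
# the previous sketch. Grid resolution and parameters are arbitrary choices.
def qslt_dephasing(t, T, tau_D=1.0, Omega=1.0, m=400, **bath):
    ts = np.linspace(t, t + tau_D, m)
    gam = np.array([gamma_factor(u, T, **bath) for u in ts])
    q = np.exp(1j * Omega * ts - gam)
    qdot = np.gradient(q, ts)          # numerical derivative of q_{t'}
    vx, vy = 1.0, 0.0                  # Bloch vector of the initial state
    z = (q[0] - q[-1]) * np.conj(q[0])
    num = 0.5 * np.sqrt(vx**2 + vy**2) * abs(z + np.conj(z))
    den = np.mean(np.abs(qdot))        # time average of |dq/dt'|
    return num / den
\end{verbatim}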
In the weak-coupling regime, it can be seen from Fig. \ref{figure1b} that the temperature affects the QSLT differently from the strong-coupling regime. High temperature initially prolongs the QSLT and suppresses the quantum evolution speed; however, some time later, high temperature can speed up the quantum evolution, which means that the bath temperature has a two-sided effect on the quantum evolution speed in the weak-coupling regime. When the temperature increases, the decay rate of the QSLT is enhanced. Moreover, we can observe from the inset of Fig. \ref{figure1b} that an asymptotic value of the QSLT below the driving time is obtained as zero temperature is approached. This result suggests that an extremely low bath temperature may freeze the speed of evolution of the qubit in this dissipative system. \begin{figure}[!ht] \centering \subfigure{ \label{figure2a} \includegraphics[width=9cm]{figure2a}} \subfigure{ \label{figure2b} \includegraphics[width=8cm]{figure2b}} \subfigure{\label{figure2c} \includegraphics[width=8cm]{figure2c}} \caption{(a) The ratio of the QSLT to the $l_1$-norm quantum coherence $\tau_{\rm{QSL}}/(\tau_{\rm{D}}\cdot C)$ as a function of the initial time parameter $t$ for different temperatures in the weak-coupling regime $\Lambda=0.001$ (dashed line) and strong-coupling regime $\Lambda=0.2$ (solid line). The relative purity $f$ as a function of the initial time parameter $t$ for different temperatures in the (b) strong-coupling regime $\Lambda=0.2$ and (c) weak-coupling regime $\Lambda=0.001$. The insets show the time evolutions of the $l_1$-norm coherence for different temperatures in the (b) strong-coupling regime $\Lambda=0.2$ and (c) weak-coupling regime $\Lambda=0.001$. Here we set the driving time $\tau_{\rm{D}}=1$. Other parameters are $\Omega=1$, $\omega_{\rm{c}}=50$ and $s=1$.} \end{figure} Here, we would like to provide a possible physical explanation of why the QSLT shows different behaviors in the strong-coupling and weak-coupling regimes as the temperature changes in this quantum dissipative system. In the dephasing model, it has been found that the QSLT is related to the quantum coherence of the initial state for a given driving time $\tau_{\rm{D}}$ \cite{Zhang2014Quantum}, which can also be confirmed from the term $ \sqrt{v_x^2 +v_y^2} $ in Eq. (\ref{qsltcoherence}). After some further manipulation, Eq. (\ref{qsltcoherence}) can be rewritten as \begin{equation} \label{qsltcoherencet} \tau_{\rm{QSL}} = C_t \cdot\frac{\left| e^{ - \Gamma _t} - \cos (\Omega \tau_{\rm{D}})e^{ - \Gamma _{t + \tau_{\rm{D}}}} \right|}{\frac{1}{\tau_{\rm{D}}}\int_t^{t + \tau_{\rm{D}}} e^{ - \Gamma _{t'}}\sqrt {\dot \Gamma _{t'}^2 + \Omega ^2}\, \rm{d} t'} \end{equation} in which $C_t = \sqrt{v_x^2 +v_y^2} e^{-\Gamma_{t}}$ is the $l_1$-norm quantum coherence of the qubit according to the definition in Ref. \cite{PhysRevLett.Coherence}. Based on Eq. (\ref{qsltcoherencet}), we can explore the relationship between the QSLT and quantum coherence in this dephasing model. It is clear that the QSLT at time $t$ is directly related to the quantum coherence at the same time. To gain more insight into the role of temperature in the quantum evolution speed, we utilize the ratio of the QSLT to the quantum coherence to analyse the different phenomena observed above. The ratio as a function of the initial time parameter for different temperatures is displayed in Fig. \ref{figure2a}.
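As a small follow-up, the $l_1$-norm coherence $C_t$ and the ratio plotted in Fig.~\ref{figure2a} can be obtained from the two sketches above (an illustration of ours; the parameter values are arbitrary):
\begin{verbatim}
import numpy as np

# Illustrative follow-up: the l1-norm coherence C_t and the ratio
# tau_QSL / (tau_D * C_t) of Fig. 2(a); reuses gamma_factor and
# qslt_dephasing from the sketches above. Values are arbitrary choices.
def l1_coherence(t, T, vx=1.0, vy=0.0, **bath):
    return np.sqrt(vx**2 + vy**2) * np.exp(-gamma_factor(t, T, **bath))

t, T, tau_D = 0.3, 1.0, 1.0
ratio = qslt_dephasing(t, T, tau_D=tau_D) / (tau_D * l1_coherence(t, T))
print("tau_QSL / (tau_D * C_t) =", ratio)
\end{verbatim}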
We can observe that in the strong-coupling regime the ratios for different temperatures show little separation at first, and the gaps then enlarge gradually as time goes on, whereas in the weak-coupling regime the ratios maintain relatively stable gaps from the beginning. This result suggests that the second factor on the right-hand side of Eq. (\ref{qsltcoherencet}) may be responsible for the different behaviors of the quantum evolution speed; although this only gives a superficial interpretation, it provides motivation for further study. Since the expression of the QSLT in Eq. (\ref{qslt}) is based on the relative purity, we mainly focus on the relative purity in the following. In Figs. \ref{figure2b} and \ref{figure2c}, we plot the time evolutions of the relative purity and the $l_1$-norm coherence for different temperatures in the strong-coupling and weak-coupling regimes, respectively. The relative purity gradually increases to its maximum as time goes on in the strong-coupling regime, and a higher temperature leads to a faster increase. Conversely, the quantum coherence gradually decreases to its minimum during the time evolution, and a higher temperature leads to a faster decrease. This is due to the fact that the increase of temperature brings about more intense thermal fluctuations, which induce stronger decoherence. Similar behaviors can be found in the weak-coupling case, which is displayed in Fig. \ref{figure2c}. One difference is that the rates of change of the relative purity and the quantum coherence are smaller than those in the strong-coupling case. Another difference is that the values of the relative purity at the initial time $t=0$ are not equal, which causes the curves for different temperatures to cross each other. Comparing with Fig. \ref{figure1b}, we can see that the bath temperature plays a dual role in affecting the QSLT, and this phenomenon may be linked to the behavior of the relative purity. It is clearly observed from the time evolutions of the quantum coherence that the final states $\rho_{0+\tau_{\rm{D}}}$ with driving time $\tau_{\rm{D}}=1$ at various temperatures differ little in the strong-coupling regime, while they have obvious gaps in the weak-coupling case; this explains why the initial values of the relative purity coincide in the strong-coupling regime but differ in the weak-coupling case. Thus, the reason why the quantum evolution speed shows different performances in the strong-coupling and weak-coupling regimes is threefold: first, the quantum evolution speed at time $t$ is related to the quantum coherence at the same moment. Second, a higher temperature brings about more intense thermal fluctuations in the strong-coupling regime than in the weak-coupling case, which leads to the quantum coherence decreasing faster in the strong-coupling regime than in the weak-coupling case. Third, the initial values of the relative purity depend on the driving time, which may contribute to the different behaviors of the relative purity for various temperatures in the strong-coupling and weak-coupling regimes. The above discussion may help us understand the effects of the bath temperature on the quantum evolution speed of this dephasing model, and shows that environment-assisted speed-up and speed-down processes are both possible.
\begin{figure}[htbp] \centering \subfigure{ \label{figure3a} \includegraphics[width=6cm]{figure3a}} \subfigure{ \label{figure3b} \includegraphics[width=6cm]{figure3b}} \subfigure{ \label{figure3c} \includegraphics[width=6cm]{figure3c}} \subfigure{ \label{figure3d} \includegraphics[width=6cm]{figure3d}} \caption{ Contour plot of the QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ as a function of temperature $T$ and coupling strength $\Lambda$ for (a) $t=0.1$, (b) $t=0.5$, (c) $t=1$ and (d) $t=1.5$. Other parameters are $\tau_{\rm{D}}=1$, $\Omega=1$, $\omega_{\rm{c}}=50$ and $s=1$. } \label{figure3} \end{figure} We display the contour plot of the QSLT as a function of the bath temperature $T$ and the coupling strength $\Lambda$ for different initial time parameters in Fig. \ref{figure3}. It is quite clear from Fig. \ref{figure3}(a) that the QSLT has a peak in the weak-coupling regime for the initial time parameter $t=0.1$, which means that the QSLT first increases and then decreases with the growth of the bath temperature. The bath temperature thus plays a dual role in affecting the quantum evolution speed in the weak-coupling regime. By contrast, in the strong-coupling regime, the QSLT only decreases with increasing bath temperature. Furthermore, it is clearly observed from Figs. \ref{figure3}(b)-(d) that the dual role of temperature exhibited in the weak-coupling regime gradually disappears as time goes on, after which the quantum evolution speed is accelerated by increasing temperature in both the strong-coupling and weak-coupling regimes. Therefore, the bath temperature plays a more important and broader role in the weak-coupling regime than it does in the strong-coupling one. \begin{figure}[t] \centering \subfigure[]{ \label{figure4a} \includegraphics[width=7cm]{figure4a}} \subfigure[]{ \label{figure4b} \includegraphics[width=7cm]{figure4b}} \subfigure[]{ \label{figure4c} \includegraphics[width=7cm]{figure4c}} \subfigure[]{ \label{figure4d} \includegraphics[width=7cm]{figure4d}} \caption{The QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ of the qubit as functions of the initial time parameter $t$ and bath temperature $T$ for the sub-Ohmic spectrum $s=0.6$ in the (a) strong-coupling regime $\Lambda=0.2$ and (b) weak-coupling regime $\Lambda=0.001$, respectively. The QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ of the qubit as functions of the Ohmicity parameter $s$ and initial time parameter $t$ for bath temperature $T=1$ in the (c) strong-coupling regime $\Lambda=0.2$ and (d) weak-coupling regime $\Lambda=0.001$, respectively. Other parameters are chosen as $\tau_{\rm{D}}=1$, $\omega_{\rm{c}}=50$ and $\Omega =1$. } \end{figure} In the following, we explore the variations of the QSLT of the qubit for sub-Ohmic reservoirs with the relevant parameters. We plot the QSLT as functions of the initial time parameter $t$ and bath temperature $T$ for the sub-Ohmic spectrum $s=0.6$ in the strong-coupling regime [Fig. \ref{figure4a}] and weak-coupling regime [Fig. \ref{figure4b}], respectively. It is found that the effects of the bath temperature on the QSLT of the qubit for the sub-Ohmic spectrum are similar to those for the Ohmic spectrum. We can see from Fig. \ref{figure4a} that the QSLT decreases monotonically with the initial time parameter $t$ in the strong-coupling regime, and an increase of the bath temperature leads to a shorter QSLT, namely, faster quantum evolution. In the weak-coupling regime, as shown in Fig.
\ref{figure4b}, the bath temperature also plays a dual role in influencing the speed of evolution. Initially, the QSLT is a monotonically increasing function of the bath temperature $T$; however, some time later, it changes to a monotonically decreasing function of $T$. Moreover, a relatively steady speed of evolution can be obtained at zero temperature, which is a phenomenon unique to the weak-coupling regime. To gain more insight into the role of sub-Ohmic reservoirs in influencing the QSLT, we display the QSLT as functions of the Ohmicity parameter $s$ and initial time parameter $t$ in the strong-coupling regime [Fig. \ref{figure4c}] and weak-coupling regime [Fig. \ref{figure4d}], respectively. It is clear that the QSLT is indeed extended with the increase of the Ohmicity parameter $s$ in the strong-coupling regime, while the QSLT is not a simple monotonic function of $s$ in the weak-coupling regime. Indeed, we can see from Fig. \ref{figure4d} that the QSLT decreases to a minimum with the growth of the Ohmicity parameter $s$ at the beginning of the evolution, whereas after a certain time the QSLT first increases to a maximum with increasing $s$ and then decreases upon a further increase of $s$; that is, the QSLT displays a nonmonotonic behavior. Furthermore, the quantum evolution speed of the qubit for the sub-Ohmic spectrum is faster than that for the Ohmic spectrum in the strong-coupling regime, whereas the opposite holds in the weak-coupling regime. \subsection{Control of the quantum evolution speed by applying bang-bang pulses} \begin{figure}[htbp] \centering \includegraphics[width=9cm]{figure5} \caption{ The QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ of the qubit versus the initial time parameter $t$ with different pulse intervals: $\Delta t =0.05$ (green dotted line), $\Delta t =0.02$ (blue dotted-dashed line), $\Delta t =0.01$ (black dashed line), $\Delta t =0.005$ (red solid line) for the Ohmic spectrum in the strong-coupling regime $\Lambda=0.2$. Other parameters are $\tau_{\rm{D}}=1$, $\omega_{\rm{c}}=20$, $\Omega =1$ and $T=1$. } \label{figure5} \end{figure} A major obstacle to the development of quantum technologies is the destruction of quantum properties caused by the inevitable interaction of quantum systems with their surrounding environment. Much effort has been made to minimize the influence of environmental noise, or to suppress environment-induced decoherence, in the practical realization of quantum tasks. One interesting approach is \textquotedblleft dynamical decoupling\textquotedblright\ or \textquotedblleft bang-bang\textquotedblright\ pulses \cite{PhysRevAbangbang,PhysRevA.69.030302,PhysRevLett.82.2417}, which is based on applying strong and sufficiently fast pulses to restore the quantum coherence of the target system. In this section, we mainly investigate the effect of bang-bang pulses on the quantum evolution speed of the qubit. The Hamiltonian of the control pulses is given by \cite{PhysRevA.69.030302} \begin{equation} H_{\rm{p}} (t) = \sum_{n=1}^{N} \mathcal{A}_n (t) e^{\rm{i}\Omega t\sigma_z/2} \sigma_x e^{-\rm{i}\Omega t\sigma_z/2}, \end{equation} where the pulse amplitude $\mathcal{A}_n (t) = \mathcal{A}$ for $t_n\leqslant t \leqslant t_n + \lambda$ and 0 otherwise, lasting for a duration $\lambda \ll \Delta t$, with $t_n = n \Delta t$ being the time at which the $n$th pulse is applied.
Here, we only consider $\pi$ pulses in our investigation, which means that the amplitude $\mathcal{A}$ and the duration $\lambda$ of a pulse satisfy $2\mathcal{A}\lambda = \pm \pi$. It is not difficult to obtain the time evolution operator in the presence of the dynamical decoupling pulses at time $t=2N\Delta t + \epsilon$: \begin{eqnarray} \mathbb{U}(t)=\left\{ \begin{array}{ll} \mathcal{U}_{\rm{o}}(\epsilon)\,[\mathcal{U}_{\rm{c}}]^N, & 0 \leqslant \epsilon < \Delta t, \\ \mathcal{U}_{\rm{o}}(\epsilon - \Delta t)\,\mathcal{U}_{\rm{p}}(\lambda)\,\mathcal{U}_{\rm{o}}(\Delta t)\,[\mathcal{U}_{\rm{c}}]^N, & \Delta t \leqslant \epsilon < 2\Delta t, \end{array} \right. \end{eqnarray} where $N=\left[t/(2\Delta t)\right]$ with $\left[ \dots \right]$ denoting the integer part, and $\epsilon$ is the residual time after $N$ cycles. $\mathcal{U}_{\rm{o}}$ and $\mathcal{U}_{\rm{p}}$ are the evolution operators corresponding to the original Hamiltonian without and with the dynamical decoupling pulses, respectively. $\mathcal{U}_{\rm{c}}$ represents the time evolution operator for an elementary cycle $2\Delta t$, which is given by $ \mathcal{U}_{\rm{c}} = \mathcal{U}_{\rm{p}} (\lambda) \mathcal{U}_{\rm{o}} (\Delta t) \mathcal{U}_{\rm{p}} (\lambda) \mathcal{U}_{\rm{o}} (\Delta t)$. For simplicity, we only focus on the periodic points $t_{2N}=2N\Delta t$. It has been found that the decoherence factor $\Gamma(t)$ must be replaced by a new function $\Gamma_{\rm{p}} (N, \Delta t)$ in the presence of the decoupling pulses \cite{PhysRevA.69.030302}, \begin{eqnarray} \Gamma_{\rm{p}} (N, \Delta t) =4 \int \rm{d}\omega J(\omega) \coth \left(\frac{\omega}{2T}\right) \times \frac{1-\cos (\omega t_{2N})}{\omega^2} \tan^2 \left(\frac{\omega \Delta t}{2}\right). \end{eqnarray} Comparing $\Gamma_{\rm{p}} (N, \Delta t)$ with $\Gamma (t)$, we notice that the original bath spectral density has been transformed from $J(\omega)$ to the effective spectral density $J(\omega) \tan^2 (\omega \Delta t/2)$ after applying the bang-bang pulses. In Fig. \ref{figure5}, we illustrate the QSLT of the qubit versus the initial time parameter $t$ with different pulse intervals for the Ohmic spectrum in the strong-coupling regime. At the beginning of the evolution, the QSLT of the qubit becomes shorter as the pulse interval decreases. In the later stage, on the contrary, the QSLT decreases as the pulse interval increases. Here, the fast pulse is able not only to accelerate but also to decelerate the dynamical evolution, so the pulse interval also plays a dual role. More interestingly, when $\Delta t$ is small enough (or in the limit $\Delta t\rightarrow 0$), the bang-bang pulses enable us to obtain a relatively steady quantum evolution speed which remains almost constant. This is due to the fact that bang-bang pulses can effectively suppress the decoherence by averaging out the unwanted effects of the environmental interaction \cite{PhysRevAbangbang,PhysRevA.69.030302,PhysRevLett.82.2417}.
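The pulsed decoherence factor $\Gamma_{\rm{p}}(N,\Delta t)$ can likewise be evaluated numerically. In the following Python sketch (our illustration; the grid, the small offset and the default parameters are arbitrary choices), note that the double poles of $\tan^2(\omega\Delta t/2)$ are cancelled by the zeros of $1-\cos(\omega t_{2N})$, so a grid that avoids the exact pole locations suffices for a rough estimate:
\begin{verbatim}
import numpy as np

# Illustrative evaluation of Gamma_p(N, Delta t): the bath spectral density
# is filtered by tan^2(w dt / 2). The integrand stays finite because the
# zeros of 1 - cos(w t_2N) cancel the poles of tan^2, so a simple grid with
# a small offset (to avoid the exact poles) is enough for a rough estimate.
def gamma_pulsed(N, dt, T, Lam=0.2, s=1.0, w_c=20.0):
    t2N = 2 * N * dt
    w = np.linspace(1e-6, 40.0 * w_c, 400001) + 1.3e-7  # offset avoids poles
    J = Lam * w**s / w_c**(s - 1) * np.exp(-w / w_c)
    f = (4.0 * J * (1.0 - np.cos(w * t2N)) / w**2
         / np.tanh(w / (2.0 * T)) * np.tan(w * dt / 2.0)**2)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w)))  # trapezoid
\end{verbatim}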
\begin{figure}[tbph] \centering \subfigure[]{ \label{figure6a} \includegraphics[width=7cm]{figure6a}} \subfigure[]{ \label{figure6b} \includegraphics[width=7cm]{figure6b}} \caption{(a) The QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ of the qubit as a function of the initial time parameter $t$ and bath temperature $T$ with the bang-bang pulse for the sub-Ohmic spectrum $s=0.6$ in the strong-coupling regime $\Lambda=0.2$. (b) The QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ of the qubit as a function of the initial time parameter $t$ and Ohmicity parameter $s$ with the bang-bang pulse at bath temperature $T=1$ in the strong-coupling regime $\Lambda=0.2$. Other parameters are chosen as $\tau_{\rm{D}}=1$, $\omega_{\rm{c}}=20$, $\Delta t =0.05$ and $\Omega =1$. } \end{figure} Next, we turn to the quantum evolution speed for a sub-Ohmic spectrum in the presence of the bang-bang pulse. We mainly investigate the effects of the bath temperature $T$ [see Fig. \ref{figure6a}] and the Ohmicity parameter $s$ [see Fig. \ref{figure6b}] on the QSLT in the strong-coupling regime. Compared with Fig. \ref{figure4a}, we can observe from Fig. \ref{figure6a} that the bath temperature plays a dual role in influencing the quantum evolution speed in the strong-coupling regime owing to the applied bang-bang pulse, an effect that is not found in the case without the pulse. Figure \ref{figure6b} presents the QSLT as a function of the initial time parameter $t$ and the Ohmicity parameter $s$ with the bang-bang pulse in the strong-coupling regime. In this situation, the Ohmicity parameter $s$ also plays a dual role in influencing the evolution of the qubit, which differs from the case without the bang-bang pulse [Fig. \ref{figure4c}]. At the beginning of the evolution, a larger $s$ leads to a shorter QSLT, which corresponds to a speed-up evolution. After a certain time, instead, a larger $s$ leads to a longer QSLT, which corresponds to a speed-down evolution. In general, on the one hand, the bang-bang pulse can be used to control the quantum evolution speed in this dephasing model. On the other hand, in the presence of the bang-bang pulse, the relevant environmental parameters, such as the bath temperature and the Ohmicity parameter, play more complicated and diverse roles in affecting the quantum evolution speed. \section{Quantum evolution speed in the nonlinear environment}\label{sec3} Next, we consider another model: a bare qubit (labeled $A$) interacts with another qubit (labeled $B$), which is coupled to a thermal bath. Qubit $B$ and the thermal bath constitute the well-known spin-boson model, which serves as a nonlinear environment for qubit $A$ \cite{PRL.120401}. The Hamiltonian of the total system is given by \begin{eqnarray} \mathcal{H}^{\prime} &=&H_{\rm{s}}+H_{\rm{b}}+H_{\rm{int}}, \\ H_{\rm{s}}&=&\frac{\Omega_{\rm{A}}}{2} \sigma_z^{\rm{A}} +\frac{\Omega_{\rm{B}}}{2} \sigma_z^{\rm{B}}, \hspace{1cm} H_{\rm{b}}= \sum_{k} \omega_k b_k^\dagger b_k, \\ H_{\rm{int}}&=&f(s)g(b)+g_0\sigma_z^{\rm{A}}\sigma_z^{\rm{B}} \end{eqnarray} where $H_{\rm{s}}$ and $H_{\rm{b}}$ are the Hamiltonians of the two qubits and the bath, respectively, and $g_0$ represents the interaction strength between the two qubits. $f(s)=\sigma_z^{\rm{B}}$ is the subsystem's operator coupled to the surrounding bath, and $g(b)= \sum_{k} g_k ( b_k^\dagger + b_k)$ denotes the bath operator. We focus only on the on-resonance case: $\Omega_{\rm{A}} = \Omega_{\rm{B}} = \Omega$.
This model has been studied in previous articles \cite{PRL.120401,Huang2010Effect}; however, there is no exact analytical expression for the reduced density matrix of the qubits, and those results involve the Born-Markov approximation. Fortunately, by resorting to the HEOM method, which goes beyond the Born-Markov approximation, we can treat this model numerically. As a nonperturbative numerical method, the HEOM consists of a set of differential equations for the reduced subsystem and enables rigorous studies in chemical and biophysical systems, such as the optical line shapes of molecular aggregates \cite{1.3213013} and the quantum entanglement in photosynthetic light-harvesting complexes \cite{Sarovar2009Quantum}. For the finite-temperature case, we consider the Ohmic spectrum with Drude cutoff: \begin{equation} J(\omega) = \frac{2\Lambda \omega_{\rm{c}} \omega}{\pi(\omega_{\rm{c}}^2+\omega^2)}, \end{equation} in which $\omega_{\rm{c}}$ is the cutoff frequency and $\Lambda$ represents the coupling strength between the qubit and the bath. Then the bath correlation function $C(t)$ can be expressed as \cite{JPSJ.58.101,JPSJ.75.082001,PhysRevE.75.031107,1.2713104} \begin{equation} C(t)=\sum_{k=0}^{\infty} \zeta_k e^{-\nu_k t}, \end{equation} where the real and imaginary parts of $\zeta_k$ are respectively given as \begin{eqnarray} \zeta_k^{\rm{R}}&=& 4\Lambda \omega_{\rm{c}} T \frac{\nu_k}{\nu_k^2-\omega_{\rm{c}}^2}(1-\delta_{k0})+ \Lambda \omega_{\rm{c}} \cot(\frac{\omega_{\rm{c}}}{2T})\delta_{k0},\\ \zeta_k^{\rm{I}}&=& -\Lambda \omega_{\rm{c}} \delta_{k0}, \end{eqnarray} with $\nu_k=2k\pi T(1-\delta_{k0})+\omega_{\rm{c}} \delta_{k0}$ being the $k$th Matsubara frequency. Since the bath correlation function can be approximated by the sum of the first few terms of the series, the Matsubara frequencies have been cut off, and the convergence of the results has been checked in our numerical calculations. Following the derivation in Ref. \cite{PhysRevA.85.062323}, the hierarchy equations for the reduced quantum subsystem can be obtained as follows: \begin{equation} \frac{\rm{d}}{\rm{d}t} \rho_{\vec{l}} (t) = -(i H_{\rm{s}}^{\times} + \vec{l} \cdot \vec{\nu}) \rho_{\vec{l}} (t) + \phi \sum_{k=0}^{\epsilon} \rho_{\vec{l}+\vec{e}_k} (t) + \sum_{k=0}^{\epsilon} l_k \psi_k \rho_{\vec{l}-\vec{e}_k} (t), \end{equation} where $\vec{l}=(l_0, l_1,\ldots, l_{\epsilon})$ is an $(\epsilon+1)$-dimensional index, $\vec{\nu}= (\nu_0,\nu_1,\ldots,\nu_{\epsilon})$ and $\vec{e}_k =(0,0,\ldots,1_k,\ldots,0)$ are $(\epsilon+1)$-dimensional vectors with $\epsilon$ being the cutoff number of the Matsubara frequency. Two superoperators $\phi$ and $\psi_k$ are defined as \begin{equation} \phi=i f(s)^\times, \hspace{1cm} \psi_k=i \left[ \zeta_k^{\rm{R}}f(s)^\times +i \zeta_k^{\rm{I}} f(s)^\circ\right], \end{equation} with $X^\times Y=\left[X,Y\right]=XY-YX$ and $X^\circ Y= \left\lbrace X,Y\right\rbrace =XY + YX$.
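To make the hierarchy concrete, the following self-contained Python sketch integrates the above equations for the present two-qubit model, with the qubit-qubit coupling $g_0\sigma_z^{\rm{A}}\sigma_z^{\rm{B}}$, a pure system term, absorbed into the system Hamiltonian and $f(s)=\sigma_z^{\rm{B}}$ as the coupling operator. We stress that this is a minimal illustration of ours: the Matsubara cutoff, hierarchy depth, time step and all numerical values are arbitrary choices, and no claim is made that they reproduce the figures below.
\begin{verbatim}
import numpy as np
from itertools import product

# Minimal illustrative HEOM integrator for the two-qubit model above,
# following the hierarchy equation in the text (phi = i f^x and psi_k as
# defined). Truncation depth, cutoff and parameters are arbitrary choices.
Om, g0, Lam, wc, T = 1.0, 0.1, 0.05, 5.0, 5.0
K, depth = 2, 6                    # Matsubara cutoff epsilon, hierarchy depth

sz, I2 = np.diag([1.0, -1.0]), np.eye(2)
Hs = 0.5*Om*np.kron(sz, I2) + 0.5*Om*np.kron(I2, sz) + g0*np.kron(sz, sz)
F = np.kron(I2, sz)                # f(s) = sigma_z^B, acting on qubit B

nu = np.array([wc] + [2.0*np.pi*T*k for k in range(1, K + 1)])
zR = np.array([Lam*wc/np.tan(wc/(2.0*T))] +
              [4.0*Lam*wc*T*nu[k]/(nu[k]**2 - wc**2) for k in range(1, K + 1)])
zI = np.array([-Lam*wc] + [0.0]*K)

com = lambda A, B: A @ B - B @ A   # commutator  [A, B]
aco = lambda A, B: A @ B + B @ A   # anticommutator {A, B}

idx = [l for l in product(range(depth + 1), repeat=K + 1) if sum(l) <= depth]
pos = {l: n for n, l in enumerate(idx)}

def rhs(rho):                      # rho: all ADOs, shape (len(idx), 4, 4)
    out = np.zeros_like(rho)
    for l, n in pos.items():
        out[n] = -1j*com(Hs, rho[n]) - np.dot(l, nu)*rho[n]
        for k in range(K + 1):
            up = l[:k] + (l[k] + 1,) + l[k+1:]
            if up in pos:          # phi rho_{l+e_k}, with phi = i f^x
                out[n] += 1j*com(F, rho[pos[up]])
            if l[k] > 0:           # l_k psi_k rho_{l-e_k}
                dn = l[:k] + (l[k] - 1,) + l[k+1:]
                out[n] += l[k]*1j*(zR[k]*com(F, rho[pos[dn]])
                                   + 1j*zI[k]*aco(F, rho[pos[dn]]))
    return out

plus = 0.5*np.array([[1.0, 1.0], [1.0, 1.0]])
rho = np.zeros((len(idx), 4, 4), dtype=complex)
rho[pos[(0,)*(K + 1)]] = np.kron(plus, plus)      # product initial state
dt, t_end = 2e-3, 1.0
for _ in range(int(t_end/dt)):                    # classical RK4 stepping
    k1 = rhs(rho); k2 = rhs(rho + 0.5*dt*k1)
    k3 = rhs(rho + 0.5*dt*k2); k4 = rhs(rho + dt*k3)
    rho += dt/6.0*(k1 + 2*k2 + 2*k3 + k4)
top = rho[pos[(0,)*(K + 1)]].reshape(2, 2, 2, 2)  # physical density matrix
rhoA = np.trace(top, axis1=1, axis2=3)            # partial trace over qubit B
print("reduced state of qubit A at t = 1:\n", np.round(rhoA, 4))
\end{verbatim}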
\begin{figure}[htbp] \centering \includegraphics[width=9cm]{figure7} \caption{ The QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ of qubit $A$ versus the initial time parameter $t$ and the coupling strength $\Lambda$ for the Ohmic spectrum. Other parameters are chosen as $T=5$, $\tau_{\rm{D}}=10$, $g_0=0.1$, $\omega_{\rm{c}}=5$ and $\Omega =1$.} \label{figure7} \end{figure} We choose the initial state of the two qubits as \begin{equation} \rho_{\rm{AB}}(0)=\frac{1}{2}{\left( {\begin{array}{*{20}{c}} 1&1 \\ 1&1 \end{array}} \right)_{\rm{A}}} \otimes \frac{1}{2}{\left( {\begin{array}{*{20}{c}} 1&1 \\ 1&1 \end{array}} \right)_{\rm{B}}}, \end{equation} and display the QSLT of qubit $A$ as a function of the initial time parameter and the coupling strength for the Ohmic spectrum in Fig. \ref{figure7}. In the strong-coupling regime, the quantum evolution speed of qubit $A$ exhibits a speed-up process owing to the quantum decoherence effect. In contrast, the QSLT of qubit $A$ in the weak-coupling regime decreases to a minimum at the beginning of the evolution, then revives and exhibits a damped oscillatory behavior. As the coupling strength $\Lambda$ increases, the damped oscillatory behavior fades away. The QSLT thus behaves differently in the strong-coupling and weak-coupling regimes. \begin{figure}[t] \centering \subfigure[]{\label{figure8a} \includegraphics[width=7cm]{figure8a}} \subfigure[]{\label{figure8b} \includegraphics[width=7cm]{figure8b}} \subfigure[]{\label{figure8c} \includegraphics[width=7cm]{figure8c}} \subfigure[]{\label{figure8d} \includegraphics[width=7cm]{figure8d}} \caption{The QSLT $\tau_{\rm{QSL}}/\tau_{\rm{D}}$ and quantum coherence $C$ of qubit $A$ versus the initial time parameter $t$ and the temperature $T$ for the (a),(c) weak-coupling regime $\Lambda=0.005$ and (b),(d) strong-coupling regime $\Lambda=0.05$. Other parameters are chosen as $\tau_{\rm{D}}=10$, $g_0=0.1$, $\omega_{\rm{c}}=5$ and $\Omega =1$.} \label{figure8} \end{figure} In the previous section, it was shown that the quantum evolution speed is related to the dynamics of quantum coherence. Here, we choose a measure of quantum coherence based on the quantum Jensen-Shannon divergence \cite{Radhakrishnan2016} to study the dynamics of the quantum coherence of qubit $A$ and to gain more insight into the QSLT. The expression for the quantum coherence is given by \begin{equation} C(\rho)=\sqrt{S\left(\frac{\rho+\rho_{\rm{dia}}}{2}\right)-\frac{S(\rho)+S(\rho_{\rm{dia}})}{2}} \end{equation} where $S(\rho)=-\rm{Tr}\rho \log_2 \rho $ is the von Neumann entropy and $\rho_{\rm{dia}}$ is the incoherent state obtained from $\rho$ by deleting all off-diagonal elements \cite{PhysRevLett.Coherence}.
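For completeness, this coherence measure is simple to evaluate numerically; the following Python sketch is our own illustration (the helper names and the eigenvalue cutoff are arbitrary choices):
\begin{verbatim}
import numpy as np

# Illustrative evaluation of the Jensen-Shannon coherence measure above.
def entropy(rho):                   # von Neumann entropy with base-2 log
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log2(ev)))

def jsd_coherence(rho):
    rho_dia = np.diag(np.diag(rho)) # delete all off-diagonal elements
    mix = 0.5 * (rho + rho_dia)
    val = entropy(mix) - 0.5 * (entropy(rho) + entropy(rho_dia))
    return np.sqrt(max(val, 0.0))
\end{verbatim}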
We plot the QSLT and quantum coherence of qubit $A$ as functions of the initial time parameter $t$ and the temperature $T$ in Fig. \ref{figure8}. In the weak-coupling regime, as shown in Figs. \ref{figure8a} and \ref{figure8c}, the QSLT and the quantum coherence exhibit damped oscillatory behaviors and evolve similarly. The increase of temperature induces a speed-up evolution, owing to the fact that a higher temperature brings more intense decoherence, which can also be confirmed by the dynamics of the quantum coherence in Fig. \ref{figure8c}. In contrast, some rich and anomalous phenomena appear in the strong-coupling regime, as shown in Figs. \ref{figure8b} and \ref{figure8d}. In the low-temperature region, the behaviors of the QSLT and the quantum coherence are analogous to those in the weak-coupling regime. However, as the temperature increases, the QSLT is extended, which means that the quantum evolution speed is decelerated and is no longer a monotonically increasing function of the bath temperature. Superficially, this result is due to the enhancement of quantum coherence by the high temperature, which can be confirmed from the dynamics of quantum coherence in Fig. \ref{figure8d}. The underlying reason for this anomalous phenomenon is that the spin-boson model consisting of qubit $B$ and the thermal bath can be seen as a nonlinear environment for qubit $A$, whose power spectrum does not necessarily grow with the temperature \cite{PRL.120401}. Thus, some reversed effects occur, i.e., an increase of the bath temperature may give rise to a speed-down evolution and an enhancement of the quantum coherence. \section{Conclusion}\label{sec4} In conclusion, we have considered two kinds of finite-temperature bosonic baths to investigate the quantum evolution speed of a qubit, and we find that the quantum evolution speed is not a monotonic function of temperature. For the spin-boson model, the quantum evolution speed of the qubit can be accelerated by high temperature in the strong-coupling regime. In the weak-coupling regime, the bath temperature plays a dual role in affecting the quantum evolution speed, which means that high temperature leads not only to speed-up but also to speed-down processes. The quantum coherence, the relative purity and the driving time are responsible for the different behaviors of the quantum evolution speed in the strong-coupling and weak-coupling regimes. Furthermore, we observe that the quantum evolution speed can be controlled by the bang-bang pulses in the strong-coupling regime, and a relatively steady quantum evolution speed can be obtained by fast pulses. Interestingly, the bath temperature and the Ohmicity parameter also play dual roles in the strong-coupling regime in the presence of the bang-bang pulses, which is not found in the case without pulses. For the nonlinear bath, we study the quantum evolution speed of the qubit by applying the hierarchical-equations-of-motion method. It is shown that the performance of the quantum evolution speed in the weak-coupling and strong-coupling regimes is very different. In the strong-coupling situation, the quantum evolution speed in the low-temperature region behaves similarly to that in the weak-coupling situation, where the quantum evolution speed is simply a monotonically increasing function of temperature. However, a rise of temperature induces a speed-down process; this anomalous phenomenon is on account of the temperature dependence of the spectral profile of the nonlinear bath. As a comparison, the dynamics of quantum coherence is also explored in the different situations. These results point to the possibility of controlling the quantum evolution speed by changing the relevant environmental parameters in finite-temperature bosonic environments. Finally, we expect our studies to be of interest for experimental applications in quantum computation and quantum information processing. \section{Acknowledgments} This project was supported by the National Natural Science Foundation of China (Grant No. 11274274) and the Fundamental Research Funds for the Central Universities (Grants No. 2017FZA3005 and No. 2016XZZX002-01). \section*{References} \bibliographystyle{elsarticle-num}
\section{Introduction} The Schramm-Loewner evolution (SLE$_\kappa$) is a family of random planar fractal curves indexed by the real parameter $\kappa\geq 0$, introduced by Schramm in \cite{schramm2000scaling}. These random fractal curves have been proved to describe scaling limits of a number of discrete models that are of great interest in planar statistical physics. For instance, it was proved in \cite{lsw2004conformal} that loop-erased random walk (with the loops erased in chronological order) converges in the scaling limit to SLE$_\kappa$ with $\kappa = 2$. Moreover, other two-dimensional discrete models from statistical mechanics, including Ising model cluster boundaries, Gaussian free field interfaces, percolation on the triangular lattice at critical probability, and uniform spanning trees, were proved to converge in the scaling limit to SLE$_\kappa$ for the values $\kappa=3$, $\kappa=4$, $\kappa=6$ and $\kappa=8$, respectively, in the series of works \cite{smirnov2010conformal}, \cite{schramm2009contour}, \cite{smirnov2001critical} and \cite{lsw2004conformal}. There are also other models of statistical physics in 2D that are conjectured to have SLE$_\kappa$, for a given value of $\kappa$, as a scaling limit, among which is the two-dimensional self-avoiding walk, which is conjectured to converge in the scaling limit to SLE$_{8/3}$. For a detailed exposition and pedagogical introduction to SLE theory, we refer the reader to \cite{rs2005basic}, \cite{lawler2005conformally}, and \cite{kemppainen2017schramm}. Questions concerning the behaviour of the SLE trace at the tip can be found in the existing body of SLE literature, for example in \cite{viklund2012almost}, where the almost sure multi-fractal spectrum of the SLE trace near its tip is computed, and in \cite{zhan2016ergodicity}, in which the ergodic properties of the harmonic measure near the tip of the SLE trace are studied. However, to the best of our knowledge, the law of the SLE tip at fixed capacity time had not been studied in the SLE literature until very recently. One of the first papers in this direction is \cite{lyons2019convergence}, where a method based on stopping times was applied in order to deduce information about the law of the SLE tip. In this article, we develop an approach that allows for an in-depth study of this fundamental quantity. More precisely, we derive a PDE whose unique solution is the density of the SLE tip. This allows us to obtain explicit values for the negative second and negative fourth moments of the imaginary part of the SLE tip. We deduce that they are finite only for $\kappa< 8$ and $\kappa < 8/3$, respectively. For further negative moments, we identify the values of $\kappa$ where the moments are finite. To obtain these results we combine PDE techniques with certain tools from the theory of stochastic stability of stochastic differential equations (SDEs). Namely, we work with an SDE obtained from the backward Loewner differential equation. By a scaling argument, we derive a two-dimensional diffusion process that converges in law to the SLE tip. Using tools from ergodic theory (in the spirit of \cite{MSH}), we prove that this diffusion process has a unique invariant measure. This allows us to show that the density of the SLE tip solves the Fokker-Planck-Kolmogorov (FPK) equation associated with the process. Showing that the density of the SLE tip is the unique solution of the FPK equation requires further tools. Note that while there is a vast literature on FPK equations (see e.g.
\cite{BKRS}), usually only the case of elliptic operators is considered, while our FPK equation is hypoelliptic. Therefore, to show uniqueness of solutions to this equation and to derive the support of the solution, we utilise the generalized Ambrosio-Figalli-Trevisan superposition principle obtained recently in \cite{BRS}, as well as more standard methods such as Lyapunov functions and Harnack inequalities. This paper is organised into three sections, the first being the introduction. In the second section we state the main results. In the last section, which is further divided into two subsections, we give their proofs. \medskip \noindent\textbf{Convention on constants.} Throughout the paper $C$ denotes a positive constant whose value may change from line to line. \medskip \noindent\textbf{Acknowledgements.} The authors are deeply indebted to Stas Shaposhnikov for his help, patience, detailed explanations of some parts of the theory of FPK equations and for suggesting some useful ideas for the proofs. We are also very grateful to Paolo Pigato and Peter Friz for fruitful discussions. OB has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 683164) and from the DFG Research Unit FOR 2402. VM would like to thank the NYU-ECNU Institute of Mathematical Sciences at NYU Shanghai for its support. YY acknowledges partial support from the ERC through Consolidator Grant 683164 (PI: Peter Friz). \section{Main results} First, let us introduce the basic notation. For a domain $D\subset\mathbb{R}^k$, $k\ge1$, let $\mathcal{C}^\infty(D,\mathbb{R})$ be the set of functions $D\to\mathbb{R}$ which have derivatives of all orders. The set of functions from $\mathcal{C}^\infty(D,\mathbb{R})$ which are bounded and have bounded derivatives of all orders will be denoted by $\mathcal{C}_b^\infty(D,\mathbb{R})$. As usual, for a function $f\colon D\to\mathbb{R}^d$, $d\ge1$, we will denote its supremum norm by $\|f\|_{\infty}:=\sup_{x\in D} |f(x)|$. Let $\H$ be the open complex upper half-plane $\{\Im(z) > 0\}$. Until the end of the paper we fix $\kappa\in(0,\infty)$. Let $g_t\colon H_t\to\H$, $t\ge0$, be the forward SLE flow, that is, the solution to the Loewner ODE \begin{equation*} \partial_t g_t(z) = \frac{2}{g_t(z)-\sqrt{\kappa} B_t}, \quad g_0(z)=z,\quad t\ge0,\, z\in\H, \end{equation*} where $B$ is a standard Brownian motion, $H_t = \{z \in \H\mid T_z > t\}$, and $T_z$ is the time until which the ODE is solvable. Let $(\gamma_t)_{t\ge0}$ be the SLE$_\kappa$\ path associated with this flow. It is well-known \cite[Theorem 3.6]{rs2005basic}, \cite[Theorem 4.7]{lsw2004conformal} that $\mathsf{P}$-a.s. for any $t\ge0$ \begin{equation}\label{SLElim} \gamma(t)=\lim_{u\to0+}g_t^{-1}(\sqrt\kappa B_t+iu). \end{equation} Our main result is the following statement. \begin{theorem}\label{T:main} The random vector $(\Re(\gamma_1),\Im(\gamma_1))$ has a density $\psi\in\mathcal{C}^{\infty}(\mathbb{R}\times(0,\infty),\mathbb{R})$ which is the unique solution, in the class of probability densities (non-negative functions that integrate to $1$ over the whole space), of the following PDE: \begin{equation}\label{mainPDE} \frac{\kappa}{2}\d^2_{xx}\psi+\left(\frac12x+\frac{2x}{x^2+y^2}\right)\d_x \psi+ \left(\frac12y-\frac{2y}{x^2+y^2}\right)\d_y\psi+\left(1+\frac{4(y^2-x^2)}{(x^2+y^2)^2}\right)\psi = 0, \end{equation} where $x\in\mathbb{R}$, $y\in(0,\infty)$.
Furthermore, $\psi$ is strictly positive in $\mathbb{R}\times(0,2)$, $\psi\equiv 0$ on $\mathbb{R}\times[2,\infty)$, and $\psi(x,y)=\psi(-x,y)$ for $x\in\mathbb{R}$, $y>0$. \end{theorem} In \cref{fi:simul} we have attached numerical simulations of $\gamma(1)$ for various values of $\kappa$. There we have chosen the coordinates $(\alpha,y)$, where $\alpha = \arg\gamma(1)$ and $y=\Im\gamma(1)$, so that the samples fit well in the plot. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{density2.png} \includegraphics[width=0.49\textwidth]{density4.png} \includegraphics[width=0.49\textwidth]{density6.png} \includegraphics[width=0.49\textwidth]{density8.png} \caption{Simulation of $\gamma(1)$ with $20000$ samples each. Plotted are the coordinates $(\alpha,y)$ where $\alpha = \arg\gamma(1)$ and $y=\Im\gamma(1)$.} \label{fi:simul} \end{figure} As an application of \cref{T:main}, we show that the following quantities can be explicitly calculated. \begin{theorem}\label{T:other} The following holds: \begin{enumerate}[\rm{(}i\rm{)}] \item For any measurable set $\Lambda\subset\overline\HH$ one has \begin{equation}\label{eq:zhan_law} \hskip.15ex\mathsf{E}\hskip.10ex \int_0^\infty \mathbbm 1(\gamma(t) \in \Lambda) \, dt = \frac{\Gamma(1+\frac4\kappa)}{2\sqrt\pi\Gamma(\frac12+\frac4\kappa)} \int_\Lambda \left( 1+\frac{x^2}{y^2} \right)^{-4/\kappa} \,dx\,dy. \end{equation} \item For any $n\in\mathbb{Z}_+$, $n\ge1$, we have \begin{equation}\label{ineq24} \hskip.15ex\mathsf{E}\hskip.10ex (\Im\gamma_1)^{-2n}<\infty \,\,\text{if and only if $\kappa<8/(2n-1)$}. \end{equation} Further, \begin{align} &\hskip.15ex\mathsf{E}\hskip.10ex(\Im\gamma_1)^{-2}=\frac{2}{8-\kappa}\quad\text{for $\kappa < 8$},\label{ineq25} \\ &\hskip.15ex\mathsf{E}\hskip.10ex(\Im\gamma_1)^{-4}=\frac{16(3-\kappa)}{(12-\kappa)(8-\kappa)(8-3\kappa)}\quad\text{for $\kappa < 8/3$}.\label{ineq26} \end{align} \end{enumerate} \end{theorem} \begin{remark} Note that the left-hand side of \eqref{eq:zhan_law} is the average amount of time SLE spends in the set $\Lambda$. A version of this identity has previously appeared in \cite[Corollary 5.3]{zhan2019decomposition}. However, in that paper the constant in front of the integral has been implicitly specified as $1/C_{\kappa,1}$ with \[ C_{\kappa,1} = \int_\H \left( M_0(z)-\hskip.15ex\mathsf{E}\hskip.10ex[M_1(z)\mathbbm 1_{T_z > 1}] \right) \,dx\,dy \] and $M_t(z) = \abs{g_t'(z)}^2 \left(1+\frac{X_t(z)^2}{Y_t(z)^2}\right)^{-4/\kappa}$. In particular, our result implies \[ C_{\kappa,1} = \frac{2\sqrt\pi\Gamma(\frac12+\frac4\kappa)}{\Gamma(1+\frac4\kappa)}. \] \end{remark} As we will point out in \cref{se:density_analysis}, \cref{T:other} may seem like a simple consequence of \cref{T:main} that can be heuristically deduced from integration by parts arguments. However, it is surprisingly tricky to control the boundary behaviour of $\psi$ and its derivatives. Therefore, more work is required to prove \cref{T:other} rigorously. One of our initial motivations was to know more about the marginal law of $\alpha = \arg\gamma(1)$. We believe that the marginal density should behave like $\alpha^{8/\kappa}$ as $\alpha \searrow 0$. We did not succeed in proving this; instead, we prove the following in \cref{se:density_analysis}. Denote $(\alpha, y) = (\arg\gamma(1), \Im\gamma(1))$ and let $q(\alpha, y) = \psi(y\cot\alpha,y)\frac{y}{\sin^2\alpha}$ be the density in these coordinates. Then for $n \ge 1$ we have $\int_0^2 y^{-2n} q(\alpha,y)\,dy \approx \alpha^{8/\kappa-2n}$ as $\alpha \searrow 0$.
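\medskip \noindent\textbf{A remark on simulations.} Samples of $\gamma(1)$, such as those in \cref{fi:simul}, can be generated, for instance, via the reverse-flow representation of \cref{L:main} below: the process $X_t+iY_t:=h_t(i)-\sqrt\kappa\widetilde B_t$ solves an explicit two-dimensional SDE (see the proof of \cref{Cor:main}), and $(X_T+iY_T)/\sqrt{T}$ is approximately distributed as $\gamma(1)$ for large $T$. Below is a minimal Euler--Maruyama sketch of this procedure (in Python); the time horizon and step size are illustrative choices, not necessarily the parameters used for \cref{fi:simul}.
\begin{verbatim}
import numpy as np

def sample_tip(kappa, T=100.0, dt=1e-3, seed=None):
    # Euler-Maruyama scheme for the reverse-flow SDE
    #   dX_t = -2 X_t / (X_t^2 + Y_t^2) dt - sqrt(kappa) dB_t,  X_0 = 0,
    #   dY_t =  2 Y_t / (X_t^2 + Y_t^2) dt,                     Y_0 = 1.
    # By the reverse-flow lemma, (X_T + i Y_T) / sqrt(T) converges in law
    # to gamma(1) as T grows; the values of T and dt here are illustrative.
    rng = np.random.default_rng(seed)
    x, y = 0.0, 1.0
    for _ in range(int(T / dt)):
        r2 = x * x + y * y          # note r2 >= 1, since y only increases
        x += -2.0 * x / r2 * dt - np.sqrt(kappa * dt) * rng.standard_normal()
        y += 2.0 * y / r2 * dt
    return (x + 1j * y) / np.sqrt(T)
\end{verbatim}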
\begin{remark} The support of the density is quite natural since the half-plane capacity of $\gamma[0,t]$ is always at least $\frac{1}{2}(\Im\gamma(t))^2$, and hence we always have $\Im\gamma(t) \le \sqrt{2\operatorname{hcap}(\gamma[0,t])} = 2\sqrt{t}$. Note also that $\Im\gamma(t) = 2\sqrt{t}$ is only attained by SLE$_0$, i.e.\ $\gamma(t) = 2i\sqrt{t}$, which is driven by the constant driving function. \end{remark} To obtain these results we establish the following lemma, which links the law of SLE$_\kappa$ with the invariant measure of a certain diffusion process. Introduce the reverse SLE flow \begin{equation}\label{hdef} \partial_t h_t(z) = \frac{-2}{h_t(z)-\sqrt{\kappa}\widetilde B_t}, \quad h_0(z)=z, \quad t\ge0,\, z\in\H; \end{equation} where $\widetilde B$ is the time-reversed Brownian motion, that is, \begin{equation}\label{RBM} \widetilde B_t:= B_{1-t}- B_1\quad\text{for $t\le1$;}\quad \widetilde B_t:= B'_{t-1}-B_1\quad\text{for $t\ge1$}, \end{equation} where $B'$ is a Brownian motion independent of $B$. It is obvious that $\widetilde B$ is a Brownian motion. \begin{lemma}\label{L:main} We have \begin{equation*} \frac1{\sqrt t}(h_{t}(i)- \sqrt{\kappa}\widetilde B_{t})\to\gamma(1)\,\,\text{in law as $t\to\infty$}. \end{equation*} \end{lemma} \section{Proofs} \subsection{Proofs of \cref{L:main,T:main}} We begin with the proof of \cref{L:main}. \begin{proof}[Proof of \cref{L:main}] Introduce $\widehat f_t(z):=g_t^{-1}(\sqrt\kappa B_t+z)$, $z\in\H$, $t\ge0$. We claim that \begin{equation}\label{claim} \widehat f_1(z) = h_1(z)+\sqrt{\kappa}B_1. \end{equation} Indeed, it follows from \eqref{hdef} that for $z\in\H$, $t\in[0,1]$ \begin{equation*} \partial_t(h_{1-t}(z)+\sqrt{\kappa} B_1) = \frac{2}{h_{1-t}(z)-\sqrt{\kappa}\widetilde B_{1-t}} = \frac{2}{(h_{1-t}(z)+\sqrt{\kappa} B_1)-\sqrt{\kappa} B_t}, \end{equation*} which implies $h_{1-t}(z)+\sqrt{\kappa}B_1 = g_t(h_1(z)+\sqrt{\kappa} B_1)$. Recalling the definition of $\widehat f$ and taking $t=1$, we obtain \eqref{claim}. Next, we note that the following scaling property holds: for any $c>0$ \begin{equation} \label{samelaw} \Law(\frac{1}{c}\sqrt{\kappa}\widetilde B_{c^2}, \frac{1}{c}h_{c^2}(cz))=\Law(\sqrt{\kappa}\widetilde B_1, h_1(z)). \end{equation} Indeed, using again the definition of $h$ in \eqref{hdef}, we see that for any $t\ge0$ \begin{equation*} \partial_t (\frac{1}{c}h_{c^2 t}(cz)) = \frac{-2c}{h_{c^2 t}(cz)-\sqrt{\kappa}\widetilde B_{c^2 t}} = \frac{-2}{\frac{1}{c}h_{c^2 t}(cz)-\frac{1}{c}\sqrt{\kappa}\widetilde B_{c^2 t}} . \end{equation*} Since the process $(\frac{1}{c}\sqrt{\kappa}\widetilde B_{c^2 t})_{t \ge 0}$ has the same law as $(\sqrt{\kappa}\widetilde B_t)_{t \geq 0}$ and the solution of the Loewner differential equation is a deterministic function of the driver, we see that \eqref{samelaw} holds. Fix $u>0$. Applying \eqref{claim} with $z=iu$ and \eqref{samelaw} with $z=iu$, $c=1/u$, we deduce \begin{equation*} \Law(\widehat f_1(iu))=\Law(u h_{\frac1{u^{2}}}(i)-u\sqrt\kappa \widetilde B_{\frac1{u^{2}}}), \end{equation*} where we have also used the fact that $B_1=-\widetilde B_1$. Since, by \eqref{SLElim}, we have $\gamma(1) = \lim_{u \searrow 0} \widehat f_1(iu)$, it follows that $u(h_{1/u^2}(i)- \sqrt{\kappa}\widetilde B_{1/u^2})$ converges in law to $\gamma(1)$ as $u \searrow 0$. This implies the statement of the lemma. \end{proof} Recall the definition of the reverse SLE flow $h$ in \eqref{hdef} and the reversed Brownian motion $\widetilde B$ in \eqref{RBM}. \Cref{L:main} implies the following result.
\begin{corollary}\label{Cor:main} Let $(\widehat X_t,\widehat Z_t)_{t\ge0}$ be the stochastic process that satisfies the following system of equations \begin{align} d\widehat X_t &= \bigl(-\frac12 \widehat X_t -\frac{2\widehat X_t}{\widehat X_t^2+e^{2\widehat Z_t}}\bigr)\, dt + \sqrt{\kappa}\, d\widehat B_t, \label{eq11}\\ d\widehat Z_t &= \bigl(-\frac12 +\frac{2}{\widehat X_t^2+e^{2\widehat Z_t}}\bigr)\, dt,\label{eq22} \end{align} with the initial data $\widehat X_0=\Re (h_1(i))-\sqrt\kappa \widetilde B_1$, $\widehat Z_0=\log(\Im (h_1(i)))$; here $\widehat B_t := -\int_0^t e^{-s/2}\, d\widetilde B_{e^s}$, and we set $\widehat{\mathcal{F}}_t :=\sigma(\widetilde B_r,\, r \in [0,e^t])$. Then \begin{equation}\label{corres} (\widehat X_t,\widehat Z_t)\to (\Re(\gamma_1),\log(\Im(\gamma_1)))\quad \text{in law as $t\to\infty$}. \end{equation} \end{corollary} Note that the initial value of the process $(\widehat X,\widehat Z)$ is random but measurable with respect to $\widehat{\mathcal{F}}_0$. \begin{proof} Put $X_t+iY_t \mathrel{\mathop:}= h_t(i)-\sqrt{\kappa}\widetilde B_t$. Then, it follows from \eqref{hdef} that \begin{align} dX_t &= \frac{-2X_t}{X_t^2+Y_t^2}\, dt - \sqrt{\kappa}\, d\widetilde B_t, \nonumber\\ dY_t &= \frac{2Y_t}{X_t^2+Y_t^2}\, dt,\label{yfirst} \end{align} $X_0=0$, $Y_0=1$. For $t \ge 0$, let $\widehat X_t:= e^{-t/2} X_{e^t}$ and $\widehat Y_t := e^{-t/2} Y_{e^t}$. We apply It\^o's formula to derive \begin{align} d\widehat X_t &= \bigl(-\frac12 \widehat X_t -\frac{2\widehat X_t}{\widehat X_t^2+\widehat Y_t^2}\bigr)\, dt + \sqrt{\kappa}\, d\widehat B_t, \label{eq1}\\ d\widehat Y_t &= \bigl(-\frac12 \widehat Y_t +\frac{2\widehat Y_t}{\widehat X_t^2+\widehat Y_t^2}\bigr)\, dt.\label{eq2} \end{align} Clearly, $\widehat B$ is a standard Brownian motion with respect to the filtration $(\widehat{\mathcal{F}}_t)_{t\ge0}$. By definition, we also have $\widehat X_0=X_1$, $\widehat Y_0=Y_1$. The change of variables $\widehat Z_t:=\log\widehat Y_t$ and another application of It\^o's formula imply that the process $(\widehat X_t,\widehat Z_t)_{t\ge0}$ satisfies the SDE \eqref{eq11}--\eqref{eq22} with the initial conditions $\widehat X_0=X_1=\Re (h_1(i))-\sqrt\kappa \widetilde B_1$, $\widehat Z_0=\log(Y_1)=\log(\Im (h_1(i)))$. Note that by \eqref{yfirst}, $Y_1\ge Y_0=1$, and therefore $|\widehat Z_0|<\infty$. Furthermore, \begin{equation*} e^{-t/2} (h_{e^t}(i)- \sqrt{\kappa}\widetilde B_{e^t}) = e^{-t/2} (X_{e^t}+iY_{e^t}) = \widehat X_t +i \widehat Y_t. \end{equation*} Thus, by \Cref{L:main}, we have \begin{equation}\label{convhat} (\widehat X_t,\widehat Y_t)\to(\Re(\gamma(1)), \Im(\gamma(1)))\,\,\text{in law as $t\to\infty$}. \end{equation} Note that \begin{equation}\label{zeroprop} \mathsf{P}(\Im(\gamma(1))=0)=0. \end{equation} Indeed, the trace of a Loewner chain a.s. spends zero capacity time at the boundary, i.e., $\lambda(\{t \mid \Im\gamma(t)=0 \}) = 0$ a.s., where $\lambda$ is the Lebesgue measure (cf. \cite[Proposition~1.7]{yuan2020topology}; the case for SLE$_\kappa${} appeared already in \cite[Corollary~5.3]{zhan2019decomposition}). Therefore, by Fubini's theorem, $\mathsf{P}(\Im(\gamma(t))=0)=0$ for Lebesgue-a.e.\ $t$. By scale invariance, this implies \eqref{zeroprop}. Now, combining \eqref{convhat} and \eqref{zeroprop}, we get \eqref{corres}. \end{proof} It follows from \Cref{Cor:main} that to prove \Cref{T:main} one needs to study invariant measures of \eqref{eq11}--\eqref{eq22}. The PDE \eqref{mainPDE} is then the Fokker-Planck-Kolmogorov equation for this process.
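For the reader's convenience, let us record this correspondence explicitly; the computation here is formal and is justified in the proofs of \cref{L:36} and \cref{T:main} below. Written out in the coordinates $(x,z)$ for a density $p$, the stationary Fokker-Planck-Kolmogorov equation for \eqref{eq11}--\eqref{eq22} reads
\begin{equation*}
\frac{\kappa}{2}\d^2_{xx}p-\d_x\Bigl(\bigl(-\tfrac12 x-\tfrac{2x}{x^2+e^{2z}}\bigr)p\Bigr)-\d_z\Bigl(\bigl(-\tfrac12+\tfrac{2}{x^2+e^{2z}}\bigr)p\Bigr)=0,\quad (x,z)\in\mathbb{R}^2,
\end{equation*}
and the substitution $p(x,z)=e^{z}\psi(x,e^{z})$, that is, passing from the coordinates $(\Re(\gamma_1),\log(\Im(\gamma_1)))$ back to $(\Re(\gamma_1),\Im(\gamma_1))$, transforms this equation into \eqref{mainPDE}.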
But since the coefficients have a singularity at $0$, a bit of care is needed to make the statements rigorous. First, we show that this SDE is well-posed and that its solution is a Markov process. We begin with the following technical statement. Let $W$ be a standard Brownian motion. For $\varepsilon>0$, let $g_\varepsilon\colon\mathbb{R}\to[\varepsilon/2,+\infty)$ be a $\mathcal{C}^\infty(\mathbb{R})$ function with bounded derivatives of all orders such that \begin{equation*} \begin{cases} g_\varepsilon(x)=x,\quad &x\ge\varepsilon;\\ \varepsilon/2\le g_\varepsilon(x)\le \varepsilon,\quad &-\infty<x<\varepsilon. \end{cases} \end{equation*} \begin{lemma}\label{L:31aux} Fix $\varepsilon>0$ and consider the stochastic differential equation \begin{align} d X^\varepsilon_t &= \bigl(-\frac12 X^\varepsilon_t -\frac{2 X^\varepsilon_t}{ (X^\varepsilon_t)^2+ g_\varepsilon(e^{2 Z_t^\varepsilon})}\bigr)\, dt + \sqrt{\kappa}\, d W_t, \label{eq1eps}\\ d Z^\varepsilon_t &= \bigl(-\frac12 +\frac{2}{ (X^\varepsilon_t)^2+ g_\varepsilon(e^{2 Z_t^\varepsilon})}\bigr)\, dt,\label{eq2eps} \end{align} where $(X^\varepsilon_0,Z^\varepsilon_0)=(x_0,z_0)\in\mathbb{R}^2$. Then for any initial condition $(x_0,z_0)\in\mathbb{R}^2$ the SDE \eqref{eq1eps}--\eqref{eq2eps} has a unique strong solution. This solution is a strong Feller Markov process. \end{lemma} \begin{proof} Since the drift and diffusion of \eqref{eq1eps}--\eqref{eq2eps} are uniformly Lipschitz continuous functions, it is immediate that the SDE \eqref{eq1eps}--\eqref{eq2eps} has a unique strong solution and this solution is a Markov process. To show that $(X_t^\varepsilon,Z_t^\varepsilon)$ is a strong Feller process we use H\"ormander's theorem. Denote \begin{equation*} b^\varepsilon(x,z):=\begin{pmatrix} b^{1,\varepsilon}(x,z)\\b^{2,\varepsilon}(x,z) \end{pmatrix}:=\begin{pmatrix} -\frac12 x -\frac{2x}{x^2+g_\varepsilon(e^{2z})}\\[1.5ex] -\frac12 +\frac{2}{x^2+g_\varepsilon(e^{2z})} \end{pmatrix},\quad x,z\in\mathbb{R};\qquad \sigma:=\begin{pmatrix} \sqrt \kappa\\0 \end{pmatrix}. \end{equation*} Then we can rewrite \eqref{eq1eps}--\eqref{eq2eps} as \begin{equation}\label{Zeq} d\xi_t^\varepsilon=b^\varepsilon(\xi^\varepsilon_t)dt+\sigma d W_t, \end{equation} where we put $\xi^\varepsilon:=\begin{psmallmatrix} X^\varepsilon\\ Z^\varepsilon \end{psmallmatrix}$. Let us verify that SDE \eqref{Zeq} satisfies all conditions of H\"ormander's theorem \cite[Theorem~1.3]{Hair11} (see also \cite[Theorem~6.1]{Pavl}). We see that the drift $b^\varepsilon$ is in $\mathcal{C}^{\infty}$ and all its derivatives are bounded. Furthermore, if we denote by $[\cdot,\cdot]$ the Lie bracket between two vector fields, then a direct calculation shows that \begin{align*} [b^\varepsilon,\sigma]=\sqrt\kappa\begin{pmatrix} \d_xb^{1,\varepsilon}\\\d_xb^{2,\varepsilon} \end{pmatrix},\quad [\,[b^\varepsilon,\sigma],\sigma]=\kappa\begin{pmatrix} \d_{xx}b^{1,\varepsilon}\\\d_{xx}b^{2,\varepsilon} \end{pmatrix}. \end{align*} Therefore, for $x\neq0$, $z\in\mathbb{R}$ we have $\spn( \sigma,\bigl[b^{\varepsilon}(x,z),\sigma\bigr])=\mathbb{R}^2$, and for $x=0$, $z\in\mathbb{R}$ we have $\spn( \sigma,\bigl[\bigl[b^\varepsilon(x,z),\sigma\bigr],\sigma\bigr])=\mathbb{R}^2$. Thus, H\"ormander's condition holds. Hence, all the conditions of H\"ormander's theorem are met and \cite[Theorem~1.3]{Hair11} implies that $(X^\varepsilon,Z^\varepsilon)$ is strong Feller. \end{proof} Now we can show well-posedness of \eqref{eq11}--\eqref{eq22}.
\begin{lemma}\label{L:31} For any initial data $(x_0,z_0)\in\mathbb{R}^2$, the stochastic differential equation \eqref{eq11}--\eqref{eq22} has a unique strong solution. This solution is a Markov process in the state space $\mathbb{R}^2$ and its transition kernel $P_t$ is strong Feller for any $t>0$. \end{lemma} \begin{proof} It is immediate to see that for any $T>0$ a solution to \eqref{eq11}--\eqref{eq22} with initial data $(x_0,z_0)\in \mathbb{R}^2$ satisfies \begin{equation}\label{ineqY} \widehat Z_t\ge z_0 -T/2, \end{equation} $t\in[0,T]$. Hence, on the time interval $[0,T]$, any solution to \eqref{eq11}--\eqref{eq22} solves SDE \eqref{eq1eps}--\eqref{eq2eps} with $(X_0^\varepsilon,Z_0^\varepsilon)=(x_0,z_0)$, $\varepsilon=\exp(2 z_0-T)$, $W=\widehat B$ and vice versa. Since, by \Cref{L:31aux}, the latter equation has a unique strong solution, we see that the SDE \eqref{eq11}--\eqref{eq22} has a unique strong solution on $[0,T]$ and \begin{equation}\label{ref} (\widehat X_t,\widehat Z_t)=( X^\varepsilon_t, Z^\varepsilon_t),\quad t\in[0,T]. \end{equation} Since $T$ is arbitrary, it follows that the SDE \eqref{eq11}--\eqref{eq22} has a unique strong solution on $[0,\infty)$. Furthermore, we see from the above argument that $(\widehat X_t,\widehat Z_t)_{t\ge0}$ is a Markov process with the state space $\mathbb{R}^2$ equipped with the Borel $\sigma$-algebra. Now let us show that $(P_t)_{t\ge0}$ is strong Feller. Let $f$ be an arbitrary bounded measurable function $\mathbb{R}^2\to\mathbb{R}$, and let $(x_0,z_0)\in \mathbb{R}^2$. Let $(x_0^n,z_0^n)\in\mathbb{R}^2$, $n\in\mathbb{Z}_+$, be a sequence converging to $(x_0,z_0)$ as $n\to\infty$. Without loss of generality we can assume that $z_0^n\ge -2 |z_0|$ for all $n\in\mathbb{Z}_+$. Fix $t>0$. Then \begin{equation}\label{SF1} P_t f(x^n_0,z^n_0)=\hskip.15ex\mathsf{E}\hskip.10ex_{(x^n_0,z^n_0)}f(\widehat X_t,\widehat Z_t)=\hskip.15ex\mathsf{E}\hskip.10ex_{(x^n_0,z^n_0)}f( X^\varepsilon_t, Z^\varepsilon_t)=P_t^\varepsilon f(x^n_0,z^n_0), \end{equation} where $\varepsilon:=\exp(-4|z_0|-t)$, $P_t^\varepsilon$ denotes the transition kernel of the process from \Cref{L:31aux}, and we used here \eqref{ineqY} and \eqref{ref}. By \Cref{L:31aux}, we have \begin{equation}\label{SF2} P_t^\varepsilon f(x^n_0,z^n_0)\to P_t^\varepsilon f(x_0,z_0)=P_t f(x_0,z_0),\quad \text{as $n\to\infty$}, \end{equation} where we used once again \eqref{ineqY} and \eqref{ref}. Combining \eqref{SF1} and \eqref{SF2}, we see that $P_t$ is strong Feller. \end{proof} To show uniqueness of the invariant measure of $(P_t)$, we will need the following support theorem. For $\delta>0$, $v\in\mathbb{R}^2$ let $B_{\delta,v}$ be the ball of radius $\delta$ centred at $v$. \begin{lemma}\label{L:support} For any $(x_0,z_0)\in\mathbb{R}^2$, $\delta>0$, there exists $T>0$ such that \begin{equation*} P_T((x_0,z_0), B_{\delta,(0,\log 2)})>0. \end{equation*} \end{lemma} \begin{proof} Fix $(x_0,z_0)\in\mathbb{R}^2$. Consider the following deterministic control problem associated with \eqref{eq11}--\eqref{eq22}: \begin{align} \frac{d}{dt}x_t &= \bigl(-\frac12 x_t -\frac{2x_t}{x_t^2+e^{2z_t}}\bigr) + \sqrt{\kappa} \frac{d}{dt} U_t,\label{U1}\\ \frac{d}{dt}z_t &= \bigl(-\frac12 +\frac{2}{x_t^2+e^{2z_t}}\bigr),\label{U2} \end{align} where $x(0)=x_0$, $z(0)=z_0$ and $U\colon[0,T]\to\mathbb{R}$ is a smooth non-random control. We claim that we can find $T>0$ and $U$ such that $x_T=0$ and $|z_T-\log 2|<\delta/2$. First, for $t\in[0,1]$ let $x_t$ be a $\mathcal{C}^1$ path which starts at $x_0$ and satisfies $x_1=0$, $\frac{d}{dt}x_t\Bigr|_{t=1}=0$.
Let $z_t$, $t\in[0,1]$, be the solution to \eqref{U2} with the initial condition $z_0$ (for $x$ constructed above). Consider now the equation \begin{equation*} \frac{d}{dt}z_t = \bigl(-\frac12 +\frac{2}{e^{2z_t}}\bigr), \quad t\ge1 \end{equation*} with the initial condition $z_1$ constructed above. It is easy to see that there exists $T=T(x_0,z_0)>1$ such that $|z_T-\log 2|<\delta/2$. Set $x_t=0$ for $t\in[1,T]$. Finally, let $U_t$, $t\in[0,T]$, be a $\mathcal{C}^1$ path such that \eqref{U1} holds for $x,z$ constructed above and $U_0=0$. The desired control has been constructed. Now for arbitrary $\varepsilon>0$, consider the event \begin{equation*} A_\varepsilon:=\{\sup_{t\in[0,T]}|W_t-U_t|<\varepsilon\}. \end{equation*} It is well-known (see, e.g., \cite[Theorem~38]{Freedman}) that $\mathsf{P}(A_\varepsilon)>0$. Let $(\widehat X_t,\widehat Z_t)_{t\in[0,T]}$ be the solution of \eqref{eq11}--\eqref{eq22} with the initial condition $(x_0,z_0)$. Then \begin{equation}\label{glb} z_t\ge z_0-T/2,\quad \widehat Z_t\ge z_0-T/2,\qquad \text{for all $t\in[0,T]$}. \end{equation} Therefore, for any $t\in[0,T]$ we have on $A_\varepsilon$ \begin{equation}\label{almostall} |\widehat X_t-x_t|+|\widehat Z_t-z_t|\le C\int_0^t (|\widehat X_s-x_s|+|\widehat Z_s-z_s|)\,ds +\sqrt \kappa\varepsilon, \end{equation} where we used \eqref{glb} and the fact that the drift of the SDE \eqref{eq11}--\eqref{eq22} is Lipschitz continuous on the set $\mathbb{R}\times [z_0-T/2,+\infty)$. By the Gr\"onwall inequality and \eqref{almostall}, we have on $A_\varepsilon$ \begin{equation*} |\widehat X_T-x_T|+|\widehat Z_T-z_T|\le C(T) \varepsilon. \end{equation*} Choose now $\varepsilon$ small enough such that the right-hand side of the above inequality is less than $\delta/2$. Then recalling that $x_T=0$ and $|z_T-\log 2|<\delta/2$, we finally deduce \begin{equation*} P_T((x_0,z_0), B_{\delta,(0,\log 2)})\ge \mathsf{P}(A_\varepsilon)>0.\qedhere \end{equation*} \end{proof} \begin{lemma}\label{L:UIM} The measure $\pi:=\Law\bigl(\Re(\gamma_1),\log(\Im(\gamma_1))\bigr)$ is the unique invariant measure for the process \eqref{eq11}--\eqref{eq22}. \end{lemma} \begin{proof} The fact that the measure $\pi$ is invariant follows by a standard argument. Denote, as usual, for a measurable bounded function $f$ and a measure $\nu$ \begin{align*} P_tf(x):=\int_{\mathbb{R}^2} f(y) P_t(x,dy),\,\,x\in\mathbb{R}^2;\qquad P_t\nu(A):=\int_{\mathbb{R}^2} P_t(y,A)\,\nu(dy),\,\,A\in\mathcal{B}(\mathbb{R}^2). \end{align*} Consider the measure $\mu:=\Law\Bigl(\Re (h_1(i))-\sqrt\kappa \widetilde B_1, \log(\Im (h_1(i)))\Bigr)$. Rewriting \eqref{corres}, we see that \begin{equation}\label{convergence2} P_t \mu\to \pi\,\,\text{weakly as $t\to\infty$}. \end{equation} Fix any $s\ge0$. Let us show that $P_s \pi=\pi$. Indeed, let $f\colon\mathbb{R}^2\to\mathbb{R}$ be an arbitrary continuous bounded function. Then \begin{align*} \int_{\mathbb{R}^2} f(x) \,P_s\pi(dx)&= \int_{\mathbb{R}^2} P_sf(x) \,\pi(dx) =\lim_{t\to\infty}\int_{\mathbb{R}^2} P_sf(x) \,P_t\mu(dx)\\ &=\lim_{t\to\infty}\int_{\mathbb{R}^2} f(x) \,P_{t+s}\mu(dx)=\int_{\mathbb{R}^2} f(x) \,\pi(dx), \end{align*} where the second identity follows from \eqref{convergence2} and the fact that $P_sf$ is a bounded continuous function (this is guaranteed by the strong Feller property of $P$, see \Cref{L:31}). Since $f$ was an arbitrary bounded continuous function, we see that $P_s \pi=\pi$ for any $s\ge0$. Thus, the measure $\pi$ is invariant for the SDE \eqref{eq11}--\eqref{eq22}.
Now let us show that the SDE \eqref{eq11}--\eqref{eq22} does not have any other invariant probability measures. Assume the contrary, and let $\nu$ be another invariant measure. By \Cref{L:31} the semigroup $(P_t)$ is strong Feller. Therefore, by \cite[Proposition 7.8]{DPG}, \begin{equation}\label{support} \supp(\pi)\cap\supp(\nu)=\emptyset. \end{equation} We claim now that the point $(0,\log2)$ belongs to the support of both of these measures. Indeed, fix arbitrary $\delta>0$. Take any $(x_0,z_0)\in\supp(\nu)$. Then, by \Cref{L:support}, there exist $T>0$, $\varepsilon>0$ such that $P_T((x_0,z_0), B_{\delta,(0,\log 2)})>\varepsilon$. By the strong Feller property of $P_T$, the function $(x,z)\mapsto P_T((x,z), B_{\delta,(0,\log 2)})$ is continuous. Therefore, there exists $\delta'>0$ such that \begin{equation*} P_T((x,z), B_{\delta,(0,\log 2)})>\varepsilon/2,\quad \text{for any $(x,z)\in B_{\delta',(x_0,z_0)}$}. \end{equation*} This, together with the invariance of $\nu$, implies that \begin{equation*} \nu(B_{\delta,(0,\log 2)})\ge\int_{B_{\delta',(x_0,z_0)}} P_T((x,z), B_{\delta,(0,\log 2)})\,\nu(dx\,dz)\ge \frac\varepsilon2\,\nu (B_{\delta',(x_0,z_0)})>0, \end{equation*} where the last inequality follows from the fact that $(x_0,z_0)\in\supp(\nu)$. Since $\delta$ was arbitrary, we see that $(0,\log 2)\in\supp(\nu)$. Similarly, $(0,\log 2)\in\supp(\pi)$, which contradicts \eqref{support}. Therefore, the SDE \eqref{eq11}--\eqref{eq22} has a unique invariant measure. \end{proof} Let $L$ be the generator of the semigroup $P$: \begin{equation*} L f := \frac12\kappa \d^2_{xx}f +\bigl(-\frac12 x - \frac{2x}{x^2+e^{2z}}\bigr)\d_xf+\bigl(-\frac12 + \frac{2}{x^2+e^{2z}}\bigr)\d_zf, \end{equation*} where $f\in\mathcal{C}^\infty(\mathbb{R}^2,\mathbb{R})$. As usual, the adjoint of $L$ will be denoted by $L^*$. \begin{lemma}\label{L:36} The measure $\pi:=\Law\bigl(\Re(\gamma_1),\log(\Im(\gamma_1))\bigr)$ has a smooth density $p$ with respect to the Lebesgue measure. Further, $p$ is the unique solution in the class of probability densities of the Fokker-Planck-Kolmogorov equation \begin{equation}\label{KFP} L^*p=0. \end{equation} Finally, $p(x,z)=0$ for $x\in\mathbb{R}$, $z\ge\log 2$, and $p(x,z)>0$ for $x\in\mathbb{R}$, $z<\log 2$. \end{lemma} \begin{proof} Since the measure $\pi$ is invariant for $P$, we have (in the weak sense) \begin{equation}\label{stars} L^*\pi=0. \end{equation} Arguing exactly as in the proof of \Cref{L:31aux}, we see that the operator $L^*$ satisfies H\"ormander's condition. Therefore, by H\"ormander's theorem \cite[Theorem~2.5.3]{OR73}, $L^*$ is hypoelliptic, and \eqref{stars} implies that the Schwartz distribution $\pi$ belongs to $\mathcal{C}^\infty(\mathbb{R}^2)$. Thus, the measure $\pi$ has a $\mathcal{C}^\infty$ density $p$ with respect to the Lebesgue measure and \eqref{KFP} holds. Now let us show that \eqref{KFP} does not have any other solutions. Assume the contrary and let $p'$ be another probability density which solves \eqref{KFP}. Let $\pi'$ be the measure with density $p'$. Consider a Lyapunov function $V$ (the suggestion to take this specific function is due to Stas Shaposhnikov) \begin{equation*} V(x,z):=x^2 + \log (1+z^2),\quad (x,z)\in\mathbb{R}^2. \end{equation*} Then \begin{align*} LV(x,z)&=\kappa-x^2 -\frac{4x^2}{x^2+e^{2z}}-\frac{z}{1+z^2}+ \frac{4z}{(x^2+e^{2z})(1+z^2)}\\ &\le \kappa +3-\Bigl(x^2 + \frac{4|z|\mathbbm 1(z\le0)}{(x^2+e^{2z})(1+z^2)}\Bigr).
\end{align*} By \cite[Theorem~2.3.2 and inequality (2.3.2)]{BKRS}, this implies (note that $V$ is obviously quasi-compact in the sense of \cite[Definition~2.3.1]{BKRS}) \begin{equation}\label{almostdone} \int_{\mathbb{R}^2}\bigl(x^2 + \frac{4|z|\mathbbm 1(z\le0)}{(x^2+e^{2z})(1+z^2)}\bigr)p'(x,z)\,dxdz<\infty. \end{equation} Denote by $b$ the drift of \eqref{eq11}--\eqref{eq22}: \begin{equation*} b(x,z):=\begin{pmatrix} b^{1}(x,z)\\b^{2}(x,z) \end{pmatrix}:=\begin{pmatrix} -\frac12 x -\frac{2x}{x^2+e^{2z}}\\[1.5ex] -\frac12 +\frac{2}{x^2+e^{2z}} \end{pmatrix},\quad x,z\in\mathbb{R}. \end{equation*} Then \begin{equation*} \frac{1+|b^1(x,z)x|+|b^2(x,z)z|}{1+x^2+z^2}\le 7+x^2 + \frac{4|z|\mathbbm 1(z\le0)}{(x^2+e^{2z})(1+z^2)}. \end{equation*} Combining this with \eqref{almostdone}, we see that for any $T>0$ \begin{equation*} \int_0^T\int_{\mathbb{R}^2} \frac{1+|b^1(x,z)x|+|b^2(x,z)z|}{1+x^2+z^2}p'(x,z)\,dxdzdt<\infty. \end{equation*} Therefore, by the generalized Ambrosio-Figalli-Trevisan superposition principle \cite[Theorem~1.1]{BRS}, there exists a weak solution to SDE \eqref{eq11}--\eqref{eq22} on the interval $[0,T]$ such that for any $t\in[0,T]$ we have $\Law(\widehat X_t, \widehat Z_t)=\pi'$. Thus, the measure $\pi'$ is also invariant for the semigroup $(P_t)$. However, this contradicts \Cref{L:UIM}. Therefore, \eqref{KFP} has a unique solution in the class of probability densities. Recall that in general not every solution to the Fokker-Planck-Kolmogorov equation corresponds to an invariant law, but \cite{BRS} gives conditions when it does; we refer to \cite[p.~719]{BRS} for further discussion. Finally, let us prove the results concerning the support of $p$. Note that if $\widehat Z_0(\omega)>\log 2$, then $\widehat Z_0(\omega)>\widehat Z_1(\omega)$ (indeed, by \eqref{eq22}, $\widehat Z$ is strictly decreasing whenever $\widehat X_t^2+e^{2\widehat Z_t}>4$). Let $f\colon\mathbb{R}\to[0,\infty)$ be an increasing function, strictly increasing on $[\log2,\infty)$, such that $f(x)=0$ for $x\le \log2$ and $f(x)>0$ for $x>\log 2$. Then $f(\widehat Z_0)-f(\widehat Z_1)\ge0$ (note that, by the same argument, $\widehat Z$ cannot cross the level $\log 2$ from below). On the other hand, by invariance, \begin{equation*} \hskip.15ex\mathsf{E}\hskip.10ex_\pi (f(\widehat Z_0)-f(\widehat Z_1))=0. \end{equation*} This implies that $\mathsf{P}_\pi$-a.s. we have $f(\widehat Z_0)=f(\widehat Z_1)$. By the definition of $f$ this implies that $\mathsf{P}_\pi(\widehat Z_0>\log2)=0$ and thus $\pi(\mathbb{R}\times (\log2,\infty))=0$. Since the density $p$ is continuous, we have \begin{equation}\label{topsup} p(x,z)=0,\quad x\in\mathbb{R}, z\ge\log2. \end{equation} \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{support.png} \caption{Support of the density $p$ (yellow and red regions). The process $\widehat Z_t$ is increasing when $(\widehat X_t,\widehat Z_t)$ is in the red region, and decreasing whenever $(\widehat X_t,\widehat Z_t)$ is in the yellow region. The dashed line, which touches the red region, is $z=\log 2$.} \label{fig:supp} \end{figure} Now let us show that $p(x,z)>0$ for any $z<\log2$. The idea of this part of the proof is due to Stas Shaposhnikov. Note that in the domain $$ D:=\{x^2+\exp(2z)<4\} $$ the PDE \eqref{KFP} becomes a parabolic equation in $(z,x)$, and on its complement \eqref{KFP} is a backward parabolic equation. This corresponds to the fact that the process $\widehat Z_t$ is increasing on $D$ and decreasing on $\mathbb{R}^2\setminus D$, see \cref{fig:supp}. Suppose, on the contrary, that for some $x_0\in\mathbb{R}$, $z_0<\log 2$ we have $p(x_0,z_0)=0$. We claim that this implies that $p\equiv0$.
Note that the set $\{z=z_0\}$ is the set of elliptic connectivity for the operator $L^*$ in the sense of \cite[Chapter III.1]{OR73} (see also \cite[Section 2]{Hill}). Therefore, the maximum principle for degenerate elliptic equations \cite[Theorem 3.1.2]{OR73} (see also \cite[Theorem~1]{Hill}, \cite[Theorem~4]{Aleks}) implies that $p(x,z_0)=0$ for any $x\in\mathbb{R}$. Fix now a small $\delta>0$ such that $\delta^2 + \exp(2 z_0)<4$ (this is possible since $z_0<\log2$). Consider now the domain $D':=[-\delta,\delta]\times (-\infty,z_0)\subset D$. In this domain \eqref{KFP} is a parabolic equation \begin{equation}\label{parpde} \d_z p -a(x,z)\d_{xx} p+b(x,z)\d_x p +c (x,z) p =0, \end{equation} for certain smooth functions $a,b,c$ and $$ 0<a(x,z)=\frac{\kappa}{\frac{4}{x^2+e^{2z}}-1}<\frac{\kappa}{\frac{4}{\delta^2+e^{2z_0}}-1}, \quad (x,z)\in D'. $$ Therefore, by the Harnack inequality for parabolic equations (see, e.g., \cite[Section~7.1, Theorem~10]{Evans}), we get that for arbitrary $z_1\le z_0$ there exists $C=C(z_1)>0$ such that \begin{equation*} \sup_{x\in(-\delta,\delta)} p(x, z_1) \le C \inf_{x\in(-\delta,\delta)} p(x, z_0)=0. \end{equation*} Using again the maximum principle for degenerate elliptic equations, we deduce from this that $p(x,z_1)=0$ for any $x\in\mathbb{R}$. Since $z_1\le z_0$ was arbitrary, we have that $p\equiv 0$ on $\mathbb{R}\times(-\infty,z_0]$. We use a similar argument to treat the case $z\ge z_0$. Consider now the domain $D'':=[3,4]\times (z_0,\infty)\subset \mathbb{R}^2\setminus D$. In this domain \eqref{KFP} is a \textit{backward} parabolic equation \eqref{parpde} and $$ -\frac{9\kappa}{5}<a(x,z)=\frac{\kappa}{\frac{4}{x^2+e^{2z}}-1}<-\kappa<0, \quad (x,z)\in D''. $$ The Harnack inequality for parabolic equations implies now that for arbitrary $z_1\ge z_0$ there exists $C=C(z_1)>0$ such that \begin{equation*} \sup_{x\in(3,4)} p(x, z_1) \le C \inf_{x\in(3,4)} p(x, z_0)=0, \end{equation*} and thus, as above, the maximum principle implies that $p\equiv 0$ on $\mathbb{R}\times[z_0, \infty)$. Therefore the function $p$ is identically $0$, which is not possible since $p$ is a density. This contradiction shows that $p(x,z)>0$ for any $x\in\mathbb{R}$, $z<\log 2$. Together with \eqref{topsup} this concludes the proof of the lemma. \end{proof} \begin{proof}[Proof of \Cref{T:main}] By \Cref{L:36}, the measure $\Law\bigl(\Re(\gamma_1),\log(\Im(\gamma_1))\bigr)$ has a smooth density $p$ with respect to the Lebesgue measure, which solves \eqref{KFP}. Therefore, the measure $\Law\bigl(\Re(\gamma_1),\Im(\gamma_1)\bigr)$ has a density $$ \psi(x,y):=\frac1y p(x,\log y),\quad x\in\mathbb{R}, y>0. $$ Now, by a change of variables, it is easy to see that $\psi$ is the unique solution of \eqref{mainPDE} in the class of probability densities. Since $p(x,z)$ is positive whenever $z<\log 2$, we see that $\psi(x,y)$ is positive whenever $y\in(0,2)$. Finally, it is immediate that the function $\bar\psi(x,y):=\psi(-x,y)$ also solves \eqref{mainPDE}. By uniqueness, this implies that $\psi(x,y)=\psi(-x,y)$. \end{proof} \subsection{Proof of \cref{T:other}} \label{se:density_analysis} To establish \cref{T:other}, it will be convenient to work in the coordinates $(A,U)$, where \begin{equation*} A:=\arg\gamma_1 = \cot^{-1}(\Re\gamma_1/\Im\gamma_1), \quad U:=(\Im\gamma_1)^2. \end{equation*} Denoting the density of $(A,U)$ by $\varphi$, we note that \[ \psi(x,y)=\frac{2 y^2}{x^2+y^2}\varphi(\cot^{-1}(x/y),y^2),\quad x\in\mathbb{R},\ y>0.
\] It follows from \Cref{T:main} that the density $\varphi$ is the unique solution to the corresponding Fokker-Planck-Kolmogorov equation, which in the new coordinates is given by \begin{align}\label{FPK} &\frac{\kappa}{2u}\sin^4\alpha\, \d^2_{\alpha\alpha}\varphi +\frac{3\kappa-4}{u}\sin^3\alpha\cos\alpha\, \d_\alpha\varphi\nonumber +(u-4\sin^2\alpha)\, \d_u\varphi \\&\qquad+\frac{\kappa-4}{u}(3\sin^2\alpha\cos^2\alpha-\sin^4\alpha)\, \varphi + \varphi=0,\quad (\alpha,u)\in(0,\pi)\times(0,4]. \end{align} Recall that we can consider this equation on a larger domain $(0,\pi)\times(0,\infty)$, but since $\psi(x,y)=0$ for $y\ge2$, we have $\varphi(\alpha,u)=0$ for $u\ge4$. Note that this PDE can be rewritten as \begin{equation}\label{eq:fp_ualpha} \partial_u((u-4\sin^2\alpha)\, \varphi)+\frac{\kappa-4}{u}\partial_\alpha(\sin^3\alpha\cos\alpha \,\varphi)+\frac{\kappa}{2u}\partial_\alpha(\sin^4\alpha\,\partial_\alpha\varphi)=0. \end{equation} The crucial statement on the way to prove \cref{T:other} is the following lemma. \begin{lemma}\label{L:db1} For any $\alpha\in(0,\pi)$ we have \begin{equation}\label{mainidentity} \int_0^4 \frac{1}{u}\varphi(\alpha,u)\,du = \frac{\Gamma(1+\frac4\kappa)}{4\sqrt\pi\Gamma(\frac12+\frac4\kappa)}(\sin\alpha)^{8/\kappa-2}. \end{equation} \end{lemma} Before we go into the technical details, let us outline heuristically the main idea of the proof. If we assume $\varphi(\alpha,0+)=\varphi(\alpha,4)=0$, then integrating \eqref{eq:fp_ualpha} in $u$ yields \begin{equation*} \partial_\alpha \Bigl(\int_0^4 (\frac{\kappa-4}{u}\sin^3\alpha\cos\alpha\,\varphi(\alpha,u) +\frac{\kappa}{2u}\sin^4\alpha\,\d_\alpha \varphi(\alpha,u))\,du \Bigr)=0. \end{equation*} Hence the expression $J(\alpha):=\int_0^4( \frac{\kappa-4}{u}\sin^3\alpha\cos\alpha\,\varphi +\frac{\kappa}{2u}\sin^4\alpha\,\d_\alpha \varphi)\,du$ does not depend on $\alpha$. Moreover, let us suppose that $\alpha^4 |\d_\alpha\varphi|$ and $\alpha^3\varphi$ monotonically go to $0$ as $\alpha\to0$ for any $u\in(0,4]$. Then $J(0+)=0$ and thus $J(\alpha)=0$ for any $\alpha\in(0,\pi)$. Therefore, \begin{equation*} 0=J(\alpha)\sin^{-8/\kappa-2}\alpha=\int_0^4 \frac{\kappa}{2u}\partial_\alpha((\sin\alpha)^{2-8/\kappa}\, \varphi(\alpha,u))\, du. \end{equation*} This yields that $\int_0^4 \frac{1}{u}(\sin\alpha)^{2-8/\kappa}\, \varphi(\alpha,u)\, du$ is constant in $\alpha$, which gives \begin{equation*} \int_0^4 \frac{1}{u}\, \varphi(\alpha,u)\, du = c(\sin\alpha)^{8/\kappa-2} \end{equation*} for some $c>0$, which is almost the statement of \Cref{L:db1}. However, since the boundary behaviour of $\varphi$ as $\alpha$ approaches $0$ is not clear, we develop an alternative approach which avoids these steps. Instead of integrating all the way to $0$, we will integrate only up to $\varepsilon > 0$ and obtain approximate identities. Then we would like to let $\varepsilon \searrow 0$. For this, we will need the following technical results about approximating ODEs. \begin{lemma}\label{le:ode_approximation} Let $S,T\in\mathbb{R}$, $S\le T$.
Suppose $x_k\colon [S,T] \to \mathbb R^d$, $k \in \mathbb{Z}_+$, are continuous functions that solve the integral equation \begin{equation}\label{inteqn} x_k(t)-x_k(s) = \int_{s}^{t} \big(F(r,x_k(r)) + g_k(r)\big) \,dr +h_k(s,t),\quad s,t\in[S,T], \end{equation} where \renewcommand\labelitemi{$\vcenter{\hbox{\tiny$\bullet$}}$} \begin{itemize} \item $F$ is a continuous function $[S,T]\times \mathbb{R}^d\to\mathbb{R}^d$ and there exists $C>0$ such that $|F(t,x)| \le C(1+|x|)$ for $t \in [S,T]$, $x\in \mathbb{R}^d$; \item $g$ and $g_k$, $k\in\mathbb{Z}_+$, are integrable functions $[S,T]\to\mathbb{R}^d$, $g_k\to g$ pointwise as $k\to\infty$, and $\sup_k \norm{g_k}_\infty < \infty$; \item $h_k$, $k\in\mathbb{Z}_+$, are functions $[S,T]^2\to\mathbb{R}^d$, and $\norm{h_k}_\infty \to 0$ as $k \to \infty$. \end{itemize} Moreover, suppose that there exist $t_k \in [S,T]$ such that $\sup_k |x_k(t_k)| < \infty$. Then there exists a continuous function $x\colon [S,T] \to \mathbb{R}^d$ such that along some subsequence $(k_j)_{j\in\mathbb{Z}_+}$ we have $x_{k_j} \to x$ uniformly as $j\to\infty$ and \begin{equation}\label{inteq} x(t)-x(s) = \int_{s}^{t} \big(F(r,x(r)) + g(r)\big) \,dr,\quad s,t\in[S,T]. \end{equation} \end{lemma} \begin{proof} First, we show that the $x_k$ are uniformly bounded. Indeed, by our assumptions we have for any $t\in[S,T]$ \[ \begin{split} \abs{x_k(t)} &\le \abs{x_k(t_k)} + \Bigl|\int_{t_k}^t \abs{F(r,x_k(r)) + g_k(r)} \,dr\Bigr| + \norm{h_k}_\infty \\ &\le C + C \Bigl|\int_{t_k}^t (1+\abs{x_k(r)})\,dr\Bigr| , \end{split} \] and an application of Gr\"onwall's inequality implies that the $x_k$ are uniformly bounded. Consequently, we can assume $F$ to be bounded. It follows that the family $(x_k)_{k\in\mathbb{Z}_+}$ is equicontinuous. Indeed, for $\varepsilon > 0$ let $k_\varepsilon$ be large enough that $\norm{h_k}_\infty < \varepsilon$ for $k \ge k_\varepsilon$. Then, for $k \ge k_\varepsilon$, we have \[ \begin{split} \abs{x_k(t)-x_k(s)} &\le \int_s^t \abs{F(r,x_k(r)) + g_k(r)} \, dr + \varepsilon \\ &\le C\abs{t-s} + \varepsilon \end{split} \] which is smaller than $2\varepsilon$ whenever $\abs{t-s} < \varepsilon/C$. For $k < k_\varepsilon$, by continuity of $x_k$ we can find $\delta_\varepsilon > 0$ such that $\abs{x_k(t)-x_k(s)} < \varepsilon$ whenever $\abs{t-s} < \delta_\varepsilon$. Hence, by the Arzelà-Ascoli theorem, we have $x_{k_j} \to x$ uniformly along some subsequence. Equation \eqref{inteq} follows now from \eqref{inteqn} by taking limits (using dominated convergence for the terms involving $g_k$). \end{proof} We will later also frequently use integration by parts arguments. In order to control the boundary terms that appear, the following lemma will be useful. \begin{lemma}\label{le:nice_subseq} Let $f\colon (0,1] \to \mathbb R$ be a differentiable function such that $\int_\varepsilon^1 f(s)\,ds$ neither diverges to $+\infty$ nor to $-\infty$ as $\varepsilon \searrow 0$. Let $h\colon {(0,1]} \to {(0,\infty)}$ be a differentiable function such that $\int_0^1 h(s)\,ds = +\infty$. Then there exists a sequence $t_k \searrow 0$ such that \[ \abs{f(t_k)} \le h(t_k) \quad\text{and}\quad f'(t_k) \ge h'(t_k) . \] \end{lemma} \begin{proof} First we note that there must exist a sequence $s_k \searrow 0$ such that $\abs{f(s_k)} < h(s_k)$ for all $k\in\mathbb{Z}_+$; otherwise we would have $\int_0^1 f(s)\,ds =+\infty$ or $\int_0^1 f(s)\,ds = -\infty$. To control $f'$, we distinguish two cases. \textbf{Case 1:} We have $\abs{f(t)} \le h(t)$ for all small $t$. In that case, consider $g(t) := h(t)-f(t)$.
The function $g$ cannot be increasing for all small $t$; otherwise we would have $\int_0^1 f(s)\,ds = +\infty$. Consequently there must be a sequence $t_k \searrow 0$ such that $g'(t_k) \le 0$, i.e.\ $f'(t_k)\ge h'(t_k)$; together with the Case~1 assumption $\abs{f(t_k)}\le h(t_k)$, this proves the claim. \textbf{Case 2:} We have $\abs{f(r_k)} > h(r_k)$ along a sequence $r_k \searrow 0$. We can pick the sequence such that either $f(r_k) > 0$ for all $k$ or $f(r_k) < 0$ for all $k$. In the former case we let $t_k = \sup\{ t < r_k \mid f(t) \le h(t) \}$ (this set is non-empty due to the existence of a sequence $s_k$ with $\abs{f(s_k)} < h(s_k)$). In the latter case we let $t_k = \inf\{ t > r_k \mid f(t) \ge -h(t) \}$. Then $f'(t_k) \ge h'(t_k)$ as desired. \end{proof} \begin{corollary}\label{Cor:310} Consider the same setup as \cref{le:nice_subseq}, and suppose additionally that $f \ge 0$ and $h$ is decreasing. Then there exists a sequence $t_k \searrow 0$ such that \[ f(t_k)\le h(t_k) \quad\text{and}\quad \abs{f'(t_k)} \le \abs{h'(t_k)} . \] \end{corollary} \begin{proof} Let $t_k$ be a sequence as in \cref{le:nice_subseq}, and define $\widetilde t_k = \sup\{ t \le t_k \mid f'(t) \le \abs{h'(t)} \}$ (this set is non-empty, otherwise we would have $f(t) \to -\infty$ as $t \searrow 0$). Then we have $0 \le f(\widetilde t_k) \le f(t_k) \le h(t_k) \le h(\widetilde t_k)$ as desired and $\abs{f'(\widetilde t_k)}\le \abs{h'(\widetilde t_k)}$. \end{proof} We now proceed to the main part of our proof. In the following, we denote for $n\in\mathbb{Z}_+$, $\alpha\in(0,\pi)$, and $\varepsilon>0$ \begin{equation*} I_n(\alpha):= \int_0^4 u^{-n} \varphi(\alpha,u)\, du;\qquad I_n^{\varepsilon}(\alpha):= \int_\varepsilon^4 u^{-n} \varphi(\alpha,u)\, du. \end{equation*} From the equation \eqref{FPK}, we will deduce a recursive system of ODEs that are satisfied by the functions $I_n$. In fact, the relation is satisfied for general $n \in \mathbb R$, but we will use it only for $n\in\mathbb{Z}_+$. \begin{lemma}\label{L:mainlemma} Let $n\in\mathbb{Z}_+$ be fixed. Suppose that either $n=0$ or $I_n$ is continuous (and finite) on $(0,\pi)$. Assume that for any $\delta>0$ there exists a sequence $(\varepsilon_k)_{k\in\mathbb{Z}_+}$ converging to $0$ such that \begin{equation}\label{formeraQ} \varepsilon_k^{-n}\int_\delta^{\pi-\delta} \varphi(\alpha,\varepsilon_k)\,d\alpha \to 0. \end{equation} Then either $I_{n+1} = \infty$ everywhere or $I_{n+1}$ is twice differentiable and satisfies the following ODE \begin{multline}\label{eq:Iq_ode} 0 = nI_n - 4n\sin^2\alpha \,I_{n+1} +(\kappa-4)(3\sin^2\alpha\cos^2\alpha-\sin^4\alpha) \,I_{n+1} \\ +(3\kappa-4)\sin^3\alpha\cos\alpha \,I_{n+1}' +\frac{\kappa}{2}\sin^4\alpha \,I_{n+1}'' . \end{multline} \end{lemma} \begin{proof} Fix $n\in\mathbb{Z}_+$. Let $\varepsilon>0$.
Multiplying \eqref{FPK} by $u^{-n}$ and integrating in $u \in [\varepsilon,4]$ yields \begin{align} 0 ={}& \int_\varepsilon^4 u^{-n} \partial_u((u-4\sin^2\alpha)\, \varphi) \, du\nonumber\\ &+(\kappa-4)(3\sin^2\alpha\cos^2\alpha-\sin^4\alpha) \int_\varepsilon^4 u^{-n-1}\, \varphi \, du \nonumber\\ &+(3\kappa-4)\sin^3\alpha\cos\alpha \int_\varepsilon^4 u^{-n-1}\, \d_\alpha\varphi \, du +\frac{\kappa}{2}\sin^4\alpha \int_\varepsilon^4 u^{-n-1}\, \d^2_{\alpha\alpha}\varphi \, du \nonumber\\ ={}& -\varepsilon^{-n} (\varepsilon-4\sin^2\alpha)\, \varphi(\alpha,\varepsilon) +nI_n^{\varepsilon} - 4n\sin^2\alpha \,I_{n+1}^{\varepsilon}\nonumber\\ &+(\kappa-4)(3\sin^2\alpha\cos^2\alpha-\sin^4\alpha) \,I_{n+1}^{\varepsilon} \nonumber\\ &+(3\kappa-4)\sin^3\alpha\cos\alpha \,(I_{n+1}^{\varepsilon})' +\frac{\kappa}{2}\sin^4\alpha \,(I_{n+1}^{\varepsilon})'' .\label{prelimpde} \end{align} We would like to apply \cref{le:ode_approximation} to pass to the limit as $\varepsilon\to0$ in the above ODE. Suppose now that $I_{n+1}$ is not infinite everywhere, i.e. $I_{n+1}(\alpha_0) < \infty$ for some $\alpha_0 \in (0,\pi)$. Fix $\delta>0$ small enough that $\alpha_0 \in (\delta,\pi-\delta)$ and set $S\mathrel{\mathop:}=\delta$, $T\mathrel{\mathop:}=\pi-\delta$, \begin{align*} x_k &\mathrel{\mathop:}= \begin{pmatrix} I_{n+1}^{\varepsilon_k} \\ \bigl(I_{n+1}^{\varepsilon_k}\bigr)' \end{pmatrix},\\ F(\alpha,x_1,x_2) &\mathrel{\mathop:}= \begin{pmatrix} x_2\\[1ex] \frac{2}{\kappa\sin^2 \alpha}\left( 4n x_1-x_1(\kappa-4)(3\cos^2\alpha-\sin^2\alpha) -(3\kappa-4)x_2\sin\alpha\cos\alpha \right) \end{pmatrix},\\ g_k(\alpha) &\mathrel{\mathop:}= \begin{pmatrix} 0 \\ -\frac{2}{\kappa\sin^4 \alpha} nI_n^{\varepsilon_k}(\alpha) \end{pmatrix},\\ h_k(\alpha_1,\alpha_2) &\mathrel{\mathop:}= \begin{pmatrix} 0 \\ \int_{\alpha_1}^{\alpha_2}\frac{2}{\kappa\sin^4 \alpha} \varepsilon_k^{-n} (\varepsilon_k-4\sin^2\alpha)\, \varphi(\alpha,\varepsilon_k)\,d\alpha \end{pmatrix} \end{align*} where $\varepsilon_k$ are the same as in the condition \eqref{formeraQ}. It is obvious that on $[\delta, \pi-\delta]$ the function $F$ is globally Lipschitz and has linear growth in $x_1, x_2$. Moreover, $g_k(\alpha) \to g(\alpha) = \begin{pmatrix} 0 & -\frac{2nI_n(\alpha)}{\kappa\sin^4 \alpha} \end{pmatrix}^T$ monotonically by the assumptions of the lemma; in particular, $\sup_k \norm{g_k}_\infty<\infty$ on $[\delta,\pi-\delta]$. Finally, thanks to \eqref{formeraQ}, we have $\norm{h_k}_\infty \to 0$ on $[\delta, \pi-\delta]$. It remains to find a sequence $\alpha_k$ such that $I_{n+1}^{\varepsilon_k}(\alpha_k)$ and $(I_{n+1}^{\varepsilon_k})'(\alpha_k)$ are bounded. First assume that there exist $\alpha',\alpha''\in[\delta,\pi-\delta]$ such that $\alpha' < \alpha_0 < \alpha''$ and $I_{n+1}(\alpha_0)< I_{n+1}(\alpha')$, $I_{n+1}(\alpha_0)< I_{n+1}(\alpha'')$. Then for all large enough $k$ we have $I_{n+1}^{\varepsilon_k}(\alpha_0)< I_{n+1}^{\varepsilon_k}(\alpha')$, $I_{n+1}^{\varepsilon_k}(\alpha_0)< I_{n+1}^{\varepsilon_k}(\alpha'')$. Put $\alpha_k:= \operatorname{argmin}_{[\alpha',\alpha'']} I_{n+1}^{\varepsilon_k}$. By the above, $\alpha_k\in(\alpha',\alpha'')$ and hence $(I_{n+1}^{\varepsilon_k})'(\alpha_k) = 0$. Moreover, $I_{n+1}^{\varepsilon_k}(\alpha_k) \le I_{n+1}^{\varepsilon_k}(\alpha_0)\le I_{n+1}(\alpha_0)$. Thus the sequence $\bigl(I_{n+1}^{\varepsilon_k}(\alpha_k),(I_{n+1}^{\varepsilon_k})'(\alpha_k)\bigr)_{k\in\mathbb{Z}_+}$ is bounded.
If $I_{n+1}(\alpha_0)\ge I_{n+1}(\alpha)$ for all $\alpha\in[\delta,\alpha_0]$, then \begin{equation}\label{case1proof} \sup_{\alpha\in[\delta,\alpha_0]}I_{n+1}^{\varepsilon_k}(\alpha)\le \sup_{\alpha\in[\delta,\alpha_0]}I_{n+1}(\alpha)\le I_{n+1}(\alpha_0). \end{equation} Hence, by the mean value theorem, for each $k$ there exists $\alpha_k\in[\delta,\alpha_0]$ such that $\abs{(I_{n+1}^{\varepsilon_k})'(\alpha_k)}\le I_{n+1}(\alpha_0)/(\alpha_0-\delta)$. Combining this with \eqref{case1proof} we see again that the sequence $\bigl(I_{n+1}^{\varepsilon_k}(\alpha_k),(I_{n+1}^{\varepsilon_k})'(\alpha_k)\bigr)_{k\in\mathbb{Z}_+}$ is bounded. The case when $I_{n+1}(\alpha_0)\ge I_{n+1}(\alpha)$ for all $\alpha\in[\alpha_0,\pi-\delta]$ is treated in a similar way. Thus we see that all the conditions of \cref{le:ode_approximation} are satisfied. By passing to the limit as $\varepsilon_k\to0$ in \eqref{prelimpde} and using continuity of $g$, we get \eqref{eq:Iq_ode}. \end{proof} As we mentioned before, we plan to apply \cref{L:mainlemma} recursively starting with $n=0$. To verify condition \eqref{formeraQ} we will use the following result. \begin{lemma}\label{L:310} For any $n\ge0$ we have \begin{equation}\label{goodbound} \int_0^4\int_0^\pi u^{-n-1}\sin^2\alpha \,\varphi(\alpha,u)\,d\alpha du= \frac14\int_0^4\int_0^\pi u^{-n} \varphi(\alpha,u)\,d\alpha du. \end{equation} If both sides of this identity are finite, then for any $\delta>0$ there exists a sequence $(\varepsilon_k)_{k\in\mathbb{Z}_+}\searrow 0$ such that \begin{equation}\label{eq:ass_I-1} \varepsilon_k^{-n}\int_\delta^{\pi-\delta} \varphi(\alpha,\varepsilon_k)\,d\alpha \to 0\quad\text{as $k\to\infty$}. \end{equation} \end{lemma} \begin{proof} Fix arbitrary $\delta>0$. Integrating \eqref{eq:fp_ualpha} in $\alpha$ from $\delta$ to $\pi/2$ yields for any $u\in(0,4]$ \begin{equation}\label{diffeq} \partial_u \int_\delta^{\pi/2} (u-4\sin^2\alpha)\varphi(\alpha,u)\, d\alpha= \frac{(\kappa-4)}{u}\varphi(\delta,u)\sin^3\delta\cos\delta +\frac{\kappa}{2u}\sin^4\delta\,\,\partial_\alpha\varphi(\alpha,u)\bigg\rvert_{\alpha=\delta} \end{equation} (the boundary terms at $\alpha=\pi/2$ vanish by the symmetry of $\varphi$). By \cref{T:main}, $\varphi(\alpha,4)=0$ for any $\alpha\in(0,\pi/2)$. Fix now an arbitrary $u_0\in(0,4]$ and denote \begin{equation*} J(\alpha):=\int_{u_0}^4 u^{-1}\varphi(\alpha,u)du,\quad \alpha\in(0,\pi). \end{equation*} Integrating \eqref{diffeq} in $u$ from $u_0$ to $4$, we get \begin{equation}\label{prelim} \Bigl|\int_\delta^{\pi/2} (u_0-4\sin^2\alpha)\varphi(\alpha,u_0)\, d\alpha\Bigr|\le C J(\delta)\delta^3 +C\delta^4 |J'(\delta)|. \end{equation} Let us pass to the limit in \eqref{prelim} as $\delta\to0$. Note that $A:=\int_0^1 J(\alpha)\,d\alpha$ is obviously finite. Hence, we can apply \cref{Cor:310} with $f=J$, $h(t)=1/t$. Then there exists a sequence $(\delta_k)_{k\in\mathbb{Z}_+}$ such that \begin{equation*} \delta_k\downarrow0,\,\,\, \delta_k J(\delta_k)\le 1,\,\,\, \delta_k^2 |J'(\delta_k)|\le 1 \end{equation*} for all $k\in\mathbb{Z}_+$. Applying now \eqref{prelim} with $\delta=\delta_k$ and passing to the limit as $k\to\infty$, we get \begin{equation*} \int_0^{\pi/2} (u_0-4\sin^2\alpha)\varphi(\alpha,u_0)\, d\alpha=0, \end{equation*} which, by the symmetry of $\varphi$, implies \begin{equation*} \int_0^{\pi} (u_0-4\sin^2\alpha)\varphi(\alpha,u_0)\, d\alpha=0 \end{equation*} for any $u_0\in(0,4]$. Dividing now this identity by $u_0^n$ and integrating in $u_0$, we get \eqref{goodbound}.
To show \eqref{eq:ass_I-1}, fix $\delta>0$ and note that, thanks to \eqref{goodbound}, \begin{equation*} \int_0^4 \int_\delta^{\pi-\delta} u^{-n-1}\varphi(\alpha,u)\,d\alpha \,du\le \frac{1}{\sin^2\delta}\int_0^4\int_\delta^{\pi-\delta} u^{-n-1}\sin^2\alpha\,\varphi(\alpha,u)\,d\alpha \,du<\infty. \end{equation*} Therefore there must exist a sequence $\varepsilon_k\searrow 0$ satisfying \eqref{eq:ass_I-1}, because otherwise the left-hand side of the above inequality would be infinite. \end{proof} \begin{remark} \cref{L:310} can be deduced from a general PDE argument \cite[Theorem~2.3.2 and inequality (2.3.2)]{BKRS}. Indeed, note that PDE \eqref{FPK} can be written as $$ \mathcal L^*\varphi=0, $$ where $\mathcal L=\frac\kappa{2u}\sin^4\alpha\,\d^2_{\alpha\alpha}+\frac{4+\kappa}{u}\cos\alpha\sin^3\alpha\,\d_\alpha+(4\sin^2\alpha-u)\,\d_u$, $u>0$, $\alpha\in(0,\pi)$. If $n=1$, take a Lyapunov function $V(\alpha,u):=-\log u$; otherwise set $V(\alpha,u):=\frac{1}{n-1}u^{-n+1}$. Then \begin{equation*} \mathcal{L}V(\alpha,u)=(u-4\sin^2\alpha)u^{-n}. \end{equation*} Note, however, that even though $V$ does not satisfy all the conditions of \cite[Theorem~2.3.2]{BKRS}, a standard mollification argument and \cite[inequality (2.3.2)]{BKRS} yield \eqref{goodbound}. However, writing up rigorously all the technical details gets a bit complicated, so we found it simpler to give a direct proof. \end{remark} We are now able to prove \eqref{mainidentity} rigorously. \begin{proof}[Proof of \cref{L:db1}] Let us apply \cref{L:mainlemma} with $n=0$. We see that the right-hand side of \eqref{goodbound} is finite for $n=0$. Hence \cref{L:310} implies that \eqref{eq:ass_I-1} holds for $n=0$. Therefore, condition \eqref{formeraQ} is satisfied for $n=0$. Note now that if $I_{1}(\alpha)=\infty$ for all $\alpha\in(0,\pi)$, then the left-hand side of \eqref{goodbound} with $n=0$ is infinite. However this is not the case. Thus, by \cref{L:mainlemma}, the function $I_1$ is twice differentiable and solves \begin{equation*} (\kappa-4)(3\sin^2\alpha\cos^2\alpha-\sin^4\alpha) \,I_1+ (3\kappa-4)\sin^3\alpha\cos\alpha \,I_1' +\frac{\kappa}{2}\sin^4\alpha \,I_1''=0. \end{equation*} This can be rewritten as \begin{equation}\label{denssetp1} \partial_\alpha\Bigl((\kappa-4)\sin^3\alpha\cos\alpha\, I_1+\frac{\kappa}{2}\sin^4\alpha\, I_1'\Bigr)=0. \end{equation} Let $\alpha \in (0,\pi)$. Then integrating \eqref{denssetp1} in $\alpha' \in [\alpha,\pi-\alpha]$, we get \begin{equation}\label{denssetp2} (\kappa-4)\sin^3\alpha\cos\alpha (I_1(\alpha)+I_1(\pi-\alpha))+\frac{\kappa}{2}\sin^4\alpha (I_1'(\alpha)-I_1'(\pi-\alpha))=0. \end{equation} Recall that by \cref{T:main} we have that the density $\psi$ is symmetric, $\psi(x,y)=\psi(-x,y)$ for $x\in\mathbb{R}$, $y>0$. This implies that $\varphi$ is also symmetric and $\varphi(\alpha,u)=\varphi(\pi-\alpha,u)$, $\d_\alpha\varphi(\alpha,u)=-\d_\alpha\varphi(\pi-\alpha,u)$ for $\alpha\in(0,\pi)$, $u>0$. Hence $I_1(\alpha)=I_1(\pi-\alpha)$, $I_1'(\alpha)=-I_1'(\pi-\alpha)$, and \eqref{denssetp2} yields \begin{equation}\label{denssetp3} (\kappa-4)\sin^3\alpha\cos\alpha\, I_1(\alpha)+\frac{\kappa}{2}\sin^4\alpha\, I_1'(\alpha)=0. \end{equation} Therefore, $$ \d_\alpha(\sin^{2-8/\kappa}\alpha\, I_1(\alpha))=0, $$ and we finally get \begin{equation*} I_1(\alpha)=c\sin^{8/\kappa-2}\alpha,\quad \alpha\in(0,\pi), \end{equation*} for some $c>0$.
The precise value of $c$ follows from \eqref{goodbound}: \begin{equation*} \frac14=\int_0^4\int_0^\pi \frac1u\sin^2\alpha\, \varphi(\alpha,u)\,d\alpha\, du=c\int_0^\pi \sin^{8/\kappa}\alpha \,d\alpha = c \sqrt{\pi}\frac{\Gamma(\frac12+\frac4\kappa)}{\Gamma(1+\frac4\kappa)}, \end{equation*} which gives \eqref{mainidentity}. \end{proof} \begin{proof}[Proof of \cref{T:other}(i)] Note that $\Law (\gamma(t))=\Law(t^{1/2}\gamma(1))$. Therefore \begin{equation}\label{peng1} \hskip.15ex\mathsf{E}\hskip.10ex\int_0^\infty \mathbbm 1_{\gamma(t) \in \Lambda} \, dt = \int_0^\infty \mathsf{P}(\gamma(t) \in \Lambda) \,dt = \int_0^\infty \mathsf{P}(\gamma(1) \in t^{-1/2}\Lambda) \,dt . \end{equation} Fix $0<a<b$, $0<\alpha<\beta<\pi$. First consider sets $\Lambda$ of the form \begin{equation}\label{formu} \Lambda = \{x+iy \mid \cot^{-1}(x/y) \in [\alpha,\beta],\ y^2 \in [a,b] \}. \end{equation} Then, writing $\gamma(1) = \sqrt U(\cot A+i)$, we continue \eqref{peng1} in the following way \begin{align*} \hskip.15ex\mathsf{E}\hskip.10ex\int_0^\infty \mathbbm 1_{\gamma(t) \in \Lambda} \, dt&= \int_0^\infty \mathsf{P}(A\in[\alpha,\beta], U \in [a/t,b/t]) \,dt\\ &= \int_0^\infty\int_\alpha^\beta\int_{a/t}^{b/t} \varphi(\alpha',u)\,d u d\alpha' dt\\ &=\int_0^\infty\int_\alpha^\beta \frac{b-a}{u} \varphi(\alpha',u)\,d\alpha' du\\ &=(b-a)\frac{\Gamma(1+\frac4\kappa)}{4\sqrt\pi\Gamma(\frac12+\frac4\kappa)}\int_\alpha^\beta (\sin\alpha')^{8/\kappa-2} \,d\alpha', \end{align*} where the last identity follows from \cref{L:db1}. Since \begin{align*} \int_\Lambda \bigl( 1+\frac{x^2}{y^2} \bigr)^{-4/\kappa} \,dx\,dy &= \frac12\int_\alpha^\beta\int_a^b (\sin \alpha')^{8/\kappa-2}\,du d\alpha'\\ &= \frac{b-a}{2} \int_\alpha^\beta (\sin \alpha')^{8/\kappa-2}\,d\alpha', \end{align*} we see that \begin{equation*} \hskip.15ex\mathsf{E}\hskip.10ex\int_0^\infty \mathbbm 1_{\gamma(t) \in \Lambda} \, dt=\frac{\Gamma(1+\frac4\kappa)}{2\sqrt\pi\Gamma(\frac12+\frac4\kappa)} \int_\Lambda \bigl( 1+\frac{x^2}{y^2} \bigr)^{-4/\kappa} \,dx\,dy. \end{equation*} Clearly, sets $\Lambda$ of the form \eqref{formu} constitute a $\pi$-system generating the Borel $\sigma$-algebra on $\H$. This implies \eqref{eq:zhan_law}. \end{proof} To prove \cref{T:other}(ii), we need the following key result. \begin{lemma}\label{le:I-2_asymptotics} Let $n\in\mathbb{Z}_+$, $n\ge1$. Then $\displaystyle\int_0^\pi I_n(\alpha)\,d\alpha$ is finite for $\kappa<8/(2n-1)$ and infinite for $\kappa\ge8/(2n-1)$. Furthermore, let $\kappa<\frac8{2n-3}$. Then the function $I_n\colon(0,\pi)\to\mathbb{R}_+$ is continuous and for any $\delta>0$ there exists $\alpha_0=\alpha_0(n,\delta)\in(0,\pi/2)$ such that for $\alpha\in(0,\alpha_0)$ \begin{equation}\label{eqlim} I_n(\alpha)\ge \alpha^{8/\kappa-2n}\abs{\log\alpha}^{-\delta}. \end{equation} If, additionally, $\kappa<\frac8{2n-3}\wedge\frac{16}3$, then for any $\delta>0$ there exists $\alpha_0=\alpha_0(n,\delta)\in(0,\pi/2)$ such that for $\alpha\in(0,\alpha_0)$ \begin{equation}\label{eqlim2} I_n(\alpha)\le \alpha^{8/\kappa-2n-\delta}. \end{equation} \end{lemma} \begin{proof} We will prove this lemma by induction over $n$, with the case $n=1$ already established in \cref{L:db1}. Let us first explain the heuristic idea. Consider for simplicity the first non-trivial case $n=2$.
Then approximating \eqref{eq:Iq_ode} near $\alpha \approx 0$ and knowing that $I_{1} = c_0(\sin\alpha)^{8/\kappa-2}$, the equation reads \[ 0 \approx c_0\alpha^{8/\kappa-2} - 4\alpha^2 \,I_{2} +(\kappa-4)3\alpha^2 \,I_{2} +(3\kappa-4)\alpha^3 \,I_{2}' +\frac{\kappa}{2}\alpha^4 \,I_{2}'' . \] If we naively suppose $I_{2} \approx \alpha^s$, $I_{2}' \approx s\alpha^{s-1}$, $I_{2}'' \approx s(s-1)\alpha^{s-2}$, then we find that either $s = 8/\kappa-4$, cancelling the first term $I_{1}$, or $s < 8/\kappa-4$, in which case the remaining terms need to cancel each other. In the latter case, the coefficients need to sum to $0$, i.e. \begin{equation}\label{eqfors} 0 = (3\kappa-16)+(3\kappa-4)s+\frac{\kappa}{2}s(s-1). \end{equation} Recall also that, by \cref{L:310} with $n=1$, we have $\int_0^\pi I_2 \sin^2\alpha\,d\alpha<\infty$, which implies $s>-3$. However, on the interval $(-3, 8/\kappa-4)$ equation \eqref{eqfors} has no solutions, and thus the case $s < 8/\kappa-4$ is not possible. Hence, the only remaining option is $s = 8/\kappa-4$. To make this heuristic precise, we find a suitable subsequence $\alpha_k \searrow 0$ along which we can apply a similar argument. Let us now proceed to the rigorous induction on $n$. \noindent \textbf{Base case}. $n=1$. In this case \eqref{eqlim}, \eqref{eqlim2} and continuity of $I_1$ were already proven in \eqref{mainidentity}. The fact that $\int_0^\pi I_1(\alpha)\,d\alpha$ is finite if and only if $\kappa<8$ is immediate. \noindent \textbf{Inductive step}. Suppose that the statement of the lemma is valid for $n\in\mathbb{Z}_+$. Let us prove it for $n+1$. If $\kappa\ge 8/(2n-1)$, then $\int_0^\pi I_n(\alpha)\,d\alpha=\infty$, and this obviously implies that $\int_0^\pi I_{n+1}(\alpha)\,d\alpha=\infty$. Therefore it is sufficient to consider the case $\kappa<8/(2n-1)$. By the induction hypothesis, for these values of $\kappa$ we have $\int_0^\pi I_n(\alpha)\,d\alpha<\infty$. Hence, \cref{L:310} implies that condition \eqref{eq:ass_I-1} holds. This, together with continuity of $I_n$, shows that all the conditions of \cref{L:mainlemma} are met. Note that we cannot have $I_{n+1}=\infty$ for all $\alpha\in(0,\pi)$. Indeed, in this case the left-hand side of identity \eqref{goodbound} would be infinite but the right-hand side of this identity is finite (because it is equal to $C\int_0^\pi I_n(\alpha)\,d\alpha$). Thus, \cref{L:mainlemma} implies that \begin{equation}\label{td} \text{$I_{n+1}$ is twice differentiable and satisfies \eqref{eq:Iq_ode}}. \end{equation} Using this, we now show \eqref{eqlim} and \eqref{eqlim2}. The statement about the finiteness of $\int I_{n+1}\,d\alpha$ follows immediately. \textbf{Lower bound.} We begin with the lower bound \eqref{eqlim}. Denote \begin{equation}\label{defs} s:=\frac{8}{\kappa}-2n-2. \end{equation} Fix $\delta\in(0,1)$ and suppose that the lower bound does not hold, i.e. we have $I_{n+1}(\widetilde\alpha_k) < \widetilde\alpha_k^s\abs{\log\widetilde\alpha_k}^{-\delta}$ for a sequence $\widetilde\alpha_k \searrow 0$. We distinguish two cases. \textbf{Case 1.1.} $I_{n+1}(\alpha) \le \alpha^s\abs{\log\alpha}^{-\delta}$ for all small $\alpha>0$. We apply \cref{le:nice_subseq} with $f(\alpha):=\partial_\alpha(\alpha^{-s}I_{n+1}(\alpha))$, $h(\alpha) = \alpha^{-1}\abs{\log\alpha}^{-\delta}$.
\textbf{Lower bound.} We begin with the lower bound \eqref{eqlim}. Denote \begin{equation}\label{defs} s:=\frac{8}{\kappa}-2n-2. \end{equation} Fix $\delta\in(0,1)$ and suppose that the lower bound does not hold, i.e. we have $I_{n+1}(\widetilde\alpha_k) < \widetilde\alpha_k^s\abs{\log\widetilde\alpha_k}^{-\delta}$ for a sequence $\widetilde\alpha_k \searrow 0$. We distinguish two cases. \textbf{Case 1.1.} Suppose that $I_{n+1}(\alpha) \le \alpha^s\abs{\log\alpha}^{-\delta}$ for all small $\alpha>0$. We apply \cref{le:nice_subseq} with $f(\alpha): = \partial_\alpha(\alpha^{-s}I_{n+1}(\alpha))$, $h(\alpha) = \alpha^{-1}\abs{\log\alpha}^{-\delta}$. It is easy to see that all the conditions of the lemma are satisfied, and therefore there exists a sequence $\alpha_k \searrow 0$ such that (for some $C<\infty$) \begin{align*} \abs{I_{n+1}(\alpha_k)} &\le \alpha_k^s\abs{\log\alpha_k}^{-\delta},\\ \abs{I_{n+1}'(\alpha_k)} &\le C\alpha_k^{s-1}\abs{\log\alpha_k}^{-\delta},\\ I_{n+1}''(\alpha_k) &\ge -C\alpha_k^{s-2}\abs{\log\alpha_k}^{-\delta}. \end{align*} Plugging this into \eqref{eq:Iq_ode} we derive \begin{align} 0 &=nI_n(\alpha_k) +\sin^2\alpha_k(3\kappa-4n-12+(16-4\kappa)\sin^2\alpha_k) \,I_{n+1}(\alpha_k)\nonumber \\ &\phantom{=}+(3\kappa-4)\sin^3\alpha_k\cos\alpha_k\,I_{n+1}'(\alpha_k) +\frac{\kappa}{2}\sin^4\alpha_k \,I_{n+1}''(\alpha_k) \nonumber\\ &\ge nI_n (\alpha_k)+\sin^2\alpha_k[(3\kappa-4n-12+(16-4\kappa)\sin^2\alpha_k)\wedge0] \,\alpha_k^s\abs{\log\alpha_k}^{-\delta} \nonumber\\ &\phantom{=}-C \sin^3\alpha_k\cos\alpha_k\, \alpha_k^{s-1}\abs{\log\alpha_k}^{-\delta} -C \sin^4\alpha_k\, \alpha_k^{s-2}\abs{\log\alpha_k}^{-\delta}\label{lowerbound}. \end{align} Multiplying \eqref{lowerbound} by $\alpha_k^{-s-2}\abs{\log\alpha_k}^\delta$ and passing to the limit as $k\to\infty$, we get \begin{equation}\label{lowerbound2} -C+n\liminf_{\alpha\to0} \frac{I_n(\alpha)\abs{\log\alpha}^\delta}{\alpha^{s+2}}\le0. \end{equation} By the induction hypothesis (applied with $\delta/2$ in place of $\delta$), we have $I_n(\alpha)\alpha^{-s-2}\abs{\log\alpha}^\delta\ge\abs{\log\alpha}^{\delta/2} \to \infty$ as $\alpha\to0$. This contradicts \eqref{lowerbound2}. Therefore it cannot be that $I_{n+1}(\alpha) \le \alpha^s\abs{\log\alpha}^{-\delta}$ for all small $\alpha$. \textbf{Case 1.2.} In the other case one can find two sequences $\widetilde\alpha_k, \dbtilde\alpha_k \searrow 0$ such that $\dbtilde\alpha_{k+1}\le \widetilde\alpha_k \le \dbtilde\alpha_{k}$ and $I_{n+1}(\widetilde\alpha_k) < \widetilde\alpha_k^s\abs{\log\widetilde\alpha_k}^{-\delta}$, $I_{n+1}(\dbtilde\alpha_k) > \dbtilde\alpha_k^s\abs{\log\dbtilde\alpha_k}^{-\delta}$. Let $\alpha_k := \argmin_{[\dbtilde\alpha_{k+1},\dbtilde\alpha_{k}]}\bigl(I_{n+1}(\alpha)-\alpha^s\abs{\log\alpha}^{-\delta}\bigr)$. Then $\alpha_k\in(\dbtilde\alpha_{k+1},\dbtilde\alpha_{k})$ and \begin{align*} \abs{I_{n+1}(\alpha_k)} &< \alpha_k^s\abs{\log\alpha_k}^{-\delta},\\ I_{n+1}'(\alpha_k) &= s\alpha_k^{s-1}\abs{\log\alpha_k}^{-\delta}+o(\alpha_k^{s-1}\abs{\log\alpha_k}^{-\delta}),\\ I_{n+1}''(\alpha_k) &\ge s(s-1)\alpha_k^{s-2}\abs{\log\alpha_k}^{-\delta}+o(\alpha_k^{s-2}\abs{\log\alpha_k}^{-\delta}). \end{align*} This implies that \eqref{lowerbound} holds for this sequence $(\alpha_k)$, which again leads to a contradiction. Thus, we have shown that $I_{n+1}(\alpha)\ge\alpha^{s}\abs{\log\alpha}^{-\delta}$ for all small enough $\alpha$. Recalling the definition of $s$ in \eqref{defs}, we see that this is exactly the desired lower bound in \eqref{eqlim}. This bound implies that for $\kappa\ge 8/(2(n+1)-1)=8/(2n+1)$ we have $\int_0^\pi I_{n+1}(\alpha)\,d\alpha=\infty$.
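In particular, for $n=1$ the bound just proven reads $I_2(\alpha)\ge\alpha^{8/\kappa-4}\abs{\log\alpha}^{-\delta}$ for small $\alpha$. Since $\int_0^\pi I_2(\alpha)\,d\alpha=\hskip.15ex\mathsf{E}\hskip.10ex(\Im\gamma_1)^{-4}$, this already shows that the fourth negative moment is infinite for $\kappa\ge8/3$: in that case $8/\kappa-4\le-1$, so $\int_0^1\alpha^{8/\kappa-4}\abs{\log\alpha}^{-\delta}\,d\alpha=\infty$ for $\delta\in(0,1)$.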
\textbf{Upper bound.} Now we proceed with the upper bound in \eqref{eqlim2}. We suppose now that $\kappa<\frac8{2n-1}\wedge\frac{16}3$ and use again notation \eqref{defs}. Write $I_{n+1}(\alpha) =: \alpha^{s(\alpha)}$. We will distinguish two cases. \textbf{Case 2.1.} Suppose that \begin{equation*} \liminf_{\alpha\to0} s(\alpha)<\limsup_{\alpha\to0} s(\alpha). \end{equation*} We show that this is impossible by deriving a contradiction. Note that by the lower bound \eqref{eqlim} just proven, $\limsup_{\alpha\to0} s(\alpha)\le s$. Further, there exists a sequence $\beta_k\searrow0$ such that $s(\beta_k)>-3$. Indeed, otherwise the left-hand side of identity \eqref{goodbound} would be infinite, whilst the right-hand side of this identity is finite thanks to the induction hypothesis. Therefore, by continuity of $s(\alpha)$ there exist $r\in[-3,s)$ and sequences $\widetilde\alpha_k, \dbtilde\alpha_k \searrow 0$ such that $\dbtilde\alpha_{k+1}\le \widetilde\alpha_k \le \dbtilde\alpha_{k}$ and $s(\widetilde \alpha_{k})< r$, $s(\dbtilde \alpha_{k})> r$. Take now $$ \alpha_k := \argmax_{[\dbtilde\alpha_{k+1},\dbtilde\alpha_{k}]}(I_{n+1}(\alpha)-\alpha^{r}). $$ Then $\alpha_k\in(\dbtilde\alpha_{k+1},\dbtilde\alpha_{k})$ and \begin{align*} I_{n+1}(\alpha_k) &> \alpha_k^{r},\\ I_{n+1}'(\alpha_k) &= r\alpha_k^{r-1},\\ I_{n+1}''(\alpha_k) &\le r(r-1)\alpha_k^{r-2}. \end{align*} Substituting this into \eqref{eq:Iq_ode}, dividing it by $\alpha_k^{r+2}$ and letting $\alpha_k \searrow 0$, we get \begin{equation*} (3\kappa-4n-12)+(3\kappa-4) r +\frac\kappa2 r(r-1)+ n\limsup_{\alpha\to0} \frac{I_n(\alpha)}{\alpha^{r+2}}\ge0. \end{equation*} (Here we have used that $\kappa \le 16/3$ implies $3\kappa-4n-12+(16-4\kappa)\sin^2\alpha_k < 0$, so that $I_{n+1}(\alpha_k)$ may be replaced by the smaller value $\alpha_k^r$ without changing the direction of the inequality.) Since $r+2 < s+2 = 8/\kappa-2n$, we have $ \frac{I_n(\alpha)}{\alpha^{r+2}}\to0$ by the induction hypothesis. Hence, \begin{equation}\label{feqc21} (3\kappa-4n-12)+(3\kappa-4) r +\frac\kappa2 r(r-1)\ge0. \end{equation} Recall that $r\in[-3,s)$. Note that the left-hand side of the above expression is strictly negative for $r=-3$ (where it equals $-4n$) and for $r=s$ (where it equals $n(\kappa(2n-1)-12)<0$). Since the left-hand side of \eqref{feqc21} is a convex parabola in $r$, it is therefore strictly negative for any $r\in[-3,s)$, which is a contradiction. \textbf{Case 2.2.} It follows from the above that $\liminf_{\alpha\to0} s(\alpha)=\limsup_{\alpha\to0}s(\alpha)=:r$ and $r\in[-3,s]$. We would like to show that $r=s$, which is \eqref{eqlim2}. Suppose $r<s$. Note that \eqref{td} implies that $s(\alpha)$ is twice differentiable. Therefore, all the conditions of \cref{le:nice_subseq} are satisfied for the functions $f(\alpha):=s'(\alpha)$, $h(\alpha):=\frac{1}{\alpha\abs{\log\alpha}\log\abs{\log\alpha}}$. Thus there exists a sequence $\alpha_k\searrow0$ such that $|s'(\alpha_k)|\le \frac{1}{\alpha_k\abs{\log\alpha_k}\log\abs{\log\alpha_k}}$ and $s''(\alpha_k)\ge -\frac{1}{\alpha_k^2\abs{\log\alpha_k}\log\abs{\log\alpha_k}}$. Recalling that \begin{align*} I_{n+1}'(\alpha) &= \left(\frac{s(\alpha)}{\alpha}+s'(\alpha)\log\alpha\right)\alpha^{s(\alpha)},\\ I_{n+1}''(\alpha) &= \left(-\frac{s(\alpha)}{\alpha^2}+2\frac{s'(\alpha)}{\alpha}+s''(\alpha)\log\alpha+\left(\frac{s(\alpha)}{\alpha}+s'(\alpha)\log\alpha\right)^2\right)\alpha^{s(\alpha)}, \end{align*} we get \begin{align*} I_{n+1}(\alpha_k) &= \alpha_k^{s(\alpha_k)},\\ I_{n+1}'(\alpha_k)&=(s(\alpha_k)+o(1))\alpha_k^{s(\alpha_k)-1},\\ I_{n+1}''(\alpha_k) &\le (-s(\alpha_k)+s(\alpha_k)^2+o(1))\alpha_k^{s(\alpha_k)-2}, \end{align*} where $o(1)$ denotes functions that tend to $0$ as $\alpha_k\to0$. Now we substitute this into \eqref{eq:Iq_ode}, divide it by $\alpha_k^{s(\alpha_k)+2}$ and let $\alpha_k \searrow 0$. We derive \begin{equation}\label{result4} (3\kappa-4n-12)+(3\kappa-4)r+\frac{\kappa}{2}r(r-1)+n\limsup_{\alpha\to0} \frac{I_n(\alpha)}{\alpha^{s(\alpha)+2}}\ge0. \end{equation} Since $r<s$, there exists $\delta > 0$ such that $s(\alpha)\le r+\delta<s$ for all $\alpha$ small enough. Hence, thanks to the induction hypothesis, $\limsup_{\alpha\to0} \frac{I_n(\alpha)}{\alpha^{s(\alpha)+2}}=0$. Therefore inequality \eqref{feqc21} holds for a certain $r\in[-3,s)$, which is a contradiction, as before. Hence $r=s$, which proves \eqref{eqlim2} and completes the induction. \end{proof}
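\begin{remark} For the reader's convenience, here is the elementary algebra behind the two endpoint values used in Case~2.1 (the notation $Q$ and $m$ is introduced only for this computation). Let $Q(r):=(3\kappa-4n-12)+(3\kappa-4)r+\frac{\kappa}{2}r(r-1)$ denote the left-hand side of \eqref{feqc21}, and put $m:=2n+2$, so that $s=\frac8\kappa-m$. Then \begin{align*} Q(-3)&=(3\kappa-4n-12)-3(3\kappa-4)+6\kappa=-4n,\\ Q(s)&=\frac\kappa2 s^2+\Bigl(\frac{5\kappa}{2}-4\Bigr)s+3\kappa-2m-8 =\frac{\kappa}{2}(m-2)(m-3)-6(m-2) =n\bigl(\kappa(2n-1)-12\bigr), \end{align*} and the latter is negative precisely because $\kappa<\frac8{2n-1}$ implies $\kappa(2n-1)<8<12$. \end{remark}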
Now we are ready to complete the proof of \cref{T:other}. \begin{proof}[Proof of \cref{T:other}(ii)] Inequality \eqref{ineq24} follows directly from \cref{le:I-2_asymptotics} and the definition of $I_n$. Further, for $\kappa<8$ we have from \cref{L:db1}: \begin{equation*} \hskip.15ex\mathsf{E}\hskip.10ex (\Im \gamma_1)^{-2}=\int_0^\pi\int_0^4 \frac1u\varphi(\alpha,u)\,du\, d\alpha =\int_0^\pi I_1(\alpha)\,d\alpha = \frac{2}{8-\kappa}, \end{equation*} which is \eqref{ineq25}. To show \eqref{ineq26}, fix $\kappa<8/3$. Note that in this regime, by \cref{le:I-2_asymptotics}, we have $\int_0^\pi I_2(\alpha)\,d\alpha<\infty$, and thus by \cref{Cor:310} with $f=I_2$, $h=1/(\alpha |\log \alpha|)$ there exists a sequence $\alpha_k\searrow 0$ such that \begin{align} \lim_{k\to\infty} \alpha_k I_{2}(\alpha_k) &= 0 ,\label{lim41}\\ \lim_{k\to\infty} \alpha_k^2 I_{2}'(\alpha_k) &= 0\label{lim42} . \end{align} It was shown in the proof of \cref{le:I-2_asymptotics} that in this case $I_2$ satisfies \eqref{eq:Iq_ode}, which can be rewritten as \begin{equation}\label{eq:Iq_ode2} \frac{I_1}{\sin^2\alpha} - 4 I_{2} +(\kappa-4)(3-4\sin^2\alpha) I_{2} +(3\kappa-4)\sin\alpha\cos\alpha\, I_{2}' +\frac{\kappa}{2}\sin^2\alpha\, I_{2}''=0. \end{equation} Integrate now the above equation in $\alpha$ from $0$ to $\pi$. Note that, thanks to \eqref{lim41} and \eqref{lim42}, all the boundary terms vanish when integrating by parts. Note also that, by \cref{L:310}, we have $\int_0^\pi I_2(\alpha)\sin^2\alpha\,d\alpha=\frac14 \int_0^\pi I_1(\alpha)\,d\alpha$. We get \begin{equation}\label{relation} \int_0^\pi \frac{I_{1}(\alpha)}{\sin^2 \alpha}\,d\alpha+(\kappa-12)\int_0^\pi I_{2}(\alpha)\,d\alpha +2\int_0^\pi I_{1}(\alpha)\,d\alpha=0. \end{equation} Recalling the expression for $I_1$ from \cref{L:db1}, we compute \begin{equation*} \int_0^\pi \frac{I_1(\alpha)}{\sin^2\alpha}\,d\alpha =\frac{\Gamma(1+\frac4\kappa)}{4\sqrt\pi\,\Gamma(\frac12+\frac4\kappa)}\int_0^\pi (\sin\alpha)^{8/\kappa-4}\,d\alpha =\frac{4(4-\kappa)}{(8-\kappa)(8-3\kappa)}, \end{equation*} so that \eqref{relation} yields \begin{equation*} \hskip.15ex\mathsf{E}\hskip.10ex(\Im\gamma_1)^{-4} = \int_0^\pi I_2(\alpha)\,d\alpha = \frac{48-16\kappa}{(12-\kappa)(8-\kappa)(8-3\kappa)}.\qedhere \end{equation*} \end{proof} \begin{remark} A relation similar to \eqref{relation} holds for general $n$. Unfortunately, we do not have an explicit formula for $\int_0^\pi \frac{I_{n}(\alpha)}{\sin^2 \alpha}\,d\alpha$ for $n\ge2$. This prevents us from obtaining explicit formulas for higher-order moments. \end{remark} \begin{remark} Another possible approach to finding explicit formulas for $I_n$ would be through the Fourier coefficients \[ a_0 = \frac{1}{\pi}\int_0^\pi I_n\,d\alpha , \quad a_j = \frac{2}{\pi}\int_0^\pi I_n\cos(2j\alpha)\,d\alpha . \] Formally expanding the analogue of \eqref{eq:Iq_ode2}, we obtain a (countable) system of linear equations for $(a_j)_{j \ge 0}$ in terms of the Fourier coefficients $(b_j)_{j \ge 0}$ of the function $I_{n-1}/\sin^2\alpha$. However, it seems difficult to solve this system of equations explicitly. Only for $a_0$ and $a_1$ do we obtain a closed system of two equations in terms of $b_0$ and $b_1$, which corresponds exactly to what we obtained in the proof above. \end{remark}
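\begin{remark} As a quick consistency check of \eqref{ineq25} and \eqref{ineq26} (not used anywhere in the paper), note that the Cauchy--Schwarz inequality forces $\bigl(\hskip.15ex\mathsf{E}\hskip.10ex(\Im\gamma_1)^{-2}\bigr)^2\le \hskip.15ex\mathsf{E}\hskip.10ex(\Im\gamma_1)^{-4}$, and indeed \begin{equation*} \frac{48-16\kappa}{(12-\kappa)(8-\kappa)(8-3\kappa)}-\frac{4}{(8-\kappa)^2} =\frac{4\kappa^2}{(12-\kappa)(8-\kappa)^2(8-3\kappa)}\ge0 \end{equation*} for $\kappa\in(0,8/3)$. Equality holds exactly in the limit $\kappa\to0$, as expected: for $\kappa=0$ the trace is the deterministic ray $\gamma(t)=2i\sqrt t$, so that $\Im\gamma_1=2$ and both sides equal $\frac1{16}$. \end{remark}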